
Audio Features Preprocessing

Example of a preprocessing specification (assuming the audio files have a sample rate of 16000):

name: audio_path
type: audio
preprocessing:
  audio_file_length_limit_in_s: 7.5
  type: stft
  window_length_in_s: 0.04
  window_shift_in_s: 0.02
  num_fft_points: 800
  window_type: boxcar

Ludwig supports reading audio files using PyTorch's Torchaudio library. This library supports WAV, AMB, MP3, FLAC, OGG/VORBIS, OPUS, SPHERE, and AMR-NB formats.

Preprocessing parameters:

  • audio_file_length_limit_in_s (default 7.5): float value that defines the maximum length of an audio file in seconds. Files longer than this limit are truncated; files shorter than this limit are padded with padding_value.
  • missing_value_strategy (default: bfill): what strategy to follow when there's a missing value in an audio column. The value should be one of fill_with_const (replaces the missing value with a specific value specified with the fill_value parameter), fill_with_mode (replaces the missing values with the most frequent value in the column), bfill (replaces the missing values with the next valid value), ffill (replaces the missing values with the previous valid value) or drop_row.
  • in_memory (default true): defines whether an audio dataset will reside in memory during the training process or will be dynamically fetched from disk (useful for large datasets). In the latter case a training batch of input audio files will be fetched from disk each training iteration. At the moment only in_memory = true is supported.
  • padding_value (default 0): float value used for padding.
  • norm (default null): the normalization method applied to the input data. Supported methods: null (data is not normalized) and per_file (z-normalization is applied on a per-file level).
  • type (default raw): Defines the type of audio feature to be used. Supported types at the moment are raw, stft, stft_phase, group_delay, and fbank. For more detail, check Audio Input Features and Encoders.
  • window_length_in_s: Defines the window length used for the short time Fourier transformation (only needed if type != raw).
  • window_shift_in_s: Defines the window shift used for the short time Fourier transformation (also called hop_length) (only needed if type != raw).
  • num_fft_points: (default window_length_in_s * sample_rate of audio file) Defines the number of fft points used for the short time Fourier transformation. If num_fft_points > window_length_in_s * sample_rate, then the signal is zero-padded at the end. num_fft_points has to be >= window_length_in_s * sample_rate (only needed if type != raw).
  • window_type (default hamming): Defines the type of window with which the signal is weighted before the short time Fourier transformation. Currently supported options are bartlett, blackman, hamming, and hann. For more information on these window types, check out scipy's window functions (only needed if type != raw).
  • num_filter_bands: Defines the number of filters used in the filterbank (only needed if type == fbank). An example configuration using fbank follows this list.
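
For instance, a preprocessing specification using fbank features could combine several of the parameters above (the values shown, including num_filter_bands: 80, are illustrative rather than recommended defaults):

name: audio_path
type: audio
preprocessing:
  audio_file_length_limit_in_s: 7.5
  missing_value_strategy: bfill
  norm: per_file
  type: fbank
  window_length_in_s: 0.04
  window_shift_in_s: 0.02
  num_fft_points: 800
  window_type: hamming
  num_filter_bands: 80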

Preprocessing parameters can also be defined once and applied to all audio input features using the Type-Global Preprocessing section.
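
For example, assuming type-global settings are declared under Ludwig's top-level defaults section (as for other feature types), the same preprocessing could be applied to every audio input feature in one place (a sketch, values illustrative):

defaults:
  audio:
    preprocessing:
      audio_file_length_limit_in_s: 7.5
      type: stft
      window_length_in_s: 0.04
      window_shift_in_s: 0.02
      num_fft_points: 800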

Audio Input Features and Encoders

Audio files are transformed into one of the following types according to type under the preprocessing configuration.

  • raw: Audio file is transformed into a float valued tensor of size N x L x W (where N is the size of the dataset and L corresponds to audio_file_length_limit_in_s * sample_rate and W = 1).
  • stft: Audio is transformed to the stft magnitude. Audio file is transformed into a float valued tensor of size N x L x W (where N is the size of the dataset, L corresponds to ceil((audio_file_length_limit_in_s * sample_rate - window_length_in_s * sample_rate + 1) / (window_shift_in_s * sample_rate)) + 1 and W corresponds to num_fft_points / 2); a worked example follows this list.
  • fbank: Audio file is transformed to FBANK features (also called log Mel-filter bank values). FBANK features are implemented according to their definition in the HTK Book: Raw Signal -> Preemphasis -> DC mean removal -> stft magnitude -> Power spectrum: stft^2 -> mel-filter bank values: triangular filters equally spaced on a Mel-scale are applied -> log-compression: log(). Overall the audio file is transformed into a float valued tensor of size N x L x W, with N and L being equal to the ones in stft and W being equal to num_filter_bands.
  • stft_phase: The phase information for each stft bin is appended to the stft magnitude so that the audio file is transformed into a float valued tensor of size N x L x 2W, with N, L, and W being equal to the ones in stft.
  • group_delay: Audio is transformed to group delay features according to Equation (23) in this paper. group_delay features have the same tensor size as stft.
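
As a worked example, take the preprocessing specification at the top of this page with a sample rate of 16000: audio_file_length_limit_in_s * sample_rate = 120000, window_length_in_s * sample_rate = 640, and window_shift_in_s * sample_rate = 320. For type stft this gives L = ceil((120000 - 640 + 1) / 320) + 1 = 375 and W = num_fft_points / 2 = 400; for stft_phase the last dimension doubles to 2W = 800, and for fbank it becomes num_filter_bands.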

The encoder parameters specified at the feature level are:

  • tied (default null): name of another input feature to tie the weights of the encoder with. It needs to be the name of a feature of the same type and with the same encoder parameters.

Example audio feature entry in the input features list:

name: audio_column_name
type: audio
tied: null
encoder: 
    type: parallel_cnn

Audio feature encoders are the same as for Sequence Features.

Encoder type and encoder parameters can also be defined once and applied to all audio input features using the Type-Global Encoder section.
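
For example, assuming the same top-level defaults section is used as for type-global preprocessing, an encoder could be configured once for all audio input features (a sketch):

defaults:
  audio:
    encoder:
      type: parallel_cnn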

Audio Output Features and Decoders

There are no audio decoders at the moment.

If this unlocks an interesting use case for your application, please file a GitHub Issue or ping the Ludwig Slack.