
torchaudio.functional

Functions to perform common audio operations.

istft

torchaudio.functional.istft(stft_matrix, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True, length=None)[source]

Inverse short-time Fourier transform. This is expected to be the inverse of torch.stft. It has the same parameters (plus an additional optional parameter, length) and should return the least squares estimation of the original signal [1]. The algorithm will check using the NOLA (nonzero overlap-add) condition.

An important consideration for the parameters window and center is that the envelope created by the summation of all the windows must never be zero at any point in time. Specifically, \(\sum_{t=-\infty}^{\infty} w^2[n - t \times \text{hop\_length}] \neq 0\).

Since stft discards elements at the end of the signal if they do not fit in a frame, istft may return a shorter signal than the original (this can occur when center is False, since the signal is not padded).

If center is True, then there will be padding, e.g. 'constant', 'reflect', etc. Left padding can be trimmed off exactly because it can be calculated, but right padding cannot be calculated without additional information.

Example: Suppose the last window is [17, 18, 0, 0, 0] vs. [18, 0, 0, 0, 0].

The n_frames, hop_length, and win_length are all the same for both, which prevents the calculation of right padding. These missing values could be zeros or a reflection of the signal, so providing length can be useful. If length is None, then padding will be aggressively removed (with some loss of signal).

[1] D. W. Griffin and J. S. Lim, “Signal estimation from modified short-time Fourier transform,” IEEE Trans. ASSP, vol.32, no.2, pp.236-243, Apr. 1984.

Parameters
  • stft_matrix (torch.Tensor) – Output of stft where each row of a channel is a frequency and each column is a window. It has a size of either (channel, fft_size, n_frames, 2) or (fft_size, n_frames, 2)

  • n_fft (int) – Size of Fourier transform

  • hop_length (Optional[int]) – The distance between neighboring sliding window frames. (Default: win_length // 4)

  • win_length (Optional[int]) – The size of window frame and STFT filter. (Default: n_fft)

  • window (Optional[torch.Tensor]) – The optional window function. (Default: torch.ones(win_length))

  • center (bool) – Whether input was padded on both sides so that the \(t\)-th frame is centered at time \(t \times \text{hop\_length}\). (Default: True)

  • pad_mode (str) – Controls the padding method used when center is True. (Default: 'reflect')

  • normalized (bool) – Whether the STFT was normalized. (Default: False)

  • onesided (bool) – Whether the STFT is onesided. (Default: True)

  • length (Optional[int]) – The length to trim the reconstructed signal to (i.e. the original signal length). (Default: whole signal)

Returns

Least squares estimation of the original signal of size (channel, signal_length) or (signal_length)

Return type

torch.Tensor
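
Example: a minimal round-trip sketch. The signal, n_fft, and hop_length below are illustrative, and torch.stft is assumed to return a real tensor of shape (channel, fft_size, n_frames, 2) as described above.
>>> import torch
>>> import torchaudio
>>> n_fft, hop_length = 400, 200
>>> window = torch.hann_window(n_fft)
>>> waveform = torch.randn(1, 16000)                  # (channel, time), illustrative noise signal
>>> stft_matrix = torch.stft(waveform, n_fft, hop_length=hop_length, window=window)
>>> stft_matrix.shape                                 # (channel, fft_size, n_frames, 2)
>>> reconstructed = torchaudio.functional.istft(
...     stft_matrix, n_fft, hop_length=hop_length, window=window, length=waveform.size(1))
>>> reconstructed.shape                               # (channel, signal_length) == (1, 16000)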

spectrogram

torchaudio.functional.spectrogram(waveform, pad, window, n_fft, hop_length, win_length, power, normalized)

Create a spectrogram from a raw audio signal.

Parameters
  • waveform (torch.Tensor) – Tensor of audio of dimension (channel, time)

  • pad (int) – Two-sided padding of signal

  • window (torch.Tensor) – Window tensor that is applied/multiplied to each frame/window

  • n_fft (int) – Size of FFT

  • hop_length (int) – Length of hop between STFT windows

  • win_length (int) – Window size

  • power (int) – Exponent for the magnitude spectrogram (must be > 0), e.g. 1 for energy, 2 for power, etc.

  • normalized (bool) – Whether to normalize by magnitude after stft

Returns

Dimension (channel, freq, time), where channel is unchanged, freq is n_fft // 2 + 1 (the number of Fourier frequency bins), and time is the number of window hops (n_frames).

Return type

torch.Tensor
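
Example: a minimal sketch, assuming the arguments are passed positionally in the order of the parameters listed above; the waveform and STFT sizes are illustrative.
>>> import torch
>>> import torchaudio
>>> waveform = torch.randn(1, 16000)                  # (channel, time)
>>> n_fft, hop_length, win_length = 400, 200, 400
>>> window = torch.hann_window(win_length)
>>> specgram = torchaudio.functional.spectrogram(
...     waveform, 0, window, n_fft, hop_length, win_length, 2, False)
>>> specgram.shape                                    # (channel, n_fft // 2 + 1, n_frames)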

amplitude_to_DB

torchaudio.functional.amplitude_to_DB(x, multiplier, amin, db_multiplier, top_db=None)

Turns a tensor from the power/amplitude scale to the decibel scale.

This output depends on the maximum value in the input tensor, and so may return different values for an audio clip split into snippets vs. a full clip.

Parameters
  • x (torch.Tensor) – Input tensor before being converted to decibel scale

  • multiplier (float) – Use 10. for power and 20. for amplitude

  • amin (float) – Number to clamp x

  • db_multiplier (float) – log10(max(reference_value, amin))

  • top_db (Optional[float]) – Minimum negative cut-off in decibels. A reasonable number is 80. (Default: None)

Returns

Output tensor in decibel scale

Return type

torch.Tensor
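
Example: a minimal sketch, assuming the arguments follow the order of the parameters listed above and using a reference value of 1.0 for db_multiplier; the spectrogram is illustrative.
>>> import math
>>> import torch
>>> import torchaudio
>>> specgram = torch.rand(1, 201, 81)                 # illustrative power spectrogram
>>> multiplier, amin, ref_value, top_db = 10.0, 1e-10, 1.0, 80.0
>>> db_multiplier = math.log10(max(amin, ref_value))
>>> specgram_db = torchaudio.functional.amplitude_to_DB(
...     specgram, multiplier, amin, db_multiplier, top_db)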

create_fb_matrix

torchaudio.functional.create_fb_matrix(n_freqs, f_min, f_max, n_mels)

Create a frequency bin conversion matrix.

Parameters
  • n_freqs (int) – Number of frequencies to highlight/apply

  • f_min (float) – Minimum frequency

  • f_max (float) – Maximum frequency

  • n_mels (int) – Number of mel filterbanks

Returns

Triangular filter banks (fb matrix) of size (n_freqs, n_mels), i.e. the number of frequencies to highlight/apply by the number of filterbanks. Each column is a filterbank, so that given a matrix A of size (…, n_freqs), the applied result is the matrix product A @ create_fb_matrix(A.size(-1), ...).

Return type

torch.Tensor
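
Example: a minimal sketch, assuming the arguments follow the order of the parameters listed above; the values below assume a 16 kHz sample rate (f_max = 8000) and an n_fft of 400 (n_freqs = 201), and the spectrogram is illustrative.
>>> import torch
>>> import torchaudio
>>> n_freqs, f_min, f_max, n_mels = 201, 0., 8000., 40
>>> fb = torchaudio.functional.create_fb_matrix(n_freqs, f_min, f_max, n_mels)
>>> fb.shape                                          # (n_freqs, n_mels)
>>> specgram = torch.rand(1, n_freqs, 81)             # (channel, n_freqs, time)
>>> mel_specgram = torch.matmul(specgram.transpose(1, 2), fb).transpose(1, 2)
>>> mel_specgram.shape                                # (channel, n_mels, time)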

create_dct

torchaudio.functional.create_dct(n_mfcc, n_mels, norm)

Creates a DCT transformation matrix with shape (n_mels, n_mfcc), normalized depending on norm.

Parameters
  • n_mfcc (int) – Number of mfc coefficients to retain

  • n_mels (int) – Number of mel filterbanks

  • norm (Optional[str]) – Norm to use (either ‘ortho’ or None)

Returns

The transformation matrix of size (n_mels, n_mfcc), to be right-multiplied against row-wise data.

Return type

torch.Tensor
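
Example: a minimal sketch, assuming the arguments follow the order of the parameters listed above; the mel spectrogram and the choice of 13 coefficients are illustrative.
>>> import torch
>>> import torchaudio
>>> n_mfcc, n_mels = 13, 40
>>> dct_mat = torchaudio.functional.create_dct(n_mfcc, n_mels, norm='ortho')
>>> dct_mat.shape                                     # (n_mels, n_mfcc)
>>> mel_specgram = torch.rand(1, n_mels, 81)          # (channel, n_mels, time)
>>> mfcc = torch.matmul(mel_specgram.transpose(1, 2), dct_mat).transpose(1, 2)
>>> mfcc.shape                                        # (channel, n_mfcc, time)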

mu_law_encoding

torchaudio.functional.mu_law_encoding(x, quantization_channels)

Encode the signal based on mu-law companding. For more info, see the Wikipedia entry on mu-law.

This algorithm assumes the signal has been scaled to between -1 and 1 and returns a signal encoded with values from 0 to quantization_channels - 1.

Parameters
  • x (torch.Tensor) – Input tensor

  • quantization_channels (int) – Number of quantization channels

Returns

Input after mu-law encoding

Return type

torch.Tensor
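
Example: a minimal sketch; the signal is illustrative and already scaled to [-1, 1], with 256 quantization channels as commonly used for 8-bit mu-law.
>>> import torch
>>> import torchaudio
>>> waveform = torch.rand(1, 16000) * 2 - 1           # illustrative signal in [-1, 1]
>>> x_mu = torchaudio.functional.mu_law_encoding(waveform, 256)
>>> x_mu.min(), x_mu.max()                            # values lie in [0, quantization_channels - 1]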

mu_law_decoding

torchaudio.functional.mu_law_decoding(x_mu, quantization_channels)

Decode a mu-law encoded signal. For more info, see the Wikipedia entry on mu-law.

This expects an input with values between 0 and quantization_channels - 1 and returns a signal scaled between -1 and 1.

Parameters
  • x_mu (torch.Tensor) – Input tensor

  • quantization_channels (int) – Number of quantization channels

Returns

Input after mu-law decoding

Return type

torch.Tensor
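
Example: a minimal round-trip sketch with mu_law_encoding above; the signal and the 256 quantization channels are illustrative.
>>> import torch
>>> import torchaudio
>>> waveform = torch.rand(1, 16000) * 2 - 1           # illustrative signal in [-1, 1]
>>> x_mu = torchaudio.functional.mu_law_encoding(waveform, 256)
>>> decoded = torchaudio.functional.mu_law_decoding(x_mu, 256)
>>> (waveform - decoded).abs().max()                  # small quantization error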

complex_norm

torchaudio.functional.complex_norm(complex_tensor, power=1.0)[source]

Compute the norm of complex tensor input.

Parameters
  • complex_tensor (torch.Tensor) – Tensor shape of (*, complex=2)

  • power (float) – Power of the norm. (Default: 1.0).

Returns

Power of the normed input tensor. Shape of (*, )

Return type

torch.Tensor
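
Example: a minimal sketch; the complex spectrogram below is illustrative, and power=2.0 yields a power spectrogram.
>>> import torch
>>> import torchaudio
>>> complex_tensor = torch.randn(1, 201, 81, 2)       # (channel, freq, time, complex=2)
>>> power_specgram = torchaudio.functional.complex_norm(complex_tensor, power=2.0)
>>> power_specgram.shape                              # (channel, freq, time)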

angle

torchaudio.functional.angle(complex_tensor)[source]

Compute the angle of complex tensor input.

Parameters

complex_tensor (torch.Tensor) – Tensor shape of (*, complex=2)

Returns

Angle of a complex tensor. Shape of (*, )

Return type

torch.Tensor
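
Example: a minimal sketch; the complex spectrogram below is illustrative.
>>> import torch
>>> import torchaudio
>>> complex_tensor = torch.randn(1, 201, 81, 2)       # (channel, freq, time, complex=2)
>>> phase = torchaudio.functional.angle(complex_tensor)
>>> phase.shape                                       # (channel, freq, time)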

magphase

torchaudio.functional.magphase(complex_tensor, power=1.0)[source]

Separate a complex-valued spectrogram with shape (*, 2) into its magnitude and phase.

Parameters
  • complex_tensor (torch.Tensor) – Tensor shape of (*, complex=2)

  • power (float) – Power of the norm. (Default: 1.0)

Returns

The magnitude and phase of the complex tensor

Return type

Tuple[torch.Tensor, torch.Tensor]
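
Example: a minimal sketch; the complex spectrogram below is illustrative.
>>> import torch
>>> import torchaudio
>>> complex_tensor = torch.randn(1, 201, 81, 2)       # (channel, freq, time, complex=2)
>>> magnitude, phase = torchaudio.functional.magphase(complex_tensor, power=1.0)
>>> magnitude.shape, phase.shape                      # both (channel, freq, time)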

phase_vocoder

torchaudio.functional.phase_vocoder(complex_specgrams, rate, phase_advance)[source]

Given an STFT tensor, speed it up in time by a factor of rate without modifying pitch.

Parameters
  • complex_specgrams (torch.Tensor) – Dimension of (*, channel, freq, time, complex=2)

  • rate (float) – Speed-up factor

  • phase_advance (torch.Tensor) – Expected phase advance in each bin. Dimension of (freq, 1)

Returns

Dimension of (*, channel, freq, ceil(time/rate), complex=2)

Return type

complex_specgrams_stretch (torch.Tensor)

Example
>>> import math
>>> import torch
>>> import torchaudio
>>> num_freqs, hop_length = 1025, 512
>>> # (batch, channel, num_freqs, time, complex=2)
>>> complex_specgrams = torch.randn(16, 1, num_freqs, 300, 2)
>>> rate = 1.3  # Speed up by 30%
>>> phase_advance = torch.linspace(
...     0, math.pi * hop_length, num_freqs)[..., None]
>>> x = torchaudio.functional.phase_vocoder(complex_specgrams, rate, phase_advance)
>>> x.shape  # with 231 == ceil(300 / 1.3)
torch.Size([16, 1, 1025, 231, 2])
