Abstract

Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as input, the vocoder can be provided with as much acoustic information as possible. However, in some models that employ full-band mel-spectrograms, an over-smoothing problem occurs, in which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by work in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed with various parameter sets. By using full-band mel-spectrograms as input and adding a discriminator that examines spectrograms at multiple resolutions, we expect to generate high-resolution signals. In an evaluation on a dataset containing hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate its potential for fast adaptation to new speakers without training from scratch.
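For illustration, the sketch below shows one way the multi-resolution spectrogram discriminator's inputs could be computed: linear-scale STFT magnitudes of the same waveform under several FFT/hop/window parameter sets. This is a minimal sketch, not the authors' code; the `RESOLUTIONS` values and function names are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def stft_magnitude(wav: torch.Tensor, n_fft: int, hop_length: int,
                   win_length: int) -> torch.Tensor:
    """Linear-scale magnitude spectrogram of a batch of waveforms."""
    window = torch.hann_window(win_length, device=wav.device)
    spec = torch.stft(
        wav,
        n_fft=n_fft,
        hop_length=hop_length,
        win_length=win_length,
        window=window,
        return_complex=True,
    )
    return spec.abs()  # shape: (batch, n_fft // 2 + 1, frames)

# Illustrative (n_fft, hop_length, win_length) parameter sets; the paper's
# exact resolutions may differ.
RESOLUTIONS = [(512, 128, 512), (1024, 256, 1024), (2048, 512, 2048)]

def multi_resolution_magnitudes(wav: torch.Tensor) -> list[torch.Tensor]:
    """One magnitude spectrogram per resolution; in an MRSD-style setup,
    each would be fed to its own spectrogram sub-discriminator."""
    return [stft_magnitude(wav, *params) for params in RESOLUTIONS]

# Example: three magnitudes of differing time/frequency trade-offs.
# wav = torch.randn(4, 16000)
# specs = multi_resolution_magnitudes(wav)
```

Because each parameter set trades time resolution against frequency resolution differently, a discriminator that sees all of them can penalize spectral blurring that any single-resolution view would miss.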

Comparison with existing models

Seen speakers (‘LibriTTS/train-clean-360’ dataset)

[Audio samples #1–#5 per model: Recordings, MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, UnivNet-c32]

Unseen speakers (‘LibriTTS/test-clean’ dataset)

[Audio samples #1–#5 per model: Recordings, MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, UnivNet-c32]

Text-to-speech (‘LJSpeech’ dataset)

[Audio samples #1–#5 per model: MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, UnivNet-c32]

Ablation study

Seen speakers (‘LibriTTS/train-clean-360’ dataset)

[Audio samples #1–#5 per instance: Recordings, UnivNet-c16, Without LVC, Without GAU, Without MRSD, Without MPWD, With MSWD, MPWD->MSWD]

(LVC: location-variable convolution; GAU: gated activation unit; MRSD: multi-resolution spectrogram discriminator; MPWD: multi-period waveform discriminator; MSWD: multi-scale waveform discriminator)