Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, some models that employ full-band mel-spectrograms suffer from an over-smoothing problem in which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by work in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as input. In an evaluation on a dataset containing hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without the need for training from scratch.
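As a concrete illustration of the input to the multi-resolution spectrogram discriminator, the sketch below computes linear spectrogram magnitudes of a waveform under several STFT parameter sets; each magnitude would then be fed to a separate sub-discriminator. This is a minimal sketch: the (n_fft, hop_length, win_length) triples and the function name are illustrative assumptions, not values taken from the paper.

# Minimal sketch: linear spectrogram magnitudes at multiple resolutions.
# The parameter sets below are illustrative, not the paper's exact values.
import torch

RESOLUTIONS = [(1024, 120, 600), (2048, 240, 1200), (512, 50, 240)]  # (n_fft, hop, win)

def multi_resolution_magnitudes(waveform: torch.Tensor):
    """Return one linear spectrogram magnitude per STFT parameter set.

    waveform: (batch, samples) audio tensor.
    """
    magnitudes = []
    for n_fft, hop_length, win_length in RESOLUTIONS:
        spec = torch.stft(
            waveform,
            n_fft=n_fft,
            hop_length=hop_length,
            win_length=win_length,
            window=torch.hann_window(win_length, device=waveform.device),
            return_complex=True,
        )
        magnitudes.append(spec.abs())  # (batch, n_fft // 2 + 1, frames)
    return magnitudes

# Each magnitude would be passed to its own spectrogram sub-discriminator.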
Comparison with existing models
LibriTTS is an English multi-speaker audiobook dataset.
The ‘train-clean-360’ subset was used to train the models and to evaluate them on speakers included in training (seen speakers).
The ‘test-clean’ subset was used to evaluate the models on speakers not included in training (unseen speakers).
Seen speakers (‘LibriTTS/train-clean-360’ dataset)
[Audio samples #1–#5 for Recordings, MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, and UnivNet-c32.]
Unseen speakers (‘LibriTTS/test-clean’ dataset)
[Audio samples #1–#5 for Recordings, MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, and UnivNet-c32.]
Text-to-speech (‘LJSpeech’ dataset)
For text-to-speech evaluation, we used the JDI-T acoustic model with pitch and energy predictors.
Each trained vocoder was fine-tuned using ground-truth waveforms and predicted log-mel-spectrograms.
The LJSpeech dataset was used to train JDI-T and fine-tune each vocoder.
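To make this fine-tuning setup concrete, here is a minimal sketch of one generator update in which the acoustic model's predicted log-mel-spectrogram is the vocoder input and the ground-truth waveform is the target. The names (vocoder, discriminators, predicted_logmel) are placeholders, and the least-squares adversarial loss plus single-resolution spectral term shown here are generic stand-ins rather than the exact training objective used in the paper.

import torch

def fine_tune_step(vocoder, discriminators, optimizer, predicted_logmel, gt_waveform):
    # One generator update on a (predicted mel, ground-truth audio) pair.
    # Some vocoders (UnivNet included) also take a noise sequence as input;
    # it is omitted here for brevity.
    fake_waveform = vocoder(predicted_logmel)

    adv_loss = torch.zeros((), device=gt_waveform.device)
    for d in discriminators:
        scores = d(fake_waveform)
        adv_loss = adv_loss + torch.mean((scores - 1.0) ** 2)  # least-squares GAN loss

    def log_mag(x):
        # (batch, samples) or (batch, 1, samples) -> log-magnitude spectrogram
        x = x.reshape(x.shape[0], -1)
        s = torch.stft(x, n_fft=1024, hop_length=256,
                       window=torch.hann_window(1024, device=x.device),
                       return_complex=True).abs()
        return torch.log(s + 1e-7)

    # Single-resolution spectral L1 as a stand-in for the paper's
    # multi-resolution STFT auxiliary loss.
    aux_loss = torch.mean(torch.abs(log_mag(fake_waveform) - log_mag(gt_waveform)))

    loss = adv_loss + aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()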
[Audio samples #1–#5 for MelGAN, Parallel WaveGAN, HiFi-GAN V1, UnivNet-c16, and UnivNet-c32.]
Ablation study
To demonstrate the validity of the proposed model configuration, we prepared variants in which each component of the model (i.e., LVC, GAU, MRSD, and MPWD) was removed in turn.
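A hypothetical sketch of how such ablation variants could be organized as a configuration is shown below; the class and flag names are illustrative and are not taken from the authors' code.

# Hypothetical configuration for the ablation variants: each flag enables or
# disables one component of the full model.
from dataclasses import dataclass

@dataclass
class UnivNetAblationConfig:
    use_lvc: bool = True    # location-variable convolution in the generator
    use_gau: bool = True    # gated activation units in the generator
    use_mrsd: bool = True   # multi-resolution spectrogram discriminator
    use_mpwd: bool = True   # multi-period waveform discriminator

# Example: the variant without MRSD keeps every other component.
without_mrsd = UnivNetAblationConfig(use_mrsd=False)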