# Multilingual Speech Synthesis
This repository contains an implementation of Tacotron 2 that supports multilingual experiments and implements several approaches to encoder parameter sharing. It also presents a model combining ideas from *Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning*, *End-to-End Code-Switched TTS with Mix of Monolingual Recordings*, and *Contextual Parameter Generation for Universal Neural Machine Translation*.
We provide synthesized samples, training and evaluation data, source code, and parameters for the comparison of three multilingual text-to-speech models. The first shares the whole encoder and uses an adversarial classifier to remove speaker-dependent information from the encoder. The second has a separate encoder for each language. Finally, the third is our attempt to combine the best of both previous approaches, i.e., the effective parameter sharing of the first method and the flexibility of the second. It has a fully convolutional encoder with language-specific parameters produced by a parameter generator, and it also makes use of an adversarial speaker classifier that follows the principles of domain adversarial training. See the illustration above.
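To make the third approach more concrete, here is a minimal PyTorch sketch of its two key ingredients: a convolutional layer whose weights are generated from a language embedding, and a gradient-reversal layer feeding the adversarial speaker classifier. All names and shapes below are illustrative, not the repository's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients on the way
    back, so the encoder is pushed towards speaker-invariant representations."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GeneratedConv1d(nn.Module):
    """Convolution whose weights are produced by a generator network
    conditioned on a language embedding (contextual parameter generation)."""

    def __init__(self, emb_dim, in_ch, out_ch, kernel):
        super().__init__()
        self.in_ch, self.out_ch, self.kernel = in_ch, out_ch, kernel
        # language embedding -> flattened convolution weights and biases
        self.generator = nn.Linear(emb_dim, out_ch * in_ch * kernel + out_ch)

    def forward(self, x, lang_emb):
        # x: (batch, in_ch, time); lang_emb: (emb_dim,), one language per call
        params = self.generator(lang_emb)
        weight = params[: -self.out_ch].view(self.out_ch, self.in_ch, self.kernel)
        bias = params[-self.out_ch:]
        return F.conv1d(x, weight, bias, padding=self.kernel // 2)


# usage sketch of the adversarial branch on top of encoder outputs:
# spk_logits = speaker_classifier(GradReverse.apply(encoder_out, 1.0))
```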
Many samples synthesized using the three compared models are available at this website. It also contains a few samples synthesized by a monolingual vanilla Tacotron trained on LJ Speech with the Griffin-Lim vocoder (a sanity check of our implementation).
## :octocat: Clone repository
```
git clone https://github.com/Tomiinek/Multilingual_Text_to_Speech.git
cd Multilingual_Text_to_Speech
```
## :mortar_board: Install Python requirements
```
pip3 install -r requirements.txt
```
## :hourglass: Download datasets
Download the CSS10 dataset (Apache License 2.0) and our cleaned Common Voice data (Creative Commons CC0).
Visit the CSS10 repository and download data for all languages.
Extract the downloaded archives. For example, in the case of French, you should see the following folder structure:
```
data/css10/french/lesmis/
data/css10/french/lupincontresholme/
data/css10/french/transcript.txt
```
Next, download our cleaned Common Voice dataset:
```
wget https://www.dropbox.com/s/axoic9eoeii1zyd/clean_comvoi.tar.gz
tar -zxvf clean_comvoi.tar.gz
rm clean_comvoi.tar.gz
```
## :scroll: Prepare spectrograms
This repository provides cleaned transcripts and meta-files, and you have already downloaded the corresponding `.wav` files. However, it is handy to precompute spectrograms because it speeds up training. To do so, run the ad-hoc script that creates mel and linear spectrograms for you:
```
cd <project_root>/data
python3 prepare_css_spectrograms.py
```
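For reference, the per-file work amounts roughly to the following sketch (librosa is assumed; the STFT and mel settings here are illustrative, the real ones live in `params/params.py`):

```python
import numpy as np
import librosa

def compute_spectrograms(wav_path, sr=22050, n_fft=1024, hop=256, n_mels=80):
    """Compute log linear and log mel spectrograms and save them next to the audio."""
    y, _ = librosa.load(wav_path, sr=sr)
    linear = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    mel = librosa.feature.melspectrogram(S=linear ** 2, sr=sr, n_mels=n_mels)
    np.save(wav_path.replace(".wav", "_lin.npy"), np.log(linear + 1e-5))
    np.save(wav_path.replace(".wav", "_mel.npy"), np.log(mel + 1e-5))
```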
You can create the meta-file, spectrograms, and phonemicized transcripts for other datasets by applying the corresponding loader to the original downloaded and extracted data (see `dataset/loaders.py` for the supported datasets, such as LJ Speech or M-AILABS). Note that you then need to split the resulting meta-file into training and validation parts, for example as sketched below.
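A simple way to do that split; the file names, the seed, and the 99:1 ratio below are illustrative, not fixed by the repository:

```python
import random

# read the full meta-file (one utterance per line)
with open("train.txt", encoding="utf-8") as f:
    lines = f.readlines()

random.Random(42).shuffle(lines)
split = int(len(lines) * 0.99)

# write the training and validation parts
with open("train.txt", "w", encoding="utf-8") as f:
    f.writelines(lines[:split])
with open("val.txt", "w", encoding="utf-8") as f:
    f.writelines(lines[split:])
```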
Now we can run training. See the `params/params.py` file for an exhaustive description of all parameters. The `params` folder also contains prepared parameter configurations (such as `generated_switching.json`) for multilingual training on the whole CSS10 dataset and for training code-switching models on a dataset consisting of the cleaned Common Voice data and five CSS10 languages.
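For orientation, such a JSON file simply overrides a subset of the defaults. The sketch below shows the idea; it is not the repository's actual loading code, and the hyperparameter names are made up:

```python
import json
from types import SimpleNamespace

# stand-in for the defaults defined in params/params.py (keys are made up)
hp = SimpleNamespace(learning_rate=1e-3, batch_size=32, languages=["en"])

# a JSON configuration overrides a subset of these defaults
with open("params/generated_switching.json", encoding="utf-8") as f:
    for key, value in json.load(f).items():
        setattr(hp, key, value)
```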
Train with predefined configurations (recommended for a quick start), for example:
```
PYTHONIOENCODING=utf-8 python3 train.py --hyper_parameters generated_switching
```
Please note the missing `.json` extension; configurations are referenced by name only.
Or train with the default parameters (the default dataset is LJ Speech):
```
PYTHONIOENCODING=utf-8 python3 train.py
```
By default, training logs are saved into the `logs` directory. Use TensorBoard to monitor training:
```
tensorboard --logdir logs --port 6666 &
```
Checkpoints are saved into the `checkpoints` directory by default. They contain model weights, hyperparameters, the optimizer state, and the state of the scheduler. To restore training from a checkpoint, let's say one named `CHECKPOINT-1`, run:
```
PYTHONIOENCODING=utf-8 python3 train.py --checkpoint CHECKPOINT-1
```
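For orientation, restoring roughly amounts to the following; the dictionary keys and the stand-in model are illustrative, not the repository's actual code:

```python
import torch

# stand-ins for the real model, optimizer, and scheduler
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000)

checkpoint = torch.load("checkpoints/CHECKPOINT-1", map_location="cpu")
model.load_state_dict(checkpoint["model"])          # model weights
optimizer.load_state_dict(checkpoint["optimizer"])  # optimizer state
scheduler.load_state_dict(checkpoint["scheduler"])  # LR scheduler state
```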
For generating spectrograms, see `synthesize.py` or the interactive Colab notebooks (here and here). An example call that uses a checkpoint and saves both the synthesized spectrogram and the corresponding waveform vocoded with the Griffin-Lim algorithm:
echo "01|Dies ist ein Beispieltext.|00-fr|de" | python3 synthesize.py --checkpoint checkpoints/CHECKPOINT-1 --save_spec
Please see this file for more details about the source code and its structure.