# Real-Time Voice Cloning
This repository is an implementation of *Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis* (SV2TTS) with a vocoder that works in real time. Feel free to check my thesis if you're curious or if you're looking for info I haven't documented yet (don't hesitate to make an issue for that too). Mostly I would recommend taking a quick look at the figures beyond the introduction.
SV2TTS is a three-stage deep learning framework that makes it possible to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.
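To make the three stages concrete, here is a minimal sketch modelled on this repo's `demo_cli.py`. The module layout matches the repository, but the model paths are placeholders and exact function signatures may differ between versions:

```python
# Minimal sketch of the SV2TTS pipeline (modelled on demo_cli.py).
# Model paths are placeholders; point them at wherever you put the
# pretrained models.
from pathlib import Path

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Stage 1 (encoder): derive a fixed-size speaker embedding
# from a few seconds of reference audio.
encoder.load_model(Path("path/to/encoder.pt"))
wav = encoder.preprocess_wav(Path("reference.wav"))
embed = encoder.embed_utterance(wav)

# Stage 2 (synthesizer, Tacotron 2): generate a mel spectrogram from text,
# conditioned on the speaker embedding.
synthesizer = Synthesizer(Path("path/to/synthesizer/checkpoints"))
specs = synthesizer.synthesize_spectrograms(["Hello world"], [embed])

# Stage 3 (vocoder, WaveRNN): turn the mel spectrogram into a waveform.
vocoder.load_model(Path("path/to/vocoder.pt"))
generated_wav = vocoder.infer_waveform(specs[0])
```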
### Papers implemented
| URL | Designation | Title | Implementation source |
| --- | --- | --- | --- |
| [1806.04558](https://arxiv.org/abs/1806.04558) | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo |
| [1802.08435](https://arxiv.org/abs/1802.08435) | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN) |
| [1712.05884](https://arxiv.org/abs/1712.05884) | Tacotron 2 (synthesizer) | Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions | [Rayhane-mamah/Tacotron-2](https://github.com/Rayhane-mamah/Tacotron-2) |
| [1710.10467](https://arxiv.org/abs/1710.10467) | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |
## News
**06/07/19**: Need to run within a docker container on a remote server? See here.

**25/06/19**: Experimental support for low-memory GPUs (~2 GB) added for the synthesizer. Pass `--low_mem` to `demo_cli.py` or `demo_toolbox.py` to enable it. It adds a big overhead, so it's not recommended if you have enough VRAM.
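For example, to start the toolbox in low-memory mode: `python demo_toolbox.py --low_mem`.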
## Quick start

### Requirements
You will need the following whether you plan to use the toolbox only or to retrain the models.
**Python 3.7**. Python 3.6 might work too, but I wouldn't go lower because I make extensive use of pathlib.
Run `pip install -r requirements.txt` to install the necessary packages. Additionally you will need [PyTorch](https://pytorch.org/) (>=1.0.1).
A GPU is mandatory, but you don't necessarily need a high-tier GPU if you only want to use the toolbox.
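Before going further, you can sanity-check your setup with a few lines of plain PyTorch (this snippet is not part of the repo):

```python
# Standalone sanity check: confirms the Python version, the installed
# PyTorch version, and whether a CUDA-capable GPU is visible.
import sys

import torch

assert sys.version_info >= (3, 6), "Python 3.7 recommended, 3.6 at minimum"
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```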
### Pretrained models
Download the latest pretrained models here.
### Preliminary
Before you download any dataset, you can begin by testing your configuration with:

`python demo_cli.py`

If all tests pass, you're good to go.
### Datasets
For playing with the toolbox alone, I only recommend downloading `LibriSpeech/train-clean-100`. Extract the contents as `<datasets_root>/LibriSpeech/train-clean-100`, where `<datasets_root>` is a directory of your choosing. Other datasets are supported in the toolbox, see here. You're free not to download any dataset, but then you will need your own data as audio files, or you will have to record it with the toolbox.
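If you want to double-check that the dataset landed in the right place, a few lines of plain Python will do (this snippet is not part of the repo; replace the placeholder path with your own `<datasets_root>`):

```python
# Standalone check, not part of this repo: verify that LibriSpeech was
# extracted to the layout the toolbox expects.
from pathlib import Path

datasets_root = Path("/path/to/datasets_root")  # your chosen <datasets_root>
train_clean = datasets_root / "LibriSpeech" / "train-clean-100"

if train_clean.is_dir():
    n_speakers = sum(1 for d in train_clean.iterdir() if d.is_dir())
    print(f"Found train-clean-100 with {n_speakers} speaker directories")
else:
    print(f"Expected dataset at {train_clean}, but it does not exist")
```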
### Toolbox
You can then try the toolbox:

`python demo_toolbox.py -d <datasets_root>`

or

`python demo_toolbox.py`

depending on whether you downloaded any datasets. If you are running an X-server or if you have the error `Aborted (core dumped)`, see this issue.