This repository is forked from Real-Time-Voice-Cloning, which only supports English.


  • Chinese: supports Mandarin, tested with the aidatatang_200zh dataset

  • PyTorch: tested with version 1.9.0 (latest as of August 2021), on a Tesla T4 and a GTX 2060 GPU

  • Windows + Linux: tested on both Windows and Linux after fixing nits

  • Easy & Awesome: good results with only a newly trained synthesizer, by reusing the pretrained encoder/vocoder


Quick Start

1. Install Requirements

Follow the original repo to make sure your environment is ready.
**Python 3.7 or higher** is needed to run the toolbox.

  • Install PyTorch.
  • Install ffmpeg.
  • Run pip install -r requirements.txt to install the remaining necessary packages.
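A quick sanity check for the prerequisites above. This is a minimal sketch (not part of the repo) that only reports whether PyTorch is importable and ffmpeg is on your PATH:

```python
import importlib.util
import shutil

def check_environment():
    """Report which prerequisites are available on this machine."""
    return {
        "pytorch": importlib.util.find_spec("torch") is not None,  # installed Python package
        "ffmpeg": shutil.which("ffmpeg") is not None,              # executable on PATH
    }

print(check_environment())
```

If either value prints as False, revisit the corresponding install step before continuing.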

2. Train synthesizer with aidatatang_200zh

  • Download the aidatatang_200zh dataset and unzip it: make sure you can access all .wav files in the train folder

  • Preprocess the audio and the mel spectrograms:
    python synthesizer_preprocess_audio.py <datasets_root>

  • Preprocess the embeddings:
    python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer

  • Train the synthesizer:
    python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer

  • Go to the next step once the attention line appears and the loss meets your needs; checkpoints are saved under synthesizer/saved_models/.

FYI, my attention line appeared after 18k steps and the loss dropped below 0.4 after 50k steps.
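To pick up training results, you typically want the newest checkpoint. A small helper sketch, assuming checkpoints land under synthesizer/saved_models/ as noted above (the directory name and the helper itself are illustrative, not part of the repo):

```python
from pathlib import Path

def latest_checkpoint(models_dir="synthesizer/saved_models"):
    """Return the most recently modified file under the saved-models
    folder, or None if no checkpoints exist yet."""
    root = Path(models_dir)
    if not root.exists():
        return None
    files = [p for p in root.rglob("*") if p.is_file()]
    # Newest by modification time; default=None covers an empty folder.
    return max(files, key=lambda p: p.stat().st_mtime, default=None)
```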

3. Launch the Toolbox

You can then try the toolbox:

python demo_toolbox.py -d <datasets_root>


TODO

  • [x] Add demo video
  • [ ] Add support for more datasets
  • [ ] Upload pretrained model
  • Welcome to add more