Source-Filter HiFi-GAN (SiFi-GAN)

This repo provides the official PyTorch implementation of SiFi-GAN, a fast and pitch-controllable high-fidelity neural vocoder. For more information, please see our demo.

Environment setup

$ cd SiFiGAN
$ pip install -e .

Please refer to the Parallel WaveGAN repo for more details.

Folder architecture

  • egs: The folder for projects.
  • egs/namine_ritsu: The folder of the Namine Ritsu project example.
  • sifigan: The folder containing the source code.

Dataset preparation for the Namine Ritsu database is based on NNSVS. Please refer to it for the procedure and details.


In this repo, hyperparameters are managed using Hydra. Hydra provides an easy way to dynamically create a hierarchical configuration by composition and override it through config files and the command line.
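Conceptually, Hydra composes a hierarchical config and then applies dotted `key=value` overrides from the command line, as in the `sifigan-*` commands below. The sketch here mimics that override mechanism on a plain nested dict using only the standard library; the config keys are illustrative, not the actual SiFiGAN schema.

```python
def apply_overrides(config, overrides):
    """Apply Hydra-style "a.b=value" overrides to a nested dict (illustrative only)."""
    for item in overrides:
        key, value = item.split("=", 1)
        node = config
        parts = key.split(".")
        # Walk down to the parent of the target key, creating levels as needed.
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config

# Hypothetical base config; real defaults live in YAML files under sifigan/bin/config.
base = {"train": {"batch_size": "16"}, "out_dir": "exp/default"}
cfg = apply_overrides(base, ["train.batch_size=8", "out_dir=exp/sifigan"])
```

This is why the commands below can switch generators, datasets, and output directories without editing any file.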

Dataset preparation

Prepare the dataset and scp files listing the path to each audio file according to your own dataset (e.g., egs/namine_ritsu/data/scp/namine_ritsu.scp). List files with the paths to the extracted features are automatically created in the next step (e.g., egs/namine_ritsu/data/scp/namine_ritsu.list). Note that separate scp/list files are needed for training, validation, and evaluation.
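For illustration, an scp file is just a plain-text list with one audio file path per line (the paths below are hypothetical):

```
wav/namine_ritsu_song001.wav
wav/namine_ritsu_song002.wav
wav/namine_ritsu_song003.wav
```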


# Move to the project directory
$ cd egs/namine_ritsu

# Extract acoustic features (F0, mel-cepstrum, etc.)
# You can customize parameters according to sifigan/bin/config/extract_features.yaml
$ sifigan-extract-features audio=data/scp/namine_ritsu_all.scp

# Compute statistics of training data
$ sifigan-compute-statistics feats=data/scp/namine_ritsu_train.list stats=data/stats/namine_ritsu_train.joblib
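The statistics step typically boils down to per-dimension mean and standard deviation over all training frames, later used to normalize the features. The actual tool stores its results with joblib; the sketch below uses only the standard library, and the feature values are made up.

```python
import math

def compute_statistics(frames):
    """Return per-dimension (mean, std) over a list of feature vectors (illustrative only)."""
    dim = len(frames[0])
    n = len(frames)
    means = [sum(f[d] for f in frames) / n for d in range(dim)]
    stds = [
        math.sqrt(sum((f[d] - means[d]) ** 2 for f in frames) / n)
        for d in range(dim)
    ]
    return means, stds

# Toy 2-dimensional "features" standing in for real acoustic frames.
frames = [[0.0, 2.0], [2.0, 2.0], [4.0, 2.0]]
means, stds = compute_statistics(frames)
# means == [2.0, 2.0]; stds[1] == 0.0 because the second dimension is constant.
```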


Training

# Train a model, customizing the hyperparameters as you like
$ sifigan-train generator=sifigan discriminator=univnet train=sifigan data=namine_ritsu out_dir=exp/sifigan


Inference

# Decode with several F0 scaling factors
$ sifigan-decode out_dir=exp/sifigan checkpoint_steps=400000 f0_factors=[0.5,1.0,2.0]
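The idea behind f0_factors is straightforward: voiced frames (F0 > 0) are multiplied by the scaling factor, while unvoiced frames (F0 == 0) stay at zero. The sketch below shows just that scaling rule; the actual decoding pipeline applies it to the extracted F0 features before waveform generation.

```python
def scale_f0(f0, factor):
    """Scale voiced F0 values by `factor`, keeping unvoiced frames at 0 (illustrative only)."""
    return [value * factor if value > 0 else 0.0 for value in f0]

# Toy F0 contour in Hz; zeros mark unvoiced frames.
f0 = [0.0, 220.0, 230.0, 0.0, 240.0]
doubled = scale_f0(f0, 2.0)  # factor 2.0 shifts voiced frames one octave up
# doubled == [0.0, 440.0, 460.0, 0.0, 480.0]
```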

Monitor training progress

$ tensorboard --logdir exp


If you find the code helpful, please cite the following article.


Development:
Reo Yoneyama @ Nagoya University, Japan
E-mail: [email protected]

Advisors:
Yi-Chiao Wu @ Meta Reality Labs Research, USA
E-mail: [email protected]
Tomoki Toda @ Nagoya University, Japan
E-mail: [email protected]

