diff --git a/README.md b/README.md index adc1344..7a6f2f8 100644 --- a/README.md +++ b/README.md @@ -1,370 +1,6 @@ -
+# GPT-SoVITS_inference - -

GPT-SoVITS-WebUI

-A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

- -[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Boss/GPT-SoVITS) - -RVC-Boss%2FGPT-SoVITS | Trendshift - - - -[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb) -[![License](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) -[![Huggingface](https://img.shields.io/badge/🤗%20-online%20demo-yellow.svg?style=for-the-badge)](https://huggingface.co/spaces/lj1995/GPT-SoVITS-v2) -[![Discord](https://img.shields.io/discord/1198701940511617164?color=%23738ADB&label=Discord&style=for-the-badge)](https://discord.gg/dnrgs5GHfG) - -**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md) - -
- ---- - -## Features: - -1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion. - -2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism. - -3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese. - -4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models. - -**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!** - -Unseen speakers few-shot fine-tuning demo: - -https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb - -**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)** - -## Installation - -For users in China, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online. - -### Tested Environments - -- Python 3.9, PyTorch 2.0.1, CUDA 11 -- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3 -- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon) -- Python 3.9, PyTorch 2.2.2, CPU devices - -_Note: numba==0.56.4 requires py<3.11_ - -### Windows - -If you are a Windows user (tested with win>=10), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI. - -**Users in China can [download the package here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#KTvnO).** - -### Linux - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -bash install.sh -``` - -### macOS - -**Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.** - -1. Install Xcode command-line tools by running `xcode-select --install`. -2. Install FFmpeg by running `brew install ffmpeg`. -3. Install the program by running the following commands: - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -pip install -r requirements.txt -``` - -### Install Manually - -#### Install FFmpeg - -##### Conda Users - -```bash -conda install ffmpeg -``` - -##### Ubuntu/Debian Users - -```bash -sudo apt install ffmpeg -sudo apt install libsox-dev -conda install -c conda-forge 'ffmpeg<7' -``` - -##### Windows Users - -Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root. - -Install [Visual Studio 2017](https://aka.ms/vs/17/release/vc_redist.x86.exe) (Korean TTS Only) - -##### MacOS Users -```bash -brew install ffmpeg -``` - -#### Install Dependences - -```bash -pip install -r requirements.txt -``` - -### Using Docker - -#### docker-compose.yaml configuration - -0. 
Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the currently packaged latest images and select as per your situation, or alternatively, build locally using a Dockerfile according to your own needs. -1. Environment Variables: - - is_half: Controls half-precision/double-precision. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your actual situation. -2. Volumes Configuration,The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content. -3. shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operations. Adjust according to your own situation. -4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances. - -#### Running with docker compose - -``` -docker compose -f "docker-compose.yaml" up -d -``` - -#### Running with docker command - -As above, modify the corresponding parameters based on your actual situation, then run the following command: - -``` -docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx -``` - -## Pretrained Models - -**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).** - -1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`. - -2. Download G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`.(Chinese TTS Only) - -3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`. - - - If you want to use `bs_roformer` or `mel_band_roformer` models for UVR5, you can manually download the model and corresponding configuration file, and put them in `tools/uvr5/uvr5_weights`. **Rename the model file and configuration file, ensure that the model and configuration files have the same and corresponding names except for the suffix**. In addition, the model and configuration file names **must include `roformer`** in order to be recognized as models of the roformer class. - - - The suggestion is to **directly specify the model type** in the model name and configuration file name, such as `mel_mand_roformer`, `bs_roformer`. If not specified, the features will be compared from the configuration file to determine which type of model it is. For example, the model `bs_roformer_ep_368_sdr_12.9628.ckpt` and its corresponding configuration file `bs_roformer_ep_368_sdr_12.9628.yaml` are a pair, `kim_mel_band_roformer.ckpt` and `kim_mel_band_roformer.yaml` are also a pair. - -4. 
For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`. - -5. For English or Japanese ASR (additionally), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. Also, [other models](https://huggingface.co/Systran) may have the similar effect with smaller disk footprint. - -## Dataset Format - -The TTS annotation .list file format: - -``` -vocal_path|speaker_name|language|text -``` - -Language dictionary: - -- 'zh': Chinese -- 'ja': Japanese -- 'en': English -- 'ko': Korean -- 'yue': Cantonese - -Example: - -``` -D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin. -``` - -## Finetune and inference - -### Open WebUI - -#### Integrated Package Users - -Double-click `go-webui.bat`or use `go-webui.ps1` -if you want to switch to V1,then double-click`go-webui-v1.bat` or use `go-webui-v1.ps1` - -#### Others - -```bash -python webui.py -``` - -if you want to switch to V1,then - -```bash -python webui.py v1 -``` -Or maunally switch version in WebUI - -### Finetune - -#### Path Auto-filling is now supported - - 1. Fill in the audio path - 2. Slice the audio into small chunks - 3. Denoise(optinal) - 4. ASR - 5. Proofreading ASR transcriptions - 6. Go to the next Tab, then finetune the model - -### Open Inference WebUI - -#### Integrated Package Users - -Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1` ,then open the inference webui at `1-GPT-SoVITS-TTS/1C-inference` - -#### Others - -```bash -python GPT_SoVITS/inference_webui.py -``` -OR - -```bash -python webui.py -``` -then open the inference webui at `1-GPT-SoVITS-TTS/1C-inference` - -## V2 Release Notes - -New Features: - -1. Support Korean and Cantonese - -2. An optimized text frontend - -3. Pre-trained model extended from 2k hours to 5k hours - -4. Improved synthesis quality for low-quality reference audio - - [more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)) - -Use v2 from v1 environment: - -1. `pip install -r requirements.txt` to update some packages - -2. Clone the latest codes from github. - -3. Download v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`. - - Chinese v2 additional: [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip)(Download G2PW models, unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`. - -## V3 Release Notes - -New Features: - -1. The timbre similarity is higher, requiring less training data to approximate the target speaker (the timbre similarity is significantly improved using the base model directly without fine-tuning). - -2. GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression. - - [more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)) - -Use v3 from v2 environment: - -1. 
`pip install -r requirements.txt` to update some packages - -2. Clone the latest codes from github. - -3. Download v3 pretrained models (s1v3.ckpt, s2Gv3.pth and models--nvidia--bigvgan_v2_24khz_100band_256x folder) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS\pretrained_models`. - - additional: for Audio Super Resolution model, you can read [how to download](./tools/AP_BWE_main/24kto48k/readme.txt) - - -## Todo List - -- [x] **High Priority:** - - - [x] Localization in Japanese and English. - - [x] User guide. - - [x] Japanese and English dataset fine tune training. - -- [ ] **Features:** - - [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min). - - [x] TTS speaking speed control. - - [ ] ~~Enhanced TTS emotion control.~~ Maybe use pretrained finetuned preset GPT models for better emotion. - - [ ] Experiment with changing SoVITS token inputs to probability distribution of GPT vocabs (transformer latent). - - [x] Improve English and Japanese text frontend. - - [ ] Develop tiny and larger-sized TTS models. - - [x] Colab scripts. - - [x] Try expand training dataset (2k hours -> 10k hours). - - [x] better sovits base model (enhanced audio quality) - - [ ] model mix - -## (Additional) Method for running from the command line -Use the command line to open the WebUI for UVR5 -``` -python tools/uvr5/webui.py "" -``` - -This is how the audio segmentation of the dataset is done using the command line -``` -python audio_slicer.py \ - --input_path "" \ - --output_root "" \ - --threshold \ - --min_length \ - --min_interval - --hop_size -``` -This is how dataset ASR processing is done using the command line(Only Chinese) -``` -python tools/asr/funasr_asr.py -i -o -``` -ASR processing is performed through Faster_Whisper(ASR marking except Chinese) - -(No progress bars, GPU performance may cause time delays) -``` -python ./tools/asr/fasterwhisper_asr.py -i -o -l -p -``` -A custom list save path is enabled - -## Credits - -Special thanks to the following projects and contributors: - -### Theoretical Research -- [ar-vits](https://github.com/innnky/ar-vits) -- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR) -- [vits](https://github.com/jaywalnut310/vits) -- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556) -- [contentvec](https://github.com/auspicious3000/contentvec/) -- [hifi-gan](https://github.com/jik876/hifi-gan) -- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41) -- [f5-TTS](https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/dit.py) -- [shortcut flow matching](https://github.com/kvfrans/shortcut-models/blob/main/targets_shortcut.py) -### Pretrained Models -- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain) -- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) -- [BigVGAN](https://github.com/NVIDIA/BigVGAN) -### Text Frontend for Inference -- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization) -- [split-lang](https://github.com/DoodleBears/split-lang) -- [g2pW](https://github.com/GitYCC/g2pW) -- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW) -- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw) -### WebUI Tools -- 
[ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui) -- [audio-slicer](https://github.com/openvpi/audio-slicer) -- [SubFix](https://github.com/cronrpc/SubFix) -- [FFmpeg](https://github.com/FFmpeg/FFmpeg) -- [gradio](https://github.com/gradio-app/gradio) -- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) -- [FunASR](https://github.com/alibaba-damo-academy/FunASR) -- [AP-BWE](https://github.com/yxlu-0102/AP-BWE) - -Thankful to @Naozumi520 for providing the Cantonese training set and for the guidance on Cantonese-related knowledge. - -## Thanks to all contributors for their efforts - - - - +- This project is a modified version of [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS). +- It targets the scenario where a voice model has already been trained with GPT-SoVITS and needs to be integrated into another project, so the training code and the WebUI-based inference code have been removed. +- After completing the environment setup (see the [GPT-SoVITS documentation](./README_origin.md)), edit the `speakers` section around line 42 of inference.py; it mainly contains the paths to the trained models, the reference audio, and related settings (a hypothetical sketch follows below). +- Running the script directly produces output.wav, which is the speech file generated by the model. \ No newline at end of file diff --git a/README_origin.md b/README_origin.md new file mode 100644 index 0000000..adc1344 --- /dev/null +++ b/README_origin.md @@ -0,0 +1,370 @@ +
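As a rough illustration of the `speakers` edit described in the new README above: inference.py is not shown in this diff, so the exact structure may differ, and every key name and path below is a hypothetical placeholder rather than part of the repository.

```python
# Hypothetical sketch only -- the real `speakers` structure in inference.py may
# differ. Replace the placeholder paths with your own fine-tuned GPT/SoVITS
# weights and reference audio.
speakers = {
    "my_speaker": {
        "gpt_model": "GPT_weights/my_speaker-e15.ckpt",          # trained GPT weights (placeholder path)
        "sovits_model": "SoVITS_weights/my_speaker_e8_s96.pth",  # trained SoVITS weights (placeholder path)
        "ref_audio": "reference/my_speaker_ref.wav",             # reference audio clip (placeholder path)
        "ref_text": "Transcript of the reference clip.",         # text spoken in the reference clip
        "ref_language": "en",                                    # language code, e.g. "zh", "en", "ja"
    },
}
```

With the paths filled in, running `python inference.py` should then write the generated speech to output.wav, as the README states.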
+ + +

GPT-SoVITS-WebUI

+A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

+ +[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Boss/GPT-SoVITS) + +RVC-Boss%2FGPT-SoVITS | Trendshift + + + +[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb) +[![License](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) +[![Huggingface](https://img.shields.io/badge/🤗%20-online%20demo-yellow.svg?style=for-the-badge)](https://huggingface.co/spaces/lj1995/GPT-SoVITS-v2) +[![Discord](https://img.shields.io/discord/1198701940511617164?color=%23738ADB&label=Discord&style=for-the-badge)](https://discord.gg/dnrgs5GHfG) + +**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md) + +
+ +--- + +## Features: + +1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion. + +2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism. + +3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese. + +4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models. + +**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!** + +Unseen speakers few-shot fine-tuning demo: + +https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb + +**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)** + +## Installation + +For users in China, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online. + +### Tested Environments + +- Python 3.9, PyTorch 2.0.1, CUDA 11 +- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3 +- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon) +- Python 3.9, PyTorch 2.2.2, CPU devices + +_Note: numba==0.56.4 requires py<3.11_ + +### Windows + +If you are a Windows user (tested with win>=10), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI. + +**Users in China can [download the package here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#KTvnO).** + +### Linux + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits +bash install.sh +``` + +### macOS + +**Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.** + +1. Install Xcode command-line tools by running `xcode-select --install`. +2. Install FFmpeg by running `brew install ffmpeg`. +3. Install the program by running the following commands: + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits +pip install -r requirements.txt +``` + +### Install Manually + +#### Install FFmpeg + +##### Conda Users + +```bash +conda install ffmpeg +``` + +##### Ubuntu/Debian Users + +```bash +sudo apt install ffmpeg +sudo apt install libsox-dev +conda install -c conda-forge 'ffmpeg<7' +``` + +##### Windows Users + +Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root. + +Install [Visual Studio 2017](https://aka.ms/vs/17/release/vc_redist.x86.exe) (Korean TTS Only) + +##### MacOS Users +```bash +brew install ffmpeg +``` + +#### Install Dependences + +```bash +pip install -r requirements.txt +``` + +### Using Docker + +#### docker-compose.yaml configuration + +0. 
Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the latest packaged images and pick one that suits your situation, or build locally from the Dockerfile according to your own needs. +1. Environment Variables: + - is_half: Controls half precision (True) versus full precision (False). This is typically the cause if the content under the 4-cnhubert/5-wav32k directories is not generated correctly during the "SSL extracting" step. Set it to True or False based on your actual situation. +2. Volumes Configuration: The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content. +3. shm_size: The default shared memory for Docker Desktop on Windows is too small, which can cause abnormal behavior. Adjust it according to your own situation. +4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances. + +#### Running with docker compose + +``` +docker compose -f "docker-compose.yaml" up -d +``` + +#### Running with docker command + +As above, modify the corresponding parameters based on your actual situation, then run the following command: + +``` +docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx +``` + +## Pretrained Models + +**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).** + +1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`. + +2. Download G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename to `G2PWModel`, and then place them in `GPT_SoVITS/text`. (Chinese TTS Only) + +3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`. + + - If you want to use `bs_roformer` or `mel_band_roformer` models for UVR5, you can manually download the model and its corresponding configuration file and put them in `tools/uvr5/uvr5_weights`. **Rename the model file and configuration file so that, apart from the suffix, they share the same corresponding name**. In addition, the model and configuration file names **must include `roformer`** in order to be recognized as roformer-class models. + + - The suggestion is to **directly specify the model type** in the model name and configuration file name, such as `mel_band_roformer`, `bs_roformer`. If it is not specified, the settings in the configuration file are compared to determine which type of model it is. For example, the model `bs_roformer_ep_368_sdr_12.9628.ckpt` and its corresponding configuration file `bs_roformer_ep_368_sdr_12.9628.yaml` are a pair, and `kim_mel_band_roformer.ckpt` and `kim_mel_band_roformer.yaml` are also a pair. + +4.
For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`. + +5. For English or Japanese ASR (additionally), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. Also, [other models](https://huggingface.co/Systran) may have a similar effect with a smaller disk footprint. + +## Dataset Format + +The TTS annotation .list file format: + +``` +vocal_path|speaker_name|language|text +``` + +Language dictionary: + +- 'zh': Chinese +- 'ja': Japanese +- 'en': English +- 'ko': Korean +- 'yue': Cantonese + +Example: + +``` +D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin. +``` + +## Finetune and inference + +### Open WebUI + +#### Integrated Package Users + +Double-click `go-webui.bat` or use `go-webui.ps1`. +If you want to switch to V1, double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`. + +#### Others + +```bash +python webui.py +``` + +If you want to switch to V1, then run: + +```bash +python webui.py v1 +``` +Or manually switch the version in the WebUI. + +### Finetune + +#### Path Auto-filling is now supported + + 1. Fill in the audio path + 2. Slice the audio into small chunks + 3. Denoise (optional) + 4. ASR + 5. Proofread the ASR transcriptions + 6. Go to the next tab, then finetune the model + +### Open Inference WebUI + +#### Integrated Package Users + +Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference` + +#### Others + +```bash +python GPT_SoVITS/inference_webui.py +``` +OR + +```bash +python webui.py +``` +then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference` + +## V2 Release Notes + +New Features: + +1. Support for Korean and Cantonese + +2. An optimized text frontend + +3. Pre-trained model extended from 2k hours to 5k hours + +4. Improved synthesis quality for low-quality reference audio + + [more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)) + +Use v2 from the v1 environment: + +1. `pip install -r requirements.txt` to update some packages + +2. Clone the latest code from GitHub. + +3. Download v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`. + + Chinese v2 additional: [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip) (download the G2PW models, unzip and rename to `G2PWModel`, and then place it in `GPT_SoVITS/text`). + +## V3 Release Notes + +New Features: + +1. The timbre similarity is higher, requiring less training data to approximate the target speaker (timbre similarity is significantly improved even when using the base model directly without fine-tuning). + +2. The GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression. + + [more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7)) + +Use v3 from the v2 environment: + +1.
`pip install -r requirements.txt` to update some packages + +2. Clone the latest code from GitHub. + +3. Download the v3 pretrained models (s1v3.ckpt, s2Gv3.pth, and the models--nvidia--bigvgan_v2_24khz_100band_256x folder) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS\pretrained_models`. + + Additional: for the Audio Super Resolution model, see [how to download](./tools/AP_BWE_main/24kto48k/readme.txt). + + +## Todo List + +- [x] **High Priority:** + + - [x] Localization in Japanese and English. + - [x] User guide. + - [x] Japanese and English dataset fine-tune training. + +- [ ] **Features:** + - [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min). + - [x] TTS speaking speed control. + - [ ] ~~Enhanced TTS emotion control.~~ Maybe use pretrained, fine-tuned preset GPT models for better emotion. + - [ ] Experiment with changing SoVITS token inputs to the probability distribution over GPT vocabs (transformer latent). + - [x] Improve the English and Japanese text frontend. + - [ ] Develop tiny and larger-sized TTS models. + - [x] Colab scripts. + - [x] Expand the training dataset (2k hours -> 10k hours). + - [x] Better SoVITS base model (enhanced audio quality). + - [ ] Model mixing. + +## (Additional) Method for running from the command line +Use the command line to open the WebUI for UVR5: +``` +python tools/uvr5/webui.py "" +``` + +This is how audio segmentation of the dataset is done using the command line: +``` +python audio_slicer.py \ + --input_path "" \ + --output_root "" \ + --threshold \ + --min_length \ + --min_interval \ + --hop_size +``` +This is how dataset ASR processing is done using the command line (Chinese only): +``` +python tools/asr/funasr_asr.py -i -o +``` +ASR processing for languages other than Chinese is performed through Faster Whisper. + +(No progress bar is shown, and GPU performance may cause delays.) +``` +python ./tools/asr/fasterwhisper_asr.py -i -o -l -p +``` +A custom .list save path is supported. + +## Credits + +Special thanks to the following projects and contributors: + +### Theoretical Research +- [ar-vits](https://github.com/innnky/ar-vits) +- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR) +- [vits](https://github.com/jaywalnut310/vits) +- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556) +- [contentvec](https://github.com/auspicious3000/contentvec/) +- [hifi-gan](https://github.com/jik876/hifi-gan) +- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41) +- [f5-TTS](https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/dit.py) +- [shortcut flow matching](https://github.com/kvfrans/shortcut-models/blob/main/targets_shortcut.py) +### Pretrained Models +- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain) +- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) +- [BigVGAN](https://github.com/NVIDIA/BigVGAN) +### Text Frontend for Inference +- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization) +- [split-lang](https://github.com/DoodleBears/split-lang) +- [g2pW](https://github.com/GitYCC/g2pW) +- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW) +- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw) +### WebUI Tools +- 
[ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui) +- [audio-slicer](https://github.com/openvpi/audio-slicer) +- [SubFix](https://github.com/cronrpc/SubFix) +- [FFmpeg](https://github.com/FFmpeg/FFmpeg) +- [gradio](https://github.com/gradio-app/gradio) +- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) +- [FunASR](https://github.com/alibaba-damo-academy/FunASR) +- [AP-BWE](https://github.com/yxlu-0102/AP-BWE) + +Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge. + +## Thanks to all contributors for their efforts + + +