diff --git a/0-bat-files/0 Update.bat b/0 Bat Files/0 Update.bat
similarity index 100%
rename from 0-bat-files/0 Update.bat
rename to 0 Bat Files/0 Update.bat
diff --git a/0-bat-files/1 Update Pip.bat b/0 Bat Files/1 Update Pip.bat
similarity index 100%
rename from 0-bat-files/1 Update Pip.bat
rename to 0 Bat Files/1 Update Pip.bat
diff --git a/0-bat-files/10 Model Management(Optional).bat b/0 Bat Files/10 Model Management(Optional).bat
similarity index 100%
rename from 0-bat-files/10 Model Management(Optional).bat
rename to 0 Bat Files/10 Model Management(Optional).bat
diff --git a/0-bat-files/3 run Single File Gradio App.bat b/0 Bat Files/3 run Single File Gradio App.bat
similarity index 100%
rename from 0-bat-files/3 run Single File Gradio App.bat
rename to 0 Bat Files/3 run Single File Gradio App.bat
diff --git a/0-bat-files/5 run Backend.bat b/0 Bat Files/5 run Backend.bat
similarity index 100%
rename from 0-bat-files/5 run Backend.bat
rename to 0 Bat Files/5 run Backend.bat
diff --git a/0-bat-files/6 run Frontend(need Backend).bat b/0 Bat Files/6 run Frontend(need Backend).bat
similarity index 100%
rename from 0-bat-files/6 run Frontend(need Backend).bat
rename to 0 Bat Files/6 run Frontend(need Backend).bat
diff --git a/0-bat-files/999 Force Updating.bat b/0 Bat Files/999 Force Updating.bat
similarity index 100%
rename from 0-bat-files/999 Force Updating.bat
rename to 0 Bat Files/999 Force Updating.bat
diff --git a/Inference b/Inference
index ea3e3fea..fab890cf 160000
--- a/Inference
+++ b/Inference
@@ -1 +1 @@
-Subproject commit ea3e3fea3509dd6148a2e7f18b3edc3a00dcd17b
+Subproject commit fab890cf7c4543665bce47181115ad39cd6f518a
diff --git a/README.md b/README.md
index 96f31b72..95f0b82b 100644
--- a/README.md
+++ b/README.md
@@ -1,43 +1,87 @@
-
+# GSVI: GPT-SoVITS Inference Plugin
-
GPT-SoVITS-WebUI
-A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.
+Welcome to GSVI, an inference-focused plugin built on top of GPT-SoVITS that enhances your text-to-speech (TTS) experience with a user-friendly API. It extends the [original GPT-SoVITS project](https://github.com/RVC-Boss/GPT-SoVITS), making voice synthesis more accessible and versatile.
-[](https://github.com/RVC-Boss/GPT-SoVITS)
+Please note that we do not recommend using GSVI for training. It exists to make GPT-SoVITS simpler and more comfortable to use, and to make sharing models easier.
-

+## Features
-[](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb)
-[](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)
-[](https://huggingface.co/lj1995/GPT-SoVITS/tree/main)
+- High-level abstract interface for easy character and emotion selection
+- Comprehensive TTS engine support (speaker selection, speed adjustment, volume control)
+- User-friendly design for everyone
+- Drop-in use of shared character models: place the model folder and it is ready to use
+- High compatibility and extensibility across platforms and applications (for example, SillyTavern)
-[**English**](./README.md) | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md)
+## Getting Started
-
+1. Install manually, or use the pre-packaged zip for Windows
+2. Place your character model folders in `trained/`
+3. Run a bat file, or launch the Python entry points manually (see the sketch after this list)
+4. If you encounter issues, join our community or consult the FAQ. QQ Group: 863760614, Discord (AI Hub):
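+
+For the manual route, the overall flow looks roughly like this. It is only a sketch that combines the commands from the Installation and Python Files sections of this README, and it assumes you already have a working Python environment:
+
+```bash
+# Install dependencies and fetch the Inference submodule
+pip install -r requirements.txt
+git submodule update --init --recursive
+
+# Put pretrained models in GPT_SoVITS/pretrained_models and character
+# model folders in trained/, then start the single-file Gradio app
+python app.py
+```
+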
----
+We look forward to seeing how you use GSVI to bring your creative projects to life!
-## Features:
+## Usage
-1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
+### Use With Bat Files
-2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
+You will find a number of bat files in `0 Bat Files/`:
-3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.
+- To update, run bat 0 and 1 (or 999, then 0 and 1, to force an update)
+- To start the single-file Gradio app, run bat 3
+- To start the backend and frontend, run bat 5 and then bat 6
-4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
+- To manage your models, run bat 10
-**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**
+### Python Files
-Unseen speakers few-shot fine-tuning demo:
+#### Start with a single Gradio file
-https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb
+- Gradio application: `app.py` (in the GSVI root)
-**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**
+#### Start in backend and frontend mode
+
+- Flask Backend Program: `Inference/src/tts_backend.py`
+- Gradio Frontend Application: `Inference/src/TTS_Webui.py`
+- Any other frontend application or service that uses our API
+
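+To start them manually, run the backend and the frontend in two separate terminals from the repository root, for example:
+
+```bash
+# Terminal 1: start the Flask backend
+python Inference/src/tts_backend.py
+
+# Terminal 2: start the Gradio frontend once the backend is running
+python Inference/src/TTS_Webui.py
+```
+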
+### Model Management
+
+- Gradio Model Management Interface: `Inference/src/Character_Manager.py`
+
+## API Documentation
+
+For API documentation, visit our [Yuque documentation page](https://www.yuque.com/xter/zibxlp/knu8p82lb5ipufqy) or see [API Doc.md](./api_doc.md).
+
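+As a rough illustration of how another application might call the backend over HTTP, here is a sketch of a request. The endpoint path, port, and parameter names used below (`/tts`, `5000`, `character`, `emotion`, `text`) are assumptions for illustration only; check the API documentation above or `Inference/src/tts_backend.py` for the real ones.
+
+```bash
+# Hypothetical request: replace the endpoint, port, and parameter names
+# with the ones from the API documentation before using this
+curl -G "http://127.0.0.1:5000/tts" \
+     --data-urlencode "character=Character1" \
+     --data-urlencode "emotion=default" \
+     --data-urlencode "text=Hello, this is a test." \
+     -o output.wav
+```
+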
+## Model Folder Format
+
+A character model folder, such as `trained/Character1/`, holds everything needed for one character.
+
+Put the .pth, .ckpt, and .wav files in it; the .wav file should be named after its prompt text.
+
+For example:
+
+```
+trained
+--hutao
+----hutao-e75.ckpt
+----hutao_e60_s3360.pth
+----hutao said something.wav
+```
+
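+Assembling such a folder is just a matter of copying files into place. A minimal sketch, reusing the example names above (the source file `some_recording.wav` is made up for illustration):
+
+```bash
+# Create the character folder, then copy in the GPT (.ckpt) and SoVITS (.pth)
+# weights plus a reference wav named after its prompt text
+mkdir -p "trained/hutao"
+cp "hutao-e75.ckpt" "hutao_e60_s3360.pth" "trained/hutao/"
+cp "some_recording.wav" "trained/hutao/hutao said something.wav"
+```
+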
+### Add an emotion to your model
+
+To do this, open the Model Management tool (bat 10, or `Inference/src/Character_Manager.py`).
+
+It lets you assign a reference audio clip to each emotion, which is what enables the emotion options at inference time.
## Installation
-For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.
+You can install GSVI with the guide below, then download the pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS), place them in `GPT_SoVITS/pretrained_models`, and put your character model folders in `trained`.
+
+Or just download the pre-packaged distribution for Windows (then put your character model folders in `trained`).
+
+For the character model folder format, see the Model Folder Format section of this README.
### Tested Environments
@@ -49,7 +93,9 @@ _Note: numba==0.56.4 requires py<3.11_
### Windows
-If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
+If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution]() and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
+
+Or run `pip install -r requirements.txt`, then double-click `install.bat`.
### Linux
@@ -70,25 +116,19 @@ conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
+git submodule update --init --recursive
```
-### Install Manually
+### Install FFmpeg (not needed if you use the pre-packaged zip)
-#### Install Dependences
-
-```bash
-pip install -r requirements.txt
-```
-
-#### Install FFmpeg
-
-##### Conda Users
+#### Conda Users
```bash
conda install ffmpeg
```
-##### Ubuntu/Debian Users
+#### Ubuntu/Debian Users
```bash
sudo apt install ffmpeg
@@ -96,151 +136,22 @@ sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```
-##### Windows Users
+#### Windows Users
Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
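+
+For example, from a shell (the `resolve/main` URLs below are the direct-download form of the links above):
+
+```bash
+# Download ffmpeg.exe and ffprobe.exe into the GPT-SoVITS root
+curl -L -o ffmpeg.exe "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffmpeg.exe"
+curl -L -o ffprobe.exe "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffprobe.exe"
+```
+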
-### Using Docker
-
-#### docker-compose.yaml configuration
-
-0. Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the currently packaged latest images and select as per your situation, or alternatively, build locally using a Dockerfile according to your own needs.
-1. Environment Variables:
-
-- is_half: Controls half-precision/double-precision. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your actual situation.
-
-2. Volumes Configuration,The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
-3. shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operations. Adjust according to your own situation.
-4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
-
-#### Running with docker compose
-
-```
-docker compose -f "docker-compose.yaml" up -d
-```
-
-#### Running with docker command
-
-As above, modify the corresponding parameters based on your actual situation, then run the following command:
-
-```
-docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
-```
-
-## Pretrained Models
+### Pretrained Models (not needed if you use the pre-packaged zip)
Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.
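+
+One way to fetch them from the command line, assuming you have the `huggingface_hub` CLI installed (downloading manually from the model page works just as well):
+
+```bash
+# Download the pretrained models into GPT_SoVITS/pretrained_models
+pip install -U "huggingface_hub[cli]"
+huggingface-cli download lj1995/GPT-SoVITS --local-dir GPT_SoVITS/pretrained_models
+```
+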
-For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.
-Users in China region can download these two models by entering the links below and clicking "Download a copy"
+## Docker
-- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)
+This section is still being written; please check back later.
-- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)
+Important: remove `pyaudio` from `requirements.txt`.
-For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.
-## Dataset Format
-The TTS annotation .list file format:
-```
-vocal_path|speaker_name|language|text
-```
-Language dictionary:
-
-- 'zh': Chinese
-- 'ja': Japanese
-- 'en': English
-
-Example:
-
-```
-D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
-```
-
-## Todo List
-
-- [ ] **High Priority:**
-
- - [x] Localization in Japanese and English.
- - [x] User guide.
- - [x] Japanese and English dataset fine tune training.
-
-- [ ] **Features:**
- - [ ] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
- - [ ] TTS speaking speed control.
- - [ ] Enhanced TTS emotion control.
- - [ ] Experiment with changing SoVITS token inputs to probability distribution of vocabs.
- - [ ] Improve English and Japanese text frontend.
- - [ ] Develop tiny and larger-sized TTS models.
- - [x] Colab scripts.
- - [ ] Try expand training dataset (2k hours -> 10k hours).
- - [ ] better sovits base model (enhanced audio quality)
- - [ ] model mix
-
-## (Optional) If you need, here will provide the command line operation mode
-Use the command line to open the WebUI for UVR5
-```
-python tools/uvr5/webui.py "