From 62076f5409a30c65647ea2dced405c975bbc1a52 Mon Sep 17 00:00:00 2001 From: XTer Date: Mon, 18 Mar 2024 00:31:31 +0800 Subject: [PATCH] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E4=BA=86=E6=96=87=E6=A1=A3?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- {0-bat-files => 0 Bat Files}/0 Update.bat | 0 {0-bat-files => 0 Bat Files}/1 Update Pip.bat | 0 .../10 Model Management(Optional).bat | 0 .../3 run Single File Gradio App.bat | 0 .../5 run Backend.bat | 0 .../6 run Frontend(need Backend).bat | 0 .../999 Force Updating.bat | 0 Inference | 2 +- README.md | 245 +++----- api.py | 559 ------------------ api_doc.md | 102 ++++ 11 files changed, 181 insertions(+), 727 deletions(-) rename {0-bat-files => 0 Bat Files}/0 Update.bat (100%) rename {0-bat-files => 0 Bat Files}/1 Update Pip.bat (100%) rename {0-bat-files => 0 Bat Files}/10 Model Management(Optional).bat (100%) rename {0-bat-files => 0 Bat Files}/3 run Single File Gradio App.bat (100%) rename {0-bat-files => 0 Bat Files}/5 run Backend.bat (100%) rename {0-bat-files => 0 Bat Files}/6 run Frontend(need Backend).bat (100%) rename {0-bat-files => 0 Bat Files}/999 Force Updating.bat (100%) delete mode 100644 api.py create mode 100644 api_doc.md diff --git a/0-bat-files/0 Update.bat b/0 Bat Files/0 Update.bat similarity index 100% rename from 0-bat-files/0 Update.bat rename to 0 Bat Files/0 Update.bat diff --git a/0-bat-files/1 Update Pip.bat b/0 Bat Files/1 Update Pip.bat similarity index 100% rename from 0-bat-files/1 Update Pip.bat rename to 0 Bat Files/1 Update Pip.bat diff --git a/0-bat-files/10 Model Management(Optional).bat b/0 Bat Files/10 Model Management(Optional).bat similarity index 100% rename from 0-bat-files/10 Model Management(Optional).bat rename to 0 Bat Files/10 Model Management(Optional).bat diff --git a/0-bat-files/3 run Single File Gradio App.bat b/0 Bat Files/3 run Single File Gradio App.bat similarity index 100% rename from 0-bat-files/3 run Single File Gradio App.bat rename to 0 Bat Files/3 run Single File Gradio App.bat diff --git a/0-bat-files/5 run Backend.bat b/0 Bat Files/5 run Backend.bat similarity index 100% rename from 0-bat-files/5 run Backend.bat rename to 0 Bat Files/5 run Backend.bat diff --git a/0-bat-files/6 run Frontend(need Backend).bat b/0 Bat Files/6 run Frontend(need Backend).bat similarity index 100% rename from 0-bat-files/6 run Frontend(need Backend).bat rename to 0 Bat Files/6 run Frontend(need Backend).bat diff --git a/0-bat-files/999 Force Updating.bat b/0 Bat Files/999 Force Updating.bat similarity index 100% rename from 0-bat-files/999 Force Updating.bat rename to 0 Bat Files/999 Force Updating.bat diff --git a/Inference b/Inference index ea3e3fea..fab890cf 160000 --- a/Inference +++ b/Inference @@ -1 +1 @@ -Subproject commit ea3e3fea3509dd6148a2e7f18b3edc3a00dcd17b +Subproject commit fab890cf7c4543665bce47181115ad39cd6f518a diff --git a/README.md b/README.md index 96f31b72..95f0b82b 100644 --- a/README.md +++ b/README.md @@ -1,43 +1,87 @@ -
+# GSVI: GPT-SoVITS Inference Plugin -

GPT-SoVITS-WebUI

-A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

+Welcome to GSVI, an inference-specialized plugin built on top of GPT-SoVITS to enhance your text-to-speech (TTS) experience with a user-friendly API interface. This plugin enriches the [original GPT-SoVITS project](https://github.com/RVC-Boss/GPT-SoVITS), making voice synthesis more accessible and versatile.
-[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Boss/GPT-SoVITS)
+Please note that we do not recommend using GSVI for training. It exists to make GPT-SoVITS simpler and more comfortable to use, and to make model sharing easier.
-
+## Features
-[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb)
-[![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)
-[![Huggingface](https://img.shields.io/badge/🤗%20-Models%20Repo-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/GPT-SoVITS/tree/main)
+- High-level abstract interface for easy character and emotion selection
+- Comprehensive TTS engine support (speaker selection, speed adjustment, volume control)
+- User-friendly design for everyone
+- Drop in a shared character model folder and start using it right away
+- High compatibility and extensibility for various platforms and applications (for example: SillyTavern)
-[**English**](./README.md) | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md)
+## Getting Started
-
+1. Install manually, or use the prezip (pre-packaged zip) for Windows
+2. Put your character model folders in `trained/` (see the Model Folder Format section below)
+3. Run the bat files, or run the Python files manually
+4. If you encounter issues, join our community or consult the FAQ. QQ Group: 863760614, Discord (AI Hub):
----
+We look forward to seeing how you use GSVI to bring your creative projects to life!
-## Features:
+## Usage
-1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
+### Use With Bat Files
-2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
+You will find a set of bat files in `0 Bat Files/`:
-3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.
+To update, run bat 0 and 1 (or 999, 0, and 1 to force an update)
+To start the single-file Gradio app, run bat 3
+To start the backend and frontend separately, run bat 5 and 6
-4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
+To manage your models, run bat 10
-**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**
+### Python Files
-Unseen speakers few-shot fine-tuning demo:
+#### Start with a single Gradio file
-https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb
+- Gradio Application: `app.py` (in the root of GSVI)
-**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**
+#### Start with backend and frontend mode
+
+- Flask Backend Program: `Inference/src/tts_backend.py`
+- Gradio Frontend Application: `Inference/src/TTS_Webui.py`
+- Other Frontend Applications or Services Using Our API
+
+### Model Management
+
+- Gradio Model Management Interface: `Inference/src/Character_Manager.py`
+
+## API Documentation
+
+For API documentation, see our [Yuque documentation page](https://www.yuque.com/xter/zibxlp/knu8p82lb5ipufqy) or [API Doc.md](./api_doc.md).
+
+## Model Folder Format
+
+In a character model folder, such as `trained/Character1/`:
+
+Put the pth / ckpt / wav files in it; the wav file should be named after its prompt text.
+
+For example:
+
+```
+trained
+--hutao
+----hutao-e75.ckpt
+----hutao_e60_s3360.pth
+----hutao said something.wav
+```
+
+### Add an emotion to your model
+
+To do this, open the Model Management Tool (10.bat / Inference/src/Character_Manager.py).
+
+It lets you assign a reference audio to each emotion, which is how the emotion options are implemented.
## Installation
-For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.
+You can install GSVI with the guide below, then download the pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS), place them in `GPT_SoVITS/pretrained_models`, and put your character model folders in `trained`.
+
+Or just download the pre-packaged distribution for Windows
(and then put your character model folder in `trained`).
+
+For the character model folder format, see the Model Folder Format section above.

### Tested Environments

@@ -49,7 +93,9 @@ _Note: numba==0.56.4 requires py<3.11_

### Windows

-If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
+If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution]() and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
+
+Or run ```pip install -r requirements.txt```, and then double-click `install.bat`.

### Linux

@@ -70,25 +116,19 @@ conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
+git submodule init
+git submodule update --init --recursive
```

-### Install Manually
+### Install FFmpeg (not needed if you use the prezip)

-#### Install Dependences
-
-```bash
-pip install -r requirements.txt
-```
-
-#### Install FFmpeg
-
-##### Conda Users
+#### Conda Users

```bash
conda install ffmpeg
```

-##### Ubuntu/Debian Users
+#### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
@@ -96,151 +136,22 @@ sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```

-##### Windows Users
+#### Windows Users

Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.

-### Using Docker
-
-#### docker-compose.yaml configuration
-
-0. Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the currently packaged latest images and select as per your situation, or alternatively, build locally using a Dockerfile according to your own needs.
-1. Environment Variables:
-
-- is_half: Controls half-precision/double-precision. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your actual situation.
-
-2. Volumes Configuration,The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
-3. shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operations. Adjust according to your own situation.
-4. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
-
-#### Running with docker compose
-
-```
-docker compose -f "docker-compose.yaml" up -d
-```
-
-#### Running with docker command
-
-As above, modify the corresponding parameters based on your actual situation, then run the following command:
-
-```
-docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
-```
-
-## Pretrained Models
+### Pretrained Models (not needed if you use the prezip)

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

-For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.
-Users in China region can download these two models by entering the links below and clicking "Download a copy"
+## Docker

-- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)
+This section is still being written; please check back later.

-- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)
+Note: remove `pyaudio` from `requirements.txt` first!

-For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.
-## Dataset Format
-The TTS annotation .list file format:
-```
-vocal_path|speaker_name|language|text
-```
-Language dictionary:
-
-- 'zh': Chinese
-- 'ja': Japanese
-- 'en': English
-
-Example:
-
-```
-D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
-```
-
-## Todo List
-
-- [ ] **High Priority:**
-
-   - [x] Localization in Japanese and English.
-   - [x] User guide.
-   - [x] Japanese and English dataset fine tune training.
-
-- [ ] **Features:**
-   - [ ] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
-   - [ ] TTS speaking speed control.
-   - [ ] Enhanced TTS emotion control.
-   - [ ] Experiment with changing SoVITS token inputs to probability distribution of vocabs.
-   - [ ] Improve English and Japanese text frontend.
-   - [ ] Develop tiny and larger-sized TTS models.
-   - [x] Colab scripts.
-   - [ ] Try expand training dataset (2k hours -> 10k hours).
- - [ ] better sovits base model (enhanced audio quality) - - [ ] model mix - -## (Optional) If you need, here will provide the command line operation mode -Use the command line to open the WebUI for UVR5 -``` -python tools/uvr5/webui.py "" -``` -If you can't open a browser, follow the format below for UVR processing,This is using mdxnet for audio processing -``` -python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision -``` -This is how the audio segmentation of the dataset is done using the command line -``` -python audio_slicer.py \ - --input_path "" \ - --output_root "" \ - --threshold \ - --min_length \ - --min_interval - --hop_size -``` -This is how dataset ASR processing is done using the command line(Only Chinese) -``` -python tools/damo_asr/cmd-asr.py "" -``` -ASR processing is performed through Faster_Whisper(ASR marking except Chinese) - -(No progress bars, GPU performance may cause time delays) -``` -python ./tools/damo_asr/WhisperASR.py -i -o -f -l -``` -A custom list save path is enabled - -## Credits - -Special thanks to the following projects and contributors: - -### Theoretical -- [ar-vits](https://github.com/innnky/ar-vits) -- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR) -- [vits](https://github.com/jaywalnut310/vits) -- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556) -- [contentvec](https://github.com/auspicious3000/contentvec/) -- [hifi-gan](https://github.com/jik876/hifi-gan) -- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41) -### Pretrained Models -- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain) -- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) -### Text Frontend for Inference -- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization) -- [LangSegment](https://github.com/juntaosun/LangSegment) -### WebUI Tools -- [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui) -- [audio-slicer](https://github.com/openvpi/audio-slicer) -- [SubFix](https://github.com/cronrpc/SubFix) -- [FFmpeg](https://github.com/FFmpeg/FFmpeg) -- [gradio](https://github.com/gradio-app/gradio) -- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) -- [FunASR](https://github.com/alibaba-damo-academy/FunASR) - -## Thanks to all contributors for their efforts - - - - diff --git a/api.py b/api.py deleted file mode 100644 index 34adfbe9..00000000 --- a/api.py +++ /dev/null @@ -1,559 +0,0 @@ -""" -# api.py usage - -` python api.py -dr "123.wav" -dt "一二三。" -dl "zh" ` - -## 执行参数: - -`-s` - `SoVITS模型路径, 可在 config.py 中指定` -`-g` - `GPT模型路径, 可在 config.py 中指定` - -调用请求缺少参考音频时使用 -`-dr` - `默认参考音频路径` -`-dt` - `默认参考音频文本` -`-dl` - `默认参考音频语种, "中文","英文","日文","zh","en","ja"` - -`-d` - `推理设备, "cuda","cpu"` -`-a` - `绑定地址, 默认"127.0.0.1"` -`-p` - `绑定端口, 默认9880, 可在 config.py 中指定` -`-fp` - `覆盖 config.py 使用全精度` -`-hp` - `覆盖 config.py 使用半精度` - -`-hb` - `cnhubert路径` -`-b` - `bert路径` - -## 调用: - -### 推理 - -endpoint: `/` - -使用执行参数指定的参考音频: -GET: - `http://127.0.0.1:9880?text=先帝创业未半而中道崩殂,今天下三分,益州疲弊,此诚危急存亡之秋也。&text_language=zh` -POST: -```json -{ - "text": "先帝创业未半而中道崩殂,今天下三分,益州疲弊,此诚危急存亡之秋也。", - "text_language": "zh" -} -``` - -手动指定当次推理所使用的参考音频: -GET: - 
`http://127.0.0.1:9880?refer_wav_path=123.wav&prompt_text=一二三。&prompt_language=zh&text=先帝创业未半而中道崩殂,今天下三分,益州疲弊,此诚危急存亡之秋也。&text_language=zh` -POST: -```json -{ - "refer_wav_path": "123.wav", - "prompt_text": "一二三。", - "prompt_language": "zh", - "text": "先帝创业未半而中道崩殂,今天下三分,益州疲弊,此诚危急存亡之秋也。", - "text_language": "zh" -} -``` - -RESP: -成功: 直接返回 wav 音频流, http code 200 -失败: 返回包含错误信息的 json, http code 400 - - -### 更换默认参考音频 - -endpoint: `/change_refer` - -key与推理端一样 - -GET: - `http://127.0.0.1:9880/change_refer?refer_wav_path=123.wav&prompt_text=一二三。&prompt_language=zh` -POST: -```json -{ - "refer_wav_path": "123.wav", - "prompt_text": "一二三。", - "prompt_language": "zh" -} -``` - -RESP: -成功: json, http code 200 -失败: json, 400 - - -### 命令控制 - -endpoint: `/control` - -command: -"restart": 重新运行 -"exit": 结束运行 - -GET: - `http://127.0.0.1:9880/control?command=restart` -POST: -```json -{ - "command": "restart" -} -``` - -RESP: 无 - -""" - - -import argparse -import os -import sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -sys.path.append("%s/GPT_SoVITS" % (now_dir)) - -import signal -from time import time as ttime -import torch -import librosa -import soundfile as sf -from fastapi import FastAPI, Request, HTTPException -from fastapi.responses import StreamingResponse, JSONResponse -import uvicorn -from transformers import AutoModelForMaskedLM, AutoTokenizer -import numpy as np -from feature_extractor import cnhubert -from io import BytesIO -from module.models import SynthesizerTrn -from AR.models.t2s_lightning_module import Text2SemanticLightningModule -from text import cleaned_text_to_sequence -from text.cleaner import clean_text -from module.mel_processing import spectrogram_torch -from my_utils import load_audio -import config as global_config - -g_config = global_config.Config() - -# AVAILABLE_COMPUTE = "cuda" if torch.cuda.is_available() else "cpu" - -parser = argparse.ArgumentParser(description="GPT-SoVITS api") - -parser.add_argument("-s", "--sovits_path", type=str, default=g_config.sovits_path, help="SoVITS模型路径") -parser.add_argument("-g", "--gpt_path", type=str, default=g_config.gpt_path, help="GPT模型路径") - -parser.add_argument("-dr", "--default_refer_path", type=str, default="", help="默认参考音频路径") -parser.add_argument("-dt", "--default_refer_text", type=str, default="", help="默认参考音频文本") -parser.add_argument("-dl", "--default_refer_language", type=str, default="", help="默认参考音频语种") - -parser.add_argument("-d", "--device", type=str, default=g_config.infer_device, help="cuda / cpu") -parser.add_argument("-a", "--bind_addr", type=str, default="0.0.0.0", help="default: 0.0.0.0") -parser.add_argument("-p", "--port", type=int, default=g_config.api_port, help="default: 9880") -parser.add_argument("-fp", "--full_precision", action="store_true", default=False, help="覆盖config.is_half为False, 使用全精度") -parser.add_argument("-hp", "--half_precision", action="store_true", default=False, help="覆盖config.is_half为True, 使用半精度") -# bool值的用法为 `python ./api.py -fp ...` -# 此时 full_precision==True, half_precision==False - -parser.add_argument("-hb", "--hubert_path", type=str, default=g_config.cnhubert_path, help="覆盖config.cnhubert_path") -parser.add_argument("-b", "--bert_path", type=str, default=g_config.bert_path, help="覆盖config.bert_path") - -args = parser.parse_args() - -sovits_path = args.sovits_path -gpt_path = args.gpt_path - - -class DefaultRefer: - def __init__(self, path, text, language): - self.path = args.default_refer_path - self.text = args.default_refer_text - self.language = args.default_refer_language - - 
def is_ready(self) -> bool: - return is_full(self.path, self.text, self.language) - - -default_refer = DefaultRefer(args.default_refer_path, args.default_refer_text, args.default_refer_language) - -device = args.device -port = args.port -host = args.bind_addr - -if sovits_path == "": - sovits_path = g_config.pretrained_sovits_path - print(f"[WARN] 未指定SoVITS模型路径, fallback后当前值: {sovits_path}") -if gpt_path == "": - gpt_path = g_config.pretrained_gpt_path - print(f"[WARN] 未指定GPT模型路径, fallback后当前值: {gpt_path}") - -# 指定默认参考音频, 调用方 未提供/未给全 参考音频参数时使用 -if default_refer.path == "" or default_refer.text == "" or default_refer.language == "": - default_refer.path, default_refer.text, default_refer.language = "", "", "" - print("[INFO] 未指定默认参考音频") -else: - print(f"[INFO] 默认参考音频路径: {default_refer.path}") - print(f"[INFO] 默认参考音频文本: {default_refer.text}") - print(f"[INFO] 默认参考音频语种: {default_refer.language}") - -is_half = g_config.is_half -if args.full_precision: - is_half = False -if args.half_precision: - is_half = True -if args.full_precision and args.half_precision: - is_half = g_config.is_half # 炒饭fallback - -print(f"[INFO] 半精: {is_half}") - -cnhubert_base_path = args.hubert_path -bert_path = args.bert_path - -cnhubert.cnhubert_base_path = cnhubert_base_path -tokenizer = AutoTokenizer.from_pretrained(bert_path) -bert_model = AutoModelForMaskedLM.from_pretrained(bert_path) -if is_half: - bert_model = bert_model.half().to(device) -else: - bert_model = bert_model.to(device) - - -def is_empty(*items): # 任意一项不为空返回False - for item in items: - if item is not None and item != "": - return False - return True - - -def is_full(*items): # 任意一项为空返回False - for item in items: - if item is None or item == "": - return False - return True - -def change_sovits_weights(sovits_path): - global vq_model, hps - dict_s2 = torch.load(sovits_path, map_location="cpu") - hps = dict_s2["config"] - hps = DictToAttrRecursive(hps) - hps.model.semantic_frame_rate = "25hz" - vq_model = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model - ) - if ("pretrained" not in sovits_path): - del vq_model.enc_q - if is_half == True: - vq_model = vq_model.half().to(device) - else: - vq_model = vq_model.to(device) - vq_model.eval() - print(vq_model.load_state_dict(dict_s2["weight"], strict=False)) - with open("./sweight.txt", "w", encoding="utf-8") as f: - f.write(sovits_path) -def change_gpt_weights(gpt_path): - global hz, max_sec, t2s_model, config - hz = 50 - dict_s1 = torch.load(gpt_path, map_location="cpu") - config = dict_s1["config"] - max_sec = config["data"]["max_sec"] - t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) - t2s_model.load_state_dict(dict_s1["weight"]) - if is_half == True: - t2s_model = t2s_model.half() - t2s_model = t2s_model.to(device) - t2s_model.eval() - total = sum([param.nelement() for param in t2s_model.parameters()]) - print("Number of parameter: %.2fM" % (total / 1e6)) - with open("./gweight.txt", "w", encoding="utf-8") as f: f.write(gpt_path) - - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) #####输入是long不用管精度问题,精度随bert_model - res = bert_model(**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()[1:-1] - assert len(word2ph) == len(text) - phone_level_feature = [] - for i in range(len(word2ph)): - repeat_feature = 
res[i].repeat(word2ph[i], 1) - phone_level_feature.append(repeat_feature) - phone_level_feature = torch.cat(phone_level_feature, dim=0) - # if(is_half==True):phone_level_feature=phone_level_feature.half() - return phone_level_feature.T - - -n_semantic = 1024 -dict_s2 = torch.load(sovits_path, map_location="cpu") -hps = dict_s2["config"] - - -class DictToAttrRecursive: - def __init__(self, input_dict): - for key, value in input_dict.items(): - if isinstance(value, dict): - # 如果值是字典,递归调用构造函数 - setattr(self, key, DictToAttrRecursive(value)) - else: - setattr(self, key, value) - - -hps = DictToAttrRecursive(hps) -hps.model.semantic_frame_rate = "25hz" -dict_s1 = torch.load(gpt_path, map_location="cpu") -config = dict_s1["config"] -ssl_model = cnhubert.get_model() -if is_half: - ssl_model = ssl_model.half().to(device) -else: - ssl_model = ssl_model.to(device) - -vq_model = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) -if is_half: - vq_model = vq_model.half().to(device) -else: - vq_model = vq_model.to(device) -vq_model.eval() -print(vq_model.load_state_dict(dict_s2["weight"], strict=False)) -hz = 50 -max_sec = config['data']['max_sec'] -t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) -t2s_model.load_state_dict(dict_s1["weight"]) -if is_half: - t2s_model = t2s_model.half() -t2s_model = t2s_model.to(device) -t2s_model.eval() -total = sum([param.nelement() for param in t2s_model.parameters()]) -print("Number of parameter: %.2fM" % (total / 1e6)) - - -def get_spepc(hps, filename): - audio = load_audio(filename, int(hps.data.sampling_rate)) - audio = torch.FloatTensor(audio) - audio_norm = audio - audio_norm = audio_norm.unsqueeze(0) - spec = spectrogram_torch(audio_norm, hps.data.filter_length, hps.data.sampling_rate, hps.data.hop_length, - hps.data.win_length, center=False) - return spec - - -dict_language = { - "中文": "zh", - "英文": "en", - "日文": "ja", - "ZH": "zh", - "EN": "en", - "JA": "ja", - "zh": "zh", - "en": "en", - "ja": "ja" -} - - -def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language): - t0 = ttime() - prompt_text = prompt_text.strip("\n") - prompt_language, text = prompt_language, text.strip("\n") - zero_wav = np.zeros(int(hps.data.sampling_rate * 0.3), dtype=np.float16 if is_half == True else np.float32) - with torch.no_grad(): - wav16k, sr = librosa.load(ref_wav_path, sr=16000) - wav16k = torch.from_numpy(wav16k) - zero_wav_torch = torch.from_numpy(zero_wav) - if (is_half == True): - wav16k = wav16k.half().to(device) - zero_wav_torch = zero_wav_torch.half().to(device) - else: - wav16k = wav16k.to(device) - zero_wav_torch = zero_wav_torch.to(device) - wav16k = torch.cat([wav16k, zero_wav_torch]) - ssl_content = ssl_model.model(wav16k.unsqueeze(0))["last_hidden_state"].transpose(1, 2) # .float() - codes = vq_model.extract_latent(ssl_content) - prompt_semantic = codes[0, 0] - t1 = ttime() - prompt_language = dict_language[prompt_language] - text_language = dict_language[text_language] - phones1, word2ph1, norm_text1 = clean_text(prompt_text, prompt_language) - phones1 = cleaned_text_to_sequence(phones1) - texts = text.split("\n") - audio_opt = [] - - for text in texts: - phones2, word2ph2, norm_text2 = clean_text(text, text_language) - phones2 = cleaned_text_to_sequence(phones2) - if (prompt_language == "zh"): - bert1 = get_bert_feature(norm_text1, word2ph1).to(device) - else: - bert1 = torch.zeros((1024, len(phones1)), 
dtype=torch.float16 if is_half == True else torch.float32).to( - device) - if (text_language == "zh"): - bert2 = get_bert_feature(norm_text2, word2ph2).to(device) - else: - bert2 = torch.zeros((1024, len(phones2))).to(bert1) - bert = torch.cat([bert1, bert2], 1) - - all_phoneme_ids = torch.LongTensor(phones1 + phones2).to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - all_phoneme_len = torch.tensor([all_phoneme_ids.shape[-1]]).to(device) - prompt = prompt_semantic.unsqueeze(0).to(device) - t2 = ttime() - with torch.no_grad(): - # pred_semantic = t2s_model.model.infer( - pred_semantic, idx = t2s_model.model.infer_panel( - all_phoneme_ids, - all_phoneme_len, - prompt, - bert, - # prompt_phone_len=ph_offset, - top_k=config['inference']['top_k'], - early_stop_num=hz * max_sec) - t3 = ttime() - # print(pred_semantic.shape,idx) - pred_semantic = pred_semantic[:, -idx:].unsqueeze(0) # .unsqueeze(0)#mq要多unsqueeze一次 - refer = get_spepc(hps, ref_wav_path) # .to(device) - if (is_half == True): - refer = refer.half().to(device) - else: - refer = refer.to(device) - # audio = vq_model.decode(pred_semantic, all_phoneme_ids, refer).detach().cpu().numpy()[0, 0] - audio = \ - vq_model.decode(pred_semantic, torch.LongTensor(phones2).to(device).unsqueeze(0), - refer).detach().cpu().numpy()[ - 0, 0] ###试试重建不带上prompt部分 - audio_opt.append(audio) - audio_opt.append(zero_wav) - t4 = ttime() - print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - yield hps.data.sampling_rate, (np.concatenate(audio_opt, 0) * 32768).astype(np.int16) - - -def handle_control(command): - if command == "restart": - os.execl(g_config.python_exec, g_config.python_exec, *sys.argv) - elif command == "exit": - os.kill(os.getpid(), signal.SIGTERM) - exit(0) - - -def handle_change(path, text, language): - if is_empty(path, text, language): - return JSONResponse({"code": 400, "message": '缺少任意一项以下参数: "path", "text", "language"'}, status_code=400) - - if path != "" or path is not None: - default_refer.path = path - if text != "" or text is not None: - default_refer.text = text - if language != "" or language is not None: - default_refer.language = language - - print(f"[INFO] 当前默认参考音频路径: {default_refer.path}") - print(f"[INFO] 当前默认参考音频文本: {default_refer.text}") - print(f"[INFO] 当前默认参考音频语种: {default_refer.language}") - print(f"[INFO] is_ready: {default_refer.is_ready()}") - - return JSONResponse({"code": 0, "message": "Success"}, status_code=200) - - -def handle(refer_wav_path, prompt_text, prompt_language, text, text_language): - if ( - refer_wav_path == "" or refer_wav_path is None - or prompt_text == "" or prompt_text is None - or prompt_language == "" or prompt_language is None - ): - refer_wav_path, prompt_text, prompt_language = ( - default_refer.path, - default_refer.text, - default_refer.language, - ) - if not default_refer.is_ready(): - return JSONResponse({"code": 400, "message": "未指定参考音频且接口无预设"}, status_code=400) - - with torch.no_grad(): - gen = get_tts_wav( - refer_wav_path, prompt_text, prompt_language, text, text_language - ) - sampling_rate, audio_data = next(gen) - - wav = BytesIO() - sf.write(wav, audio_data, sampling_rate, format="wav") - wav.seek(0) - - torch.cuda.empty_cache() - return StreamingResponse(wav, media_type="audio/wav") - - -app = FastAPI() - -#clark新增-----2024-02-21 -#可在启动后动态修改模型,以此满足同一个api不同的朗读者请求 -@app.post("/set_model") -async def set_model(request: Request): - json_post_raw = await request.json() - global gpt_path - gpt_path=json_post_raw.get("gpt_model_path") - global 
sovits_path
-    sovits_path=json_post_raw.get("sovits_model_path")
-    print("gptpath"+gpt_path+";vitspath"+sovits_path)
-    change_sovits_weights(sovits_path)
-    change_gpt_weights(gpt_path)
-    return "ok"
-# 新增-----end------
-
-@app.post("/control")
-async def control(request: Request):
-    json_post_raw = await request.json()
-    return handle_control(json_post_raw.get("command"))
-
-
-@app.get("/control")
-async def control(command: str = None):
-    return handle_control(command)
-
-
-@app.post("/change_refer")
-async def change_refer(request: Request):
-    json_post_raw = await request.json()
-    return handle_change(
-        json_post_raw.get("refer_wav_path"),
-        json_post_raw.get("prompt_text"),
-        json_post_raw.get("prompt_language")
-    )
-
-
-@app.get("/change_refer")
-async def change_refer(
-        refer_wav_path: str = None,
-        prompt_text: str = None,
-        prompt_language: str = None
-):
-    return handle_change(refer_wav_path, prompt_text, prompt_language)
-
-
-@app.post("/")
-async def tts_endpoint(request: Request):
-    json_post_raw = await request.json()
-    return handle(
-        json_post_raw.get("refer_wav_path"),
-        json_post_raw.get("prompt_text"),
-        json_post_raw.get("prompt_language"),
-        json_post_raw.get("text"),
-        json_post_raw.get("text_language"),
-    )
-
-
-@app.get("/")
-async def tts_endpoint(
-        refer_wav_path: str = None,
-        prompt_text: str = None,
-        prompt_language: str = None,
-        text: str = None,
-        text_language: str = None,
-):
-    return handle(refer_wav_path, prompt_text, prompt_language, text, text_language)
-
-
-if __name__ == "__main__":
-    uvicorn.run(app, host=host, port=port, workers=1)
diff --git a/api_doc.md b/api_doc.md
new file mode 100644
index 00000000..0d23cd54
--- /dev/null
+++ b/api_doc.md
@@ -0,0 +1,102 @@
+## Overview
+
+This document describes how to use our Text-to-Speech API via GET and POST requests. The API converts text into the voice of a specified character and supports different languages and emotional expressions.
+
+## Character and Emotion List
+
+To obtain the supported characters and their corresponding emotions, please visit the following URL:
+
+- URL: `http://127.0.0.1:5000/character_list`
+- Returns: A JSON list of characters and their corresponding emotions
+- Method: `GET`
+
+```
+{
+    "Hanabi": [
+        "default",
+        "Normal",
+        "Yandere"
+    ],
+    "Hutao": [
+        "default"
+    ]
+}
+```
+
+## Regarding Aliases
+
+From version 2.2.4, an alias system was added. Detailed allowed aliases can be found in `Inference/params_config.json`.
+
+## Text-to-Speech
+
+- URL: `http://127.0.0.1:5000/tts`
+- Returns: Audio on success. Error message on failure.
+- Method: `GET`/`POST`
+
+### GET Method
+
+#### Format
+
+```
+http://127.0.0.1:5000/tts?character={{characterName}}&text={{text}}
+```
+
+- Parameter explanation:
+  - `character`: The name of the character folder; pay attention to case, full/half-width characters, and language (Chinese/English).
+  - `text`: The text to be converted; URL encoding is recommended.
+  - Optional parameters include `text_language`, `format`, `top_k`, `top_p`, `batch_size`, `speed`, `temperature`, `emotion`, `save_temp`, and `stream`; detailed explanations are provided in the POST section below.
+- From version 2.2.4, an alias system was added; detailed allowed aliases can be found in `Inference/params_config.json`.
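+
+#### Example (GET)
+
+The snippet below is a minimal sketch (not part of the shipped code) that queries `/character_list` and then calls the GET endpoint with Python's `requests` library. The character name, text, and output filename are illustrative placeholders, and the base URL assumes the backend's default address and port.
+
+```python
+import requests
+
+BASE_URL = "http://127.0.0.1:5000"  # default backend address; adjust if yours differs
+
+# List the available characters and their emotions
+characters = requests.get(f"{BASE_URL}/character_list").json()
+print(characters)
+
+# Request speech for one sentence; `requests` URL-encodes the query parameters for us
+params = {"character": "Hutao", "text": "Hello, this is a test.", "text_language": "en"}
+resp = requests.get(f"{BASE_URL}/tts", params=params)
+resp.raise_for_status()
+
+# On success the endpoint returns audio (WAV by default)
+with open("output.wav", "wb") as f:
+    f.write(resp.content)
+```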
+
+### POST Method
+
+#### JSON Package Format
+
+##### All Parameters
+
+```
+{
+    "method": "POST",
+    "body": {
+        "character": "${chaName}",
+        "emotion": "${Emotion}",
+        "text": "${speakText}",
+        "text_language": "${textLanguage}",
+        "batch_size": ${batch_size},
+        "speed": ${speed},
+        "top_k": ${topK},
+        "top_p": ${topP},
+        "temperature": ${temperature},
+        "stream": "${stream}",
+        "format": "${Format}",
+        "save_temp": "${saveTemp}"
+    }
+}
+```
+
+You can omit one or more items. From version 2.2.4, an alias system was introduced; detailed allowed aliases can be found in `Inference/params_config.json`.
+
+##### Minimal Data:
+
+```
+{
+    "method": "POST",
+    "body": {
+        "text": "${speakText}"
+    }
+}
+```
+
+##### Parameter Explanation
+
+- **text**: The text to be converted; URL encoding is recommended.
+- **character**: Character folder name; pay attention to case, full/half-width characters, and language.
+- **emotion**: Character emotion; it must be an emotion the character actually supports, otherwise the default emotion is used.
+- **text_language**: Text language (auto / zh / en / ja); the default is mixed multilingual.
+- **top_k**, **top_p**, **temperature**: GPT model parameters; no need to modify them if you are unfamiliar with them.
+
+- **batch_size**: Batch size, i.e. how many items are processed at once; an integer, default 1. It can be increased for faster generation on a powerful machine.
+- **speed**: Speech speed; default is 1.0.
+- **save_temp**: Whether to save temporary files; when true, the backend saves the generated audio, and subsequent identical requests return that cached data directly. Default is false.
+- **stream**: Whether to stream; when true, audio is returned sentence by sentence. Default is false.
+- **format**: Output format; default is WAV. Allowed values: MP3 / WAV / OGG.
+
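+#### Example (POST)
+
+A minimal sketch of a POST call with Python's `requests` library, assuming the backend runs at the default address and accepts the fields of `body` above directly as the request's JSON payload. The character name, emotion, text, and output filename are illustrative placeholders.
+
+```python
+import requests
+
+payload = {
+    "character": "Hutao",      # must match a character folder name under trained/
+    "emotion": "default",      # unsupported emotions fall back to the default
+    "text": "你好,这是一段测试语音。",
+    "text_language": "auto",   # auto / zh / en / ja
+    "batch_size": 1,
+    "speed": 1.0,
+}
+
+resp = requests.post("http://127.0.0.1:5000/tts", json=payload)
+resp.raise_for_status()
+
+# The response body is the generated audio (WAV by default)
+with open("output.wav", "wb") as f:
+    f.write(resp.content)
+```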