Mirror of https://github.com/RVC-Boss/GPT-SoVITS.git, synced 2025-04-05 12:38:35 +08:00

Commit 0d88cff99e ("optimize the structure"), parent 939971afe3

README.md (117 changed lines)

@@ -17,14 +17,6 @@ A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.<br><br>

---

> Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

For users in China region, you can use AutoDL Cloud Docker to experience the full functionality online: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

@@ -35,19 +27,29 @@ For users in China region, you can use AutoDL Cloud Docker to experience the ful

4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

## Environment Preparation

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

If you are a Windows user (tested with win>=10) you can install directly via the prezip. Just download the [prezip](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), unzip it and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Installation

For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU)
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

_Note: numba==0.56.4 require py<3.11_

_Note: numba==0.56.4 requires py<3.11_
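
To confirm which interpreter and PyTorch build are actually active before training, here is a minimal sanity-check sketch (assuming `python` on PATH points at the environment you intend to use; `torch.version.cuda` prints `None` for CPU-only builds):

```bash
# Print Python version, PyTorch version, and the CUDA version PyTorch was built against.
python -c "import sys, torch; print(sys.version.split()[0], torch.__version__, torch.version.cuda)"
```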

### Quick Install with Conda

### Windows

If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.
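
If you prefer the command line, a minimal sketch of the same steps (assuming `curl` and the 7-Zip command-line tool `7z` are available; otherwise just download and extract with the GUI):

```bash
# Download and unpack the pre-packaged Windows distribution.
curl -L -o GPT-SoVITS-beta.7z "https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true"
7z x GPT-SoVITS-beta.7z
```

Then double-click _go-webui.bat_ inside the extracted folder.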

### Linux

```bash
conda create -n GPTSoVits python=3.9
@@ -55,15 +57,37 @@ conda activate GPTSoVits
bash install.sh
```

### macOS

Only Macs that meet the following conditions can train models:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

**All Macs can do inference with CPU, which has been demonstrated to outperform GPU inference.**

First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install by using the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```

_Note: Training models will only work if you've installed PyTorch Nightly._
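
To check whether that nightly build can actually see the Apple GPU, a minimal sketch (`torch.backends.mps.is_available()` returning `True` is what Apple-silicon training relies on; `False` means you are limited to CPU inference):

```bash
# Should print True on Apple silicon with macOS 12.3+ and a Metal-enabled PyTorch build.
python -c "import torch; print(torch.backends.mps.is_available())"
```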

### Install Manually

#### Pip Packages

#### Install Dependencies

```bash
pip install -r requirements.txt
```

#### FFmpeg

#### Install FFmpeg

##### Conda Users

@@ -79,57 +103,10 @@ sudo apt install libsox-dev

conda install -c conda-forge 'ffmpeg<7'
```

##### MacOS Users

```bash
brew install ffmpeg
```

##### Windows Users

Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
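
If you prefer to fetch these from a shell, a minimal sketch (assuming `curl` is available and that the usual Hugging Face `resolve/main/...` download URLs correspond to the blob pages linked above):

```bash
# Run from the GPT-SoVITS root; places the two binaries next to the WebUI scripts.
curl -L -o ffmpeg.exe https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffmpeg.exe
curl -L -o ffprobe.exe https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffprobe.exe
```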

### Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

Users in China region can download these two models by entering the links below and clicking "Download a copy"

- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)

- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)

For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.

### For Mac Users

If you are a Mac user, make sure you meet the following conditions for training and inferencing with GPU:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

_Other Macs can do inference with CPU only._

Then install by using the following commands:

#### Create Environment

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
```

#### Install Requirements

```bash
pip install -r requirements.txt
pip uninstall torch torchaudio
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

### Using Docker

#### docker-compose.yaml configuration

@@ -157,6 +134,20 @@ As above, modify the corresponding parameters based on your actual situation, th

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.
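
One possible way to do this from the command line, assuming `git` and `git-lfs` are installed and that the Hugging Face repository's top-level contents map directly onto `GPT_SoVITS/pretrained_models` (check the repo layout before copying):

```bash
# Clone the model repo with LFS and copy its contents into the expected folder.
git lfs install
git clone https://huggingface.co/lj1995/GPT-SoVITS GPT-SoVITS-pretrained
cp -r GPT-SoVITS-pretrained/* GPT_SoVITS/pretrained_models/
```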

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

Users in China region can download these two models by entering the links below and clicking "Download a copy"

- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models)

- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)

For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.
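
For illustration, the three models can also be pulled with plain `git`, assuming ModelScope's `https://www.modelscope.cn/<namespace>/<model>.git` clone endpoints work for these repositories (otherwise download them from the linked pages and place the folders manually):

```bash
# Place each model repository directly under tools/damo_asr/models.
cd tools/damo_asr/models
git clone https://www.modelscope.cn/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.git
git clone https://www.modelscope.cn/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch.git
git clone https://www.modelscope.cn/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch.git
cd -
```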

## Dataset Format

The TTS annotation .list file format:
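
For illustration, and assuming the format matches the project's own documentation, each line of the file is pipe-separated as `vocal_path|speaker_name|language|text`.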

@@ -229,8 +220,6 @@ python ./tools/damo_asr/WhisperASR.py -i <input> -o <output> -f <file_name.list>

A custom list save path is enabled

## Credits

Special thanks to the following projects and contributors:

- [ar-vits](https://github.com/innnky/ar-vits)

**Chinese README:**

@@ -17,12 +17,6 @@

---

> Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

Users in the China region can use the AutoDL cloud Docker image to try it online: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

@@ -33,46 +27,29 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-

4. **WebUI Tools:** Integrated tools include vocal/accompaniment separation, automatic training-set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

## Environment Preparation

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

If you are a Windows user (tested with win>=10), you can install directly via the pre-packaged file: download the [pre-packaged file](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), extract it, and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Unseen speakers few-shot fine-tuning demo:

### Tested Python and PyTorch Versions

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Installation

Users in the China region can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use the AutoDL cloud Docker image to try it online.

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU)
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

_Note: numba==0.56.4 requires python<3.11_

### For Mac Users

### Windows

If you are a Mac user, make sure you meet the following conditions for training and inference with GPU:

If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged file](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), extract it, and double-click go-webui.bat to start GPT-SoVITS-WebUI.

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

_Other Macs can do inference with CPU only._

Then install by using the following commands:

#### Create Environment

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
```

#### Install Dependencies

```bash
pip install -r requirements.txt
pip uninstall torch torchaudio
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

### Quick Install with Conda

### Linux

```bash
conda create -n GPTSoVits python=3.9
@@ -80,15 +57,37 @@ conda activate GPTSoVits
bash install.sh
```

### Manual Installation of Packages

### macOS

#### Pip Packages

Only Macs that meet the following conditions can train models:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

**All Macs can do inference with CPU, and testing has shown it outperforms GPU inference.**

First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install with the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```

_Note: Training models will only work if PyTorch Nightly is installed._

### Manual Installation

#### Install Dependencies

```bash
pip install -r requirements.txt
```

#### FFmpeg

#### Install FFmpeg

##### Conda Users

@@ -104,12 +103,6 @@ sudo apt install libsox-dev

conda install -c conda-forge 'ffmpeg<7'
```

##### MacOS Users

```bash
brew install ffmpeg
```

##### Windows Users

Download [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) and place them in the GPT-SoVITS root directory.

@@ -141,11 +134,11 @@ docker compose -f "docker-compose.yaml" up -d

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

### Pretrained Models

## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS\pretrained_models`.

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

Users in the China region can download these two models via the links below by clicking "Download a copy":

@@ -153,7 +146,7 @@ docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-Docker

- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)

For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`.

## Dataset Format

**Japanese README:**

@@ -17,10 +17,6 @@

---

> Check out the [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)!

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

@@ -31,48 +27,27 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-

4. **WebUI Tools:** Integrated tools include vocal/accompaniment separation, automatic training-set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

## Environment Preparation

**Check out the [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)!**

If you are a Windows user (tested on win>=10), you can install directly via the prezip: download the [prezip](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), extract it, and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Unseen speakers few-shot fine-tuning demo:

### Python and PyTorch Versions

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Installation

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU)
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

_Note: numba==0.56.4 requires py<3.11_

### For Mac Users

### Windows

If you are a Mac user, make sure you meet the following conditions for training and inference with GPU:

If you are a Windows user (tested on win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click _go-webui.bat_ to start GPT-SoVITS-WebUI.

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

_Other Macs can do inference with CPU only._

Then install using the following commands:

#### Create Environment

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
```

#### Pip Packages

```bash
pip install -r requirements.txt
pip uninstall torch torchaudio
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

_Note: When preprocessing with UVR5, it is recommended to [download the original project's GUI](https://github.com/Anjok07/ultimatevocalremovergui) and select "GPU Conversion". In addition, memory leaks may occur, especially during inference; restarting the inference WebUI frees the memory._

### Quick Install with Conda

### Linux

```bash
conda create -n GPTSoVits python=3.9
@@ -80,15 +55,37 @@ conda activate GPTSoVits
bash install.sh
```

### macOS

Only Macs that meet the following conditions can train models:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

**All Macs can do inference with CPU, which has been demonstrated to outperform GPU inference.**

First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install using the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```

_Note: Training models will only work if PyTorch Nightly is installed._

### Manual Installation

#### Pip Packages

#### Install Dependencies

```bash
pip install -r requirements.txt
```

#### FFmpeg

#### Install FFmpeg

##### Conda Users

@@ -104,12 +101,6 @@ sudo apt install libsox-dev

conda install -c conda-forge 'ffmpeg<7'
```

##### MacOS Users

```bash
brew install ffmpeg
```

##### Windows Users

Download [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) and place them in the GPT-SoVITS root directory.

@@ -141,7 +132,7 @@ docker compose -f "docker-compose.yaml" up -d

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

### Pretrained Models

## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

**Korean README:**

@@ -17,12 +17,6 @@

---

> Check out the [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)!

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

Users in the China region can use the AutoDL cloud image to try it online: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

@@ -33,46 +27,27 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-

4. **WebUI Tools:** Integrated tools include vocal/accompaniment separation, automatic training-set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

## Environment Preparation

**Check out the [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)!**

If you are a Windows user (tested on win>=10), you can install via the pre-built package: after downloading, extract it and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Unseen speakers few-shot fine-tuning demo:

### Tested Python and PyTorch Versions

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

## Installation

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU)
- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

_Note: numba==0.56.4 requires python<3.11._

### For MacOS Users

### Windows

If you are a MacOS user, you must meet the following conditions for training and inference with GPU:

If you are a Windows user (tested on win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click _go-webui.bat_ to start GPT-SoVITS-WebUI.

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

_Other Macs can do inference with CPU only._

Then install using the following commands:

#### Create Environment

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
```

#### Install Dependencies

```bash
pip install -r requirements.txt
pip uninstall torch torchaudio
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

### Quick Install with Conda

### Linux

```bash
conda create -n GPTSoVits python=3.9
@@ -80,15 +55,37 @@ conda activate GPTSoVits
bash install.sh
```

### macOS

Only Macs that meet the following conditions can train models:

- Mac computers with Apple silicon
- macOS 12.3 or later
- Xcode command-line tools installed by running `xcode-select --install`

**All Macs can do inference with CPU, which has been shown to outperform GPU inference.**

First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install using the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt
```

_Note: Models can only be trained if PyTorch Nightly is installed._

### Manual Installation

#### Pip Packages

#### Install Dependencies

```bash
pip install -r requirements.txt
```

#### FFmpeg

#### Install FFmpeg

##### Conda Users

@@ -104,12 +101,6 @@ sudo apt install libsox-dev

conda install -c conda-forge 'ffmpeg<7'
```

##### MacOS Users

```bash
brew install ffmpeg
```

##### Windows Users

Place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root directory.

@@ -144,7 +135,7 @@ docker compose -f "docker-compose.yaml" up -d

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

### Pretrained Models

## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS\pretrained_models`.