diff --git a/README.md b/README.md
index c6b774e..9b3d583 100644
--- a/README.md
+++ b/README.md
@@ -106,7 +106,7 @@ conda install -c conda-forge 'ffmpeg<7'
Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.
-##### Mac Users
+##### macOS Users
```bash
brew install ffmpeg
```
@@ -156,7 +156,7 @@ For English or Japanese ASR (additionally), download models from [Faster Whisper
Users in the China region can download this model by entering the links below
-- [Faster Whisper Large V3](https://www.icloud.com/iclouddrive/0c4pQxFs7oWyVU1iMTq2DbmLA#faster-whisper-large-v3) (Click "Download a copy", log out if you encounter errors while downloading.)
+- [Faster Whisper Large V3](https://www.icloud.com/iclouddrive/00bUEp9_mcjMq_dhHu_vrAFDQ#faster-whisper-large-v3) (Click "Download a copy", log out if you encounter errors while downloading.)
- [Faster Whisper Large V3](https://hf-mirror.com/Systran/faster-whisper-large-v3) (HuggingFace mirror site)
@@ -227,7 +227,7 @@ ASR processing is performed through Faster_Whisper(ASR marking except Chinese)
(No progress bars, GPU performance may cause time delays)
```
-python ./tools/asr/fasterwhisper_asr.py -i -o -l
+python ./tools/asr/fasterwhisper_asr.py -i -o -l -p
```
A custom list save path is enabled
diff --git a/docs/cn/README.md b/docs/cn/README.md
index e46dce3..0074cdc 100644
--- a/docs/cn/README.md
+++ b/docs/cn/README.md
@@ -106,7 +106,7 @@ conda install -c conda-forge 'ffmpeg<7'
下载并将 [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) 和 [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) 放置在 GPT-SoVITS 根目录下。
-##### Mac 用户
+##### macOS 用户
```bash
brew install ffmpeg
```
@@ -155,7 +155,7 @@ docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-Docker
对于英语与日语自动语音识别(附加),从 [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) 下载模型,并将它们放置在 `tools/asr/models` 中。 此外,[其他模型](https://huggingface.co/Systran)可能具有类似效果,但占用更小的磁盘空间。
中国地区用户可以通过以下链接下载:
-- [Faster Whisper Large V3](https://www.icloud.com/iclouddrive/0c4pQxFs7oWyVU1iMTq2DbmLA#faster-whisper-large-v3)(点击“下载副本”,如果下载时遇到错误,请退出登录)
+- [Faster Whisper Large V3](https://www.icloud.com/iclouddrive/00bUEp9_mcjMq_dhHu_vrAFDQ#faster-whisper-large-v3)(点击“下载副本”,如果下载时遇到错误,请退出登录)
- [Faster Whisper Large V3](https://hf-mirror.com/Systran/faster-whisper-large-v3)(Hugging Face镜像站)
@@ -185,7 +185,7 @@ D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
- [ ] **高优先级:**
- [x] 日语和英语的本地化。
- - [ ] 用户指南。
+ - [x] 用户指南。
- [x] 日语和英语数据集微调训练。
- [ ] **功能:**
@@ -226,9 +226,9 @@ python tools/asr/funasr_asr.py -i -o
通过Faster_Whisper进行ASR处理(除中文之外的ASR标记)
(没有进度条,GPU性能可能会导致时间延迟)
-````
-python ./tools/asr/fasterwhisper_asr.py -i -o -l
-````
+```
+python ./tools/asr/fasterwhisper_asr.py -i -o -l -p
+```
启用自定义列表保存路径
## 致谢
diff --git a/docs/ja/README.md b/docs/ja/README.md
index 92ff561..e3a9f00 100644
--- a/docs/ja/README.md
+++ b/docs/ja/README.md
@@ -102,7 +102,7 @@ conda install -c conda-forge 'ffmpeg<7'
[ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) と [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) をダウンロードし、GPT-SoVITS のルートディレクトリに置きます。
-##### Mac ユーザー
+##### macOS ユーザー
```bash
brew install ffmpeg
```
@@ -209,7 +209,7 @@ ASR処理はFaster_Whisperを通じて実行されます(中国語を除くASR
(進行状況バーは表示されません。GPU のパフォーマンスにより時間遅延が発生する可能性があります)
```
-python ./tools/asr/fasterwhisper_asr.py -i -o -l
+python ./tools/asr/fasterwhisper_asr.py -i -o -l -p
```
カスタムリストの保存パスが有効になっています
diff --git a/docs/ko/README.md b/docs/ko/README.md
index 80b6848..4deb2c4 100644
--- a/docs/ko/README.md
+++ b/docs/ko/README.md
@@ -102,7 +102,7 @@ conda install -c conda-forge 'ffmpeg<7'
[ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe)와 [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe)를 GPT-SoVITS root 디렉토리에 넣습니다.
-##### Mac 사용자
+##### macOS 사용자
```bash
brew install ffmpeg
```
@@ -213,7 +213,7 @@ ASR 처리는 Faster_Whisper(중국어를 제외한 ASR 마킹)를 통해 수행
(진행률 표시줄 없음, GPU 성능으로 인해 시간 지연이 발생할 수 있음)
```
-python ./tools/asr/fasterwhisper_asr.py -i -o -l
+python ./tools/asr/fasterwhisper_asr.py -i -o -l -p
```
사용자 정의 목록 저장 경로가 활성화되었습니다.
diff --git a/docs/tr/README.md b/docs/tr/README.md
index d0936f1..5b9a103 100644
--- a/docs/tr/README.md
+++ b/docs/tr/README.md
@@ -102,7 +102,7 @@ conda install -c conda-forge 'ffmpeg<7'
[ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) ve [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) dosyalarını indirin ve GPT-SoVITS kök dizinine yerleştirin.
-##### Mac Kullanıcıları
+##### macOS Kullanıcıları
```bash
brew install ffmpeg
```
diff --git a/tools/asr/config.py b/tools/asr/config.py
index 8fe6838..d7e184c 100644
--- a/tools/asr/config.py
+++ b/tools/asr/config.py
@@ -21,11 +21,13 @@ asr_dict = {
'lang': ['zh'],
'size': ['large'],
'path': 'funasr_asr.py',
+ 'precision': ['float32']
},
"Faster Whisper (多语种)": {
'lang': ['auto', 'zh', 'en', 'ja'],
'size': check_fw_local_models(),
- 'path': 'fasterwhisper_asr.py'
- }
+ 'path': 'fasterwhisper_asr.py',
+ 'precision': ['float32', 'float16', 'int8']
+ },
}
diff --git a/tools/asr/fasterwhisper_asr.py b/tools/asr/fasterwhisper_asr.py
index e9fc6a4..da8eadf 100644
--- a/tools/asr/fasterwhisper_asr.py
+++ b/tools/asr/fasterwhisper_asr.py
@@ -101,8 +101,8 @@ if __name__ == '__main__':
parser.add_argument("-l", "--language", type=str, default='ja',
choices=language_code_list,
help="Language of the audio files.")
- parser.add_argument("-p", "--precision", type=str, default='float16', choices=['float16','float32'],
- help="fp16 or fp32")
+ parser.add_argument("-p", "--precision", type=str, default='float16', choices=['float16','float32','int8'],
+ help="fp16, int8 or fp32")
cmd = parser.parse_args()
output_file_path = execute_asr(
diff --git a/tools/asr/funasr_asr.py b/tools/asr/funasr_asr.py
index 831da6c..ec78678 100644
--- a/tools/asr/funasr_asr.py
+++ b/tools/asr/funasr_asr.py
@@ -4,7 +4,8 @@ import argparse
import os
import traceback
from tqdm import tqdm
-
+# from funasr.utils import version_checker
+# version_checker.check_for_update = lambda: None
from funasr import AutoModel
path_asr = 'tools/asr/models/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch'
@@ -14,6 +15,7 @@ path_asr = path_asr if os.path.exists(path_asr) else "iic/speech_paraformer-l
path_vad = path_vad if os.path.exists(path_vad) else "iic/speech_fsmn_vad_zh-cn-16k-common-pytorch"
path_punc = path_punc if os.path.exists(path_punc) else "iic/punc_ct-transformer_zh-cn-common-vocab272727-pytorch"
+
model = AutoModel(
model = path_asr,
model_revision = "v2.0.4",
diff --git a/tools/i18n/locale/en_US.json b/tools/i18n/locale/en_US.json
index 6936fb9..3d06e2d 100644
--- a/tools/i18n/locale/en_US.json
+++ b/tools/i18n/locale/en_US.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Reverberation and delay removal model(by FoxJoy):",
"ASR 模型": "ASR model",
"ASR 模型尺寸": "ASR model size",
+ "数据类型精度": "Computing precision",
"ASR 语言设置": "ASR language",
"ASR进程输出信息": "ASR output log",
"GPT模型列表": "GPT weight list",
diff --git a/tools/i18n/locale/es_ES.json b/tools/i18n/locale/es_ES.json
index dcf17f8..9d4e44b 100644
--- a/tools/i18n/locale/es_ES.json
+++ b/tools/i18n/locale/es_ES.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Modelos de eliminación de reverberación y retardo (por FoxJoy)",
"ASR 模型": "Modelo ASR",
"ASR 模型尺寸": "Tamaño del modelo ASR",
+ "数据类型精度": "precisión del tipo de datos",
"ASR 语言设置": "Configuración del idioma ASR",
"ASR进程输出信息": "Información de salida del proceso ASR",
"GPT模型列表": "Lista de modelos GPT",
diff --git a/tools/i18n/locale/fr_FR.json b/tools/i18n/locale/fr_FR.json
index 194ff0e..516593a 100644
--- a/tools/i18n/locale/fr_FR.json
+++ b/tools/i18n/locale/fr_FR.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Modèle de suppression de réverbération et de retard (par FoxJoy) :",
"ASR 模型": "Modèle ASR",
"ASR 模型尺寸": "Taille du modèle ASR",
+ "数据类型精度": "précision du type de données",
"ASR 语言设置": "Paramètres de langue ASR",
"ASR进程输出信息": "Informations de processus ASR",
"GPT模型列表": "Liste des modèles GPT",
diff --git a/tools/i18n/locale/it_IT.json b/tools/i18n/locale/it_IT.json
index 71edbe9..a1a7f88 100644
--- a/tools/i18n/locale/it_IT.json
+++ b/tools/i18n/locale/it_IT.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Modello per rimuovere la riverberazione e il ritardo (by FoxJoy):",
"ASR 模型": "Modello ASR",
"ASR 模型尺寸": "Dimensioni del modello ASR",
+ "数据类型精度": "precisione del tipo di dati",
"ASR 语言设置": "Impostazioni linguistiche ASR",
"ASR进程输出信息": "Informazioni sull'output del processo ASR",
"GPT模型列表": "Elenco dei modelli GPT",
diff --git a/tools/i18n/locale/ja_JP.json b/tools/i18n/locale/ja_JP.json
index b8ddb1a..0f467ff 100644
--- a/tools/i18n/locale/ja_JP.json
+++ b/tools/i18n/locale/ja_JP.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3、リバーブ除去と遅延除去モデル(by FoxJoy):",
"ASR 模型": "ASR モデル",
"ASR 模型尺寸": "ASRモデルサイズ",
+ "数据类型精度": "データ型の精度",
"ASR 语言设置": "ASR 言語設定",
"ASR进程输出信息": "ASRプロセスの出力情報",
"GPT模型列表": "GPTモデルリスト",
diff --git a/tools/i18n/locale/ko_KR.json b/tools/i18n/locale/ko_KR.json
index 22ae5ce..53a9d10 100644
--- a/tools/i18n/locale/ko_KR.json
+++ b/tools/i18n/locale/ko_KR.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. 잔향 제거 및 지연 제거 모델 (by FoxJoy):",
"ASR 模型": "ASR 모델",
"ASR 模型尺寸": "ASR 모델 크기",
+ "数据类型精度": "데이터 유형 정밀도",
"ASR 语言设置": "ASR 언어 설정",
"ASR进程输出信息": "ASR 프로세스 출력 정보",
"GPT模型列表": "GPT 모델 목록",
diff --git a/tools/i18n/locale/pt_BR.json b/tools/i18n/locale/pt_BR.json
index 0454364..95e3e33 100644
--- a/tools/i18n/locale/pt_BR.json
+++ b/tools/i18n/locale/pt_BR.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Modelo de remoção de reverberação e atraso (por FoxJoy):",
"ASR 模型": "Modelo ASR",
"ASR 模型尺寸": "Tamanho do modelo ASR",
+ "数据类型精度": "precisão do tipo de dado",
"ASR 语言设置": "Configurações de idioma do ASR",
"ASR进程输出信息": "Informações de saída do processo ASR",
"GPT模型列表": "Lista de modelos GPT",
diff --git a/tools/i18n/locale/ru_RU.json b/tools/i18n/locale/ru_RU.json
index a5a7df2..42ca591 100644
--- a/tools/i18n/locale/ru_RU.json
+++ b/tools/i18n/locale/ru_RU.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Модель удаления реверберации и задержек (от FoxJoy):",
"ASR 模型": "Модель ASR",
"ASR 模型尺寸": "Размер модели ASR",
+ "数据类型精度": "точность типа данных",
"ASR 语言设置": "Настройки языка ASR",
"ASR进程输出信息": "Информация о процессе ASR",
"GPT模型列表": "Список моделей GPT",
diff --git a/tools/i18n/locale/tr_TR.json b/tools/i18n/locale/tr_TR.json
index 8469237..179d854 100644
--- a/tools/i18n/locale/tr_TR.json
+++ b/tools/i18n/locale/tr_TR.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3. Yankı ve gecikme giderme modeli (FoxJoy tarafından):",
"ASR 模型": "ASR modeli",
"ASR 模型尺寸": "ASR model boyutu",
+ "数据类型精度": "veri türü doğruluğu",
"ASR 语言设置": "ASR dil ayarları",
"ASR进程输出信息": "ASR işlemi çıktı bilgisi",
"GPT模型列表": "GPT model listesi",
diff --git a/tools/i18n/locale/zh_CN.json b/tools/i18n/locale/zh_CN.json
index 84cbca2..ddc2eda 100644
--- a/tools/i18n/locale/zh_CN.json
+++ b/tools/i18n/locale/zh_CN.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3、去混响、去延迟模型(by FoxJoy):",
"ASR 模型": "ASR 模型",
"ASR 模型尺寸": "ASR 模型尺寸",
+ "数据类型精度": "数据类型精度",
"ASR 语言设置": "ASR 语言设置",
"ASR进程输出信息": "ASR进程输出信息",
"GPT模型列表": "GPT模型列表",
diff --git a/tools/i18n/locale/zh_HK.json b/tools/i18n/locale/zh_HK.json
index c229797..253243e 100644
--- a/tools/i18n/locale/zh_HK.json
+++ b/tools/i18n/locale/zh_HK.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3、去混響、去延遲模型(by FoxJoy):",
"ASR 模型": "ASR 模型",
"ASR 模型尺寸": "ASR 模型尺寸",
+ "数据类型精度": "數據類型精度",
"ASR 语言设置": "ASR 語言設置",
"ASR进程输出信息": "ASR進程輸出信息",
"GPT模型列表": "GPT模型列表",
diff --git a/tools/i18n/locale/zh_SG.json b/tools/i18n/locale/zh_SG.json
index 81c7ab3..5c90a36 100644
--- a/tools/i18n/locale/zh_SG.json
+++ b/tools/i18n/locale/zh_SG.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3、去混響、去延遲模型(by FoxJoy):",
"ASR 模型": "ASR 模型",
"ASR 模型尺寸": "ASR 模型尺寸",
+ "数据类型精度": "數據類型精度",
"ASR 语言设置": "ASR 語言設定",
"ASR进程输出信息": "ASR進程輸出資訊",
"GPT模型列表": "GPT模型列表",
diff --git a/tools/i18n/locale/zh_TW.json b/tools/i18n/locale/zh_TW.json
index 014c16b..d9191e5 100644
--- a/tools/i18n/locale/zh_TW.json
+++ b/tools/i18n/locale/zh_TW.json
@@ -34,6 +34,7 @@
"3、去混响、去延迟模型(by FoxJoy):": "3、去混響、去延遲模型(by FoxJoy):",
"ASR 模型": "ASR 模型",
"ASR 模型尺寸": "ASR 模型尺寸",
+ "数据类型精度": "數據類型精度",
"ASR 语言设置": "ASR 語言設置",
"ASR进程输出信息": "ASR進程輸出資訊",
"GPT模型列表": "GPT模型列表",
diff --git a/webui.py b/webui.py
index aee1fa9..fe358e0 100644
--- a/webui.py
+++ b/webui.py
@@ -195,7 +195,7 @@ def change_tts_inference(if_tts,bert_path,cnhubert_base_path,gpu_number,gpt_path
yield i18n("TTS推理进程已关闭")
from tools.asr.config import asr_dict
-def open_asr(asr_inp_dir, asr_opt_dir, asr_model, asr_model_size, asr_lang):
+def open_asr(asr_inp_dir, asr_opt_dir, asr_model, asr_model_size, asr_lang, asr_precision):
global p_asr
if(p_asr==None):
asr_inp_dir=my_utils.clean_path(asr_inp_dir)
@@ -205,16 +205,18 @@ def open_asr(asr_inp_dir, asr_opt_dir, asr_model, asr_model_size, asr_lang):
cmd += f' -o "{asr_opt_dir}"'
cmd += f' -s {asr_model_size}'
cmd += f' -l {asr_lang}'
- cmd += " -p %s"%("float16"if is_half==True else "float32")
-
- yield "ASR任务开启:%s"%cmd,{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ cmd += f" -p {asr_precision}"
+ output_file_name = os.path.basename(asr_inp_dir)
+ output_folder = asr_opt_dir or "output/asr_opt"
+ output_file_path = os.path.abspath(f'{output_folder}/{output_file_name}.list')
+ yield "ASR任务开启:%s"%cmd, {"__type__":"update","visible":False}, {"__type__":"update","visible":True}, {"__type__":"update"}
print(cmd)
p_asr = Popen(cmd, shell=True)
p_asr.wait()
p_asr=None
- yield f"ASR任务完成, 查看终端进行下一步",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield f"ASR任务完成, 查看终端进行下一步", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}, {"__type__":"update","value":output_file_path}
else:
- yield "已有正在进行的ASR任务,需先终止才能开启下一次任务",{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "已有正在进行的ASR任务,需先终止才能开启下一次任务", {"__type__":"update","visible":False}, {"__type__":"update","visible":True}, {"__type__":"update"}
# return None
def close_asr():
@@ -222,7 +224,7 @@ def close_asr():
if(p_asr!=None):
kill_process(p_asr.pid)
p_asr=None
- return "已终止ASR进程",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ return "已终止ASR进程", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
def open_denoise(denoise_inp_dir, denoise_opt_dir):
global p_denoise
if(p_denoise==None):
@@ -230,14 +232,14 @@ def open_denoise(denoise_inp_dir, denoise_opt_dir):
denoise_opt_dir=my_utils.clean_path(denoise_opt_dir)
cmd = '"%s" tools/cmd-denoise.py -i "%s" -o "%s" -p %s'%(python_exec,denoise_inp_dir,denoise_opt_dir,"float16"if is_half==True else "float32")
- yield "语音降噪任务开启:%s"%cmd,{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "语音降噪任务开启:%s"%cmd, {"__type__":"update","visible":False}, {"__type__":"update","visible":True}, {"__type__":"update"}
print(cmd)
p_denoise = Popen(cmd, shell=True)
p_denoise.wait()
p_denoise=None
- yield f"语音降噪任务完成, 查看终端进行下一步",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield f"语音降噪任务完成, 查看终端进行下一步", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}, {"__type__":"update","value":denoise_opt_dir}
else:
- yield "已有正在进行的语音降噪任务,需先终止才能开启下一次任务",{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "已有正在进行的语音降噪任务,需先终止才能开启下一次任务", {"__type__":"update","visible":False}, {"__type__":"update","visible":True}, {"__type__":"update"}
# return None
def close_denoise():
@@ -245,7 +247,7 @@ def close_denoise():
if(p_denoise!=None):
kill_process(p_denoise.pid)
p_denoise=None
- return "已终止语音降噪进程",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ return "已终止语音降噪进程", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
p_train_SoVITS=None
def open1Ba(batch_size,total_epoch,exp_name,text_low_lr_rate,if_save_latest,if_save_every_weights,save_every_epoch,gpu_numbers1Ba,pretrained_s2G,pretrained_s2D):
@@ -275,21 +277,21 @@ def open1Ba(batch_size,total_epoch,exp_name,text_low_lr_rate,if_save_latest,if_s
with open(tmp_config_path,"w")as f:f.write(json.dumps(data))
cmd = '"%s" GPT_SoVITS/s2_train.py --config "%s"'%(python_exec,tmp_config_path)
- yield "SoVITS训练开始:%s"%cmd,{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "SoVITS训练开始:%s"%cmd, {"__type__":"update","visible":False}, {"__type__":"update","visible":True}
print(cmd)
p_train_SoVITS = Popen(cmd, shell=True)
p_train_SoVITS.wait()
p_train_SoVITS=None
- yield "SoVITS训练完成",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "SoVITS训练完成", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
else:
- yield "已有正在进行的SoVITS训练任务,需先终止才能开启下一次任务",{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "已有正在进行的SoVITS训练任务,需先终止才能开启下一次任务", {"__type__":"update","visible":False}, {"__type__":"update","visible":True}
def close1Ba():
global p_train_SoVITS
if(p_train_SoVITS!=None):
kill_process(p_train_SoVITS.pid)
p_train_SoVITS=None
- return "已终止SoVITS训练",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ return "已终止SoVITS训练", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
p_train_GPT=None
def open1Bb(batch_size,total_epoch,exp_name,if_dpo,if_save_latest,if_save_every_weights,save_every_epoch,gpu_numbers,pretrained_s1):
@@ -322,21 +324,21 @@ def open1Bb(batch_size,total_epoch,exp_name,if_dpo,if_save_latest,if_save_every_
with open(tmp_config_path, "w") as f:f.write(yaml.dump(data, default_flow_style=False))
# cmd = '"%s" GPT_SoVITS/s1_train.py --config_file "%s" --train_semantic_path "%s/6-name2semantic.tsv" --train_phoneme_path "%s/2-name2text.txt" --output_dir "%s/logs_s1"'%(python_exec,tmp_config_path,s1_dir,s1_dir,s1_dir)
cmd = '"%s" GPT_SoVITS/s1_train.py --config_file "%s" '%(python_exec,tmp_config_path)
- yield "GPT训练开始:%s"%cmd,{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "GPT训练开始:%s"%cmd, {"__type__":"update","visible":False}, {"__type__":"update","visible":True}
print(cmd)
p_train_GPT = Popen(cmd, shell=True)
p_train_GPT.wait()
p_train_GPT=None
- yield "GPT训练完成",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "GPT训练完成", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
else:
- yield "已有正在进行的GPT训练任务,需先终止才能开启下一次任务",{"__type__":"update","visible":False},{"__type__":"update","visible":True}
+ yield "已有正在进行的GPT训练任务,需先终止才能开启下一次任务", {"__type__":"update","visible":False}, {"__type__":"update","visible":True}
def close1Bb():
global p_train_GPT
if(p_train_GPT!=None):
kill_process(p_train_GPT.pid)
p_train_GPT=None
- return "已终止GPT训练",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ return "已终止GPT训练", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
ps_slice=[]
def open_slice(inp,opt_root,threshold,min_length,min_interval,hop_size,max_sil_kept,_max,alpha,n_parts):
@@ -344,12 +346,12 @@ def open_slice(inp,opt_root,threshold,min_length,min_interval,hop_size,max_sil_k
inp = my_utils.clean_path(inp)
opt_root = my_utils.clean_path(opt_root)
if(os.path.exists(inp)==False):
- yield "输入路径不存在",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "输入路径不存在", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}, {"__type__": "update"}, {"__type__": "update"}
return
if os.path.isfile(inp):n_parts=1
elif os.path.isdir(inp):pass
else:
- yield "输入路径存在但既不是文件也不是文件夹",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "输入路径存在但既不是文件也不是文件夹", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}, {"__type__": "update"}, {"__type__": "update"}
return
if (ps_slice == []):
for i_part in range(n_parts):
@@ -357,13 +359,13 @@ def open_slice(inp,opt_root,threshold,min_length,min_interval,hop_size,max_sil_k
print(cmd)
p = Popen(cmd, shell=True)
ps_slice.append(p)
- yield "切割执行中", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}
+ yield "切割执行中", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}, {"__type__": "update"}, {"__type__": "update"}
for p in ps_slice:
p.wait()
ps_slice=[]
- yield "切割结束",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "切割结束", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}, {"__type__": "update", "value":opt_root}, {"__type__": "update", "value":opt_root}
else:
- yield "已有正在进行的切割任务,需先终止才能开启下一次任务", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}
+ yield "已有正在进行的切割任务,需先终止才能开启下一次任务", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}, {"__type__": "update"}, {"__type__": "update"}
def close_slice():
global ps_slice
@@ -470,7 +472,7 @@ def open1b(inp_text,inp_wav_dir,exp_name,gpu_numbers,ssl_pretrained_dir):
for p in ps1b:
p.wait()
ps1b=[]
- yield "SSL提取进程结束",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "SSL提取进程结束", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
else:
yield "已有正在进行的SSL提取任务,需先终止才能开启下一次任务", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}
@@ -527,7 +529,7 @@ def open1c(inp_text,exp_name,gpu_numbers,pretrained_s2G_path):
with open(path_semantic, "w", encoding="utf8") as f:
f.write("\n".join(opt) + "\n")
ps1c=[]
- yield "语义token提取进程结束",{"__type__":"update","visible":True},{"__type__":"update","visible":False}
+ yield "语义token提取进程结束", {"__type__":"update","visible":True}, {"__type__":"update","visible":False}
else:
yield "已有正在进行的语义token提取任务,需先终止才能开启下一次任务", {"__type__": "update", "visible": False}, {"__type__": "update", "visible": True}
@@ -692,33 +694,37 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app:
uvr5_info = gr.Textbox(label=i18n("UVR5进程输出信息"))
gr.Markdown(value=i18n("0b-语音切分工具"))
with gr.Row():
- with gr.Row():
- slice_inp_path=gr.Textbox(label=i18n("音频自动切分输入路径,可文件可文件夹"),value="")
- slice_opt_root=gr.Textbox(label=i18n("切分后的子音频的输出根目录"),value="output/slicer_opt")
- threshold=gr.Textbox(label=i18n("threshold:音量小于这个值视作静音的备选切割点"),value="-34")
- min_length=gr.Textbox(label=i18n("min_length:每段最小多长,如果第一段太短一直和后面段连起来直到超过这个值"),value="4000")
- min_interval=gr.Textbox(label=i18n("min_interval:最短切割间隔"),value="300")
- hop_size=gr.Textbox(label=i18n("hop_size:怎么算音量曲线,越小精度越大计算量越高(不是精度越大效果越好)"),value="10")
- max_sil_kept=gr.Textbox(label=i18n("max_sil_kept:切完后静音最多留多长"),value="500")
- with gr.Row():
- open_slicer_button=gr.Button(i18n("开启语音切割"), variant="primary",visible=True)
- close_slicer_button=gr.Button(i18n("终止语音切割"), variant="primary",visible=False)
- _max=gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("max:归一化后最大值多少"),value=0.9,interactive=True)
- alpha=gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("alpha_mix:混多少比例归一化后音频进来"),value=0.25,interactive=True)
- n_process=gr.Slider(minimum=1,maximum=n_cpu,step=1,label=i18n("切割使用的进程数"),value=4,interactive=True)
- slicer_info = gr.Textbox(label=i18n("语音切割进程输出信息"))
+ with gr.Column(scale=3):
+ with gr.Row():
+ slice_inp_path=gr.Textbox(label=i18n("音频自动切分输入路径,可文件可文件夹"),value="")
+ slice_opt_root=gr.Textbox(label=i18n("切分后的子音频的输出根目录"),value="output/slicer_opt")
+ with gr.Row():
+ threshold=gr.Textbox(label=i18n("threshold:音量小于这个值视作静音的备选切割点"),value="-34")
+ min_length=gr.Textbox(label=i18n("min_length:每段最小多长,如果第一段太短一直和后面段连起来直到超过这个值"),value="4000")
+ min_interval=gr.Textbox(label=i18n("min_interval:最短切割间隔"),value="300")
+ hop_size=gr.Textbox(label=i18n("hop_size:怎么算音量曲线,越小精度越大计算量越高(不是精度越大效果越好)"),value="10")
+ max_sil_kept=gr.Textbox(label=i18n("max_sil_kept:切完后静音最多留多长"),value="500")
+ with gr.Row():
+ _max=gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("max:归一化后最大值多少"),value=0.9,interactive=True)
+ alpha=gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("alpha_mix:混多少比例归一化后音频进来"),value=0.25,interactive=True)
+ n_process=gr.Slider(minimum=1,maximum=n_cpu,step=1,label=i18n("切割使用的进程数"),value=4,interactive=True)
+ with gr.Row():
+ slicer_info = gr.Textbox(label=i18n("语音切割进程输出信息"))
+ open_slicer_button=gr.Button(i18n("开启语音切割"), variant="primary",visible=True)
+ close_slicer_button=gr.Button(i18n("终止语音切割"), variant="primary",visible=False)
gr.Markdown(value=i18n("0bb-语音降噪工具"))
with gr.Row():
+ with gr.Column(scale=3):
+ with gr.Row():
+ denoise_input_dir=gr.Textbox(label=i18n("降噪音频文件输入文件夹"),value="")
+ denoise_output_dir=gr.Textbox(label=i18n("降噪结果输出文件夹"),value="output/denoise_opt")
+ with gr.Row():
+ denoise_info = gr.Textbox(label=i18n("语音降噪进程输出信息"))
open_denoise_button = gr.Button(i18n("开启语音降噪"), variant="primary",visible=True)
close_denoise_button = gr.Button(i18n("终止语音降噪进程"), variant="primary",visible=False)
- denoise_input_dir=gr.Textbox(label=i18n("降噪音频文件输入文件夹"),value="")
- denoise_output_dir=gr.Textbox(label=i18n("降噪结果输出文件夹"),value="output/denoise_opt")
- denoise_info = gr.Textbox(label=i18n("语音降噪进程输出信息"))
gr.Markdown(value=i18n("0c-中文批量离线ASR工具"))
with gr.Row():
- open_asr_button = gr.Button(i18n("开启离线批量ASR"), variant="primary",visible=True)
- close_asr_button = gr.Button(i18n("终止ASR进程"), variant="primary",visible=False)
- with gr.Column():
+ with gr.Column(scale=3):
with gr.Row():
asr_inp_dir = gr.Textbox(
label=i18n("输入文件夹路径"),
@@ -749,17 +755,39 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app:
interactive = True,
value="zh"
)
+ asr_precision = gr.Dropdown(
+ label = i18n("数据类型精度"),
+ choices = ["float32"],
+ interactive = True,
+ value="float32"
+ )
with gr.Row():
- asr_info = gr.Textbox(label=i18n("ASR进程输出信息"))
+ asr_info = gr.Textbox(label=i18n("ASR进程输出信息"))
+ open_asr_button = gr.Button(i18n("开启离线批量ASR"), variant="primary",visible=True)
+ close_asr_button = gr.Button(i18n("终止ASR进程"), variant="primary",visible=False)
def change_lang_choices(key): #根据选择的模型修改可选的语言
# return gr.Dropdown(choices=asr_dict[key]['lang'])
return {"__type__": "update", "choices": asr_dict[key]['lang'],"value":asr_dict[key]['lang'][0]}
def change_size_choices(key): # 根据选择的模型修改可选的模型尺寸
# return gr.Dropdown(choices=asr_dict[key]['size'])
- return {"__type__": "update", "choices": asr_dict[key]['size']}
+ return {"__type__": "update", "choices": asr_dict[key]['size'],"value":asr_dict[key]['size'][-1]}
+    def change_precision_choices(key): #根据选择的模型修改可选的精度
+ if key =="Faster Whisper (多语种)":
+ if default_batch_size <= 4:
+ precision = 'int8'
+ elif is_half:
+ precision = 'float16'
+ else:
+ precision = 'float32'
+ else:
+ precision = 'float32'
+ # return gr.Dropdown(choices=asr_dict[key]['precision'])
+ return {"__type__": "update", "choices": asr_dict[key]['precision'],"value":precision}
asr_model.change(change_lang_choices, [asr_model], [asr_lang])
asr_model.change(change_size_choices, [asr_model], [asr_size])
+ asr_model.change(change_precision_choices, [asr_model], [asr_precision])
+
gr.Markdown(value=i18n("0d-语音文本校对标注工具"))
with gr.Row():
@@ -772,11 +800,11 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app:
label_info = gr.Textbox(label=i18n("打标工具进程输出信息"))
if_label.change(change_label, [if_label,path_list], [label_info])
if_uvr5.change(change_uvr5, [if_uvr5], [uvr5_info])
- open_asr_button.click(open_asr, [asr_inp_dir, asr_opt_dir, asr_model, asr_size, asr_lang], [asr_info,open_asr_button,close_asr_button])
+ open_asr_button.click(open_asr, [asr_inp_dir, asr_opt_dir, asr_model, asr_size, asr_lang, asr_precision], [asr_info,open_asr_button,close_asr_button,path_list])
close_asr_button.click(close_asr, [], [asr_info,open_asr_button,close_asr_button])
- open_slicer_button.click(open_slice, [slice_inp_path,slice_opt_root,threshold,min_length,min_interval,hop_size,max_sil_kept,_max,alpha,n_process], [slicer_info,open_slicer_button,close_slicer_button])
+ open_slicer_button.click(open_slice, [slice_inp_path,slice_opt_root,threshold,min_length,min_interval,hop_size,max_sil_kept,_max,alpha,n_process], [slicer_info,open_slicer_button,close_slicer_button,asr_inp_dir,denoise_input_dir])
close_slicer_button.click(close_slice, [], [slicer_info,open_slicer_button,close_slicer_button])
- open_denoise_button.click(open_denoise, [denoise_input_dir,denoise_output_dir], [denoise_info,open_denoise_button,close_denoise_button])
+ open_denoise_button.click(open_denoise, [denoise_input_dir,denoise_output_dir], [denoise_info,open_denoise_button,close_denoise_button,asr_inp_dir])
close_denoise_button.click(close_denoise, [], [denoise_info,open_denoise_button,close_denoise_button])
with gr.TabItem(i18n("1-GPT-SoVITS-TTS")):
@@ -879,4 +907,4 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app:
share=is_share,
server_port=webui_port_main,
quiet=True,
- )
+ )
\ No newline at end of file