From 14a1579877b75a86d58de58f53fb5eed7e468c0b Mon Sep 17 00:00:00 2001 From: pengoosedev <73521518+pengoosedev@users.noreply.github.com> Date: Thu, 18 Jan 2024 04:25:44 +0900 Subject: [PATCH 01/63] Add missing ko_KR.json --- tools/i18n/locale/ko_KR.json | 135 +++++++++++++++++++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 tools/i18n/locale/ko_KR.json diff --git a/tools/i18n/locale/ko_KR.json b/tools/i18n/locale/ko_KR.json new file mode 100644 index 00000000..816ed3f7 --- /dev/null +++ b/tools/i18n/locale/ko_KR.json @@ -0,0 +1,135 @@ +{ + ">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音": ">=3이면 harvest 음높이 인식 결과에 중간값 필터를 사용합니다. 이 수치는 필터 반경이며, 사용하면 불명확한 음성을 어느정도 배제할 수 있습니다.", + "A模型权重": "A 모델 가중치", + "A模型路径": "A 모델 경로", + "B模型路径": "B 모델 경로", + "E:\\语音音频+标注\\米津玄师\\src": "E:\\음성 오디오+주석\\요네즈 켄시\\src", + "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 곡선 파일, 선택 사항, 한 줄에 하나의 음높이, 기본 F0 및 음높이 변화를 대체함", + "Index Rate": "인덱스 비율", + "Onnx导出": "Onnx 내보내기", + "Onnx输出路径": "Onnx 출력 경로", + "RVC模型路径": "RVC 모델 경로", + "ckpt处理": "ckpt 처리", + "harvest进程数": "harvest 프로세스 수", + "index文件路径不可包含中文": "인덱스 파일 경로에는 중국어를 포함할 수 없습니다.", + "pth文件路径不可包含中文": "pth 파일 경로에는 중국어를 포함할 수 없습니다.", + "rmvpe卡号配置:以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe 카드 번호 구성: '-'로 구분하여 입력된 다른 프로세스 카드 번호, 예를 들어 0-0-1은 카드 0에서 2개의 프로세스를 실행하고 카드 1에서 1개의 프로세스를 실행", + "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: 실험 설정을 작성합니다. 실험 데이터는 logs 아래에 있으며, 각 실험마다 하나의 폴더가 있습니다. 실험 이름 경로를 수동으로 입력해야 하며, 이 안에는 실험 설정, 로그, 훈련으로 얻은 모델 파일이 포함되어 있습니다.", + "step1:正在处理数据": "step1: 데이터 처리 중", + "step2:正在提取音高&正在提取特征": "step2: 음높이 추출 및 특성 추출 중", + "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: 훈련 폴더 아래 모든 오디오로 디코딩 가능한 파일을 자동으로 순회하고 슬라이스 정규화를 진행하여, 실험 디렉토리 아래에 2개의 wav 폴더를 생성합니다; 현재는 단일 사용자 훈련만 지원합니다.", + "step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: CPU를 사용해 음높이를 추출합니다(모델이 음높이를 포함하는 경우), GPU를 사용해 특성을 추출합니다(카드 번호 선택)", + "step3: 填写训练设置, 开始训练模型和索引": "step3: 훈련 설정을 작성하고, 모델 및 인덱스 훈련을 시작합니다", + "step3a:正在训练模型": "step3a: 모델 훈련 중", + "一键训练": "원키 트레이닝", + "也可批量输入音频文件, 二选一, 优先读文件夹": "대량으로 오디오 파일 입력도 가능, 둘 중 하나 선택, 폴더 우선 읽기", + "人声伴奏分离批量处理, 使用UVR5模型。
合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。
模型分为三类:
1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点;
2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型;
3、去混响、去延迟模型(by FoxJoy):
  (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;
 (234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。
去混响/去延迟,附:
1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍;
2、MDX-Net-Dereverb模型挺慢的;
3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "인간 목소리와 반주 분리 대량 처리, UVR5 모델 사용.
올바른 폴더 경로 예: E:\\codes\\py39\\vits_vc_gpu\\백로서화 테스트 케이스(파일 탐색기 주소창에서 복사하면 됨).
모델은 세 가지 유형으로 나뉩니다:
1. 인간 목소리 보존: 하모니가 없는 오디오를 선택, 주요 인간 목소리를 HP5보다 더 잘 보존. 내장된 HP2와 HP3 모델, HP3는 약간의 반주를 놓칠 수 있지만 HP2보다는 인간 목소리를 조금 더 잘 보존합니다.
2. 오직 주요 인간 목소리 보존: 하모니가 있는 오디오를 선택, 주요 인간 목소리가 약간 약해질 수 있음. 내장된 HP5 모델 하나;
3. 울림 제거, 지연 제거 모델(by FoxJoy):
  (1)MDX-Net(onnx_dereverb): 양채널 울림에 대해서는 최선의 선택, 단채널 울림 제거 불가능;
 (234)DeEcho: 지연 효과 제거. Aggressive가 Normal보다 더 철저하게 제거하며, DeReverb는 추가로 울림 제거, 단일 채널 울림 제거 가능하지만 고주파 중심의 판형 울림은 완전히 제거하지 못함.
울림/지연 제거 시 참고:
1. DeEcho-DeReverb 모델의 처리 시간은 다른 두 DeEcho 모델의 거의 2배임;
2. MDX-Net-Dereverb 모델은 상당히 느림;
3. 개인적으로 추천하는 가장 깨끗한 구성은 MDX-Net 다음에 DeEcho-Aggressive 사용.", + "以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "-로 구분하여 입력한 카드 번호, 예를 들어 0-1-2는 카드0, 카드1, 카드2 사용", + "伴奏人声分离&去混响&去回声": "반주 및 인간 목소리 분리 & 울림 제거 & 에코 제거", + "使用模型采样率": "모델 샘플링 레이트 사용", + "使用设备采样率": "장치 샘플링 레이트 사용", + "保存名": "저장 이름", + "保存的文件名, 默认空为和源文件同名": "저장된 파일 이름, 기본값은 원본 파일과 동일", + "保存的模型名不带后缀": "저장된 모델 이름은 접미사 없음", + "保存频率save_every_epoch": "저장 빈도 save_every_epoch", + "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果": "청결한 자음과 숨소리를 보호하고, 전자음의 찢어짐과 같은 아티팩트를 방지하며, 0.5까지 끌어올리면 보호가 활성화되지 않으며, 낮추면 보호 강도는 증가하지만 인덱싱 효과는 감소할 수 있음", + "修改": "수정", + "修改模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보 수정(오직 weights 폴더에서 추출된 소형 모델 파일만 지원)", + "停止音频转换": "오디오 변환 중지", + "全流程结束!": "전체 과정 완료!", + "刷新音色列表和索引路径": "음색 목록 및 인덱스 경로 새로고침", + "加载模型": "모델 로드", + "加载预训练底模D路径": "사전 훈련된 베이스 모델 D 경로 로드", + "加载预训练底模G路径": "사전 훈련된 베이스 모델 G 경로 로드", + "单次推理": "단일 추론", + "卸载音色省显存": "음색 언로드로 메모리 절약", + "变调(整数, 半音数量, 升八度12降八度-12)": "변조(정수, 반음 수, 옥타브 상승 12, 옥타브 하강 -12)", + "后处理重采样至最终采样率,0为不进行重采样": "후처리로 최종 샘플링 레이트까지 리샘플링, 0은 리샘플링하지 않음", + "否": "아니오", + "启用相位声码器": "위상 보코더 활성화", + "响应阈值": "응답 임계값", + "响度因子": "소리 크기 인자", + "处理数据": "데이터 처리", + "导出Onnx模型": "Onnx 모델 내보내기", + "导出文件格式": "파일 형식 내보내기", + "常见问题解答": "자주 묻는 질문 답변", + "常规设置": "일반 설정", + "开始音频转换": "오디오 변환 시작", + "很遗憾您这没有能用的显卡来支持您训练": "유감스럽게도 훈련을 지원할 수 있는 그래픽 카드가 없습니다", + "性能设置": "성능 설정", + "总训练轮数total_epoch": "총 훈련 회차 total_epoch", + "批量推理": "대량 추론", + "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "대량 변환, 변환할 오디오 폴더 입력, 또는 여러 오디오 파일 업로드, 지정된 폴더(기본값 opt)에 변환된 오디오 출력.", + "指定输出主人声文件夹": "주인공 목소리 출력 폴더 지정", + "指定输出文件夹": "출력 파일 폴더 지정", + "指定输出非主人声文件夹": "비주인공 목소리 출력 폴더 지정", + "推理时间(ms):": "추론 시간(ms):", + "推理音色": "추론 음색", + "提取": "추출", + "提取音高和处理数据使用的CPU进程数": "음높이 추출 및 데이터 처리에 사용되는 CPU 프로세스 수", + "是": "예", + "是否仅保存最新的ckpt文件以节省硬盘空间": "디스크 공간을 절약하기 위해 가장 최신의 ckpt 파일만 저장할지 여부", + "是否在每次保存时间点将最终小模型保存至weights文件夹": "매 저장 시점마다 최종 작은 모델을 weights 폴더에 저장할지 여부", + "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "모든 훈련 세트를 VRAM에 캐시할지 여부. 10분 미만의 작은 데이터는 훈련 속도를 높이기 위해 캐시할 수 있으나, 큰 데이터는 VRAM을 초과하여 큰 속도 향상을 기대할 수 없음.", + "显卡信息": "그래픽 카드 정보", + "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.": "이 소프트웨어는 MIT 라이선스로 오픈 소스이며, 작성자는 소프트웨어에 대한 어떠한 제어도 가지지 않으며, 소프트웨어 사용자 및 소프트웨어에서 내보낸 소리를 전파하는 사용자는 모든 책임을 져야 함.
이 조항을 인정하지 않는 경우, 소프트웨어 패키지 내의 어떠한 코드나 파일도 사용하거나 인용할 수 없음. 자세한 내용은 루트 디렉토리의 LICENSE를 참조.", + "查看": "보기", + "查看模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보 보기(오직 weights 폴더에서 추출된 작은 모델 파일만 지원)", + "检索特征占比": "특징 검색 비율", + "模型": "모델", + "模型推理": "모델 추론", + "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "모델 추출(로그 폴더 아래 대용량 모델 경로 입력), 중간에 훈련을 중단하고 싶은 경우나 작은 파일 모델을 자동으로 저장하지 않은 경우, 또는 중간 모델을 테스트하고 싶은 경우에 적합", + "模型是否带音高指导": "모델이 음높이 지도를 포함하는지 여부", + "模型是否带音高指导(唱歌一定要, 语音可以不要)": "모델이 음높이 지도를 포함하는지 여부(노래에는 필수, 말하기에는 선택적)", + "模型是否带音高指导,1是0否": "모델이 음높이 지도를 포함하는지 여부, 1은 '예', 0은 '아니오'", + "模型版本型号": "모델 버전 및 모델", + "模型融合, 可用于测试音色融合": "모델 통합, 음색 통합 테스트에 사용 가능", + "模型路径": "모델 경로", + "每张显卡的batch_size": "각 GPU의 batch_size", + "淡入淡出长度": "페이드 인/아웃 길이", + "版本": "버전", + "特征提取": "특징 추출", + "特征检索库文件路径,为空则使用下拉的选择结果": "특징 검색 라이브러리 파일 경로, 비어 있으면 드롭다운 선택 결과 사용", + "男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "남성에서 여성으로 전환 시 +12키 추천, 여성에서 남성으로 전환 시 -12키 추천, 음역대 폭발로 음색 왜곡이 발생할 경우 적절한 음역대로 조정 가능.", + "目标采样率": "목표 샘플링 비율", + "算法延迟(ms):": "알고리즘 지연(ms):", + "自动检测index路径,下拉式选择(dropdown)": "index 경로 자동 감지, 드롭다운 선택", + "融合": "통합", + "要改的模型信息": "수정할 모델 정보", + "要置入的模型信息": "삽입할 모델 정보", + "训练": "훈련", + "训练模型": "모델 훈련", + "训练特征索引": "특징 인덱스 훈련", + "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "훈련이 완료되었습니다. 콘솔 훈련 로그나 실험 폴더 내의 train.log를 확인하세요.", + "请指定说话人id": "화자 id를 지정해주세요.", + "请选择index文件": "index 파일을 선택해주세요.", + "请选择pth文件": "pth 파일을 선택해주세요.", + "请选择说话人id": "화자 id를 선택해주세요.", + "转换": "변환", + "输入实验名": "실험명을 입력하세요.", + "输入待处理音频文件夹路径": "처리할 오디오 파일 폴더 경로를 입력하세요.", + "输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "처리할 오디오 파일 폴더 경로를 입력하세요(파일 관리자의 주소 표시줄에서 복사하세요).", + "输入待处理音频文件路径(默认是正确格式示例)": "처리할 오디오 파일 경로를 입력하세요(기본값은 올바른 형식의 예시입니다).", + "输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络": "원본 볼륨 엔벨로프와 출력 볼륨 엔벨로프의 혼합 비율을 입력하세요. 1에 가까울수록 출력 엔벨로프를 더 많이 사용합니다.", + "输入监听": "모니터링 입력", + "输入训练文件夹路径": "학습시킬 파일 폴더의 경로를 입력하세요.", + "输入设备": "입력 장치", + "输入降噪": "입력 노이즈 감소", + "输出信息": "출력 정보", + "输出变声": "음성 변환 출력", + "输出设备": "출력 장치", + "输出降噪": "출력 노이즈 감소", + "输出音频(右下角三个点,点了可以下载)": "오디오 출력(오른쪽 하단 세 개의 점, 클릭하면 다운로드 가능)", + "选择.index文件": ".index 파일 선택", + "选择.pth文件": ".pth 파일 선택", + "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU": "음고 추출 알고리즘을 선택하세요. 노래 입력 시 pm으로 속도를 높일 수 있으며, harvest는 저음이 좋지만 매우 느리고, crepe는 효과가 좋지만 GPU를 많이 사용합니다.", + "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "음고 추출 알고리즘을 선택하세요. 
노래 입력 시 pm으로 속도를 높일 수 있고, harvest는 저음이 좋지만 매우 느리며, crepe는 효과가 좋지만 GPU를 많이 사용하고, rmvpe는 가장 좋은 효과를 내면서 GPU를 적게 사용합니다.", + "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢,rmvpe效果最好且微吃CPU/GPU": "음고 추출 알고리즘 선택: 노래 입력 시 pm으로 속도를 높일 수 있으며, 고품질 음성이지만 CPU가 낮을 때는 dio로 속도를 높일 수 있고, harvest는 품질이 더 좋지만 느리며, rmvpe는 최고의 효과를 내면서 CPU/GPU를 적게 사용합니다.", + "采样率:": "샘플링 레이트:", + "采样长度": "샘플링 길이", + "重载设备列表": "장치 목록 리로드", + "音调设置": "음조 설정", + "音频设备(请使用同种类驱动)": "오디오 장치(동일한 유형의 드라이버를 사용해주세요)", + "音高算法": "음고 알고리즘", + "额外推理时长": "추가적인 추론 시간" +} From 37ae8bf051c4ae43869cf799831ae19d5df1557d Mon Sep 17 00:00:00 2001 From: Yuan-Man <68322456+Yuan-ManX@users.noreply.github.com> Date: Wed, 14 Feb 2024 20:32:26 +0800 Subject: [PATCH 02/63] Update es_ES.json --- i18n/locale/es_ES.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/i18n/locale/es_ES.json b/i18n/locale/es_ES.json index 5445b698..3bcd2a38 100644 --- a/i18n/locale/es_ES.json +++ b/i18n/locale/es_ES.json @@ -8,8 +8,16 @@ "是否开启UVR5-WebUI": "¿Habilitar UVR5-WebUI?", "UVR5进程输出信息": "Información de salida del proceso UVR5", "0b-语音切分工具": "0b-Herramienta de división de voz", + ".list标注文件的路径": "Ruta del archivo de anotación .list", + "GPT模型列表": "Lista de modelos GPT", + "SoVITS模型列表": "Lista de modelos SoVITS", + "填切割后音频所在目录!读取的音频文件完整路径=该目录-拼接-list文件里波形对应的文件名(不是全路径)。": "Directorio donde se guardan los archivos de audio después del corte! Ruta completa del archivo de audio a leer = este directorio - nombre de archivo correspondiente a la forma de onda en el archivo de lista (no la ruta completa).", "音频自动切分输入路径,可文件可文件夹": "Ruta de entrada para la división automática de audio, puede ser un archivo o una carpeta", "切分后的子音频的输出根目录": "Directorio raíz de salida de los sub-audios después de la división", + "怎么切": "Cómo cortar", + "不切": "No cortar", + "凑四句一切": "Completa cuatro oraciones para rellenar todo", + "按英文句号.切": "Cortar por puntos en inglés.", "threshold:音量小于这个值视作静音的备选切割点": "umbral: puntos de corte alternativos considerados como silencio si el volumen es menor que este valor", "min_length:每段最小多长,如果第一段太短一直和后面段连起来直到超过这个值": "min_length: duración mínima de cada segmento, si el primer segmento es demasiado corto, se conecta continuamente con los siguientes hasta que supera este valor", "min_interval:最短切割间隔": "min_interval: intervalo mínimo de corte", From 709c6c3d4031cd1bdbdb26ee1bece9d67e45616c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=88=98=E6=82=A6?= Date: Sat, 17 Feb 2024 12:13:43 +0800 Subject: [PATCH 03/63] Update english.py MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 热词应该是覆盖逻辑,因为原版字典里如果key存在的话,那么用户定义的热词纠正发音就不会生效 --- GPT_SoVITS/text/english.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/GPT_SoVITS/text/english.py b/GPT_SoVITS/text/english.py index 0a5d6c21..90f48a55 100644 --- a/GPT_SoVITS/text/english.py +++ b/GPT_SoVITS/text/english.py @@ -169,9 +169,9 @@ def read_dict_new(): line = line.strip() word_split = line.split(" ") word = word_split[0] - if word not in g2p_dict: - g2p_dict[word] = [] - g2p_dict[word].append(word_split[1:]) + #if word not in g2p_dict: + g2p_dict[word] = [] + g2p_dict[word].append(word_split[1:]) line_index = line_index + 1 line = f.readline() From c70a609a313cceaf808044f81531dedb2fffabf2 Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Sat, 17 Feb 2024 16:04:06 +0800 Subject: [PATCH 04/63] Adjust ja clean text --- GPT_SoVITS/inference_webui.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py 
b/GPT_SoVITS/inference_webui.py index dd8fce3b..39ae7e43 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -248,6 +248,10 @@ def clean_text_inf(text, language): formattext = "" language = language.replace("all_","") for tmp in LangSegment.getTexts(text): + if language == "ja": + if tmp["lang"] == language or tmp["lang"] == "zh": + formattext += tmp["text"] + " " + continue if tmp["lang"] == language: formattext += tmp["text"] + " " while " " in formattext: @@ -279,8 +283,6 @@ def nonen_clean_text_inf(text, language): for tmp in LangSegment.getTexts(text): langlist.append(tmp["lang"]) textlist.append(tmp["text"]) - print(textlist) - print(langlist) phones_list = [] word2ph_list = [] norm_text_list = [] From e97cc3346a16a1cf2fddf2be5735f8d06425bcbe Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Sat, 17 Feb 2024 16:45:31 +0800 Subject: [PATCH 05/63] =?UTF-8?q?=E6=A8=A1=E5=9E=8B=E5=AE=9E=E9=AA=8C?= =?UTF-8?q?=E5=90=8D=E5=8F=AF=E8=AE=BE=E7=BD=AE=E4=B8=BA=E4=B8=AD=E6=96=87?= =?UTF-8?q?=E3=80=82?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit fix https://github.com/RVC-Boss/GPT-SoVITS/issues/500 --- GPT_SoVITS/process_ckpt.py | 12 ++++++++++-- GPT_SoVITS/s1_train.py | 11 ++++++++++- GPT_SoVITS/utils.py | 11 ++++++++++- 3 files changed, 30 insertions(+), 4 deletions(-) diff --git a/GPT_SoVITS/process_ckpt.py b/GPT_SoVITS/process_ckpt.py index 74833379..3a436f10 100644 --- a/GPT_SoVITS/process_ckpt.py +++ b/GPT_SoVITS/process_ckpt.py @@ -1,11 +1,18 @@ import traceback from collections import OrderedDict - +from time import time as ttime +import shutil,os import torch from tools.i18n.i18n import I18nAuto i18n = I18nAuto() +def my_save(fea,path):#####fix issue: torch.save doesn't support chinese path + dir=os.path.dirname(path) + name=os.path.basename(path) + tmp_path="%s.pth"%(ttime()) + torch.save(fea,tmp_path) + shutil.move(tmp_path,"%s/%s"%(dir,name)) def savee(ckpt, name, epoch, steps, hps): try: @@ -17,7 +24,8 @@ def savee(ckpt, name, epoch, steps, hps): opt["weight"][key] = ckpt[key].half() opt["config"] = hps opt["info"] = "%sepoch_%siteration" % (epoch, steps) - torch.save(opt, "%s/%s.pth" % (hps.save_weight_dir, name)) + # torch.save(opt, "%s/%s.pth" % (hps.save_weight_dir, name)) + my_save(opt, "%s/%s.pth" % (hps.save_weight_dir, name)) return "Success." 
except: return traceback.format_exc() diff --git a/GPT_SoVITS/s1_train.py b/GPT_SoVITS/s1_train.py index c26302a8..fb273542 100644 --- a/GPT_SoVITS/s1_train.py +++ b/GPT_SoVITS/s1_train.py @@ -24,6 +24,14 @@ torch.set_float32_matmul_precision("high") from AR.utils import get_newest_ckpt from collections import OrderedDict +from time import time as ttime +import shutil +def my_save(fea,path):#####fix issue: torch.save doesn't support chinese path + dir=os.path.dirname(path) + name=os.path.basename(path) + tmp_path="%s.pth"%(ttime()) + torch.save(fea,tmp_path) + shutil.move(tmp_path,"%s/%s"%(dir,name)) class my_model_ckpt(ModelCheckpoint): @@ -70,7 +78,8 @@ class my_model_ckpt(ModelCheckpoint): to_save_od["weight"][key] = dictt[key].half() to_save_od["config"] = self.config to_save_od["info"] = "GPT-e%s" % (trainer.current_epoch + 1) - torch.save( + # torch.save( + my_save( to_save_od, "%s/%s-e%s.ckpt" % ( diff --git a/GPT_SoVITS/utils.py b/GPT_SoVITS/utils.py index 0ce03b33..7984b5a8 100644 --- a/GPT_SoVITS/utils.py +++ b/GPT_SoVITS/utils.py @@ -64,6 +64,14 @@ def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False ) return model, optimizer, learning_rate, iteration +from time import time as ttime +import shutil +def my_save(fea,path):#####fix issue: torch.save doesn't support chinese path + dir=os.path.dirname(path) + name=os.path.basename(path) + tmp_path="%s.pth"%(ttime()) + torch.save(fea,tmp_path) + shutil.move(tmp_path,"%s/%s"%(dir,name)) def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): logger.info( @@ -75,7 +83,8 @@ def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path) state_dict = model.module.state_dict() else: state_dict = model.state_dict() - torch.save( + # torch.save( + my_save( { "model": state_dict, "iteration": iteration, From 99ad1f3ce586747534f87edf6072d9054a69703e Mon Sep 17 00:00:00 2001 From: ChanningWang2018 <40551910+ChanningWang2018@users.noreply.github.com> Date: Sat, 17 Feb 2024 20:19:10 +0800 Subject: [PATCH 06/63] Fix Issue with Share Link Generation in colab_webui.ipynb Modified the way we retrieve the "is_share" environment variable. 
--- colab_webui.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/colab_webui.ipynb b/colab_webui.ipynb index 21722da5..79717ccf 100644 --- a/colab_webui.ipynb +++ b/colab_webui.ipynb @@ -82,7 +82,7 @@ "source": [ "# @title launch WebUI 启动WebUI\n", "!/usr/local/bin/pip install ipykernel\n", - "!sed -i '9s/False/True/' /content/GPT-SoVITS/config.py\n", + "!sed -i 's/os.environ.get("is_share","False")/os.environ.get("is_share","True")/g' /content/GPT-SoVITS/config.py\n", "%cd /content/GPT-SoVITS/\n", "!/usr/local/bin/python webui.py" ], @@ -93,4 +93,4 @@ "outputs": [] } ] -} \ No newline at end of file +} From 82d5928bf28d272a5fbb47962410347496a64551 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Sat, 17 Feb 2024 22:25:33 +0800 Subject: [PATCH 07/63] Revert "Fix Issue with Share Link Generation in colab_webui.ipynb" --- colab_webui.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/colab_webui.ipynb b/colab_webui.ipynb index 79717ccf..21722da5 100644 --- a/colab_webui.ipynb +++ b/colab_webui.ipynb @@ -82,7 +82,7 @@ "source": [ "# @title launch WebUI 启动WebUI\n", "!/usr/local/bin/pip install ipykernel\n", - "!sed -i 's/os.environ.get("is_share","False")/os.environ.get("is_share","True")/g' /content/GPT-SoVITS/config.py\n", + "!sed -i '9s/False/True/' /content/GPT-SoVITS/config.py\n", "%cd /content/GPT-SoVITS/\n", "!/usr/local/bin/python webui.py" ], @@ -93,4 +93,4 @@ "outputs": [] } ] -} +} \ No newline at end of file From e60988a568290b2d76b2b0b860aa1c9cd2f9524a Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Sat, 17 Feb 2024 22:30:54 +0800 Subject: [PATCH 08/63] Update colab_webui.ipynb --- colab_webui.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/colab_webui.ipynb b/colab_webui.ipynb index 21722da5..70fa7940 100644 --- a/colab_webui.ipynb +++ b/colab_webui.ipynb @@ -82,7 +82,7 @@ "source": [ "# @title launch WebUI 启动WebUI\n", "!/usr/local/bin/pip install ipykernel\n", - "!sed -i '9s/False/True/' /content/GPT-SoVITS/config.py\n", + "!sed -i '10s/False/True/' /content/GPT-SoVITS/config.py\n", "%cd /content/GPT-SoVITS/\n", "!/usr/local/bin/python webui.py" ], @@ -93,4 +93,4 @@ "outputs": [] } ] -} \ No newline at end of file +} From f49d60d6bb7fec124ff859431b048fe423b59627 Mon Sep 17 00:00:00 2001 From: Tundra Work Date: Sun, 18 Feb 2024 07:13:09 +0000 Subject: [PATCH 09/63] fix: 1A-Dataset formatting doesn't work if using a empty 'Audio dataset folder' --- GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py b/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py index b8355dd4..9e137a9f 100644 --- a/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py +++ b/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py @@ -98,7 +98,7 @@ for line in lines[int(i_part)::int(all_parts)]: try: # wav_name,text=line.split("\t") wav_name, spk_name, language, text = line.split("|") - if (inp_wav_dir !=None): + if (inp_wav_dir != ""): wav_name = os.path.basename(wav_name) wav_path = "%s/%s"%(inp_wav_dir, wav_name) From 7d6f9aecc72ffbe339ad40911f2c077ed0af01f4 Mon Sep 17 00:00:00 2001 From: pengoosedev <73521518+pengoosedev@users.noreply.github.com> Date: Sun, 18 Feb 2024 23:36:15 +0900 Subject: [PATCH 10/63] docs: uppdate Changelog_KO.md --- docs/ko/Changelog_KO.md | 46 ++++++++++++++++++++++++++++++++++++----- 1 file changed, 41 
insertions(+), 5 deletions(-) diff --git a/docs/ko/Changelog_KO.md b/docs/ko/Changelog_KO.md index 1afa9e5c..ede4e613 100644 --- a/docs/ko/Changelog_KO.md +++ b/docs/ko/Changelog_KO.md @@ -50,9 +50,45 @@ 2. 중국어 및 영어 문자열의 문장 부호가 잘리는 문제 및 문장의 시작과 끝에 문장 부호가 추가되는 문제를 수정했습니다. 3. 문장 부호의 수를 확장하였습니다. -todolist: +### 20240201 업데이트 -1. 동음이의어(중문) 추론 최적화 -2. 영문 대문자 인식 및 영문 하이픈 [문제](https://github.com/RVC-Boss/GPT-SoVITS/issues/271) -3. 텍스트에 % 기호가 포함되어 있으면 오류가 발생하며 추론이 불가능합니다. 또한 '元/吨'이 '元吨'으로 읽히지 않고 '元每吨'으로 읽히도록 하는 등의 문제가 존재합니다. 이러한 문제를 해결하기 위해 어떤 라이브러리를 사용해야 하며, 이에 대한 개선을 고민하고 있습니다. -4. 중-일-영, 중-영, 일-영을 포함한 다섯 가지 언어를 지원하는 것을 목표로 잡고있습니다. +1. uvr5가 잘못된 형식으로 읽어들이는 문제를 수정하였습니다. +2. 중국어, 일본어, 영어가 혼합된 여러 텍스트를 자동으로 분리하여 언어를 인식합니다. + +### 20240202 업데이트 + +1. asr 경로의 끝에 `/`가 포함되어 있는 경우 오류가 발생하는 문제를 수정하였습니다. +2. paddlespeech의 Normalizer를 도입하여 [문제를 해결](https://github.com/RVC-Boss/GPT-SoVITS/pull/377)하여, 예를 들어 xx.xx%(백분율), 元/吨이 元吨으로 읽히는 문제를 해결하였습니다. 또한, 밑줄이 더 이상 오류를 발생시키지 않습니다. + +### 20240207 업데이트 + +1. 언어 전달 매개변수가 혼란스러워져 [중국어 추론 효과가 저하되는 문제](https://github.com/RVC-Boss/GPT-SoVITS/issues/391)를 수정하였습니다. +2. uvr5가 `inf everywhere` [오류를 반환하는 문제](https://github.com/RVC-Boss/GPT-SoVITS/pull/403)를 수정하였습니다. +3. uvr5의 `is_half` 매개변수가 bool로 변환되지 않아 항상 반정밀도 추론으로 설정되어 16 시리즈 그래픽 카드에서 `inf`가 반환되는 [문제](https://github.com/RVC-Boss/GPT-SoVITS/commit/14a285109a521679f8846589c22da8f656a46ad8)를 수정하였습니다. +4. 영어 텍스트 입력을 최적화하였습니다. +5. gradio 종속성을 지원합니다. +6. 루트 디렉토리가 비어 있으면 `.list` 전체 경로를 자동으로 읽습니다. +7. faster whisper ASR 일본어 및 영어를 지원합니다. + +### 20240208 업데이트 + +1. GPT 학습이 카드에 따라 멈추는 문제와 [GPT 학습 중 ZeroDivisionError](https://github.com/RVC-Boss/GPT-SoVITS/commit/59f35adad85815df27e9c6b33d420f5ebfd8376b) 문제를 수정하였습니다. + +### 20240212 업데이트 + +1. faster whisper 및 funasr 로직을 최적화하였습니다. faster whisper는 이미지 스토어에서 다운로드하여 huggingface에 연결하지 못하는 문제를 회피합니다. +2. DPO Loss 실험적 학습 옵션을 활성화하여 부정적 샘플을 생성하여 [GPT 반복 및 누락 문자 문제](https://github.com/RVC-Boss/GPT-SoVITS/pull/457)를 완화합니다. 추론 인터페이스에 몇 가지 추론 매개변수를 공개합니다. + +### 20240214 업데이트 + +1. 학습에서 중국어 실험 이름을 지원합니다. (이전에 오류가 발생했습니다) +2. DPO 학습을 선택적으로 설정할 수 있도록 변경하였습니다. 배치 크기를 선택하면 자동으로 절반으로 줄어듭니다. 추론 인터페이스에서 새로운 매개변수를 전달하지 않는 문제를 수정하였습니다. + +### 20240216 업데이트 + +1. 참조 텍스트 입력을 지원합니다. +2. 프론트엔드에 있던 중국어 텍스트 입력 버그를 수정하였습니다. + +todolist : + +1. 중국어 다음음자 추론 최적화 From 92b229132fc23cafbde5b63385a844ed6af1ed0b Mon Sep 17 00:00:00 2001 From: pengoosedev <73521518+pengoosedev@users.noreply.github.com> Date: Sun, 18 Feb 2024 23:36:45 +0900 Subject: [PATCH 11/63] chore: tiny change i18n(ko_KR.json) --- i18n/locale/ko_KR.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/i18n/locale/ko_KR.json b/i18n/locale/ko_KR.json index 9061ef97..1898c9b9 100644 --- a/i18n/locale/ko_KR.json +++ b/i18n/locale/ko_KR.json @@ -5,7 +5,7 @@ "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.": "본 소프트웨어는 MIT 라이선스로 오픈 소스로 제공되며, 제작자는 소프트웨어에 대해 어떠한 제어력도 가지지 않습니다. 소프트웨어 사용자 및 소프트웨어에서 내보낸 소리를 전파하는 자는 전적으로 책임져야 합니다.
이 조항을 인정하지 않으면 소프트웨어의 코드 및 파일을 사용하거나 인용할 수 없습니다. 루트 디렉터리의 LICENSE를 참조하십시오.", "0-前置数据集获取工具": "0-전방 데이터 세트 수집 도구", "0a-UVR5人声伴奏分离&去混响去延迟工具": "0a-UVR5 보컬 및 반주 분리 및 에코 및 지연 제거 도구", - "是否开启UVR5-WebUI": "UVR5-WebUI 활성화 여부", + "是否开启UVR5-WebUI": "UVR5-WebUI를 여시겠습니까?", "UVR5进程输出信息": "UVR5 프로세스 출력 정보", "0b-语音切分工具": "0b-음성 분리 도구", ".list标注文件的路径": ".list 주석 파일 경로", From 0edf40bc4b490f388f3520e138f19bf4db6c4156 Mon Sep 17 00:00:00 2001 From: pengoosedev <73521518+pengoosedev@users.noreply.github.com> Date: Sun, 18 Feb 2024 23:38:04 +0900 Subject: [PATCH 12/63] chore: sync i18n --- tools/i18n/locale/ko_KR.json | 328 +++++++++++++++++++++++++---------- 1 file changed, 239 insertions(+), 89 deletions(-) diff --git a/tools/i18n/locale/ko_KR.json b/tools/i18n/locale/ko_KR.json index 816ed3f7..1898c9b9 100644 --- a/tools/i18n/locale/ko_KR.json +++ b/tools/i18n/locale/ko_KR.json @@ -1,135 +1,285 @@ { - ">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音": ">=3이면 harvest 음높이 인식 결과에 중간값 필터를 사용합니다. 이 수치는 필터 반경이며, 사용하면 불명확한 음성을 어느정도 배제할 수 있습니다.", + "很遗憾您这没有能用的显卡来支持您训练": "죄송합니다. 훈련을 지원할 수 있는 그래픽 카드가 없습니다.", + "UVR5已开启": "UVR5가 활성화되었습니다", + "UVR5已关闭": "UVR5가 비활성화되었습니다", + "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.": "본 소프트웨어는 MIT 라이선스로 오픈 소스로 제공되며, 제작자는 소프트웨어에 대해 어떠한 제어력도 가지지 않습니다. 소프트웨어 사용자 및 소프트웨어에서 내보낸 소리를 전파하는 자는 전적으로 책임져야 합니다.
이 조항을 인정하지 않으면 소프트웨어의 코드 및 파일을 사용하거나 인용할 수 없습니다. 루트 디렉터리의 LICENSE를 참조하십시오.", + "0-前置数据集获取工具": "0-전방 데이터 세트 수집 도구", + "0a-UVR5人声伴奏分离&去混响去延迟工具": "0a-UVR5 보컬 및 반주 분리 및 에코 및 지연 제거 도구", + "是否开启UVR5-WebUI": "UVR5-WebUI를 여시겠습니까?", + "UVR5进程输出信息": "UVR5 프로세스 출력 정보", + "0b-语音切分工具": "0b-음성 분리 도구", + ".list标注文件的路径": ".list 주석 파일 경로", + "GPT模型路径": "GPT 모델 경로", + "SoVITS模型列表": "SoVITS 모델 목록", + "填切割后音频所在目录!读取的音频文件完整路径=该目录-拼接-list文件里波形对应的文件名(不是全路径)。": "분리된 오디오가 있는 디렉터리를 입력하십시오! 읽은 오디오 파일의 전체 경로 = 해당 디렉터리-연결-목록 파일에 해당하는 원본 이름 (전체 경로가 아님).", + "音频自动切分输入路径,可文件可文件夹": "오디오 자동 분리 입력 경로, 파일 또는 폴더 가능", + "切分后的子音频的输出根目录": "분리된 하위 오디오의 출력 기본 디렉터리", + "怎么切": "자르기 옵션", + "不切": "자르지 않음", + "凑四句一切": "네 문장의 세트를 완성하세요.", + "按英文句号.切": "영어 문장으로 분리하기", + "threshold:音量小于这个值视作静音的备选切割点": "임계 값: 이 값보다 작은 볼륨은 대체 분리 지점으로 간주됩니다.", + "min_length:每段最小多长,如果第一段太短一直和后面段连起来直到超过这个值": "최소 길이: 각 세그먼트의 최소 길이. 첫 번째 세그먼트가 너무 짧으면 계속해서 뒷부분과 연결하여 이 값 이상이 될 때까지", + "min_interval:最短切割间隔": "최소 분리 간격", + "hop_size:怎么算音量曲线,越小精度越大计算量越高(不是精度越大效果越好)": "hop 크기: 볼륨 곡선을 계산하는 방법. 작을수록 정확도가 높아지지만 계산량이 높아집니다 (정확도가 높다고 효과가 좋아지지 않음)", + "max_sil_kept:切完后静音最多留多长": "최대 유지되는 정적 길이 (분리 후)", + "开启语音切割": "음성 분리 활성화", + "终止语音切割": "음성 분리 종료", + "max:归一化后最大值多少": "최대 값 (정규화 후)", + "alpha_mix:混多少比例归一化后音频进来": "알파 믹스: 정규화된 오디오가 들어오는 비율", + "切割使用的进程数": "사용되는 프로세스 수로 자르기", + "语音切割进程输出信息": "음성 분리 프로세스 출력 정보", + "0c-中文批量离线ASR工具": "0c-중국어 대량 오프라인 ASR 도구", + "开启离线批量ASR": "오프라인 대량 ASR 활성화", + "终止ASR进程": "ASR 프로세스 종료", + "批量ASR(中文only)输入文件夹路径": "대량 ASR (중국어 전용) 입력 폴더 경로", + "ASR进程输出信息": "ASR 프로세스 출력 정보", + "0d-语音文本校对标注工具": "0d-음성 텍스트 교정 주석 도구", + "是否开启打标WebUI": "웹 기반 주석 활성화 여부", + "打标数据标注文件路径": "주석 데이터 주석 파일 경로", + "打标工具进程输出信息": "주석 도구 프로세스 출력 정보", + "1-GPT-SoVITS-TTS": "1-GPT-SoVITS-TTS", + "*实验/模型名": "*실험/모델 이름", + "显卡信息": "그래픽 카드 정보", + "预训练的SoVITS-G模型路径": "사전 훈련된 SoVITS-G 모델 경로", + "预训练的SoVITS-D模型路径": "사전 훈련된 SoVITS-D 모델 경로", + "预训练的GPT模型路径": "사전 훈련된 GPT 모델 경로", + "1A-训练集格式化工具": "1A-훈련 세트 형식 지정 도구", + "输出logs/实验名目录下应有23456开头的文件和文件夹": "logs/실험 이름 디렉터리에는 23456으로 시작하는 파일과 폴더가 있어야 함", + "*文本标注文件": "*텍스트 주석 파일", + "*训练集音频文件目录": "*훈련 세트 오디오 파일 디렉터리", + "训练集音频文件目录 拼接 list文件里波形对应的文件名。": "훈련 세트 오디오 파일 디렉터리 - 목록 파일에 해당하는 원형 이름 연결", + "1Aa-文本内容": "1Aa-텍스트 내용", + "GPU卡号以-分割,每个卡号一个进程": "GPU 카드 번호는 -로 구분되며 각 카드 번호에 하나의 프로세스가 있어야 함", + "预训练的中文BERT模型路径": "사전 훈련된 중국어 BERT 모델 경로", + "开启文本获取": "텍스트 추출 활성화", + "终止文本获取进程": "텍스트 추출 프로세스 종료", + "文本进程输出信息": "텍스트 프로세스 출력 정보", + "1Ab-SSL自监督特征提取": "1Ab-SSL 자기 지도 특징 추출", + "预训练的SSL模型路径": "사전 훈련된 SSL 모델 경로", + "开启SSL提取": "SSL 추출 활성화", + "终止SSL提取进程": "SSL 추출 프로세스 종료", + "SSL进程输出信息": "SSL 프로세스 출력 정보", + "1Ac-语义token提取": "1Ac-의미 토큰 추출", + "开启语义token提取": "의미 토큰 추출 활성화", + "终止语义token提取进程": "의미 토큰 추출 프로세스 종료", + "语义token提取进程输出信息": "의미 토큰 추출 프로세스 출력 정보", + "1Aabc-训练集格式化一键三连": "1Aabc-훈련 세트 형식 지정 일괄 처리", + "开启一键三连": "일괄 처리 활성화", + "终止一键三连": "일괄 처리 종료", + "一键三连进程输出信息": "일괄 처리 프로세스 출력 정보", + "1B-微调训练": "1B-미세 조정 훈련", + "1Ba-SoVITS训练。用于分享的模型文件输出在SoVITS_weights下。": "1Ba-SoVITS 훈련. 공유 용 모델 파일은 SoVITS_weights 하위에 출력됩니다.", + "每张显卡的batch_size": "각 그래픽 카드의 배치 크기", + "总训练轮数total_epoch,不建议太高": "총 훈련 라운드 수 (total_epoch), 너무 높지 않게 권장됨", + "文本模块学习率权重": "텍스트 모듈 학습률 가중치", + "保存频率save_every_epoch": "저장 빈도 (각 라운드마다)", + "是否仅保存最新的ckpt文件以节省硬盘空间": "디스크 공간을 절약하기 위해 최신 ckpt 파일만 저장할지 여부", + "是否在每次保存时间点将最终小模型保存至weights文件夹": "각 저장 시간에 최종 작은 모델을 weights 폴더에 저장할지 여부", + "开启SoVITS训练": "SoVITS 훈련 활성화", + "终止SoVITS训练": "SoVITS 훈련 종료", + "SoVITS训练进程输出信息": "SoVITS 훈련 프로세스 출력 정보", + "1Bb-GPT训练。用于分享的模型文件输出在GPT_weights下。": "1Bb-GPT 훈련. 
공유 용 모델 파일은 GPT_weights 하위에 출력됩니다.", + "总训练轮数total_epoch": "총 훈련 라운드 수 (total_epoch)", + "开启GPT训练": "GPT 훈련 활성화", + "终止GPT训练": "GPT 훈련 종료", + "GPT训练进程输出信息": "GPT 훈련 프로세스 출력 정보", + "1C-推理": "1C-추론", + "选择训练完存放在SoVITS_weights和GPT_weights下的模型。默认的一个是底模,体验5秒Zero Shot TTS用。": "SoVITS_weights 및 GPT_weights에 저장된 훈련 완료된 모델 중 선택. 기본적으로 하나는 기본 모델이며 5초 Zero Shot TTS를 체험할 수 있습니다.", + "*GPT模型列表": "*GPT 모델 목록", + "*SoVITS模型列表": "*SoVITS 모델 목록", + "GPU卡号,只能填1个整数": "GPU 카드 번호, 1개의 정수만 입력 가능", + "刷新模型路径": "모델 경로 새로 고침", + "是否开启TTS推理WebUI": "TTS 추론 WebUI 활성화 여부", + "TTS推理WebUI进程输出信息": "TTS 추론 WebUI 프로세스 출력 정보", + "2-GPT-SoVITS-变声": "2-GPT-SoVITS-음성 변환", + "施工中,请静候佳音": "공사 중입니다. 기다려주십시오.", + "参考音频在3~10秒范围外,请更换!": "참고 오디오가 3~10초 범위를 벗어났습니다. 다른 것으로 바꾸십시오!", + "请上传3~10秒内参考音频,超过会报错!": "3~10초 이내의 참고 오디오를 업로드하십시오. 초과하면 오류가 발생합니다!", + "TTS推理进程已开启": "TTS 추론 프로세스가 열렸습니다", + "TTS推理进程已关闭": "TTS 추론 프로세스가 닫혔습니다", + "打标工具WebUI已开启": "주석 도구 WebUI가 열렸습니다", + "打标工具WebUI已关闭": "주석 도구 WebUI가 닫혔습니다", + "*请填写需要合成的目标文本。中英混合选中文,日英混合选日文,中日混合暂不支持,非目标语言文本自动遗弃。": "*합성할 대상 텍스트를 입력하십시오. 중국어와 영어를 혼합하면 중국어를 선택하고 일본어와 영어를 혼합하면 일본어를 선택하십시오. 중국어와 일본어를 혼합하는 것은 아직 지원되지 않으며 대상 언어가 아닌 텍스트는 자동으로 버려집니다.", + "*请填写需要合成的目标文本": "*합성할 대상 텍스트를 입력하십시오", + "ASR任务开启:%s": "ASR 작업 시작: %s", + "GPT训练完成": "GPT 훈련 완료", + "GPT训练开始:%s": "GPT 훈련 시작: %s", + "SSL提取进程执行中": "SSL 추출 프로세스 실행 중", + "SSL提取进程结束": "SSL 추출 프로세스 종료", + "SoVITS训练完成": "SoVITS 훈련 완료", + "SoVITS训练开始:%s": "SoVITS 훈련 시작: %s", + "一键三连中途报错": "일괄 처리 중 오류 발생", + "一键三连进程结束": "일괄 처리 프로세스 종료", + "中文": "중국어", + "凑50字一切": "50자를 채우십시오", + "凑五句一切": "다섯 문장을 채우십시오", + "切分后文本": "분리된 텍스트", + "切割执行中": "분리 진행 중", + "切割结束": "분리 종료", + "参考音频的文本": "참고 오디오의 텍스트", + "参考音频的语种": "참고 오디오의 언어", + "合成语音": "합성 음성", + "后续将支持混合语种编码文本输入。": "향후 혼합 언어 코딩 텍스트 입력을 지원할 예정입니다.", + "已有正在进行的ASR任务,需先终止才能开启下一次任务": "이미 진행 중인 ASR 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的GPT训练任务,需先终止才能开启下一次任务": "이미 진행 중인 GPT 훈련 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的SSL提取任务,需先终止才能开启下一次任务": "이미 진행 중인 SSL 추출 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的SoVITS训练任务,需先终止才能开启下一次任务": "이미 진행 중인 SoVITS 훈련 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的一键三连任务,需先终止才能开启下一次任务": "이미 진행 중인 일괄 처리 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的切割任务,需先终止才能开启下一次任务": "이미 진행 중인 분리 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的文本任务,需先终止才能开启下一次任务": "이미 진행 중인 텍스트 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已有正在进行的语义token提取任务,需先终止才能开启下一次任务": "이미 진행 중인 의미 토큰 추출 작업이 있습니다. 다음 작업을 시작하려면 먼저 종료하십시오.", + "已终止ASR进程": "ASR 프로세스 종료됨", + "已终止GPT训练": "GPT 훈련 종료됨", + "已终止SoVITS训练": "SoVITS 훈련 종료됨", + "已终止所有1a进程": "모든 1a 프로세스 종료됨", + "已终止所有1b进程": "모든 1b 프로세스 종료됨", + "已终止所有一键三连进程": "모든 일괄 처리 프로세스 종료됨", + "已终止所有切割进程": "모든 분리 프로세스 종료됨", + "已终止所有语义token进程": "모든 의미 토큰 프로세스 종료됨", + "按中文句号。切": "중국어 문장으로 분리하십시오.", + "文本切分工具。太长的文本合成出来效果不一定好,所以太长建议先切。合成会根据文本的换行分开合成再拼起来。": "텍스트 분리 도구. 너무 긴 텍스트는 합성 결과가 항상 좋지 않을 수 있으므로 너무 길면 먼저 분리하는 것이 좋습니다. 
합성은 텍스트 줄 바꿈을 기준으로 분리되어 다시 조합됩니다.", + "文本进程执行中": "텍스트 프로세스 실행 중", + "文本进程结束": "텍스트 프로세스 종료", + "日文": "일본어", + "英文": "영어", + "语义token提取进程执行中": "의미 토큰 추출 프로세스 실행 중", + "语义token提取进程结束": "의미 토큰 추출 프로세스 종료", + "请上传参考音频": "참고 오디오를 업로드하십시오", + "输入路径不存在": "입력 경로가 존재하지 않습니다", + "输入路径存在但既不是文件也不是文件夹": "입력 경로가 파일이나 폴더가 아닙니다", + "输出的语音": "출력 음성", + "进度:1a-done": "진행: 1a-done", + "进度:1a-done, 1b-ing": "진행: 1a-done, 1b-ing", + "进度:1a-ing": "진행: 1a-ing", + "进度:1a1b-done": "진행: 1a1b-done", + "进度:1a1b-done, 1cing": "진행: 1a1b-done, 1cing", + "进度:all-done": "진행: all-done", + "需要合成的切分前文本": "합성해야 할 분할 전 텍스트", + "需要合成的文本": "합성해야 할 텍스트", + "需要合成的语种": "합성해야 할 언어", + ">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音": ">=3이면 harvest 음고 인식 결과에 중앙값 필터를 사용하며, 값은 필터 반경이며 사용하면 소리를 약하게 할 수 있습니다", "A模型权重": "A 모델 가중치", "A模型路径": "A 모델 경로", "B模型路径": "B 모델 경로", - "E:\\语音音频+标注\\米津玄师\\src": "E:\\음성 오디오+주석\\요네즈 켄시\\src", - "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 곡선 파일, 선택 사항, 한 줄에 하나의 음높이, 기본 F0 및 음높이 변화를 대체함", + "E:\\语音音频+标注\\米津玄师\\src": "E:\\음성 오디오 + 주석\\Miyuki Kenshi\\src", + "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 곡선 파일, 선택 사항, 한 줄에 하나의 음고, 기본 F0 및 음조 대신 사용", "Index Rate": "인덱스 비율", "Onnx导出": "Onnx 내보내기", "Onnx输出路径": "Onnx 출력 경로", "RVC模型路径": "RVC 모델 경로", "ckpt处理": "ckpt 처리", "harvest进程数": "harvest 프로세스 수", - "index文件路径不可包含中文": "인덱스 파일 경로에는 중국어를 포함할 수 없습니다.", - "pth文件路径不可包含中文": "pth 파일 경로에는 중국어를 포함할 수 없습니다.", - "rmvpe卡号配置:以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe 카드 번호 구성: '-'로 구분하여 입력된 다른 프로세스 카드 번호, 예를 들어 0-0-1은 카드 0에서 2개의 프로세스를 실행하고 카드 1에서 1개의 프로세스를 실행", - "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: 실험 설정을 작성합니다. 실험 데이터는 logs 아래에 있으며, 각 실험마다 하나의 폴더가 있습니다. 실험 이름 경로를 수동으로 입력해야 하며, 이 안에는 실험 설정, 로그, 훈련으로 얻은 모델 파일이 포함되어 있습니다.", + "index文件路径不可包含中文": "인덱스 파일 경로에는 중국어를 포함할 수 없습니다", + "pth文件路径不可包含中文": "pth 파일 경로에는 중국어를 포함할 수 없습니다", + "rmvpe卡号配置:以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe 카드 번호 구성: 각 입력에 사용되는 다른 프로세스 카드를 -로 구분하여 입력하십시오. 예: 0-0-1은 카드 0에서 2개의 프로세스를 실행하고 카드 1에서 1개의 프로세스를 실행합니다", + "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: 실험 구성 입력. 실험 데이터는 logs 하위에 있으며 각 실험에 대한 폴더가 있어야합니다. 실험 이름 경로를 수동으로 입력해야하며 실험 구성, 로그, 훈련된 모델 파일이 포함되어 있습니다.", "step1:正在处理数据": "step1: 데이터 처리 중", - "step2:正在提取音高&正在提取特征": "step2: 음높이 추출 및 특성 추출 중", - "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: 훈련 폴더 아래 모든 오디오로 디코딩 가능한 파일을 자동으로 순회하고 슬라이스 정규화를 진행하여, 실험 디렉토리 아래에 2개의 wav 폴더를 생성합니다; 현재는 단일 사용자 훈련만 지원합니다.", - "step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: CPU를 사용해 음높이를 추출합니다(모델이 음높이를 포함하는 경우), GPU를 사용해 특성을 추출합니다(카드 번호 선택)", - "step3: 填写训练设置, 开始训练模型和索引": "step3: 훈련 설정을 작성하고, 모델 및 인덱스 훈련을 시작합니다", + "step2:正在提取音高&正在提取特征": "step2: 음고 추출 및 특징 추출 중", + "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: 자동으로 훈련 폴더에서 오디오로 디코딩할 수 있는 모든 파일을 반복하고 슬라이스 정규화를 수행하여 실험 디렉토리에 2 개의 wav 폴더를 생성합니다. 현재 단일 훈련만 지원됩니다.", + "step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: CPU로 음고 추출(모델이 음고를 지원하는 경우), GPU로 특징 추출(카드 번호 선택)", + "step3: 填写训练设置, 开始训练模型和索引": "step3: 훈련 설정 입력, 모델 및 인덱스 훈련 시작", "step3a:正在训练模型": "step3a: 모델 훈련 중", - "一键训练": "원키 트레이닝", - "也可批量输入音频文件, 二选一, 优先读文件夹": "대량으로 오디오 파일 입력도 가능, 둘 중 하나 선택, 폴더 우선 읽기", - "人声伴奏分离批量处理, 使用UVR5模型。
合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。
模型分为三类:
1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点;
2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型;
3、去混响、去延迟模型(by FoxJoy):
  (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;
 (234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。
去混响/去延迟,附:
1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍;
2、MDX-Net-Dereverb模型挺慢的;
3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "인간 목소리와 반주 분리 대량 처리, UVR5 모델 사용.
올바른 폴더 경로 예: E:\\codes\\py39\\vits_vc_gpu\\백로서화 테스트 케이스(파일 탐색기 주소창에서 복사하면 됨).
모델은 세 가지 유형으로 나뉩니다:
1. 인간 목소리 보존: 하모니가 없는 오디오를 선택, 주요 인간 목소리를 HP5보다 더 잘 보존. 내장된 HP2와 HP3 모델, HP3는 약간의 반주를 놓칠 수 있지만 HP2보다는 인간 목소리를 조금 더 잘 보존합니다.
2. 오직 주요 인간 목소리 보존: 하모니가 있는 오디오를 선택, 주요 인간 목소리가 약간 약해질 수 있음. 내장된 HP5 모델 하나;
3. 울림 제거, 지연 제거 모델(by FoxJoy):
  (1)MDX-Net(onnx_dereverb): 양채널 울림에 대해서는 최선의 선택, 단채널 울림 제거 불가능;
 (234)DeEcho: 지연 효과 제거. Aggressive가 Normal보다 더 철저하게 제거하며, DeReverb는 추가로 울림 제거, 단일 채널 울림 제거 가능하지만 고주파 중심의 판형 울림은 완전히 제거하지 못함.
울림/지연 제거 시 참고:
1. DeEcho-DeReverb 모델의 처리 시간은 다른 두 DeEcho 모델의 거의 2배임;
2. MDX-Net-Dereverb 모델은 상당히 느림;
3. 개인적으로 추천하는 가장 깨끗한 구성은 MDX-Net 다음에 DeEcho-Aggressive 사용.", - "以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "-로 구분하여 입력한 카드 번호, 예를 들어 0-1-2는 카드0, 카드1, 카드2 사용", - "伴奏人声分离&去混响&去回声": "반주 및 인간 목소리 분리 & 울림 제거 & 에코 제거", - "使用模型采样率": "모델 샘플링 레이트 사용", - "使用设备采样率": "장치 샘플링 레이트 사용", + "一键训练": "일괄 훈련", + "也可批量输入音频文件, 二选一, 优先读文件夹": "오디오 파일을 일괄로 입력할 수도 있습니다. 둘 중 하나를 선택하고 폴더를 읽기를 우선합니다.", + "以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "-로 구분하여 입력에 사용되는 카드 번호를 지정하십시오. 예 : 0-1-2는 카드 0, 1 및 2를 사용합니다", + "伴奏人声分离&去混响&去回声": "반주 및 보컬 분리 & 리버브 제거 & 에코 제거", + "使用模型采样率": "모델 샘플링 속도 사용", + "使用设备采样率": "기기 샘플링 속도 사용", "保存名": "저장 이름", - "保存的文件名, 默认空为和源文件同名": "저장된 파일 이름, 기본값은 원본 파일과 동일", - "保存的模型名不带后缀": "저장된 모델 이름은 접미사 없음", - "保存频率save_every_epoch": "저장 빈도 save_every_epoch", - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果": "청결한 자음과 숨소리를 보호하고, 전자음의 찢어짐과 같은 아티팩트를 방지하며, 0.5까지 끌어올리면 보호가 활성화되지 않으며, 낮추면 보호 강도는 증가하지만 인덱싱 효과는 감소할 수 있음", + "保存的文件名, 默认空为和源文件同名": "저장할 파일 이름, 기본적으로 공백은 원본 파일과 동일한 이름입니다", + "保存的模型名不带后缀": "저장할 모델 이름에는 확장자가 없습니다", + "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果": "클리어 자음 및 숨소를 보호하여 전자 음향 찢김과 같은 아티팩트를 방지하려면 0.5로 설정하되, 보호 강도를 높이려면 0.5로 당기지 않고 낮추면 인덱스 효과가 감소할 수 있습니다", "修改": "수정", - "修改模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보 수정(오직 weights 폴더에서 추출된 소형 모델 파일만 지원)", + "修改模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보 수정 (weights 폴더에서 추출된 작은 모델 파일만 지원됨)", "停止音频转换": "오디오 변환 중지", - "全流程结束!": "전체 과정 완료!", - "刷新音色列表和索引路径": "음색 목록 및 인덱스 경로 새로고침", + "全流程结束!": "전체 프로세스 완료!", + "刷新音色列表和索引路径": "음색 목록 및 인덱스 경로 새로 고침", "加载模型": "모델 로드", - "加载预训练底模D路径": "사전 훈련된 베이스 모델 D 경로 로드", - "加载预训练底模G路径": "사전 훈련된 베이스 모델 G 경로 로드", + "加载预训练底模D路径": "사전 훈련된 기본 모델 D 경로 로드", + "加载预训练底模G路径": "사전 훈련된 기본 모델 G 경로 로드", "单次推理": "단일 추론", - "卸载音色省显存": "음색 언로드로 메모리 절약", - "变调(整数, 半音数量, 升八度12降八度-12)": "변조(정수, 반음 수, 옥타브 상승 12, 옥타브 하강 -12)", - "后处理重采样至最终采样率,0为不进行重采样": "후처리로 최종 샘플링 레이트까지 리샘플링, 0은 리샘플링하지 않음", + "卸载音色省显存": "음색 언로드 및 GPU 메모리 절약", + "变调(整数, 半音数量, 升八度12降八度-12)": "음높이 변경(정수, 반음 수, 올림 높이 12 내림 높이 -12)", + "后处理重采样至最终采样率,0为不进行重采样": "후 처리를 통한 최종 샘플링률 재샘플링, 0은 재샘플링 미실행", "否": "아니오", - "启用相位声码器": "위상 보코더 활성화", + "启用相位声码器": "페이즈 보코더 사용", "响应阈值": "응답 임계값", - "响度因子": "소리 크기 인자", + "响度因子": "음량 요소", "处理数据": "데이터 처리", "导出Onnx模型": "Onnx 모델 내보내기", - "导出文件格式": "파일 형식 내보내기", - "常见问题解答": "자주 묻는 질문 답변", + "导出文件格式": "내보내기 파일 형식", + "常见问题解答": "자주 묻는 질문 해결", "常规设置": "일반 설정", "开始音频转换": "오디오 변환 시작", - "很遗憾您这没有能用的显卡来支持您训练": "유감스럽게도 훈련을 지원할 수 있는 그래픽 카드가 없습니다", "性能设置": "성능 설정", - "总训练轮数total_epoch": "총 훈련 회차 total_epoch", - "批量推理": "대량 추론", - "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "대량 변환, 변환할 오디오 폴더 입력, 또는 여러 오디오 파일 업로드, 지정된 폴더(기본값 opt)에 변환된 오디오 출력.", - "指定输出主人声文件夹": "주인공 목소리 출력 폴더 지정", - "指定输出文件夹": "출력 파일 폴더 지정", - "指定输出非主人声文件夹": "비주인공 목소리 출력 폴더 지정", + "批量推理": "일괄 추론", + "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "일괄 변환, 변환 대기 중인 오디오 폴더를 입력하거나 여러 오디오 파일을 업로드하고 지정된 폴더(opt 기본값)에 변환된 오디오를 출력합니다.", + "指定输出主人声文件夹": "지정된 주인 목소리 출력 폴더", + "指定输出文件夹": "지정된 출력 폴더", + "指定输出非主人声文件夹": "지정된 비주인 목소리 출력 폴더", "推理时间(ms):": "추론 시간(ms):", "推理音色": "추론 음색", "提取": "추출", - "提取音高和处理数据使用的CPU进程数": "음높이 추출 및 데이터 처리에 사용되는 CPU 프로세스 수", + "提取音高和处理数据使用的CPU进程数": "음높이 추출 및 데이터 처리에 사용되는 CPU 프로세스 수 추출", "是": "예", - "是否仅保存最新的ckpt文件以节省硬盘空间": "디스크 공간을 절약하기 위해 가장 최신의 ckpt 파일만 저장할지 여부", - "是否在每次保存时间点将最终小模型保存至weights文件夹": "매 저장 시점마다 최종 작은 모델을 weights 폴더에 저장할지 여부", - "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "모든 훈련 세트를 VRAM에 캐시할지 여부. 
10분 미만의 작은 데이터는 훈련 속도를 높이기 위해 캐시할 수 있으나, 큰 데이터는 VRAM을 초과하여 큰 속도 향상을 기대할 수 없음.", - "显卡信息": "그래픽 카드 정보", - "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.": "이 소프트웨어는 MIT 라이선스로 오픈 소스이며, 작성자는 소프트웨어에 대한 어떠한 제어도 가지지 않으며, 소프트웨어 사용자 및 소프트웨어에서 내보낸 소리를 전파하는 사용자는 모든 책임을 져야 함.
이 조항을 인정하지 않는 경우, 소프트웨어 패키지 내의 어떠한 코드나 파일도 사용하거나 인용할 수 없음. 자세한 내용은 루트 디렉토리의 LICENSE를 참조.", + "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "모든 훈련 세트를 GPU 메모리에 캐시할지 여부. 10분 미만의 소량 데이터는 훈련 속도를 높이기 위해 캐시할 수 있지만, 대량 데이터를 캐시하면 메모리가 터지고 속도가 크게 향상되지 않을 수 있습니다.", "查看": "보기", - "查看模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보 보기(오직 weights 폴더에서 추출된 작은 모델 파일만 지원)", - "检索特征占比": "특징 검색 비율", + "查看模型信息(仅支持weights文件夹下提取的小模型文件)": "모델 정보보기(작은 모델 파일로 추출된 weights 폴더에서만 지원)", + "检索特征占比": "특징 비율 검색", "模型": "모델", "模型推理": "모델 추론", - "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "모델 추출(로그 폴더 아래 대용량 모델 경로 입력), 중간에 훈련을 중단하고 싶은 경우나 작은 파일 모델을 자동으로 저장하지 않은 경우, 또는 중간 모델을 테스트하고 싶은 경우에 적합", - "模型是否带音高指导": "모델이 음높이 지도를 포함하는지 여부", - "模型是否带音高指导(唱歌一定要, 语音可以不要)": "모델이 음높이 지도를 포함하는지 여부(노래에는 필수, 말하기에는 선택적)", - "模型是否带音高指导,1是0否": "모델이 음높이 지도를 포함하는지 여부, 1은 '예', 0은 '아니오'", - "模型版本型号": "모델 버전 및 모델", + "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "모델 추출(로그 폴더에 대형 파일 모델 경로 입력), 반 훈련하고 싶지 않거나 모델이 자동으로 작은 파일 모델로 추출되지 않았거나 중간 모델을 테스트하려는 경우에 사용", + "模型是否带音高指导": "모델에 음높이 안내가 있는지 여부", + "模型是否带音高指导(唱歌一定要, 语音可以不要)": "모델에 음높이 안내가 있는지 여부(노래에는 필수, 음성은 선택 사항)", + "模型是否带音高指导,1是0否": "모델에 음높이 안내가 있는지 여부, 1이면 있음 0이면 없음", + "模型版本型号": "모델 버전 및 모델 번호", "模型融合, 可用于测试音色融合": "모델 통합, 음색 통합 테스트에 사용 가능", "模型路径": "모델 경로", - "每张显卡的batch_size": "각 GPU의 batch_size", "淡入淡出长度": "페이드 인/아웃 길이", "版本": "버전", - "特征提取": "특징 추출", - "特征检索库文件路径,为空则使用下拉的选择结果": "특징 검색 라이브러리 파일 경로, 비어 있으면 드롭다운 선택 결과 사용", - "男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "남성에서 여성으로 전환 시 +12키 추천, 여성에서 남성으로 전환 시 -12키 추천, 음역대 폭발로 음색 왜곡이 발생할 경우 적절한 음역대로 조정 가능.", - "目标采样率": "목표 샘플링 비율", - "算法延迟(ms):": "알고리즘 지연(ms):", - "自动检测index路径,下拉式选择(dropdown)": "index 경로 자동 감지, 드롭다운 선택", - "融合": "통합", + "特征提取": "특성 추출", + "特征检索库文件路径,为空则使用下拉的选择结果": "특성 검색 라이브러리 파일 경로, 비어 있으면 드롭다운 선택 결과 사용", + "男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "남성을 여성으로 추천 +12키, 여성을 남성으로 추천 -12키, 음역 폭발로 음색이 왜곡되면 적절한 음역으로 직접 조절 가능", + "目标采样率": "목표 샘플링률", + "算法延迟(ms):": "알고리즘 지연 시간(ms):", + "自动检测index路径,下拉式选择(dropdown)": "자동으로 index 경로 감지, 드롭다운 선택", + "融合": "융합", "要改的模型信息": "수정할 모델 정보", "要置入的模型信息": "삽입할 모델 정보", "训练": "훈련", "训练模型": "모델 훈련", - "训练特征索引": "특징 인덱스 훈련", - "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "훈련이 완료되었습니다. 콘솔 훈련 로그나 실험 폴더 내의 train.log를 확인하세요.", - "请指定说话人id": "화자 id를 지정해주세요.", - "请选择index文件": "index 파일을 선택해주세요.", - "请选择pth文件": "pth 파일을 선택해주세요.", - "请选择说话人id": "화자 id를 선택해주세요.", + "训练特征索引": "특성 인덱스 훈련", + "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "훈련 종료, 콘솔 훈련 로그 또는 실험 폴더의 train.log를 확인할 수 있습니다", + "请指定说话人id": "화자 ID 지정", + "请选择index文件": "index 파일 선택", + "请选择pth文件": "pth 파일 선택", + "请选择说话人id": "화자 ID 선택", "转换": "변환", - "输入实验名": "실험명을 입력하세요.", - "输入待处理音频文件夹路径": "처리할 오디오 파일 폴더 경로를 입력하세요.", - "输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "처리할 오디오 파일 폴더 경로를 입력하세요(파일 관리자의 주소 표시줄에서 복사하세요).", - "输入待处理音频文件路径(默认是正确格式示例)": "처리할 오디오 파일 경로를 입력하세요(기본값은 올바른 형식의 예시입니다).", - "输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络": "원본 볼륨 엔벨로프와 출력 볼륨 엔벨로프의 혼합 비율을 입력하세요. 
1에 가까울수록 출력 엔벨로프를 더 많이 사용합니다.", - "输入监听": "모니터링 입력", - "输入训练文件夹路径": "학습시킬 파일 폴더의 경로를 입력하세요.", + "输入实验名": "실험명 입력", + "输入待处理音频文件夹路径": "처리 대기 중인 오디오 폴더 경로 입력", + "输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "처리 대기 중인 오디오 폴더 경로 입력(파일 관리자 주소 표시 줄에서 복사하면 됨)", + "输入待处理音频文件路径(默认是正确格式示例)": "처리 대기 중인 오디오 파일 경로 입력(기본적으로 올바른 형식의 예제)", + "输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络": "소스 음량 에너벌롭을 입력하여 출력 음량 에너벌롭 합성 비율을 대체하면 1에 가까울수록 출력 에너벌롭 사용", + "输入监听": "입력 모니터링", + "输入训练文件夹路径": "훈련 폴더 경로 입력", "输入设备": "입력 장치", - "输入降噪": "입력 노이즈 감소", + "输入降噪": "노이즈 감소 입력", "输出信息": "출력 정보", - "输出变声": "음성 변환 출력", + "输出变声": "음성 출력", "输出设备": "출력 장치", - "输出降噪": "출력 노이즈 감소", - "输出音频(右下角三个点,点了可以下载)": "오디오 출력(오른쪽 하단 세 개의 점, 클릭하면 다운로드 가능)", - "选择.index文件": ".index 파일 선택", - "选择.pth文件": ".pth 파일 선택", - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU": "음고 추출 알고리즘을 선택하세요. 노래 입력 시 pm으로 속도를 높일 수 있으며, harvest는 저음이 좋지만 매우 느리고, crepe는 효과가 좋지만 GPU를 많이 사용합니다.", - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "음고 추출 알고리즘을 선택하세요. 노래 입력 시 pm으로 속도를 높일 수 있고, harvest는 저음이 좋지만 매우 느리며, crepe는 효과가 좋지만 GPU를 많이 사용하고, rmvpe는 가장 좋은 효과를 내면서 GPU를 적게 사용합니다.", - "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢,rmvpe效果最好且微吃CPU/GPU": "음고 추출 알고리즘 선택: 노래 입력 시 pm으로 속도를 높일 수 있으며, 고품질 음성이지만 CPU가 낮을 때는 dio로 속도를 높일 수 있고, harvest는 품질이 더 좋지만 느리며, rmvpe는 최고의 효과를 내면서 CPU/GPU를 적게 사용합니다.", - "采样率:": "샘플링 레이트:", + "输出降噪": "노이즈 감소 출력", + "输出音频(右下角三个点,点了可以下载)": "출력 오디오(우하단 세 점, 클릭하면 다운로드 가능)", + "选择.index文件": "index 파일 선택", + "选择.pth文件": "pth 파일 선택", + "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU": "음높이 추출 알고리즘 선택, 노래 입력에 pm 사용 가능, harvest는 저음이 좋지만 매우 느림, crepe 효과는 좋지만 GPU 사용", + "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "음높이 추출 알고리즘 선택, 노래 입력에 pm 사용 가능, harvest는 저음이 좋지만 매우 느림, crepe 효과는 좋지만 GPU 사용, rmvpe 효과가 가장 좋으며 약간의 GPU 사용", + "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢,rmvpe效果最好且微吃CPU/GPU": "음높이 추출 알고리즘 선택: 노래 입력에 pm 사용 가능, 고품질 음성이지만 CPU가 낮음, dio 사용 가능, harvest 품질이 더 좋지만 느림, rmvpe 효과가 최고이며 CPU/GPU 약간 사용", + "采样率:": "샘플링률:", "采样长度": "샘플링 길이", - "重载设备列表": "장치 목록 리로드", + "重载设备列表": "장치 목록 다시로드", "音调设置": "음조 설정", - "音频设备(请使用同种类驱动)": "오디오 장치(동일한 유형의 드라이버를 사용해주세요)", - "音高算法": "음고 알고리즘", - "额外推理时长": "추가적인 추론 시간" + "音频设备(请使用同种类驱动)": "오디오 장치(동일한 유형의 드라이버 사용 권장)", + "音高算法": "음높이 알고리즘", + "额外推理时长": "추가 추론 시간" } From 26e6fe6b15e9259c0f4b77b5435c5fe5efbbff43 Mon Sep 17 00:00:00 2001 From: yukannoshonen <151692166+idkdik2@users.noreply.github.com> Date: Sun, 18 Feb 2024 14:50:20 -0300 Subject: [PATCH 13/63] Add inference-only --- GPT_SoVITS_Inference.ipynb | 152 +++++++++++++++++++++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 GPT_SoVITS_Inference.ipynb diff --git a/GPT_SoVITS_Inference.ipynb b/GPT_SoVITS_Inference.ipynb new file mode 100644 index 00000000..a5b55325 --- /dev/null +++ b/GPT_SoVITS_Inference.ipynb @@ -0,0 +1,152 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + }, + "accelerator": "GPU" + }, + "cells": [ + { + "cell_type": "markdown", + "source": [ + "# Credits for bubarino giving me the huggingface import code (感谢 bubarino 给了我 huggingface 导入代码)" + ], + "metadata": { + "id": "himHYZmra7ix" + } + }, + { + "cell_type": "code", + "metadata": { + "id": "e9b7iFV3dm1f" + }, + "source": [ + "!git clone https://github.com/RVC-Boss/GPT-SoVITS.git\n", + "%cd GPT-SoVITS\n", + "!apt-get update && apt-get install -y --no-install-recommends tzdata ffmpeg libsox-dev parallel aria2 git git-lfs 
&& git lfs install\n", + "!pip install -r requirements.txt" + ], + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "# @title Download pretrained models 下载预训练模型\n", + "!mkdir -p /content/GPT-SoVITS/GPT_SoVITS/pretrained_models\n", + "!mkdir -p /content/GPT-SoVITS/tools/damo_asr/models\n", + "!mkdir -p /content/GPT-SoVITS/tools/uvr5\n", + "%cd /content/GPT-SoVITS/GPT_SoVITS/pretrained_models\n", + "!git clone https://huggingface.co/lj1995/GPT-SoVITS\n", + "%cd /content/GPT-SoVITS/tools/damo_asr/models\n", + "!git clone https://www.modelscope.cn/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.git\n", + "!git clone https://www.modelscope.cn/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch.git\n", + "!git clone https://www.modelscope.cn/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch.git\n", + "# @title UVR5 pretrains 安装uvr5模型\n", + "%cd /content/GPT-SoVITS/tools/uvr5\n", + "!git clone https://huggingface.co/Delik/uvr5_weights\n", + "!git config core.sparseCheckout true\n", + "!mv /content/GPT-SoVITS/GPT_SoVITS/pretrained_models/GPT-SoVITS/* /content/GPT-SoVITS/GPT_SoVITS/pretrained_models/" + ], + "metadata": { + "id": "0NgxXg5sjv7z", + "cellView": "form" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "#@title Create folder models 创建文件夹模型\n", + "import os\n", + "base_directory = \"/content/GPT-SoVITS\"\n", + "folder_names = [\"SoVITS_weights\", \"GPT_weights\"]\n", + "\n", + "for folder_name in folder_names:\n", + " if os.path.exists(os.path.join(base_directory, folder_name)):\n", + " print(f\"The folder '{folder_name}' already exists. (文件夹'{folder_name}'已经存在。)\")\n", + " else:\n", + " os.makedirs(os.path.join(base_directory, folder_name))\n", + " print(f\"The folder '{folder_name}' was created successfully! (文件夹'{folder_name}'已成功创建!)\")\n", + "\n", + "print(\"All folders have been created. 
(所有文件夹均已创建。)\")" + ], + "metadata": { + "cellView": "form", + "id": "cPDEH-9czOJF" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "import requests\n", + "import zipfile\n", + "import shutil\n", + "import os\n", + "\n", + "#@title Import model 导入模型 (HuggingFace)\n", + "hf_link = 'https://huggingface.co/modelloosrvcc/Nagisa_Shingetsu_GPT-SoVITS/resolve/main/Nagisa.zip' #@param {type: \"string\"}\n", + "\n", + "output_path = '/content/'\n", + "\n", + "response = requests.get(hf_link)\n", + "with open(output_path + 'file.zip', 'wb') as file:\n", + " file.write(response.content)\n", + "\n", + "with zipfile.ZipFile(output_path + 'file.zip', 'r') as zip_ref:\n", + " zip_ref.extractall(output_path)\n", + "\n", + "os.remove(output_path + \"file.zip\")\n", + "\n", + "source_directory = output_path\n", + "SoVITS_destination_directory = '/content/GPT-SoVITS/SoVITS_weights'\n", + "GPT_destination_directory = '/content/GPT-SoVITS/GPT_weights'\n", + "\n", + "for filename in os.listdir(source_directory):\n", + " if filename.endswith(\".pth\"):\n", + " source_path = os.path.join(source_directory, filename)\n", + " destination_path = os.path.join(SoVITS_destination_directory, filename)\n", + " shutil.move(source_path, destination_path)\n", + "\n", + "for filename in os.listdir(source_directory):\n", + " if filename.endswith(\".ckpt\"):\n", + " source_path = os.path.join(source_directory, filename)\n", + " destination_path = os.path.join(GPT_destination_directory, filename)\n", + " shutil.move(source_path, destination_path)\n", + "\n", + "print(f'Model downloaded. (模型已下载。)')" + ], + "metadata": { + "cellView": "form", + "id": "vbZY-LnM0tzq" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "# @title launch WebUI 启动WebUI\n", + "!/usr/local/bin/pip install ipykernel\n", + "!sed -i '10s/False/True/' /content/GPT-SoVITS/config.py\n", + "%cd /content/GPT-SoVITS/\n", + "!/usr/local/bin/python webui.py" + ], + "metadata": { + "id": "4oRGUzkrk8C7", + "cellView": "form" + }, + "execution_count": null, + "outputs": [] + } + ] +} \ No newline at end of file From 55f82e9ad17f98f3b25803f426e5d423e0245d46 Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Mon, 19 Feb 2024 02:11:02 +0800 Subject: [PATCH 14/63] Fix text formatting --- GPT_SoVITS/inference_webui.py | 9 ++++++--- GPT_SoVITS/text/chinese.py | 2 +- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index 39ae7e43..407437f4 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -562,10 +562,13 @@ def cut5(inp): # if not re.search(r'[^\w\s]', inp[-1]): # inp += '。' inp = inp.strip("\n") - punds = r'[,.;?!、,。?!;:]' + punds = r'[,.;?!、,。?!;:…]' items = re.split(f'({punds})', inp) - items = ["".join(group) for group in zip(items[::2], items[1::2])] - opt = "\n".join(items) + mergeitems = ["".join(group) for group in zip(items[::2], items[1::2])] + # 在句子不存在符号或句尾无符号的时候保证文本完整 + if len(items)%2 == 1: + mergeitems.append(items[-1]) + opt = "\n".join(mergeitems) return opt diff --git a/GPT_SoVITS/text/chinese.py b/GPT_SoVITS/text/chinese.py index ea41db1f..5334326e 100644 --- a/GPT_SoVITS/text/chinese.py +++ b/GPT_SoVITS/text/chinese.py @@ -30,7 +30,7 @@ rep_map = { "\n": ".", "·": ",", "、": ",", - "...": "…", + # "...": "…", "$": ".", "/": ",", "—": "-", From 41ffbe5c3ec2bd0ef242d81e2018f279d812a63a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=88=98=E6=82=A6?= Date: Mon, 19 Feb 2024 
09:49:50 +0800 Subject: [PATCH 15/63] Add files via upload MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 添加Kaggle平台运行脚本 --- gpt-sovits_kaggle.ipynb | 218 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 gpt-sovits_kaggle.ipynb diff --git a/gpt-sovits_kaggle.ipynb b/gpt-sovits_kaggle.ipynb new file mode 100644 index 00000000..1980a77a --- /dev/null +++ b/gpt-sovits_kaggle.ipynb @@ -0,0 +1,218 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "45857cb2", + "metadata": { + "_cell_guid": "b1076dfc-b9ad-4769-8c92-a6c4dae69d19", + "_uuid": "8f2839f25d086af736a60e9eeb907d3b93b6e0e5", + "execution": { + "iopub.execute_input": "2024-02-18T14:43:46.735480Z", + "iopub.status.busy": "2024-02-18T14:43:46.735183Z", + "iopub.status.idle": "2024-02-18T14:48:10.724175Z", + "shell.execute_reply": "2024-02-18T14:48:10.723059Z" + }, + "papermill": { + "duration": 263.994935, + "end_time": "2024-02-18T14:48:10.726613", + "exception": false, + "start_time": "2024-02-18T14:43:46.731678", + "status": "completed" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "!git clone https://github.com/RVC-Boss/GPT-SoVITS.git\n", + "%cd GPT-SoVITS\n", + "!apt-get update && apt-get install -y --no-install-recommends tzdata ffmpeg libsox-dev parallel aria2 git git-lfs && git lfs install\n", + "!pip install -r requirements.txt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b9d346b4", + "metadata": { + "execution": { + "iopub.execute_input": "2024-02-18T14:48:10.815802Z", + "iopub.status.busy": "2024-02-18T14:48:10.814899Z", + "iopub.status.idle": "2024-02-18T14:50:31.253276Z", + "shell.execute_reply": "2024-02-18T14:50:31.252024Z" + }, + "papermill": { + "duration": 140.484893, + "end_time": "2024-02-18T14:50:31.255720", + "exception": false, + "start_time": "2024-02-18T14:48:10.770827", + "status": "completed" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# @title Download pretrained models 下载预训练模型\n", + "!mkdir -p /kaggle/working/GPT-SoVITS/GPT_SoVITS/pretrained_models\n", + "!mkdir -p /kaggle/working/GPT-SoVITS/tools/damo_asr/models\n", + "!mkdir -p /kaggle/working/GPT-SoVITS/tools/uvr5\n", + "%cd /kaggle/working/GPT-SoVITS/GPT_SoVITS/pretrained_models\n", + "!git clone https://huggingface.co/lj1995/GPT-SoVITS\n", + "%cd /kaggle/working/GPT-SoVITS/tools/damo_asr/models\n", + "!git clone https://www.modelscope.cn/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.git\n", + "!git clone https://www.modelscope.cn/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch.git\n", + "!git clone https://www.modelscope.cn/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch.git\n", + "# # @title UVR5 pretrains 安装uvr5模型\n", + "%cd /kaggle/working/GPT-SoVITS/tools/uvr5\n", + "!git clone https://huggingface.co/Delik/uvr5_weights\n", + "!git config core.sparseCheckout true\n", + "!mv /kaggle/working/GPT-SoVITS/GPT_SoVITS/pretrained_models/GPT-SoVITS/* /kaggle/working/GPT-SoVITS/GPT_SoVITS/pretrained_models/" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ea94d245", + "metadata": { + "execution": { + "iopub.execute_input": "2024-02-18T14:29:01.071549Z", + "iopub.status.busy": "2024-02-18T14:29:01.070592Z", + "iopub.status.idle": "2024-02-18T14:40:45.318368Z", + "shell.execute_reply": "2024-02-18T14:40:45.317130Z", + "shell.execute_reply.started": "2024-02-18T14:29:01.071512Z" + }, + "papermill": { + "duration": null, + "end_time": null, + 
"exception": false, + "start_time": "2024-02-18T14:50:31.309013", + "status": "running" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# @title launch WebUI 启动WebUI\n", + "%cd /kaggle/working/GPT-SoVITS/\n", + "!npm install -g localtunnel\n", + "import subprocess\n", + "import threading\n", + "import time\n", + "import socket\n", + "import urllib.request\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock= socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + "\n", + " from colorama import Fore, Style\n", + " print (Fore.GREEN + \"\\nIP: \", Fore. RED, urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"), \"\\n\", Style. RESET_ALL)\n", + " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n", + " for line in p.stdout:\n", + " print(line.decode(), end='')\n", + "threading.Thread (target=iframe_thread, daemon=True, args=(9874,)).start()\n", + "\n", + "!python webui.py" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dda88a6d", + "metadata": { + "execution": { + "iopub.execute_input": "2024-02-18T14:40:56.880608Z", + "iopub.status.busy": "2024-02-18T14:40:56.879879Z" + }, + "papermill": { + "duration": null, + "end_time": null, + "exception": null, + "start_time": null, + "status": "pending" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "# 开启推理页面\n", + "%cd /kaggle/working/GPT-SoVITS/\n", + "!npm install -g localtunnel\n", + "import subprocess\n", + "import threading\n", + "import time\n", + "import socket\n", + "import urllib.request\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock= socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + "\n", + " from colorama import Fore, Style\n", + " print (Fore.GREEN + \"\\nIP: \", Fore. RED, urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"), \"\\n\", Style. 
RESET_ALL)\n", + " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n", + " for line in p.stdout:\n", + " print(line.decode(), end='')\n", + "threading.Thread (target=iframe_thread, daemon=True, args=(9872,)).start()\n", + "\n", + "!python ./GPT_SoVITS/inference_webui.py" + ] + } + ], + "metadata": { + "kaggle": { + "accelerator": "nvidiaTeslaT4", + "dataSources": [ + { + "datasetId": 4459328, + "sourceId": 7649639, + "sourceType": "datasetVersion" + } + ], + "dockerImageVersionId": 30646, + "isGpuEnabled": true, + "isInternetEnabled": true, + "language": "python", + "sourceType": "notebook" + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + }, + "papermill": { + "default_parameters": {}, + "duration": null, + "end_time": null, + "environment_variables": {}, + "exception": null, + "input_path": "__notebook__.ipynb", + "output_path": "__notebook__.ipynb", + "parameters": {}, + "start_time": "2024-02-18T14:43:44.011910", + "version": "2.5.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From d36c7fa1dffa609902935f3aba6a12d462c8c95f Mon Sep 17 00:00:00 2001 From: Lion Date: Mon, 19 Feb 2024 19:28:32 +0800 Subject: [PATCH 16/63] Update README --- README.md | 2 +- docs/cn/README.md | 2 +- docs/ja/README.md | 2 +- docs/ko/README.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 72f3694f..4266970f 100644 --- a/README.md +++ b/README.md @@ -107,7 +107,7 @@ For Chinese ASR (additionally), download models from [Damo ASR Model](https://mo If you are a Mac user, make sure you meet the following conditions for training and inferencing with GPU: -- Mac computers with Apple silicon or AMD GPUs +- Mac computers with Apple silicon - macOS 12.3 or later - Xcode command-line tools installed by running `xcode-select --install` diff --git a/docs/cn/README.md b/docs/cn/README.md index 8d3ca49a..4e47e21b 100644 --- a/docs/cn/README.md +++ b/docs/cn/README.md @@ -49,7 +49,7 @@ _注意: numba==0.56.4 需要 python<3.11_ 如果你是 Mac 用户,请先确保满足以下条件以使用 GPU 进行训练和推理: -- 搭载 Apple 芯片或 AMD GPU 的 Mac +- 搭载 Apple 芯片 的 Mac - macOS 12.3 或更高版本 - 已通过运行`xcode-select --install`安装 Xcode command-line tools diff --git a/docs/ja/README.md b/docs/ja/README.md index aa300c86..b27fd652 100644 --- a/docs/ja/README.md +++ b/docs/ja/README.md @@ -47,7 +47,7 @@ _注記: numba==0.56.4 は py<3.11 が必要です_ 如果あなたが Mac ユーザーである場合、GPU を使用してトレーニングおよび推論を行うために以下の条件を満たしていることを確認してください: -- Apple シリコンまたは AMD GPU を搭載した Mac コンピューター +- Apple シリコンを搭載した Mac コンピューター - macOS 12.3 以降 - `xcode-select --install`を実行してインストールされた Xcode コマンドラインツール diff --git a/docs/ko/README.md b/docs/ko/README.md index afcdd667..c57cf5cb 100644 --- a/docs/ko/README.md +++ b/docs/ko/README.md @@ -49,7 +49,7 @@ _참고: numba==0.56.4 는 python<3.11 을 필요로 합니다._ MacOS 사용자는 GPU를 사용하여 훈련 및 추론을 하려면 다음 조건을 충족해야 합니다: -- Apple 칩 또는 AMD GPU가 장착된 Mac +- Apple 칩이 장착된 Mac - macOS 12.3 이상 - `xcode-select --install`을 실행하여 Xcode command-line tools를 설치했습니다. 
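
The macOS guidance touched by the README patch above lists the conditions for GPU training and inference (an Apple-silicon Mac, macOS 12.3 or later, and the Xcode command-line tools). A quick way to confirm those conditions are actually satisfied is to query PyTorch's MPS backend. The sketch below is illustrative only and is not part of any commit in this series; the helper name mps_ready and the CPU fallback are assumptions made for the example, while the torch.backends.mps calls themselves are the standard API shipped with PyTorch 1.12+.

import platform
import torch

def mps_ready() -> bool:
    # Apple-silicon Macs report Darwin/arm64; the GPU is exposed as the "mps" device.
    if platform.system() != "Darwin" or platform.machine() != "arm64":
        return False
    # Needs a PyTorch build with MPS support and macOS 12.3 or later.
    return torch.backends.mps.is_built() and torch.backends.mps.is_available()

device = "mps" if mps_ready() else "cpu"
print(f"Using device: {device}")

If this reports "cpu" on an Apple-silicon machine, common causes are an x86_64 Python running under Rosetta (platform.machine() then reports "x86_64") or a macOS version older than 12.3.
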
From 555c52b0aa93b08ba05621b349e6e61f243f1b3d Mon Sep 17 00:00:00 2001 From: Lion Date: Mon, 19 Feb 2024 19:29:57 +0800 Subject: [PATCH 17/63] Update README --- docs/cn/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/cn/README.md b/docs/cn/README.md index 4e47e21b..a0cfd0a0 100644 --- a/docs/cn/README.md +++ b/docs/cn/README.md @@ -49,7 +49,7 @@ _注意: numba==0.56.4 需要 python<3.11_ 如果你是 Mac 用户,请先确保满足以下条件以使用 GPU 进行训练和推理: -- 搭载 Apple 芯片 的 Mac +- 搭载 Apple 芯片的 Mac - macOS 12.3 或更高版本 - 已通过运行`xcode-select --install`安装 Xcode command-line tools From bbef82fa86cb99ba08c6c71d8144e51689b7bc7e Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Tue, 20 Feb 2024 22:41:39 +0800 Subject: [PATCH 18/63] Refactoring get phones and bert --- GPT_SoVITS/inference_webui.py | 167 +++++++++++----------------------- GPT_SoVITS/text/chinese.py | 2 +- requirements.txt | 2 +- 3 files changed, 55 insertions(+), 116 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index 407437f4..70519dab 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -209,54 +209,8 @@ dict_language = { } -def splite_en_inf(sentence, language): - pattern = re.compile(r'[a-zA-Z ]+') - textlist = [] - langlist = [] - pos = 0 - for match in pattern.finditer(sentence): - start, end = match.span() - if start > pos: - textlist.append(sentence[pos:start]) - langlist.append(language) - textlist.append(sentence[start:end]) - langlist.append("en") - pos = end - if pos < len(sentence): - textlist.append(sentence[pos:]) - langlist.append(language) - # Merge punctuation into previous word - for i in range(len(textlist)-1, 0, -1): - if re.match(r'^[\W_]+$', textlist[i]): - textlist[i-1] += textlist[i] - del textlist[i] - del langlist[i] - # Merge consecutive words with the same language tag - i = 0 - while i < len(langlist) - 1: - if langlist[i] == langlist[i+1]: - textlist[i] += textlist[i+1] - del textlist[i+1] - del langlist[i+1] - else: - i += 1 - - return textlist, langlist - - def clean_text_inf(text, language): - formattext = "" - language = language.replace("all_","") - for tmp in LangSegment.getTexts(text): - if language == "ja": - if tmp["lang"] == language or tmp["lang"] == "zh": - formattext += tmp["text"] + " " - continue - if tmp["lang"] == language: - formattext += tmp["text"] + " " - while " " in formattext: - formattext = formattext.replace(" ", " ") - phones, word2ph, norm_text = clean_text(formattext, language) + phones, word2ph, norm_text = clean_text(text, language) phones = cleaned_text_to_sequence(phones) return phones, word2ph, norm_text @@ -274,55 +228,6 @@ def get_bert_inf(phones, word2ph, norm_text, language): return bert -def nonen_clean_text_inf(text, language): - if(language!="auto"): - textlist, langlist = splite_en_inf(text, language) - else: - textlist=[] - langlist=[] - for tmp in LangSegment.getTexts(text): - langlist.append(tmp["lang"]) - textlist.append(tmp["text"]) - phones_list = [] - word2ph_list = [] - norm_text_list = [] - for i in range(len(textlist)): - lang = langlist[i] - phones, word2ph, norm_text = clean_text_inf(textlist[i], lang) - phones_list.append(phones) - if lang == "zh": - word2ph_list.append(word2ph) - norm_text_list.append(norm_text) - print(word2ph_list) - phones = sum(phones_list, []) - word2ph = sum(word2ph_list, []) - norm_text = ' '.join(norm_text_list) - - return phones, word2ph, norm_text - - -def nonen_get_bert_inf(text, language): - if(language!="auto"): - textlist, langlist = splite_en_inf(text, 
language) - else: - textlist=[] - langlist=[] - for tmp in LangSegment.getTexts(text): - langlist.append(tmp["lang"]) - textlist.append(tmp["text"]) - print(textlist) - print(langlist) - bert_list = [] - for i in range(len(textlist)): - lang = langlist[i] - phones, word2ph, norm_text = clean_text_inf(textlist[i], lang) - bert = get_bert_inf(phones, word2ph, norm_text, lang) - bert_list.append(bert) - bert = torch.cat(bert_list, dim=1) - - return bert - - splits = {",", "。", "?", "!", ",", ".", "?", "!", "~", ":", ":", "—", "…", } @@ -332,23 +237,59 @@ def get_first(text): return text -def get_cleaned_text_final(text,language): +def get_phones_and_bert(text,language): if language in {"en","all_zh","all_ja"}: - phones, word2ph, norm_text = clean_text_inf(text, language) + language = language.replace("all_","") + if language == "en": + LangSegment.setfilters(["en"]) + formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) + else: + # 因无法区别中日文汉字,以用户输入为准 + formattext = text + while " " in formattext: + formattext = formattext.replace(" ", " ") + phones, word2ph, norm_text = clean_text_inf(formattext, language) + if language == "zh": + bert = get_bert_feature(norm_text, word2ph).to(device) + else: + bert = torch.zeros( + (1024, len(phones)), + dtype=torch.float16 if is_half == True else torch.float32, + ).to(device) elif language in {"zh", "ja","auto"}: - phones, word2ph, norm_text = nonen_clean_text_inf(text, language) - return phones, word2ph, norm_text + textlist=[] + langlist=[] + LangSegment.setfilters(["zh","ja","en"]) + if language == "auto": + for tmp in LangSegment.getTexts(text): + langlist.append(tmp["lang"]) + textlist.append(tmp["text"]) + else: + for tmp in LangSegment.getTexts(text): + if tmp["lang"] == "en": + langlist.append(tmp["lang"]) + else: + # 因无法区别中日文汉字,以用户输入为准 + langlist.append(language) + textlist.append(tmp["text"]) + print(textlist) + print(langlist) + phones_list = [] + bert_list = [] + norm_text_list = [] + for i in range(len(textlist)): + lang = langlist[i] + phones, word2ph, norm_text = clean_text_inf(textlist[i], lang) + bert = get_bert_inf(phones, word2ph, norm_text, lang) + phones_list.append(phones) + norm_text_list.append(norm_text) + bert_list.append(bert) + bert = torch.cat(bert_list, dim=1) + phones = sum(phones_list, []) + norm_text = ' '.join(norm_text_list) + + return phones,bert.to(dtype),norm_text -def get_bert_final(phones, word2ph, text,language,device): - if language == "en": - bert = get_bert_inf(phones, word2ph, text, language) - elif language in {"zh", "ja","auto"}: - bert = nonen_get_bert_inf(text, language) - elif language == "all_zh": - bert = get_bert_feature(text, word2ph).to(device) - else: - bert = torch.zeros((1024, len(phones))).to(device) - return bert def merge_short_text_in_array(texts, threshold): if (len(texts)) < 2: @@ -425,8 +366,7 @@ def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language, texts = merge_short_text_in_array(texts, 5) audio_opt = [] if not ref_free: - phones1, word2ph1, norm_text1=get_cleaned_text_final(prompt_text, prompt_language) - bert1=get_bert_final(phones1, word2ph1, norm_text1,prompt_language,device).to(dtype) + phones1,bert1,norm_text1=get_phones_and_bert(prompt_text, prompt_language) for text in texts: # 解决输入目标文本的空行导致报错的问题 @@ -434,8 +374,7 @@ def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language, continue if (text[-1] not in splits): text += "。" if text_language != "en" else "." 
print(i18n("实际输入的目标文本(每句):"), text) - phones2, word2ph2, norm_text2 = get_cleaned_text_final(text, text_language) - bert2 = get_bert_final(phones2, word2ph2, norm_text2, text_language, device).to(dtype) + phones2,bert2,norm_text2=get_phones_and_bert(text, text_language) if not ref_free: bert = torch.cat([bert1, bert2], 1) all_phoneme_ids = torch.LongTensor(phones1+phones2).to(device).unsqueeze(0) diff --git a/GPT_SoVITS/text/chinese.py b/GPT_SoVITS/text/chinese.py index 5334326e..ea41db1f 100644 --- a/GPT_SoVITS/text/chinese.py +++ b/GPT_SoVITS/text/chinese.py @@ -30,7 +30,7 @@ rep_map = { "\n": ".", "·": ",", "、": ",", - # "...": "…", + "...": "…", "$": ".", "/": ",", "—": "-", diff --git a/requirements.txt b/requirements.txt index fae6198d..75bd945d 100644 --- a/requirements.txt +++ b/requirements.txt @@ -23,5 +23,5 @@ PyYAML psutil jieba_fast jieba -LangSegment +LangSegment>=0.2.0 Faster_Whisper \ No newline at end of file From 76570cff52ff81e90b6b5f98e80aa657afc70738 Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Tue, 20 Feb 2024 22:45:49 +0800 Subject: [PATCH 19/63] Del a-zA-Z --- GPT_SoVITS/inference_webui.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index 70519dab..c427b25f 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -245,7 +245,7 @@ def get_phones_and_bert(text,language): formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) else: # 因无法区别中日文汉字,以用户输入为准 - formattext = text + formattext = re.sub('[a-zA-Z]', '', text) while " " in formattext: formattext = formattext.replace(" ", " ") phones, word2ph, norm_text = clean_text_inf(formattext, language) From 31802947108cb12d708404fb621f287fd5d13716 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Tue, 20 Feb 2024 15:57:58 +0000 Subject: [PATCH 20/63] Update config.py Change the inference device for Mac to accelerate inference and reduce memory leak --- config.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/config.py b/config.py index 3e9e951c..caaadd47 100644 --- a/config.py +++ b/config.py @@ -20,7 +20,7 @@ python_exec = sys.executable or "python" if torch.cuda.is_available(): infer_device = "cuda" elif torch.backends.mps.is_available(): - infer_device = "mps" + infer_device = "cpu" else: infer_device = "cpu" From 861658050b6eab32ce6a34cfee37fc63a53a4ae7 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Tue, 20 Feb 2024 16:03:08 +0000 Subject: [PATCH 21/63] Update inference_webui.py Change inference device to accelerate inference on Mac and reduce memory leak --- GPT_SoVITS/inference_webui.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index c427b25f..a046776d 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -73,7 +73,7 @@ os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1' # 确保直接启动推理UI时 if torch.cuda.is_available(): device = "cuda" elif torch.backends.mps.is_available(): - device = "mps" + device = "cpu" else: device = "cpu" From 84062074a311d10da5998f10b5f3d36dc8467b5f Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Wed, 21 Feb 2024 01:14:09 +0800 Subject: [PATCH 22/63] Adjust text normlization --- GPT_SoVITS/inference_webui.py | 5 +++-- GPT_SoVITS/text/chinese.py | 2 ++ GPT_SoVITS/text/zh_normalization/num.py | 15 +++++++++++++++ .../text/zh_normalization/text_normlization.py | 
10 +++++++--- 4 files changed, 27 insertions(+), 5 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index c427b25f..695121ac 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -245,7 +245,7 @@ def get_phones_and_bert(text,language): formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) else: # 因无法区别中日文汉字,以用户输入为准 - formattext = re.sub('[a-zA-Z]', '', text) + formattext = text while " " in formattext: formattext = formattext.replace(" ", " ") phones, word2ph, norm_text = clean_text_inf(formattext, language) @@ -286,7 +286,7 @@ def get_phones_and_bert(text,language): bert_list.append(bert) bert = torch.cat(bert_list, dim=1) phones = sum(phones_list, []) - norm_text = ' '.join(norm_text_list) + norm_text = ''.join(norm_text_list) return phones,bert.to(dtype),norm_text @@ -375,6 +375,7 @@ def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language, if (text[-1] not in splits): text += "。" if text_language != "en" else "." print(i18n("实际输入的目标文本(每句):"), text) phones2,bert2,norm_text2=get_phones_and_bert(text, text_language) + print(i18n("前端处理后的文本(每句):"), norm_text2) if not ref_free: bert = torch.cat([bert1, bert2], 1) all_phoneme_ids = torch.LongTensor(phones1+phones2).to(device).unsqueeze(0) diff --git a/GPT_SoVITS/text/chinese.py b/GPT_SoVITS/text/chinese.py index ea41db1f..f9a4b360 100644 --- a/GPT_SoVITS/text/chinese.py +++ b/GPT_SoVITS/text/chinese.py @@ -34,6 +34,8 @@ rep_map = { "$": ".", "/": ",", "—": "-", + "~": "…", + "~":"…", } tone_modifier = ToneSandhi() diff --git a/GPT_SoVITS/text/zh_normalization/num.py b/GPT_SoVITS/text/zh_normalization/num.py index 8a54d3e6..8ef7f48f 100644 --- a/GPT_SoVITS/text/zh_normalization/num.py +++ b/GPT_SoVITS/text/zh_normalization/num.py @@ -172,6 +172,21 @@ def replace_range(match) -> str: return result +# ~至表达式 +RE_TO_RANGE = re.compile( + r'((-?)((\d+)(\.\d+)?)|(\.(\d+)))(%|°C|℃|度|摄氏度|cm2|cm²|cm3|cm³|cm|db|ds|kg|km|m2|m²|m³|m3|ml|m|mm|s)[~]((-?)((\d+)(\.\d+)?)|(\.(\d+)))(%|°C|℃|度|摄氏度|cm2|cm²|cm3|cm³|cm|db|ds|kg|km|m2|m²|m³|m3|ml|m|mm|s)') + +def replace_to_range(match) -> str: + """ + Args: + match (re.Match) + Returns: + str + """ + result = match.group(0).replace('~', '至') + return result + + def _get_value(value_string: str, use_zero: bool=True) -> List[str]: stripped = value_string.lstrip('0') if len(stripped) == 0: diff --git a/GPT_SoVITS/text/zh_normalization/text_normlization.py b/GPT_SoVITS/text/zh_normalization/text_normlization.py index 1250e96c..712537d5 100644 --- a/GPT_SoVITS/text/zh_normalization/text_normlization.py +++ b/GPT_SoVITS/text/zh_normalization/text_normlization.py @@ -33,6 +33,7 @@ from .num import RE_NUMBER from .num import RE_PERCENTAGE from .num import RE_POSITIVE_QUANTIFIERS from .num import RE_RANGE +from .num import RE_TO_RANGE from .num import replace_default_num from .num import replace_frac from .num import replace_negative_num @@ -40,6 +41,7 @@ from .num import replace_number from .num import replace_percentage from .num import replace_positive_quantifier from .num import replace_range +from .num import replace_to_range from .phonecode import RE_MOBILE_PHONE from .phonecode import RE_NATIONAL_UNIFORM_NUMBER from .phonecode import RE_TELEPHONE @@ -65,7 +67,7 @@ class TextNormalizer(): if lang == "zh": text = text.replace(" ", "") # 过滤掉特殊字符 - text = re.sub(r'[——《》【】<=>{}()()#&@“”^_|…\\]', '', text) + text = re.sub(r'[——《》【】<=>{}()()#&@“”^_|\\]', '', text) text = self.SENTENCE_SPLITOR.sub(r'\1\n', text) text 
= text.strip() sentences = [sentence.strip() for sentence in re.split(r'\n+', text)] @@ -73,8 +75,8 @@ class TextNormalizer(): def _post_replace(self, sentence: str) -> str: sentence = sentence.replace('/', '每') - sentence = sentence.replace('~', '至') - sentence = sentence.replace('~', '至') + # sentence = sentence.replace('~', '至') + # sentence = sentence.replace('~', '至') sentence = sentence.replace('①', '一') sentence = sentence.replace('②', '二') sentence = sentence.replace('③', '三') @@ -128,6 +130,8 @@ class TextNormalizer(): sentence = RE_TIME_RANGE.sub(replace_time, sentence) sentence = RE_TIME.sub(replace_time, sentence) + # 处理~波浪号作为至的替换 + sentence = RE_TO_RANGE.sub(replace_to_range, sentence) sentence = RE_TEMPERATURE.sub(replace_temperature, sentence) sentence = replace_measure(sentence) sentence = RE_FRAC.sub(replace_frac, sentence) From 220367f90c85f6dc20751c4a586320c463b28406 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Wed, 21 Feb 2024 01:15:11 +0000 Subject: [PATCH 23/63] Update inference_webui.py MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 精简代码 --- GPT_SoVITS/inference_webui.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index a046776d..3a4bfb3e 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -72,8 +72,6 @@ os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1' # 确保直接启动推理UI时 if torch.cuda.is_available(): device = "cuda" -elif torch.backends.mps.is_available(): - device = "cpu" else: device = "cpu" From db40317d9ceaf782b5ccb383e044281a0489f29a Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Wed, 21 Feb 2024 01:15:31 +0000 Subject: [PATCH 24/63] Update config.py MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 精简代码 --- config.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/config.py b/config.py index caaadd47..1f741285 100644 --- a/config.py +++ b/config.py @@ -19,8 +19,6 @@ exp_root = "logs" python_exec = sys.executable or "python" if torch.cuda.is_available(): infer_device = "cuda" -elif torch.backends.mps.is_available(): - infer_device = "cpu" else: infer_device = "cpu" From b0b039ad2154d9867ae77bd484367fc6a8d1d2c7 Mon Sep 17 00:00:00 2001 From: Kenn Zhang Date: Sat, 17 Feb 2024 09:57:18 +0000 Subject: [PATCH 25/63] =?UTF-8?q?Docker=E9=95=9C=E5=83=8F=E6=9E=84?= =?UTF-8?q?=E5=BB=BA=E8=84=9A=E6=9C=AC=E5=AF=B9=E4=BA=8E=E9=95=9C=E5=83=8F?= =?UTF-8?q?=E7=9A=84Tag=E5=A2=9E=E5=8A=A0Git=20Commit=E7=9A=84Hash?= =?UTF-8?q?=E5=80=BC=EF=BC=8C=E4=BE=BF=E4=BA=8E=E7=9F=A5=E9=81=93=E9=95=9C?= =?UTF-8?q?=E5=83=8F=E4=B8=AD=E5=BA=94=E7=94=A8=E7=89=88=E6=9C=AC?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .dockerignore | 4 +++- dockerbuild.sh | 9 ++++++++- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/.dockerignore b/.dockerignore index dc39f76f..4eca27be 100644 --- a/.dockerignore +++ b/.dockerignore @@ -3,4 +3,6 @@ logs output reference SoVITS_weights -.git \ No newline at end of file +GPT_weights +TEMP +.git diff --git a/dockerbuild.sh b/dockerbuild.sh index 1b3dcee5..3a4a1e18 100755 --- a/dockerbuild.sh +++ b/dockerbuild.sh @@ -2,13 +2,20 @@ # 获取当前日期,格式为 YYYYMMDD DATE=$(date +%Y%m%d) +# 获取最新的 Git commit 哈希值的前 7 位 +COMMIT_HASH=$(git rev-parse HEAD | cut -c 1-7) # 构建 full 版本的镜像 docker build --build-arg IMAGE_TYPE=full -t 
breakstring/gpt-sovits:latest . # 为同一个镜像添加带日期的标签 docker tag breakstring/gpt-sovits:latest breakstring/gpt-sovits:dev-$DATE +# 为同一个镜像添加带当前代码库Commit哈希值的标签 +docker tag breakstring/gpt-sovits:latest breakstring/gpt-sovits:dev-$COMMIT_HASH -# 构建 elite 版本的镜像 + +# 构建 elite 版本的镜像(无模型下载步骤,需手工将模型下载安装进容器) docker build --build-arg IMAGE_TYPE=elite -t breakstring/gpt-sovits:latest-elite . # 为同一个镜像添加带日期的标签 docker tag breakstring/gpt-sovits:latest-elite breakstring/gpt-sovits:dev-$DATE-elite +# 为同一个镜像添加带当前代码库Commit哈希值的标签 +docker tag breakstring/gpt-sovits:latest-elite breakstring/gpt-sovits:dev-$COMMIT_HASH-elite From 4b0fae83020389eed0dfd283c5122e5f3df584fc Mon Sep 17 00:00:00 2001 From: JavaAndPython55 <34533090+JavaAndPython55@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:11:59 +0800 Subject: [PATCH 26/63] =?UTF-8?q?=E6=96=B0=E5=A2=9Eapi.py=E4=B8=AD?= =?UTF-8?q?=EF=BC=9A=E5=8F=AF=E5=9C=A8=E5=90=AF=E5=8A=A8=E5=90=8E=E5=8A=A8?= =?UTF-8?q?=E6=80=81=E4=BF=AE=E6=94=B9=E6=A8=A1=E5=9E=8B=EF=BC=8C=E4=BB=A5?= =?UTF-8?q?=E6=AD=A4=E6=BB=A1=E8=B6=B3=E5=90=8C=E4=B8=80=E4=B8=AAapi?= =?UTF-8?q?=E4=B8=8D=E5=90=8C=E7=9A=84=E6=9C=97=E8=AF=BB=E8=80=85=E8=AF=B7?= =?UTF-8?q?=E6=B1=82?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 可在启动后动态修改模型,以此满足同一个api不同的朗读者请求 --- api.py | 54 +++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 53 insertions(+), 1 deletion(-) diff --git a/api.py b/api.py index b8d584e7..754f0769 100644 --- a/api.py +++ b/api.py @@ -144,7 +144,7 @@ parser.add_argument("-dt", "--default_refer_text", type=str, default="", help=" parser.add_argument("-dl", "--default_refer_language", type=str, default="", help="默认参考音频语种") parser.add_argument("-d", "--device", type=str, default=g_config.infer_device, help="cuda / cpu / mps") -parser.add_argument("-a", "--bind_addr", type=str, default="127.0.0.1", help="default: 127.0.0.1") +parser.add_argument("-a", "--bind_addr", type=str, default="0.0.0.0", help="default: 0.0.0.0") parser.add_argument("-p", "--port", type=int, default=g_config.api_port, help="default: 9880") parser.add_argument("-fp", "--full_precision", action="store_true", default=False, help="覆盖config.is_half为False, 使用全精度") parser.add_argument("-hp", "--half_precision", action="store_true", default=False, help="覆盖config.is_half为True, 使用半精度") @@ -227,6 +227,44 @@ def is_full(*items): # 任意一项为空返回False return False return True +def change_sovits_weights(sovits_path): + global vq_model, hps + dict_s2 = torch.load(sovits_path, map_location="cpu") + hps = dict_s2["config"] + hps = DictToAttrRecursive(hps) + hps.model.semantic_frame_rate = "25hz" + vq_model = SynthesizerTrn( + hps.data.filter_length // 2 + 1, + hps.train.segment_size // hps.data.hop_length, + n_speakers=hps.data.n_speakers, + **hps.model + ) + if ("pretrained" not in sovits_path): + del vq_model.enc_q + if is_half == True: + vq_model = vq_model.half().to(device) + else: + vq_model = vq_model.to(device) + vq_model.eval() + print(vq_model.load_state_dict(dict_s2["weight"], strict=False)) + with open("./sweight.txt", "w", encoding="utf-8") as f: + f.write(sovits_path) +def change_gpt_weights(gpt_path): + global hz, max_sec, t2s_model, config + hz = 50 + dict_s1 = torch.load(gpt_path, map_location="cpu") + config = dict_s1["config"] + max_sec = config["data"]["max_sec"] + t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) + t2s_model.load_state_dict(dict_s1["weight"]) + if is_half == True: + t2s_model = t2s_model.half() + t2s_model = t2s_model.to(device) + 
t2s_model.eval() + total = sum([param.nelement() for param in t2s_model.parameters()]) + print("Number of parameter: %.2fM" % (total / 1e6)) + with open("./gweight.txt", "w", encoding="utf-8") as f: f.write(gpt_path) + def get_bert_feature(text, word2ph): with torch.no_grad(): @@ -452,6 +490,20 @@ def handle(refer_wav_path, prompt_text, prompt_language, text, text_language): app = FastAPI() +#clark新增-----2024-02-21 +#可在启动后动态修改模型,以此满足同一个api不同的朗读者请求 +@app.post("/set_model") +async def set_model(request: Request): + json_post_raw = await request.json() + global gpt_path + gpt_path=json_post_raw.get("gpt_model_path") + global sovits_path + sovits_path=json_post_raw.get("sovits_model_path") + print("gptpath"+gpt_path+";vitspath"+sovits_path) + change_sovits_weights(sovits_path) + change_gpt_weights(gpt_path) + return "ok" +# 新增-----end------ @app.post("/control") async def control(request: Request): From 6da486c15d09e3d99fa42c5e560aaac56b6b4ce1 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:27:59 +0800 Subject: [PATCH 27/63] Add files via upload --- webui.py | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/webui.py b/webui.py index cff7cdb2..c6430d92 100644 --- a/webui.py +++ b/webui.py @@ -117,6 +117,7 @@ def change_choices(): p_label=None p_uvr5=None p_asr=None +p_denoise=None p_tts_inference=None def kill_proc_tree(pid, including_parent=True): @@ -220,6 +221,29 @@ def close_asr(): kill_process(p_asr.pid) p_asr=None return "已终止ASR进程",{"__type__":"update","visible":True},{"__type__":"update","visible":False} +def open_denoise(denoise_inp_dir, denoise_opt_dir): + global p_denoise + if(p_denoise==None): + denoise_inp_dir=my_utils.clean_path(denoise_inp_dir) + denoise_opt_dir=my_utils.clean_path(denoise_opt_dir) + cmd = '"%s" tools/cmd-denoise.py -i "%s" -o "%s" -p %s'%(python_exec,denoise_inp_dir,denoise_opt_dir,"float16"if is_half==True else "float32") + + yield "语音降噪任务开启:%s"%cmd,{"__type__":"update","visible":False},{"__type__":"update","visible":True} + print(cmd) + p_denoise = Popen(cmd, shell=True) + p_denoise.wait() + p_denoise=None + yield f"语音降噪任务完成, 查看终端进行下一步",{"__type__":"update","visible":True},{"__type__":"update","visible":False} + else: + yield "已有正在进行的语音降噪任务,需先终止才能开启下一次任务",{"__type__":"update","visible":False},{"__type__":"update","visible":True} + # return None + +def close_denoise(): + global p_denoise + if(p_denoise!=None): + kill_process(p_denoise.pid) + p_denoise=None + return "已终止语音降噪进程",{"__type__":"update","visible":True},{"__type__":"update","visible":False} p_train_SoVITS=None def open1Ba(batch_size,total_epoch,exp_name,text_low_lr_rate,if_save_latest,if_save_every_weights,save_every_epoch,gpu_numbers1Ba,pretrained_s2G,pretrained_s2D): @@ -678,6 +702,13 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: alpha=gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("alpha_mix:混多少比例归一化后音频进来"),value=0.25,interactive=True) n_process=gr.Slider(minimum=1,maximum=n_cpu,step=1,label=i18n("切割使用的进程数"),value=4,interactive=True) slicer_info = gr.Textbox(label=i18n("语音切割进程输出信息")) + gr.Markdown(value=i18n("0bb-语音降噪工具")) + with gr.Row(): + open_denoise_button = gr.Button(i18n("开启语音降噪"), variant="primary",visible=True) + close_denoise_button = gr.Button(i18n("终止语音降噪进程"), variant="primary",visible=False) + denoise_input_dir=gr.Textbox(label=i18n("降噪音频文件输入文件夹"),value="") + denoise_output_dir=gr.Textbox(label=i18n("降噪结果输出文件夹"),value="output/denoise_opt") + denoise_info = 
gr.Textbox(label=i18n("语音降噪进程输出信息")) gr.Markdown(value=i18n("0c-中文批量离线ASR工具")) with gr.Row(): open_asr_button = gr.Button(i18n("开启离线批量ASR"), variant="primary",visible=True) @@ -740,6 +771,9 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: close_asr_button.click(close_asr, [], [asr_info,open_asr_button,close_asr_button]) open_slicer_button.click(open_slice, [slice_inp_path,slice_opt_root,threshold,min_length,min_interval,hop_size,max_sil_kept,_max,alpha,n_process], [slicer_info,open_slicer_button,close_slicer_button]) close_slicer_button.click(close_slice, [], [slicer_info,open_slicer_button,close_slicer_button]) + open_denoise_button.click(open_denoise, [denoise_input_dir,denoise_output_dir], [denoise_info,open_denoise_button,close_denoise_button]) + close_denoise_button.click(close_denoise, [], [denoise_info,open_denoise_button,close_denoise_button]) + with gr.TabItem(i18n("1-GPT-SoVITS-TTS")): with gr.Row(): exp_name = gr.Textbox(label=i18n("*实验/模型名"), value="xxx", interactive=True) From 5a17177342d2df1e11369f2f4f58d34a3feb1a35 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:28:22 +0800 Subject: [PATCH 28/63] Add files via upload --- tools/cmd-denoise.py | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 tools/cmd-denoise.py diff --git a/tools/cmd-denoise.py b/tools/cmd-denoise.py new file mode 100644 index 00000000..69b51e66 --- /dev/null +++ b/tools/cmd-denoise.py @@ -0,0 +1,29 @@ +import os,argparse + +from modelscope.pipelines import pipeline +from modelscope.utils.constant import Tasks +from tqdm import tqdm + +path_denoise = 'tools/denoise-model/speech_frcrn_ans_cirm_16k' +path_denoise = path_denoise if os.path.exists(path_denoise) else "damo/speech_frcrn_ans_cirm_16k" +ans = pipeline(Tasks.acoustic_noise_suppression,model=path_denoise) +def execute_denoise(input_folder,output_folder): + os.makedirs(output_folder,exist_ok=True) + # print(input_folder) + # print(list(os.listdir(input_folder).sort())) + for name in tqdm(os.listdir(input_folder)): + ans("%s/%s"%(input_folder,name),output_path='%s/%s'%(output_folder,name)) + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument("-i", "--input_folder", type=str, required=True, + help="Path to the folder containing WAV files.") + parser.add_argument("-o", "--output_folder", type=str, required=True, + help="Output folder to store transcriptions.") + parser.add_argument("-p", "--precision", type=str, default='float16', choices=['float16','float32'], + help="fp16 or fp32")#还没接入 + cmd = parser.parse_args() + execute_denoise( + input_folder = cmd.input_folder, + output_folder = cmd.output_folder, + ) \ No newline at end of file From 82085e48869fbe6f817e83a7e858309ca2f06bd6 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:29:14 +0800 Subject: [PATCH 29/63] Create .gitignore --- tools/denoise-model/.gitignore | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 tools/denoise-model/.gitignore diff --git a/tools/denoise-model/.gitignore b/tools/denoise-model/.gitignore new file mode 100644 index 00000000..d6b7ef32 --- /dev/null +++ b/tools/denoise-model/.gitignore @@ -0,0 +1,2 @@ +* +!.gitignore From 788ea251dafa9aff6de7ca019d1870443f08f445 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:33:13 +0800 Subject: [PATCH 30/63] Update Changelog_CN.md --- docs/cn/Changelog_CN.md | 10 
++++++++++ 1 file changed, 10 insertions(+) diff --git a/docs/cn/Changelog_CN.md b/docs/cn/Changelog_CN.md index 6622146a..8afd3514 100644 --- a/docs/cn/Changelog_CN.md +++ b/docs/cn/Changelog_CN.md @@ -125,6 +125,16 @@ 2-修复中文文本前端bug https://github.com/RVC-Boss/GPT-SoVITS/issues/475 +### 20240221更新 + +1-数据处理添加语音降噪选项 + +2-中文日文前端处理优化 https://github.com/RVC-Boss/GPT-SoVITS/pull/559 https://github.com/RVC-Boss/GPT-SoVITS/pull/556 https://github.com/RVC-Boss/GPT-SoVITS/pull/532 https://github.com/RVC-Boss/GPT-SoVITS/pull/507 https://github.com/RVC-Boss/GPT-SoVITS/pull/509 + +3-mac CPU推理更快因此把推理设备从mps改到CPU + +4-colab修复不开启公网url + todolist: 1-中文多音字推理优化 From 8b4f0dfe43ed92606e5ff4fd95040abb8bba541b Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:37:19 +0800 Subject: [PATCH 31/63] Update 2-get-hubert-wav32k.py --- GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py b/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py index 76a7ec99..7607259e 100644 --- a/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py +++ b/GPT_SoVITS/prepare_datasets/2-get-hubert-wav32k.py @@ -99,7 +99,7 @@ for line in lines[int(i_part)::int(all_parts)]: try: # wav_name,text=line.split("\t") wav_name, spk_name, language, text = line.split("|") - if (inp_wav_dir != ""): + if (inp_wav_dir != "" and inp_wav_dir != None): wav_name = os.path.basename(wav_name) wav_path = "%s/%s"%(inp_wav_dir, wav_name) From 939971afe3770c530b0bc0f9a1d5824a1786411d Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 21 Feb 2024 18:52:07 +0800 Subject: [PATCH 32/63] Add files via upload --- GPT_SoVITS/inference_webui.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index 2247bc74..d2f3f949 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -561,12 +561,12 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: inp_ref = gr.Audio(label=i18n("请上传3~10秒内参考音频,超过会报错!"), type="filepath") with gr.Column(): ref_text_free = gr.Checkbox(label=i18n("开启无参考文本模式。不填参考文本亦相当于开启。"), value=False, interactive=True, show_label=True) - gr.Markdown(i18n("使用无参考文本模式时建议使用微调的GPT")) + gr.Markdown(i18n("使用无参考文本模式时建议使用微调的GPT,听不清参考音频说的啥(不晓得写啥)可以开,开启后无视填写的参考文本。")) prompt_text = gr.Textbox(label=i18n("参考音频的文本"), value="") prompt_language = gr.Dropdown( label=i18n("参考音频的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") ) - gr.Markdown(value=i18n("*请填写需要合成的目标文本。中英混合选中文,日英混合选日文,中日混合暂不支持,非目标语言文本自动遗弃。")) + gr.Markdown(value=i18n("*请填写需要合成的目标文本和语种模式")) with gr.Row(): text = gr.Textbox(label=i18n("需要合成的文本"), value="") text_language = gr.Dropdown( From 9fa3da91a7a66c0e0aed8a43f13ad7456491b764 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Wed, 21 Feb 2024 13:21:37 +0000 Subject: [PATCH 33/63] Update config.py --- config.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/config.py b/config.py index 1f741285..eaf292f1 100644 --- a/config.py +++ b/config.py @@ -6,7 +6,7 @@ import torch sovits_path = "" gpt_path = "" is_half_str = os.environ.get("is_half", "True") -is_half = True if is_half_str.lower() == 'true' else False +is_half = True if is_half_str.lower() == 'true' and not torch.backends.mps.is_available() else False is_share_str = 
os.environ.get("is_share","False") is_share= True if is_share_str.lower() == 'true' else False From 97e3479b07155b37785547a04076619ce74285c1 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Wed, 21 Feb 2024 13:24:16 +0000 Subject: [PATCH 34/63] Update config.py --- config.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/config.py b/config.py index eaf292f1..1f741285 100644 --- a/config.py +++ b/config.py @@ -6,7 +6,7 @@ import torch sovits_path = "" gpt_path = "" is_half_str = os.environ.get("is_half", "True") -is_half = True if is_half_str.lower() == 'true' and not torch.backends.mps.is_available() else False +is_half = True if is_half_str.lower() == 'true' else False is_share_str = os.environ.get("is_share","False") is_share= True if is_share_str.lower() == 'true' else False From 83c9e8ff0294510dea3c44978edc44c915d7c4b2 Mon Sep 17 00:00:00 2001 From: XXXXRT666 <157766680+XXXXRT666@users.noreply.github.com> Date: Wed, 21 Feb 2024 13:26:30 +0000 Subject: [PATCH 35/63] Update inference_webui.py --- GPT_SoVITS/inference_webui.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index d2f3f949..26936472 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -16,6 +16,7 @@ logging.getLogger("asyncio").setLevel(logging.ERROR) logging.getLogger("charset_normalizer").setLevel(logging.ERROR) logging.getLogger("torchaudio._extension").setLevel(logging.ERROR) import pdb +import torch if os.path.exists("./gweight.txt"): with open("./gweight.txt", 'r', encoding="utf-8") as file: @@ -48,11 +49,11 @@ is_share = os.environ.get("is_share", "False") is_share = eval(is_share) if "_CUDA_VISIBLE_DEVICES" in os.environ: os.environ["CUDA_VISIBLE_DEVICES"] = os.environ["_CUDA_VISIBLE_DEVICES"] -is_half = eval(os.environ.get("is_half", "True")) +is_half = eval(os.environ.get("is_half", "True")) and not torch.backends.mps.is_available() import gradio as gr from transformers import AutoModelForMaskedLM, AutoTokenizer import numpy as np -import librosa, torch +import librosa from feature_extractor import cnhubert cnhubert.cnhubert_base_path = cnhubert_base_path From 0d88cff99eb721b5a9bdcd84261951129fcbe90e Mon Sep 17 00:00:00 2001 From: Lion Date: Fri, 23 Feb 2024 20:32:25 +0800 Subject: [PATCH 36/63] optimize the structure --- README.md | 117 +++++++++++++++++++++------------------------- docs/cn/README.md | 89 ++++++++++++++++------------------- docs/ja/README.md | 81 ++++++++++++++------------------ docs/ko/README.md | 81 ++++++++++++++------------------ 4 files changed, 166 insertions(+), 202 deletions(-) diff --git a/README.md b/README.md index 4266970f..6f42aa61 100644 --- a/README.md +++ b/README.md @@ -17,14 +17,6 @@ A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

--- -> Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here! - -Unseen speakers few-shot fine-tuning demo: - -https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb - -For users in China region, you can use AutoDL Cloud Docker to experience the full functionality online: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official - ## Features: 1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion. @@ -35,19 +27,29 @@ For users in China region, you can use AutoDL Cloud Docker to experience the ful 4. **WebUI Tools:** Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models. -## Environment Preparation +**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!** -If you are a Windows user (tested with win>=10) you can install directly via the prezip. Just download the [prezip](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true), unzip it and double-click go-webui.bat to start GPT-SoVITS-WebUI. +Unseen speakers few-shot fine-tuning demo: + +https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb + +## Installation + +For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online. ### Tested Environments - Python 3.9, PyTorch 2.0.1, CUDA 11 - Python 3.10.13, PyTorch 2.1.2, CUDA 12.3 -- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU) +- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon) -_Note: numba==0.56.4 require py<3.11_ +_Note: numba==0.56.4 requires py<3.11_ -### Quick Install with Conda +### Windows + +If you are a Windows user (tested with win>=10), you can directly download the [pre-packaged distribution](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI. 
+ +### Linux ```bash conda create -n GPTSoVits python=3.9 @@ -55,15 +57,37 @@ conda activate GPTSoVits bash install.sh ``` +### macOS + +Only Macs that meet the following conditions can train models: + +- Mac computers with Apple silicon +- macOS 12.3 or later +- Xcode command-line tools installed by running `xcode-select --install` + +**All Macs can do inference with CPU, which has been demonstrated to outperform GPU inference.** + +First make sure you have installed FFmpeg by running `brew install ffmpeg` or `conda install ffmpeg`, then install by using the following commands: + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits + +pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu +pip install -r requirements.txt +``` + +_Note: Training models will only work if you've installed PyTorch Nightly._ + ### Install Manually -#### Pip Packages +#### Install Dependences ```bash pip install -r requirements.txt ``` -#### FFmpeg +#### Install FFmpeg ##### Conda Users @@ -79,57 +103,10 @@ sudo apt install libsox-dev conda install -c conda-forge 'ffmpeg<7' ``` -##### MacOS Users - -```bash -brew install ffmpeg -``` - ##### Windows Users Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root. -### Pretrained Models - -Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`. - -For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`. - -Users in China region can download these two models by entering the links below and clicking "Download a copy" - -- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models) - -- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights) - -For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`. 
- -### For Mac Users - -If you are a Mac user, make sure you meet the following conditions for training and inferencing with GPU: - -- Mac computers with Apple silicon -- macOS 12.3 or later -- Xcode command-line tools installed by running `xcode-select --install` - -_Other Macs can do inference with CPU only._ - -Then install by using the following commands: - -#### Create Environment - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -``` - -#### Install Requirements - -```bash -pip install -r requirements.txt -pip uninstall torch torchaudio -pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu -``` - ### Using Docker #### docker-compose.yaml configuration @@ -157,6 +134,20 @@ As above, modify the corresponding parameters based on your actual situation, th docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx ``` +## Pretrained Models + +Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`. + +For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`. + +Users in China region can download these two models by entering the links below and clicking "Download a copy" + +- [GPT-SoVITS Models](https://www.icloud.com.cn/iclouddrive/056y_Xog_HXpALuVUjscIwTtg#GPT-SoVITS_Models) + +- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights) + +For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/damo_asr/models`. + ## Dataset Format The TTS annotation .list file format: @@ -229,8 +220,6 @@ python ./tools/damo_asr/WhisperASR.py -i -o -f A custom list save path is enabled ## Credits - - Special thanks to the following projects and contributors: - [ar-vits](https://github.com/innnky/ar-vits) diff --git a/docs/cn/README.md b/docs/cn/README.md index a0cfd0a0..1f31eccf 100644 --- a/docs/cn/README.md +++ b/docs/cn/README.md @@ -17,12 +17,6 @@ --- -> 查看我们的介绍视频 [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) - -https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb - -中国地区用户可使用 AutoDL 云端镜像进行体验:https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official - ## 功能: 1. **零样本文本到语音(TTS):** 输入 5 秒的声音样本,即刻体验文本到语音转换。 @@ -33,46 +27,29 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350- 4. 
**WebUI 工具:** 集成工具包括声音伴奏分离、自动训练集分割、中文自动语音识别(ASR)和文本标注,协助初学者创建训练数据集和 GPT/SoVITS 模型。 -## 环境准备 +**查看我们的介绍视频 [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)** -如果你是 Windows 用户(已在 win>=10 上测试),可以直接通过预打包文件安装。只需下载[预打包文件](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true),解压后双击 go-webui.bat 即可启动 GPT-SoVITS-WebUI。 +未见过的说话者 few-shot 微调演示: -### 测试通过的 Python 和 PyTorch 版本 +https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb + +## 安装 + +中国地区用户可[点击此处](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official)使用 AutoDL 云端镜像进行体验。 + +### 测试通过的环境 - Python 3.9、PyTorch 2.0.1 和 CUDA 11 - Python 3.10.13, PyTorch 2.1.2 和 CUDA 12.3 -- Python 3.9、Pytorch 2.3.0.dev20240122 和 macOS 14.3(Apple 芯片,GPU) +- Python 3.9、Pytorch 2.3.0.dev20240122 和 macOS 14.3(Apple 芯片) _注意: numba==0.56.4 需要 python<3.11_ -### Mac 用户 +### Windows -如果你是 Mac 用户,请先确保满足以下条件以使用 GPU 进行训练和推理: +如果你是 Windows 用户(已在 win>=10 上测试),可以直接下载[预打包文件](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true),解压后双击 go-webui.bat 即可启动 GPT-SoVITS-WebUI。 -- 搭载 Apple 芯片的 Mac -- macOS 12.3 或更高版本 -- 已通过运行`xcode-select --install`安装 Xcode command-line tools - -_其他 Mac 仅支持使用 CPU 进行推理_ - -然后使用以下命令安装: - -#### 创建环境 - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -``` - -#### 安装依赖 - -```bash -pip install -r requirements.txt -pip uninstall torch torchaudio -pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu -``` - -### 使用 Conda 快速安装 +### Linux ```bash conda create -n GPTSoVits python=3.9 @@ -80,15 +57,37 @@ conda activate GPTSoVits bash install.sh ``` -### 手动安装包 +### macOS -#### Pip 包 +只有符合以下条件的 Mac 可以训练模型: + +- 搭载 Apple 芯片的 Mac +- 运行macOS 12.3 或更高版本 +- 已通过运行`xcode-select --install`安装 Xcode command-line tools + +**所有 Mac 都可使用 CPU 进行推理,且已测试性能优于 GPU。** + +首先确保你已通过运行 `brew install ffmpeg` 或 `conda install ffmpeg` 安装 FFmpeg,然后运行以下命令安装: + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits + +pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu +pip install -r requirements.txt +``` + +_注:只有安装了Pytorch Nightly才可训练模型。_ + +### 手动安装 + +#### 安装依赖 ```bash pip install -r requirements.txt ``` -#### FFmpeg +#### 安装 FFmpeg ##### Conda 使用者 @@ -104,12 +103,6 @@ sudo apt install libsox-dev conda install -c conda-forge 'ffmpeg<7' ``` -##### MacOS 使用者 - -```bash -brew install ffmpeg -``` - ##### Windows 使用者 下载并将 [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) 和 [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) 放置在 GPT-SoVITS 根目录下。 @@ -141,11 +134,11 @@ docker compose -f "docker-compose.yaml" up -d docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx ``` -### 预训练模型 +## 预训练模型 从 [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) 下载预训练模型,并将它们放置在 `GPT_SoVITS\pretrained_models` 中。 -对于 UVR5(人声/伴奏分离和混响移除,另外),从 [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) 下载模型,并将它们放置在 `tools/uvr5/uvr5_weights` 中。 +对于 UVR5(人声/伴奏分离和混响移除,附加),从 [UVR5 
Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) 下载模型,并将它们放置在 `tools/uvr5/uvr5_weights` 中。 中国地区用户可以进入以下链接并点击“下载副本”下载以上两个模型: @@ -153,7 +146,7 @@ docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-Docker - [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights) -对于中文自动语音识别(另外),从 [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), 和 [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) 下载模型,并将它们放置在 `tools/damo_asr/models` 中。 +对于中文自动语音识别(附加),从 [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), 和 [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) 下载模型,并将它们放置在 `tools/damo_asr/models` 中。 ## 数据集格式 diff --git a/docs/ja/README.md b/docs/ja/README.md index b27fd652..88ed865d 100644 --- a/docs/ja/README.md +++ b/docs/ja/README.md @@ -17,10 +17,6 @@ --- -> [デモ動画](https://www.bilibili.com/video/BV12g4y1m7Uw)をチェック! - -https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb - ## 機能: 1. **ゼロショット TTS:** 5 秒間のボーカルサンプルを入力すると、即座にテキストから音声に変換されます。 @@ -31,48 +27,27 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350- 4. **WebUI ツール:** 統合されたツールには、音声伴奏の分離、トレーニングセットの自動セグメンテーション、中国語 ASR、テキストラベリングが含まれ、初心者がトレーニングデータセットと GPT/SoVITS モデルを作成するのを支援します。 -## 環境の準備 +**[デモ動画](https://www.bilibili.com/video/BV12g4y1m7Uw)をチェック!** -Windows ユーザーであれば(win>=10 にてテスト済み)、prezip 経由で直接インストールできます。[prezip](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) をダウンロードして解凍し、go-webui.bat をダブルクリックするだけで GPT-SoVITS-WebUI が起動します。 +未見の話者数ショット微調整デモ: -### Python と PyTorch のバージョン +https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb + +## インストール + +### テスト済みの環境 - Python 3.9, PyTorch 2.0.1, CUDA 11 - Python 3.10.13, PyTorch 2.1.2, CUDA 12.3 -- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU) +- Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon) _注記: numba==0.56.4 は py<3.11 が必要です_ -### Mac ユーザーへ +### Windows -如果あなたが Mac ユーザーである場合、GPU を使用してトレーニングおよび推論を行うために以下の条件を満たしていることを確認してください: +Windows ユーザーの場合(win>=10 でテスト済み)、[事前にパッケージ化されたディストリビューション](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true)を直接ダウンロードし、_go-webui.bat_ をダブルクリックして GPT-SoVITS-WebUI を起動することができます。 -- Apple シリコンを搭載した Mac コンピューター -- macOS 12.3 以降 -- `xcode-select --install`を実行してインストールされた Xcode コマンドラインツール - -_その他の Mac は CPU のみで推論を行うことができます。_ - -次に、以下のコマンドを使用してインストールします: - -#### 環境作成 - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -``` - -#### Pip パッケージ - -```bash -pip install -r requirements.txt -pip uninstall torch torchaudio -pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu -``` - -_注記: UVR5 を使用して前処理を行う場合は、[オリジナルプロジェクトの GUI をダウンロード](https://github.com/Anjok07/ultimatevocalremovergui)して、「GPU Conversion」を選択することをお勧めします。さらに、特に推論時にメモリリークの問題が発生する可能性があります。推論 webUI を再起動することでメモリを解放することができます。_ - -### Conda によるクイックインストール +### Linux ```bash 
conda create -n GPTSoVits python=3.9 @@ -80,15 +55,37 @@ conda activate GPTSoVits bash install.sh ``` +### macOS + +モデルをトレーニングできるMacは、以下の条件を満たす必要があります: + +- Appleシリコンを搭載したMacコンピュータ +- macOS 12.3以降 +- `xcode-select --install`を実行してインストールされたXcodeコマンドラインツール + +**すべてのMacはCPUを使用して推論を行うことができ、GPU推論よりも優れていることが実証されています。** + +まず、`brew install ffmpeg`または`conda install ffmpeg`を実行してFFmpegをインストールしたことを確認してください。次に、以下のコマンドを使用してインストールします: + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits + +pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu +pip install -r requirements.txt +``` + +_注:PyTorch Nightlyをインストールした場合にのみ、モデルのトレーニングが可能です。_ + ### 手動インストール -#### Pip パッケージ +#### 依存関係をインストールします ```bash pip install -r requirementx.txt ``` -#### FFmpeg +#### FFmpegをインストールします。 ##### Conda ユーザー @@ -104,12 +101,6 @@ sudo apt install libsox-dev conda install -c conda-forge 'ffmpeg<7' ``` -##### MacOS ユーザー - -```bash -brew install ffmpeg -``` - ##### Windows ユーザー [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) と [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) をダウンロードし、GPT-SoVITS のルートディレクトリに置きます。 @@ -141,7 +132,7 @@ docker compose -f "docker-compose.yaml" up -d docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx ``` -### 事前訓練済みモデル +## 事前訓練済みモデル [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) から事前訓練済みモデルをダウンロードし、`GPT_SoVITSpretrained_models` に置きます。 diff --git a/docs/ko/README.md b/docs/ko/README.md index c57cf5cb..cc390b06 100644 --- a/docs/ko/README.md +++ b/docs/ko/README.md @@ -17,12 +17,6 @@ --- -> 데모 비디오를 확인하세요! [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) - -https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb - -중국 지역의 사용자는 AutoDL 클라우드 이미지를 사용하여 체험할 수 있습니다: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official - ## 기능: 1. **제로샷 텍스트 음성 변환 (TTS):** 5초의 음성 샘플을 입력하면 즉시 텍스트를 음성으로 변환할 수 있습니다. @@ -33,46 +27,27 @@ https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350- 4. **WebUI 도구:** 음성 반주 분리, 자동 훈련 데이터셋 분할, 중국어 자동 음성 인식(ASR) 및 텍스트 주석 등의 도구를 통합하여 초보자가 훈련 데이터셋과 GPT/SoVITS 모델을 생성하는 데 도움을 줍니다. -## 환경 준비 +**데모 비디오를 확인하세요! [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw)** -Windows 사용자는 (win>=10 에서 테스트되었습니다) 미리 빌드된 파일을 다운로드하여 설치할 수 있습니다. 다운로드 후 GPT-SoVITS-WebUI를 시작하려면 압축을 풀고 go-webui.bat을 두 번 클릭하면 됩니다. +보지 못한 발화자의 퓨샷(few-shot) 파인튜닝 데모: -### 테스트된 Python 및 PyTorch 버전 +https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb + +## 설치 + +### 테스트 통과 환경 - Python 3.9, PyTorch 2.0.1 및 CUDA 11 - Python 3.10.13, PyTorch 2.1.2 및 CUDA 12.3 -- Python 3.9, Pytorch 2.3.0.dev20240122 및 macOS 14.3 (Apple 칩, GPU) +- Python 3.9, Pytorch 2.3.0.dev20240122 및 macOS 14.3 (Apple Slilicon) _참고: numba==0.56.4 는 python<3.11 을 필요로 합니다._ -### MacOS 사용자 +### Windows -MacOS 사용자는 GPU를 사용하여 훈련 및 추론을 하려면 다음 조건을 충족해야 합니다: +Windows 사용자이며 (win>=10에서 테스트 완료) [미리 패키지된 배포판](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true)을 직접 다운로드하여 _go-webui.bat_을 더블클릭하면 GPT-SoVITS-WebUI를 시작할 수 있습니다. 
-- Apple 칩이 장착된 Mac -- macOS 12.3 이상 -- `xcode-select --install`을 실행하여 Xcode command-line tools를 설치했습니다. - -_다른 Mac은 CPU를 사용하여 추론만 지원합니다._ - -그런 다음 다음 명령을 사용하여 설치합니다: - -#### 환경 설정 - -```bash -conda create -n GPTSoVits python=3.9 -conda activate GPTSoVits -``` - -#### 의존성 모듈 설치 - -```bash -pip install -r requirements.txt -pip uninstall torch torchaudio -pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu -``` - -### Conda를 사용한 간편 설치 +### Linux ```bash conda create -n GPTSoVits python=3.9 @@ -80,15 +55,37 @@ conda activate GPTSoVits bash install.sh ``` +### macOS + +다음 조건을 충족하는 Mac에서만 모델을 훈련할 수 있습니다: + +- Apple 실리콘을 탑재한 Mac +- macOS 12.3 이상 버전 +- `xcode-select --install`을 실행하여 Xcode 명령줄 도구가 설치됨 + +**모든 Mac은 CPU를 사용하여 추론할 수 있으며, GPU 추론보다 우수한 성능을 보여주었습니다.** + +먼저 `brew install ffmpeg` 또는 `conda install ffmpeg`를 실행하여 FFmpeg가 설치되었는지 확인한 다음, 다음 명령어를 사용하여 설치하세요: + +```bash +conda create -n GPTSoVits python=3.9 +conda activate GPTSoVits + +pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu +pip install -r requirements.txt +``` + +_참고: PyTorch Nightly가 설치되어야만 모델을 훈련할 수 있습니다._ + ### 수동 설치 -#### Pip 패키지 +#### 의존성 설치 ```bash pip install -r requirements.txt ``` -#### FFmpeg +#### FFmpeg 설치 ##### Conda 사용자 @@ -104,12 +101,6 @@ sudo apt install libsox-dev conda install -c conda-forge 'ffmpeg<7' ``` -##### MacOS 사용자 - -```bash -brew install ffmpeg -``` - ##### Windows 사용자 [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe)와 [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe)를 GPT-SoVITS root 디렉토리에 넣습니다. @@ -144,7 +135,7 @@ docker compose -f "docker-compose.yaml" up -d docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx ``` -### 사전 훈련된 모델 +## 사전 훈련된 모델 [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS)에서 사전 훈련된 모델을 다운로드하고 `GPT_SoVITS\pretrained_models`에 넣습니다. 
From 35e673d801655b9ccf63949d347bdeb9a5eea733 Mon Sep 17 00:00:00 2001 From: ShiroDoMain Date: Tue, 27 Feb 2024 15:19:47 +0800 Subject: [PATCH 37/63] fix MDX-Net output path --- tools/uvr5/mdxnet.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/uvr5/mdxnet.py b/tools/uvr5/mdxnet.py index ae09c5f5..0d609c41 100644 --- a/tools/uvr5/mdxnet.py +++ b/tools/uvr5/mdxnet.py @@ -252,5 +252,5 @@ class MDXNetDereverb: self.pred = Predictor(self) self.device = cpu - def _path_audio_(self, input, vocal_root, others_root, format, is_hp3=False): + def _path_audio_(self, input, others_root, vocal_root, format, is_hp3=False): self.pred.prediction(input, vocal_root, others_root, format) From 4496426896157f441ef090ecca11c759369f8a37 Mon Sep 17 00:00:00 2001 From: root Date: Wed, 28 Feb 2024 17:31:19 +0800 Subject: [PATCH 38/63] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E4=BB=A3=E7=A0=81?= =?UTF-8?q?=E5=BC=95=E7=94=A8=EF=BC=8C=E6=B7=A1=E5=AE=9A?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/AR/data/bucket_sampler.py | 3 ++- GPT_SoVITS/AR/data/data_module.py | 3 ++- GPT_SoVITS/AR/data/dataset.py | 3 ++- GPT_SoVITS/AR/models/t2s_lightning_module.py | 3 ++- GPT_SoVITS/AR/models/t2s_lightning_module_onnx.py | 3 ++- GPT_SoVITS/AR/models/t2s_model.py | 3 ++- GPT_SoVITS/AR/models/t2s_model_onnx.py | 3 ++- GPT_SoVITS/AR/models/utils.py | 3 ++- GPT_SoVITS/AR/modules/lr_schedulers.py | 3 ++- GPT_SoVITS/AR/text_processing/phonemizer.py | 3 ++- GPT_SoVITS/AR/text_processing/symbols.py | 3 ++- 11 files changed, 22 insertions(+), 11 deletions(-) diff --git a/GPT_SoVITS/AR/data/bucket_sampler.py b/GPT_SoVITS/AR/data/bucket_sampler.py index 647491f7..45f91d8e 100644 --- a/GPT_SoVITS/AR/data/bucket_sampler.py +++ b/GPT_SoVITS/AR/data/bucket_sampler.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/bucketsampler.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/data/bucket_sampler.py +# reference: https://github.com/lifeiteng/vall-e import itertools import math import random diff --git a/GPT_SoVITS/AR/data/data_module.py b/GPT_SoVITS/AR/data/data_module.py index 54d46344..cb947959 100644 --- a/GPT_SoVITS/AR/data/data_module.py +++ b/GPT_SoVITS/AR/data/data_module.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/data_module.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/data/data_module.py +# reference: https://github.com/lifeiteng/vall-e from pytorch_lightning import LightningDataModule from AR.data.bucket_sampler import DistributedBucketSampler from AR.data.dataset import Text2SemanticDataset diff --git a/GPT_SoVITS/AR/data/dataset.py b/GPT_SoVITS/AR/data/dataset.py index b1ea69e6..1a2ffef1 100644 --- a/GPT_SoVITS/AR/data/dataset.py +++ b/GPT_SoVITS/AR/data/dataset.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/t2s_dataset.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/data/dataset.py +# reference: https://github.com/lifeiteng/vall-e import pdb import sys diff --git a/GPT_SoVITS/AR/models/t2s_lightning_module.py b/GPT_SoVITS/AR/models/t2s_lightning_module.py index 594b73bc..2dd3f392 100644 --- a/GPT_SoVITS/AR/models/t2s_lightning_module.py +++ b/GPT_SoVITS/AR/models/t2s_lightning_module.py @@ -1,4 +1,5 @@ -# modified from 
https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_lightning_module.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_lightning_module.py +# reference: https://github.com/lifeiteng/vall-e import os, sys now_dir = os.getcwd() diff --git a/GPT_SoVITS/AR/models/t2s_lightning_module_onnx.py b/GPT_SoVITS/AR/models/t2s_lightning_module_onnx.py index bb9e30b9..487edb01 100644 --- a/GPT_SoVITS/AR/models/t2s_lightning_module_onnx.py +++ b/GPT_SoVITS/AR/models/t2s_lightning_module_onnx.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_lightning_module.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_lightning_module.py +# reference: https://github.com/lifeiteng/vall-e import os, sys now_dir = os.getcwd() diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 815ecec9..c8ad3d82 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_model.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_model.py +# reference: https://github.com/lifeiteng/vall-e import torch from tqdm import tqdm diff --git a/GPT_SoVITS/AR/models/t2s_model_onnx.py b/GPT_SoVITS/AR/models/t2s_model_onnx.py index 92f2d745..7834297d 100644 --- a/GPT_SoVITS/AR/models/t2s_model_onnx.py +++ b/GPT_SoVITS/AR/models/t2s_model_onnx.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_model.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_model.py +# reference: https://github.com/lifeiteng/vall-e import torch from tqdm import tqdm diff --git a/GPT_SoVITS/AR/models/utils.py b/GPT_SoVITS/AR/models/utils.py index 84063f8a..9678c7e1 100644 --- a/GPT_SoVITS/AR/models/utils.py +++ b/GPT_SoVITS/AR/models/utils.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/utils.py\ +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/utils.py +# reference: https://github.com/lifeiteng/vall-e import torch import torch.nn.functional as F from typing import Tuple diff --git a/GPT_SoVITS/AR/modules/lr_schedulers.py b/GPT_SoVITS/AR/modules/lr_schedulers.py index 7dec462b..b8867467 100644 --- a/GPT_SoVITS/AR/modules/lr_schedulers.py +++ b/GPT_SoVITS/AR/modules/lr_schedulers.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/lr_schedulers.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/modules/lr_schedulers.py +# reference: https://github.com/lifeiteng/vall-e import math import torch diff --git a/GPT_SoVITS/AR/text_processing/phonemizer.py b/GPT_SoVITS/AR/text_processing/phonemizer.py index 9fcf5c09..9c5f58fb 100644 --- a/GPT_SoVITS/AR/text_processing/phonemizer.py +++ b/GPT_SoVITS/AR/text_processing/phonemizer.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/text_processing/phonemizer.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/text_processing/phonemizer.py +# reference: https://github.com/lifeiteng/vall-e import itertools import re from typing import Dict diff --git 
a/GPT_SoVITS/AR/text_processing/symbols.py b/GPT_SoVITS/AR/text_processing/symbols.py index c57e2d41..7d754a78 100644 --- a/GPT_SoVITS/AR/text_processing/symbols.py +++ b/GPT_SoVITS/AR/text_processing/symbols.py @@ -1,4 +1,5 @@ -# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/text_processing/symbols.py +# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/text_processing/symbols.py +# reference: https://github.com/lifeiteng/vall-e PAD = "_" PUNCTUATION = ';:,.!?¡¿—…"«»“” ' LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" From e066cd93d2351586aa9de99e62143db94f6fef4c Mon Sep 17 00:00:00 2001 From: KamioRinn Date: Mon, 4 Mar 2024 00:13:28 +0800 Subject: [PATCH 39/63] fix auto LangSegment misunderstand KO --- GPT_SoVITS/inference_webui.py | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index d2f3f949..cbde9f8c 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -257,11 +257,15 @@ def get_phones_and_bert(text,language): elif language in {"zh", "ja","auto"}: textlist=[] langlist=[] - LangSegment.setfilters(["zh","ja","en"]) + LangSegment.setfilters(["zh","ja","en","ko"]) if language == "auto": for tmp in LangSegment.getTexts(text): - langlist.append(tmp["lang"]) - textlist.append(tmp["text"]) + if tmp["lang"] == "ko": + langlist.append("zh") + textlist.append(tmp["text"]) + else: + langlist.append(tmp["lang"]) + textlist.append(tmp["text"]) else: for tmp in LangSegment.getTexts(text): if tmp["lang"] == "en": From a2761038c05c72be94b31fb01f1cf05a0d4266fb Mon Sep 17 00:00:00 2001 From: DW <147780325+D3lik@users.noreply.github.com> Date: Mon, 4 Mar 2024 20:30:59 +1100 Subject: [PATCH 40/63] Update en_US.json --- i18n/locale/en_US.json | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/i18n/locale/en_US.json b/i18n/locale/en_US.json index 0a07679c..292a915c 100644 --- a/i18n/locale/en_US.json +++ b/i18n/locale/en_US.json @@ -2,6 +2,18 @@ "很遗憾您这没有能用的显卡来支持您训练": "Unfortunately, there is no compatible GPU available to support your training.", "UVR5已开启": "UVR5 opened ", "UVR5已关闭": "UVR5 closed", + "输入文件夹路径": "Input folder path", + "输出文件夹路径": "Output folder path", + "ASR 模型": "ASR model", + "ASR 模型尺寸": "ASR model size", + "ASR 语言设置": "ASR language", + "模型切换": "Model switch", + "是否开启dpo训练选项(实验性)": "Enable DPO training (experimental feature)", + "开启无参考文本模式。不填参考文本亦相当于开启。": "Enable no reference mode. If you don't fill 'Text for reference audio', no reference mode will be enabled.", + "使用无参考文本模式时建议使用微调的GPT": "Please use your trained GPT model if you don't use reference audio.", + "后续将支持转音素、手工修改音素、语音合成分步执行。": " Step-to-step phoneme transformation and modification coming soon!", + "gpt采样参数(无参考文本时不要太低):": "GPT parameters:", + "按标点符号切": "Slice by every punct", "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.": "This software is open source under the MIT license. The author does not have any control over the software. Users who use the software and distribute the sounds exported by the software are solely responsible.
If you do not agree with this clause, you cannot use or reference any codes and files within the software package. See the root directory Agreement-LICENSE for details.", "0-前置数据集获取工具": "0-Fetch dataset", "0a-UVR5人声伴奏分离&去混响去延迟工具": "0a-UVR5 webui (for vocal separation, deecho, dereverb and denoise)", From 37206edbd967717cb0d95d88b8415a13d226908e Mon Sep 17 00:00:00 2001 From: DW <147780325+D3lik@users.noreply.github.com> Date: Mon, 4 Mar 2024 20:37:18 +1100 Subject: [PATCH 41/63] Update inference_webui.py --- GPT_SoVITS/inference_webui.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index df32d365..ee099627 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -584,7 +584,7 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: interactive=True, ) with gr.Row(): - gr.Markdown("gpt采样参数(无参考文本时不要太低):") + gr.Markdown(value=i18n("gpt采样参数(无参考文本时不要太低):")) top_k = gr.Slider(minimum=1,maximum=100,step=1,label=i18n("top_k"),value=5,interactive=True) top_p = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("top_p"),value=1,interactive=True) temperature = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("temperature"),value=1,interactive=True) From 93075f52ddc80c70e634534a2ad960d1f2b66e58 Mon Sep 17 00:00:00 2001 From: Yuze Wang Date: Tue, 5 Mar 2024 15:19:32 +0800 Subject: [PATCH 42/63] added the ability to automatically switch to cpu if fast whisper don't compile with cuda --- tools/asr/fasterwhisper_asr.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/tools/asr/fasterwhisper_asr.py b/tools/asr/fasterwhisper_asr.py index 5f49de70..9371324c 100644 --- a/tools/asr/fasterwhisper_asr.py +++ b/tools/asr/fasterwhisper_asr.py @@ -4,6 +4,7 @@ os.environ["HF_ENDPOINT"]="https://hf-mirror.com" import traceback import requests from glob import glob +import torch from faster_whisper import WhisperModel from tqdm import tqdm @@ -45,8 +46,9 @@ def execute_asr(input_folder, output_folder, model_size, language,precision): if language == 'auto': language = None #不设置语种由模型自动输出概率最高的语种 print("loading faster whisper model:",model_size,model_path) + device = 'cuda' if torch.cuda.is_available() else 'cpu' try: - model = WhisperModel(model_path, device="cuda", compute_type=precision) + model = WhisperModel(model_path, device=device, compute_type=precision) except: return print(traceback.format_exc()) output = [] From b65dae788df75e29e1b16cec1681918efd7ae4f7 Mon Sep 17 00:00:00 2001 From: GoHomeToMacDonal Date: Wed, 6 Mar 2024 01:37:55 +0800 Subject: [PATCH 43/63] torchscript for T2STransformer --- GPT_SoVITS/AR/models/t2s_model.py | 245 +++++++++++++++++++++++------- 1 file changed, 191 insertions(+), 54 deletions(-) diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index c8ad3d82..5649627e 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -1,5 +1,7 @@ # modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_model.py # reference: https://github.com/lifeiteng/vall-e +from typing import List + import torch from tqdm import tqdm @@ -35,6 +37,140 @@ default_config = { } +@torch.jit.script +class T2SMLP: + def __init__(self, w1, b1, w2, b2): + self.w1 = w1 + self.b1 = b1 + self.w2 = w2 + self.b2 = b2 + + def forward(self, x): + x = F.relu(F.linear(x, self.w1, self.b1)) + x = F.linear(x, self.w2, self.b2) + return x + + +@torch.jit.script +class T2SBlock: + def __init__( + self, + 
num_heads, + hidden_dim: int, + mlp: T2SMLP, + qkv_w, + qkv_b, + out_w, + out_b, + norm_w1, + norm_b1, + norm_eps1, + norm_w2, + norm_b2, + norm_eps2, + ): + self.num_heads = num_heads + self.mlp = mlp + self.hidden_dim: int = hidden_dim + self.qkv_w = qkv_w + self.qkv_b = qkv_b + self.out_w = out_w + self.out_b = out_b + self.norm_w1 = norm_w1 + self.norm_b1 = norm_b1 + self.norm_eps1 = norm_eps1 + self.norm_w2 = norm_w2 + self.norm_b2 = norm_b2 + self.norm_eps2 = norm_eps2 + + def process_prompt(self, x, attn_mask : torch.Tensor): + q, k, v = F.linear(x, self.qkv_w, self.qkv_b).chunk(3, dim=-1) + + batch_size = q.shape[0] + q_len = q.shape[1] + kv_len = k.shape[1] + + k_cache = k + v_cache = v + + q = q.view(batch_size, q_len, self.num_heads, -1).transpose(1, 2) + k = k_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) + v = v_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) + + attn = F.scaled_dot_product_attention(q, k, v, ~attn_mask) + + attn = attn.permute(2, 0, 1, 3).reshape(batch_size, -1, self.hidden_dim) + attn = F.linear(attn, self.out_w, self.out_b) + + x = F.layer_norm( + x + attn, [self.hidden_dim], self.norm_w1, self.norm_b1, self.norm_eps1 + ) + x = F.layer_norm( + x + self.mlp.forward(x), + [self.hidden_dim], + self.norm_w2, + self.norm_b2, + self.norm_eps2, + ) + return x, k_cache, v_cache + + def decode_next_token(self, x, k_cache, v_cache): + q, k, v = F.linear(x, self.qkv_w, self.qkv_b).chunk(3, dim=-1) + + k_cache = torch.cat([k_cache, k], dim=1) + v_cache = torch.cat([v_cache, v], dim=1) + kv_len = k_cache.shape[1] + + batch_size = q.shape[0] + q_len = q.shape[1] + + q = q.view(batch_size, q_len, self.num_heads, -1).transpose(1, 2) + k = k_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) + v = v_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) + + + attn = F.scaled_dot_product_attention(q, k, v) + + attn = attn.permute(2, 0, 1, 3).reshape(batch_size, -1, self.hidden_dim) + attn = F.linear(attn, self.out_w, self.out_b) + + x = F.layer_norm( + x + attn, [self.hidden_dim], self.norm_w1, self.norm_b1, self.norm_eps1 + ) + x = F.layer_norm( + x + self.mlp.forward(x), + [self.hidden_dim], + self.norm_w2, + self.norm_b2, + self.norm_eps2, + ) + return x, k_cache, v_cache + + +@torch.jit.script +class T2STransformer: + def __init__(self, num_blocks : int, blocks: List[T2SBlock]): + self.num_blocks : int = num_blocks + self.blocks = blocks + + def process_prompt( + self, x, attn_mask : torch.Tensor): + k_cache : List[torch.Tensor] = [] + v_cache : List[torch.Tensor] = [] + for i in range(self.num_blocks): + x, k_cache_, v_cache_ = self.blocks[i].process_prompt(x, attn_mask) + k_cache.append(k_cache_) + v_cache.append(v_cache_) + return x, k_cache, v_cache + + def decode_next_token( + self, x, k_cache: List[torch.Tensor], v_cache: List[torch.Tensor] + ): + for i in range(self.num_blocks): + x, k_cache[i], v_cache[i] = self.blocks[i].decode_next_token(x, k_cache[i], v_cache[i]) + return x, k_cache, v_cache + + class Text2SemanticDecoder(nn.Module): def __init__(self, config, norm_first=False, top_k=3): super(Text2SemanticDecoder, self).__init__() @@ -89,6 +225,37 @@ class Text2SemanticDecoder(nn.Module): ignore_index=self.EOS, ) + blocks = [] + + for i in range(self.num_layers): + layer = self.h.layers[i] + t2smlp = T2SMLP( + layer.linear1.weight, + layer.linear1.bias, + layer.linear2.weight, + layer.linear2.bias + ) + # (layer.self_attn.in_proj_weight, layer.self_attn.in_proj_bias) + block = T2SBlock( + 
self.num_head, + self.model_dim, + t2smlp, + layer.self_attn.in_proj_weight, + layer.self_attn.in_proj_bias, + layer.self_attn.out_proj.weight, + layer.self_attn.out_proj.bias, + layer.norm1.weight, + layer.norm1.bias, + layer.norm1.eps, + layer.norm2.weight, + layer.norm2.bias, + layer.norm2.eps + ) + + blocks.append(block) + + self.t2s_transformer = T2STransformer(self.num_layers, blocks) + def make_input_data(self, x, x_lens, y, y_lens, bert_feature): x = self.ar_text_embedding(x) x = x + self.bert_proj(bert_feature.transpose(1, 2)) @@ -343,17 +510,9 @@ class Text2SemanticDecoder(nn.Module): x_attn_mask = torch.zeros((x_len, x_len), dtype=torch.bool) stop = False # print(1111111,self.num_layers) - cache = { - "all_stage": self.num_layers, - "k": [None] * self.num_layers, ###根据配置自己手写 - "v": [None] * self.num_layers, - # "xy_pos":None,##y_pos位置编码每次都不一样的没法缓存,每次都要重新拼xy_pos.主要还是写法原因,其实是可以历史统一一样的,但也没啥计算量就不管了 - "y_emb": None, ##只需要对最新的samples求emb,再拼历史的就行 - # "logits":None,###原版就已经只对结尾求再拼接了,不用管 - # "xy_dec":None,###不需要,本来只需要最后一个做logits - "first_infer": 1, - "stage": 0, - } + + k_cache = None + v_cache = None ################### first step ########################## if y is not None: y_emb = self.ar_audio_embedding(y) @@ -361,7 +520,6 @@ class Text2SemanticDecoder(nn.Module): prefix_len = y.shape[1] y_pos = self.ar_audio_position(y_emb) xy_pos = torch.concat([x, y_pos], dim=1) - cache["y_emb"] = y_emb ref_free = False else: y_emb = None @@ -373,10 +531,10 @@ class Text2SemanticDecoder(nn.Module): ref_free = True x_attn_mask_pad = F.pad( - x_attn_mask, - (0, y_len), ###xx的纯0扩展到xx纯0+xy纯1,(x,x+y) - value=True, - ) + x_attn_mask, + (0, y_len), ###xx的纯0扩展到xx纯0+xy纯1,(x,x+y) + value=True, + ) y_attn_mask = F.pad( ###yy的右上1扩展到左边xy的0,(y,x+y) torch.triu(torch.ones(y_len, y_len, dtype=torch.bool), diagonal=1), (x_len, 0), @@ -385,64 +543,43 @@ class Text2SemanticDecoder(nn.Module): xy_attn_mask = torch.concat([x_attn_mask_pad, y_attn_mask], dim=0).to( x.device ) - for idx in tqdm(range(1500)): - - xy_dec, _ = self.h((xy_pos, None), mask=xy_attn_mask, cache=cache) + if xy_attn_mask is not None: + xy_dec, k_cache, v_cache = self.t2s_transformer.process_prompt(xy_pos, xy_attn_mask) + else: + xy_dec, k_cache, v_cache = self.t2s_transformer.decode_next_token(xy_pos, k_cache, v_cache) + logits = self.ar_predict_layer( xy_dec[:, -1] - ) ##不用改,如果用了cache的默认就是只有一帧,取最后一帧一样的 - # samples = topk_sampling(logits, top_k=top_k, top_p=1.0, temperature=temperature) - if(idx==0):###第一次跑不能EOS否则没有了 - logits = logits[:, :-1] ###刨除1024终止符号的概率 + ) + + if idx == 0: + xy_attn_mask = None + logits = logits[:, :-1] samples = sample( logits[0], y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature )[0].unsqueeze(0) - # 本次生成的 semantic_ids 和之前的 y 构成新的 y - # print(samples.shape)#[1,1]#第一个1是bs - y = torch.concat([y, samples], dim=1) + + y = torch.concat([y, samples], dim=1) if early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num: print("use early stop num:", early_stop_num) stop = True if torch.argmax(logits, dim=-1)[0] == self.EOS or samples[0, 0] == self.EOS: - # print(torch.argmax(logits, dim=-1)[0] == self.EOS, samples[0, 0] == self.EOS) stop = True if stop: - # if prompts.shape[1] == y.shape[1]: - # y = torch.concat([y, torch.zeros_like(samples)], dim=1) - # print("bad zero prediction") if y.shape[1]==0: y = torch.concat([y, torch.zeros_like(samples)], dim=1) print("bad zero prediction") print(f"T2S Decoding EOS [{prefix_len} -> {y.shape[1]}]") break - - ####################### update next step 
################################### - cache["first_infer"] = 0 - if cache["y_emb"] is not None: - y_emb = torch.cat( - [cache["y_emb"], self.ar_audio_embedding(y[:, -1:])], dim = 1 - ) - cache["y_emb"] = y_emb - y_pos = self.ar_audio_position(y_emb) - xy_pos = y_pos[:, -1:] - else: - y_emb = self.ar_audio_embedding(y[:, -1:]) - cache["y_emb"] = y_emb - y_pos = self.ar_audio_position(y_emb) - xy_pos = y_pos - y_len = y_pos.shape[1] - ###最右边一列(是错的) - # xy_attn_mask=torch.ones((1, x_len+y_len), dtype=torch.bool,device=xy_pos.device) - # xy_attn_mask[:,-1]=False - ###最下面一行(是对的) - xy_attn_mask = torch.zeros( - (1, x_len + y_len), dtype=torch.bool, device=xy_pos.device - ) + ####################### update next step ################################### + y_emb = self.ar_audio_embedding(y[:, -1:]) + xy_pos = y_emb * self.ar_audio_position.x_scale + self.ar_audio_position.alpha * self.ar_audio_position.pe[:, prompts.shape[1] + idx] + if ref_free: return y[:, :-1], 0 - return y[:, :-1], idx-1 + return y[:, :-1], idx - 1 From 616be20db3cf94f1cd663782fea61b2370704193 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 6 Mar 2024 18:03:21 +0800 Subject: [PATCH 44/63] =?UTF-8?q?=E5=A6=82=E6=9E=9C=E7=94=A8=E8=8B=B1?= =?UTF-8?q?=E6=96=87ASR=E4=B8=8D=E5=86=8D=E9=9C=80=E8=A6=81=E5=85=88?= =?UTF-8?q?=E4=B8=8B=E4=B8=AD=E6=96=87funasr=E6=A8=A1=E5=9E=8B?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 如果用英文ASR不再需要先下中文funasr模型 --- tools/asr/fasterwhisper_asr.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tools/asr/fasterwhisper_asr.py b/tools/asr/fasterwhisper_asr.py index 9371324c..f7b31aab 100644 --- a/tools/asr/fasterwhisper_asr.py +++ b/tools/asr/fasterwhisper_asr.py @@ -10,7 +10,6 @@ from faster_whisper import WhisperModel from tqdm import tqdm from tools.asr.config import check_fw_local_models -from tools.asr.funasr_asr import only_asr os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" @@ -70,6 +69,8 @@ def execute_asr(input_folder, output_folder, model_size, language,precision): if info.language == "zh": print("检测为中文文本,转funasr处理") + if("only_asr"not in globals()): + from tools.asr.funasr_asr import only_asr##如果用英文就不需要导入下载模型 text = only_asr(file) if text == '': From 34e35012f390f5371f2645f06b792b49fbf209be Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 6 Mar 2024 23:27:29 +0800 Subject: [PATCH 45/63] Update Changelog_CN.md --- docs/cn/Changelog_CN.md | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/docs/cn/Changelog_CN.md b/docs/cn/Changelog_CN.md index 8afd3514..d0d07033 100644 --- a/docs/cn/Changelog_CN.md +++ b/docs/cn/Changelog_CN.md @@ -127,7 +127,7 @@ ### 20240221更新 -1-数据处理添加语音降噪选项 +1-数据处理添加语音降噪选项(降噪为只剩16k采样率,除非底噪很大先不急着用哦。) 2-中文日文前端处理优化 https://github.com/RVC-Boss/GPT-SoVITS/pull/559 https://github.com/RVC-Boss/GPT-SoVITS/pull/556 https://github.com/RVC-Boss/GPT-SoVITS/pull/532 https://github.com/RVC-Boss/GPT-SoVITS/pull/507 https://github.com/RVC-Boss/GPT-SoVITS/pull/509 @@ -135,9 +135,22 @@ 4-colab修复不开启公网url +### 20240306更新 + +1-推理加速50%(RTX3090+pytorch2.2.1+cu11.8tested)https://github.com/RVC-Boss/GPT-SoVITS/pull/672 + +2-如果用faster whisper非中文ASR不再需要先下中文funasr模型 + +3-修复uvr5去混响模型 是否混响 反的 https://github.com/RVC-Boss/GPT-SoVITS/pull/610 + +4-faster whisper如果无cuda可用自动cpu推理 https://github.com/RVC-Boss/GPT-SoVITS/pull/675 + +5-修改is_half的判断使在Mac上能正常CPU推理 https://github.com/RVC-Boss/GPT-SoVITS/pull/573 + + todolist: 
-1-中文多音字推理优化 +1-中文多音字推理优化(有没有人来测试的,欢迎把测试结果写在pr评论区里) https://github.com/RVC-Boss/GPT-SoVITS/pull/488 From 3905f6f2feb4e9ccf5ba9be8dac88a9e0518d412 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Wed, 6 Mar 2024 23:29:52 +0800 Subject: [PATCH 46/63] Update Changelog_CN.md --- docs/cn/Changelog_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/cn/Changelog_CN.md b/docs/cn/Changelog_CN.md index d0d07033..625e4782 100644 --- a/docs/cn/Changelog_CN.md +++ b/docs/cn/Changelog_CN.md @@ -137,7 +137,7 @@ ### 20240306更新 -1-推理加速50%(RTX3090+pytorch2.2.1+cu11.8tested)https://github.com/RVC-Boss/GPT-SoVITS/pull/672 +1-推理加速50%(RTX3090+pytorch2.2.1+cu11.8+win10+py39 tested)https://github.com/RVC-Boss/GPT-SoVITS/pull/672 2-如果用faster whisper非中文ASR不再需要先下中文funasr模型 From 223291318e9f7c0ce74820ce6e2c781039880b41 Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Thu, 7 Mar 2024 16:49:54 +0800 Subject: [PATCH 47/63] =?UTF-8?q?=E5=AE=8C=E5=96=84=E5=BC=95=E7=94=A8?= =?UTF-8?q?=EF=BC=8C=E6=97=A0=E4=BA=8B=E5=8F=91=E7=94=9F=EF=BC=8C=E6=B7=A1?= =?UTF-8?q?=E5=AE=9A?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 完善引用,无事发生,淡定 --- README.md | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 6f42aa61..0ce862f7 100644 --- a/README.md +++ b/README.md @@ -218,25 +218,34 @@ ASR processing is performed through Faster_Whisper(ASR marking except Chinese) python ./tools/damo_asr/WhisperASR.py -i -o -f -l ``` A custom list save path is enabled + ## Credits Special thanks to the following projects and contributors: +### Theoretical - [ar-vits](https://github.com/innnky/ar-vits) - [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR) - [vits](https://github.com/jaywalnut310/vits) - [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556) -- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain) - [contentvec](https://github.com/auspicious3000/contentvec/) - [hifi-gan](https://github.com/jik876/hifi-gan) -- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) - [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41) +### Pretrained Models +- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain) +- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) +### Text Frontend for Inference +- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization) +- [LangSegment](https://github.com/juntaosun/LangSegment) +### WebUI Tools - [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui) - [audio-slicer](https://github.com/openvpi/audio-slicer) - [SubFix](https://github.com/cronrpc/SubFix) - [FFmpeg](https://github.com/FFmpeg/FFmpeg) - [gradio](https://github.com/gradio-app/gradio) - +- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) +- [FunASR](https://github.com/alibaba-damo-academy/FunASR) + ## Thanks to all contributors for their efforts From 7b88f8656184566d44252b20f6d16631636840ae Mon Sep 17 00:00:00 2001 From: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com> Date: Thu, 7 Mar 2024 17:06:35 +0800 Subject: [PATCH 48/63] =?UTF-8?q?=E5=A2=9E=E5=8A=A0=E4=B8=AD=E8=8B=B1?= 
=?UTF-8?q?=E6=96=87=E6=95=99=E7=A8=8B=E3=80=81=E7=94=A8=E6=88=B7=E6=8C=87?= =?UTF-8?q?=E5=8D=97=E9=93=BE=E6=8E=A5?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit add url of user guide english version --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 0ce862f7..9b9c90ac 100644 --- a/README.md +++ b/README.md @@ -33,6 +33,8 @@ Unseen speakers few-shot fine-tuning demo: https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb +[教程中文版](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) [User guide English version](https://rentry.co/GPT-SoVITS-guide#/) + ## Installation For users in China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online. @@ -173,7 +175,7 @@ D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin. - [ ] **High Priority:** - [x] Localization in Japanese and English. - - [ ] User guide. + - [x] User guide. - [x] Japanese and English dataset fine tune training. - [ ] **Features:** From 8875d1de01ad1b1b5f710319b7d50e18ebeda30e Mon Sep 17 00:00:00 2001 From: DW <147780325+D3lik@users.noreply.github.com> Date: Thu, 7 Mar 2024 20:29:28 +1100 Subject: [PATCH 49/63] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 9b9c90ac..0b0e2d44 100644 --- a/README.md +++ b/README.md @@ -33,7 +33,7 @@ Unseen speakers few-shot fine-tuning demo: https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb -[教程中文版](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) [User guide English version](https://rentry.co/GPT-SoVITS-guide#/) +[教程中文版](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) [User guide (EN)](https://rentry.co/GPT-SoVITS-guide#/) ## Installation From 13bb68c71531f365b81e49b55f1086ab641326bd Mon Sep 17 00:00:00 2001 From: GoHomeToMacDonal Date: Fri, 8 Mar 2024 19:09:15 +0800 Subject: [PATCH 50/63] Bug fix: inference w/o prompt --- GPT_SoVITS/AR/models/t2s_model.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 5649627e..00f4aa30 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -235,7 +235,7 @@ class Text2SemanticDecoder(nn.Module): layer.linear2.weight, layer.linear2.bias ) - # (layer.self_attn.in_proj_weight, layer.self_attn.in_proj_bias) + block = T2SBlock( self.num_head, self.model_dim, @@ -578,7 +578,7 @@ class Text2SemanticDecoder(nn.Module): ####################### update next step ################################### y_emb = self.ar_audio_embedding(y[:, -1:]) - xy_pos = y_emb * self.ar_audio_position.x_scale + self.ar_audio_position.alpha * self.ar_audio_position.pe[:, prompts.shape[1] + idx] + xy_pos = y_emb * self.ar_audio_position.x_scale + self.ar_audio_position.alpha * self.ar_audio_position.pe[:, y_len + idx] if ref_free: return y[:, :-1], 0 From 17832e5c4a38a84f8d091a6e77c9a8813ae4d430 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Fri, 8 Mar 2024 23:41:59 +0800 Subject: [PATCH 51/63] =?UTF-8?q?=09=E5=BF=BD=E7=95=A5ffmpeg=20=20=20.giti?= =?UTF-8?q?gnore=20=09=E4=BD=BFt2s=E6=A8=A1=E5=9E=8B=E6=94=AF=E6=8C=81?= =?UTF-8?q?=E6=89=B9=E9=87=8F=E6=8E=A8=E7=90=86:=20=20=20GPT=5FSoVITS/AR/m?= 
=?UTF-8?q?odels/t2s=5Fmodel.py=20=09=E4=BF=AE=E5=A4=8Dbatch=20bug=20=20?= =?UTF-8?q?=20GPT=5FSoVITS/AR/models/utils.py=20=20=20=20=20=E9=87=8D?= =?UTF-8?q?=E6=9E=84=E7=9A=84tts=20infer=20=20=20GPT=5FSoVITS/TTS=5Finfer?= =?UTF-8?q?=5Fpack/TTS.py=20=09=E6=96=87=E6=9C=AC=E9=A2=84=E5=A4=84?= =?UTF-8?q?=E7=90=86=E6=A8=A1=E5=9D=97=20=20=20GPT=5FSoVITS/TTS=5Finfer=5F?= =?UTF-8?q?pack/TextPreprocessor.py=20=09new=20file=20=20=20GPT=5FSoVITS/T?= =?UTF-8?q?TS=5Finfer=5Fpack/=5F=5Finit=5F=5F.py=20=09=E6=96=87=E6=9C=AC?= =?UTF-8?q?=E6=8B=86=E5=88=86=E6=96=B9=E6=B3=95=E6=A8=A1=E5=9D=97=20=20=20?= =?UTF-8?q?GPT=5FSoVITS/TTS=5Finfer=5Fpack/text=5Fsegmentation=5Fmethod.py?= =?UTF-8?q?=20=09tts=20infer=E9=85=8D=E7=BD=AE=E6=96=87=E4=BB=B6=20=20=20G?= =?UTF-8?q?PT=5FSoVITS/configs/tts=5Finfer.yaml=20=09modified=20=20=20GPT?= =?UTF-8?q?=5FSoVITS/feature=5Fextractor/cnhubert.py=20=09modified=20=20?= =?UTF-8?q?=20GPT=5FSoVITS/inference=5Fgui.py=20=09=E9=87=8D=E6=9E=84?= =?UTF-8?q?=E7=9A=84webui=20=20=20GPT=5FSoVITS/inference=5Fwebui.py=20=09n?= =?UTF-8?q?ew=20file=20=20=20GPT=5FSoVITS/inference=5Fwebui=5Fold.py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .gitignore | 3 +- GPT_SoVITS/AR/models/t2s_model.py | 53 +- GPT_SoVITS/AR/models/utils.py | 12 +- GPT_SoVITS/TTS_infer_pack/TTS.py | 546 +++++++++++++++ GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py | 176 +++++ GPT_SoVITS/TTS_infer_pack/__init__.py | 1 + .../text_segmentation_method.py | 126 ++++ GPT_SoVITS/configs/tts_infer.yaml | 14 + GPT_SoVITS/feature_extractor/cnhubert.py | 9 +- GPT_SoVITS/inference_gui.py | 2 +- GPT_SoVITS/inference_webui.py | 514 ++------------- GPT_SoVITS/inference_webui_old.py | 622 ++++++++++++++++++ 12 files changed, 1587 insertions(+), 491 deletions(-) create mode 100644 GPT_SoVITS/TTS_infer_pack/TTS.py create mode 100644 GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py create mode 100644 GPT_SoVITS/TTS_infer_pack/__init__.py create mode 100644 GPT_SoVITS/TTS_infer_pack/text_segmentation_method.py create mode 100644 GPT_SoVITS/configs/tts_infer.yaml create mode 100644 GPT_SoVITS/inference_webui_old.py diff --git a/.gitignore b/.gitignore index 96e754a9..6f846a91 100644 --- a/.gitignore +++ b/.gitignore @@ -10,5 +10,6 @@ reference GPT_weights SoVITS_weights TEMP - +ffmpeg.exe +ffprobe.exe diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index c8ad3d82..8c31f12a 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -1,5 +1,4 @@ -# modified from https://github.com/yangdongchao/SoundStorm/blob/master/soundstorm/s1/AR/models/t2s_model.py -# reference: https://github.com/lifeiteng/vall-e +# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_model.py import torch from tqdm import tqdm @@ -386,7 +385,9 @@ class Text2SemanticDecoder(nn.Module): x.device ) - + y_list = [None]*y.shape[0] + batch_idx_map = list(range(y.shape[0])) + idx_list = [None]*y.shape[0] for idx in tqdm(range(1500)): xy_dec, _ = self.h((xy_pos, None), mask=xy_attn_mask, cache=cache) @@ -397,17 +398,45 @@ class Text2SemanticDecoder(nn.Module): if(idx==0):###第一次跑不能EOS否则没有了 logits = logits[:, :-1] ###刨除1024终止符号的概率 samples = sample( - logits[0], y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature - )[0].unsqueeze(0) + logits, y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature + )[0] # 本次生成的 semantic_ids 和之前的 y 构成新的 y # print(samples.shape)#[1,1]#第一个1是bs y = 
torch.concat([y, samples], dim=1) + # 移除已经生成完毕的序列 + reserved_idx_of_batch_for_y = None + if (self.EOS in torch.argmax(logits, dim=-1)) or \ + (self.EOS in samples[:, 0]): ###如果生成到EOS,则停止 + l = samples[:, 0]==self.EOS + removed_idx_of_batch_for_y = torch.where(l==True)[0].tolist() + reserved_idx_of_batch_for_y = torch.where(l==False)[0] + # batch_indexs = torch.tensor(batch_idx_map, device=y.device)[removed_idx_of_batch_for_y] + for i in removed_idx_of_batch_for_y: + batch_index = batch_idx_map[i] + idx_list[batch_index] = idx - 1 + y_list[batch_index] = y[i, :-1] + + batch_idx_map = [batch_idx_map[i] for i in reserved_idx_of_batch_for_y.tolist()] + + # 只保留未生成完毕的序列 + if reserved_idx_of_batch_for_y is not None: + # index = torch.LongTensor(batch_idx_map).to(y.device) + y = torch.index_select(y, dim=0, index=reserved_idx_of_batch_for_y) + if cache["y_emb"] is not None: + cache["y_emb"] = torch.index_select(cache["y_emb"], dim=0, index=reserved_idx_of_batch_for_y) + if cache["k"] is not None: + for i in range(self.num_layers): + # 因为kv转置了,所以batch dim是1 + cache["k"][i] = torch.index_select(cache["k"][i], dim=1, index=reserved_idx_of_batch_for_y) + cache["v"][i] = torch.index_select(cache["v"][i], dim=1, index=reserved_idx_of_batch_for_y) + + if early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num: print("use early stop num:", early_stop_num) stop = True - - if torch.argmax(logits, dim=-1)[0] == self.EOS or samples[0, 0] == self.EOS: + + if not (None in idx_list): # print(torch.argmax(logits, dim=-1)[0] == self.EOS, samples[0, 0] == self.EOS) stop = True if stop: @@ -443,6 +472,12 @@ class Text2SemanticDecoder(nn.Module): xy_attn_mask = torch.zeros( (1, x_len + y_len), dtype=torch.bool, device=xy_pos.device ) + + if (None in idx_list): + for i in range(x.shape[0]): + if idx_list[i] is None: + idx_list[i] = 1500-1 ###如果没有生成到EOS,就用最大长度代替 + if ref_free: - return y[:, :-1], 0 - return y[:, :-1], idx-1 + return y_list, [0]*x.shape[0] + return y_list, idx_list diff --git a/GPT_SoVITS/AR/models/utils.py b/GPT_SoVITS/AR/models/utils.py index 9678c7e1..34178fea 100644 --- a/GPT_SoVITS/AR/models/utils.py +++ b/GPT_SoVITS/AR/models/utils.py @@ -115,17 +115,17 @@ def logits_to_probs( top_p: Optional[int] = None, repetition_penalty: float = 1.0, ): - if previous_tokens is not None: - previous_tokens = previous_tokens.squeeze() + # if previous_tokens is not None: + # previous_tokens = previous_tokens.squeeze() # print(logits.shape,previous_tokens.shape) # pdb.set_trace() if previous_tokens is not None and repetition_penalty != 1.0: previous_tokens = previous_tokens.long() - score = torch.gather(logits, dim=0, index=previous_tokens) + score = torch.gather(logits, dim=1, index=previous_tokens) score = torch.where( score < 0, score * repetition_penalty, score / repetition_penalty ) - logits.scatter_(dim=0, index=previous_tokens, src=score) + logits.scatter_(dim=1, index=previous_tokens, src=score) if top_p is not None and top_p < 1.0: sorted_logits, sorted_indices = torch.sort(logits, descending=True) @@ -133,9 +133,9 @@ def logits_to_probs( torch.nn.functional.softmax(sorted_logits, dim=-1), dim=-1 ) sorted_indices_to_remove = cum_probs > top_p - sorted_indices_to_remove[0] = False # keep at least one option + sorted_indices_to_remove[:, 0] = False # keep at least one option indices_to_remove = sorted_indices_to_remove.scatter( - dim=0, index=sorted_indices, src=sorted_indices_to_remove + dim=1, index=sorted_indices, src=sorted_indices_to_remove ) logits = logits.masked_fill(indices_to_remove, 
-float("Inf")) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py new file mode 100644 index 00000000..9f98a246 --- /dev/null +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -0,0 +1,546 @@ +import os, sys + +now_dir = os.getcwd() +sys.path.append(now_dir) +import os +from typing import Generator, List, Union +import numpy as np +import torch +import yaml +from transformers import AutoModelForMaskedLM, AutoTokenizer + +from AR.models.t2s_lightning_module import Text2SemanticLightningModule +from feature_extractor.cnhubert import CNHubert +from module.models import SynthesizerTrn +import librosa +from time import time as ttime +from tools.i18n.i18n import I18nAuto +from my_utils import load_audio +from module.mel_processing import spectrogram_torch +from .text_segmentation_method import splits +from .TextPreprocessor import TextPreprocessor +i18n = I18nAuto() + +# tts_infer.yaml +""" +default: + device: cpu + is_half: false + bert_base_path: GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large + cnhuhbert_base_path: GPT_SoVITS/pretrained_models/chinese-hubert-base + t2s_weights_path: GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt + vits_weights_path: GPT_SoVITS/pretrained_models/s2G488k.pth + +custom: + device: cuda + is_half: true + bert_base_path: GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large + cnhuhbert_base_path: GPT_SoVITS/pretrained_models/chinese-hubert-base + t2s_weights_path: GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt + vits_weights_path: GPT_SoVITS/pretrained_models/s2G488k.pth + + + +""" + + + +class TTS_Config: + def __init__(self, configs: Union[dict, str]): + configs_base_path:str = "GPT_SoVITS/configs/" + os.makedirs(configs_base_path, exist_ok=True) + self.configs_path:str = os.path.join(configs_base_path, "tts_infer.yaml") + if isinstance(configs, str): + self.configs_path = configs + configs:dict = self._load_configs(configs) + + # assert isinstance(configs, dict) + self.default_configs:dict = configs.get("default", None) + if self.default_configs is None: + self.default_configs={ + "device": "cpu", + "is_half": False, + "t2s_weights_path": "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt", + "vits_weights_path": "GPT_SoVITS/pretrained_models/s2G488k.pth", + "cnhuhbert_base_path": "GPT_SoVITS/pretrained_models/chinese-hubert-base", + "bert_base_path": "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large" + } + self.configs:dict = configs.get("custom", self.default_configs) + + self.device = self.configs.get("device") + self.is_half = self.configs.get("is_half") + self.t2s_weights_path = self.configs.get("t2s_weights_path") + self.vits_weights_path = self.configs.get("vits_weights_path") + self.bert_base_path = self.configs.get("bert_base_path") + self.cnhuhbert_base_path = self.configs.get("cnhuhbert_base_path") + + + self.max_sec = None + self.hz:int = 50 + self.semantic_frame_rate:str = "25hz" + self.segment_size:int = 20480 + self.filter_length:int = 2048 + self.sampling_rate:int = 32000 + self.hop_length:int = 640 + self.win_length:int = 2048 + self.n_speakers:int = 300 + + self.langauges:list = ["auto", "en", "zh", "ja", "all_zh", "all_ja"] + + def _load_configs(self, configs_path: str)->dict: + with open(configs_path, 'r') as f: + configs = yaml.load(f, Loader=yaml.FullLoader) + + return configs + + + def save_configs(self, configs_path:str=None)->None: + configs={ + "default": { + "device": "cpu", + "is_half": False, + 
"t2s_weights_path": "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt", + "vits_weights_path": "GPT_SoVITS/pretrained_models/s2G488k.pth", + "cnhuhbert_base_path": "GPT_SoVITS/pretrained_models/chinese-hubert-base", + "bert_base_path": "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large" + }, + "custom": { + "device": str(self.device), + "is_half": self.is_half, + "t2s_weights_path": self.t2s_weights_path, + "vits_weights_path": self.vits_weights_path, + "bert_base_path": self.bert_base_path, + "cnhuhbert_base_path": self.cnhuhbert_base_path + } + } + if configs_path is None: + configs_path = self.configs_path + with open(configs_path, 'w') as f: + yaml.dump(configs, f) + + +class TTS: + def __init__(self, configs: Union[dict, str, TTS_Config]): + if isinstance(configs, TTS_Config): + self.configs = configs + else: + self.configs:TTS_Config = TTS_Config(configs) + + self.t2s_model:Text2SemanticLightningModule = None + self.vits_model:SynthesizerTrn = None + self.bert_tokenizer:AutoTokenizer = None + self.bert_model:AutoModelForMaskedLM = None + self.cnhuhbert_model:CNHubert = None + + self._init_models() + + self.text_preprocessor:TextPreprocessor = \ + TextPreprocessor(self.bert_model, + self.bert_tokenizer, + self.configs.device) + + + self.prompt_cache:dict = { + "ref_audio_path":None, + "prompt_semantic":None, + "refer_spepc":None, + "prompt_text":None, + "prompt_lang":None, + "phones":None, + "bert_features":None, + "norm_text":None, + } + + def _init_models(self,): + self.init_t2s_weights(self.configs.t2s_weights_path) + self.init_vits_weights(self.configs.vits_weights_path) + self.init_bert_weights(self.configs.bert_base_path) + self.init_cnhuhbert_weights(self.configs.cnhuhbert_base_path) + + + + def init_cnhuhbert_weights(self, base_path: str): + self.cnhuhbert_model = CNHubert(base_path) + self.cnhuhbert_model.eval() + if self.configs.is_half == True: + self.cnhuhbert_model = self.cnhuhbert_model.half() + self.cnhuhbert_model = self.cnhuhbert_model.to(self.configs.device) + + + + def init_bert_weights(self, base_path: str): + self.bert_tokenizer = AutoTokenizer.from_pretrained(base_path) + self.bert_model = AutoModelForMaskedLM.from_pretrained(base_path) + if self.configs.is_half: + self.bert_model = self.bert_model.half() + self.bert_model = self.bert_model.to(self.configs.device) + + + + def init_vits_weights(self, weights_path: str): + self.configs.vits_weights_path = weights_path + self.configs.save_configs() + dict_s2 = torch.load(weights_path, map_location=self.configs.device) + hps = dict_s2["config"] + self.configs.filter_length = hps["data"]["filter_length"] + self.configs.segment_size = hps["train"]["segment_size"] + self.configs.sampling_rate = hps["data"]["sampling_rate"] + self.configs.hop_length = hps["data"]["hop_length"] + self.configs.win_length = hps["data"]["win_length"] + self.configs.n_speakers = hps["data"]["n_speakers"] + self.configs.semantic_frame_rate = "25hz" + kwargs = hps["model"] + vits_model = SynthesizerTrn( + self.configs.filter_length // 2 + 1, + self.configs.segment_size // self.configs.hop_length, + n_speakers=self.configs.n_speakers, + **kwargs + ) + # if ("pretrained" not in weights_path): + if hasattr(vits_model, "enc_q"): + del vits_model.enc_q + + if self.configs.is_half: + vits_model = vits_model.half() + vits_model = vits_model.to(self.configs.device) + vits_model.eval() + vits_model.load_state_dict(dict_s2["weight"], strict=False) + self.vits_model = vits_model + + + def init_t2s_weights(self, 
weights_path: str): + self.configs.t2s_weights_path = weights_path + self.configs.save_configs() + self.configs.hz = 50 + dict_s1 = torch.load(weights_path, map_location=self.configs.device) + config = dict_s1["config"] + self.configs.max_sec = config["data"]["max_sec"] + t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) + t2s_model.load_state_dict(dict_s1["weight"]) + if self.configs.is_half: + t2s_model = t2s_model.half() + t2s_model = t2s_model.to(self.configs.device) + t2s_model.eval() + self.t2s_model = t2s_model + + def set_ref_audio(self, ref_audio_path:str): + self._set_prompt_semantic(ref_audio_path) + self._set_ref_spepc(ref_audio_path) + + def _set_ref_spepc(self, ref_audio_path): + audio = load_audio(ref_audio_path, int(self.configs.sampling_rate)) + audio = torch.FloatTensor(audio) + audio_norm = audio + audio_norm = audio_norm.unsqueeze(0) + spec = spectrogram_torch( + audio_norm, + self.configs.filter_length, + self.configs.sampling_rate, + self.configs.hop_length, + self.configs.win_length, + center=False, + ) + spec = spec.to(self.configs.device) + if self.configs.is_half: + spec = spec.half() + # self.refer_spepc = spec + self.prompt_cache["refer_spepc"] = spec + + + def _set_prompt_semantic(self, ref_wav_path:str): + zero_wav = np.zeros( + int(self.configs.sampling_rate * 0.3), + dtype=np.float16 if self.configs.is_half else np.float32, + ) + with torch.no_grad(): + wav16k, sr = librosa.load(ref_wav_path, sr=16000) + if (wav16k.shape[0] > 160000 or wav16k.shape[0] < 48000): + raise OSError(i18n("参考音频在3~10秒范围外,请更换!")) + wav16k = torch.from_numpy(wav16k) + zero_wav_torch = torch.from_numpy(zero_wav) + wav16k = wav16k.to(self.configs.device) + zero_wav_torch = zero_wav_torch.to(self.configs.device) + if self.configs.is_half: + wav16k = wav16k.half() + zero_wav_torch = zero_wav_torch.half() + + wav16k = torch.cat([wav16k, zero_wav_torch]) + hubert_feature = self.cnhuhbert_model.model(wav16k.unsqueeze(0))[ + "last_hidden_state" + ].transpose( + 1, 2 + ) # .float() + codes = self.vits_model.extract_latent(hubert_feature) + + prompt_semantic = codes[0, 0].to(self.configs.device) + self.prompt_cache["prompt_semantic"] = prompt_semantic + + def batch_sequences(self, sequences: List[torch.Tensor], axis: int = 0, pad_value: int = 0): + seq = sequences[0] + ndim = seq.dim() + if axis < 0: + axis += ndim + dtype:torch.dtype = seq.dtype + pad_value = torch.tensor(pad_value, dtype=dtype) + seq_lengths = [seq.shape[axis] for seq in sequences] + max_length = max(seq_lengths) + + padded_sequences = [] + for seq, length in zip(sequences, seq_lengths): + padding = [0] * axis + [0, max_length - length] + [0] * (ndim - axis - 1) + padded_seq = torch.nn.functional.pad(seq, padding, value=pad_value) + padded_sequences.append(padded_seq) + batch = torch.stack(padded_sequences) + return batch + + def to_batch(self, data:list, prompt_data:dict=None, batch_size:int=5, threshold:float=0.75): + + _data:list = [] + index_and_len_list = [] + for idx, item in enumerate(data): + norm_text_len = len(item["norm_text"]) + index_and_len_list.append([idx, norm_text_len]) + + index_and_len_list.sort(key=lambda x: x[1]) + # index_and_len_batch_list = [index_and_len_list[idx:min(idx+batch_size,len(index_and_len_list))] for idx in range(0,len(index_and_len_list),batch_size)] + index_and_len_list = np.array(index_and_len_list, dtype=np.int64) + + # for batch_idx, index_and_len_batch in enumerate(index_and_len_batch_list): + + batch_index_list = [] + batch_index_list_len = 0 + pos = 0 + while 
pos <len(index_and_len_list):
+            pos_end = min(pos+batch_size,len(index_and_len_list))
+            while pos < pos_end:
+                batch=index_and_len_list[pos:pos_end, 1].astype(np.float32)
+                score=batch[(len(batch)-1)//2]/(batch.mean()+1e-8)
+                if (score>=threshold) or (pos_end-pos==1):
+                    batch_index=index_and_len_list[pos:pos_end, 0].tolist()
+                    batch_index_list_len += len(batch_index)
+                    batch_index_list.append(batch_index)
+                    pos = pos_end
+                    break
+                pos_end=pos_end-1
+
+        assert batch_index_list_len == len(data)
+
+        for batch_idx, index_list in enumerate(batch_index_list):
+            item_list = [data[idx] for idx in index_list]
+            phones_list = []
+            # bert_features_list = []
+            all_phones_list = []
+            all_bert_features_list = []
+            norm_text_batch = []
+            for item in item_list:
+                if prompt_data is not None:
+                    all_bert_features = torch.cat([prompt_data["bert_features"].clone(), item["bert_features"]], 1)
+                    all_phones = torch.LongTensor(prompt_data["phones"]+item["phones"])
+                    phones = torch.LongTensor(item["phones"])
+                    # norm_text = prompt_data["norm_text"]+item["norm_text"]
+                else:
+                    all_bert_features = item["bert_features"]
+                    phones = torch.LongTensor(item["phones"])
+                    all_phones = phones.clone()
+                    # norm_text = item["norm_text"]
+
+                phones_list.append(phones)
+                all_phones_list.append(all_phones)
+                all_bert_features_list.append(all_bert_features)
+                norm_text_batch.append(item["norm_text"])
+            # phones_batch = phones_list
+            phones_batch = self.batch_sequences(phones_list, axis=0, pad_value=0)
+            all_phones_batch = self.batch_sequences(all_phones_list, axis=0, pad_value=0)
+            all_bert_features_batch = torch.FloatTensor(len(item_list), 1024, all_phones_batch.shape[-1])
+            all_bert_features_batch.zero_()
+
+            for idx, item in enumerate(all_bert_features_list):
+                if item != None:
+                    all_bert_features_batch[idx, :, : item.shape[-1]] = item
+
+            batch = {
+                "phones": phones_batch,
+                "all_phones": all_phones_batch,
+                "all_bert_features": all_bert_features_batch,
+                "norm_text": norm_text_batch
+            }
+            _data.append(batch)
+
+        return _data, batch_index_list
+
+    def recovery_order(self, data:list, batch_index_list:list)->list:
+        lenght = len(sum(batch_index_list, []))
+        _data = [None]*lenght
+        for i, index_list in enumerate(batch_index_list):
+            for j, index in enumerate(index_list):
+                _data[index] = data[i][j]
+        return _data
+
+
+
+
+    def run(self, inputs:dict):
+        """
+        Text to speech inference.
+
+        Args:
+            inputs (dict):
+                {
+                    "text": "",
+                    "text_lang": "",
+                    "ref_audio_path": "",
+                    "prompt_text": "",
+                    "prompt_lang": "",
+                    "top_k": 5,
+                    "top_p": 0.9,
+                    "temperature": 0.6,
+                    "text_split_method": "",
+                    "batch_size": 1,
+                    "batch_threshold": 0.75
+                }
+        returns:
+            Tuple[int, np.ndarray]: sampling rate and audio data.
+ """ + text:str = inputs.get("text", "") + text_lang:str = inputs.get("text_lang", "") + ref_audio_path:str = inputs.get("ref_audio_path", "") + prompt_text:str = inputs.get("prompt_text", "") + prompt_lang:str = inputs.get("prompt_lang", "") + top_k:int = inputs.get("top_k", 20) + top_p:float = inputs.get("top_p", 0.9) + temperature:float = inputs.get("temperature", 0.6) + text_split_method:str = inputs.get("text_split_method", "") + batch_size = inputs.get("batch_size", 1) + batch_threshold = inputs.get("batch_threshold", 0.75) + + no_prompt_text = False + if prompt_text in [None, ""]: + no_prompt_text = True + + assert text_lang in self.configs.langauges + if not no_prompt_text: + assert prompt_lang in self.configs.langauges + + if ref_audio_path in [None, ""] and \ + ((self.prompt_cache["prompt_semantic"] is None) or (self.prompt_cache["refer_spepc"] is None)): + raise ValueError("ref_audio_path cannot be empty, when the reference audio is not set using set_ref_audio()") + + t0 = ttime() + if (ref_audio_path is not None) and (ref_audio_path != self.prompt_cache["ref_audio_path"]): + self.set_ref_audio(ref_audio_path) + + if not no_prompt_text: + prompt_text = prompt_text.strip("\n") + if (prompt_text[-1] not in splits): prompt_text += "。" if prompt_lang != "en" else "." + print(i18n("实际输入的参考文本:"), prompt_text) + if self.prompt_cache["prompt_text"] != prompt_text: + self.prompt_cache["prompt_text"] = prompt_text + self.prompt_cache["prompt_lang"] = prompt_lang + phones, bert_features, norm_text = \ + self.text_preprocessor.segment_and_extract_feature_for_text( + prompt_text, + prompt_lang) + self.prompt_cache["phones"] = phones + self.prompt_cache["bert_features"] = bert_features + self.prompt_cache["norm_text"] = norm_text + + zero_wav = np.zeros( + int(self.configs.sampling_rate * 0.3), + dtype=np.float16 if self.configs.is_half else np.float32, + ) + + + data = self.text_preprocessor.preprocess(text, text_lang, text_split_method) + audio = [] + t1 = ttime() + data, batch_index_list = self.to_batch(data, + prompt_data=self.prompt_cache if not no_prompt_text else None, + batch_size=batch_size, + threshold=batch_threshold) + t2 = ttime() + zero_wav = torch.zeros( + int(self.configs.sampling_rate * 0.3), + dtype=torch.float16 if self.configs.is_half else torch.float32, + device=self.configs.device + ) + + t_34 = 0.0 + t_45 = 0.0 + for item in data: + t3 = ttime() + batch_phones = item["phones"] + all_phoneme_ids = item["all_phones"] + all_bert_features = item["all_bert_features"] + norm_text = item["norm_text"] + + # phones = phones.to(self.configs.device) + all_phoneme_ids = all_phoneme_ids.to(self.configs.device) + all_bert_features = all_bert_features.to(self.configs.device) + if self.configs.is_half: + all_bert_features = all_bert_features.half() + # all_phoneme_len = torch.tensor([all_phoneme_ids.shape[-1]]*all_phoneme_ids.shape[0], device=self.configs.device) + + print(i18n("前端处理后的文本(每句):"), norm_text) + if no_prompt_text : + prompt = None + else: + prompt = self.prompt_cache["prompt_semantic"].clone().repeat(all_phoneme_ids.shape[0], 1).to(self.configs.device) + + with torch.no_grad(): + # pred_semantic = t2s_model.model.infer( + pred_semantic_list, idx_list = self.t2s_model.model.infer_panel( + all_phoneme_ids, + None, + prompt, + all_bert_features, + # prompt_phone_len=ph_offset, + top_k=top_k, + top_p=top_p, + temperature=temperature, + early_stop_num=self.configs.hz * self.configs.max_sec, + ) + t4 = ttime() + t_34 += t4 - t3 + + refer_audio_spepc:torch.Tensor = 
self.prompt_cache["refer_spepc"].clone().to(self.configs.device) + if self.configs.is_half: + refer_audio_spepc = refer_audio_spepc.half() + + ## 直接对batch进行decode 生成的音频会有问题 + # pred_semantic_list = [item[-idx:] for item, idx in zip(pred_semantic_list, idx_list)] + # pred_semantic = self.batch_sequences(pred_semantic_list, axis=0, pad_value=0).unsqueeze(0) + # batch_phones = batch_phones.to(self.configs.device) + # batch_audio_fragment =(self.vits_model.decode( + # pred_semantic, batch_phones, refer_audio_spepc + # ).detach()[:, 0, :]) + # max_audio=torch.abs(batch_audio_fragment).max()#简单防止16bit爆音 + # if max_audio>1: batch_audio_fragment/=max_audio + # batch_audio_fragment = batch_audio_fragment.cpu().numpy() + + ## 改成串行处理 + batch_audio_fragment = [] + for i, idx in enumerate(idx_list): + phones = batch_phones[i].clone().unsqueeze(0).to(self.configs.device) + _pred_semantic = (pred_semantic_list[i][-idx:].unsqueeze(0).unsqueeze(0)) # .unsqueeze(0)#mq要多unsqueeze一次 + audio_fragment =(self.vits_model.decode( + _pred_semantic, phones, refer_audio_spepc + ).detach()[0, 0, :]) + max_audio=torch.abs(audio_fragment).max()#简单防止16bit爆音 + if max_audio>1: audio_fragment/=max_audio + audio_fragment = torch.cat([audio_fragment, zero_wav], dim=0) + batch_audio_fragment.append( + audio_fragment.cpu().numpy() + ) ###试试重建不带上prompt部分 + + audio.append(batch_audio_fragment) + # audio.append(zero_wav) + t5 = ttime() + t_45 += t5 - t4 + + audio = self.recovery_order(audio, batch_index_list) + print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t_34, t_45)) + yield self.configs.sampling_rate, (np.concatenate(audio, 0) * 32768).astype( + np.int16 + ) + \ No newline at end of file diff --git a/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py b/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py new file mode 100644 index 00000000..1504a534 --- /dev/null +++ b/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py @@ -0,0 +1,176 @@ + + +import re +import torch +import LangSegment +from typing import Dict, List, Tuple +from text.cleaner import clean_text +from text import cleaned_text_to_sequence +from transformers import AutoModelForMaskedLM, AutoTokenizer +from .text_segmentation_method import splits, get_method as get_seg_method + +# from tools.i18n.i18n import I18nAuto + +# i18n = I18nAuto() + +def get_first(text:str) -> str: + pattern = "[" + "".join(re.escape(sep) for sep in splits) + "]" + text = re.split(pattern, text)[0].strip() + return text + +def merge_short_text_in_array(texts:str, threshold:int) -> list: + if (len(texts)) < 2: + return texts + result = [] + text = "" + for ele in texts: + text += ele + if len(text) >= threshold: + result.append(text) + text = "" + if (len(text) > 0): + if len(result) == 0: + result.append(text) + else: + result[len(result) - 1] += text + return result + + +class TextPreprocessor: + def __init__(self, bert_model:AutoModelForMaskedLM, + tokenizer:AutoTokenizer, device:torch.device): + self.bert_model = bert_model + self.tokenizer = tokenizer + self.device = device + + def preprocess(self, text:str, lang:str, text_split_method:str)->List[Dict]: + texts = self.pre_seg_text(text, lang, text_split_method) + result = [] + for text in texts: + phones, bert_features, norm_text = self.segment_and_extract_feature_for_text(text, lang) + res={ + "phones": phones, + "bert_features": bert_features, + "norm_text": norm_text, + } + result.append(res) + return result + + def pre_seg_text(self, text:str, lang:str, text_split_method:str): + text = text.strip("\n") + if (text[0] not in splits and 
len(get_first(text)) < 4): + text = "。" + text if lang != "en" else "." + text + # print(i18n("实际输入的目标文本:"), text) + + seg_method = get_seg_method(text_split_method) + text = seg_method(text) + + while "\n\n" in text: + text = text.replace("\n\n", "\n") + # print(i18n("实际输入的目标文本(切句后):"), text) + _texts = text.split("\n") + _texts = merge_short_text_in_array(_texts, 5) + texts = [] + for text in _texts: + # 解决输入目标文本的空行导致报错的问题 + if (len(text.strip()) == 0): + continue + if (text[-1] not in splits): text += "。" if lang != "en" else "." + texts.append(text) + + return texts + + def segment_and_extract_feature_for_text(self, texts:list, language:str)->Tuple[list, torch.Tensor, str]: + textlist, langlist = self.seg_text(texts, language) + phones, bert_features, norm_text = self.extract_bert_feature(textlist, langlist) + + return phones, bert_features, norm_text + + + def seg_text(self, text:str, language:str)->Tuple[list, list]: + + textlist=[] + langlist=[] + if language in ["auto", "zh", "ja"]: + # LangSegment.setfilters(["zh","ja","en","ko"]) + for tmp in LangSegment.getTexts(text): + if tmp["lang"] == "ko": + langlist.append("zh") + elif tmp["lang"] == "en": + langlist.append("en") + else: + # 因无法区别中日文汉字,以用户输入为准 + langlist.append(language if language!="auto" else tmp["lang"]) + textlist.append(tmp["text"]) + elif language == "en": + # LangSegment.setfilters(["en"]) + formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) + while " " in formattext: + formattext = formattext.replace(" ", " ") + textlist.append(formattext) + langlist.append("en") + + elif language in ["all_zh","all_ja"]: + formattext = text + while " " in formattext: + formattext = formattext.replace(" ", " ") + language = language.replace("all_","") + textlist.append(formattext) + langlist.append(language) + + else: + raise ValueError(f"language {language} not supported") + + return textlist, langlist + + + def extract_bert_feature(self, textlist:list, langlist:list): + phones_list = [] + bert_feature_list = [] + norm_text_list = [] + for i in range(len(textlist)): + lang = langlist[i] + phones, word2ph, norm_text = self.clean_text_inf(textlist[i], lang) + _bert_feature = self.get_bert_inf(phones, word2ph, norm_text, lang) + # phones_list.append(phones) + phones_list.extend(phones) + norm_text_list.append(norm_text) + bert_feature_list.append(_bert_feature) + bert_feature = torch.cat(bert_feature_list, dim=1) + # phones = sum(phones_list, []) + norm_text = ''.join(norm_text_list) + + return phones, bert_feature, norm_text + + + def get_bert_feature(self, text:str, word2ph:list)->torch.Tensor: + with torch.no_grad(): + inputs = self.tokenizer(text, return_tensors="pt") + for i in inputs: + inputs[i] = inputs[i].to(self.device) + res = self.bert_model(**inputs, output_hidden_states=True) + res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()[1:-1] + assert len(word2ph) == len(text) + phone_level_feature = [] + for i in range(len(word2ph)): + repeat_feature = res[i].repeat(word2ph[i], 1) + phone_level_feature.append(repeat_feature) + phone_level_feature = torch.cat(phone_level_feature, dim=0) + return phone_level_feature.T + + def clean_text_inf(self, text:str, language:str): + phones, word2ph, norm_text = clean_text(text, language) + phones = cleaned_text_to_sequence(phones) + return phones, word2ph, norm_text + + def get_bert_inf(self, phones:list, word2ph:list, norm_text:str, language:str): + language=language.replace("all_","") + if language == "zh": + feature = self.get_bert_feature(norm_text, 
word2ph).to(self.device) + else: + feature = torch.zeros( + (1024, len(phones)), + dtype=torch.float32, + ).to(self.device) + + return feature \ No newline at end of file diff --git a/GPT_SoVITS/TTS_infer_pack/__init__.py b/GPT_SoVITS/TTS_infer_pack/__init__.py new file mode 100644 index 00000000..74381982 --- /dev/null +++ b/GPT_SoVITS/TTS_infer_pack/__init__.py @@ -0,0 +1 @@ +from . import TTS, text_segmentation_method \ No newline at end of file diff --git a/GPT_SoVITS/TTS_infer_pack/text_segmentation_method.py b/GPT_SoVITS/TTS_infer_pack/text_segmentation_method.py new file mode 100644 index 00000000..7bc6b009 --- /dev/null +++ b/GPT_SoVITS/TTS_infer_pack/text_segmentation_method.py @@ -0,0 +1,126 @@ + + + + +import re +from typing import Callable +from tools.i18n.i18n import I18nAuto + +i18n = I18nAuto() + +METHODS = dict() + +def get_method(name:str)->Callable: + method = METHODS.get(name, None) + if method is None: + raise ValueError(f"Method {name} not found") + return method + +def register_method(name): + def decorator(func): + METHODS[name] = func + return func + return decorator + +splits = {",", "。", "?", "!", ",", ".", "?", "!", "~", ":", ":", "—", "…", } + + +def split(todo_text): + todo_text = todo_text.replace("……", "。").replace("——", ",") + if todo_text[-1] not in splits: + todo_text += "。" + i_split_head = i_split_tail = 0 + len_text = len(todo_text) + todo_texts = [] + while 1: + if i_split_head >= len_text: + break # 结尾一定有标点,所以直接跳出即可,最后一段在上次已加入 + if todo_text[i_split_head] in splits: + i_split_head += 1 + todo_texts.append(todo_text[i_split_tail:i_split_head]) + i_split_tail = i_split_head + else: + i_split_head += 1 + return todo_texts + + +# 不切 +@register_method("cut0") +def cut0(inp): + return inp + + +# 凑四句一切 +@register_method("cut1") +def cut1(inp): + inp = inp.strip("\n") + inps = split(inp) + split_idx = list(range(0, len(inps), 4)) + split_idx[-1] = None + if len(split_idx) > 1: + opts = [] + for idx in range(len(split_idx) - 1): + opts.append("".join(inps[split_idx[idx]: split_idx[idx + 1]])) + else: + opts = [inp] + return "\n".join(opts) + +# 凑50字一切 +@register_method("cut2") +def cut2(inp): + inp = inp.strip("\n") + inps = split(inp) + if len(inps) < 2: + return inp + opts = [] + summ = 0 + tmp_str = "" + for i in range(len(inps)): + summ += len(inps[i]) + tmp_str += inps[i] + if summ > 50: + summ = 0 + opts.append(tmp_str) + tmp_str = "" + if tmp_str != "": + opts.append(tmp_str) + # print(opts) + if len(opts) > 1 and len(opts[-1]) < 50: ##如果最后一个太短了,和前一个合一起 + opts[-2] = opts[-2] + opts[-1] + opts = opts[:-1] + return "\n".join(opts) + +# 按中文句号。切 +@register_method("cut3") +def cut3(inp): + inp = inp.strip("\n") + return "\n".join(["%s" % item for item in inp.strip("。").split("。")]) + +#按英文句号.切 +@register_method("cut4") +def cut4(inp): + inp = inp.strip("\n") + return "\n".join(["%s" % item for item in inp.strip(".").split(".")]) + +# 按标点符号切 +# contributed by https://github.com/AI-Hobbyist/GPT-SoVITS/blob/main/GPT_SoVITS/inference_webui.py +@register_method("cut5") +def cut5(inp): + # if not re.search(r'[^\w\s]', inp[-1]): + # inp += '。' + inp = inp.strip("\n") + punds = r'[,.;?!、,。?!;:…]' + items = re.split(f'({punds})', inp) + mergeitems = ["".join(group) for group in zip(items[::2], items[1::2])] + # 在句子不存在符号或句尾无符号的时候保证文本完整 + if len(items)%2 == 1: + mergeitems.append(items[-1]) + opt = "\n".join(mergeitems) + return opt + + + +if __name__ == '__main__': + method = get_method("cut1") + print(method("你好,我是小明。你好,我是小红。你好,我是小刚。你好,我是小张。")) + \ No newline at end 
of file diff --git a/GPT_SoVITS/configs/tts_infer.yaml b/GPT_SoVITS/configs/tts_infer.yaml new file mode 100644 index 00000000..5f56a4ec --- /dev/null +++ b/GPT_SoVITS/configs/tts_infer.yaml @@ -0,0 +1,14 @@ +custom: + bert_base_path: GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large + cnhuhbert_base_path: GPT_SoVITS/pretrained_models/chinese-hubert-base + device: cuda + is_half: true + t2s_weights_path: GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt + vits_weights_path: GPT_SoVITS/pretrained_models/s2G488k.pth +default: + bert_base_path: GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large + cnhuhbert_base_path: GPT_SoVITS/pretrained_models/chinese-hubert-base + device: cpu + is_half: false + t2s_weights_path: GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt + vits_weights_path: GPT_SoVITS/pretrained_models/s2G488k.pth diff --git a/GPT_SoVITS/feature_extractor/cnhubert.py b/GPT_SoVITS/feature_extractor/cnhubert.py index dc155bdd..7dffbdb2 100644 --- a/GPT_SoVITS/feature_extractor/cnhubert.py +++ b/GPT_SoVITS/feature_extractor/cnhubert.py @@ -20,13 +20,16 @@ cnhubert_base_path = None class CNHubert(nn.Module): - def __init__(self): + def __init__(self, base_path:str=None): super().__init__() - self.model = HubertModel.from_pretrained(cnhubert_base_path) + if base_path is None: + base_path = cnhubert_base_path + self.model = HubertModel.from_pretrained(base_path) self.feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( - cnhubert_base_path + base_path ) + def forward(self, x): input_values = self.feature_extractor( x, return_tensors="pt", sampling_rate=16000 diff --git a/GPT_SoVITS/inference_gui.py b/GPT_SoVITS/inference_gui.py index f6cfdc5e..830c66de 100644 --- a/GPT_SoVITS/inference_gui.py +++ b/GPT_SoVITS/inference_gui.py @@ -7,7 +7,7 @@ import soundfile as sf from tools.i18n.i18n import I18nAuto i18n = I18nAuto() -from GPT_SoVITS.inference_webui import change_gpt_weights, change_sovits_weights, get_tts_wav +from GPT_SoVITS.inference_webui_old import change_gpt_weights, change_sovits_weights, get_tts_wav class GPTSoVITSGUI(QMainWindow): diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index ee099627..68a2136a 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -7,7 +7,7 @@ 全部按日文识别 ''' import os, re, logging -import LangSegment + logging.getLogger("markdown_it").setLevel(logging.ERROR) logging.getLogger("urllib3").setLevel(logging.ERROR) logging.getLogger("httpcore").setLevel(logging.ERROR) @@ -17,32 +17,12 @@ logging.getLogger("charset_normalizer").setLevel(logging.ERROR) logging.getLogger("torchaudio._extension").setLevel(logging.ERROR) import pdb import torch +# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_lightning_module.py +import os, sys -if os.path.exists("./gweight.txt"): - with open("./gweight.txt", 'r', encoding="utf-8") as file: - gweight_data = file.read() - gpt_path = os.environ.get( - "gpt_path", gweight_data) -else: - gpt_path = os.environ.get( - "gpt_path", "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt") +now_dir = os.getcwd() +sys.path.append(now_dir) -if os.path.exists("./sweight.txt"): - with open("./sweight.txt", 'r', encoding="utf-8") as file: - sweight_data = file.read() - sovits_path = os.environ.get("sovits_path", sweight_data) -else: - sovits_path = os.environ.get("sovits_path", "GPT_SoVITS/pretrained_models/s2G488k.pth") -# gpt_path = os.environ.get( -# 
"gpt_path", "pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt" -# ) -# sovits_path = os.environ.get("sovits_path", "pretrained_models/s2G488k.pth") -cnhubert_base_path = os.environ.get( - "cnhubert_base_path", "GPT_SoVITS/pretrained_models/chinese-hubert-base" -) -bert_path = os.environ.get( - "bert_path", "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large" -) infer_ttswebui = os.environ.get("infer_ttswebui", 9872) infer_ttswebui = int(infer_ttswebui) is_share = os.environ.get("is_share", "False") @@ -51,22 +31,9 @@ if "_CUDA_VISIBLE_DEVICES" in os.environ: os.environ["CUDA_VISIBLE_DEVICES"] = os.environ["_CUDA_VISIBLE_DEVICES"] is_half = eval(os.environ.get("is_half", "True")) and not torch.backends.mps.is_available() import gradio as gr -from transformers import AutoModelForMaskedLM, AutoTokenizer -import numpy as np -import librosa -from feature_extractor import cnhubert - -cnhubert.cnhubert_base_path = cnhubert_base_path - -from module.models import SynthesizerTrn -from AR.models.t2s_lightning_module import Text2SemanticLightningModule -from text import cleaned_text_to_sequence -from text.cleaner import clean_text -from time import time as ttime -from module.mel_processing import spectrogram_torch -from my_utils import load_audio +from TTS_infer_pack.TTS import TTS, TTS_Config +from TTS_infer_pack.text_segmentation_method import cut1, cut2, cut3, cut4, cut5 from tools.i18n.i18n import I18nAuto - i18n = I18nAuto() os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1' # 确保直接启动推理UI时也能够设置。 @@ -76,128 +43,6 @@ if torch.cuda.is_available(): else: device = "cpu" -tokenizer = AutoTokenizer.from_pretrained(bert_path) -bert_model = AutoModelForMaskedLM.from_pretrained(bert_path) -if is_half == True: - bert_model = bert_model.half().to(device) -else: - bert_model = bert_model.to(device) - - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors="pt") - for i in inputs: - inputs[i] = inputs[i].to(device) - res = bert_model(**inputs, output_hidden_states=True) - res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()[1:-1] - assert len(word2ph) == len(text) - phone_level_feature = [] - for i in range(len(word2ph)): - repeat_feature = res[i].repeat(word2ph[i], 1) - phone_level_feature.append(repeat_feature) - phone_level_feature = torch.cat(phone_level_feature, dim=0) - return phone_level_feature.T - - -class DictToAttrRecursive(dict): - def __init__(self, input_dict): - super().__init__(input_dict) - for key, value in input_dict.items(): - if isinstance(value, dict): - value = DictToAttrRecursive(value) - self[key] = value - setattr(self, key, value) - - def __getattr__(self, item): - try: - return self[item] - except KeyError: - raise AttributeError(f"Attribute {item} not found") - - def __setattr__(self, key, value): - if isinstance(value, dict): - value = DictToAttrRecursive(value) - super(DictToAttrRecursive, self).__setitem__(key, value) - super().__setattr__(key, value) - - def __delattr__(self, item): - try: - del self[item] - except KeyError: - raise AttributeError(f"Attribute {item} not found") - - -ssl_model = cnhubert.get_model() -if is_half == True: - ssl_model = ssl_model.half().to(device) -else: - ssl_model = ssl_model.to(device) - - -def change_sovits_weights(sovits_path): - global vq_model, hps - dict_s2 = torch.load(sovits_path, map_location="cpu") - hps = dict_s2["config"] - hps = DictToAttrRecursive(hps) - hps.model.semantic_frame_rate = "25hz" - vq_model = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - 
hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model - ) - if ("pretrained" not in sovits_path): - del vq_model.enc_q - if is_half == True: - vq_model = vq_model.half().to(device) - else: - vq_model = vq_model.to(device) - vq_model.eval() - print(vq_model.load_state_dict(dict_s2["weight"], strict=False)) - with open("./sweight.txt", "w", encoding="utf-8") as f: - f.write(sovits_path) - - -change_sovits_weights(sovits_path) - - -def change_gpt_weights(gpt_path): - global hz, max_sec, t2s_model, config - hz = 50 - dict_s1 = torch.load(gpt_path, map_location="cpu") - config = dict_s1["config"] - max_sec = config["data"]["max_sec"] - t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) - t2s_model.load_state_dict(dict_s1["weight"]) - if is_half == True: - t2s_model = t2s_model.half() - t2s_model = t2s_model.to(device) - t2s_model.eval() - total = sum([param.nelement() for param in t2s_model.parameters()]) - print("Number of parameter: %.2fM" % (total / 1e6)) - with open("./gweight.txt", "w", encoding="utf-8") as f: f.write(gpt_path) - - -change_gpt_weights(gpt_path) - - -def get_spepc(hps, filename): - audio = load_audio(filename, int(hps.data.sampling_rate)) - audio = torch.FloatTensor(audio) - audio_norm = audio - audio_norm = audio_norm.unsqueeze(0) - spec = spectrogram_torch( - audio_norm, - hps.data.filter_length, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - center=False, - ) - return spec - - dict_language = { i18n("中文"): "all_zh",#全部按中文识别 i18n("英文"): "en",#全部按英文识别#######不变 @@ -207,313 +52,36 @@ dict_language = { i18n("多语种混合"): "auto",#多语种启动切分识别语种 } +cut_method = { + i18n("不切"):"cut0", + i18n("凑四句一切"): "cut1", + i18n("凑50字一切"): "cut2", + i18n("按中文句号。切"): "cut3", + i18n("按英文句号.切"): "cut4", + i18n("按标点符号切"): "cut5", +} -def clean_text_inf(text, language): - phones, word2ph, norm_text = clean_text(text, language) - phones = cleaned_text_to_sequence(phones) - return phones, word2ph, norm_text - -dtype=torch.float16 if is_half == True else torch.float32 -def get_bert_inf(phones, word2ph, norm_text, language): - language=language.replace("all_","") - if language == "zh": - bert = get_bert_feature(norm_text, word2ph).to(device)#.to(dtype) - else: - bert = torch.zeros( - (1024, len(phones)), - dtype=torch.float16 if is_half == True else torch.float32, - ).to(device) - - return bert - - -splits = {",", "。", "?", "!", ",", ".", "?", "!", "~", ":", ":", "—", "…", } - - -def get_first(text): - pattern = "[" + "".join(re.escape(sep) for sep in splits) + "]" - text = re.split(pattern, text)[0].strip() - return text - - -def get_phones_and_bert(text,language): - if language in {"en","all_zh","all_ja"}: - language = language.replace("all_","") - if language == "en": - LangSegment.setfilters(["en"]) - formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) - else: - # 因无法区别中日文汉字,以用户输入为准 - formattext = text - while " " in formattext: - formattext = formattext.replace(" ", " ") - phones, word2ph, norm_text = clean_text_inf(formattext, language) - if language == "zh": - bert = get_bert_feature(norm_text, word2ph).to(device) - else: - bert = torch.zeros( - (1024, len(phones)), - dtype=torch.float16 if is_half == True else torch.float32, - ).to(device) - elif language in {"zh", "ja","auto"}: - textlist=[] - langlist=[] - LangSegment.setfilters(["zh","ja","en","ko"]) - if language == "auto": - for tmp in LangSegment.getTexts(text): - if tmp["lang"] == "ko": - langlist.append("zh") - 
textlist.append(tmp["text"]) - else: - langlist.append(tmp["lang"]) - textlist.append(tmp["text"]) - else: - for tmp in LangSegment.getTexts(text): - if tmp["lang"] == "en": - langlist.append(tmp["lang"]) - else: - # 因无法区别中日文汉字,以用户输入为准 - langlist.append(language) - textlist.append(tmp["text"]) - print(textlist) - print(langlist) - phones_list = [] - bert_list = [] - norm_text_list = [] - for i in range(len(textlist)): - lang = langlist[i] - phones, word2ph, norm_text = clean_text_inf(textlist[i], lang) - bert = get_bert_inf(phones, word2ph, norm_text, lang) - phones_list.append(phones) - norm_text_list.append(norm_text) - bert_list.append(bert) - bert = torch.cat(bert_list, dim=1) - phones = sum(phones_list, []) - norm_text = ''.join(norm_text_list) - - return phones,bert.to(dtype),norm_text - - -def merge_short_text_in_array(texts, threshold): - if (len(texts)) < 2: - return texts - result = [] - text = "" - for ele in texts: - text += ele - if len(text) >= threshold: - result.append(text) - text = "" - if (len(text) > 0): - if len(result) == 0: - result.append(text) - else: - result[len(result) - 1] += text - return result - -def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language, how_to_cut=i18n("不切"), top_k=20, top_p=0.6, temperature=0.6, ref_free = False): - if prompt_text is None or len(prompt_text) == 0: - ref_free = True - t0 = ttime() - prompt_language = dict_language[prompt_language] - text_language = dict_language[text_language] - if not ref_free: - prompt_text = prompt_text.strip("\n") - if (prompt_text[-1] not in splits): prompt_text += "。" if prompt_language != "en" else "." - print(i18n("实际输入的参考文本:"), prompt_text) - text = text.strip("\n") - if (text[0] not in splits and len(get_first(text)) < 4): text = "。" + text if text_language != "en" else "." + text - - print(i18n("实际输入的目标文本:"), text) - zero_wav = np.zeros( - int(hps.data.sampling_rate * 0.3), - dtype=np.float16 if is_half == True else np.float32, - ) - with torch.no_grad(): - wav16k, sr = librosa.load(ref_wav_path, sr=16000) - if (wav16k.shape[0] > 160000 or wav16k.shape[0] < 48000): - raise OSError(i18n("参考音频在3~10秒范围外,请更换!")) - wav16k = torch.from_numpy(wav16k) - zero_wav_torch = torch.from_numpy(zero_wav) - if is_half == True: - wav16k = wav16k.half().to(device) - zero_wav_torch = zero_wav_torch.half().to(device) - else: - wav16k = wav16k.to(device) - zero_wav_torch = zero_wav_torch.to(device) - wav16k = torch.cat([wav16k, zero_wav_torch]) - ssl_content = ssl_model.model(wav16k.unsqueeze(0))[ - "last_hidden_state" - ].transpose( - 1, 2 - ) # .float() - codes = vq_model.extract_latent(ssl_content) - - prompt_semantic = codes[0, 0] - t1 = ttime() - - if (how_to_cut == i18n("凑四句一切")): - text = cut1(text) - elif (how_to_cut == i18n("凑50字一切")): - text = cut2(text) - elif (how_to_cut == i18n("按中文句号。切")): - text = cut3(text) - elif (how_to_cut == i18n("按英文句号.切")): - text = cut4(text) - elif (how_to_cut == i18n("按标点符号切")): - text = cut5(text) - while "\n\n" in text: - text = text.replace("\n\n", "\n") - print(i18n("实际输入的目标文本(切句后):"), text) - texts = text.split("\n") - texts = merge_short_text_in_array(texts, 5) - audio_opt = [] - if not ref_free: - phones1,bert1,norm_text1=get_phones_and_bert(prompt_text, prompt_language) - - for text in texts: - # 解决输入目标文本的空行导致报错的问题 - if (len(text.strip()) == 0): - continue - if (text[-1] not in splits): text += "。" if text_language != "en" else "." 
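(Aside on the word2ph mapping used by get_bert_feature / get_phones_and_bert above: each character-level BERT hidden vector is repeated once per phoneme that character produces, so the resulting feature matrix lines up with the phoneme sequence handed to the decoder; this is also why the code asserts len(word2ph) == len(text). A minimal, self-contained sketch of that expansion follows -- illustrative names and shapes only, not the repository's exact code.)

import torch

def expand_to_phone_level(char_feats: torch.Tensor, word2ph: list) -> torch.Tensor:
    # char_feats: (num_chars, hidden) -- one BERT hidden vector per input character
    # word2ph[i]: how many phonemes character i expands to
    phone_level = [char_feats[i].repeat(word2ph[i], 1) for i in range(len(word2ph))]
    return torch.cat(phone_level, dim=0).T  # (hidden, num_phones)

# e.g. three characters mapping to 2, 1 and 3 phonemes with a 1024-dim BERT hidden size:
demo = expand_to_phone_level(torch.randn(3, 1024), [2, 1, 3])
print(demo.shape)  # torch.Size([1024, 6])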
- print(i18n("实际输入的目标文本(每句):"), text) - phones2,bert2,norm_text2=get_phones_and_bert(text, text_language) - print(i18n("前端处理后的文本(每句):"), norm_text2) - if not ref_free: - bert = torch.cat([bert1, bert2], 1) - all_phoneme_ids = torch.LongTensor(phones1+phones2).to(device).unsqueeze(0) - else: - bert = bert2 - all_phoneme_ids = torch.LongTensor(phones2).to(device).unsqueeze(0) - - bert = bert.to(device).unsqueeze(0) - all_phoneme_len = torch.tensor([all_phoneme_ids.shape[-1]]).to(device) - prompt = prompt_semantic.unsqueeze(0).to(device) - t2 = ttime() - with torch.no_grad(): - # pred_semantic = t2s_model.model.infer( - pred_semantic, idx = t2s_model.model.infer_panel( - all_phoneme_ids, - all_phoneme_len, - None if ref_free else prompt, - bert, - # prompt_phone_len=ph_offset, - top_k=top_k, - top_p=top_p, - temperature=temperature, - early_stop_num=hz * max_sec, - ) - t3 = ttime() - # print(pred_semantic.shape,idx) - pred_semantic = pred_semantic[:, -idx:].unsqueeze( - 0 - ) # .unsqueeze(0)#mq要多unsqueeze一次 - refer = get_spepc(hps, ref_wav_path) # .to(device) - if is_half == True: - refer = refer.half().to(device) - else: - refer = refer.to(device) - # audio = vq_model.decode(pred_semantic, all_phoneme_ids, refer).detach().cpu().numpy()[0, 0] - audio = ( - vq_model.decode( - pred_semantic, torch.LongTensor(phones2).to(device).unsqueeze(0), refer - ) - .detach() - .cpu() - .numpy()[0, 0] - ) ###试试重建不带上prompt部分 - max_audio=np.abs(audio).max()#简单防止16bit爆音 - if max_audio>1:audio/=max_audio - audio_opt.append(audio) - audio_opt.append(zero_wav) - t4 = ttime() - print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - yield hps.data.sampling_rate, (np.concatenate(audio_opt, 0) * 32768).astype( - np.int16 - ) - - -def split(todo_text): - todo_text = todo_text.replace("……", "。").replace("——", ",") - if todo_text[-1] not in splits: - todo_text += "。" - i_split_head = i_split_tail = 0 - len_text = len(todo_text) - todo_texts = [] - while 1: - if i_split_head >= len_text: - break # 结尾一定有标点,所以直接跳出即可,最后一段在上次已加入 - if todo_text[i_split_head] in splits: - i_split_head += 1 - todo_texts.append(todo_text[i_split_tail:i_split_head]) - i_split_tail = i_split_head - else: - i_split_head += 1 - return todo_texts - - -def cut1(inp): - inp = inp.strip("\n") - inps = split(inp) - split_idx = list(range(0, len(inps), 4)) - split_idx[-1] = None - if len(split_idx) > 1: - opts = [] - for idx in range(len(split_idx) - 1): - opts.append("".join(inps[split_idx[idx]: split_idx[idx + 1]])) - else: - opts = [inp] - return "\n".join(opts) - - -def cut2(inp): - inp = inp.strip("\n") - inps = split(inp) - if len(inps) < 2: - return inp - opts = [] - summ = 0 - tmp_str = "" - for i in range(len(inps)): - summ += len(inps[i]) - tmp_str += inps[i] - if summ > 50: - summ = 0 - opts.append(tmp_str) - tmp_str = "" - if tmp_str != "": - opts.append(tmp_str) - # print(opts) - if len(opts) > 1 and len(opts[-1]) < 50: ##如果最后一个太短了,和前一个合一起 - opts[-2] = opts[-2] + opts[-1] - opts = opts[:-1] - return "\n".join(opts) - - -def cut3(inp): - inp = inp.strip("\n") - return "\n".join(["%s" % item for item in inp.strip("。").split("。")]) - - -def cut4(inp): - inp = inp.strip("\n") - return "\n".join(["%s" % item for item in inp.strip(".").split(".")]) - - -# contributed by https://github.com/AI-Hobbyist/GPT-SoVITS/blob/main/GPT_SoVITS/inference_webui.py -def cut5(inp): - # if not re.search(r'[^\w\s]', inp[-1]): - # inp += '。' - inp = inp.strip("\n") - punds = r'[,.;?!、,。?!;:…]' - items = re.split(f'({punds})', inp) - mergeitems = 
["".join(group) for group in zip(items[::2], items[1::2])] - # 在句子不存在符号或句尾无符号的时候保证文本完整 - if len(items)%2 == 1: - mergeitems.append(items[-1]) - opt = "\n".join(mergeitems) - return opt +tts_config = TTS_Config("GPT_SoVITS/configs/tts_infer.yaml") +tts_config.device = device +tts_config.is_half = is_half +tts_pipline = TTS(tts_config) +gpt_path = tts_config.t2s_weights_path +sovits_path = tts_config.vits_weights_path +def inference(text, text_lang, ref_audio_path, prompt_text, prompt_lang, top_k, top_p, temperature, text_split_method, batch_size): + inputs={ + "text": text, + "text_lang": dict_language[text_lang], + "ref_audio_path": ref_audio_path, + "prompt_text": prompt_text, + "prompt_lang": dict_language[prompt_lang], + "top_k": top_k, + "top_p": top_p, + "temperature": temperature, + "text_split_method": cut_method[text_split_method], + "batch_size":int(batch_size), + } + yield next(tts_pipline.run(inputs)) def custom_sort_key(s): # 使用正则表达式提取字符串中的数字部分和非数字部分 @@ -559,8 +127,8 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: SoVITS_dropdown = gr.Dropdown(label=i18n("SoVITS模型列表"), choices=sorted(SoVITS_names, key=custom_sort_key), value=sovits_path, interactive=True) refresh_button = gr.Button(i18n("刷新模型路径"), variant="primary") refresh_button.click(fn=change_choices, inputs=[], outputs=[SoVITS_dropdown, GPT_dropdown]) - SoVITS_dropdown.change(change_sovits_weights, [SoVITS_dropdown], []) - GPT_dropdown.change(change_gpt_weights, [GPT_dropdown], []) + SoVITS_dropdown.change(tts_pipline.init_vits_weights, [SoVITS_dropdown], []) + GPT_dropdown.change(tts_pipline.init_t2s_weights, [GPT_dropdown], []) gr.Markdown(value=i18n("*请上传并填写参考信息")) with gr.Row(): inp_ref = gr.Audio(label=i18n("请上传3~10秒内参考音频,超过会报错!"), type="filepath") @@ -585,15 +153,19 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: ) with gr.Row(): gr.Markdown(value=i18n("gpt采样参数(无参考文本时不要太低):")) + batch_size = gr.Slider(minimum=1,maximum=20,step=1,label=i18n("batch_size"),value=1,interactive=True) top_k = gr.Slider(minimum=1,maximum=100,step=1,label=i18n("top_k"),value=5,interactive=True) top_p = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("top_p"),value=1,interactive=True) temperature = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("temperature"),value=1,interactive=True) inference_button = gr.Button(i18n("合成语音"), variant="primary") output = gr.Audio(label=i18n("输出的语音")) + + + inference_button.click( - get_tts_wav, - [inp_ref, prompt_text, prompt_language, text, text_language, how_to_cut, top_k, top_p, temperature, ref_text_free], + inference, + [text,text_language, inp_ref, prompt_text, prompt_language, top_k, top_p, temperature, how_to_cut, batch_size], [output], ) diff --git a/GPT_SoVITS/inference_webui_old.py b/GPT_SoVITS/inference_webui_old.py new file mode 100644 index 00000000..ee099627 --- /dev/null +++ b/GPT_SoVITS/inference_webui_old.py @@ -0,0 +1,622 @@ +''' +按中英混合识别 +按日英混合识别 +多语种启动切分识别语种 +全部按中文识别 +全部按英文识别 +全部按日文识别 +''' +import os, re, logging +import LangSegment +logging.getLogger("markdown_it").setLevel(logging.ERROR) +logging.getLogger("urllib3").setLevel(logging.ERROR) +logging.getLogger("httpcore").setLevel(logging.ERROR) +logging.getLogger("httpx").setLevel(logging.ERROR) +logging.getLogger("asyncio").setLevel(logging.ERROR) +logging.getLogger("charset_normalizer").setLevel(logging.ERROR) +logging.getLogger("torchaudio._extension").setLevel(logging.ERROR) +import pdb +import torch + +if os.path.exists("./gweight.txt"): + with open("./gweight.txt", 'r', encoding="utf-8") as file: + gweight_data = 
file.read() + gpt_path = os.environ.get( + "gpt_path", gweight_data) +else: + gpt_path = os.environ.get( + "gpt_path", "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt") + +if os.path.exists("./sweight.txt"): + with open("./sweight.txt", 'r', encoding="utf-8") as file: + sweight_data = file.read() + sovits_path = os.environ.get("sovits_path", sweight_data) +else: + sovits_path = os.environ.get("sovits_path", "GPT_SoVITS/pretrained_models/s2G488k.pth") +# gpt_path = os.environ.get( +# "gpt_path", "pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt" +# ) +# sovits_path = os.environ.get("sovits_path", "pretrained_models/s2G488k.pth") +cnhubert_base_path = os.environ.get( + "cnhubert_base_path", "GPT_SoVITS/pretrained_models/chinese-hubert-base" +) +bert_path = os.environ.get( + "bert_path", "GPT_SoVITS/pretrained_models/chinese-roberta-wwm-ext-large" +) +infer_ttswebui = os.environ.get("infer_ttswebui", 9872) +infer_ttswebui = int(infer_ttswebui) +is_share = os.environ.get("is_share", "False") +is_share = eval(is_share) +if "_CUDA_VISIBLE_DEVICES" in os.environ: + os.environ["CUDA_VISIBLE_DEVICES"] = os.environ["_CUDA_VISIBLE_DEVICES"] +is_half = eval(os.environ.get("is_half", "True")) and not torch.backends.mps.is_available() +import gradio as gr +from transformers import AutoModelForMaskedLM, AutoTokenizer +import numpy as np +import librosa +from feature_extractor import cnhubert + +cnhubert.cnhubert_base_path = cnhubert_base_path + +from module.models import SynthesizerTrn +from AR.models.t2s_lightning_module import Text2SemanticLightningModule +from text import cleaned_text_to_sequence +from text.cleaner import clean_text +from time import time as ttime +from module.mel_processing import spectrogram_torch +from my_utils import load_audio +from tools.i18n.i18n import I18nAuto + +i18n = I18nAuto() + +os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1' # 确保直接启动推理UI时也能够设置。 + +if torch.cuda.is_available(): + device = "cuda" +else: + device = "cpu" + +tokenizer = AutoTokenizer.from_pretrained(bert_path) +bert_model = AutoModelForMaskedLM.from_pretrained(bert_path) +if is_half == True: + bert_model = bert_model.half().to(device) +else: + bert_model = bert_model.to(device) + + +def get_bert_feature(text, word2ph): + with torch.no_grad(): + inputs = tokenizer(text, return_tensors="pt") + for i in inputs: + inputs[i] = inputs[i].to(device) + res = bert_model(**inputs, output_hidden_states=True) + res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()[1:-1] + assert len(word2ph) == len(text) + phone_level_feature = [] + for i in range(len(word2ph)): + repeat_feature = res[i].repeat(word2ph[i], 1) + phone_level_feature.append(repeat_feature) + phone_level_feature = torch.cat(phone_level_feature, dim=0) + return phone_level_feature.T + + +class DictToAttrRecursive(dict): + def __init__(self, input_dict): + super().__init__(input_dict) + for key, value in input_dict.items(): + if isinstance(value, dict): + value = DictToAttrRecursive(value) + self[key] = value + setattr(self, key, value) + + def __getattr__(self, item): + try: + return self[item] + except KeyError: + raise AttributeError(f"Attribute {item} not found") + + def __setattr__(self, key, value): + if isinstance(value, dict): + value = DictToAttrRecursive(value) + super(DictToAttrRecursive, self).__setitem__(key, value) + super().__setattr__(key, value) + + def __delattr__(self, item): + try: + del self[item] + except KeyError: + raise AttributeError(f"Attribute {item} not found") + + +ssl_model = 
cnhubert.get_model() +if is_half == True: + ssl_model = ssl_model.half().to(device) +else: + ssl_model = ssl_model.to(device) + + +def change_sovits_weights(sovits_path): + global vq_model, hps + dict_s2 = torch.load(sovits_path, map_location="cpu") + hps = dict_s2["config"] + hps = DictToAttrRecursive(hps) + hps.model.semantic_frame_rate = "25hz" + vq_model = SynthesizerTrn( + hps.data.filter_length // 2 + 1, + hps.train.segment_size // hps.data.hop_length, + n_speakers=hps.data.n_speakers, + **hps.model + ) + if ("pretrained" not in sovits_path): + del vq_model.enc_q + if is_half == True: + vq_model = vq_model.half().to(device) + else: + vq_model = vq_model.to(device) + vq_model.eval() + print(vq_model.load_state_dict(dict_s2["weight"], strict=False)) + with open("./sweight.txt", "w", encoding="utf-8") as f: + f.write(sovits_path) + + +change_sovits_weights(sovits_path) + + +def change_gpt_weights(gpt_path): + global hz, max_sec, t2s_model, config + hz = 50 + dict_s1 = torch.load(gpt_path, map_location="cpu") + config = dict_s1["config"] + max_sec = config["data"]["max_sec"] + t2s_model = Text2SemanticLightningModule(config, "****", is_train=False) + t2s_model.load_state_dict(dict_s1["weight"]) + if is_half == True: + t2s_model = t2s_model.half() + t2s_model = t2s_model.to(device) + t2s_model.eval() + total = sum([param.nelement() for param in t2s_model.parameters()]) + print("Number of parameter: %.2fM" % (total / 1e6)) + with open("./gweight.txt", "w", encoding="utf-8") as f: f.write(gpt_path) + + +change_gpt_weights(gpt_path) + + +def get_spepc(hps, filename): + audio = load_audio(filename, int(hps.data.sampling_rate)) + audio = torch.FloatTensor(audio) + audio_norm = audio + audio_norm = audio_norm.unsqueeze(0) + spec = spectrogram_torch( + audio_norm, + hps.data.filter_length, + hps.data.sampling_rate, + hps.data.hop_length, + hps.data.win_length, + center=False, + ) + return spec + + +dict_language = { + i18n("中文"): "all_zh",#全部按中文识别 + i18n("英文"): "en",#全部按英文识别#######不变 + i18n("日文"): "all_ja",#全部按日文识别 + i18n("中英混合"): "zh",#按中英混合识别####不变 + i18n("日英混合"): "ja",#按日英混合识别####不变 + i18n("多语种混合"): "auto",#多语种启动切分识别语种 +} + + +def clean_text_inf(text, language): + phones, word2ph, norm_text = clean_text(text, language) + phones = cleaned_text_to_sequence(phones) + return phones, word2ph, norm_text + +dtype=torch.float16 if is_half == True else torch.float32 +def get_bert_inf(phones, word2ph, norm_text, language): + language=language.replace("all_","") + if language == "zh": + bert = get_bert_feature(norm_text, word2ph).to(device)#.to(dtype) + else: + bert = torch.zeros( + (1024, len(phones)), + dtype=torch.float16 if is_half == True else torch.float32, + ).to(device) + + return bert + + +splits = {",", "。", "?", "!", ",", ".", "?", "!", "~", ":", ":", "—", "…", } + + +def get_first(text): + pattern = "[" + "".join(re.escape(sep) for sep in splits) + "]" + text = re.split(pattern, text)[0].strip() + return text + + +def get_phones_and_bert(text,language): + if language in {"en","all_zh","all_ja"}: + language = language.replace("all_","") + if language == "en": + LangSegment.setfilters(["en"]) + formattext = " ".join(tmp["text"] for tmp in LangSegment.getTexts(text)) + else: + # 因无法区别中日文汉字,以用户输入为准 + formattext = text + while " " in formattext: + formattext = formattext.replace(" ", " ") + phones, word2ph, norm_text = clean_text_inf(formattext, language) + if language == "zh": + bert = get_bert_feature(norm_text, word2ph).to(device) + else: + bert = torch.zeros( + (1024, len(phones)), + 
dtype=torch.float16 if is_half == True else torch.float32, + ).to(device) + elif language in {"zh", "ja","auto"}: + textlist=[] + langlist=[] + LangSegment.setfilters(["zh","ja","en","ko"]) + if language == "auto": + for tmp in LangSegment.getTexts(text): + if tmp["lang"] == "ko": + langlist.append("zh") + textlist.append(tmp["text"]) + else: + langlist.append(tmp["lang"]) + textlist.append(tmp["text"]) + else: + for tmp in LangSegment.getTexts(text): + if tmp["lang"] == "en": + langlist.append(tmp["lang"]) + else: + # 因无法区别中日文汉字,以用户输入为准 + langlist.append(language) + textlist.append(tmp["text"]) + print(textlist) + print(langlist) + phones_list = [] + bert_list = [] + norm_text_list = [] + for i in range(len(textlist)): + lang = langlist[i] + phones, word2ph, norm_text = clean_text_inf(textlist[i], lang) + bert = get_bert_inf(phones, word2ph, norm_text, lang) + phones_list.append(phones) + norm_text_list.append(norm_text) + bert_list.append(bert) + bert = torch.cat(bert_list, dim=1) + phones = sum(phones_list, []) + norm_text = ''.join(norm_text_list) + + return phones,bert.to(dtype),norm_text + + +def merge_short_text_in_array(texts, threshold): + if (len(texts)) < 2: + return texts + result = [] + text = "" + for ele in texts: + text += ele + if len(text) >= threshold: + result.append(text) + text = "" + if (len(text) > 0): + if len(result) == 0: + result.append(text) + else: + result[len(result) - 1] += text + return result + +def get_tts_wav(ref_wav_path, prompt_text, prompt_language, text, text_language, how_to_cut=i18n("不切"), top_k=20, top_p=0.6, temperature=0.6, ref_free = False): + if prompt_text is None or len(prompt_text) == 0: + ref_free = True + t0 = ttime() + prompt_language = dict_language[prompt_language] + text_language = dict_language[text_language] + if not ref_free: + prompt_text = prompt_text.strip("\n") + if (prompt_text[-1] not in splits): prompt_text += "。" if prompt_language != "en" else "." + print(i18n("实际输入的参考文本:"), prompt_text) + text = text.strip("\n") + if (text[0] not in splits and len(get_first(text)) < 4): text = "。" + text if text_language != "en" else "." 
+ text + + print(i18n("实际输入的目标文本:"), text) + zero_wav = np.zeros( + int(hps.data.sampling_rate * 0.3), + dtype=np.float16 if is_half == True else np.float32, + ) + with torch.no_grad(): + wav16k, sr = librosa.load(ref_wav_path, sr=16000) + if (wav16k.shape[0] > 160000 or wav16k.shape[0] < 48000): + raise OSError(i18n("参考音频在3~10秒范围外,请更换!")) + wav16k = torch.from_numpy(wav16k) + zero_wav_torch = torch.from_numpy(zero_wav) + if is_half == True: + wav16k = wav16k.half().to(device) + zero_wav_torch = zero_wav_torch.half().to(device) + else: + wav16k = wav16k.to(device) + zero_wav_torch = zero_wav_torch.to(device) + wav16k = torch.cat([wav16k, zero_wav_torch]) + ssl_content = ssl_model.model(wav16k.unsqueeze(0))[ + "last_hidden_state" + ].transpose( + 1, 2 + ) # .float() + codes = vq_model.extract_latent(ssl_content) + + prompt_semantic = codes[0, 0] + t1 = ttime() + + if (how_to_cut == i18n("凑四句一切")): + text = cut1(text) + elif (how_to_cut == i18n("凑50字一切")): + text = cut2(text) + elif (how_to_cut == i18n("按中文句号。切")): + text = cut3(text) + elif (how_to_cut == i18n("按英文句号.切")): + text = cut4(text) + elif (how_to_cut == i18n("按标点符号切")): + text = cut5(text) + while "\n\n" in text: + text = text.replace("\n\n", "\n") + print(i18n("实际输入的目标文本(切句后):"), text) + texts = text.split("\n") + texts = merge_short_text_in_array(texts, 5) + audio_opt = [] + if not ref_free: + phones1,bert1,norm_text1=get_phones_and_bert(prompt_text, prompt_language) + + for text in texts: + # 解决输入目标文本的空行导致报错的问题 + if (len(text.strip()) == 0): + continue + if (text[-1] not in splits): text += "。" if text_language != "en" else "." + print(i18n("实际输入的目标文本(每句):"), text) + phones2,bert2,norm_text2=get_phones_and_bert(text, text_language) + print(i18n("前端处理后的文本(每句):"), norm_text2) + if not ref_free: + bert = torch.cat([bert1, bert2], 1) + all_phoneme_ids = torch.LongTensor(phones1+phones2).to(device).unsqueeze(0) + else: + bert = bert2 + all_phoneme_ids = torch.LongTensor(phones2).to(device).unsqueeze(0) + + bert = bert.to(device).unsqueeze(0) + all_phoneme_len = torch.tensor([all_phoneme_ids.shape[-1]]).to(device) + prompt = prompt_semantic.unsqueeze(0).to(device) + t2 = ttime() + with torch.no_grad(): + # pred_semantic = t2s_model.model.infer( + pred_semantic, idx = t2s_model.model.infer_panel( + all_phoneme_ids, + all_phoneme_len, + None if ref_free else prompt, + bert, + # prompt_phone_len=ph_offset, + top_k=top_k, + top_p=top_p, + temperature=temperature, + early_stop_num=hz * max_sec, + ) + t3 = ttime() + # print(pred_semantic.shape,idx) + pred_semantic = pred_semantic[:, -idx:].unsqueeze( + 0 + ) # .unsqueeze(0)#mq要多unsqueeze一次 + refer = get_spepc(hps, ref_wav_path) # .to(device) + if is_half == True: + refer = refer.half().to(device) + else: + refer = refer.to(device) + # audio = vq_model.decode(pred_semantic, all_phoneme_ids, refer).detach().cpu().numpy()[0, 0] + audio = ( + vq_model.decode( + pred_semantic, torch.LongTensor(phones2).to(device).unsqueeze(0), refer + ) + .detach() + .cpu() + .numpy()[0, 0] + ) ###试试重建不带上prompt部分 + max_audio=np.abs(audio).max()#简单防止16bit爆音 + if max_audio>1:audio/=max_audio + audio_opt.append(audio) + audio_opt.append(zero_wav) + t4 = ttime() + print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) + yield hps.data.sampling_rate, (np.concatenate(audio_opt, 0) * 32768).astype( + np.int16 + ) + + +def split(todo_text): + todo_text = todo_text.replace("……", "。").replace("——", ",") + if todo_text[-1] not in splits: + todo_text += "。" + i_split_head = i_split_tail = 0 + len_text = 
len(todo_text) + todo_texts = [] + while 1: + if i_split_head >= len_text: + break # 结尾一定有标点,所以直接跳出即可,最后一段在上次已加入 + if todo_text[i_split_head] in splits: + i_split_head += 1 + todo_texts.append(todo_text[i_split_tail:i_split_head]) + i_split_tail = i_split_head + else: + i_split_head += 1 + return todo_texts + + +def cut1(inp): + inp = inp.strip("\n") + inps = split(inp) + split_idx = list(range(0, len(inps), 4)) + split_idx[-1] = None + if len(split_idx) > 1: + opts = [] + for idx in range(len(split_idx) - 1): + opts.append("".join(inps[split_idx[idx]: split_idx[idx + 1]])) + else: + opts = [inp] + return "\n".join(opts) + + +def cut2(inp): + inp = inp.strip("\n") + inps = split(inp) + if len(inps) < 2: + return inp + opts = [] + summ = 0 + tmp_str = "" + for i in range(len(inps)): + summ += len(inps[i]) + tmp_str += inps[i] + if summ > 50: + summ = 0 + opts.append(tmp_str) + tmp_str = "" + if tmp_str != "": + opts.append(tmp_str) + # print(opts) + if len(opts) > 1 and len(opts[-1]) < 50: ##如果最后一个太短了,和前一个合一起 + opts[-2] = opts[-2] + opts[-1] + opts = opts[:-1] + return "\n".join(opts) + + +def cut3(inp): + inp = inp.strip("\n") + return "\n".join(["%s" % item for item in inp.strip("。").split("。")]) + + +def cut4(inp): + inp = inp.strip("\n") + return "\n".join(["%s" % item for item in inp.strip(".").split(".")]) + + +# contributed by https://github.com/AI-Hobbyist/GPT-SoVITS/blob/main/GPT_SoVITS/inference_webui.py +def cut5(inp): + # if not re.search(r'[^\w\s]', inp[-1]): + # inp += '。' + inp = inp.strip("\n") + punds = r'[,.;?!、,。?!;:…]' + items = re.split(f'({punds})', inp) + mergeitems = ["".join(group) for group in zip(items[::2], items[1::2])] + # 在句子不存在符号或句尾无符号的时候保证文本完整 + if len(items)%2 == 1: + mergeitems.append(items[-1]) + opt = "\n".join(mergeitems) + return opt + + +def custom_sort_key(s): + # 使用正则表达式提取字符串中的数字部分和非数字部分 + parts = re.split('(\d+)', s) + # 将数字部分转换为整数,非数字部分保持不变 + parts = [int(part) if part.isdigit() else part for part in parts] + return parts + + +def change_choices(): + SoVITS_names, GPT_names = get_weights_names() + return {"choices": sorted(SoVITS_names, key=custom_sort_key), "__type__": "update"}, {"choices": sorted(GPT_names, key=custom_sort_key), "__type__": "update"} + + +pretrained_sovits_name = "GPT_SoVITS/pretrained_models/s2G488k.pth" +pretrained_gpt_name = "GPT_SoVITS/pretrained_models/s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt" +SoVITS_weight_root = "SoVITS_weights" +GPT_weight_root = "GPT_weights" +os.makedirs(SoVITS_weight_root, exist_ok=True) +os.makedirs(GPT_weight_root, exist_ok=True) + + +def get_weights_names(): + SoVITS_names = [pretrained_sovits_name] + for name in os.listdir(SoVITS_weight_root): + if name.endswith(".pth"): SoVITS_names.append("%s/%s" % (SoVITS_weight_root, name)) + GPT_names = [pretrained_gpt_name] + for name in os.listdir(GPT_weight_root): + if name.endswith(".ckpt"): GPT_names.append("%s/%s" % (GPT_weight_root, name)) + return SoVITS_names, GPT_names + + +SoVITS_names, GPT_names = get_weights_names() + +with gr.Blocks(title="GPT-SoVITS WebUI") as app: + gr.Markdown( + value=i18n("本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.") + ) + with gr.Group(): + gr.Markdown(value=i18n("模型切换")) + with gr.Row(): + GPT_dropdown = gr.Dropdown(label=i18n("GPT模型列表"), choices=sorted(GPT_names, key=custom_sort_key), value=gpt_path, interactive=True) + SoVITS_dropdown = gr.Dropdown(label=i18n("SoVITS模型列表"), choices=sorted(SoVITS_names, key=custom_sort_key), value=sovits_path, interactive=True) + refresh_button = gr.Button(i18n("刷新模型路径"), variant="primary") + refresh_button.click(fn=change_choices, inputs=[], outputs=[SoVITS_dropdown, GPT_dropdown]) + SoVITS_dropdown.change(change_sovits_weights, [SoVITS_dropdown], []) + GPT_dropdown.change(change_gpt_weights, [GPT_dropdown], []) + gr.Markdown(value=i18n("*请上传并填写参考信息")) + with gr.Row(): + inp_ref = gr.Audio(label=i18n("请上传3~10秒内参考音频,超过会报错!"), type="filepath") + with gr.Column(): + ref_text_free = gr.Checkbox(label=i18n("开启无参考文本模式。不填参考文本亦相当于开启。"), value=False, interactive=True, show_label=True) + gr.Markdown(i18n("使用无参考文本模式时建议使用微调的GPT,听不清参考音频说的啥(不晓得写啥)可以开,开启后无视填写的参考文本。")) + prompt_text = gr.Textbox(label=i18n("参考音频的文本"), value="") + prompt_language = gr.Dropdown( + label=i18n("参考音频的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") + ) + gr.Markdown(value=i18n("*请填写需要合成的目标文本和语种模式")) + with gr.Row(): + text = gr.Textbox(label=i18n("需要合成的文本"), value="") + text_language = gr.Dropdown( + label=i18n("需要合成的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") + ) + how_to_cut = gr.Radio( + label=i18n("怎么切"), + choices=[i18n("不切"), i18n("凑四句一切"), i18n("凑50字一切"), i18n("按中文句号。切"), i18n("按英文句号.切"), i18n("按标点符号切"), ], + value=i18n("凑四句一切"), + interactive=True, + ) + with gr.Row(): + gr.Markdown(value=i18n("gpt采样参数(无参考文本时不要太低):")) + top_k = gr.Slider(minimum=1,maximum=100,step=1,label=i18n("top_k"),value=5,interactive=True) + top_p = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("top_p"),value=1,interactive=True) + temperature = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("temperature"),value=1,interactive=True) + inference_button = gr.Button(i18n("合成语音"), variant="primary") + output = gr.Audio(label=i18n("输出的语音")) + + inference_button.click( + get_tts_wav, + [inp_ref, prompt_text, prompt_language, text, text_language, how_to_cut, top_k, top_p, temperature, ref_text_free], + [output], + ) + + gr.Markdown(value=i18n("文本切分工具。太长的文本合成出来效果不一定好,所以太长建议先切。合成会根据文本的换行分开合成再拼起来。")) + with gr.Row(): + text_inp = gr.Textbox(label=i18n("需要合成的切分前文本"), value="") + button1 = gr.Button(i18n("凑四句一切"), variant="primary") + button2 = gr.Button(i18n("凑50字一切"), variant="primary") + button3 = gr.Button(i18n("按中文句号。切"), variant="primary") + button4 = gr.Button(i18n("按英文句号.切"), variant="primary") + button5 = gr.Button(i18n("按标点符号切"), variant="primary") + text_opt = gr.Textbox(label=i18n("切分后文本"), value="") + button1.click(cut1, [text_inp], [text_opt]) + button2.click(cut2, [text_inp], [text_opt]) + button3.click(cut3, [text_inp], [text_opt]) + button4.click(cut4, [text_inp], [text_opt]) + button5.click(cut5, [text_inp], [text_opt]) + gr.Markdown(value=i18n("后续将支持转音素、手工修改音素、语音合成分步执行。")) + +app.queue(concurrency_count=511, max_size=1022).launch( + server_name="0.0.0.0", + inbrowser=True, + share=is_share, + server_port=infer_ttswebui, + quiet=True, +) From 7556e8cc9629fc79549e48171e04ad71985764e8 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 01:02:09 +0800 Subject: [PATCH 52/63] fix some bugs 
GPT_SoVITS/TTS_infer_pack/TTS.py --- GPT_SoVITS/TTS_infer_pack/TTS.py | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index 9f98a246..09f3175d 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -275,7 +275,7 @@ class TTS: prompt_semantic = codes[0, 0].to(self.configs.device) self.prompt_cache["prompt_semantic"] = prompt_semantic - def batch_sequences(self, sequences: List[torch.Tensor], axis: int = 0, pad_value: int = 0): + def batch_sequences(self, sequences: List[torch.Tensor], axis: int = 0, pad_value: int = 0, max_length:int=None): seq = sequences[0] ndim = seq.dim() if axis < 0: @@ -283,7 +283,10 @@ class TTS: dtype:torch.dtype = seq.dtype pad_value = torch.tensor(pad_value, dtype=dtype) seq_lengths = [seq.shape[axis] for seq in sequences] - max_length = max(seq_lengths) + if max_length is None: + max_length = max(seq_lengths) + else: + max_length = max(seq_lengths) if max_length < max(seq_lengths) else max_length padded_sequences = [] for seq, length in zip(sequences, seq_lengths): @@ -333,6 +336,8 @@ class TTS: all_phones_list = [] all_bert_features_list = [] norm_text_batch = [] + bert_max_len = 0 + phones_max_len = 0 for item in item_list: if prompt_data is not None: all_bert_features = torch.cat([prompt_data["bert_features"].clone(), item["bert_features"]], 1) @@ -344,15 +349,18 @@ class TTS: phones = torch.LongTensor(item["phones"]) all_phones = phones.clone() # norm_text = item["norm_text"] - + bert_max_len = max(bert_max_len, all_bert_features.shape[-1]) + phones_max_len = max(phones_max_len, phones.shape[-1]) + phones_list.append(phones) all_phones_list.append(all_phones) all_bert_features_list.append(all_bert_features) norm_text_batch.append(item["norm_text"]) # phones_batch = phones_list - phones_batch = self.batch_sequences(phones_list, axis=0, pad_value=0) - all_phones_batch = self.batch_sequences(all_phones_list, axis=0, pad_value=0) - all_bert_features_batch = torch.FloatTensor(len(item_list), 1024, all_phones_batch.shape[-1]) + max_len = max(bert_max_len, phones_max_len) + phones_batch = self.batch_sequences(phones_list, axis=0, pad_value=0, max_length=max_len) + all_phones_batch = self.batch_sequences(all_phones_list, axis=0, pad_value=0, max_length=max_len) + all_bert_features_batch = torch.FloatTensor(len(item_list), 1024, max_len) all_bert_features_batch.zero_() for idx, item in enumerate(all_bert_features_list): From 61453b59b21bee4c30501ce3ea4772b781b238d1 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 02:05:03 +0800 Subject: [PATCH 53/63] =?UTF-8?q?=09=E6=B7=BB=E5=8A=A0=E9=9F=B3=E9=A2=91?= =?UTF-8?q?=E5=80=8D=E9=80=9F=E6=94=AF=E6=8C=81:=20=20=20GPT=5FSoVITS/TTS?= =?UTF-8?q?=5Finfer=5Fpack/TTS.py=20=09=E6=B7=BB=E5=8A=A0=E9=9F=B3?= =?UTF-8?q?=E9=A2=91=E5=80=8D=E9=80=9F=E6=94=AF=E6=8C=81:=20=20=20GPT=5FSo?= =?UTF-8?q?VITS/inference=5Fwebui.py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/TTS_infer_pack/TTS.py | 40 ++++++++++++++++++++++++++++---- GPT_SoVITS/inference_webui.py | 6 +++-- 2 files changed, 39 insertions(+), 7 deletions(-) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index 09f3175d..70d0cc92 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -1,5 +1,6 @@ import os, sys +import ffmpeg now_dir = os.getcwd() sys.path.append(now_dir) import os @@ 
-405,7 +406,8 @@ class TTS: "temperature": 0.6, "text_split_method": "", "batch_size": 1, - "batch_threshold": 0.75 + "batch_threshold": 0.75, + "speed_factor":1.0, } returns: tulpe[int, np.ndarray]: sampling rate and audio data. @@ -421,6 +423,7 @@ class TTS: text_split_method:str = inputs.get("text_split_method", "") batch_size = inputs.get("batch_size", 1) batch_threshold = inputs.get("batch_threshold", 0.75) + speed_factor = inputs.get("speed_factor", 1.0) no_prompt_text = False if prompt_text in [None, ""]: @@ -548,7 +551,34 @@ class TTS: audio = self.recovery_order(audio, batch_index_list) print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t_34, t_45)) - yield self.configs.sampling_rate, (np.concatenate(audio, 0) * 32768).astype( - np.int16 - ) - \ No newline at end of file + + audio = np.concatenate(audio, 0) + audio = (audio * 32768).astype(np.int16) + if speed_factor != 1.0: + audio = speed_change(audio, speed=speed_factor, sr=int(self.configs.sampling_rate)) + + yield self.configs.sampling_rate, audio + + + + +def speed_change(input_audio:np.ndarray, speed:float, sr:int): + # 将 NumPy 数组转换为原始 PCM 流 + raw_audio = input_audio.astype(np.int16).tobytes() + + # 设置 ffmpeg 输入流 + input_stream = ffmpeg.input('pipe:', format='s16le', acodec='pcm_s16le', ar=str(sr), ac=1) + + # 变速处理 + output_stream = input_stream.filter('atempo', speed) + + # 输出流到管道 + out, _ = ( + output_stream.output('pipe:', format='s16le', acodec='pcm_s16le') + .run(input=raw_audio, capture_stdout=True, capture_stderr=True) + ) + + # 将管道输出解码为 NumPy 数组 + processed_audio = np.frombuffer(out, np.int16) + + return processed_audio \ No newline at end of file diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index 68a2136a..f0336bb5 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -68,7 +68,7 @@ tts_pipline = TTS(tts_config) gpt_path = tts_config.t2s_weights_path sovits_path = tts_config.vits_weights_path -def inference(text, text_lang, ref_audio_path, prompt_text, prompt_lang, top_k, top_p, temperature, text_split_method, batch_size): +def inference(text, text_lang, ref_audio_path, prompt_text, prompt_lang, top_k, top_p, temperature, text_split_method, batch_size, speed_factor): inputs={ "text": text, "text_lang": dict_language[text_lang], @@ -80,6 +80,7 @@ def inference(text, text_lang, ref_audio_path, prompt_text, prompt_lang, top_k, "temperature": temperature, "text_split_method": cut_method[text_split_method], "batch_size":int(batch_size), + "speed_factor":float(speed_factor) } yield next(tts_pipline.run(inputs)) @@ -154,6 +155,7 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: with gr.Row(): gr.Markdown(value=i18n("gpt采样参数(无参考文本时不要太低):")) batch_size = gr.Slider(minimum=1,maximum=20,step=1,label=i18n("batch_size"),value=1,interactive=True) + speed_factor = gr.Slider(minimum=0.25,maximum=4,step=0.05,label="speed_factor",value=1.0,interactive=True) top_k = gr.Slider(minimum=1,maximum=100,step=1,label=i18n("top_k"),value=5,interactive=True) top_p = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("top_p"),value=1,interactive=True) temperature = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("temperature"),value=1,interactive=True) @@ -165,7 +167,7 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: inference_button.click( inference, - [text,text_language, inp_ref, prompt_text, prompt_language, top_k, top_p, temperature, how_to_cut, batch_size], + [text,text_language, inp_ref, prompt_text, prompt_language, top_k, top_p, temperature, how_to_cut, batch_size, 
speed_factor], [output], ) From c85b29f5a8903aba2792f7519832701b838237a3 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 02:12:20 +0800 Subject: [PATCH 54/63] =?UTF-8?q?=09=E5=A2=9E=E5=8A=A0=E5=81=A5=E5=A3=AE?= =?UTF-8?q?=E6=80=A7:=20=20=20GPT=5FSoVITS/TTS=5Finfer=5Fpack/TTS.py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/TTS_infer_pack/TTS.py | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index 70d0cc92..f54bf3bf 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -554,9 +554,13 @@ class TTS: audio = np.concatenate(audio, 0) audio = (audio * 32768).astype(np.int16) - if speed_factor != 1.0: - audio = speed_change(audio, speed=speed_factor, sr=int(self.configs.sampling_rate)) - + + try: + if speed_factor != 1.0: + audio = speed_change(audio, speed=speed_factor, sr=int(self.configs.sampling_rate)) + except Exception as e: + print(f"Failed to change speed of audio: \n{e}") + yield self.configs.sampling_rate, audio From 4096a17e7efda622b857a34928f5ab5feb0cdcc2 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 19:51:49 +0800 Subject: [PATCH 55/63] =?UTF-8?q?=09=E5=85=BC=E5=AE=B9=E4=BA=86flash=5Fatt?= =?UTF-8?q?ention=E7=9A=84=E6=89=B9=E9=87=8F=E6=8E=A8=E7=90=86,=E5=B9=B6?= =?UTF-8?q?=E4=BF=AE=E5=A4=8D=E4=BA=86=E4=B8=80=E4=BA=9Bbug=20=20=20GPT=5F?= =?UTF-8?q?SoVITS/AR/models/t2s=5Fmodel.py=20=09=E6=89=B9=E9=87=8F?= =?UTF-8?q?=E6=8E=A8=E7=90=86=E5=A4=87=E4=BB=BD=E6=96=87=E4=BB=B6:=20=20?= =?UTF-8?q?=20GPT=5FSoVITS/AR/models/t2s=5Fmodel=5Fbatch=5Fonly.py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/AR/models/t2s_model.py | 80 ++- GPT_SoVITS/AR/models/t2s_model_batch_only.py | 483 +++++++++++++++++++ 2 files changed, 545 insertions(+), 18 deletions(-) create mode 100644 GPT_SoVITS/AR/models/t2s_model_batch_only.py diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 00f4aa30..8e3c7fc0 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -99,7 +99,8 @@ class T2SBlock: attn = F.scaled_dot_product_attention(q, k, v, ~attn_mask) - attn = attn.permute(2, 0, 1, 3).reshape(batch_size, -1, self.hidden_dim) + attn = attn.permute(2, 0, 1, 3).reshape(batch_size*q_len, self.hidden_dim) + attn = attn.view(q_len, batch_size, self.hidden_dim).transpose(1, 0) attn = F.linear(attn, self.out_w, self.out_b) x = F.layer_norm( @@ -114,15 +115,15 @@ class T2SBlock: ) return x, k_cache, v_cache - def decode_next_token(self, x, k_cache, v_cache): + def decode_next_token(self, x, k_cache, v_cache, attn_mask : torch.Tensor): q, k, v = F.linear(x, self.qkv_w, self.qkv_b).chunk(3, dim=-1) k_cache = torch.cat([k_cache, k], dim=1) v_cache = torch.cat([v_cache, v], dim=1) - kv_len = k_cache.shape[1] - + batch_size = q.shape[0] q_len = q.shape[1] + kv_len = k_cache.shape[1] q = q.view(batch_size, q_len, self.num_heads, -1).transpose(1, 2) k = k_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) @@ -131,7 +132,8 @@ class T2SBlock: attn = F.scaled_dot_product_attention(q, k, v) - attn = attn.permute(2, 0, 1, 3).reshape(batch_size, -1, self.hidden_dim) + attn = attn.permute(2, 0, 1, 3).reshape(batch_size*q_len, self.hidden_dim) + attn = attn.view(q_len, batch_size, self.hidden_dim).transpose(1, 0) attn = F.linear(attn, 
self.out_w, self.out_b) x = F.layer_norm( @@ -164,10 +166,10 @@ class T2STransformer: return x, k_cache, v_cache def decode_next_token( - self, x, k_cache: List[torch.Tensor], v_cache: List[torch.Tensor] + self, x, k_cache: List[torch.Tensor], v_cache: List[torch.Tensor], attn_mask : torch.Tensor ): for i in range(self.num_blocks): - x, k_cache[i], v_cache[i] = self.blocks[i].decode_next_token(x, k_cache[i], v_cache[i]) + x, k_cache[i], v_cache[i] = self.blocks[i].decode_next_token(x, k_cache[i], v_cache[i], attn_mask) return x, k_cache, v_cache @@ -543,12 +545,16 @@ class Text2SemanticDecoder(nn.Module): xy_attn_mask = torch.concat([x_attn_mask_pad, y_attn_mask], dim=0).to( x.device ) - + + y_list = [None]*y.shape[0] + batch_idx_map = list(range(y.shape[0])) + idx_list = [None]*y.shape[0] + cache_y_emb = y_emb for idx in tqdm(range(1500)): - if xy_attn_mask is not None: + if idx == 0: xy_dec, k_cache, v_cache = self.t2s_transformer.process_prompt(xy_pos, xy_attn_mask) else: - xy_dec, k_cache, v_cache = self.t2s_transformer.decode_next_token(xy_pos, k_cache, v_cache) + xy_dec, k_cache, v_cache = self.t2s_transformer.decode_next_token(xy_pos, k_cache, v_cache, xy_attn_mask) logits = self.ar_predict_layer( xy_dec[:, -1] @@ -557,18 +563,51 @@ class Text2SemanticDecoder(nn.Module): if idx == 0: xy_attn_mask = None logits = logits[:, :-1] + samples = sample( - logits[0], y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature - )[0].unsqueeze(0) + logits, y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature + )[0] y = torch.concat([y, samples], dim=1) - - if early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num: + + ####### 移除batch中已经生成完毕的序列,进一步优化计算量 + reserved_idx_of_batch_for_y = None + if (self.EOS in samples[:, 0]) or \ + (self.EOS in torch.argmax(logits, dim=-1)): ###如果生成到EOS,则停止 + l = samples[:, 0]==self.EOS + removed_idx_of_batch_for_y = torch.where(l==True)[0].tolist() + reserved_idx_of_batch_for_y = torch.where(l==False)[0] + # batch_indexs = torch.tensor(batch_idx_map, device=y.device)[removed_idx_of_batch_for_y] + for i in removed_idx_of_batch_for_y: + batch_index = batch_idx_map[i] + idx_list[batch_index] = idx - 1 + y_list[batch_index] = y[i, :-1] + + batch_idx_map = [batch_idx_map[i] for i in reserved_idx_of_batch_for_y.tolist()] + + # 只保留batch中未生成完毕的序列 + if reserved_idx_of_batch_for_y is not None: + # index = torch.LongTensor(batch_idx_map).to(y.device) + y = torch.index_select(y, dim=0, index=reserved_idx_of_batch_for_y) + if cache_y_emb is not None: + cache_y_emb = torch.index_select(cache_y_emb, dim=0, index=reserved_idx_of_batch_for_y) + if k_cache is not None : + for i in range(len(k_cache)): + k_cache[i] = torch.index_select(k_cache[i], dim=0, index=reserved_idx_of_batch_for_y) + v_cache[i] = torch.index_select(v_cache[i], dim=0, index=reserved_idx_of_batch_for_y) + + + if (early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num) or idx==1499: print("use early stop num:", early_stop_num) stop = True - - if torch.argmax(logits, dim=-1)[0] == self.EOS or samples[0, 0] == self.EOS: + for i, batch_index in enumerate(batch_idx_map): + batch_index = batch_idx_map[i] + idx_list[batch_index] = idx + y_list[batch_index] = y[i, :-1] + + if not (None in idx_list): stop = True + if stop: if y.shape[1]==0: y = torch.concat([y, torch.zeros_like(samples)], dim=1) @@ -580,6 +619,11 @@ class Text2SemanticDecoder(nn.Module): y_emb = self.ar_audio_embedding(y[:, -1:]) xy_pos = y_emb * self.ar_audio_position.x_scale + 
self.ar_audio_position.alpha * self.ar_audio_position.pe[:, y_len + idx] + if (None in idx_list): + for i in range(x.shape[0]): + if idx_list[i] is None: + idx_list[i] = 1500-1 ###如果没有生成到EOS,就用最大长度代替 + if ref_free: - return y[:, :-1], 0 - return y[:, :-1], idx - 1 + return y_list, [0]*x.shape[0] + return y_list, idx_list \ No newline at end of file diff --git a/GPT_SoVITS/AR/models/t2s_model_batch_only.py b/GPT_SoVITS/AR/models/t2s_model_batch_only.py new file mode 100644 index 00000000..8c31f12a --- /dev/null +++ b/GPT_SoVITS/AR/models/t2s_model_batch_only.py @@ -0,0 +1,483 @@ +# modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_model.py +import torch +from tqdm import tqdm + +from AR.models.utils import make_pad_mask +from AR.models.utils import ( + topk_sampling, + sample, + logits_to_probs, + multinomial_sample_one_no_sync, + dpo_loss, + make_reject_y, + get_batch_logps +) +from AR.modules.embedding import SinePositionalEmbedding +from AR.modules.embedding import TokenEmbedding +from AR.modules.transformer import LayerNorm +from AR.modules.transformer import TransformerEncoder +from AR.modules.transformer import TransformerEncoderLayer +from torch import nn +from torch.nn import functional as F +from torchmetrics.classification import MulticlassAccuracy + +default_config = { + "embedding_dim": 512, + "hidden_dim": 512, + "num_head": 8, + "num_layers": 12, + "num_codebook": 8, + "p_dropout": 0.0, + "vocab_size": 1024 + 1, + "phoneme_vocab_size": 512, + "EOS": 1024, +} + + +class Text2SemanticDecoder(nn.Module): + def __init__(self, config, norm_first=False, top_k=3): + super(Text2SemanticDecoder, self).__init__() + self.model_dim = config["model"]["hidden_dim"] + self.embedding_dim = config["model"]["embedding_dim"] + self.num_head = config["model"]["head"] + self.num_layers = config["model"]["n_layer"] + self.norm_first = norm_first + self.vocab_size = config["model"]["vocab_size"] + self.phoneme_vocab_size = config["model"]["phoneme_vocab_size"] + self.p_dropout = config["model"]["dropout"] + self.EOS = config["model"]["EOS"] + self.norm_first = norm_first + assert self.EOS == self.vocab_size - 1 + # should be same as num of kmeans bin + # assert self.EOS == 1024 + self.bert_proj = nn.Linear(1024, self.embedding_dim) + self.ar_text_embedding = TokenEmbedding( + self.embedding_dim, self.phoneme_vocab_size, self.p_dropout + ) + self.ar_text_position = SinePositionalEmbedding( + self.embedding_dim, dropout=0.1, scale=False, alpha=True + ) + self.ar_audio_embedding = TokenEmbedding( + self.embedding_dim, self.vocab_size, self.p_dropout + ) + self.ar_audio_position = SinePositionalEmbedding( + self.embedding_dim, dropout=0.1, scale=False, alpha=True + ) + + self.h = TransformerEncoder( + TransformerEncoderLayer( + d_model=self.model_dim, + nhead=self.num_head, + dim_feedforward=self.model_dim * 4, + dropout=0.1, + batch_first=True, + norm_first=norm_first, + ), + num_layers=self.num_layers, + norm=LayerNorm(self.model_dim) if norm_first else None, + ) + + self.ar_predict_layer = nn.Linear(self.model_dim, self.vocab_size, bias=False) + self.loss_fct = nn.CrossEntropyLoss(reduction="sum") + + self.ar_accuracy_metric = MulticlassAccuracy( + self.vocab_size, + top_k=top_k, + average="micro", + multidim_average="global", + ignore_index=self.EOS, + ) + + def make_input_data(self, x, x_lens, y, y_lens, bert_feature): + x = self.ar_text_embedding(x) + x = x + self.bert_proj(bert_feature.transpose(1, 2)) + x = self.ar_text_position(x) + x_mask = 
make_pad_mask(x_lens) + + y_mask = make_pad_mask(y_lens) + y_mask_int = y_mask.type(torch.int64) + codes = y.type(torch.int64) * (1 - y_mask_int) + + # Training + # AR Decoder + y, targets = self.pad_y_eos(codes, y_mask_int, eos_id=self.EOS) + x_len = x_lens.max() + y_len = y_lens.max() + y_emb = self.ar_audio_embedding(y) + y_pos = self.ar_audio_position(y_emb) + + xy_padding_mask = torch.concat([x_mask, y_mask], dim=1) + + ar_xy_padding_mask = xy_padding_mask + + x_attn_mask = F.pad( + torch.zeros((x_len, x_len), dtype=torch.bool, device=x.device), + (0, y_len), + value=True, + ) + + y_attn_mask = F.pad( + torch.triu( + torch.ones(y_len, y_len, dtype=torch.bool, device=x.device), + diagonal=1, + ), + (x_len, 0), + value=False, + ) + + xy_attn_mask = torch.concat([x_attn_mask, y_attn_mask], dim=0) + bsz, src_len = x.shape[0], x_len + y_len + _xy_padding_mask = ( + ar_xy_padding_mask.view(bsz, 1, 1, src_len) + .expand(-1, self.num_head, -1, -1) + .reshape(bsz * self.num_head, 1, src_len) + ) + xy_attn_mask = xy_attn_mask.logical_or(_xy_padding_mask) + new_attn_mask = torch.zeros_like(xy_attn_mask, dtype=x.dtype) + new_attn_mask.masked_fill_(xy_attn_mask, float("-inf")) + xy_attn_mask = new_attn_mask + # x 和完整的 y 一次性输入模型 + xy_pos = torch.concat([x, y_pos], dim=1) + + return xy_pos, xy_attn_mask, targets + + def forward(self, x, x_lens, y, y_lens, bert_feature): + """ + x: phoneme_ids + y: semantic_ids + """ + + reject_y, reject_y_lens = make_reject_y(y, y_lens) + + xy_pos, xy_attn_mask, targets = self.make_input_data(x, x_lens, y, y_lens, bert_feature) + + xy_dec, _ = self.h( + (xy_pos, None), + mask=xy_attn_mask, + ) + x_len = x_lens.max() + logits = self.ar_predict_layer(xy_dec[:, x_len:]) + + ###### DPO ############# + reject_xy_pos, reject_xy_attn_mask, reject_targets = self.make_input_data(x, x_lens, reject_y, reject_y_lens, bert_feature) + + reject_xy_dec, _ = self.h( + (reject_xy_pos, None), + mask=reject_xy_attn_mask, + ) + x_len = x_lens.max() + reject_logits = self.ar_predict_layer(reject_xy_dec[:, x_len:]) + + # loss + # from feiteng: 每次 duration 越多, 梯度更新也应该更多, 所以用 sum + + loss_1 = F.cross_entropy(logits.permute(0, 2, 1), targets, reduction="sum") + acc = self.ar_accuracy_metric(logits.permute(0, 2, 1).detach(), targets).item() + + A_logits, R_logits = get_batch_logps(logits, reject_logits, targets, reject_targets) + loss_2, _, _ = dpo_loss(A_logits, R_logits, 0, 0, 0.2, reference_free=True) + + loss = loss_1 + loss_2 + + return loss, acc + + def forward_old(self, x, x_lens, y, y_lens, bert_feature): + """ + x: phoneme_ids + y: semantic_ids + """ + x = self.ar_text_embedding(x) + x = x + self.bert_proj(bert_feature.transpose(1, 2)) + x = self.ar_text_position(x) + x_mask = make_pad_mask(x_lens) + + y_mask = make_pad_mask(y_lens) + y_mask_int = y_mask.type(torch.int64) + codes = y.type(torch.int64) * (1 - y_mask_int) + + # Training + # AR Decoder + y, targets = self.pad_y_eos(codes, y_mask_int, eos_id=self.EOS) + x_len = x_lens.max() + y_len = y_lens.max() + y_emb = self.ar_audio_embedding(y) + y_pos = self.ar_audio_position(y_emb) + + xy_padding_mask = torch.concat([x_mask, y_mask], dim=1) + ar_xy_padding_mask = xy_padding_mask + + x_attn_mask = F.pad( + torch.zeros((x_len, x_len), dtype=torch.bool, device=x.device), + (0, y_len), + value=True, + ) + y_attn_mask = F.pad( + torch.triu( + torch.ones(y_len, y_len, dtype=torch.bool, device=x.device), + diagonal=1, + ), + (x_len, 0), + value=False, + ) + xy_attn_mask = torch.concat([x_attn_mask, y_attn_mask], dim=0) + bsz, src_len = 
x.shape[0], x_len + y_len + _xy_padding_mask = ( + ar_xy_padding_mask.view(bsz, 1, 1, src_len) + .expand(-1, self.num_head, -1, -1) + .reshape(bsz * self.num_head, 1, src_len) + ) + xy_attn_mask = xy_attn_mask.logical_or(_xy_padding_mask) + new_attn_mask = torch.zeros_like(xy_attn_mask, dtype=x.dtype) + new_attn_mask.masked_fill_(xy_attn_mask, float("-inf")) + xy_attn_mask = new_attn_mask + # x 和完整的 y 一次性输入模型 + xy_pos = torch.concat([x, y_pos], dim=1) + xy_dec, _ = self.h( + (xy_pos, None), + mask=xy_attn_mask, + ) + logits = self.ar_predict_layer(xy_dec[:, x_len:]).permute(0, 2, 1) + # loss + # from feiteng: 每次 duration 越多, 梯度更新也应该更多, 所以用 sum + loss = F.cross_entropy(logits, targets, reduction="sum") + acc = self.ar_accuracy_metric(logits.detach(), targets).item() + return loss, acc + + # 需要看下这个函数和 forward 的区别以及没有 semantic 的时候 prompts 输入什么 + def infer( + self, + x, + x_lens, + prompts, + bert_feature, + top_k: int = -100, + early_stop_num: int = -1, + temperature: float = 1.0, + ): + x = self.ar_text_embedding(x) + x = x + self.bert_proj(bert_feature.transpose(1, 2)) + x = self.ar_text_position(x) + + # AR Decoder + y = prompts + prefix_len = y.shape[1] + x_len = x.shape[1] + x_attn_mask = torch.zeros((x_len, x_len), dtype=torch.bool) + stop = False + for _ in tqdm(range(1500)): + y_emb = self.ar_audio_embedding(y) + y_pos = self.ar_audio_position(y_emb) + # x 和逐渐增长的 y 一起输入给模型 + xy_pos = torch.concat([x, y_pos], dim=1) + y_len = y.shape[1] + x_attn_mask_pad = F.pad( + x_attn_mask, + (0, y_len), + value=True, + ) + y_attn_mask = F.pad( + torch.triu(torch.ones(y_len, y_len, dtype=torch.bool), diagonal=1), + (x_len, 0), + value=False, + ) + xy_attn_mask = torch.concat([x_attn_mask_pad, y_attn_mask], dim=0).to( + y.device + ) + + xy_dec, _ = self.h( + (xy_pos, None), + mask=xy_attn_mask, + ) + logits = self.ar_predict_layer(xy_dec[:, -1]) + samples = topk_sampling( + logits, top_k=top_k, top_p=1.0, temperature=temperature + ) + + if early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num: + print("use early stop num:", early_stop_num) + stop = True + + if torch.argmax(logits, dim=-1)[0] == self.EOS or samples[0, 0] == self.EOS: + # print(torch.argmax(logits, dim=-1)[0] == self.EOS, samples[0, 0] == self.EOS) + stop = True + if stop: + if prompts.shape[1] == y.shape[1]: + y = torch.concat([y, torch.zeros_like(samples)], dim=1) + print("bad zero prediction") + print(f"T2S Decoding EOS [{prefix_len} -> {y.shape[1]}]") + break + # 本次生成的 semantic_ids 和之前的 y 构成新的 y + # print(samples.shape)#[1,1]#第一个1是bs + # import os + # os._exit(2333) + y = torch.concat([y, samples], dim=1) + return y + + def pad_y_eos(self, y, y_mask_int, eos_id): + targets = F.pad(y, (0, 1), value=0) + eos_id * F.pad( + y_mask_int, (0, 1), value=1 + ) + # 错位 + return targets[:, :-1], targets[:, 1:] + + def infer_panel( + self, + x, #####全部文本token + x_lens, + prompts, ####参考音频token + bert_feature, + top_k: int = -100, + top_p: int = 100, + early_stop_num: int = -1, + temperature: float = 1.0, + ): + x = self.ar_text_embedding(x) + x = x + self.bert_proj(bert_feature.transpose(1, 2)) + x = self.ar_text_position(x) + + # AR Decoder + y = prompts + + x_len = x.shape[1] + x_attn_mask = torch.zeros((x_len, x_len), dtype=torch.bool) + stop = False + # print(1111111,self.num_layers) + cache = { + "all_stage": self.num_layers, + "k": [None] * self.num_layers, ###根据配置自己手写 + "v": [None] * self.num_layers, + # "xy_pos":None,##y_pos位置编码每次都不一样的没法缓存,每次都要重新拼xy_pos.主要还是写法原因,其实是可以历史统一一样的,但也没啥计算量就不管了 + "y_emb": None, 
##只需要对最新的samples求emb,再拼历史的就行 + # "logits":None,###原版就已经只对结尾求再拼接了,不用管 + # "xy_dec":None,###不需要,本来只需要最后一个做logits + "first_infer": 1, + "stage": 0, + } + ################### first step ########################## + if y is not None: + y_emb = self.ar_audio_embedding(y) + y_len = y_emb.shape[1] + prefix_len = y.shape[1] + y_pos = self.ar_audio_position(y_emb) + xy_pos = torch.concat([x, y_pos], dim=1) + cache["y_emb"] = y_emb + ref_free = False + else: + y_emb = None + y_len = 0 + prefix_len = 0 + y_pos = None + xy_pos = x + y = torch.zeros(x.shape[0], 0, dtype=torch.int, device=x.device) + ref_free = True + + x_attn_mask_pad = F.pad( + x_attn_mask, + (0, y_len), ###xx的纯0扩展到xx纯0+xy纯1,(x,x+y) + value=True, + ) + y_attn_mask = F.pad( ###yy的右上1扩展到左边xy的0,(y,x+y) + torch.triu(torch.ones(y_len, y_len, dtype=torch.bool), diagonal=1), + (x_len, 0), + value=False, + ) + xy_attn_mask = torch.concat([x_attn_mask_pad, y_attn_mask], dim=0).to( + x.device + ) + + y_list = [None]*y.shape[0] + batch_idx_map = list(range(y.shape[0])) + idx_list = [None]*y.shape[0] + for idx in tqdm(range(1500)): + + xy_dec, _ = self.h((xy_pos, None), mask=xy_attn_mask, cache=cache) + logits = self.ar_predict_layer( + xy_dec[:, -1] + ) ##不用改,如果用了cache的默认就是只有一帧,取最后一帧一样的 + # samples = topk_sampling(logits, top_k=top_k, top_p=1.0, temperature=temperature) + if(idx==0):###第一次跑不能EOS否则没有了 + logits = logits[:, :-1] ###刨除1024终止符号的概率 + samples = sample( + logits, y, top_k=top_k, top_p=top_p, repetition_penalty=1.35, temperature=temperature + )[0] + # 本次生成的 semantic_ids 和之前的 y 构成新的 y + # print(samples.shape)#[1,1]#第一个1是bs + y = torch.concat([y, samples], dim=1) + + # 移除已经生成完毕的序列 + reserved_idx_of_batch_for_y = None + if (self.EOS in torch.argmax(logits, dim=-1)) or \ + (self.EOS in samples[:, 0]): ###如果生成到EOS,则停止 + l = samples[:, 0]==self.EOS + removed_idx_of_batch_for_y = torch.where(l==True)[0].tolist() + reserved_idx_of_batch_for_y = torch.where(l==False)[0] + # batch_indexs = torch.tensor(batch_idx_map, device=y.device)[removed_idx_of_batch_for_y] + for i in removed_idx_of_batch_for_y: + batch_index = batch_idx_map[i] + idx_list[batch_index] = idx - 1 + y_list[batch_index] = y[i, :-1] + + batch_idx_map = [batch_idx_map[i] for i in reserved_idx_of_batch_for_y.tolist()] + + # 只保留未生成完毕的序列 + if reserved_idx_of_batch_for_y is not None: + # index = torch.LongTensor(batch_idx_map).to(y.device) + y = torch.index_select(y, dim=0, index=reserved_idx_of_batch_for_y) + if cache["y_emb"] is not None: + cache["y_emb"] = torch.index_select(cache["y_emb"], dim=0, index=reserved_idx_of_batch_for_y) + if cache["k"] is not None: + for i in range(self.num_layers): + # 因为kv转置了,所以batch dim是1 + cache["k"][i] = torch.index_select(cache["k"][i], dim=1, index=reserved_idx_of_batch_for_y) + cache["v"][i] = torch.index_select(cache["v"][i], dim=1, index=reserved_idx_of_batch_for_y) + + + if early_stop_num != -1 and (y.shape[1] - prefix_len) > early_stop_num: + print("use early stop num:", early_stop_num) + stop = True + + if not (None in idx_list): + # print(torch.argmax(logits, dim=-1)[0] == self.EOS, samples[0, 0] == self.EOS) + stop = True + if stop: + # if prompts.shape[1] == y.shape[1]: + # y = torch.concat([y, torch.zeros_like(samples)], dim=1) + # print("bad zero prediction") + if y.shape[1]==0: + y = torch.concat([y, torch.zeros_like(samples)], dim=1) + print("bad zero prediction") + print(f"T2S Decoding EOS [{prefix_len} -> {y.shape[1]}]") + break + + ####################### update next step ################################### + cache["first_infer"] = 0 
+ if cache["y_emb"] is not None: + y_emb = torch.cat( + [cache["y_emb"], self.ar_audio_embedding(y[:, -1:])], dim = 1 + ) + cache["y_emb"] = y_emb + y_pos = self.ar_audio_position(y_emb) + xy_pos = y_pos[:, -1:] + else: + y_emb = self.ar_audio_embedding(y[:, -1:]) + cache["y_emb"] = y_emb + y_pos = self.ar_audio_position(y_emb) + xy_pos = y_pos + y_len = y_pos.shape[1] + + ###最右边一列(是错的) + # xy_attn_mask=torch.ones((1, x_len+y_len), dtype=torch.bool,device=xy_pos.device) + # xy_attn_mask[:,-1]=False + ###最下面一行(是对的) + xy_attn_mask = torch.zeros( + (1, x_len + y_len), dtype=torch.bool, device=xy_pos.device + ) + + if (None in idx_list): + for i in range(x.shape[0]): + if idx_list[i] is None: + idx_list[i] = 1500-1 ###如果没有生成到EOS,就用最大长度代替 + + if ref_free: + return y_list, [0]*x.shape[0] + return y_list, idx_list From 3b9259b0a1564dc83e35ebf8e513740fc7594bb3 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 20:21:11 +0800 Subject: [PATCH 56/63] modified: GPT_SoVITS/AR/models/t2s_model.py --- GPT_SoVITS/AR/models/t2s_model.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 8e3c7fc0..23da380c 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -166,10 +166,10 @@ class T2STransformer: return x, k_cache, v_cache def decode_next_token( - self, x, k_cache: List[torch.Tensor], v_cache: List[torch.Tensor], attn_mask : torch.Tensor + self, x, k_cache: List[torch.Tensor], v_cache: List[torch.Tensor] ): for i in range(self.num_blocks): - x, k_cache[i], v_cache[i] = self.blocks[i].decode_next_token(x, k_cache[i], v_cache[i], attn_mask) + x, k_cache[i], v_cache[i] = self.blocks[i].decode_next_token(x, k_cache[i], v_cache[i]) return x, k_cache, v_cache @@ -554,7 +554,7 @@ class Text2SemanticDecoder(nn.Module): if idx == 0: xy_dec, k_cache, v_cache = self.t2s_transformer.process_prompt(xy_pos, xy_attn_mask) else: - xy_dec, k_cache, v_cache = self.t2s_transformer.decode_next_token(xy_pos, k_cache, v_cache, xy_attn_mask) + xy_dec, k_cache, v_cache = self.t2s_transformer.decode_next_token(xy_pos, k_cache, v_cache) logits = self.ar_predict_layer( xy_dec[:, -1] From be49e32505f169772aac15ffdb5caaa1491965dd Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 20:23:55 +0800 Subject: [PATCH 57/63] modified: GPT_SoVITS/AR/models/t2s_model.py --- GPT_SoVITS/AR/models/t2s_model.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 23da380c..4908c593 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -115,7 +115,7 @@ class T2SBlock: ) return x, k_cache, v_cache - def decode_next_token(self, x, k_cache, v_cache, attn_mask : torch.Tensor): + def decode_next_token(self, x, k_cache, v_cache): q, k, v = F.linear(x, self.qkv_w, self.qkv_b).chunk(3, dim=-1) k_cache = torch.cat([k_cache, k], dim=1) From 2fe3207d713f8d166e3cbaf1dc0a55aa7dd74784 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sat, 9 Mar 2024 22:11:07 +0800 Subject: [PATCH 58/63] modified: GPT_SoVITS/TTS_infer_pack/TTS.py --- GPT_SoVITS/TTS_infer_pack/TTS.py | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index f54bf3bf..7a8ececc 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -87,6 +87,7 @@ 
class TTS_Config: self.n_speakers:int = 300 self.langauges:list = ["auto", "en", "zh", "ja", "all_zh", "all_ja"] + print(self) def _load_configs(self, configs_path: str)->dict: with open(configs_path, 'r') as f: @@ -118,6 +119,18 @@ class TTS_Config: configs_path = self.configs_path with open(configs_path, 'w') as f: yaml.dump(configs, f) + + + def __str__(self): + string = "----------------TTS Config--------------\n" + string += "device: {}\n".format(self.device) + string += "is_half: {}\n".format(self.is_half) + string += "bert_base_path: {}\n".format(self.bert_base_path) + string += "t2s_weights_path: {}\n".format(self.t2s_weights_path) + string += "vits_weights_path: {}\n".format(self.vits_weights_path) + string += "cnhuhbert_base_path: {}\n".format(self.cnhuhbert_base_path) + string += "----------------------------------------\n" + return string class TTS: From 9d3b868464ffa2b1197340c644c1b976a5e2feaa Mon Sep 17 00:00:00 2001 From: jiaqianjing Date: Sat, 9 Mar 2024 23:52:00 +0800 Subject: [PATCH 59/63] bugfix for import local tools module failed when sys.path have been some tools module e.g. python=3.10.x --- tools/__init__.py | 0 webui.py | 4 +++- 2 files changed, 3 insertions(+), 1 deletion(-) create mode 100644 tools/__init__.py diff --git a/tools/__init__.py b/tools/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/webui.py b/webui.py index c6430d92..f6d115d9 100644 --- a/webui.py +++ b/webui.py @@ -1,6 +1,8 @@ import os,shutil,sys,pdb,re now_dir = os.getcwd() -sys.path.append(now_dir) +print(now_dir) +sys.path.insert(0, now_dir) +print(sys.path) import json,yaml,warnings,torch import platform import psutil From 36d2b57e9dac9ac14b6266a0b4518b8167cd1d59 Mon Sep 17 00:00:00 2001 From: jiaqianjing Date: Sun, 10 Mar 2024 00:03:38 +0800 Subject: [PATCH 60/63] remove useless code --- tools/uvr5/uvr5_weights/.gitignore | 2 ++ webui.py | 2 -- 2 files changed, 2 insertions(+), 2 deletions(-) create mode 100644 tools/uvr5/uvr5_weights/.gitignore diff --git a/tools/uvr5/uvr5_weights/.gitignore b/tools/uvr5/uvr5_weights/.gitignore new file mode 100644 index 00000000..d6b7ef32 --- /dev/null +++ b/tools/uvr5/uvr5_weights/.gitignore @@ -0,0 +1,2 @@ +* +!.gitignore diff --git a/webui.py b/webui.py index f6d115d9..fc8680e1 100644 --- a/webui.py +++ b/webui.py @@ -1,8 +1,6 @@ import os,shutil,sys,pdb,re now_dir = os.getcwd() -print(now_dir) sys.path.insert(0, now_dir) -print(sys.path) import json,yaml,warnings,torch import platform import psutil From ed2ffe13569b4740ad525214ed0a6441628f4f1a Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sun, 10 Mar 2024 01:20:42 +0800 Subject: [PATCH 61/63] =?UTF-8?q?=09=E4=BF=AE=E5=A4=8D=E4=BA=86t2s?= =?UTF-8?q?=E6=A8=A1=E5=9E=8B=E6=97=A0prompt=E8=BE=93=E5=85=A5=E6=97=B6?= =?UTF-8?q?=E7=9A=84bug=20=20=20GPT=5FSoVITS/AR/models/t2s=5Fmodel.py=20?= =?UTF-8?q?=09=E5=A2=9E=E5=8A=A0=E4=B8=80=E4=BA=9B=E6=96=B0=E7=89=B9?= =?UTF-8?q?=E6=80=A7=EF=BC=8C=E5=B9=B6=E4=BF=AE=E5=A4=8D=E4=BA=86=E4=B8=80?= =?UTF-8?q?=E4=BA=9Bbug=20=20=20GPT=5FSoVITS/TTS=5Finfer=5Fpack/TTS.py=20?= =?UTF-8?q?=09=E4=BC=98=E5=8C=96=E7=BD=91=E9=A1=B5=E5=B8=83=E5=B1=80=20=20?= =?UTF-8?q?=20GPT=5FSoVITS/inference=5Fwebui.py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/AR/models/t2s_model.py | 7 +- GPT_SoVITS/TTS_infer_pack/TTS.py | 157 ++++++++++++++++++++---------- GPT_SoVITS/inference_webui.py | 131 ++++++++++++++++--------- 3 files changed, 194 insertions(+), 101 deletions(-) diff --git 
a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index 4908c593..e140b4fc 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -549,7 +549,6 @@ class Text2SemanticDecoder(nn.Module): y_list = [None]*y.shape[0] batch_idx_map = list(range(y.shape[0])) idx_list = [None]*y.shape[0] - cache_y_emb = y_emb for idx in tqdm(range(1500)): if idx == 0: xy_dec, k_cache, v_cache = self.t2s_transformer.process_prompt(xy_pos, xy_attn_mask) @@ -589,8 +588,6 @@ class Text2SemanticDecoder(nn.Module): if reserved_idx_of_batch_for_y is not None: # index = torch.LongTensor(batch_idx_map).to(y.device) y = torch.index_select(y, dim=0, index=reserved_idx_of_batch_for_y) - if cache_y_emb is not None: - cache_y_emb = torch.index_select(cache_y_emb, dim=0, index=reserved_idx_of_batch_for_y) if k_cache is not None : for i in range(len(k_cache)): k_cache[i] = torch.index_select(k_cache[i], dim=0, index=reserved_idx_of_batch_for_y) @@ -617,8 +614,8 @@ class Text2SemanticDecoder(nn.Module): ####################### update next step ################################### y_emb = self.ar_audio_embedding(y[:, -1:]) - xy_pos = y_emb * self.ar_audio_position.x_scale + self.ar_audio_position.alpha * self.ar_audio_position.pe[:, y_len + idx] - + xy_pos = y_emb * self.ar_audio_position.x_scale + self.ar_audio_position.alpha * self.ar_audio_position.pe[:, y_len + idx].to( dtype= y_emb.dtype,device=y_emb.device) + if (None in idx_list): for i in range(x.shape[0]): if idx_list[i] is None: diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index 7a8ececc..b26bb70f 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -1,8 +1,7 @@ import os, sys - -import ffmpeg now_dir = os.getcwd() sys.path.append(now_dir) +import ffmpeg import os from typing import Generator, List, Union import numpy as np @@ -164,6 +163,9 @@ class TTS: "bert_features":None, "norm_text":None, } + + + self.stop_flag:bool = False def _init_models(self,): self.init_t2s_weights(self.configs.t2s_weights_path) @@ -310,7 +312,7 @@ class TTS: batch = torch.stack(padded_sequences) return batch - def to_batch(self, data:list, prompt_data:dict=None, batch_size:int=5, threshold:float=0.75): + def to_batch(self, data:list, prompt_data:dict=None, batch_size:int=5, threshold:float=0.75, split_bucket:bool=True): _data:list = [] index_and_len_list = [] @@ -318,30 +320,35 @@ class TTS: norm_text_len = len(item["norm_text"]) index_and_len_list.append([idx, norm_text_len]) - index_and_len_list.sort(key=lambda x: x[1]) - # index_and_len_batch_list = [index_and_len_list[idx:min(idx+batch_size,len(index_and_len_list))] for idx in range(0,len(index_and_len_list),batch_size)] - index_and_len_list = np.array(index_and_len_list, dtype=np.int64) - - # for batch_idx, index_and_len_batch in enumerate(index_and_len_batch_list): - batch_index_list = [] - batch_index_list_len = 0 - pos = 0 - while pos =threshold) or (pos_end-pos==1): - batch_index=index_and_len_list[pos:pos_end, 0].tolist() - batch_index_list_len += len(batch_index) - batch_index_list.append(batch_index) - pos = pos_end - break - pos_end=pos_end-1 - - assert batch_index_list_len == len(data) + if split_bucket: + index_and_len_list.sort(key=lambda x: x[1]) + index_and_len_list = np.array(index_and_len_list, dtype=np.int64) + + batch_index_list_len = 0 + pos = 0 + while pos =threshold) or (pos_end-pos==1): + batch_index=index_and_len_list[pos:pos_end, 0].tolist() + batch_index_list_len += len(batch_index) + 
batch_index_list.append(batch_index) + pos = pos_end + break + pos_end=pos_end-1 + + assert batch_index_list_len == len(data) + + else: + for i in range(len(data)): + if i%batch_size == 0: + batch_index_list.append([]) + batch_index_list[-1].append(i) + for batch_idx, index_list in enumerate(batch_index_list): item_list = [data[idx] for idx in index_list] @@ -399,7 +406,8 @@ class TTS: _data[index] = data[i][j] return _data - + def stop(self,): + self.stop_flag = True def run(self, inputs:dict): @@ -409,22 +417,26 @@ class TTS: Args: inputs (dict): { - "text": "", - "text_lang: "", - "ref_audio_path": "", - "prompt_text": "", - "prompt_lang": "", - "top_k": 5, - "top_p": 0.9, - "temperature": 0.6, - "text_split_method": "", - "batch_size": 1, - "batch_threshold": 0.75, - "speed_factor":1.0, + "text": "", # str. text to be synthesized + "text_lang: "", # str. language of the text to be synthesized + "ref_audio_path": "", # str. reference audio path + "prompt_text": "", # str. prompt text for the reference audio + "prompt_lang": "", # str. language of the prompt text for the reference audio + "top_k": 5, # int. top k sampling + "top_p": 0.9, # float. top p sampling + "temperature": 0.6, # float. temperature for sampling + "text_split_method": "", # str. text split method, see text_segmentaion_method.py for details. + "batch_size": 1, # int. batch size for inference + "batch_threshold": 0.75, # float. threshold for batch splitting. + "split_bucket: True, # bool. whether to split the batch into multiple buckets. + "return_fragment": False, # bool. step by step return the audio fragment. + "speed_factor":1.0, # float. control the speed of the synthesized audio. } returns: tulpe[int, np.ndarray]: sampling rate and audio data. """ + self.stop_flag:bool = False + text:str = inputs.get("text", "") text_lang:str = inputs.get("text_lang", "") ref_audio_path:str = inputs.get("ref_audio_path", "") @@ -437,7 +449,20 @@ class TTS: batch_size = inputs.get("batch_size", 1) batch_threshold = inputs.get("batch_threshold", 0.75) speed_factor = inputs.get("speed_factor", 1.0) + split_bucket = inputs.get("split_bucket", True) + return_fragment = inputs.get("return_fragment", False) + if return_fragment: + split_bucket = False + print(i18n("分段返回模式已开启")) + if split_bucket: + split_bucket = False + print(i18n("分段返回模式不支持分桶处理,已自动关闭分桶处理")) + + if split_bucket: + print(i18n("分桶处理模式已开启")) + + no_prompt_text = False if prompt_text in [None, ""]: no_prompt_text = True @@ -481,7 +506,9 @@ class TTS: data, batch_index_list = self.to_batch(data, prompt_data=self.prompt_cache if not no_prompt_text else None, batch_size=batch_size, - threshold=batch_threshold) + threshold=batch_threshold, + split_bucket=split_bucket + ) t2 = ttime() zero_wav = torch.zeros( int(self.configs.sampling_rate * 0.3), @@ -557,27 +584,57 @@ class TTS: audio_fragment.cpu().numpy() ) ###试试重建不带上prompt部分 - audio.append(batch_audio_fragment) - # audio.append(zero_wav) t5 = ttime() t_45 += t5 - t4 + if return_fragment: + print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t4 - t3, t5 - t4)) + yield self.audio_postprocess(batch_audio_fragment, + self.configs.sampling_rate, + batch_index_list, + speed_factor, + split_bucket) + else: + audio.append(batch_audio_fragment) + + if self.stop_flag: + yield self.configs.sampling_rate, (zero_wav.cpu().numpy()).astype(np.int16) + return + + if not return_fragment: + print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t_34, t_45)) + yield self.audio_postprocess(audio, + self.configs.sampling_rate, + batch_index_list, + 
speed_factor, + split_bucket) + + + + def audio_postprocess(self, + audio:np.ndarray, + sr:int, + batch_index_list:list=None, + speed_factor:float=1.0, + split_bucket:bool=True)->tuple[int, np.ndarray]: + if split_bucket: + audio = self.recovery_order(audio, batch_index_list) + else: + audio = [item for batch in audio for item in batch] + - audio = self.recovery_order(audio, batch_index_list) - print("%.3f\t%.3f\t%.3f\t%.3f" % (t1 - t0, t2 - t1, t_34, t_45)) - audio = np.concatenate(audio, 0) audio = (audio * 32768).astype(np.int16) try: if speed_factor != 1.0: - audio = speed_change(audio, speed=speed_factor, sr=int(self.configs.sampling_rate)) + audio = speed_change(audio, speed=speed_factor, sr=int(sr)) except Exception as e: print(f"Failed to change speed of audio: \n{e}") - yield self.configs.sampling_rate, audio - - - + return sr, audio + + + def speed_change(input_audio:np.ndarray, speed:float, sr:int): # 将 NumPy 数组转换为原始 PCM 流 diff --git a/GPT_SoVITS/inference_webui.py b/GPT_SoVITS/inference_webui.py index f0336bb5..a1932207 100644 --- a/GPT_SoVITS/inference_webui.py +++ b/GPT_SoVITS/inference_webui.py @@ -6,8 +6,11 @@ 全部按英文识别 全部按日文识别 ''' -import os, re, logging +import os, sys +now_dir = os.getcwd() +sys.path.append(now_dir) +import os, re, logging logging.getLogger("markdown_it").setLevel(logging.ERROR) logging.getLogger("urllib3").setLevel(logging.ERROR) logging.getLogger("httpcore").setLevel(logging.ERROR) @@ -18,10 +21,7 @@ logging.getLogger("torchaudio._extension").setLevel(logging.ERROR) import pdb import torch # modified from https://github.com/feng-yufei/shared_debugging_code/blob/main/model/t2s_lightning_module.py -import os, sys -now_dir = os.getcwd() -sys.path.append(now_dir) infer_ttswebui = os.environ.get("infer_ttswebui", 9872) infer_ttswebui = int(infer_ttswebui) @@ -34,6 +34,7 @@ import gradio as gr from TTS_infer_pack.TTS import TTS, TTS_Config from TTS_infer_pack.text_segmentation_method import cut1, cut2, cut3, cut4, cut5 from tools.i18n.i18n import I18nAuto +from TTS_infer_pack.text_segmentation_method import get_method i18n = I18nAuto() os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1' # 确保直接启动推理UI时也能够设置。 @@ -68,19 +69,28 @@ tts_pipline = TTS(tts_config) gpt_path = tts_config.t2s_weights_path sovits_path = tts_config.vits_weights_path -def inference(text, text_lang, ref_audio_path, prompt_text, prompt_lang, top_k, top_p, temperature, text_split_method, batch_size, speed_factor): +def inference(text, text_lang, + ref_audio_path, prompt_text, + prompt_lang, top_k, + top_p, temperature, + text_split_method, batch_size, + speed_factor, ref_text_free, + split_bucket + ): inputs={ "text": text, "text_lang": dict_language[text_lang], "ref_audio_path": ref_audio_path, - "prompt_text": prompt_text, + "prompt_text": prompt_text if not ref_text_free else "", "prompt_lang": dict_language[prompt_lang], "top_k": top_k, "top_p": top_p, "temperature": temperature, "text_split_method": cut_method[text_split_method], "batch_size":int(batch_size), - "speed_factor":float(speed_factor) + "speed_factor":float(speed_factor), + "split_bucket":split_bucket, + "return_fragment":False, } yield next(tts_pipline.run(inputs)) @@ -121,7 +131,9 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: gr.Markdown( value=i18n("本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录LICENSE.") ) - with gr.Group(): + + with gr.Column(): + # with gr.Group(): gr.Markdown(value=i18n("模型切换")) with gr.Row(): GPT_dropdown = gr.Dropdown(label=i18n("GPT模型列表"), choices=sorted(GPT_names, key=custom_sort_key), value=gpt_path, interactive=True) @@ -130,61 +142,88 @@ with gr.Blocks(title="GPT-SoVITS WebUI") as app: refresh_button.click(fn=change_choices, inputs=[], outputs=[SoVITS_dropdown, GPT_dropdown]) SoVITS_dropdown.change(tts_pipline.init_vits_weights, [SoVITS_dropdown], []) GPT_dropdown.change(tts_pipline.init_t2s_weights, [GPT_dropdown], []) - gr.Markdown(value=i18n("*请上传并填写参考信息")) - with gr.Row(): + + with gr.Row(): + with gr.Column(): + gr.Markdown(value=i18n("*请上传并填写参考信息")) inp_ref = gr.Audio(label=i18n("请上传3~10秒内参考音频,超过会报错!"), type="filepath") - with gr.Column(): - ref_text_free = gr.Checkbox(label=i18n("开启无参考文本模式。不填参考文本亦相当于开启。"), value=False, interactive=True, show_label=True) - gr.Markdown(i18n("使用无参考文本模式时建议使用微调的GPT,听不清参考音频说的啥(不晓得写啥)可以开,开启后无视填写的参考文本。")) - prompt_text = gr.Textbox(label=i18n("参考音频的文本"), value="") - prompt_language = gr.Dropdown( - label=i18n("参考音频的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") - ) - gr.Markdown(value=i18n("*请填写需要合成的目标文本和语种模式")) - with gr.Row(): - text = gr.Textbox(label=i18n("需要合成的文本"), value="") + prompt_text = gr.Textbox(label=i18n("参考音频的文本"), value="", lines=2) + with gr.Row(): + prompt_language = gr.Dropdown( + label=i18n("参考音频的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") + ) + with gr.Column(): + ref_text_free = gr.Checkbox(label=i18n("开启无参考文本模式。不填参考文本亦相当于开启。"), value=False, interactive=True, show_label=True) + gr.Markdown(i18n("使用无参考文本模式时建议使用微调的GPT,听不清参考音频说的啥(不晓得写啥)可以开,开启后无视填写的参考文本。")) + + with gr.Column(): + gr.Markdown(value=i18n("*请填写需要合成的目标文本和语种模式")) + text = gr.Textbox(label=i18n("需要合成的文本"), value="", lines=16, max_lines=16) text_language = gr.Dropdown( label=i18n("需要合成的语种"), choices=[i18n("中文"), i18n("英文"), i18n("日文"), i18n("中英混合"), i18n("日英混合"), i18n("多语种混合")], value=i18n("中文") ) - how_to_cut = gr.Radio( - label=i18n("怎么切"), - choices=[i18n("不切"), i18n("凑四句一切"), i18n("凑50字一切"), i18n("按中文句号。切"), i18n("按英文句号.切"), i18n("按标点符号切"), ], - value=i18n("凑四句一切"), - interactive=True, - ) - with gr.Row(): - gr.Markdown(value=i18n("gpt采样参数(无参考文本时不要太低):")) + + + with gr.Group(): + gr.Markdown(value=i18n("推理设置")) + with gr.Row(): + + with gr.Column(): batch_size = gr.Slider(minimum=1,maximum=20,step=1,label=i18n("batch_size"),value=1,interactive=True) speed_factor = gr.Slider(minimum=0.25,maximum=4,step=0.05,label="speed_factor",value=1.0,interactive=True) top_k = gr.Slider(minimum=1,maximum=100,step=1,label=i18n("top_k"),value=5,interactive=True) top_p = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("top_p"),value=1,interactive=True) temperature = gr.Slider(minimum=0,maximum=1,step=0.05,label=i18n("temperature"),value=1,interactive=True) - inference_button = gr.Button(i18n("合成语音"), variant="primary") - output = gr.Audio(label=i18n("输出的语音")) - + with gr.Column(): + how_to_cut = gr.Radio( + label=i18n("怎么切"), + choices=[i18n("不切"), i18n("凑四句一切"), i18n("凑50字一切"), i18n("按中文句号。切"), i18n("按英文句号.切"), i18n("按标点符号切"), ], + value=i18n("凑四句一切"), + interactive=True, + ) + split_bucket = gr.Checkbox(label=i18n("数据分桶(可能会降低一点计算量,选就对了)"), value=True, interactive=True, show_label=True) + # with gr.Column(): + output = gr.Audio(label=i18n("输出的语音")) + with gr.Row(): + inference_button = 
gr.Button(i18n("合成语音"), variant="primary") + stop_infer = gr.Button(i18n("终止合成"), variant="primary") + - - inference_button.click( inference, - [text,text_language, inp_ref, prompt_text, prompt_language, top_k, top_p, temperature, how_to_cut, batch_size, speed_factor], + [ + text,text_language, inp_ref, + prompt_text, prompt_language, + top_k, top_p, temperature, + how_to_cut, batch_size, + speed_factor, ref_text_free, + split_bucket + ], [output], ) + stop_infer.click(tts_pipline.stop, [], []) + with gr.Group(): gr.Markdown(value=i18n("文本切分工具。太长的文本合成出来效果不一定好,所以太长建议先切。合成会根据文本的换行分开合成再拼起来。")) with gr.Row(): - text_inp = gr.Textbox(label=i18n("需要合成的切分前文本"), value="") - button1 = gr.Button(i18n("凑四句一切"), variant="primary") - button2 = gr.Button(i18n("凑50字一切"), variant="primary") - button3 = gr.Button(i18n("按中文句号。切"), variant="primary") - button4 = gr.Button(i18n("按英文句号.切"), variant="primary") - button5 = gr.Button(i18n("按标点符号切"), variant="primary") - text_opt = gr.Textbox(label=i18n("切分后文本"), value="") - button1.click(cut1, [text_inp], [text_opt]) - button2.click(cut2, [text_inp], [text_opt]) - button3.click(cut3, [text_inp], [text_opt]) - button4.click(cut4, [text_inp], [text_opt]) - button5.click(cut5, [text_inp], [text_opt]) + text_inp = gr.Textbox(label=i18n("需要合成的切分前文本"), value="", lines=4) + with gr.Column(): + _how_to_cut = gr.Radio( + label=i18n("怎么切"), + choices=[i18n("不切"), i18n("凑四句一切"), i18n("凑50字一切"), i18n("按中文句号。切"), i18n("按英文句号.切"), i18n("按标点符号切"), ], + value=i18n("凑四句一切"), + interactive=True, + ) + cut_text= gr.Button(i18n("切分"), variant="primary") + + def to_cut(text_inp, how_to_cut): + if len(text_inp.strip()) == 0 or text_inp==[]: + return "" + method = get_method(cut_method[how_to_cut]) + return method(text_inp) + + text_opt = gr.Textbox(label=i18n("切分后文本"), value="", lines=4) + cut_text.click(to_cut, [text_inp, _how_to_cut], [text_opt]) gr.Markdown(value=i18n("后续将支持转音素、手工修改音素、语音合成分步执行。")) app.queue(concurrency_count=511, max_size=1022).launch( From cae976ef5af5db171cc1795a765e179954290b57 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sun, 10 Mar 2024 01:57:04 +0800 Subject: [PATCH 62/63] =?UTF-8?q?=20=20=20=20=E5=A2=9E=E5=8A=A0=E4=BA=86?= =?UTF-8?q?=E6=B3=A8=E9=87=8A=20=20=20GPT=5FSoVITS/TTS=5Finfer=5Fpack/TTS.?= =?UTF-8?q?py?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- GPT_SoVITS/TTS_infer_pack/TTS.py | 36 +++++++++++++++++++++++--------- 1 file changed, 26 insertions(+), 10 deletions(-) diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index b26bb70f..cc460b81 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -21,7 +21,7 @@ from .text_segmentation_method import splits from .TextPreprocessor import TextPreprocessor i18n = I18nAuto() -# tts_infer.yaml +# configs/tts_infer.yaml """ default: device: cpu @@ -240,6 +240,12 @@ class TTS: self.t2s_model = t2s_model def set_ref_audio(self, ref_audio_path:str): + ''' + To set the reference audio for the TTS model, + including the prompt_semantic and refer_spepc. + Args: + ref_audio_path: str, the path of the reference audio. + ''' self._set_prompt_semantic(ref_audio_path) self._set_ref_spepc(ref_audio_path) @@ -399,6 +405,16 @@ class TTS: return _data, batch_index_list def recovery_order(self, data:list, batch_index_list:list)->list: + ''' + Recovery the order of the audio according to the batch_index_list. + + Args: + data (List[list(np.ndarray)]): the out of order audio . 
+ batch_index_list (List[list[int]]): the batch index list. + + Returns: + list (List[np.ndarray]): the data in the original order. + ''' lenght = len(sum(batch_index_list, [])) _data = [None]*lenght for i, index_list in enumerate(batch_index_list): @@ -407,6 +423,9 @@ class TTS: return _data def stop(self,): + ''' + Stop the inference process. + ''' self.stop_flag = True @@ -435,8 +454,8 @@ class TTS: returns: tulpe[int, np.ndarray]: sampling rate and audio data. """ + ########## variables initialization ########### self.stop_flag:bool = False - text:str = inputs.get("text", "") text_lang:str = inputs.get("text_lang", "") ref_audio_path:str = inputs.get("ref_audio_path", "") @@ -475,6 +494,8 @@ class TTS: ((self.prompt_cache["prompt_semantic"] is None) or (self.prompt_cache["refer_spepc"] is None)): raise ValueError("ref_audio_path cannot be empty, when the reference audio is not set using set_ref_audio()") + + ###### setting reference audio and prompt text preprocessing ######## t0 = ttime() if (ref_audio_path is not None) and (ref_audio_path != self.prompt_cache["ref_audio_path"]): self.set_ref_audio(ref_audio_path) @@ -494,12 +515,8 @@ class TTS: self.prompt_cache["bert_features"] = bert_features self.prompt_cache["norm_text"] = norm_text - zero_wav = np.zeros( - int(self.configs.sampling_rate * 0.3), - dtype=np.float16 if self.configs.is_half else np.float32, - ) - + ###### text preprocessing ######## data = self.text_preprocessor.preprocess(text, text_lang, text_split_method) audio = [] t1 = ttime() @@ -516,6 +533,8 @@ class TTS: device=self.configs.device ) + + ###### inference ###### t_34 = 0.0 t_45 = 0.0 for item in data: @@ -525,12 +544,10 @@ class TTS: all_bert_features = item["all_bert_features"] norm_text = item["norm_text"] - # phones = phones.to(self.configs.device) all_phoneme_ids = all_phoneme_ids.to(self.configs.device) all_bert_features = all_bert_features.to(self.configs.device) if self.configs.is_half: all_bert_features = all_bert_features.half() - # all_phoneme_len = torch.tensor([all_phoneme_ids.shape[-1]]*all_phoneme_ids.shape[0], device=self.configs.device) print(i18n("前端处理后的文本(每句):"), norm_text) if no_prompt_text : @@ -539,7 +556,6 @@ class TTS: prompt = self.prompt_cache["prompt_semantic"].clone().repeat(all_phoneme_ids.shape[0], 1).to(self.configs.device) with torch.no_grad(): - # pred_semantic = t2s_model.model.infer( pred_semantic_list, idx_list = self.t2s_model.model.infer_panel( all_phoneme_ids, None, From cd746848e6ca45ffdc04ab22756d8336cd5d8ec4 Mon Sep 17 00:00:00 2001 From: chasonjiang <1440499136@qq.com> Date: Sun, 10 Mar 2024 12:13:57 +0800 Subject: [PATCH 63/63] fixed some bugs GPT_SoVITS/AR/models/t2s_model.py fixed some bugs GPT_SoVITS/TTS_infer_pack/TTS.py --- GPT_SoVITS/AR/models/t2s_model.py | 21 ++++++++++++++++++++- GPT_SoVITS/TTS_infer_pack/TTS.py | 14 ++++++++++---- 2 files changed, 30 insertions(+), 5 deletions(-) diff --git a/GPT_SoVITS/AR/models/t2s_model.py b/GPT_SoVITS/AR/models/t2s_model.py index e140b4fc..ed46b2b1 100644 --- a/GPT_SoVITS/AR/models/t2s_model.py +++ b/GPT_SoVITS/AR/models/t2s_model.py @@ -97,7 +97,7 @@ class T2SBlock: k = k_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) v = v_cache.view(batch_size, kv_len, self.num_heads, -1).transpose(1, 2) - attn = F.scaled_dot_product_attention(q, k, v, ~attn_mask) + attn = F.scaled_dot_product_attention(q, k, v, attn_mask) attn = attn.permute(2, 0, 1, 3).reshape(batch_size*q_len, self.hidden_dim) attn = attn.view(q_len, batch_size, self.hidden_dim).transpose(1, 0) 
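For context on the attention-mask handling in the hunk above and the one that follows: the patch switches from a boolean mask (passed as ~attn_mask) to an additive float mask built with masked_fill_(-inf). Below is a minimal, self-contained sketch of that boolean-to-additive conversion; it is not part of the patch, and the names, shapes, and values are illustrative assumptions only.

import torch
import torch.nn.functional as F

# Toy shapes (assumed for illustration only).
bsz, num_head, q_len, kv_len, head_dim = 2, 4, 5, 5, 8
q = torch.randn(bsz, num_head, q_len, head_dim)
k = torch.randn(bsz, num_head, kv_len, head_dim)
v = torch.randn(bsz, num_head, kv_len, head_dim)

# Boolean masks where True marks positions that must NOT be attended to.
padding_mask = torch.zeros(bsz, kv_len, dtype=torch.bool)
padding_mask[1, -2:] = True   # pretend sample 1 ends with two padded frames
causal_mask = torch.triu(torch.ones(q_len, kv_len, dtype=torch.bool), diagonal=1)
blocked = causal_mask.unsqueeze(0).unsqueeze(0) | padding_mask.view(bsz, 1, 1, kv_len)

# Additive float mask: 0 where attention is allowed, -inf where it is blocked.
attn_mask = torch.zeros_like(blocked, dtype=q.dtype)
attn_mask.masked_fill_(blocked, float("-inf"))

# Passing the float mask directly is equivalent to passing ~blocked as a boolean
# mask, which is why the ~ negation is dropped in the hunk above.
out = F.scaled_dot_product_attention(q, k, v, attn_mask)
print(out.shape)  # torch.Size([2, 4, 5, 8])

The hunk that follows builds the real mask the same way: a padding mask from make_pad_mask is combined with the causal mask via logical_or, then converted to a float mask with masked_fill_(float("-inf")) before decoding starts.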
@@ -532,6 +532,20 @@ class Text2SemanticDecoder(nn.Module): y = torch.zeros(x.shape[0], 0, dtype=torch.int, device=x.device) ref_free = True + + ##### create mask ##### + bsz = x.shape[0] + src_len = x_len + y_len + y_lens = torch.LongTensor([y_len]*bsz).to(x.device) + y_mask = make_pad_mask(y_lens) + x_mask = make_pad_mask(x_lens) + + + xy_padding_mask = torch.concat([x_mask, y_mask], dim=1) + _xy_padding_mask = ( + xy_padding_mask.view(bsz, 1, 1, src_len).expand(-1, self.num_head, -1, -1) + ) + x_attn_mask_pad = F.pad( x_attn_mask, (0, y_len), ###xx的纯0扩展到xx纯0+xy纯1,(x,x+y) @@ -545,7 +559,12 @@ class Text2SemanticDecoder(nn.Module): xy_attn_mask = torch.concat([x_attn_mask_pad, y_attn_mask], dim=0).to( x.device ) + xy_attn_mask = xy_attn_mask.logical_or(_xy_padding_mask) + new_attn_mask = torch.zeros_like(xy_attn_mask, dtype=x.dtype) + new_attn_mask.masked_fill_(xy_attn_mask, float("-inf")) + xy_attn_mask = new_attn_mask + ###### decode ##### y_list = [None]*y.shape[0] batch_idx_map = list(range(y.shape[0])) idx_list = [None]*y.shape[0] diff --git a/GPT_SoVITS/TTS_infer_pack/TTS.py b/GPT_SoVITS/TTS_infer_pack/TTS.py index cc460b81..ba29a03f 100644 --- a/GPT_SoVITS/TTS_infer_pack/TTS.py +++ b/GPT_SoVITS/TTS_infer_pack/TTS.py @@ -361,6 +361,7 @@ class TTS: phones_list = [] # bert_features_list = [] all_phones_list = [] + all_phones_len_list = [] all_bert_features_list = [] norm_text_batch = [] bert_max_len = 0 @@ -376,16 +377,18 @@ class TTS: phones = torch.LongTensor(item["phones"]) all_phones = phones.clone() # norm_text = item["norm_text"] + bert_max_len = max(bert_max_len, all_bert_features.shape[-1]) phones_max_len = max(phones_max_len, phones.shape[-1]) phones_list.append(phones) all_phones_list.append(all_phones) + all_phones_len_list.append(all_phones.shape[-1]) all_bert_features_list.append(all_bert_features) norm_text_batch.append(item["norm_text"]) - # phones_batch = phones_list + phones_batch = phones_list max_len = max(bert_max_len, phones_max_len) - phones_batch = self.batch_sequences(phones_list, axis=0, pad_value=0, max_length=max_len) + # phones_batch = self.batch_sequences(phones_list, axis=0, pad_value=0, max_length=max_len) all_phones_batch = self.batch_sequences(all_phones_list, axis=0, pad_value=0, max_length=max_len) all_bert_features_batch = torch.FloatTensor(len(item_list), 1024, max_len) all_bert_features_batch.zero_() @@ -397,6 +400,7 @@ class TTS: batch = { "phones": phones_batch, "all_phones": all_phones_batch, + "all_phones_len": torch.LongTensor(all_phones_len_list), "all_bert_features": all_bert_features_batch, "norm_text": norm_text_batch } @@ -541,10 +545,12 @@ class TTS: t3 = ttime() batch_phones = item["phones"] all_phoneme_ids = item["all_phones"] + all_phoneme_lens = item["all_phones_len"] all_bert_features = item["all_bert_features"] norm_text = item["norm_text"] all_phoneme_ids = all_phoneme_ids.to(self.configs.device) + all_phoneme_lens = all_phoneme_lens.to(self.configs.device) all_bert_features = all_bert_features.to(self.configs.device) if self.configs.is_half: all_bert_features = all_bert_features.half() @@ -558,7 +564,7 @@ class TTS: with torch.no_grad(): pred_semantic_list, idx_list = self.t2s_model.model.infer_panel( all_phoneme_ids, - None, + all_phoneme_lens, prompt, all_bert_features, # prompt_phone_len=ph_offset, @@ -588,7 +594,7 @@ class TTS: ## 改成串行处理 batch_audio_fragment = [] for i, idx in enumerate(idx_list): - phones = batch_phones[i].clone().unsqueeze(0).to(self.configs.device) + phones = 
batch_phones[i].unsqueeze(0).to(self.configs.device) _pred_semantic = (pred_semantic_list[i][-idx:].unsqueeze(0).unsqueeze(0)) # .unsqueeze(0)#mq要多unsqueeze一次 audio_fragment =(self.vits_model.decode( _pred_semantic, phones, refer_audio_spepc