GPT-SoVITS/requirements.txt
google-labs-jules[bot] d3b8f7e09e feat: Migrate from CUDA to XPU for Intel GPU support
This commit migrates the project's GPU acceleration from NVIDIA CUDA to Intel XPU, targeting the PyTorch 2.9 release.

Key changes include:
- Replaced `torch.cuda` with `torch.xpu` for device checks, memory management, and distributed training.
- Updated device strings from "cuda" to "xpu" across the codebase.
- Switched the distributed training backend from "nccl" to "ccl" for Intel GPUs.
- Disabled custom CUDA kernels in the `BigVGAN` module by setting `use_cuda_kernel=False`.
- Updated `requirements.txt` to include `torch==2.9` and `intel-extension-for-pytorch`.
- Modified CI/CD pipelines and build scripts to remove CUDA dependencies and build for an XPU target.
2025-11-10 13:09:27 +00:00


--no-binary=opencc
numpy<2.0
scipy
tensorboard
librosa==0.10.2
numba
pytorch-lightning>=2.4
torch==2.9
intel-extension-for-pytorch
torchvision
gradio<5
ffmpeg-python
onnxruntime; platform_machine == "aarch64" or platform_machine == "arm64"
onnxruntime-gpu; platform_machine == "x86_64" or platform_machine == "AMD64"
tqdm
funasr==1.0.27
cn2an
pypinyin
pyopenjtalk>=0.4.1
g2p_en
torchaudio
modelscope==1.10.0
sentencepiece
transformers>=4.43,<=4.50
peft
chardet
PyYAML
psutil
jieba_fast
jieba
split-lang
fast_langdetect>=0.3.1
wordsegment
rotary_embedding_torch
ToJyutping
g2pk2
ko_pron
opencc
python_mecab_ko; sys_platform != 'win32'
fastapi[standard]>=0.115.2
x_transformers
torchmetrics<=1.5
pydantic<=2.10.6
ctranslate2>=4.0,<5
huggingface_hub>=0.13
tokenizers>=0.13,<1
av>=11