CogVideo && CogVideoX
Experience the CogVideoX-5B model online at 🤗 Huggingface Space or 🤖 ModelScope Space
📚 Check here to view Paper
📍 Visit QingYing (清影) and the API Platform to experience larger-scale commercial video generation models.
Update and News
- 🔥🔥 News: `2024/8/27`: We have open-sourced a larger model in the CogVideoX series, CogVideoX-5B. At the same time, CogVideoX-2B will be licensed under the Apache 2.0 License. We have significantly optimized the model's inference performance, greatly lowering the inference threshold. You can now run CogVideoX-2B on earlier GPUs like the `GTX 1080TI`, and CogVideoX-5B on mainstream desktop GPUs like the `RTX 3060`.
- 🔥 News: `2024/8/20`: VEnhancer now supports enhancing videos generated by CogVideoX, achieving higher resolution and higher quality video rendering. We welcome you to try it out by following the tutorial.
- 🔥 News: `2024/8/15`: The `SwissArmyTransformer` dependency in CogVideoX has been upgraded to `0.4.12`. Fine-tuning no longer requires installing `SwissArmyTransformer` from source. Additionally, the Tiled VAE technique has been applied in the implementation within the `diffusers` library. Please install the `diffusers` and `accelerate` libraries from source. Inference for CogVideoX now requires only 12GB of VRAM. The inference code needs to be modified; please check cli_demo.
- 🔥 News: `2024/8/12`: The CogVideoX paper has been uploaded to arXiv. Feel free to check out the paper.
- 🔥 News: `2024/8/7`: CogVideoX has been integrated into `diffusers` version 0.30.0. Inference can now be performed on a single 3090 GPU. For more details, please refer to the code.
- 🔥 News: `2024/8/6`: We have also open-sourced the 3D Causal VAE used in CogVideoX-2B, which can reconstruct videos almost losslessly.
- 🔥 News: `2024/8/6`: We have open-sourced CogVideoX-2B, the first model in the CogVideoX series of video generation models.
- 🌱 Source: `2022/5/19`: We have open-sourced CogVideo (now available on the `CogVideo` branch), the first open-source pretrained text-to-video model. See the ICLR'23 CogVideo paper for technical details.
More powerful models with larger parameter sizes are on the way~ Stay tuned!
Table of Contents
Jump to a specific section:
- Quick Start
- CogVideoX-2B Video Works
- Introduction to the CogVideoX Model
- Full Project Structure
- Introduction to the CogVideo (ICLR'23) Model
- Citations
- Open Source Project Plan
- Model License
Quick Start
Prompt Optimization
Before running the model, please refer to this guide to see how we use large models like GLM-4 (or other comparable products, such as GPT-4) to optimize the prompt. This is crucial because the model is trained with long prompts, and a good prompt directly impacts the quality of the generated video.
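As an illustration only, the sketch below expands a short idea into a long-form prompt with an LLM before passing it to CogVideoX. The client setup, model name, and system prompt here are placeholders for whichever GLM-4 or GPT-4 endpoint you use; the convert_demo script in this repository remains the reference implementation.

```python
# Hypothetical sketch: expand a short prompt with an LLM before video generation.
# The endpoint, model name, and system prompt below are placeholders; adapt them
# to your GLM-4 / GPT-4 provider (see convert_demo for the script actually used).
from openai import OpenAI

SYSTEM_PROMPT = (
    "You rewrite a short video idea into a single detailed, cinematic paragraph: "
    "describe the subject, motion, camera, lighting, and background explicitly."
)

client = OpenAI()  # assumes an API key for an OpenAI-compatible endpoint is configured

def optimize_prompt(short_prompt: str) -> str:
    """Return a long-form prompt closer to the distribution CogVideoX was trained on."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; GLM-4 via a compatible endpoint also works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": short_prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(optimize_prompt("a panda playing guitar in a bamboo forest"))
```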
SAT
Please make sure your Python version is between 3.10 and 3.12, inclusive.
Follow the instructions in sat_demo, which contains the inference and fine-tuning code for the SAT weights. We recommend building on the CogVideoX model structure; innovative researchers can use this code for rapid prototyping and development.
Diffusers
Please make sure your Python version is between 3.10 and 3.12, inclusive.
```shell
pip install -r requirements.txt
```
Then follow diffusers_demo, which gives a more detailed explanation of the inference code and the significance of common parameters.
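For orientation, a minimal generation loop with the diffusers pipeline looks roughly like the sketch below, assuming diffusers >= 0.30.0 and the memory optimizations described later in the Data Explanation section; diffusers_demo documents the full set of parameters.

```python
# Minimal sketch of CogVideoX inference with diffusers (>= 0.30.0); see diffusers_demo
# for the full, maintained version with all parameters explained.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)

# Memory optimizations referenced in the Data Explanation section: offload idle
# submodules to the CPU and decode the VAE in tiles to keep peak VRAM low.
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

prompt = "A golden retriever runs through a sunlit meadow, slow motion, cinematic lighting."
video = pipe(
    prompt=prompt,
    num_inference_steps=50,  # matches the "Step = 50" speed figures in the model table
    guidance_scale=6.0,
    num_frames=49,           # 6 seconds at 8 fps, plus the initial frame
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```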
Gallery
CogVideoX-5B
CogVideoX-2B
To view the corresponding prompts for the gallery, please click here.
Model Introduction
| Model Name | CogVideoX-2B | CogVideoX-5B |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16* (recommended), BF16, FP32, FP8* (E4M3, E5M2), INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8* (E4M3, E5M2), INT8; INT4 not supported |
| Single GPU VRAM Consumption | FP16: 18GB using SAT / 12.5GB* using diffusers<br>INT8: 7.8GB* using diffusers | BF16: 26GB using SAT / 20.7GB* using diffusers<br>INT8: 11.4GB* using diffusers |
| Multi-GPU Inference VRAM Consumption | FP16: 10GB* using diffusers | BF16: 15GB* using diffusers |
| Inference Speed (Step = 50) | FP16: ~90* s | BF16: ~180* s |
| Fine-Tuning Precision | FP16 | BF16 |
| Fine-Tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LORA)<br>61 GB (bs=2, LORA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LORA)<br>80 GB (bs=2, LORA)<br>75 GB (bs=1, SFT) |
| Prompt Language | English* | English* |
| Prompt Length Limit | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds |
| Frame Rate | 8 frames per second | 8 frames per second |
| Video Resolution | 720 × 480; other resolutions not supported (including fine-tuning) | 720 × 480; other resolutions not supported (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
| Download Links (Diffusers Model) | 🤗 HuggingFace 🤖 ModelScope 🟣 WiseModel | 🤗 HuggingFace 🤖 ModelScope 🟣 WiseModel |
| Download Links (SAT Model) | SAT | SAT |
Data Explanation
- When testing with the diffusers library, the `enable_model_cpu_offload()` option and the `pipe.vae.enable_tiling()` optimization were enabled. This setup has not been tested for actual VRAM usage on devices other than the NVIDIA A100 / H100, but it should generally be compatible with all devices using the NVIDIA Ampere architecture and above. If these optimizations are disabled, memory usage increases significantly, with peak VRAM usage approximately three times higher than the values shown in the table.
- When performing multi-GPU inference, the `enable_model_cpu_offload()` optimization must be disabled.
- Using the INT8 model results in slower inference. This trade-off allows inference on GPUs with less VRAM without significant loss of video quality, albeit with a notable reduction in speed.
- Inference speed tests were conducted with the above memory optimizations enabled. Without memory optimization, inference speed increases by approximately 10%. Only the `diffusers` version of the model supports quantization.
- The model only supports English input; prompts in other languages can be translated into English when refined through a large language model.
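As a rough illustration of the quantized path, the sketch below applies INT8 weight-only quantization to the transformer with torchao before assembling the pipeline. The torchao usage and the chosen arguments are assumptions for illustration; cli_demo_quantization in this repository is the supported reference.

```python
# Hedged sketch of INT8 weight-only quantization for CogVideoX-5B using torchao;
# see cli_demo_quantization for the supported implementation.
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from torchao.quantization import quantize_, int8_weight_only

model_id = "THUDM/CogVideoX-5b"

# Load the transformer separately and quantize its weights to INT8.
transformer = CogVideoXTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize_(transformer, int8_weight_only())

pipe = CogVideoXPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # disable this if you switch to multi-GPU inference
pipe.vae.enable_tiling()

video = pipe(
    "A lighthouse on a stormy coast at dusk, waves crashing, cinematic.",
    num_inference_steps=50,
    num_frames=49,
).frames[0]
```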
Friendly Links
We warmly welcome contributions from the community and actively contribute to the open-source ecosystem ourselves. The following works have already been adapted for CogVideoX, and we invite everyone to use them:
- Xorbits Inference: A powerful and comprehensive distributed inference framework, allowing you to easily deploy your own models or the latest cutting-edge open-source models with just one click.
- VideoSys: VideoSys provides a user-friendly, high-performance infrastructure for video generation, with full pipeline support and continuous integration of the latest models and techniques.
Project Structure
This open-source repository will guide developers to quickly get started with the basic usage and fine-tuning examples of the CogVideoX open-source model.
Inference
- cli_demo: A more detailed explanation of the inference code, including the significance of common parameters.
- cli_demo_quantization: Quantized model inference code that can run on devices with lower memory. You can also modify this code to support running CogVideoX models in FP8 precision.
- diffusers_vae_demo: Code for running VAE inference separately (a minimal sketch follows at the end of this Inference list).
- space demo: The same GUI code as used in the Huggingface Space, with frame interpolation and super-resolution tools integrated.
- convert_demo: How to convert user input into long-form input suitable for CogVideoX. Since CogVideoX is trained on long texts, we need to transform the input text distribution to match the training data using an LLM. The script defaults to using GLM4, but it can be replaced with GPT, Gemini, or any other large language model.
- gradio_web_demo: A simple Gradio web application demonstrating how to use the CogVideoX-2B model to generate videos. Similar to our Huggingface Space, you can use this script to run a simple web application for video generation.
```shell
cd inference
# For Linux and Windows users
python gradio_web_demo.py # humans mode

# For macOS with Apple Silicon users (Intel not supported); this may be 20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python gradio_web_demo.py # humans mode
```
- streamlit_web_demo: A simple streamlit web application demonstrating how to use the CogVideoX-2B model to generate videos.
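Below is a rough sketch of running the 3D Causal VAE on its own, the task diffusers_vae_demo covers; the dummy tensor, its layout, and the chosen model path are illustrative assumptions rather than the exact script.

```python
# Illustrative sketch of standalone VAE round-tripping (encode then decode) with the
# CogVideoX 3D Causal VAE from diffusers; diffusers_vae_demo is the reference script.
import torch
from diffusers import AutoencoderKLCogVideoX

vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.float16
).to("cuda")
vae.enable_tiling()  # keeps peak VRAM manageable for 720x480 clips

# Short dummy clip in [batch, channels, frames, height, width]; replace with real
# video frames scaled to [-1, 1].
video = torch.randn(1, 3, 9, 480, 720, dtype=torch.float16, device="cuda")

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()
    reconstruction = vae.decode(latents).sample

print(latents.shape, reconstruction.shape)
```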
sat
- sat_demo: Contains the inference and fine-tuning code for the SAT weights. We recommend building on the CogVideoX model structure; innovative researchers can use this code for rapid prototyping and development.
Tools
This folder contains some tools for model conversion / caption generation, etc.
- convert_weight_sat2hf: Convert SAT model weights to Huggingface model weights.
- caption_demo: A captioning tool, i.e., a model that understands video content and describes it in text.
CogVideo (ICLR'23)
The official repo for the paper CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers is on the CogVideo branch.
CogVideo is able to generate relatively high-frame-rate videos. A 4-second clip of 32 frames is shown below.
The demo for CogVideo is at https://models.aminer.cn/cogvideo, where you can get hands-on practice on text-to-video generation. The original input is in Chinese.
Citation
🌟 If you find our work helpful, please leave us a star and cite our paper.
```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}

@article{hong2022cogvideo,
  title={CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers},
  author={Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
  journal={arXiv preprint arXiv:2205.15868},
  year={2022}
}
```
Open Source Project Plan
- CogVideoX Model Open Source
- CogVideoX Model Inference Example (CLI / Web Demo)
- CogVideoX Online Experience Example (Huggingface Space)
- CogVideoX Open Source Model API Interface Example (Huggingface)
- CogVideoX Model Fine-Tuning Example (SAT)
- CogVideoX Model Fine-Tuning Example (Huggingface Diffusers)
- CogVideoX-5B Open Source (Adapted to CogVideoX-2B Suite)
- CogVideoX Technical Report Released
- CogVideoX Technical Explanation Video
- CogVideoX Peripheral Tools
- Basic Video Super-Resolution / Frame Interpolation Suite
- Inference Framework Adaptation
- ComfyUI Full Ecosystem Tools
We welcome your contributions! You can click here for more information.
License Agreement
The code in this repository is released under the Apache 2.0 License.
The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the Apache 2.0 License.
The CogVideoX-5B model (Transformers module) is released under the CogVideoX LICENSE.