This commit is contained in:
zR 2024-08-06 12:40:34 +08:00
parent a0f71389dc
commit b4aec98e7d
2 changed files with 2 additions and 2 deletions

View File

@@ -24,7 +24,7 @@
the video almost losslessly.
- 🔥 **News**: ``2024/8/6``: We have open-sourced **CogVideoX-2B**, the first model in the CogVideoX series of video
generation models.
- 🌱 **Source**: ```2022.5.19```: We have open-sourced CogVideo (now available in the `cogvideo-old` branch), a Transformer-based text-to-video model; see the [paper](https://arxiv.org/abs/2205.15868) for technical details.
- 🌱 **Source**: ```2022/5/19```: We have open-sourced CogVideo (now available in the `CogVideo` branch), a Transformer-based text-to-video model; see the [ICLR'23 CogVideo Paper](https://arxiv.org/abs/2205.15868) for technical details.
**More powerful models with larger parameter sizes are on the way~ Stay tuned!**

View File

@@ -23,7 +23,7 @@
- 🔥 **News**: ``2024/8/6``: We have open-sourced the **3D Causal VAE** used in **CogVideoX-2B**, which can reconstruct videos almost losslessly.
- 🔥 **News**: ``2024/8/6``: We have open-sourced **CogVideoX-2B**, the first model in the CogVideoX series of video generation models.
- 🌱 **Source**: ```2022.5.19```: We have open-sourced CogVideo (now available in the `cogvideo-old` branch), a Transformer-based text-to-video model; see the [paper](https://arxiv.org/abs/2205.15868) for technical details.
- 🌱 **Source**: ```2022/5/19```: We have open-sourced CogVideo (now available in the `CogVideo` branch), a Transformer-based text-to-video model; see the [ICLR'23 paper](https://arxiv.org/abs/2205.15868) for technical details.
**More powerful models with larger parameter sizes are on the way~ Stay tuned!**
## CogVideoX-2B Video Works