From efabaef1326cc853f28b7c58158713183b10989a Mon Sep 17 00:00:00 2001
From: wenyihong <48993524+wenyihong@users.noreply.github.com>
Date: Sat, 23 Jul 2022 20:46:07 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 70cb109..94c467c 100644
--- a/README.md
+++ b/README.md
@@ -2,10 +2,10 @@
 
 This is the official repo for the paper: [CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers](http://arxiv.org/abs/2205.15868).
 
-# Web Demo
+**News!** The [demo](https://wudao.aminer.cn/cogvideo/) for CogVideo is available!
 
 
-**News!** The [demo](https://wudao.aminer.cn/cogvideo/) for CogVideo is available! It's also integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/THUDM/CogVideo)
+It's also integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/THUDM/CogVideo)
 
 
 **News!** The code and model for text-to-video generation is now available! Currently we only supports *simplified Chinese input*.