mirror of https://github.com/THUDM/CogVideo.git
synced 2025-04-06 03:57:56 +08:00
commit ff87660ca5

README.md
@@ -18,10 +18,20 @@ Experience the CogVideoX-5B model online at <a href="https://huggingface.co/spac
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?lang=en?fr=osm_cogvideo">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience larger-scale commercial video generation models.

We have publicly shared the Feishu <a href="https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh">technical documentation</a> on CogVideoX fine-tuning scenarios, aiming to further increase distribution flexibility. All examples in the public documentation can be fully reproduced.

CogVideoX fine-tuning is divided into SFT and LoRA fine-tuning. With our publicly available data-processing scripts, you can more easily align specific styles in vertical scenarios. We provide guidance for ablation experiments on character image (IP) and scene style, further reducing the difficulty of reproducing fine-tuning tasks.

We look forward to creative explorations and contributions.
</p>
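The SFT-vs-LoRA distinction above comes down to how many parameters train: SFT updates the full weight matrices, while LoRA freezes them and learns a low-rank correction W + BA. A minimal numpy illustration (the layer size and rank here are arbitrary assumptions, not CogVideoX's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 512, 512, 8            # illustrative layer size and LoRA rank

W = rng.normal(size=(m, n))      # frozen base weight; SFT would update all m*n entries
B = np.zeros((m, r))             # LoRA factor B starts at zero, so W_eff == W initially
A = rng.normal(size=(r, n)) * 0.01

W_eff = W + B @ A                # effective weight used in the forward pass

full_params = m * n              # parameters SFT trains
lora_params = r * (m + n)        # parameters LoRA trains
print(lora_params / full_params) # 0.03125, about 3% of the full count
```

Only B and A receive gradients, which is why LoRA checkpoints stay small and styles can be swapped without touching the base model.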

## Project Updates

- 🔥🔥 **News**: ```2024/10/10```: We have updated our technical report with more training details and demos.

- 🔥🔥 **News**: ```2024/10/09```: We have publicly released the [technical documentation](https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh) for CogVideoX fine-tuning on Feishu, further increasing distribution flexibility. All examples in the public documentation can be fully reproduced.

- 🔥🔥 **News**: ```2024/9/25```: The CogVideoX web demo is available on Replicate. Try the text-to-video model **CogVideoX-5B** [here](https://replicate.com/chenxwh/cogvideox-t2v) and the image-to-video model **CogVideoX-5B-I2V** [here](https://replicate.com/chenxwh/cogvideox-i2v).
- 🔥🔥 **News**: ```2024/9/19```: We have open-sourced the CogVideoX series image-to-video model **CogVideoX-5B-I2V**.
This model can take an image as a background input and generate a video combined with prompt words, offering greater
@@ -294,6 +304,8 @@ works have already been adapted for CogVideoX, and we invite everyone to use the
Space image provided by community members.
+ [Interior Design Fine-Tuning Model](https://huggingface.co/collections/bertjiazheng/koolcogvideox-66e4762f53287b7f39f8f3ba):
  is a fine-tuned model based on CogVideoX, specifically designed for interior design.
+ [xDiT](https://github.com/xdit-project/xDiT): xDiT is a scalable inference engine for Diffusion Transformers (DiTs)
  on multiple GPU clusters. xDiT supports real-time image and video generation services.

## Project Structure
README_ja.md

@@ -17,11 +17,18 @@
👋 Join our <a href="resources/WECHAT.md" target="_blank">WeChat</a> and <a href="https://discord.gg/dCGfUsagrD" target="_blank">Discord</a>
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?lang=en?fr=osm_cogvideo">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience larger-scale commercial video generation models.

To further energize the ecosystem around CogVideoX video generation, optimizing the generation models is an important direction. We have published the Feishu <a href="https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh">technical documentation</a> for CogVideoX fine-tuning scenarios; to further increase distribution flexibility, all examples in the public documentation are fully reproducible.

CogVideoX fine-tuning is divided into SFT and LoRA fine-tuning. With our publicly available data-processing scripts, you can more easily achieve style alignment in vertical scenarios. We also provide guidance for ablation experiments on character image (IP) and scene style, further reducing the difficulty of reproducing fine-tuning tasks. We look forward to more creative explorations.
</p>

## Updates and News

- 🔥🔥 **News**: ```2024/10/10```: We have updated our technical report with more training details and demos.

- 🔥🔥 **News**: ```2024/10/09```: We have published the [technical documentation](https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh) for CogVideoX fine-tuning on Feishu; to further increase distribution flexibility, all examples in the public documentation are fully reproducible.

- 🔥🔥 **News**: ```2024/9/19```: We have open-sourced **CogVideoX-5B-I2V**, the image-to-video model in the CogVideoX series. This model can take an image as a background input and generate a video together with a prompt, offering stronger controllability. With this, the CogVideoX series supports three tasks: text-to-video generation, video continuation, and image-to-video generation. Try it [online](https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space).
@@ -271,6 +278,7 @@ pipe.vae.enable_tiling()
+ [AutoDL Image](https://www.codewithgpu.com/i/THUDM/CogVideo/CogVideoX-5b-demo): One-click deployment of the Hugging Face Space image, provided by community members.
+ [Interior Design Fine-Tuned Model](https://huggingface.co/collections/bertjiazheng/koolcogvideox-66e4762f53287b7f39f8f3ba): A fine-tuned model based on CogVideoX, designed specifically for interior design.
+ [xDiT](https://github.com/xdit-project/xDiT): xDiT is an engine for parallel inference of DiTs across multi-GPU clusters. xDiT supports real-time image and video generation services.

## Project Structure
README_zh.md

@@ -19,10 +19,18 @@
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?fr=osm_cogvideox">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience larger-scale commercial video generation models.

We have published the CogVideoX fine-tuning guide in our Feishu <a href="https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh">technical documentation</a> to further increase distribution flexibility; all examples in the public documentation can be fully reproduced.

CogVideoX fine-tuning is divided into SFT and LoRA fine-tuning. With our publicly available data-processing scripts, you can more conveniently achieve style alignment in vertical scenarios. We provide guidance for ablation experiments on character image (IP) and scene style, further reducing the difficulty of reproducing fine-tuning tasks.
We look forward to more creative explorations joining in.
</p>

## Project Updates

- 🔥🔥 **News**: ```2024/10/10```: We have updated our technical report with more training details and demos.

- 🔥🔥 **News**: ```2024/10/09```: We have published the CogVideoX fine-tuning guide in our Feishu [technical documentation](https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh) to further increase distribution flexibility; all examples in the public documentation can be fully reproduced.
- 🔥🔥 **News**: ```2024/9/19```: We have open-sourced **CogVideoX-5B-I2V**, the image-to-video model in the CogVideoX series.
This model can take an image as a background input and generate a video together with a prompt, offering stronger controllability.
With this, the CogVideoX series supports three tasks: text-to-video generation, video continuation, and image-to-video generation. Try it [online](https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space).
@@ -256,6 +264,8 @@ pipe.vae.enable_tiling()
+ [AutoDL Image](https://www.codewithgpu.com/i/THUDM/CogVideo/CogVideoX-5b-demo): One-click deployment of the Hugging Face Space image, provided by community members.
+ [Interior Design Fine-Tuned Model](https://huggingface.co/collections/bertjiazheng/koolcogvideox-66e4762f53287b7f39f8f3ba): A fine-tuned model based on CogVideoX, designed specifically for interior design.
+ [xDiT](https://github.com/xdit-project/xDiT): xDiT is an engine for parallel inference of DiTs on multi-GPU clusters. xDiT supports real-time image and video generation services.

## Full Project Code Structure
@@ -39,7 +39,7 @@ from diffusers.optimization import get_scheduler
 from diffusers.pipelines.cogvideo.pipeline_cogvideox import get_resize_crop_region_for_grid
 from diffusers.training_utils import (
     cast_training_params,
-    clear_objs_and_retain_memory,
+    free_memory,
 )
 from diffusers.utils import check_min_version, convert_unet_state_dict_to_peft, export_to_video, is_wandb_available
 from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
@@ -725,7 +725,7 @@ def log_validation(
             }
         )

-    clear_objs_and_retain_memory([pipe])
+    free_memory()

     return videos
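The hunk above swaps diffusers' removed `clear_objs_and_retain_memory` helper for `free_memory`, which no longer takes the objects to delete; the caller is expected to drop its own references first. A rough sketch of what such a cleanup helper does (the body is an assumption for illustration, not diffusers' actual implementation):

```python
import gc

def free_memory():
    # Hypothetical stand-in: force a garbage-collection pass and, when CUDA is
    # available, release cached GPU allocations back to the driver.
    freed = gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass
    return freed

pipe = [bytearray(1024) for _ in range(10)]  # pretend pipeline holding buffers
del pipe                                     # drop the reference first...
print(free_memory() >= 0)                    # ...then reclaim; prints True
```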
@@ -37,13 +37,15 @@ from huggingface_hub import hf_hub_download, snapshot_download

 device = "cuda" if torch.cuda.is_available() else "cpu"

+MODEL = "THUDM/CogVideoX-5b"
+
 hf_hub_download(repo_id="ai-forever/Real-ESRGAN", filename="RealESRGAN_x4.pth", local_dir="model_real_esran")
 snapshot_download(repo_id="AlexWortega/RIFE", local_dir="model_rife")

-pipe = CogVideoXPipeline.from_pretrained("/share/official_pretrains/hf_home/CogVideoX-5b", torch_dtype=torch.bfloat16).to(device)
+pipe = CogVideoXPipeline.from_pretrained(MODEL, torch_dtype=torch.bfloat16).to(device)
 pipe.scheduler = CogVideoXDPMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
 pipe_video = CogVideoXVideoToVideoPipeline.from_pretrained(
-    "/share/official_pretrains/hf_home/CogVideoX-5b",
+    MODEL,
     transformer=pipe.transformer,
     vae=pipe.vae,
     scheduler=pipe.scheduler,
@@ -53,9 +55,9 @@ pipe_video = CogVideoXVideoToVideoPipeline.from_pretrained(
 ).to(device)

 pipe_image = CogVideoXImageToVideoPipeline.from_pretrained(
-    "/share/official_pretrains/hf_home/CogVideoX-5b-I2V",
+    MODEL,
     transformer=CogVideoXTransformer3DModel.from_pretrained(
-        "/share/official_pretrains/hf_home/CogVideoX-5b-I2V", subfolder="transformer", torch_dtype=torch.bfloat16
+        MODEL, subfolder="transformer", torch_dtype=torch.bfloat16
     ),
     vae=pipe.vae,
     scheduler=pipe.scheduler,
@@ -315,7 +317,7 @@ with gr.Blocks() as demo:
                "></a>
            </div>
            <div style="text-align: center; font-size: 15px; font-weight: bold; color: red; margin-bottom: 20px;">
-               ⚠️ This demo is for academic research and experiential use only.
+               ⚠️ This demo is for academic research and experimental use only.
            </div>
            """)
     with gr.Row():
@@ -8,8 +8,9 @@ import numpy as np
 import logging
 import skvideo.io
 from rife.RIFE_HDv3 import Model

 from huggingface_hub import hf_hub_download, snapshot_download
+logger = logging.getLogger(__name__)

 device = "cuda" if torch.cuda.is_available() else "cpu"

@@ -18,8 +19,8 @@ def pad_image(img, scale):
     tmp = max(32, int(32 / scale))
     ph = ((h - 1) // tmp + 1) * tmp
     pw = ((w - 1) // tmp + 1) * tmp
-    padding = (0, 0, pw - w, ph - h)
-    return F.pad(img, padding)
+    padding = (0, pw - w, 0, ph - h)
+    return F.pad(img, padding), padding


 def make_inference(model, I0, I1, upscale_amount, n):
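The patched `pad_image` pads only on the right and bottom (torch's `F.pad` padding tuple for the last two dims is `(left, right, top, bottom)`) and now also returns the padding so callers can crop it back off. A numpy sketch of the same round trip (array shapes are illustrative; note the explicit zero checks, since `x[:-0]` is an empty slice):

```python
import numpy as np

def pad_to_multiple(img, multiple=32):
    # Pad a [c, h, w] array on the right/bottom up to multiples of `multiple`,
    # returning a (left, right, top, bottom) tuple like the patched pad_image.
    c, h, w = img.shape
    ph = ((h - 1) // multiple + 1) * multiple
    pw = ((w - 1) // multiple + 1) * multiple
    padding = (0, pw - w, 0, ph - h)
    return np.pad(img, ((0, 0), (0, ph - h), (0, pw - w))), padding

def crop_back(img, padding):
    # Undo pad_to_multiple; zero paddings must be special-cased because
    # slicing with :-0 would yield an empty array.
    _, right, _, bottom = padding
    if bottom > 0:
        img = img[:, :-bottom, :]
    if right > 0:
        img = img[:, :, :-right]
    return img

frame = np.zeros((3, 100, 130), dtype=np.float32)
padded, padding = pad_to_multiple(frame)
print(padded.shape)                      # (3, 128, 160)
print(crop_back(padded, padding).shape)  # (3, 100, 130)
```

The if/elif crop chain later in this diff exists precisely because of that zero-padding edge case.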
@@ -36,15 +37,23 @@ def make_inference(model, I0, I1, upscale_amount, n):

 @torch.inference_mode()
 def ssim_interpolation_rife(model, samples, exp=1, upscale_amount=1, output_device="cpu"):
+    print(f"samples dtype:{samples.dtype}")
+    print(f"samples shape:{samples.shape}")
     output = []
     pbar = utils.ProgressBar(samples.shape[0], desc="RIFE inference")
     # [f, c, h, w]
     for b in range(samples.shape[0]):
         frame = samples[b : b + 1]
         _, _, h, w = frame.shape

         I0 = samples[b : b + 1]
         I1 = samples[b + 1 : b + 2] if b + 2 < samples.shape[0] else samples[-1:]
-        I1 = pad_image(I1, upscale_amount)
+
+        I0, padding = pad_image(I0, upscale_amount)
+        I0 = I0.to(torch.float)
+        I1, _ = pad_image(I1, upscale_amount)
+        I1 = I1.to(torch.float)

         # [c, h, w]
         I0_small = F.interpolate(I0, (32, 32), mode="bilinear", align_corners=False)
         I1_small = F.interpolate(I1, (32, 32), mode="bilinear", align_corners=False)
@@ -52,14 +61,32 @@
         ssim = ssim_matlab(I0_small[:, :3], I1_small[:, :3])

         if ssim > 0.996:
-            I1 = I0
-            I1 = pad_image(I1, upscale_amount)
+            I1 = samples[b : b + 1]
+            # print(f'upscale_amount:{upscale_amount}')
+            # print(f'ssim:{upscale_amount}')
+            # print(f'I0 shape:{I0.shape}')
+            # print(f'I1 shape:{I1.shape}')
+            I1, padding = pad_image(I1, upscale_amount)
+            # print(f'I0 shape:{I0.shape}')
+            # print(f'I1 shape:{I1.shape}')
             I1 = make_inference(model, I0, I1, upscale_amount, 1)

-            I1_small = F.interpolate(I1[0], (32, 32), mode="bilinear", align_corners=False)
-            ssim = ssim_matlab(I0_small[:, :3], I1_small[:, :3])
-            frame = I1[0]
+            # print(f'I0 shape:{I0.shape}')
+            # print(f'I1[0] shape:{I1[0].shape}')
+            I1 = I1[0]
+
+            # print(f'I1[0] unpadded shape:{I1.shape}')
+            I1_small = F.interpolate(I1, (32, 32), mode="bilinear", align_corners=False)
+            ssim = ssim_matlab(I0_small[:, :3], I1_small[:, :3])
+            if padding[3] > 0 and padding[1] > 0:
+                frame = I1[:, :, : -padding[3], : -padding[1]]
+            elif padding[3] > 0:
+                frame = I1[:, :, : -padding[3], :]
+            elif padding[1] > 0:
+                frame = I1[:, :, :, : -padding[1]]
+            else:
+                frame = I1

         tmp_output = []
         if ssim < 0.2:
@@ -69,10 +96,17 @@
         else:
             tmp_output = make_inference(model, I0, I1, upscale_amount, 2**exp - 1) if exp else []

-        frame = pad_image(frame, upscale_amount)
-        tmp_output = [frame] + tmp_output
-        for i, frame in enumerate(tmp_output):
-            output.append(frame.to(output_device))
+        frame, _ = pad_image(frame, upscale_amount)
+        # print(f'frame shape:{frame.shape}')
+
+        frame = F.interpolate(frame, size=(h, w))
+        output.append(frame.to(output_device))
+        for i, tmp_frame in enumerate(tmp_output):
+            # tmp_frame, _ = pad_image(tmp_frame, upscale_amount)
+            tmp_frame = F.interpolate(tmp_frame, size=(h, w))
+            output.append(tmp_frame.to(output_device))
         pbar.update(1)
     return output
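In the hunk above, `make_inference` synthesizes `2**exp - 1` intermediate frames between each consecutive pair by recursive bisection, so each increment of `exp` doubles the effective frame rate. A pure-Python stand-in that midpoint-averages scalars instead of running RIFE's learned interpolation:

```python
def interpolate_pair(a, b, exp):
    # Return the 2**exp - 1 in-between values for one frame pair, by recursive
    # bisection (RIFE predicts these frames; here we just average).
    if exp == 0:
        return []
    mid = (a + b) / 2
    return interpolate_pair(a, mid, exp - 1) + [mid] + interpolate_pair(mid, b, exp - 1)

frames = [0.0, 1.0, 2.0]
exp = 2
out = []
for i in range(len(frames) - 1):
    out.append(frames[i])
    out.extend(interpolate_pair(frames[i], frames[i + 1], exp))
out.append(frames[-1])

print(out)       # [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
print(len(out))  # (n - 1) * 2**exp + 1 = 9
```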
@@ -94,14 +128,26 @@ def frame_generator(video_capture):


 def rife_inference_with_path(model, video_path):
+    # Open the video file
     video_capture = cv2.VideoCapture(video_path)
-    tot_frame = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
+    fps = video_capture.get(cv2.CAP_PROP_FPS)  # Get the frames per second
+    tot_frame = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))  # Total frames in the video
     pt_frame_data = []
-    pt_frame = skvideo.io.vreader(video_path)
-    for frame in pt_frame:
-        pt_frame_data.append(
-            torch.from_numpy(np.transpose(frame, (2, 0, 1))).to("cpu", non_blocking=True).float() / 255.0
-        )
+    # Cyclic reading of the video frames
+    while video_capture.isOpened():
+        ret, frame = video_capture.read()
+
+        if not ret:
+            break
+
+        # BGR to RGB
+        frame_rgb = frame[..., ::-1]
+        frame_rgb = frame_rgb.copy()
+        tensor = torch.from_numpy(frame_rgb).float().to("cpu", non_blocking=True).float() / 255.0
+        pt_frame_data.append(
+            tensor.permute(2, 0, 1)
+        )  # to [c, h, w,]

     pt_frame = torch.from_numpy(np.stack(pt_frame_data))
     pt_frame = pt_frame.to(device)
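The rewritten loop above reads frames with OpenCV, which returns BGR `uint8` arrays, hence the channel flip before normalizing and moving channels first. The conversion chain in isolation (toy pixel values, no cv2 needed):

```python
import numpy as np

# A toy 2x2 BGR frame (uint8), like cv2.VideoCapture.read() returns.
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

rgb = bgr[..., ::-1].copy()          # reverse the channel axis: BGR -> RGB
chw = np.transpose(rgb, (2, 0, 1))   # [h, w, c] -> [c, h, w]
normalized = chw.astype(np.float32) / 255.0

print(rgb[0, 0].tolist())  # [0, 0, 255]: the pure-blue pixel, with blue now last
print(chw.shape)           # (3, 2, 2)
print(normalized.max())    # 1.0
```

The `.copy()` matters: a negative-stride view cannot be handed to `torch.from_numpy`.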
@@ -122,8 +168,17 @@ def rife_inference_with_latents(model, latents):
     for i in range(latents.size(0)):
         # [f, c, w, h]
         latent = latents[i]

         frames = ssim_interpolation_rife(model, latent)
         pt_image = torch.stack([frames[i].squeeze(0) for i in range(len(frames))])  # (to [f, c, w, h])
         rife_results.append(pt_image)

     return torch.stack(rife_results)


+# if __name__ == "__main__":
+#     snapshot_download(repo_id="AlexWortega/RIFE", local_dir="model_rife")
+#     model = load_rife_model("model_rife")

+#     video_path = rife_inference_with_path(model, "/mnt/ceph/develop/jiawei/CogVideo/output/20241003_130720.mp4")
+#     print(video_path)