mirror of https://github.com/THUDM/CogVideo.git
synced 2025-09-20 21:10:00 +08:00

Merge branch 'main' of https://github.com/THUDM/CogVideo
This commit is contained in: commit 6141e567e4
@@ -101,7 +101,7 @@ of the **CogVideoX** open-source model.
 This folder contains some tools for model conversion / caption generation, etc.
 + [convert_weight_sat2hf](tools/convert_weight_sat2hf.py): Convert SAT model weights to Huggingface model weights.
-+ [caption_demo](tools/caption/README.md): Caption tool, a model that understands videos and describes them in text.
++ [caption_demo](tools/caption): Caption tool, a model that understands videos and describes them in text.

 ## Project Plan
@@ -1,16 +1,16 @@
-# Video Subtitles
+# Video Caption

 Typically, most video data does not come with corresponding descriptive text, so the video data needs to be converted into textual descriptions to provide the training data required for text-to-video models.

-## Generating Video Descriptions with the CogVLM2-Video Model
+## Generating Video Captions with the CogVLM2-Video Model

 🤗 [Hugging Face](https://huggingface.co/THUDM/cogvlm2-video-llama3-chat) | 🤖 [ModelScope](https://modelscope.cn/models/ZhipuAI/cogvlm2-video-llama3-chat) | 📑 [Blog](https://cogvlm2-video.github.io/) | [💬 Online Demo](http://cogvlm2-online.cogviewai.cn:7868/)

-CogVLM2-Video is a versatile video understanding model with timestamp-based question-answering capability. Users can give the model a prompt such as `请详细描述这个视频` ("Please describe this video in detail") to obtain a detailed video subtitle:
+CogVLM2-Video is a versatile video understanding model with timestamp-based question-answering capability. Users can give the model a prompt such as `请详细描述这个视频` ("Please describe this video in detail") to obtain a detailed video caption:

 <div align="center">
 <a href="https://cogvlm2-video.github.io/"><img width="600px" height="auto" src="./assests/cogvlm2-video-example.png"></a>
 </div>

-Users can load the model with the provided [code](https://github.com/THUDM/CogVLM2/tree/main/video_demo), or set up a RESTful API, to generate video subtitles.
+Users can load the model with the provided [code](https://github.com/THUDM/CogVLM2/tree/main/video_demo), or set up a RESTful API, to generate video captions.
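The diff above mentions generating captions by calling the model through a RESTful API. As a minimal sketch of what a client for such a service might send, the snippet below builds a JSON request carrying a base64-encoded video plus the captioning prompt. The endpoint shape and the field names (`prompt`, `video`, `temperature`) are assumptions for illustration, not the project's documented API.

```python
import base64
import json

# Hypothetical request builder for a video-captioning HTTP service.
# Field names and the default prompt mirror the README's example prompt;
# the payload schema itself is an assumption.
def build_caption_request(video_bytes: bytes,
                          prompt: str = "请详细描述这个视频") -> str:
    payload = {
        "prompt": prompt,
        # Video content is base64-encoded so it survives JSON transport.
        "video": base64.b64encode(video_bytes).decode("ascii"),
        # A low temperature tends to favor factual, literal descriptions.
        "temperature": 0.2,
    }
    return json.dumps(payload, ensure_ascii=False)
```

The resulting string could then be POSTed to the demo server with any HTTP client (e.g. `urllib.request` or `requests`), with the caption read back from the JSON response.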