
VGen


VGen is an open-source video synthesis codebase developed by the Tongyi Lab of Alibaba Group, featuring state-of-the-art video generative models. The repository includes implementations of methods such as I2VGen-XL, ModelScopeT2V, TF-T2V, HiGen, DreamVideo, VideoLCM, InstructVideo, and DreamTalk.

VGen can produce high-quality videos from input text, images, desired motion, desired subjects, and even provided feedback signals. It also offers a variety of commonly used video generation tools such as visualization, sampling, training, inference, joint training using images and videos, acceleration, and more.

Project page: https://i2vgen-xl.github.io/ | Paper (arXiv): https://arxiv.org/abs/2311.04145 | Demo video: https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/441039979087.mp4

🔥News!!!

  • [2024.06] We release the code and models of InstructVideo. InstructVideo enables the LoRA fine-tuning and inference in VGen. Feel free to use LoRA fine-tuning for other tasks.
  • [2024.04] We release the models of DreamVideo and ModelScopeT2V V1.5!!! ModelScopeT2V V1.5 is further fine-tuned on ModelScopeT2V for 365k iterations with more data.
  • [2024.04] We release the code and models of TF-T2V!
  • [2024.04] We release the code and models of VideoLCM!
  • [2024.03] We release the training and inference code of DreamVideo!
  • [2024.03] We release the code and model of HiGen!!
  • [2024.01] The Gradio demo of I2VGen-XL is now available on Hugging Face. Thanks to our colleagues @Wenmeng Zhou and @AK for the support; welcome to try it out.
  • [2024.01] We now support running the Gradio app locally. Thanks to our colleague @Wenmeng Zhou for the support and @AK for the suggestion; welcome to have a try.
  • [2024.01] Thanks to @Chenxi for supporting running i2vgen-xl on Replicate. Feel free to give it a try.
  • [2024.01] The Gradio demo of I2VGen-XL is now available on ModelScope; welcome to try it out.
  • [2023.12] We have open-sourced the code and models for DreamTalk, which can produce high-quality talking head videos across diverse speaking styles using diffusion models.
  • [2023.12] We release TF-T2V that can scale up existing video generation techniques using text-free videos, significantly enhancing the performance of both Modelscope-T2V and VideoComposer at the same time.
  • [2023.12] We updated the codebase to support higher versions of xformer (0.0.22), torch2.0+, and removed the dependency on flash_attn.
  • [2023.12] We release InstructVideo, which can accept human feedback signals to improve the VLDM.
  • [2023.12] We release DreamTalk, a diffusion-based method for expressive talking head generation.
  • [2023.12] We release VideoLCM, a high-efficiency video generation method.
  • [2023.12] We release the code and models of I2VGen-XL and ModelScope T2V.
  • [2023.12] We release the T2V method HiGen and the customized T2V method DreamVideo.
  • [2023.12] We write an introduction document for VGen and compare I2VGen-XL with SVD.
  • [2023.11] We release a high-quality I2VGen-XL model, please refer to the Webpage

TODO

  • Release the technical papers and webpage of I2VGen-XL
  • Release the code and pretrained models that can generate 1280x720 videos
  • Release the code and models of DreamTalk that can generate expressive talking head videos
  • Release the code and pretrained models of HumanDiff
  • Release models optimized specifically for the human body and faces
  • Release an updated version that can fully preserve the subject identity and capture large, accurate motions simultaneously
  • Release other methods and the corresponding models

Preparation

The main features of VGen are as follows:

  • Expandability, allowing for easy management of your own experiments.
  • Completeness, encompassing all common components for video generation.
  • Excellent performance, featuring powerful pre-trained models in multiple tasks.

Installation

conda create -n vgen python=3.8
conda activate vgen
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
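
As a quick sanity check (not part of the repository), you can verify that the pinned PyTorch build can see your GPU before moving on:

import torch  # installed by the command above
print(torch.__version__)           # expected: 1.12.0+cu113
print(torch.cuda.is_available())   # should print True on a CUDA-capable machine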

You also need to ensure that ffmpeg is installed on your system. If it is not, you can install it using the following command:

sudo apt-get update && sudo apt-get install -y ffmpeg libsm6 libxext6

Datasets

We have provided a demo dataset that includes images and videos, along with their list files, in the data directory.

Please note that the demo images used here are for testing purposes and were not included in the training.
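
Once the repository is cloned (next step), you can preview these demo lists. The snippet below is a minimal sketch that assumes the lists are plain-text *list*.txt files under data/; adjust the pattern if your checkout differs.

from pathlib import Path

for list_file in sorted(Path('data').glob('*list*.txt')):
    print(f'== {list_file} ==')
    # print the first couple of entries to see how paths and captions are laid out
    for line in list_file.read_text().splitlines()[:2]:
        print(line)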

Clone the code

git clone https://github.com/ali-vilab/VGen.git
cd VGen

Getting Started with VGen

(1) Train your text-to-video model

Enabling distributed training is as easy as executing the following command:

python train_net.py --cfg configs/t2v_train.yaml

In the t2v_train.yaml configuration file, you can specify the data, adjust the video-to-image ratio using frame_lens, validate your ideas with different diffusion settings, and so on.

  • Before the training, you can download any of our open-source models for initialization. Our codebase supports custom initialization and grad_scale settings, all of which are included in the Pretrain item in the yaml file (see the config-inspection sketch after this list).
  • During the training, you can view the saved models and intermediate inference results in the workspace/experiments/t2v_train directory.
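
The following is a minimal sketch (not part of the codebase) for inspecting the training config before launching a run; it assumes PyYAML is available and uses the key names mentioned above (frame_lens, Pretrain), so adjust it if your yaml file differs.

import yaml

with open('configs/t2v_train.yaml') as f:
    cfg = yaml.safe_load(f)

# frame_lens controls the video-to-image ratio; Pretrain holds the initialization
# and grad_scale settings described above
for key in ('frame_lens', 'Pretrain'):
    print(key, '->', cfg.get(key, '<not found in this file>'))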

After the training is completed, you can perform inference on the model using the following command.

python inference.py --cfg configs/t2v_infer.yaml

Then you can find the videos you generated in the workspace/experiments/test_img_01 directory. For specific configurations such as data, models, seed, etc., please refer to the t2v_infer.yaml file.

If you want to directly load our previously open-sourced Modelscope T2V model, please refer to this link.
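
For illustration only, the published ModelScope T2V weights can also be driven through the ModelScope pipeline API; the model id and call below follow ModelScope's own documentation and are not defined by this repository.

from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# downloads the ModelScope T2V model on first use and generates a short clip
t2v = pipeline('text-to-video-synthesis', 'damo/text-to-video-synthesis')
result = t2v({'text': 'A panda eating bamboo on a rock.'})
print(result[OutputKeys.OUTPUT_VIDEO])  # path of the generated video file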


(2) Run the I2VGen-XL model

(i) Download model and test data:

pip install modelscope

Then, in Python:

from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('damo/I2VGen-XL', cache_dir='models/', revision='v1.0.0')
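
After the download finishes, you can locate the checkpoint files to pass to the inference command below. This optional helper simply assumes the weights are *.pth files under models/; the exact file names depend on the downloaded revision.

from pathlib import Path

for ckpt in sorted(Path('models').rglob('*.pth')):
    print(ckpt)  # e.g. the i2vgen_xl_*.pth weights used by test_model below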

Alternatively, you can download it through Hugging Face (https://huggingface.co/damo-vilab/i2vgen-xl):

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/damo-vilab/i2vgen-xl

(ii) Run the following command:

python inference.py --cfg configs/i2vgen_xl_infer.yaml

or you can run:

python inference.py --cfg configs/i2vgen_xl_infer.yaml  test_list_path data/test_list_for_i2vgen.txt test_model models/i2vgen_xl_00854500.pth

The test_list_path represents the input image path and its corresponding caption; please refer to the demo file data/test_list_for_i2vgen.txt for the specific format and suggestions. test_model is the path for loading the model. In a few minutes, you can retrieve the high-definition video you created from the workspace/experiments/test_list_for_i2vgen directory. At present, we find that the model performs inadequately on anime images and images with a black background due to the lack of relevant training data. We are continuously working to optimize it.
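
Before editing your own list, it can help to peek at the demo file to see how image paths and captions are written; this small sketch just prints its first entries.

# print the first few lines of the demo list shipped with the repo
with open('data/test_list_for_i2vgen.txt') as f:
    for line in f.readlines()[:3]:
        print(line.rstrip())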

(iii) Run the gradio app locally:

python gradio_app.py

(iv) Run the model on ModelScope and HuggingFace:

Due to the compression of the video quality in GIF format, please use the links below to view the original videos.

Example results (input image → generated video):

  • Input image: https://img.alicdn.com/imgextra/i1/O1CN01CCEq7K1ZeLpNQqrWu_!!6000000003219-0-tps-1280-720.jpg → generated video: https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/442125067544.mp4
  • Input image: https://img.alicdn.com/imgextra/i4/O1CN01ZXY7UN23K8q4oQ3uG_!!6000000007236-2-tps-1280-720.png → generated video: https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/441385957074.mp4
  • Input image: https://img.alicdn.com/imgextra/i3/O1CN01NHpVGl1oat4H54Hjf_!!6000000005242-2-tps-1280-720.png → generated video: https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/442102706767.mp4
