# Gen-L-Video: Long Video Generation via Temporal Co-Denoising

This repository is the official implementation of Gen-L-Video.

Project Website arXiv Open In Colab

You might be interested in Gen-L^2, which works better.

## Introduction

TL;DR: A <font color=#FF2000>universal</font> methodology that extends short video diffusion models for efficient <font color=#FF2000>multi-text conditioned long video</font> generation and editing.

Current methodologies for video generation and editing, while innovative, are often confined to extremely short videos (typically fewer than 24 frames) and are limited to a single text condition. These constraints significantly limit their applications, given that real-world videos usually consist of multiple segments, each bearing different semantic information. To address this challenge, we introduce a novel paradigm dubbed Gen-L-Video, capable of extending off-the-shelf short video diffusion models to generate and edit videos comprising hundreds of frames with diverse semantic segments, without introducing additional training, all while preserving content consistency.

<p align="center"> <img src="./statics/imgs/lvdm.png" width="1080px"/> <br> <em>Essentially, this procedure establishes an abstract long video generator and editor without necessitating any additional training, enabling the generation and editing of videos of any length using established short video generation and editing methodologies.</em> </p>

## Setup

### Clone the Repo

```bash
git clone https://github.com/G-U-N/Gen-L-Video
cd Gen-L-Video
# The repo might be too large to clone because many long gifs are over 100 MB.
# If so, fork the repo, delete the statics, and then clone it.
```

### Install Environment via Anaconda

```bash
conda env create -f requirements.yml
conda activate glv
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```

### Install Xformers

```bash
# (Optional) Makes the build much faster
pip install ninja
# Set TORCH_CUDA_ARCH_LIST if running and building on different GPU types
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
# (this can take dozens of minutes)
```
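For example (an illustrative value, not part of the repo's instructions): an RTX 3090, which the results below assume, has CUDA compute capability 8.6, so a cross-GPU build would set:

```bash
# RTX 3090 targets compute capability 8.6; adjust for your GPU and set this
# before running the xformers pip install above.
export TORCH_CUDA_ARCH_LIST="8.6"
```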

### Install SAM and Grounding DINO

```bash
pip install git+https://github.com/facebookresearch/segment-anything.git
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
```

or

```bash
git clone https://github.com/facebookresearch/segment-anything.git
cd segment-anything
pip install -e .
cd ..
# If you have a CUDA environment, please make sure the environment variable CUDA_HOME is set.
# If the CUDA version of the system conflicts with the cudatoolkit version, see:
# https://github.com/G-U-N/Gen-L-Video/discussions/7
git clone https://github.com/IDEA-Research/GroundingDINO.git
cd GroundingDINO
pip install -e .
```

Note that if you are using a GPU cluster whose management node has no access to GPU resources, you should submit the `pip install -e .` to a computing node as a computing task when building GroundingDINO. Otherwise, the build will not support GPU-accelerated detection.
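For example, on a Slurm-managed cluster (an assumption; substitute your scheduler's equivalent command), the editable install can be dispatched to a GPU node like this:

```bash
# Hypothetical Slurm invocation; adapt to your cluster's submission system.
srun --gres=gpu:1 bash -lc "cd GroundingDINO && pip install -e ."
```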

### Download Pretrained Weights

Make sure git-lfs is available. See: https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md

```bash
bash scripts/download_pretrained_models.sh
```

After downloading the weights, specify their absolute or relative paths in the config files.

If you downloaded all the above pretrained weights into the folder `weights`, set the config files as follows:

1. In `configs/tuning-free-inpaint/girl-glass.yaml`, set

   ```yaml
   sam_checkpoint: "weights/sam_vit_h_4b8939.pth"
   groundingdino_checkpoint: "weights/groundingdino_swinb_cogcoor.pth"
   controlnet_path: "weights/edit-anything-v0-3"
   ```
2. In `one-shot-tuning.py`, set

   ```python
   adapter_paths = {
       "pose": "weights/T2I-Adapter/models/t2iadapter_openpose_sd14v1.pth",
       "sketch": "weights/T2I-Adapter/models/t2iadapter_sketch_sd14v1.pth",
       "seg": "weights/T2I-Adapter/models/t2iadapter_seg_sd14v1.pth",
       "depth": "weights/T2I-Adapter/models/t2iadapter_depth_sd14v1.pth",
       "canny": "weights/T2I-Adapter/models/t2iadapter_canny_sd14v1.pth",
   }
   ```
3. In `configs/one-shot-tuning/hike.yaml`, set

   ```yaml
   pretrained_model_path: "weights/anything-v4.0"
   ```

All the other weights can then be downloaded automatically through the Hugging Face API.
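For instance (a minimal sketch assuming the `diffusers` library; the model ID is illustrative), passing a Hugging Face hub ID instead of a local path makes the weights download and cache automatically on first use:

```python
from diffusers import StableDiffusionPipeline

# A hub ID (rather than a local folder like "weights/anything-v4.0") triggers
# an automatic download into the local Hugging Face cache on first use.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```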

#### For users who are unable to download weights automatically

Here are additional instructions for installing and running Grounding DINO with pre-downloaded weights.

```bash
# Notice: If you used 'pip install git+https://github.com/IDEA-Research/GroundingDINO.git',
# you should modify GroundingDINO_SwinB_cfg.py in the Python site-packages directory instead,
# e.g. ~/miniconda3/envs/glv/lib/python3.8/site-packages/groundingdino/config/GroundingDINO_SwinB_cfg.py
cd GroundingDINO/groundingdino/config/
vim GroundingDINO_SwinB_cfg.py
```

set

```python
text_encoder_type = "[Your Path]/bert-base-uncased"
```

Then

```bash
vim GroundingDINO/groundingdino/util/get_tokenlizer.py
```

Set

```python
def get_pretrained_language_model(text_encoder_type):
    if text_encoder_type == "bert-base-uncased" or text_encoder_type.split("/")[-1] == "bert-base-uncased":
        return BertModel.from_pretrained(text_encoder_type)
    if text_encoder_type == "roberta-base":
        return RobertaModel.from_pretrained(text_encoder_type)
    raise ValueError("Unknown text_encoder_type {}".format(text_encoder_type))
```

Now you should be able to run Grounding DINO with the pre-downloaded BERT weights.
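As a quick sanity check (a sketch; the paths are examples matching the config section above, so adjust them to your local copies), the detector should now load without any network access:

```python
from groundingdino.util.inference import load_model

# With text_encoder_type pointing at a local bert-base-uncased directory,
# loading the SwinB detector should not attempt any download.
model = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py",
    "weights/groundingdino_swinb_cogcoor.pth",
    device="cpu",
)
print("Grounding DINO loaded.")
```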

### Get your own control videos

```bash
git clone https://github.com/lllyasviel/ControlNet.git
cd ControlNet
git checkout f4748e3
mv ../process_data.py .
python process_data.py --v_path=../data --t_path=../t_data --c_path=../c_data --fps=10
```

## Inference

1. One-Shot Tuning Method:

   ```bash
   accelerate launch one-shot-tuning.py --control=[your control]
   ```

   `[your control]` can be set to `pose`, `depth`, `seg`, `sketch`, or `canny`.

   `pose` and `depth` are recommended.

2. Tuning-Free Method for videos with smooth semantic changes:

   ```bash
   accelerate launch tuning-free-mix.py
   ```
3. Tuning-Free Edit Anything in Videos:

   ```bash
   accelerate launch tuning-free-inpaint.py
   ```
4. Long video generation with a pretrained short text-to-video model:

   ```bash
   accelerate launch follow-your-pose-long.py
   ```
5. Tuning-Free Long Video2Video Generation:

   ```bash
   # canny
   accelerate launch tuning-free-control.py --config=./configs/tuning-free-control/girl-glass.yaml
   # hed
   accelerate launch tuning-free-control.py --config=./configs/tuning-free-control/girl.yaml
   ```

## Comparisons

<table class="center"> <tr> <td>Method</td> <td>Long Video</td> <td>Multi-Text Conditioned</td> <td>Pretraining-Free</td> <td>Parallel Denoising</td> <td>Versatile</td> </tr> <tr> <td>Tune-A-Video</td> <td>❌</td> <td>❌</td> <td>✔</td> <td>❌</td> <td>❌</td> </tr> <tr> <td>LVDM</td> <td>✔</td> <td>❌</td> <td>❌</td> <td>❌</td> <td>❌</td> </tr> <tr> <td>NUWA-XL</td> <td>✔</td> <td>✔</td> <td>❌</td> <td>✔</td> <td>❌</td> </tr> <tr> <td>Gen-L-Video</td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✔</td> <td>✔</td> </tr> </table>

## Results

Most of the results can be generated with a single RTX 3090.

### Multi-Text Conditioned Long Video Generation

https://github.com/G-U-N/Gen-L-Video/assets/60997859/9b370894-708a-4ed2-a2ac-abfa93829ea6

This video contains clips bearing various semantic information.

<img src="./statics/imgs/example.png" width=800px>

### Long Video with Smooth Semantic Changes

All the following videos are directly generated with the pretrained Stable Diffusion weights without additional training.

<table class="center"> <tr> <td style="text-align:center;" colspan="4"><b>Videos with Smooth Semantic Changes</b></td> </tr> <tr> <td><img src="./statics/gifs/boat-walk-mix.gif"></td> <td><img src="./statics/gifs/car-turn-beach-mix.gif"></td> <td><img src="./statics/gifs/lion-cat-mix.gif"></td> <td><img src="./statics/gifs/surf-skiing-mix.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">"A man is boating, village." → "A man is walking by, city, sunset."</td> <td width=25% style="text-align:center;">"A jeep car is running on the beach, sunny.” → "a jeep car is running on the beach, night."</td> <td width=25% style="text-align:center;">"Lion, Grass, Rainy." → "Cat, Grass, Sun." </td> <td width=25% style="text-align:center;">"A man is skiing in the sea." → "A man is surfing in the snow."</td> </tr> </table>

### Edit Anything in Video

All the following videos are directly generated with the pretrained Stable Diffusion weights without additional training.

<table class="center"> <tr> <td style="text-align:center;" colspan="4"><b>Edit Anything in Videos</b></td> </tr> <tr> <td><img src="./statics/gifs/girl-glass-source.gif"></td> <td><img src="./statics/gifs/girl-glass-mask.gif"></td> <td><img src="./statics/gifs/girl-glass-pink.gif"></td> <td><img src="./statics/gifs/girl-glass-cyberpunk.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">Source Video</td> <td width=25% style="text-align:center;">Mask of Sunglasses</td> <td width=25% style="text-align:center;">"Sunglasses" → "Pink Sunglasses" </td> <td width=25% style="text-align:center;">"Sunglasses" → "Cyberpunk Sunglasses with Neon Lights"</td> </tr> <tr> <td><img src="./statics/gifs/man-surfing-source.gif"></td> <td><img src="./statics/gifs/man-surfing-mask.gif"></td> <td><img src="./statics/gifs/man-surfing-batman.gif"></td> <td><img src="./statics/gifs/man-surfing-ironman.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">Source Video</td> <td width=25% style="text-align:center;">Mask of Man</td> <td width=25% style="text-align:center;">"Man" → "Bat Man" </td> <td width=25% style="text-align:center;">"Man" → "Iron Man"</td> </tr> </table>

### Controllable Video

<table class="center"> <tr> <td style="text-align:center;" colspan="4"><b>Controllable Video</b></td> </tr> <tr> <td><img src="./statics/gifs/tennis-pose.gif"></td> <td><img src="./statics/gifs/iron-man-tennis.gif"></td> <td><img src="./statics/gifs/vangogh-tennis.gif"></td> <td><img src="./statics/gifs/fire-tennis.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">Pose Control</td> <td width=25% style="text-align:center;">"Iron Man is fighting in the snow."</td> <td width=25% style="text-align:center;">"A Van Gogh style painting of a man dancing."</td> <td width=25% style="text-align:center;">"A man is running in the fire."</td> </tr> <tr> <td><img src="./statics/gifs/cat-in-the-sun-depth.gif"></td> <td><img src="./statics/gifs/dog-in-the-sun.gif"></td> <td><img src="./statics/gifs/tiger-in-the-sun.gif"></td> <td><img src="./statics/gifs/girl-in-the-sun.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">Depth Control</td> <td width=25% style="text-align:center;">"Dog in the sun.""</td> <td width=25% style="text-align:center;">"Tiger in the sun."</td> <td width=25% style="text-align:center;">"Girl in the sun."</td> </tr> </table>

### Tuning-Free Long Video-to-Video Generation

<table class="center"> <tr> <td style="text-align:center;" colspan="2"><b>Tuning-Free Long Video-to-Video Generation</b></td> </tr> <tr> <td><img src="./statics/gifs/girl.gif"></td> <td><img src="./statics/gifs/girl-glass.gif"></td> </tr> <tr> <td width=50% style="text-align:center;"> "Girls."</td> <td width=50% style="text-align:center;"> "Girls wearing sunglasses."</td> </tr> </table>

### Long Video Generation with Pretrained Short Text-to-Video Diffusion Model

All the following videos are directly generated with the pre-trained VideoCrafter without additional training.

<table class="center"> <tr> <td style="text-align:center;" colspan="4"><b>Long Video Generation with Pretrained Short Text-to-Video Diffusion Model</b></td> </tr> <tr> <td><img src="./statics/gifs/ride-horse-iso-1.gif"></td> <td><img src="./statics/gifs/ride-horse-2.gif"></td> <td><img src="./statics/gifs/ride-horse-iso-2.gif"></td> <td><img src="./statics/gifs/ride-horse-4.gif"></td> </tr> <tr> <td width=25% style="text-align:center;"> "Astronaut riding a horse." (Isolated)</td> <td width=25% style="text-align:center;">"Astronaut riding a horse." (Gen-L-Video)</td> <td width=25% style="text-align:center;">"Astronaut riding a horse, Loving Vincent Style." (Isolated)</td> <td width=25% style="text-align:center;">"Astronaut riding a horse, Loving Vincent Style." (Gen-L-Video)</td> </tr> <tr> <td><img src="./statics/gifs/monkey-drinking-iso.gif"></td> <td><img src="./statics/gifs/monkey-drinking.gif"></td> <td><img src="./statics/gifs/car-moving-iso.gif"></td> <td><img src="./statics/gifs/car-moving.gif"></td> </tr> <tr> <td width=25% style="text-align:center;">"A monkey is drinking water." (Isolated)</td> <td width=25% style="text-align:center;">"A monkey
