Yanqin Jiang<sup>1</sup>, Li Zhang<sup>2</sup>, Jin Gao<sup>1</sup>, Weiming Hu<sup>1</sup>, Yao Yao<sup>3 ✉</sup> <br> <sup>1</sup>CASIA, <sup>2</sup>Fudan University, <sup>3</sup>Nanjing University
| Project Page | arXiv | Paper | Video (Coming soon) | Data (only input video) | Data (test_dataset) |

In this paper, we present Consistent4D, a novel approach for generating dynamic objects from uncalibrated monocular videos.
Uniquely, we cast 360-degree dynamic object reconstruction as a 4D generation problem, eliminating the need for tedious multi-view data collection and camera calibration. This is achieved by leveraging an object-level 3D-aware image diffusion model as the primary supervision signal for training Dynamic Neural Radiance Fields (DyNeRF). Specifically, we propose a Cascade DyNeRF to facilitate stable convergence and temporal continuity under a supervision signal that is discrete along the time axis. To achieve spatial and temporal consistency, we further introduce an Interpolation-driven Consistency Loss, which minimizes the discrepancy between frames rendered by the DyNeRF and frames interpolated by a pretrained video interpolation model.
Extensive experiments show that our Consistent4D performs competitively against prior-art alternatives, opening up new possibilities for 4D dynamic object generation from monocular videos, while also demonstrating advantages for conventional text-to-3D generation tasks. Our project page is https://consistent4d.github.io/
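To give an intuition for the Interpolation-driven Consistency Loss, here is a minimal, hypothetical sketch in PyTorch. `render` and `interp` stand in for a differentiable DyNeRF renderer and a frozen pretrained video interpolation model (e.g., RIFE); the names and signatures are illustrative, not the actual APIs of this repo:

```python
import torch
import torch.nn.functional as F

def interpolation_consistency_loss(render, interp, cam, t0, t1, t_mid):
    # Render two "anchor" frames from the DyNeRF at times t0 and t1
    # (same camera, so the interpolation model sees a plausible frame pair).
    f0 = render(cam, t0)  # (3, H, W), differentiable w.r.t. DyNeRF params
    f1 = render(cam, t1)

    # Render the intermediate frame we want to supervise.
    f_mid = render(cam, t_mid)

    # Pseudo ground truth from the frozen interpolation model;
    # gradients must not flow into the target.
    with torch.no_grad():
        f_target = interp(f0.detach(), f1.detach(), (t_mid - t0) / (t1 - t0))

    # Penalize the discrepancy between rendered and interpolated frames.
    return F.l1_loss(f_mid, f_target)
```

The same idea applies along the spatial (camera) axis as well; the released implementation differs in its sampling strategy and loss details.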
Recently we found that some works train on Objaverse animated models and adopt the test dataset of Consistent4D. However, Objaverse contains six of the seven animated objects used in our work, so we suggest filtering them out when training on that dataset for a fair comparison. The UIDs of the objects in the test dataset are provided in test_dataset_uid.txt
[2024.03.25] 🎉 Our new work STAG4D is available on arXiv! The results produced by STAG4D are substantially better than those of Consistent4D. Welcome to keep an eye on it! <br>
[2024.01.23] 🎉 All code, including evaluation scripts, is released! Thanks for your interest! (The refactored code seems to generate slightly better results than what we used before. We don't know why, but we are happy to hear it.) <br>
[2024.01.16] 🎉😜 Consistent4D is accepted by ICLR 2024! Thanks to all! Our paper will soon be updated according to the suggestions from the rebuttal phase. <br>
[2023.12.10] The code of Consistent4D is released! The code is refactored and optimized to accelerate training (~2 hours on a V100 GPU now!). For the convenience of quantitative comparison, we provide the test dataset used in our paper (our results on the test dataset are also included). <br>
[2023.11.07] The paper of Consistent4D is available at arXiv. We also provide the input videos used in our paper/project page here. For our results on the input videos, please visit our GitHub project page to download them (see the folder gallery).
The installation is the same as for the original threestudio, so you can skip this step if you have already installed threestudio.
```bash
# Recommend using Anaconda
conda create -n consistent4d python=3.9
conda activate consistent4d

# Clone the repo
git clone https://github.com/yanqinJiang/Consistent4D
cd Consistent4D

# Build the environment
# Install torch: the code is tested with torch1.12.1+cu113
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

# Install other packages
pip install -r requirement.txt

# Prepare Zero123
cd load/zero123
wget https://zero123.cs.columbia.edu/assets/zero123-xl.ckpt

# Prepare the video interpolation module
cp /path/to/flownet/checkpoint ./extern/RIFE/flownet.pkl
```
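After installation, a quick sanity check of the environment (illustrative, not part of the repo) can be run in Python:

```python
import torch

# The code is tested with torch 1.12.1+cu113 (see the install commands above).
print(torch.__version__)          # expect 1.12.1+cu113
print(torch.cuda.is_available())  # expect True on a CUDA-enabled machine
```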
We provide the processed input data used in our paper. If you want to use your own data, please follow the steps below for pre-processing:
The structure of the input data should be as follows:

```
-image_seq_name
    - 0.png
    - 1.png
    - 2.png
    ...
```
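If you start from a raw video, a minimal sketch for dumping frames into this naming scheme could look like the following (using OpenCV; the function and paths are illustrative, and any foreground segmentation that the processed inputs assume is not shown):

```python
import os
import cv2  # pip install opencv-python

def video_to_frames(video_path: str, out_dir: str) -> None:
    """Dump every frame of a video as 0.png, 1.png, ... in out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx}.png"), frame)
        idx += 1
    cap.release()

video_to_frames("my_video.mp4", "./load/demo/my_image_seq")
```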
```bash
# We provide three configs (consistent4d_low_vram.yaml / consistent4d.yaml / consistent4d-4layers.yaml),
# requiring 24G/32G/40G VRAM for training, respectively.
# The results in the paper and project page are produced by the model in the config consistent4d.yaml.
# consistent4d-4layers.yaml is newly added, aiming at better results.
# If you have access to a GPU with enough memory, we highly recommend setting data.resolution_milestones
# in the config to a larger number, e.g., 400, and you will get even better results.
python launch.py --config configs/consistent4d.yaml --train --gpu 0 data.image_seq_path=./load/demo/blooming_rose
```
The video enhancer, as a post-processing step, can only slightly enhance the quality (sometimes it cannot enhance it at all), while it requires a tedious workflow to prepare the training data, so feel free to skip it. Unless specially mentioned, all results (qualitative/quantitative) in our paper and project page are produced without the video enhancer. We will mark the video enhancer as an optional stage in the updated paper. To use the video enhancer:
- Prepare the training data with video_enhancer/enhance_IF.py. Image names should follow {frame_idx}.png, and, starting from image 0, every seq_length images are considered as one group, and they must be consecutive.
- Train the enhancer with video_enhancer/train.sh.
- Test it with video_enhancer/test.sh.

To evaluate, first transform the RGBA GT images to images with a white background (a sketch of this step is given after the evaluation commands below). Then, download the pre-trained model (i3d_pretrained_400.pt) for calculating FVD here. (This link is borrowed from DisCo, and our FVD computation file is a refactoring of their evaluation code. Thanks for their work!) Then, organize the result folder as follows:
```
├── gt
│   ├── object_0
│   │   ├── eval_0
│   │   │   ├── 0.png
│   │   │   └── ...
│   │   ├── eval_1
│   │   │   └── ...
│   │   └── ...
│   ├── object_1
│   │   └── ...
│   └── ...
├── pred
│   ├── object_0
│   │   ├── eval_0
│   │   │   ├── 0.png
│   │   │   └── ...
│   │   ├── eval_1
│   │   │   └── ...
│   │   └── ...
│   ├── object_1
│   │   └── ...
│   └── ...
```
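A small optional helper to sanity-check that gt and pred mirror each other (illustrative, not part of the released scripts):

```python
from pathlib import Path

def check_eval_layout(root: str) -> None:
    """Verify gt/ and pred/ contain the same objects, views, and frame counts."""
    gt, pred = Path(root, "gt"), Path(root, "pred")
    for obj in sorted(p.name for p in gt.iterdir() if p.is_dir()):
        for view in sorted(v.name for v in (gt / obj).iterdir() if v.is_dir()):
            n_gt = len(list((gt / obj / view).glob("*.png")))
            n_pred = len(list((pred / obj / view).glob("*.png")))
            assert n_pred == n_gt, f"{obj}/{view}: {n_pred} pred vs {n_gt} gt frames"
    print("Layout OK")

check_eval_layout("/path/to/results")
```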
Next, run
```bash
cd evaluation

# image-level metrics
python compute_image_level_metrics.py --gt_root /path/to/gt --pred_root /path/to/pred

# video-level metrics
python compute_fvd.py --gt_root /path/to/gt --pred_root /path/to/pred --model_path /path/to/i3d_pretrained_400.pt
```
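For the white-background step mentioned above, a minimal sketch with Pillow could look like this (paths are illustrative):

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

def rgba_to_white_bg(src_dir: str, dst_dir: str) -> None:
    """Composite every RGBA PNG in src_dir onto a white background."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for png in sorted(Path(src_dir).glob("*.png")):
        rgba = Image.open(png).convert("RGBA")
        white = Image.new("RGBA", rgba.size, (255, 255, 255, 255))
        # alpha_composite respects the alpha channel of the foreground
        Image.alpha_composite(white, rgba).convert("RGB").save(out / png.name)

rgba_to_white_bg("/path/to/gt_rgba/object_0/eval_0", "/path/to/gt/object_0/eval_0")
```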
We are interested in continuously improving our work and adding new features, e.g., advanced 4D representations and supervision signals. If you encounter any problems or have suggestions for improvement, please feel free to open an issue. We appreciate your feedback :)
- Increasing the loss weight system.loss.lambda_vif_{idx} or the probability of the loss (data.mode_section in the config) could amplify the effect of ICL. However, too large a weight/probability usually results in over-smoothing. (The spatial and temporal sample intervals in ICL can also be modified via data.random_camera.azimuth_interval_range and data.time_interval; increasing the sample interval will intuitively amplify the effect of ICL too.)
- Increasing the batch size and rendering resolution (data.batch_size and data.resolution_milestones in the config) during DyNeRF training could lead to better results. However, it requires more GPU memory.
- Increasing system.geometry.grid_size also leads to better performance (but do not make the temporal resolution, i.e., the last number in grid_size, larger than the number of frames). Increasing the spatial resolution costs more GPU memory, but increasing the temporal resolution does not.

Our code is based on Threestudio. We thank the authors for their effort in building such a great codebase. <br> The video interpolation model employed in our work is RIFE, which is continuously improved by its authors for real-world applications. Thanks for their great work!
```bibtex
@inproceedings{
  jiang2024consistentd,
  title={Consistent4D: Consistent 360{\textdegree} Dynamic Object Generation from Monocular Video},
  author={Yanqin Jiang and Li Zhang and Jin Gao and Weiming Hu and Yao Yao},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
```