This is the official PyTorch implementation of "Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis".
<a href="https://andrewsonga.github.io">Chonghyuk Song</a>, <a href="https://gengshan-y.github.io">Gengshan Yang</a>, <a href="https://dunbar12138.github.io">Kangle Deng</a>, <a href="https://www.cs.cmu.edu/~junyanz/">Jun-Yan Zhu</a>, <a href="https://www.cs.cmu.edu/~deva/">Deva Ramanan</a> <br> Carnegie Mellon University <br> ICCV 2023
Given a long video of deformable objects captured by a handheld RGBD sensor, Total-Recon renders the scene from novel camera trajectories derived from in-scene motion of actors: (1) egocentric cameras that simulate the point-of-view of a target actor (such as the pet) and (2) 3rd-person (or pet) cameras that follow the actor from behind. Our method also enables (3) 3D video filters that attach virtual 3D assets to the actor. Total-Recon achieves this by reconstructing the geometry, appearance, and root-body and articulated motion of each deformable object in the scene as well as the background.
We plan to release our code in 4 stages.
Before a recent commit, a bug in the code set the default to $\lambda = 1$, where $\lambda$ is the interpolation factor of the EMA filter for updating the object bounds and near-far plane: new state = (1 - $\lambda$) $\times$ signal + $\lambda$ $\times$ old state. This prevented those bounds from being updated at all during training and could result in failed reconstructions. The default value has now been corrected to $\lambda = 0.0$, and the code now correctly updates the bounds and near-far plane during training. Please pull the latest version of the codebase for a bug-free experience.
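Substituting the two extreme values into the update rule makes the fix concrete: with $\lambda = 1$, new state $= 0 \times$ signal $+ 1 \times$ old state $=$ old state, so the bounds stay frozen at their initial values; with $\lambda = 0$, new state $=$ signal, so the bounds track the latest estimate at every update.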
(1) Clone repo (including submodules):
git clone https://github.com/andrewsonga/Total-Recon.git --recursive
# This step is REQUIRED for all subsequent steps!
cd Total-Recon
(2) Install conda env:
conda env create -f misc/totalrecon-cu113.yml
conda activate totalrecon-cu113
(3) Install submodules:
pip install -e third_party/pytorch3d
pip install -e third_party/kmeans_pytorch
python -m pip install detectron2 -f \
https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
(4) Install ffmpeg:
apt-get install ffmpeg
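If you don't have root access for apt-get, ffmpeg can also be installed into the conda environment via conda-forge (an alternative to the command above, not part of the original setup):
conda install -c conda-forge ffmpeg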
(5) Download the pre-trained VCN optical flow model for data preprocessing (instructions are taken from BANMo):
mkdir lasr_vcn
wget https://www.dropbox.com/s/bgsodsnnbxdoza3/vcn_rob.pth -O ./lasr_vcn/vcn_rob.pth
The following steps (1) ~ (4) for downloading, preprocessing, and formatting RGBD sequences pertain only to Total-Recon's dataset. <ins><span style="color: orange;">To apply Total-Recon to your own RGBD videos, please follow the instructions</span></ins> here.
(1) Download and untar the raw data:
bash download_rawdata.sh
# untar raw data
tar -xzvf totalrecon_rawdata.tar.gz
(2) Save the raw data under raw/:
src_dir=totalrecon_rawdata
bash place_rawdata.sh $src_dir
###############################################################
# argv[1]: The directory inside Total-Recon/ where the downloaded raw data is stored
(3) Preprocess raw data (takes a couple of hours per sequence):
Multi-actor sequences:
# e.g.
prefix=humandog-stereo000; gpu=0
bash preprocess/preprocess_rawdata_stereo_maskcamgiven_multiactor.sh $prefix $gpu
###############################################################
# argv[1]: prefix of the preprocessed data folders under "database/DAVIS/JPEGImages/" (minus suffixes such as "-leftcam", "-rightcam", "-human", "-animal", "-bkgd", and "-uncropped")
# argv[2]: gpu number (0, 1, 2, ...)
Uni-actor sequences:
# e.g.
prefix=cat2-stereo000; ishuman='n'; gpu=0
bash preprocess/preprocess_rawdata_stereo_maskcamgiven_uniactor.sh $prefix $ishuman $gpu
###############################################################
# argv[1]: prefix of the preprocessed data folders under "database/DAVIS/JPEGImages/" (minus suffixes such as "-leftcam", "-rightcam", and "-bkgd")
# argv[2]: human or not, where `y` denotes human and `n` denotes quadruped
# argv[3]: gpu number (0, 1, 2, ...)
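Once preprocessing finishes, a quick way to sanity-check the outputs is to list the generated folders (a minimal check; the exact folder names depend on the prefix and the suffixes described above):
# list the preprocessed folders for a given prefix, e.g. cat2-stereo000
ls -d database/DAVIS/JPEGImages/${prefix}*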
(4) [NOT REQUIRED FOR INFERENCE] Format preprocessed data for training:
Multi-actor sequences:
# e.g.
prefix=humandog-stereo000; gpu=0
bash preprocess/format_processeddata_stereo_multiactor.sh $prefix $gpu
###############################################################
# argv[1]: prefix of the preprocessed data folders under "database/DAVIS/JPEGImages/" (minus suffixes such as "-leftcam", "-rightcam", "-human", "-animal", "-bkgd", and "-uncropped")
# argv[2]: gpu number (0, 1, 2, ...)
Uni-actor sequences:
# e.g.
prefix=cat2-stereo000; gpu=0
bash preprocess/format_processeddata_stereo_uniactor.sh $prefix $gpu
###############################################################
# argv[1]: prefix of the preprocessed data folders under "database/DAVIS/JPEGImages/" (minus suffixes such as "-leftcam", "-rightcam", and "-bkgd")
# argv[2]: gpu number (0, 1, 2, ...)
(1) Download the pre-optimized models and untar them:
bash download_models.sh
tar -xzvf totalrecon_models.tar.gz
(2) Appropriately relocate the pre-optimized models:
# Place the pre-optimized models under logdir/
# argv[1]: The directory inside Total-Recon where the downloaded pre-optimized models are stored
src_dir=totalrecon_models
bash place_models.sh $src_dir
To run the 3D video filter and visualize flying embodied-view cameras, purchase and download 3D models in .obj format for (1) the unicorn horn and (2) the Canon camera.
Rename the camera mesh's .obj file to camera.obj, then place camera.obj and the unzipped folder UnicornHorn_OBJ inside mesh_material/.
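For reference, a minimal sketch of this placement step, assuming the assets were downloaded to ~/Downloads (the source paths and archive name are placeholders; substitute your actual download locations):
# rename and relocate the camera mesh (source filename is a placeholder)
mv ~/Downloads/canon_camera.obj mesh_material/camera.obj
# unzip the unicorn horn assets so that mesh_material/UnicornHorn_OBJ/ exists
unzip ~/Downloads/UnicornHorn_OBJ.zip -d mesh_material/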
To train Total-Recon on our provided dataset, run per-object pretraining and joint-finetuning as follows:
# change appropriately (e.g. "humancat-stereo000" or "cat2-stereo000")
prefix=humandog-stereo000
gpus=0,1,2,3 # gpu ids for training
addr=10001 # master port for torch.distributed
bash train_$prefix.sh $gpus $addr
To train Total-Recon on your own videos, run one of the following commands:
# for multi-actor sequences
prefix=humancat-mono000
gpus=0,1,2,3
addr=10001
bash train_multiactor.sh $gpus $addr $prefix
# for uni-actor sequences
prefix=human2-mono000
gpus=0,1,2,3
addr=10001
use_human="" # "" (for human actors) / "no" (for animal actors)
bash train_uniactor.sh $gpus $addr $prefix "$use_human"
Before inference or evaluation can be done, extract the object-level meshes and root-body poses from the trained model. This only needs to be done once per model:
# argv[1]: gpu number (0, 1, 2, ...)
# argv[2]: folder name of the trained model inside logdir/
gpu=0 # (set to the gpu id to use)
seqname=humandog-stereo000-leftcam-jointft # (appropriately rename `seqname`)
bash extract_fgbg.sh $gpu $seqname
Before inference or evaluation can be done, copy the left-right camera registration data from the raw data directory to the trained model directory:
prefix=humandog-stereo000 # (appropriately rename `prefix`)
seqname=$prefix-leftcam-jointft # directory name of trained model
# for uniactor sequences
cp raw/$prefix-leftcam/normrefcam2secondcam.npy logdir/$seqname/
# for multiactor sequences
cp raw/$prefix-human-leftcam/normrefcam2secondcam.npy logdir/$seqname/
(takes a few hours)
The rendered videos will be saved as nvs-fpsview-*.mp4 inside logdir/$seqname/
bash scripts/render_nvs_fgbg_fps.sh $gpu $seqname "$add_args"
<details><summary>per-sequence arguments <code>(add_args)</code></summary>
seqname=humandog-stereo000-leftcam-jointft
add_args="--fg_obj_index 1 --asset_obj_index 1 --fg_normalbase_vertex_index 96800 --fg_downdir_vertex_index 1874 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 15 --asset_offset_z -0.05 --scale_fps 0.50"
seqname=humancat-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 150324 --fg_downdir_vertex_index 150506 --asset_scale 0.003 --firstpersoncam_offset_y 0.05 --firstpersoncam_offsetabt_xaxis 25 --firstpersoncam_offsetabt_yaxis 15 --firstpersoncam_offsetabt_zaxis 5 --fix_frame 50 --scale_fps 0.75"
seqname=cat1-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 204713 --fg_downdir_vertex_index 204830 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_yaxis -20 --firstpersoncam_offsetabt_zaxis 10 --asset_offset_z -0.05 --scale_fps 0.75"
seqname=cat1-stereo001-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 34175 --fg_downdir_vertex_index 6043 --asset_scale 0.003 --firstpersoncam_offset_z 0.13 --firstpersoncam_offsetabt_xaxis 35 --firstpersoncam_offsetabt_yaxis -20 --firstpersoncam_offsetabt_zaxis -15 --scale_fps 0.75"
seqname=cat2-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 338844 --fg_downdir_vertex_index 166318 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 10 --firstpersoncam_offsetabt_yaxis 10 --asset_offset_z -0.05 --scale_fps 0.75"
seqname=cat2-stereo001-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 308732 --fg_downdir_vertex_index 309449 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 20 --firstpersoncam_offsetabt_yaxis 20 --firstpersoncam_offsetabt_zaxis -20 --scale_fps 0.75"
seqname=cat3-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 105919 --fg_downdir_vertex_index 246367 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 20 --firstpersoncam_offsetabt_zaxis 10 --asset_offset_z -0.05 --scale_fps 0.75"
seqname=dog1-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 159244 --fg_downdir_vertex_index 93456 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 35 --firstpersoncam_offsetabt_yaxis 30 --firstpersoncam_offsetabt_zaxis 20 --asset_offset_z -0.05 --scale_fps 0.75"
seqname=dog1-stereo001-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 227642 --fg_downdir_vertex_index 117789 --asset_scale 0.003 --firstpersoncam_offset_z 0.035 --firstpersoncam_offsetabt_xaxis 45 --scale_fps 0.75"
seqname=human1-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 161978 --fg_downdir_vertex_index 37496 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 10 --firstpersoncam_offsetabt_yaxis 10 --asset_offset_z -0.05 --scale_fps 0.75"
seqname=human2-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --fg_normalbase_vertex_index 114756 --fg_downdir_vertex_index 114499 --asset_scale 0.003 --firstpersoncam_offset_z 0.05 --firstpersoncam_offsetabt_xaxis 15 --firstpersoncam_offsetabt_yaxis 20 --asset_offset_z -0.05 --scale_fps 0.75"
</details>
<br>
(takes a few hours)
The rendered videos will be saved as nvs-tpsview-*.mp4 inside logdir/$seqname/
bash scripts/render_nvs_fgbg_tps.sh $gpu $seqname "$add_args"
<details><summary>per-sequence arguments <code>(add_args)</code></summary>
seqname=humandog-stereo000-leftcam-jointft
add_args="--fg_obj_index 1 --asset_obj_index 1 --thirdpersoncam_offset_y 0.25 --thirdpersoncam_offset_z -0.80 --asset_scale 0.003 --scale_tps 0.70"
seqname=humancat-stereo000-leftcam-jointft
add_args="--fg_obj_index 0 --asset_obj_index 0 --thirdpersoncam_fgmeshcenter_elevate_y 1.00 --thirdpersoncam_offset_x -0.05 --thirdpersoncam_offset_y 0.15

