The first curated list of Text-to-3D and Diffusion-to-3D works. Heavily inspired by awesome-NeRF.
02.04.2024 - Began linking to project pages and code
09.02.2024 - Level-one categorization
11.11.2023 - Added tutorial videos
05.08.2023 - Provided citations in BibTeX
06.07.2023 - Created initial list

Zero-Shot Text-Guided Object Generation with Dream Fields, Ajay Jain et al., CVPR 2022 | citation | site | code
CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation, Aditya Sanghi et al., Arxiv 2021 | citation | site | code
PureCLIPNERF: Understanding Pure CLIP Guidance for Voxel Grid NeRF Models, Han-Hung Lee et al., Arxiv 2022 | citation | site | code
SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Yen-Chi Cheng et al., CVPR 2023 | citation | site | code
DreamFusion: Text-to-3D using 2D Diffusion, Ben Poole et al., ICLR 2023 | citation | site | code
Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models, Jiale Xu et al., Arxiv 2022 | citation | site | code
Novel View Synthesis with Diffusion Models, Daniel Watson et al., Arxiv 2022 | citation | site | code
NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views, Dejia Xu et al., Arxiv 2022 | citation | site | code
Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Alex Nichol et al., Arxiv 2022 | citation | site | code
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures, Gal Metzer et al., Arxiv 2023 | citation | site | code
Magic3D: High-Resolution Text-to-3D Content Creation, Chen-Hsuan Lin et al., CVPR 2023 | citation | site | code
RealFusion: 360° Reconstruction of Any Object from a Single Image, Luke Melas-Kyriazi et al., CVPR 2023 | citation | site | code
Monocular Depth Estimation using Diffusion Models, Saurabh Saxena et al., Arxiv 2023 | citation | site | code
SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction, Zhizhuo Zhou et al., CVPR 2023 | citation | site | code
NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion, Jiatao Gu et al., ICML 2023 | citation | site | code
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, Haochen Wang et al., CVPR 2023 | citation | site | code
High-fidelity 3D Face Generation from Natural Language Descriptions, Menghua Wu et al., CVPR 2023 | citation | site | code
TEXTure: Text-Guided Texturing of 3D Shapes, Elad Richardson et al., SIGGRAPH 2023 | citation | site | code
NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors, Congyue Deng et al., CVPR 2023 | citation | site | code
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models, Jamie Wynn et al., CVPR 2023 | citation | site | code
3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, Yuhan Li et al., CVPR 2023 | citation | site | code
DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model, Gwanghyun Kim et al., CVPR 2023 | citation | site | code
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, Zhengyi Wang et al., Arxiv 2023 | citation | site | code
3D-aware Image Generation using 2D Diffusion Models, Jianfeng Xiang et al., Arxiv 2023 | citation | site | code
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior, Junshu Tang et al., ICCV 2023 | citation | site | code
GECCO: Geometrically-Conditioned Point Diffusion Models, Michał J. Tyszkiewicz et al., ICCV 2023 | citation | site | code
Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond, Mohammadreza Armandpour et al., Arxiv 2023 | citation | site | code
Generative Novel View Synthesis with 3D-Aware Diffusion Models, Eric R. Chan et al., Arxiv 2023 | citation | site | code
Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields, Jingbo Zhang et al., Arxiv 2023 | citation | site | code
Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors, Guocheng Qian et al., Arxiv 2023 | citation | site | code
DreamBooth3D: Subject-Driven Text-to-3D Generation, Amit Raj et al., ICCV 2023 | citation | site | code
Zero-1-to-3: Zero-shot One Image to 3D Object, Ruoshi Liu et al., Arxiv 2023 | citation | site | code
ATT3D: Amortized Text-to-3D Object Synthesis, Jonathan Lorraine et al., ICCV 2023 | citation | site | code
Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zibo Zhao et al., Arxiv 2023 | citation | site | code
Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Gene Chou et al., Arxiv 2023 | citation | site | code
HiFA: High-fidelity Text-to-3D with Advanced Diffusion Guidance, Junzhe Zhu et al., Arxiv 2023 | citation | site | code
LERF: Language Embedded Radiance Fields, Justin Kerr et al., Arxiv 2023 | citation | site | code
3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, Junyoung Seo et al., Arxiv 2023 | citation | site | code
MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Shitao Tang et al., Arxiv 2023 | citation | site | code
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization, Minghua Liu et al., Arxiv 2023 | citation | site | code
TextMesh: Generation of Realistic 3D Meshes From Text Prompts, Christina Tsalicoglou et al., Arxiv 2023 | citation | site | code
Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, Xingqian Xu et al., Arxiv 2023 | citation | site | code
SceneScape: Text-Driven Consistent Scene Generation, Rafail Fridman et al., Arxiv 2023 | citation | site | code
CLIP-Mesh: Generating textured meshes from text using pretrained image-text models, Nasir Khalid et al., Arxiv 2023 | citation | site | code
Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models, Lukas Höllein et al., Arxiv 2023 | citation | site | code
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction, Hansheng Chen et al., Arxiv 2023 | citation | site | code