Awesome-Controllable-T2I-Diffusion-Models

A survey of research progress on controllable text-to-image diffusion models

This project collects cutting-edge research on controllable generation with text-to-image diffusion models. It covers personalization, spatial control, advanced text-conditioned generation, and other directions, and also summarizes multi-condition and universal controllable generation methods. It offers researchers and developers a comprehensive view of the latest progress on controllable T2I diffusion models and aims to help advance the field.

<!-- # <p align=center>`awesome gan-inversion`</p> -->

Awesome Maintenance PR's Welcome Survey Paper

<br /> <p align="center"> <h1 align="center">Awesome Controllable T2I Diffusion Models</h1> </p> <br />

We focus on how to control text-to-image diffusion models with novel conditions.

For more detailed information, please refer to our survey paper: Controllable Generation with Text-to-Image Diffusion Models: A Survey

<p align="center"> <img src="assets/count.png" alt="img" width="49%" /> <img src="assets/controllable_generation.png" alt="img" width="49%" /> </p>
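As background for the survey's scope, the minimal sketch below shows one common form of controllable generation: spatial control via a ControlNet adapter attached to a frozen Stable Diffusion backbone. It assumes the Hugging Face `diffusers` library and the public `lllyasviel/sd-controlnet-canny` and `runwayml/stable-diffusion-v1-5` checkpoints (none of which ship with this repository); the edge-map path is a hypothetical placeholder.

```python
# Minimal sketch: conditioning a frozen T2I backbone on a spatial control signal.
# Assumes the `diffusers` library and the referenced public checkpoints are available.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a spatial-condition adapter (Canny edges) and attach it to Stable Diffusion v1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The text prompt and the novel condition (a pre-computed Canny edge map) jointly guide sampling.
canny_edges = load_image("path/to/canny_edges.png")  # hypothetical placeholder path
image = pipe(
    "a red sports car on a mountain road",
    image=canny_edges,
    num_inference_steps=30,
).images[0]
image.save("controlled_sample.png")
```

Personalization methods listed below (e.g. subject-driven generation) follow the same idea but inject the condition through learned embeddings, adapters, or fine-tuned weights rather than a spatial map.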

💖 Citation

If you find value in our survey paper or curated collection, please consider citing our work and starring our repo to support us.

@article{cao2024controllable,
  title={Controllable Generation with Text-to-Image Diffusion Models: A Survey},
  author={Pu Cao and Feng Zhou and Qing Song and Lu Yang},
  journal={arXiv preprint arXiv:2403.04279},
  year={2024}
}

🎁 How to contribute to this repository?

The following content is generated from our database. To add a new paper, please open an issue with the information below so we can update the database (please do not submit a PR directly).

1. Paper title
2. arXiv ID (if any)
3. Publication status (if any)

🌈 Contents

<!-- start -->

🚀Generation with Specific Condition

🍇Personalization

🍉Subject-Driven Generation

PartCraft: Crafting Creative Objects by Parts.<br> Kam Woh Ng, Xiatian Zhu, Yi-Zhe Song, Tao Xiang.<br> ECCV 2024. [PDF]

ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance.<br> Jiannan Huang, Jun Hao Liew, Hanshu Yan, Yuyang Yin, Yao Zhao, Yunchao Wei.<br> arXiv 2024. [PDF]

Personalized Residuals for Concept-Driven Text-to-Image Generation.<br> Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz.<br> arXiv 2024. [PDF]

Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance.<br> Kelvin C. K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang.<br> arXiv 2024. [PDF]

MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation.<br> Xujie Zhang, Ente Lin, Xiu Li, Yuxuan Luo, Michael Kampffmeyer, Xin Dong, Xiaodan Liang.<br> arXiv 2024. [PDF]

Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting.<br> Weili Zeng, Yichao Yan, Qi Zhu, Zhuo Chen, Pengzhi Chu, Weiming Zhao, Xiaokang Yang.<br> arXiv 2024. [PDF]

CAT: Contrastive Adapter Training for Personalized Image Generation.<br> Jae Wan Park, Sang Hyun Park, Jun Young Koh, Junha Lee, Min Song.<br> arXiv 2024. [PDF]

MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation.<br> Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.<br> arXiv 2024. [PDF]

U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation.<br> You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li.<br> arXiv 2024. [PDF]

Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation.<br> Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter.<br> arXiv 2024. [PDF]

Attention Calibration for Disentangled Text-to-Image Personalization.<br> Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang.<br> arXiv 2024. [PDF]

Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization.<br> Jimyeong Kim, Jungwon Park, Wonjong Rhee.<br> arXiv 2024. [PDF]

MM-Diff: High-Fidelity Image Personalization via Multi-Modal Condition Integration.<br> Zhichao Wei, Qingkun Su, Long Qin, Weizhi Wang.<br> arXiv 2024. [PDF]

Generative Active Learning for Image Synthesis Personalization.<br> Xulu Zhang, Wengyu Zhang, Xiao-Yong Wei, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, Qing Li.<br> arXiv 2024. [PDF]

Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization.<br> Yeji Song, Jimyeong Kim, Wonhark Park, Wonsik Shin, Wonjong Rhee, Nojun Kwak.<br> arXiv 2024. [PDF]

Tuning-Free Image Customization with Image and Text Guidance.<br> Pengzhi Li, Qiang Nie, Ying Chen, Xi Jiang, Kai Wu, Yuhuan Lin, Yong Liu, Jinlong Peng, Chengjie Wang, Feng Zheng.<br> arXiv 2024. [PDF]

Fast Personalized Text-to-Image Syntheses With Attention Injection.<br> Yuxuan Zhang, Yiren Song, Jinpeng Yu, Han Pan, Zhongliang Jing.<br> arXiv 2024. [PDF]

OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models.<br> Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, Wenhan Luo.<br> arXiv 2024. [PDF]

StableGarment: Garment-Centric Generation via Stable Diffusion.<br> Rui Wang, Hailong Guo, Jiaming Liu, Huaxia Li, Haibo Zhao, Xu Tang, Yao Hu, Hao Tang, Peipei Li.<br> arXiv 2024. [PDF]

Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation.<br> Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu.<br> arXiv 2024. [PDF]

FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation.<br> Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen.<br> arXiv 2024. [PDF]

RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization.<br> Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang.<br> arXiv 2024. [PDF]

DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models.<br> Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.<br> arXiv 2024. [PDF]

Direct Consistency Optimization for Compositional Text-to-Image Personalization.<br> Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin.<br> arXiv 2024. [PDF]

ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image.<br> Yan Hong, Jianfu Zhang.<br> arXiv 2024. [PDF]

Visual Concept-driven Image Generation with Text-to-Image Diffusion Model.<br> Tanzila Rahman, Shweta Mahajan, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Leonid Sigal.<br> arXiv 2024. [PDF]

Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation.<br> Junjie Shentu, Matthew Watson, Noura Al Moubayed.<br> arXiv 2024. [PDF]

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization.<br> Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang.<br> CVPR 2024. [PDF]

SeFi-IDE: Semantic-Fidelity Identity Embedding for Personalized Diffusion-Based Generation.<br> Yang Li, Songlin Yang, Wei Wang, Jing Dong.<br> arXiv 2024. [PDF]

Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization.<br> Henglei Lv, Jiayu Xiao, Liang Li, Qingming Huang.<br> arXiv 2024. [PDF]

Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding.<br> Jianxiang Lu, Cong Xie, Hui Guo.<br> arXiv 2024. [PDF]

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models.<br> Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik.<br> arXiv 2024. [PDF]

PALP: Prompt Aligned Personalization of Text-to-Image Models.<br> Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir.<br> arXiv 2024. [PDF]

Cross Initialization for Personalized Text-to-Image Generation.<br> Lianyu Pang, Jian Yin, Haoran Xie, Qiping Wang, Qing Li, Xudong Mao.<br> CVPR 2024. [PDF]

DreamTuner: Single Image is Enough for Subject-Driven Generation.<br> Miao Hua, Jiawei Liu, Fei Ding, Wei Liu, Jie Wu, Qian He.<br> arXiv 2023. [PDF]

Decoupled Textual Embeddings for Customized Image Generation.<br> Yufei Cai, Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hu Han, Wangmeng Zuo.<br> arXiv 2023. [PDF]

Compositional Inversion for Stable Diffusion Models.<br> Xulu Zhang, Xiao-Yong Wei, Jinlin Wu, Tianyi Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li.<br> AAAI 2024. [PDF]

Customization Assistant for Text-to-image Generation.<br> Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun.<br> CVPR 2024. [PDF]

VideoBooth: Diffusion-based Video Generation with Image Prompts.<br> Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu.<br> arXiv 2023. [PDF]

HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models.<br> Zhonghao Wang, Wei Wei, Yang Zhao, Zhisheng Xiao, Mark Hasegawa-Johnson, Humphrey Shi, Tingbo Hou.<br> arXiv 2023. [PDF]

VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model.<br> Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Zuxuan Wu, Hang Xu, Yu-Gang Jiang.<br> arXiv 2023.
