DeepImage-an-Image-to-Image-technology


A powerful and diverse collection of image generation and translation technologies

DeepImage is a comprehensive image generation and translation project that covers several advanced algorithms such as pix2pixHD, pix2pix and CycleGAN. The project provides image generation demos, theoretical research material and practical guides, covering generative adversarial network (GAN) techniques from the basics to the state of the art. DeepImage gives researchers and developers a complete platform for learning and experimentation, helping them explore the many possibilities of image generation and translation.



Chinese Version | English Version

This repository contains the pix2pixHD algorithm (proposed by NVIDIA) and, more importantly, the general image generation theory and practical research behind it.

This resource includes TensorFlow 2 (PyTorch | PaddlePaddle) implementations of image generation models such as pix2pix, CycleGAN, UGATIT, DCGAN, SinGAN, VAE, ALAE, mGANprior and StarGAN-v2, which can be used to systematically learn Generative Adversarial Networks (GANs).


Content of this resource

  1. What is DeepImage?
  2. Fake Image Generation and Image-to-Image Demo
  3. DeepImage Algorithm: Normal to Pornography Image
  4. NSFW: Pornography to Normal Image, Pornographic Image Detection
  5. GAN Image Generation Theoretical Research
  6. GAN Image Generation Practice Research
  7. DeepImage to DeepFakes
  8. Future

What is DeepImage?

DeepImage uses a slightly modified version of the pix2pixHD GAN architecture (quoted from DeepImage_official). pix2pixHD is a general-purpose Image2Image technology proposed by NVIDIA. DeepImage itself is clearly a misuse of artificial intelligence technology, but the underlying Image2Image technology is valuable for researchers and developers working in other fields such as fashion, film and visual effects.


Fake Image Generation Demo

This section provides a fake image generation demo that you can use as you wish. The images are fake faces generated by StyleGAN, so there are no copyright issues. Note: each time you refresh the page a new fake image is generated, so remember to save the ones you want to keep.

Image-to-Image Demo

This section provides an Image-to-Image demo: black-and-white stick figures are converted into colorful faces, cats, shoes and handbags. DeepImage software mainly uses Image-to-Image technology, which in theory can convert the images you supply into any image you want. You can experience Image-to-Image technology in your browser by clicking the demo links below.

Try Image-to-faces Demo

Try Image-to-Image Demo

An example of using this demo is as follows:

In the box on the left, draw a cat as you imagine it, then click the process button; the model outputs a generated cat.


:underage: DeepImage Algorithm

DeepImage is pornographic software and is forbidden to minors. If you are not interested in DeepImage itself, you can skip this section and go to the general Image-to-Image theory and practice in the following chapters.

DeepImage_software_itself contents:

  1. Official DeepImage algorithm (based on PyTorch)
  2. DeepImage software usage workflow and an evaluation of its advantages and disadvantages.

:+1: NSFW

Recognition and conversion of five categories of images [porn, hentai, sexy, natural, drawings]: a correct application of image-to-image technology.

NSFW (Not Safe/Suitable For Work) is a large-scale image dataset containing five categories of images [porn, hentai, sexy, natural, drawings]. Here, CycleGAN is used to convert between categories, for example porn -> natural.

  1. Click to try pornographic image detection Demo
  2. Click Start NSFW Research
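
As a rough illustration of the detection side, a five-class classifier can be built with transfer learning. The sketch below assumes a MobileNetV2 backbone and an `nsfw_data/{porn,hentai,sexy,natural,drawings}` directory layout; the detector behind the demo above may use a different model or dataset layout.

```python
# Minimal sketch of a five-class NSFW classifier using transfer learning.
# Assumes images organized as nsfw_data/{porn,hentai,sexy,natural,drawings}/*.jpg;
# the repository's actual detector may differ.
import tensorflow as tf

IMG_SIZE = (224, 224)
CLASSES = ["drawings", "hentai", "natural", "porn", "sexy"]

train_ds = tf.keras.utils.image_dataset_from_directory(
    "nsfw_data", image_size=IMG_SIZE, batch_size=32, label_mode="int")

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; fine-tune later if needed

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```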

Image Generation Theoretical Research

This section describes DeepImage-related AI/deep learning theory (especially computer vision) research. If you like reading papers and keeping up with the latest research, enjoy it.

  1. Click here to systematically understand GANs
  2. Click here for a systematic collection of image-to-image papers

1. Pix2Pix

Result

Image-to-Image Translation with Conditional Adversarial Networks, proposed by UC Berkeley, is a general solution that uses conditional adversarial networks for image-to-image translation problems.
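
The core of the paper is combining a conditional adversarial loss with an L1 reconstruction term. A minimal TensorFlow 2 sketch of that objective, assuming `generator` and `discriminator` (e.g. a U-Net and a PatchGAN) are defined elsewhere:

```python
# Pix2Pix objective: conditional GAN loss + lambda * L1, as described in the paper.
# `generator` and `discriminator` are assumed to be Keras models defined elsewhere.
import tensorflow as tf

LAMBDA = 100  # weight of the L1 term used in the paper
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_output, fake_image, target_image):
    adv = bce(tf.ones_like(disc_fake_output), disc_fake_output)  # fool the discriminator
    l1 = tf.reduce_mean(tf.abs(target_image - fake_image))       # stay close to the target
    return adv + LAMBDA * l1

def discriminator_loss(disc_real_output, disc_fake_output):
    real = bce(tf.ones_like(disc_real_output), disc_real_output)   # real (input, target) pairs -> 1
    fake = bce(tf.zeros_like(disc_fake_output), disc_fake_output)  # fake (input, G(input)) pairs -> 0
    return real + fake
```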

<details> <summary>View more paper studies (Click the black arrow on the left to expand)</summary>

2. Pix2PixHD

DeepImage mainly uses this Image-to-Image (Pix2PixHD) technology.

Result

Generate high-resolution images from a semantic map. A semantic map is a color image in which different color blocks represent different kinds of objects, such as pedestrians, cars, traffic signs and buildings. Pix2PixHD takes a semantic map as input and produces a high-resolution, photorealistic image. Most previous techniques could only produce rough, low-resolution images that do not look real; this research produces images at 2048x1024 resolution, very close to full-HD photos.
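
One ingredient behind the jump to high resolution is judging the output at several scales with multiple discriminators. A hedged sketch of that idea (the PatchGAN discriminators themselves are assumed to be built elsewhere; this is illustrative, not the official implementation):

```python
# Multi-scale discriminators, one of the ideas behind Pix2PixHD's high-resolution output:
# identical PatchGAN discriminators judge the (semantic map, image) pair at full,
# half and quarter resolution. Illustrative sketch, not the official implementation.
import tensorflow as tf

def multiscale_discriminator_outputs(discriminators, semantic_map, image):
    """Run each discriminator on a progressively downsampled (map, image) pair."""
    x = tf.concat([semantic_map, image], axis=-1)  # condition on the semantic map
    outputs = []
    for d in discriminators:
        outputs.append(d(x))
        x = tf.nn.avg_pool2d(x, ksize=3, strides=2, padding="SAME")  # next, coarser scale
    return outputs
```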

3. CycleGAN

Result

CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domain. This opens up the possibility to do a lot of interesting tasks like photo-enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset.
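
The cycle consistency loss itself fits in a few lines. A minimal TensorFlow 2 sketch, assuming the two generators `generator_g` (X -> Y) and `generator_f` (Y -> X) are defined elsewhere:

```python
# Cycle consistency: translating to the other domain and back should recover the input.
# `generator_g` (X -> Y) and `generator_f` (Y -> X) are assumed Keras models.
import tensorflow as tf

LAMBDA = 10  # weight commonly used for the cycle term

def cycle_consistency_loss(real_x, real_y, generator_g, generator_f):
    cycled_x = generator_f(generator_g(real_x))  # X -> Y -> X
    cycled_y = generator_g(generator_f(real_y))  # Y -> X -> Y
    loss = tf.reduce_mean(tf.abs(real_x - cycled_x)) + \
           tf.reduce_mean(tf.abs(real_y - cycled_y))
    return LAMBDA * loss
```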

4. UGATIT

Result

UGATIT is a novel method for unsupervised image-to-image translation that incorporates a new attention module and a new learnable normalization function in an end-to-end manner. UGATIT can handle both image translations that require holistic changes and those that require large shape changes. It can be seen as an enhanced version of CycleGAN and a more general, more capable image translation framework.
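
The learnable normalization mentioned above is AdaLIN, which blends instance and layer normalization with a learned weight. An illustrative Keras-layer sketch (not the official implementation):

```python
# Adaptive Layer-Instance Normalization (AdaLIN): a learned rho in [0, 1] blends
# instance-normalized and layer-normalized features, then gamma/beta (predicted from
# the attention features by an MLP) scale and shift the result. Illustrative sketch.
import tensorflow as tf

class AdaLIN(tf.keras.layers.Layer):
    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def build(self, input_shape):
        channels = input_shape[0][-1]
        self.rho = self.add_weight(
            name="rho", shape=(channels,), initializer="ones",
            constraint=lambda w: tf.clip_by_value(w, 0.0, 1.0))

    def call(self, inputs):
        x, gamma, beta = inputs          # x: (b, h, w, c); gamma/beta: (b, c)
        in_mean, in_var = tf.nn.moments(x, axes=[1, 2], keepdims=True)     # instance-norm stats
        ln_mean, ln_var = tf.nn.moments(x, axes=[1, 2, 3], keepdims=True)  # layer-norm stats
        x_in = (x - in_mean) / tf.sqrt(in_var + self.eps)
        x_ln = (x - ln_mean) / tf.sqrt(ln_var + self.eps)
        x_hat = self.rho * x_in + (1.0 - self.rho) * x_ln
        return x_hat * gamma[:, None, None, :] + beta[:, None, None, :]
```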

5. StyleGAN

Result

Source A + Source B (Style) = ?

StyleGAN can not only generate fake images (Source A and Source B), but also combine the content of Source A and Source B at different style levels, as shown in the following table.

| Style level (from Source B) | From Source A | From Source B |
| --- | --- | --- |
| Coarse styles | all colors (eyes, hair, lighting) and finer facial features come from Source A | inherits high-level attributes from Source B, such as pose, general hair style, face shape and glasses |
| Middle styles | pose, general face shape and glasses come from Source A | inherits mid-level facial features from Source B, such as hair style and open/closed eyes |
| Fine styles | the main facial content comes from Source A | inherits fine details from Source B, such as color scheme and microstructure |
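
Conceptually, this style mixing swaps the per-layer style vectors of two latent codes at a chosen range of synthesis layers. A hedged sketch, where `mapping` and `synthesis` stand in for a pre-trained StyleGAN's two networks (assumed to be loaded elsewhere) and the layer split is illustrative:

```python
# Style mixing sketch: combine the per-layer style vectors (w) of two sources.
# `mapping` and `synthesis` are assumed pre-trained models; layer ranges are illustrative.
import tensorflow as tf

NUM_LAYERS = 18  # synthesis layers of a 1024x1024 StyleGAN generator

def mix_styles(z_a, z_b, mapping, synthesis, layers_from_b):
    """Layers listed in `layers_from_b` take Source B's w; all others take Source A's."""
    w_a = mapping(z_a)  # (batch, w_dim) intermediate latent for Source A
    w_b = mapping(z_b)
    w_per_layer = [w_b if i in layers_from_b else w_a for i in range(NUM_LAYERS)]
    w_stack = tf.stack(w_per_layer, axis=1)  # (batch, NUM_LAYERS, w_dim)
    return synthesis(w_stack)

# Coarse layers (roughly 0-3) from Source B give B's pose, hair style and face shape:
# mixed = mix_styles(z_a, z_b, mapping, synthesis, layers_from_b=set(range(0, 4)))
```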

StyleGAN2

Without increasing StyleGAN's computational cost, StyleGAN2 removes the image artifacts produced by StyleGAN and obtains high-quality images with better detail, setting a new SOTA for unconditional image modeling tasks.

6. Image Inpainting

Result

As shown in the Image_Inpainting(NVIDIA_2018).mp4 video, you only need to roughly paint over the unwanted content in the image; even if the painted region is very irregular, NVIDIA's model can "restore" the image by filling the blank with very realistic content. It can be described as one-click photo editing with "no Photoshop traces". The study comes from a team led by Guilin Liu at NVIDIA, who published a deep learning method that can edit images or reconstruct corrupted images, even when the images are worn or have lost pixels. This was the state-of-the-art approach as of 2018.
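
The method is built on partial convolutions, which convolve only over valid pixels, re-normalize by the number of valid pixels in each window, and update the hole mask layer by layer. An illustrative sketch (not the official implementation):

```python
# Partial convolution, the building block of NVIDIA's 2018 inpainting model:
# the hole region (mask == 0) is ignored, outputs are re-normalized by the fraction of
# valid pixels under each window, and the mask is updated so holes shrink layer by layer.
import tensorflow as tf

class PartialConv2D(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size):
        super().__init__()
        self.conv = tf.keras.layers.Conv2D(filters, kernel_size,
                                           padding="same", use_bias=False)
        self.kernel_size = kernel_size

    def call(self, inputs):
        image, mask = inputs  # mask: (batch, h, w, 1) float32, 1 = valid pixel, 0 = hole
        features = self.conv(image * mask)
        window = tf.ones((self.kernel_size, self.kernel_size, 1, 1))
        valid = tf.nn.conv2d(mask, window, strides=1, padding="SAME")  # valid count per window
        scale = tf.math.divide_no_nan(float(self.kernel_size ** 2), valid)
        new_mask = tf.cast(valid > 0, mask.dtype)
        return features * scale, new_mask
```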

7. SinGAN

ICCV2019 Best paper - Marr prize

Result

We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.

8. ALAE

Result

Although autoencoder networks have been studied extensively, the issues of whether they have the same generative power as GANs, or learn disentangled representations, have not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements in GAN training procedures.

9. mGANprior

Result

Despite the success of Generative Adversarial Networks (GANs) in image synthesis, applying trained GAN models to real image processing remains challenging. Previous methods typically invert a target image back to the latent space either by back-propagation or by learning an additional encoder. However, the reconstructions from both of the methods are far from ideal. In this work, we propose a novel approach, called mGANprior, to incorporate the well-trained GANs as effective prior to a variety of image processing tasks.
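
The back-propagation inversion that mGANprior builds on can be sketched as optimizing a latent code to reconstruct the target image; mGANprior then composes multiple such codes with adaptive channel importance. A minimal single-code sketch, assuming a pre-trained `generator` mapping z to an image:

```python
# GAN inversion by back-propagation: optimize a latent code so the generator's output
# matches a target image. Single-code sketch only; mGANprior uses multiple codes.
# `generator` is assumed to be a pre-trained Keras model.
import tensorflow as tf

def invert_image(generator, target_image, z_dim=512, steps=1000, lr=0.01):
    z = tf.Variable(tf.random.normal([1, z_dim]))
    optimizer = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            reconstruction = generator(z, training=False)
            loss = tf.reduce_mean(tf.square(reconstruction - target_image))  # pixel MSE
        grads = tape.gradient(loss, [z])
        optimizer.apply_gradients(zip(grads, [z]))
    return z, loss
```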

10. StarGAN v2

Result

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines.

11. DeepFaceDrawing

Result

Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.

</details>

Image Generation Practice Research

These models are based on the latest TensorFlow 2 implementations.

This section covers DeepImage-related AI/deep learning (especially computer vision) code practice. If you like to experiment, enjoy it.

1. Pix2Pix

Use the Pix2Pix model (conditional adversarial networks) to convert black-and-white stick figures into color images, flat house sketches into realistic buildings, and aerial photos into maps.

Click Start Experience 1

2. Pix2PixHD

Under development... For now, you can use the official implementation.

3. CycleGAN

The CycleGAN model is used to implement four functions: photo style transfer, photo enhancement, changing the season of a landscape, and object transfiguration.

Click Start Experience 3

4. DCGAN

DCGAN is used for random-noise-to-image generation tasks, such as face generation.

Click Start Experience 4
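
For reference, a minimal DCGAN-style generator looks like the following; the notebook behind the link may use different image sizes or datasets.

```python
# Minimal DCGAN-style generator: a random 100-d vector is upsampled to a 64x64 RGB image
# with transposed convolutions, BatchNorm and ReLU, and a tanh output. Illustrative sketch.
import tensorflow as tf
from tensorflow.keras import layers

def build_dcgan_generator(z_dim=100):
    return tf.keras.Sequential([
        layers.Dense(4 * 4 * 512, use_bias=False, input_shape=(z_dim,)),
        layers.Reshape((4, 4, 512)),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(256, 4, strides=2, padding="same", use_bias=False),  # 8x8
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same", use_bias=False),  # 16x16
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", use_bias=False),   # 32x32
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"), # 64x64 RGB
    ])

# fake_images = build_dcgan_generator()(tf.random.normal([16, 100]))
```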

5. Variational Autoencoder (VAE)

VAE is used for random-noise-to-image generation tasks, such as face generation.

Click Start Experience 5
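
The essential pieces of a VAE are the reparameterization trick and the reconstruction-plus-KL loss. A minimal sketch, assuming `encoder` and `decoder` Keras models are defined elsewhere:

```python
# Core of a VAE: the encoder outputs a mean and log-variance, the reparameterization
# trick samples a latent code, and the loss is reconstruction + KL divergence.
# `encoder` and `decoder` are assumed Keras models; illustrative sketch only.
import tensorflow as tf

def reparameterize(mean, logvar):
    eps = tf.random.normal(shape=tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps  # z = mu + sigma * eps

def vae_loss(x, encoder, decoder):
    mean, logvar = encoder(x)
    z = reparameterize(mean, logvar)
    x_hat = decoder(z)
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=[1, 2, 3]))
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1 + logvar - tf.square(mean) - tf.exp(logvar), axis=1))
    return recon + kl
```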

6. Neural style transfer

Use VGG19 to implement neural style transfer.
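
A compact sketch of the VGG19-based losses (layer choices follow the common Gatys-style setup; the notebook may differ):

```python
# Neural style transfer in brief: VGG19 features of the generated image are pushed toward
# the content image's features (content loss) and toward the Gram matrices of the style
# image's features (style loss).
import tensorflow as tf

CONTENT_LAYERS = ["block5_conv2"]
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]

def build_feature_extractor():
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in STYLE_LAYERS + CONTENT_LAYERS]
    return tf.keras.Model(vgg.input, outputs)

def gram_matrix(features):
    # Correlations between feature channels, averaged over all spatial positions.
    result = tf.einsum("bijc,bijd->bcd", features, features)
    num_positions = tf.cast(tf.shape(features)[1] * tf.shape(features)[2], tf.float32)
    return result / num_positions

def style_loss(generated_features, style_features):
    return tf.add_n([
        tf.reduce_mean(tf.square(gram_matrix(g) - gram_matrix(s)))
        for g, s in zip(generated_features, style_features)]) / len(STYLE_LAYERS)
```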
