unet.cu

TL;DR:

  • UNet diffusion model training written in pure C++/CUDA (only unconditional diffusion right now).
  • Currently end to end training runs at about 40% the speed of PyTorch with torch.compile. The following are benchmarks on one RTX 4090 GPU:
| Setup | One full training loop (ms) |
| --- | --- |
| This repo | 142.44 |
| PyTorch | 66.73 |
| PyTorch with torch.compile | 59.20 |

Quick start

To train a diffusion model in CUDA with some sample images from ImageNet 64x64, run the following:

```bash
gunzip data/elephant_train.bin.gz # prepare the data
python train_unet.py --init_model_only True # need to initialize model weights via python
make train_unet
./train_unet
```

To train the model with your own data, you need to create a .bin file with your data first:

```bash
python prepare_data.py --data_dir YOUR_DATA_DIR --output_name YOUR_BINARY_DATA_FILENAME.bin
# now run training, assuming you have already initialized the model as above
./train_unet --data_file YOUR_BINARY_DATA_FILENAME.bin
```

The PyTorch training code is essentially taken from the guided-diffusion repo. To run PyTorch training, do:

```bash
python train_unet.py --data_dir YOUR_DATA_DIR # use --compile 0 if you don't want to call torch.compile() on the model
```

The CUDA training loop will save model weights in .bin files. To generate new images with model weights saved in either .bin or .pt files, run:

```bash
python generate.py --model_filename YOUR_MODEL_WEIGHTS_FILENAME
```

Introduction

Inspired by Andrej Karpathy's llm.c, I built a UNet from scratch in C/CUDA. The goal of the project is to learn the concepts in llm.c, and to reach for PyTorch's performance with our CUDA implementation. I chose the UNet because it is a key architecture for diffusion models, and I will do some simple diffusion model training with it.

Diffusion model training is quite sophisticated nowadays. Since this project is focused on learning CUDA as opposed to building the best diffusion model, I prioritized simplicity over performance, and followed the implementation from the paper Diffusion Models Beat GANs on Image Synthesis. Currently the UNet only supports unconditioned diffusion training. I also did not reproduce all the model configurations from the paper; the details of the differences will be explained in the section on the architecture.

Here are some images generated with our CUDA implementation. The model is trained on elephant images from ImageNet 64x64 without class-conditioning. The model is heavily overfitting the training set right now, but at least this confirms that training works.

<p align="center"> <img src="assets/cuda_sample1.jpg" alt="Sample 1" width="20%" /> <img src="assets/cuda_sample2.jpg" alt="Sample 2" width="20%" /> <img src="assets/cuda_sample3.jpg" alt="Sample 3" width="20%" /> </p>

The Github repository is organized as follows:

  • The dev/ directory contains all different kernels and tests written during development.
    • Most neural network layers have two corresponding files: a .cu file (e.g. groupnorm.cu), which contains different CUDA kernels for a layer, and a .py file (e.g. groupnorm.py), which contains an identical PyTorch implementation of the same layer.
    • We check the correctness of the CUDA kernels by verifying that their forward and backward passes produce the same outputs as the ground-truth PyTorch versions, up to floating point errors (a sketch of such a check follows this list).
  • train_unet.cu is a single file with the full diffusion model training code (~ 5000 lines). We take the best kernels from dev/ and copy them here. The file also contains things like the data loader and AdamW.
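
To give a flavor of these tests, here is a minimal sketch of a tolerance check, with assumed names rather than the repo's exact helper: the CUDA kernel's output is copied back to the host and compared element-wise against the reference values produced by PyTorch.

```cuda
// Hypothetical sketch: compare a CUDA kernel's output (on the device) against a
// reference buffer (on the host) within a floating point tolerance.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

bool check_against_reference(const float* d_out, const float* h_ref, int n, float tol) {
    float* h_out = (float*)malloc(n * sizeof(float));
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    bool ok = true;
    for (int i = 0; i < n; i++) {
        if (fabsf(h_out[i] - h_ref[i]) > tol) {
            printf("mismatch at index %d: got %f, expected %f\n", i, h_out[i], h_ref[i]);
            ok = false;
            break;
        }
    }
    free(h_out);
    return ok;
}
```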

For a tutorial on how to write the forward and backward pass of different layers, I recommend Andrej's layernorm tutorial.

The rest of these notes are organized as follows. The next section covers some background, both on diffusion models and on the UNet architecture we use. The later sections then document successive iterations on the model, where I benchmark kernels and try to speed things up. It turns out that most of a UNet's running time is spent doing 3x3 image convolutions, so that is where most of the work went and where these notes focus.

Background

Diffusion models

Our goal is to train a diffusion model with a UNet. Let me give a short summary of how diffusion models work. A good mathematical description can be found in Appendix B of the paper Diffusion Models Beat GANs on Image Synthesis; a good hands-on tutorial can be found at Chenyang Yuan's blog. We start with a target distribution $\pi(x)$ on $\mathbb{R}^d$ that we want to sample from. In our case, the space will be RGB images with $C = 3$ channels, height and width $H = W = 64$, and $d = C \times H \times W$, and the target distribution will be elephant images. The key idea is to set up a stochastic process $(X_t)_{t \ge 0}$ with the following three properties:

  1. At $t = 0$, $X_0$ is exactly sampled from $\pi(x)$.
  2. When $t$ is very large, $X_t$ is very close in distribution to the standard Gaussian distribution on $\mathbb{R}^d$.
  3. Given $X_t$, we can learn to sample from the conditional distribution $\pi(X_{t-1} \mid X_t)$.

These properties together enable us to draw samples from the target $\pi$ as follows:

  1. We draw a standard Gaussian random vector, and treat it as a sample of $X_T$ for a large $T$. This is valid because of property 2.
  2. Then, given $X_t$, we successively sample $X_{t-1}$ using property 3.
  3. Eventually we obtain a sample of $X_0$, which by property 1 is exactly distributed as the target $\pi$.

So now we need a stochastic process that satisfies these properties, and a way to learn the conditional distributions in property 3. The stochastic process $(X_t)_{t \ge 0}$ will look like so: $X_0$ is drawn from $\pi$, and $X_t$ is distributed as follows:

$$ X_t = \sqrt{\alpha_t} \cdot X_0 + \sqrt{1 - \alpha_t}\cdot \epsilon, $$

where $\epsilon$ is a standard Gaussian in $\mathbb{R}^d$, and $\alpha_t$ is a non-increasing function of $t$ that we will choose, with the properties that $\alpha_0 = 1$ and $\alpha_t \to 0$ as $t \to \infty$. We see that when $t$ is large, $X_t \approx \epsilon$, which satisfies property 2. Note that the equation above is only specifying the marginal distribution of $X_t$, so the conditional distribution $\pi(X_{t-1} \mid X_t)$ may not be deterministic (when the conditional is deterministic, we have the DDIM models).

To sample from the conditional distribution $\pi(X_{t-1} \mid X_t)$, we will train a model $\epsilon_\theta(X_t, t)$ that takes $X_t$ and $t$ as input and minimizes the following objective:

$$ L = \mathbb{E}[\lVert \epsilon - \epsilon_\theta(X_t, t) \rVert^2]. $$

Here the expectation is taken over $\epsilon$, $t$ and $X_t$, where $t$ is uniformly sampled from the range $[0, T]$, $\epsilon$ is sampled from the standard Gaussian, $X_0$ is sampled from $\pi$ (i.e. it is one of our training images), and $X_t$ is then constructed from $X_0$, $t$, and $\epsilon$ using the identity above. Conceptually, the model $\epsilon_\theta$ takes in the noisy input $X_t$ and tries to recover the noise component $\epsilon$ within it. With such a model, it is then fairly easy to sample from the conditional $\pi(X_{t - 1} \mid X_t)$; the details can be found in Appendix B of Diffusion Models Beat GANs on Image Synthesis.
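
As an aside on how this looks in code: building the noisy input $X_t$ from $X_0$, $t$, and $\epsilon$ is a purely element-wise operation, so it parallelizes trivially on the GPU. Below is a minimal sketch of such a kernel, with assumed names and layouts; it is an illustration of the formula above, not necessarily the repo's exact kernel.

```cuda
// Hypothetical sketch: X_t = sqrt(alpha_t) * X_0 + sqrt(1 - alpha_t) * eps,
// one element per thread. `alphas` holds the schedule alpha_t indexed by timestep,
// `t` holds each image's sampled timestep, and img_size = C * H * W.
__global__ void add_noise_kernel(float* x_t, const float* x0, const float* eps,
                                 const float* alphas, const int* t,
                                 int n, int img_size) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float a = alphas[t[idx / img_size]];  // alpha_t for the image this element belongs to
    x_t[idx] = sqrtf(a) * x0[idx] + sqrtf(1.0f - a) * eps[idx];
}
```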

UNet architecture

Our loss function dictates that we want a model which takes an input of shape (B, C, H, W), where B is the batch dimension, and returns an output of the same shape. The UNet is a sample efficient architecture designed specifically for such scenarios. The UNet we use is a basic version taken from the paper Diffusion Models Beat GANs on Image Synthesis, and it looks like this:

<p align="center"> <img src="assets/unet_arch.png" width="100%" /> </p>

Specifically, we use the residual blocks from BigGAN, which look like so (from Figure 15 of Large Scale GAN Training for High Fidelity Natural Image Synthesis):

<p align="center"> <img src="assets/resblock.png" width="30%" /> </p>

A few more notes on model details:

  • During the upsample blocks, we concatenate the skip connections from the corresponding downsampling blocks into the input.
  • To do diffusion model training, we also need to take in time step embeddings.
    • We use sinusoidal timestep embeddings (a sketch of one common formulation follows this list). We pass the embeddings through a fully connected layer and add the result to the input of each residual block.
  • We do not currently support dropout.
  • In Diffusion Models Beat GANs on Image Synthesis, they use a custom normalization layer called adaptive group normalization. We currently don't support this.
  • The full code for our UNet can be found in train_unet.py. Our model exactly matches the official implementation with the following model configurations:
```
--attention_resolutions 16,8 --class_cond False --diffusion_steps 1000 --dropout 0.0
--image_size 64 --learn_sigma False --noise_schedule linear --num_channels 64
--num_head_channels 32 --num_res_blocks 2 --resblock_updown False
--use_new_attention_order True --use_fp16 False --use_scale_shift_norm False
```
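
As referenced in the list above, here is a minimal sketch of a sinusoidal timestep embedding kernel. The layout (cosines in the first half of the embedding, sines in the second, with geometrically decaying frequencies) is one common formulation and an assumption here, not necessarily the repo's exact kernel.

```cuda
// Hypothetical sketch: sinusoidal timestep embedding of dimension `dim`,
// one thread per (image, embedding index) pair.
__global__ void timestep_embedding_kernel(float* out, const float* t, int B, int dim) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= B * dim) return;
    int b = idx / dim, d = idx % dim;
    int half = dim / 2;
    // frequencies decay geometrically from 1 down to roughly 1/10000
    float freq = expf(-logf(10000.0f) * (float)(d % half) / (float)half);
    out[idx] = (d < half) ? cosf(t[b] * freq) : sinf(t[b] * freq);
}
```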

Version 1: naive implementation

Kernels taken from llm.c: linear, groupnorm, and attention

In the first version I wanted to get something working quickly, so I copied or adapted the kernels from llm.c. This approach took care of some kernels: the linear layer could be reused from llm.c without adaptation; the groupnorm layer is different from the layernorm in llm.c, but we only needed to change the axis we reduce over to get a working kernel.

The self-attention layer was trickier. At first glance the adaptation seems straightforward: the attention layer functions identically for transformers and image models, and the only difference is that instead of the inputs having shape (B, T, C), they now have (B, C, H, W). So we can reuse the transformer attention kernels by first transposing the inputs to shape (B, H * W, C), then calling the kernels with T = H * W, then transposing the output back to shape (B, C, H, W).

This turns out to be highly inefficient, because for each transpose we need to move a block of size B * C * H * W in and out of GPU global memory, and as we will see later such steps should be avoided. So the attention kernels will be an obvious place for future improvements.
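
For concreteness, the transpose in question is itself trivially parallel, but each call still streams the full activation tensor through global memory. A minimal sketch with assumed names (not the repo's exact kernel):

```cuda
// Hypothetical sketch: permute (B, C, H, W) -> (B, H*W, C) so that transformer
// attention kernels can be reused with T = H * W. One element per thread; the
// whole tensor makes a round trip through global memory.
__global__ void permute_bchw_to_btc(float* out, const float* in, int B, int C, int HW) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= B * C * HW) return;
    int b = idx / (C * HW);
    int c = (idx / HW) % C;
    int p = idx % HW;                      // pixel index within the image
    out[(b * HW + p) * C + c] = in[idx];   // `in` is contiguous as (B, C, H*W)
}
```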

New kernels: upsample, downsample, and convolutions

Several kernels did not exist in llm.c, but they are needed for the UNet. They are:

  • Up and down sample,
  • 3x3 and 1x1 convolutions.

The up and down sample kernels (nearest interpolation and average pooling respectively) are easy: there is barely any computation, and we easily parallelize them by assigning one pixel to each GPU thread.
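
For example, a 2x nearest-neighbor upsample fits in a few lines, with one thread producing one output pixel (a minimal sketch with assumed names, not necessarily the repo's exact kernel):

```cuda
// Hypothetical sketch: 2x nearest-neighbor upsample of a (B, C, H, W) tensor.
// Each thread writes one output pixel by reading the matching input pixel.
__global__ void upsample_nearest_2x(float* out, const float* in, int B, int C, int H, int W) {
    int H2 = 2 * H, W2 = 2 * W;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= B * C * H2 * W2) return;
    int w = idx % W2;
    int h = (idx / W2) % H2;
    int bc = idx / (H2 * W2);                      // flattened (batch, channel) index
    out[idx] = in[(bc * H + h / 2) * W + w / 2];   // nearest neighbor in the input
}
```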

So we are left with the convolution kernels. I wanted to get something working quickly, but I also didn't want it to be too slow, so my plan was to convert all the convolutions to matrix multiplications, and then use cuBLAS, which should be fast.

This plan is quite natural for the 1x1 convolution: for inputs of shape (B, C_in, H, W) and weights of shape (C_out, C_in), the forward pass for a 1x1 convolution is essentially a matrix multiplication in the C_in dimension of the input with the weights. So 1x1 convolutions are done with the following steps:

  1. transpose the input from (B, C_in, H, W) to (B * H * W, C_in),
  2. do a single matrix multiplication of the input with the weights with cuBLAS SGEMM to get an output of shape (B * H * W, C_out), then add the bias,
  3. transpose the output back to shape (B, C_out, H, W).

Notice again that this approach needs two transposes of the entire input array, which are expensive. In iteration 2 we will write a custom kernel that avoids these transposes.
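
For reference, step 2 comes down to a single SGEMM. Here is a sketch with assumed names (the bias add and the transposes are handled elsewhere); since cuBLAS works with column-major matrices, the row-major product $Y = X W^\intercal$ is computed as $Y^\intercal = W X^\intercal$ by passing the operands accordingly:

```cuda
// Hypothetical sketch of step 2: out_t = x_t * W^T with one cuBLAS SGEMM, where
// x_t is the transposed input of shape (N, C_in) with N = B*H*W, the weight is
// (C_out, C_in), and out_t is (N, C_out), all row-major.
#include <cublas_v2.h>

void conv1x1_forward_matmul(cublasHandle_t handle, const float* x_t,
                            const float* weight, float* out_t,
                            int N, int C_in, int C_out) {
    const float alpha = 1.0f, beta = 0.0f;
    // cuBLAS sees the row-major (N, C_in) buffer as column-major (C_in, N), so we
    // compute out_t^T (C_out x N) = W (C_out x C_in) * x_t^T (C_in x N).
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                C_out, N, C_in,
                &alpha, weight, C_in,
                x_t, C_in,
                &beta, out_t, C_out);
}
```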

For the 3x3 convolutions, things are trickier. Let's focus on the forward pass, where the shapes of the relevant parameters are as follows:

  • input: (B, C_in, H, W),
  • weight: (C_out, C_in, 3, 3),
  • output: (B, C_out, H, W).

Since the plan is to cast the convolution into a matmul, it seems natural to transpose the input and output to shapes (B * H * W, C_in) and (B * H * W, C_out) respectively. For the weight tensor, we can think of it as consisting of 9 different weight matrices, all of shape (C_out, C_in), where each one corresponds to one of the 9 filters in the 3x3 convolution. Let's introduce some notation: let the transposed input be $X \in \mathbb{R}^{(B\cdot H \cdot W) \times C_\text{in}}$, the transposed output be $Y \in \mathbb{R}^{(B\cdot H\cdot W) \times C_\text{out}}$, and let the weight tensor be $W\in \mathbb{R}^{C_\text{out} \times C_\text{in} \times 9}$, where $W = (W_0, \dots, W_8)$, and each $W_i \in \mathbb{R}^{C_\text{out} \times C_\text{in}}$ is the weight matrix for filter $i \in \{0, \dots, 8\}$.

The convolution at a single output pixel works by multiplying the 9 pixels in its 3x3 neighborhood with the corresponding filters and summing the results. So roughly speaking, the convolution for the entire batch looks something like this:

  1. For each filter $i$, multiply the transposed input $X$ with the weight matrix $W_i$ to obtain an output $X W_i^\intercal$ of shape (B * H * W, C_out).
  2. Sum over the filters to obtain the transposed output $XW_0^\intercal + \dots + XW_8^\intercal$.

So we have turned the 3x3 convolution into matrix multiplications. Except this is not quite right, because not every pixel is multiplied with every filter. For instance, if we take a pixel on the top edge of an image, there is no row of pixels above it, so the filters that would read that missing row should contribute nothing to this pixel's output.
