In this repository we release models from the following papers:

- "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
- "MLP-Mixer: An MLP Architecture for Vision"
- "How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers"
The models were pre-trained on the ImageNet and ImageNet-21k datasets. We provide the code for fine-tuning the released models in JAX/Flax.
The models from this codebase were originally trained in https://github.com/google-research/big_vision/ where you can find more advanced code (e.g. multi-host training), as well as some of the original training scripts (e.g. configs/vit_i21k.py for pre-training a ViT, or configs/transfer.py for transferring a model).
The Colabs below run with both GPUs and TPUs (8 cores, data parallelism).
The first Colab demonstrates the JAX code of Vision Transformers and MLP Mixers. It lets you edit the files from the repository directly in the Colab UI, and has annotated cells that walk you through the code step by step and let you interact with the data.
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb
The second Colab allows you to explore the >50k Vision Transformer and hybrid
checkpoints that were used to generate the data of the third paper "How to train
your ViT? ...". The Colab includes code to explore and select checkpoints, and
to run inference both with the JAX code from this repo and with the popular timm
PyTorch library, which can also load these checkpoints directly. Note that a
handful of models are also available directly from TF-Hub:
sayakpaul/collections/vision_transformer (external contribution by Sayak Paul).
The second Colab also lets you fine-tune the checkpoints on any tfds dataset, or on your own dataset with examples stored as individual JPEG files (optionally reading directly from Google Drive).
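As a rough illustration of the timm route mentioned above, the sketch below loads a ViT-B/16 in PyTorch. The model identifier is illustrative, and which checkpoint tags map to the AugReg weights depends on your installed timm version, so check timm.list_models first.

```python
# Minimal sketch: loading a ViT in PyTorch via timm (the model name below is
# illustrative; run timm.list_models() to see what your timm version provides).
import timm
import torch

print(timm.list_models('vit_base_patch16*', pretrained=True)[:5])  # discover available variants

model = timm.create_model('vit_base_patch16_224', pretrained=True)  # downloads pre-trained weights
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy image batch
print(logits.shape)
```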
Note: As of 6/20/21, Google Colab only supports a single GPU (Nvidia Tesla T4), and TPUs (currently TPUv2-8) are attached indirectly to the Colab VM and communicate over a slow network, which leads to pretty bad training speed. You would usually want to set up a dedicated machine if you have a non-trivial amount of data to fine-tune on. For details see the Running on cloud section.
Make sure you have Python>=3.10
installed on your machine.
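If you want to confirm the interpreter version before installing anything, a quick one-off check (not part of this repo) is:

```python
# Sanity check that the interpreter meets the Python>=3.10 requirement.
import sys
assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version}"
print(sys.version)
```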
Install JAX and python dependencies by running:
# If using GPU:
pip install -r vit_jax/requirements.txt
# If using TPU:
pip install -r vit_jax/requirements-tpu.txt
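After installing, it can be useful to confirm that JAX actually sees your accelerator before starting a fine-tuning run. The snippet below is a generic JAX check, not part of this repository:

```python
# Verify the JAX installation and that the expected devices (GPU/TPU) are visible.
import jax
import jax.numpy as jnp

print(jax.__version__)
print(jax.devices())  # should list GPU/TPU devices, not only CPU

# Run a tiny computation on the default device as a smoke test.
x = jnp.ones((1024, 1024))
print((x @ x).sum())
```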
For newer versions of JAX, follow the instructions provided in the corresponding repository. Note that installation instructions for CPU, GPU and TPU differ slightly.
To install Flaxformer, follow the instructions provided in the corresponding repository.
For more details refer to the section Running on cloud below.
You can run fine-tuning of the downloaded model on your dataset of interest. All models share the same command line interface.
For example, to fine-tune a ViT-B/16 (pre-trained on imagenet21k) on CIFAR10
(note how we specify b16,cifar10
as arguments to the config, and how we
instruct the code to access the models directly from a GCS bucket instead of
first downloading them into the local directory):
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k'
In order to fine-tune a Mixer-B/16 (pre-trained on imagenet21k) on CIFAR10:
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/mixer_base16_cifar10.py \
    --config.pretrained_dir='gs://mixer_models/imagenet21k'
The "How to train your ViT? ..." paper added >50k checkpoints that you can
fine-tune with the [configs/augreg.py
] config. When you only specify the model
name (the config.name
value from [configs/model.py
]), then the best i21k
checkpoint by upstream validation accuracy ("recommended" checkpoint, see
section 4.5 of the paper) is chosen. To make up your mind which model you want
to use, have a look at Figure 3 in the paper. It's also possible to choose a
different checkpoint (see Colab [vit_jax_augreg.ipynb
]) and then specify the
value from the filename
or adapt_filename
column, which correspond to the
filenames without .npz
from the [gs://vit_models/augreg
] directory.
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/augreg.py:R_Ti_16 \
    --config.dataset=oxford_iiit_pet \
    --config.base_lr=0.01
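If you prefer to browse the available AugReg checkpoints programmatically rather than through the Colab, their names can be listed straight from the public GCS bucket. This is just a generic tf.io.gfile sketch; the glob pattern is an example, using a filename format taken from the table of recommended checkpoints further down.

```python
# List AugReg checkpoint filenames in the public bucket; the stems (without
# .npz) are the values you can pass via the filename/adapt_filename column.
# Requires a TensorFlow build with GCS filesystem support.
import tensorflow as tf

paths = tf.io.gfile.glob('gs://vit_models/augreg/B_16-i21k-300ep-*.npz')
for path in paths[:10]:
    print(path.split('/')[-1].removesuffix('.npz'))
```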
Currently, the code will automatically download the CIFAR-10 and CIFAR-100 datasets.
Other public or custom datasets can be easily integrated using the tensorflow
datasets library. Note that you will also need to update vit_jax/input_pipeline.py
to specify some parameters about any added dataset (see the sketch below for how
to look these up).
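The exact fields that input_pipeline.py expects are not shown here, but the kind of information they need (split names, number of classes, image shapes) can be looked up with plain tensorflow_datasets, for example:

```python
# Inspect a tfds dataset to find the values you would register for it in
# vit_jax/input_pipeline.py (split names, class count, raw image shape).
import tensorflow_datasets as tfds

builder = tfds.builder('oxford_iiit_pet')  # any tfds dataset name works here
print(list(builder.info.splits))                   # e.g. ['train', 'test']
print(builder.info.features['label'].num_classes)  # number of classes
print(builder.info.features['image'].shape)        # raw image shape
```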
Note that our code uses all available GPUs/TPUs for fine-tuning.
To see a detailed list of all available flags, run python3 -m vit_jax.main --help.
Notes on memory:

- If you run out of accelerator memory, you can increase --config.accum_steps=8 -- alternatively, you could also decrease the --config.batch=512 (and decrease --config.base_lr accordingly); see the sketch after this list for how these values relate.
- The host keeps a shuffle buffer in memory; if you run into host memory issues, you can decrease the default --config.shuffle_buffer=50000.
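As a rough illustration of how these flags interact (assuming --config.accum_steps splits each batch into that many sequential micro-batches, which is how gradient accumulation is typically applied), per-step memory scales with the micro-batch held per device:

```python
# Back-of-the-envelope relation between batch size, accumulation steps and
# per-device micro-batch size (illustrative only; flag semantics assumed).
batch = 512          # --config.batch
accum_steps = 8      # --config.accum_steps
num_devices = 8      # e.g. a TPUv2-8 or an 8-GPU host

micro_batch = batch // accum_steps        # examples processed per accumulation step
per_device = micro_batch // num_devices   # examples held in memory per device
print(micro_batch, per_device)            # 64, 8
```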
Vision Transformer, introduced in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", by Alexey Dosovitskiy*†, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby*†.
(*) equal technical contribution, (†) equal advising.
Overview of the model: we split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence.
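To make the description above concrete, here is a minimal, self-contained Flax sketch of the patch-embedding front end (this is not the repository's implementation; module and parameter names are illustrative):

```python
# Minimal sketch of the ViT front end described above: split the image into
# fixed-size patches, linearly embed them, prepend a learnable class token,
# and add position embeddings. (Illustrative only, not vit_jax's code.)
import jax
import jax.numpy as jnp
import flax.linen as nn


class PatchEmbedding(nn.Module):
  patch_size: int = 16
  hidden_dim: int = 768

  @nn.compact
  def __call__(self, images):  # images: [batch, height, width, 3]
    # A convolution with kernel size == stride == patch size both extracts
    # non-overlapping patches and linearly embeds them in one step.
    x = nn.Conv(self.hidden_dim,
                kernel_size=(self.patch_size, self.patch_size),
                strides=(self.patch_size, self.patch_size))(images)
    batch, h, w, c = x.shape
    x = x.reshape(batch, h * w, c)  # flatten patches into a token sequence
    cls = self.param('cls', nn.initializers.zeros, (1, 1, c))
    x = jnp.concatenate([jnp.tile(cls, (batch, 1, 1)), x], axis=1)
    pos = self.param('pos_embedding',
                     nn.initializers.normal(stddev=0.02),
                     (1, x.shape[1], c))
    return x + pos  # this sequence is what the Transformer encoder consumes


module = PatchEmbedding()
images = jnp.zeros((1, 224, 224, 3))
params = module.init(jax.random.PRNGKey(0), images)
print(module.apply(params, images).shape)  # (1, 197, 768): 14*14 patches + class token
```

In the full model, this token sequence is processed by a stack of Transformer encoder blocks, and the final representation of the class token is fed to the classification head.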
We provide a variety of ViT models in different GCS buckets. The models can be downloaded with e.g.:
wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz
The model filenames (without the .npz extension) correspond to the config.model_name in vit_jax/configs/models.py.
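Once a checkpoint is downloaded, you can peek at its contents with plain NumPy to see the parameter names and shapes it stores (a quick inspection sketch, not a loading utility from this repo):

```python
# Inspect a downloaded .npz checkpoint: each entry is a named parameter array.
import numpy as np

checkpoint = np.load('ViT-B_16.npz')
for name in list(checkpoint.keys())[:10]:   # print the first few parameter names
    print(name, checkpoint[name].shape)
```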
- gs://vit_models/imagenet21k - Models pre-trained on ImageNet-21k.
- gs://vit_models/imagenet21k+imagenet2012 - Models pre-trained on ImageNet-21k and fine-tuned on ImageNet.
- gs://vit_models/augreg - Models pre-trained on ImageNet-21k, applying varying amounts of AugReg. Improved performance.
- gs://vit_models/sam - Models pre-trained on ImageNet with SAM.
- gs://vit_models/gsam - Models pre-trained on ImageNet with GSAM.

We recommend using the following checkpoints, trained with AugReg, that have the best pre-training metrics:
Model | Pre-trained checkpoint | Size | Fine-tuned checkpoint | Resolution | Img/sec | ImageNet accuracy |
---|---|---|---|---|---|---|
L/16 | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz | 1243 MiB | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 50 | 85.59% |
B/16 | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz | 391 MiB | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 138 | 85.49% |
S/16 | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz | 115 MiB | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 300 | 83.73% |
R50+L/32 | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz | 1337 MiB | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 327 | 85.99% |
R26+S/32 | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 170 MiB | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 560 | 83.85% |
Ti/16 | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 37 MiB | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 610 | 78.22% |
B/32 | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 398 MiB | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 955 | 83.59% |
S/32 | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0.npz | 118 MiB | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 2154 | 79.58% |
R+Ti/16 | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 40 MiB | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 2426 | 75.40% |
The results from the original ViT paper (https://arxiv.org/abs/2010.11929) have
been replicated using the models from gs://vit_models/imagenet21k:
model | dataset | dropout=0.0 | dropout=0.1 |
---|---|---|---|
R50+ViT-B_16 | cifar10 | 98.72%, 3.9h (A100), tb.dev | 98.94%, 10.1h (V100) |