Supported tags and respective Dockerfile links

- `python3.11`, `latest` (Dockerfile)
- `python3.10` (Dockerfile)
- `python3.9` (Dockerfile)
- `python3.8` (Dockerfile)
- `python3.7` (Dockerfile)
- `python3.11-slim` (Dockerfile)
- `python3.10-slim` (Dockerfile)
- `python3.9-slim` (Dockerfile)
- `python3.8-slim` (Dockerfile)
🚨 These tags are no longer supported or maintained. They have been removed from the GitHub repository, but the last versions pushed might still be available on Docker Hub if anyone has been pulling them:

- `python3.8-alpine3.10`
- `python3.9-alpine3.14`
- `python3.7-alpine3.8`
- `python3.6`
- `python3.6-alpine3.8`

The last date tags for these versions are:

- `python3.8-alpine3.10-2024-03-17`
- `python3.9-alpine3.14-2024-03-17`
- `python3.7-alpine3.8-2024-03-17`
- `python3.6-2022-11-25`
- `python3.6-alpine3.8-2022-11-25`
Note: There are tags for each build date. If you need to "pin" the Docker image version you use, you can select one of those tags, e.g. `tiangolo/uvicorn-gunicorn-starlette:python3.7-2019-10-15`.
Docker image with Uvicorn managed by Gunicorn for high-performance Starlette web applications in Python with performance auto-tuning.
GitHub repo: https://github.com/tiangolo/uvicorn-gunicorn-starlette-docker
Docker Hub image: https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-starlette/
Starlette has shown itself to be one of the highest-performing Python web frameworks, as measured by third-party benchmarks.
The achievable performance is on par with (and in many cases superior to) Go and Node.js frameworks.
This image has an auto-tuning mechanism included to start a number of worker processes based on the available CPU cores. That way you can just add your code and get high performance automatically, which is useful in simple deployments.
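The idea behind that auto-tuning can be sketched in a few lines. This is a simplified illustration using only the standard library, not the image's actual startup script (which also honors other settings):

```Python
# Simplified sketch of CPU-based worker auto-tuning.
import multiprocessing
import os

workers_per_core = float(os.getenv("WORKERS_PER_CORE", "1"))
cores = multiprocessing.cpu_count()
# Keep at least 2 workers so one can keep serving while another restarts.
workers = max(int(cores * workers_per_core), 2)
print(f"Would start {workers} worker processes on {cores} CPU cores")
```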
You are probably using Kubernetes or similar tools. In that case, you probably don't need this image (or any other similar base image). You are probably better off building a Docker image from scratch as explained in the docs for FastAPI in Containers - Docker: Build a Docker Image for FastAPI; the same process can be applied to Starlette.
If you have a cluster of machines with Kubernetes, Docker Swarm Mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to handle replication at the cluster level instead of using a process manager (like Gunicorn with Uvicorn workers) in each container, which is what this Docker image does.
In those cases (e.g. using Kubernetes) you would probably want to build a Docker image from scratch, installing your dependencies, and running a single Uvicorn process instead of this image.
For example, your `Dockerfile` could look like:

```Dockerfile
FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY ./app /code/app

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```
You can read more about this in the FastAPI documentation about: FastAPI in Containers - Docker as the same ideas would apply to Starlette.
You could want a process manager like Gunicorn running Uvicorn workers in the container if: your application is simple enough that you don't need (at least not yet) to fine-tune the number of processes too much; you can just use an automated default; and you are running it on a single server, not a cluster.
You could be deploying to a single server (not a cluster) with Docker Compose, so you wouldn't have an easy way to manage replication of containers (with Docker Compose) while preserving the shared network and load balancing.
Then you could want to have a single container with a Gunicorn process manager starting several Uvicorn worker processes inside, as this Docker image does.
You could also have other reasons that would make it easier to have a single container with multiple processes instead of having multiple containers with a single process in each of them.
For example (depending on your setup) you could have a tool like a Prometheus exporter in the same container that should have access to each of the requests that come in.
In this case, if you had multiple containers, then by default, when Prometheus came to read the metrics, it would get the ones for a single container each time (for the particular container that handled that request), instead of getting the accumulated metrics for all the replicated containers.
Then, in that case, it could be simpler to have one container with multiple processes, and a local tool (e.g. a Prometheus exporter) on the same container collecting Prometheus metrics for all the internal processes and exposing those metrics on that single container.
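As an illustration, here is a minimal sketch of that idea, assuming the `prometheus_client` library; the `/metrics` endpoint and its setup are hypothetical, not something this image provides:

```Python
# Sketch: expose aggregated Prometheus metrics for all worker processes in
# one container using prometheus_client's multiprocess mode. It assumes the
# PROMETHEUS_MULTIPROC_DIR environment variable points to a writable
# directory shared by the workers.
from prometheus_client import CollectorRegistry, generate_latest, multiprocess
from starlette.applications import Starlette
from starlette.responses import Response

app = Starlette()


@app.route("/metrics")
async def metrics(request):
    registry = CollectorRegistry()
    # Collect and merge the metrics written by every worker process.
    multiprocess.MultiProcessCollector(registry)
    return Response(generate_latest(registry), media_type="text/plain")
```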
Read more about it all in the FastAPI documentation about: FastAPI in Containers - Docker.
Uvicorn is a lightning-fast "ASGI" server.
It runs asynchronous Python web code in a single process.
You can use Gunicorn to start and manage multiple Uvicorn worker processes.
That way, you get the best of concurrency and parallelism in simple deployments.
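As a rough sketch of that pairing, here is a hypothetical, minimal `gunicorn_conf.py` (Gunicorn configuration files are plain Python); this is not this image's actual configuration:

```Python
# gunicorn_conf.py (illustrative): Gunicorn manages several Uvicorn workers.
bind = "0.0.0.0:80"  # listen on all interfaces, port 80
workers = 4  # number of Uvicorn processes Gunicorn starts and supervises
worker_class = "uvicorn.workers.UvicornWorker"  # run each worker with Uvicorn
```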
Starlette is a lightweight ASGI framework/toolkit, which is ideal for building high performance asyncio services.
tiangolo/uvicorn-gunicorn-starlette
This image will set a sensible configuration based on the server it is running on (the amount of CPU cores available) without making sacrifices.
It has sensible defaults, but you can configure it with environment variables or override the configuration files.
There is also a slim version. If you want that, use one of the tags from above.
tiangolo/uvicorn-gunicorn
This image (tiangolo/uvicorn-gunicorn-starlette) is based on tiangolo/uvicorn-gunicorn.
That image is what actually does all the work.
This image just installs Starlette and has the documentation specifically targeted at Starlette.
If you feel confident about your knowledge of Uvicorn, Gunicorn and ASGI, you can use that image directly.
tiangolo/uvicorn-gunicorn-fastapi
There is a sibling Docker image: tiangolo/uvicorn-gunicorn-fastapi
If you are creating a new FastAPI web application you should use tiangolo/uvicorn-gunicorn-fastapi instead.
Note: FastAPI is based on Starlette and adds several features on top of it. Useful for APIs and other cases: data validation, data conversion, documentation with OpenAPI, dependency injection, security/authentication and others.
You don't need to clone the GitHub repo.
You can use this image as a base image for other images.
Assuming you have a file `requirements.txt`, you could have a `Dockerfile` like this:

```Dockerfile
FROM tiangolo/uvicorn-gunicorn-starlette:python3.11

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app
```
It will expect a file at `/app/app/main.py`, or otherwise a file at `/app/main.py`, and will expect it to contain a variable `app` with your Starlette application.
Then you can build your image from the directory that has your `Dockerfile`, e.g.:

```console
docker build -t myimage ./
```
Or, step by step:

- Create a `Dockerfile` with:

```Dockerfile
FROM tiangolo/uvicorn-gunicorn-starlette:python3.11

COPY ./requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app
```

- Create an `app` directory and enter it.
- Create a `main.py` file with:

```Python
from starlette.applications import Starlette
from starlette.responses import JSONResponse

app = Starlette()


@app.route("/")
async def homepage(request):
    return JSONResponse({"message": "Hello World!"})
```

- You should now have a directory structure like:

```
.
├── app
│   └── main.py
└── Dockerfile
```

- Go to the project directory (where your `Dockerfile` is, containing your `app` directory).
- Build your image:

```console
docker build -t myimage .
```

- Run a container based on your image:

```console
docker run -d --name mycontainer -p 80:80 myimage
```
Now you have an optimized Starlette server in a Docker container. Auto-tuned for your current server (and number of CPU cores).
You should be able to check it in your Docker container's URL, for example: http://192.168.99.100/ or http://127.0.0.1/ (or equivalent, using your Docker host).
You will see something like:

```JSON
{"message": "Hello World!"}
```
You will probably also want to add any dependencies for your app and pin them to a specific version, probably including Uvicorn, Gunicorn, and Starlette.
This way you can make sure your app always works as expected.
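For example, a pinned `requirements.txt` could look like this (the version numbers are purely illustrative; pin the versions you have actually tested):

```
starlette==0.27.0
uvicorn[standard]==0.23.2
gunicorn==21.2.0
```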
You could install packages with `pip` commands in your `Dockerfile`, using a `requirements.txt`, or even using Poetry.
And then you can upgrade those dependencies in a controlled way, running your tests, making sure that everything works, but without breaking your production application if some new version is not compatible.
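For instance, a minimal smoke test could look like the sketch below, assuming `pytest` and Starlette's `TestClient`, and the example `app/main.py` from above:

```Python
# test_main.py - a minimal smoke test to run before rolling out upgraded
# pinned dependencies (assumes the app/main.py example shown earlier).
from starlette.testclient import TestClient

from app.main import app

client = TestClient(app)


def test_homepage():
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello World!"}
```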
Here's a small example of one of the ways you could install your dependencies, making sure you have a pinned version for each package.

Let's say you have a project managed with Poetry, so you have your package dependencies in a file `pyproject.toml`, and possibly a file `poetry.lock`.

Then you could have a `Dockerfile` using Docker multi-stage building with:

```Dockerfile
FROM python:3.9 as requirements-stage

WORKDIR /tmp

RUN pip install poetry

COPY ./pyproject.toml ./poetry.lock* /tmp/

RUN poetry export -f requirements.txt --output requirements.txt --without-hashes

FROM tiangolo/uvicorn-gunicorn-starlette:python3.11

COPY --from=requirements-stage /tmp/requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt

COPY ./app /app
```
That will:

- Install Poetry in a first build stage (`requirements-stage`) and use it to export your pinned dependencies to a `requirements.txt`.
- Copy your dependency files. Because it copies `./poetry.lock*` (ending with a `*`), it won't crash if that file is not available yet.
- Install the exported dependencies in the final image, based on this one.
- Then copy your app code.

It's important to copy the app code after installing the dependencies; that way you can take advantage of Docker's cache. It won't have to install everything from scratch every time you update your application files, only when you add new dependencies.
This also applies to any other way you install your dependencies. If you use a `requirements.txt`, copy it alone and install all the dependencies at the top of the `Dockerfile`, and add your app code after it.
These are the environment variables you can set in the container to configure it, along with their default values:
MODULE_NAME
The Python "module" (file) to be imported by Gunicorn, this module would contain the actual application in a variable.
By default:

- `app.main` if there's a file `/app/app/main.py`, or
- `main` if there's a file `/app/main.py`
For example, if your main file was at `/app/custom_app/custom_main.py`, you could set it like:

```console
docker run -d -p 80:80 -e MODULE_NAME="custom_app.custom_main" myimage
```
VARIABLE_NAME
The variable inside of the Python module that contains the Starlette application.
By default:

- `app`
For example, if your main Python file has something like:
```Python
from starlette.applications import Starlette
from starlette.responses import JSONResponse

api = Starlette()


@api.route("/")
async def homepage(request):
    return JSONResponse({"message": "Hello World!"})
```

In this case `api` would be the variable with the Starlette application. You could set it like:

```console
docker run -d -p 80:80 -e VARIABLE_NAME="api" myimage
```
APP_MODULE
The string with the Python module and the variable name passed to Gunicorn.
By default, set based on the variables `MODULE_NAME` and `VARIABLE_NAME`:

- `app.main:app` or
- `main:app`
You can set it like:
```console
docker run -d -p 80:80 -e APP_MODULE="custom_app.custom_main:api" myimage
```
GUNICORN_CONF
The path to a Gunicorn Python configuration file.
By default:

- `/app/gunicorn_conf.py` if it exists
- `/app/app/gunicorn_conf.py` if it exists
- `/gunicorn_conf.py` (the included default)

You can set it like:

```console
docker run -d -p 80:80 -e GUNICORN_CONF="/app/custom_gunicorn_conf.py" myimage
```
You can use the config file from the base image as a starting point for yours.
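For instance, a custom config could keep the Uvicorn worker class and override just a few settings. This is a hypothetical sketch; the values are illustrative, not recommendations:

```Python
# /app/custom_gunicorn_conf.py - Gunicorn config files are plain Python
# executed at startup; these overrides are only an example.
worker_class = "uvicorn.workers.UvicornWorker"  # keep Uvicorn worker processes
loglevel = "debug"  # more verbose logs while debugging
timeout = 120  # seconds before an unresponsive worker is killed and restarted
```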
WORKERS_PER_CORE
This image will check how many CPU cores are available in the current server running your container.
It will set the number of workers to the number of CPU cores multiplied by this value.
By default:

- `1`
You can set it like:
```console
docker run -d -p 80:80 -e WORKERS_PER_CORE="3" myimage
```
If you used the value `3` in a server with 2 CPU cores, it would run 6 worker processes.
You can use floating point values too.
So, for example, if you have a big server (let's say, with 8 CPU cores) running several applications, and one of them is an ASGI application that you know won't need high performance, and you don't want to waste server resources, you could make it use 0.5 workers per core. For example:

```console
docker run -d -p 80:80 -e WORKERS_PER_CORE="0.5" myimage
```

In a server with 8 CPU cores, this would make it start 4 worker processes.