containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.
containerd is a member of CNCF with 'graduated' status.
containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.
We are a large, inclusive OSS project that welcomes help of any kind, shape, or form. New contributors can look for issues with the exp/beginner tag, for example the containerd/containerd beginner issues.
See our documentation on containerd.io:
To get started contributing to containerd, see CONTRIBUTING.
If you are interested in trying out containerd, see our example at Getting Started.
There are nightly builds available for download here.
Binaries are generated from the main branch every night for Linux and Windows.
Please be aware: nightly builds might have critical bugs; they are not recommended for production use and no support is provided.
The k8s CI dashboard group for containerd contains test results regarding the health of Kubernetes when run against main and a number of containerd release branches.
Runtime requirements for containerd are very minimal. Most interactions with
the Linux and Windows container feature sets are handled via runc and/or
OS-specific libraries (e.g. hcsshim for Microsoft).
The current required version of runc
is described in RUNC.md.
There are specific features used by containerd core code and snapshotters that will require a minimum kernel version on Linux. With the understood caveat of distro kernel versioning, a reasonable starting point for Linux is a minimum 4.x kernel version.
The overlay filesystem snapshotter, used by default, uses features that were finalized in the 4.x kernel series. If you choose to use btrfs, there may be more flexibility in kernel version (minimum recommended is 3.18), but it will require the btrfs kernel module and btrfs tools to be installed on your Linux distribution.
To use Linux checkpoint and restore features, you will need criu installed on your system. See more details in Checkpoint and Restore.
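As a quick, hedged way to confirm these prerequisites on a host (commands and package availability vary by distribution, and the overlay module may need to be loaded first), you can check the kernel version, overlayfs availability, and criu support:

```console
# kernel version: a 4.x or newer kernel is the suggested starting point
$ uname -r

# overlayfs support for the default overlay snapshotter
$ grep overlay /proc/filesystems

# criu support, if you plan to use checkpoint and restore
$ criu check
```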
Build requirements for developers are listed in BUILDING.
Any registry which is compliant with the OCI Distribution Specification is supported by containerd.
For configuring registries, see the registry host configuration documentation.
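As a hedged sketch only (the registry and mirror URLs below are placeholders, and it assumes the certs.d config_path layout is enabled in your containerd configuration), a per-registry hosts.toml might look like:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml (illustrative)
server = "https://registry-1.docker.io"

# optional mirror used for pull and resolve operations
[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```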
containerd offers a full client package to help you integrate containerd into your platform.
import ( "context" containerd "github.com/containerd/containerd/v2/client" "github.com/containerd/containerd/v2/pkg/cio" "github.com/containerd/containerd/v2/pkg/namespaces" ) func main() { client, err := containerd.New("/run/containerd/containerd.sock") defer client.Close() }
Namespaces allow multiple consumers to use the same containerd without conflicting with each other. They have the benefit of sharing content while maintaining separation of containers and images.
To set a namespace for requests to the API:
```go
context = context.Background()
// create a context for docker
docker = namespaces.WithNamespace(context, "docker")

containerd, err := client.NewContainer(docker, "id")
```
To set a default namespace on the client:
```go
client, err := containerd.New(address, containerd.WithDefaultNamespace("docker"))
```
The client can also pull and push images:

```go
// pull an image
image, err := client.Pull(context, "docker.io/library/redis:latest")

// push an image
err := client.Push(context, "docker.io/library/redis:latest", image.Target())
```
In containerd, a container is a metadata object. Resources such as an OCI runtime specification, image, root filesystem, and other metadata can be attached to a container.
redis, err := client.NewContainer(context, "redis-master") defer redis.Delete(context)
containerd fully supports the OCI runtime specification for running containers. We have built-in functions to help you generate runtime specifications based on images as well as custom parameters.
You can specify options when creating a container about how to modify the specification.
redis, err := client.NewContainer(context, "redis-master", containerd.WithNewSpec(oci.WithImageConfig(image)))
containerd allows you to use overlay or snapshot filesystems with your containers. It comes with built-in support for overlayfs and btrfs.
```go
// pull an image and unpack it into the configured snapshotter
image, err := client.Pull(context, "docker.io/library/redis:latest", containerd.WithPullUnpack)

// allocate a new RW root filesystem for a container based on the image
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)

// use a readonly filesystem with multiple containers
for i := 0; i < 10; i++ {
	id := fmt.Sprintf("id-%d", i)
	container, err := client.NewContainer(ctx, id,
		containerd.WithNewSnapshotView(id, image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
}
```
Taking a container object and turning it into a runnable process on a system is done by creating a new Task
from the container. A task represents the runnable object within containerd.
```go
// create a new task
task, err := redis.NewTask(context, cio.NewCreator(cio.WithStdio))
defer task.Delete(context)

// the task is now running and has a pid that can be used to setup networking
// or other runtime settings outside of containerd
pid := task.Pid()

// start the redis-server process inside the container
err := task.Start(context)

// wait for the task to exit and get the exit status
status, err := task.Wait(context)
```
If you have criu installed on your machine you can checkpoint and restore containers and their tasks. This allows you to clone and/or live migrate containers to other machines.
```go
// checkpoint the task then push it to a registry
checkpoint, err := task.Checkpoint(context)

err := client.Push(context, "myregistry/checkpoints/redis:master", checkpoint)

// on a new machine pull the checkpoint and restore the redis container
checkpoint, err := client.Pull(context, "myregistry/checkpoints/redis:master")

redis, err = client.NewContainer(context, "redis-master", containerd.WithNewSnapshot("redis-rootfs", checkpoint))
defer redis.Delete(context)

task, err = redis.NewTask(context, cio.NewCreator(cio.WithStdio), containerd.WithTaskCheckpoint(checkpoint))
defer task.Delete(context)

err := task.Start(context)
```
In addition to the built-in Snapshot plugins in containerd, external plugins can be configured using GRPC. An external plugin is made available using the configured name and appears as a plugin alongside the built-in ones.
To add an external snapshot plugin, add the plugin to containerd's config file (by default at /etc/containerd/config.toml). The string following proxy_plugins. will be used as the name of the snapshotter, and the address should refer to a socket with a GRPC listener serving containerd's Snapshot GRPC API. Remember to restart containerd for any configuration changes to take effect.
```toml
[proxy_plugins]
  [proxy_plugins.customsnapshot]
    type = "snapshot"
    address = "/var/run/mysnapshotter.sock"
```
See PLUGINS.md for how to create plugins.
Please see RELEASES.md for details on versioning and stability of containerd components.
Downloadable 64-bit Intel/AMD binaries of all official releases are available on our releases page.
For other architectures and distribution support, you will find that many Linux distributions package their own containerd and provide it across several architectures, such as Canonical's Ubuntu packaging.
Starting with containerd 1.4, the urfave client feature for auto-creation of bash and zsh autocompletion data is enabled. To use the autocomplete feature in a bash shell, for example, source the autocomplete/ctr file in your .bashrc, or manually like:

```console
$ source ./contrib/autocomplete/ctr
```
To distribute ctr autocomplete for bash, copy the contrib/autocomplete/ctr script into /etc/bash_completion.d/ and rename it to ctr. The zsh_autocomplete file is also available and can be used similarly for zsh users.
Provide documentation to users to source this file into their shell if you don't place the autocomplete file in a location where it is automatically loaded for the user's shell environment.
cri is a containerd plugin implementation of the Kubernetes container runtime interface (CRI). With it, you are able to use containerd as the container runtime for a Kubernetes cluster.
cri is a native plugin of containerd. Since containerd 1.1, the cri plugin is built into the release binaries and enabled by default.
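As an illustrative example only (kubelet flags and configuration fields vary across Kubernetes versions; the socket path shown is containerd's default), a node's kubelet is pointed at containerd's CRI endpoint:

```console
# other required kubelet flags and configuration omitted for brevity
$ kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```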
The cri plugin has reached GA status. See results on the containerd k8s test dashboard.
To validate your cri setup: a Kubernetes incubator project, cri-tools, includes programs for exercising CRI implementations. More importantly, cri-tools includes the program critest, which is used for running CRI Validation Testing.
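As a hedged illustration (the socket path shown is containerd's default and may differ on your system), crictl from cri-tools can be pointed directly at containerd's CRI endpoint:

```console
# list containers through containerd's CRI implementation
$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
```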
Guides are also available covering crictl and the configuration of cri plugins.
For async communication and long-running discussions, please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.
For sync communication, catch us in the #containerd and #containerd-dev Slack channels on Cloud Native Computing Foundation's (CNCF) Slack: cloud-native.slack.com. Everyone is welcome to join and chat. Get Invite to CNCF Slack.
Security audits for the containerd project are hosted on our website. Please see the security page at containerd.io for more information.
To report a security issue, please follow the instructions at containerd/project.
The containerd codebase is released under the Apache 2.0 license. The README.md file and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.
containerd is the primary open source project within the broader containerd GitHub organization.
However, all projects within the repo have common maintainership, governance, and contributing
guidelines which are stored in a project
repository commonly for all containerd projects.
Please find all of these core project documents, including the governance, maintainers, and contributing guidelines, in our containerd/project repository.
Interested to see who is using containerd? Are you using containerd in a project? Please add yourself via a pull request to our ADOPTERS.md file.