🚨 rules_docker is deprecated and no longer maintained. Please see rules_oci for a better designed and maintained alternative.
These rules used to be docker_build, docker_push, etc., and aliases for these
(mostly) legacy names still exist, largely for backwards compatibility. We
also have early-stage oci_image, oci_push, etc. aliases for folks who prefer a
consistent rule prefix. The only place the format-specific names currently do
more than alias things is in foo_push, where they also specify the format in
which to publish the image.
This repository contains a set of rules for pulling down base images, augmenting them with build artifacts and assets, and publishing those images. These rules do not require / use Docker for pulling, building, or pushing images. This means they can be used to develop on OSX without boot2docker or docker-machine installed. Note that use of these rules on Windows is currently not supported.

Also, unlike traditional container builds (e.g. Dockerfile), the Docker images
produced by container_image are deterministic / reproducible.
To get started with building Docker images, check out the examples that build the same images using both rules_docker and a Dockerfile.
NOTE: container_push and container_pull make use of
google/go-containerregistry for
registry interactions.
Note that cc_image, go_image, rust_image, and d_image
also allow you to specify an external binary target.
This repo now includes rules that provide additional functionality to install packages and run commands inside docker containers. These rules, however, require that a docker binary be present and properly configured. These rules include container_run_and_commit, container_run_and_extract, and the package-manager rules download_pkgs and install_pkgs (see the sketch below).
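As an illustration only, here is a minimal sketch of one such rule; the load path and attribute values are assumptions to check against the rules_docker sources before use:

```python
# Sketch: run commands inside an existing image and commit the result as a new image.
# ASSUMPTION: load path and attributes follow rules_docker's docker/util package.
load("@io_bazel_rules_docker//docker/util:run.bzl", "container_run_and_commit")

container_run_and_commit(
    name = "image_with_curl",
    # Tarball produced by a hypothetical container_image target named "app".
    image = ":app.tar",
    commands = [
        "apt-get update",
        "apt-get install -y curl",
    ],
)
```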
In addition to low-level rules for building containers, this repository
provides a set of higher-level rules for containerizing applications. The idea
behind these rules is to make containerizing an application built via a
lang_binary rule as simple as changing it to lang_image.
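For example, containerizing a Go binary can be as small as the following sketch (file, target, and import-path names here are hypothetical):

```python
load("@io_bazel_rules_docker//go:image.bzl", "go_image")

# Takes the same attributes as the go_binary it replaces (or sits alongside),
# plus image-related attributes such as "base".
go_image(
    name = "server_image",
    srcs = ["main.go"],
    importpath = "example.com/server",
)
```

Running bazel run //path/to:server_image then loads (and runs) the image via your local Docker client, as described later in this document.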
By default these higher level rules make use of the distroless language runtimes, but these
can be overridden via the base="..." attribute (e.g. with a container_pull
or container_image target).
Note also that these rules do not expose any docker-related attributes. If you
need to add, e.g., a custom env or symlink to a lang_image, you must use a
container_image target for this purpose: create a container_image target that
adds the custom env or symlink, and use it as the base for your lang_image target.
Please see <a href=#go_image-custom-base>go_image (custom base)</a> for an example.
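A hedged sketch of that pattern, with hypothetical names and a base assumed to come from a container_pull repository in your WORKSPACE:

```python
load("@io_bazel_rules_docker//container:container.bzl", "container_image")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")

# Layer a custom env and symlink on top of a pulled base image.
container_image(
    name = "app_base",
    base = "@go_base//image",  # hypothetical container_pull repository
    env = {"APP_ENV": "prod"},
    symlinks = {"/usr/local/bin/server": "/app/server"},
)

# Use that image as the base of the lang_image target.
go_image(
    name = "server_image",
    base = ":app_base",
    srcs = ["main.go"],
    importpath = "example.com/server",
)
```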
Add the following to your WORKSPACE file to add the external repositories:
```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    # Get copy paste instructions for the http_archive attributes from the
    # release notes at https://github.com/bazelbuild/rules_docker/releases
)

# OPTIONAL: Call this to override the default docker toolchain configuration.
# This call should be placed BEFORE the call to "container_repositories" below
# to actually override the default toolchain configuration.
# Note this is only required if you actually want to call
# docker_toolchain_configure with a custom attr; please read the toolchains
# docs in /toolchains/docker/ before blindly adding this to your WORKSPACE.
# BEGIN OPTIONAL segment:
load("@io_bazel_rules_docker//toolchains/docker:toolchain.bzl",
    docker_toolchain_configure = "toolchain_configure"
)

docker_toolchain_configure(
    name = "docker_config",
    # OPTIONAL: Bazel target for the build_tar tool, must be compatible with build_tar.py
    build_tar_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable build_tar target>",
    # OPTIONAL: Path to a directory which has a custom docker client config.json.
    # See https://docs.docker.com/engine/reference/commandline/cli/#configuration-files
    # for more details.
    client_config = "<enter Bazel label to your docker config.json here>",
    # OPTIONAL: Path to the docker binary.
    # Should be set explicitly for remote execution.
    docker_path = "<enter absolute path to the docker binary (in the remote exec env) here>",
    # OPTIONAL: Path to the gzip binary.
    gzip_path = "<enter absolute path to the gzip binary (in the remote exec env) here>",
    # OPTIONAL: Bazel target for the gzip tool.
    gzip_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable gzip target>",
    # OPTIONAL: Path to the xz binary.
    # Should be set explicitly for remote execution.
    xz_path = "<enter absolute path to the xz binary (in the remote exec env) here>",
    # OPTIONAL: Bazel target for the xz tool.
    # Either xz_path or xz_target should be set explicitly for remote execution.
    xz_target = "<enter absolute path (i.e., must start with repo name @...//:...) to an executable xz target>",
    # OPTIONAL: List of additional flags to pass to the docker command.
    docker_flags = [
        "--tls",
        "--log-level=info",
    ],
)
# End of OPTIONAL segment.

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_pull",
)

container_pull(
    name = "java_base",
    registry = "gcr.io",
    repository = "distroless/java",
    # 'tag' is also supported, but digest is encouraged for reproducibility.
    digest = "sha256:deadbeef",
)
```
If the repositories imported by container_repositories() have already been
imported (at a different version) by other rules called earlier in your
WORKSPACE, arbitrary errors might occur. If you get errors related to external
repositories, you will likely not be able to use container_repositories() and
will have to import all the required dependencies directly in your WORKSPACE
(see the most up-to-date implementation of container_repositories() for details).
A typical case is an error caused by a diamond dependency on six. If you get such an error, make sure to import rules_docker before other libraries, so that six can be patched properly.
See https://github.com/bazelbuild/rules_docker/issues/1022 for more details.
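A sketch of a WORKSPACE ordering that avoids the issue; the second ruleset shown is purely hypothetical:

```python
# Declare and initialize rules_docker's repositories first...
load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

# ...then rulesets that share transitive dependencies (such as six) can follow.
load("@some_other_ruleset//:deps.bzl", "some_other_ruleset_deps")  # hypothetical
some_other_ruleset_deps()
```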
Your project also needs a BUILD or BUILD.bazel file at the top level of the
workspace. This can be a blank file if necessary. Otherwise you might see an
error that looks like:

```
Unable to load package for //:WORKSPACE: BUILD file not found in any of the following directories.
```
If the Starlark transition that these rules apply to image targets causes problems in your build, you can disable it by adding the following to your .bazelrc:

```
build --@io_bazel_rules_docker//transitions:enable=false
```
Suppose you have a container_image target //my/image:helloworld:
```python
container_image(
    name = "helloworld",
    ...
)
```
You can load this into your local Docker client by running:
bazel run my/image:helloworld.
For the lang_image targets, this will also run the
container using docker run to maximize compatibility with lang_binary rules.
Arguments to this command are forwarded to docker, meaning the command
bazel run my/image:helloworld -- -p 8080:80 -- arg0
performs the following steps:
1. Loads the my/image:helloworld target into your local Docker client.
2. Starts the container, forwarding -p 8080:80 to docker run; arg0 is passed to the image entrypoint (see the docker run documentation for the available options).

You can suppress this behavior by passing the single flag: bazel run :foo -- --norun
Alternatively, you can build a docker load compatible bundle with:
bazel build my/image:helloworld.tar. This will produce a tar file
in your bazel-out directory that can be loaded into your local Docker
client. Building this target can be expensive for large images. You will
first need to query the output file location.
```shell
TARBALL_LOCATION=$(bazel cquery my/image:helloworld.tar \
    --output starlark \
    --starlark:expr="target.files.to_list()[0].path")
docker load -i $TARBALL_LOCATION
```
These work with container_image, container_bundle, and the
lang_image rules. For everything except container_bundle, the image
name will be bazel/my/image:helloworld. The container_bundle rule will
apply the tags you have specified.
You can use these rules to access private images using standard Docker authentication methods, e.g. to utilize the Google Container Registry. See here for authentication methods.
See also, once you've set up your docker client configuration:
here for an example of how to use container_pull with custom docker authentication credentials,
and here for an example of how to use container_push with custom docker authentication credentials.
A common request from folks using
container_push, container_bundle, or container_image is to
be able to vary the tag that is pushed or embedded. There are two options
at present for doing this.
The first option is to use stamping.
Stamping is enabled when bazel is run with --stamp.
This enables replacements in stamp-aware attributes.
A python format placeholder (e.g. {BUILD_USER})
is replaced by the value of the corresponding workspace-status variable.
```python
# A common pattern when users want to avoid trampling
# on each other's images during development.
container_push(
    name = "publish",
    format = "Docker",

    # Any of these components may have variables.
    registry = "gcr.io",
    repository = "my-project/my-image",

    # This will be replaced with the current user when built with --stamp
    tag = "{BUILD_USER}",
)
```
Rules that are sensitive to stamping can also be forced into stamp or non-stamp mode irrespective of the
--stamp flag to Bazel. Use the build_context_data rule to make a target that provides StampSettingInfo, and pass this to the build_context_data attribute.
The next natural question is: "Well what variables can I use?" This
option consumes the workspace-status variables Bazel defines in
bazel-out/stable-status.txt and bazel-out/volatile-status.txt.
Note that changes to the stable-status file cause a rebuild of the action, while volatile-status does not.
You can add more stamp variables via --workspace_status_command,
see the bazel docs.
A common example is to provide the current git SHA, with
--workspace_status_command="echo STABLE_GIT_SHA $(git rev-parse HEAD)"
That flag is typically passed in the .bazelrc file, see for example .bazelrc in kubernetes.
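With a STABLE_GIT_SHA variable defined that way, a stamp-aware attribute can reference it; a sketch with hypothetical target names:

```python
load("@io_bazel_rules_docker//container:container.bzl", "container_push")

container_push(
    name = "publish_by_sha",
    format = "Docker",
    image = ":app",  # hypothetical container_image target
    registry = "gcr.io",
    repository = "my-project/my-image",
    # Substituted from the workspace-status output when built with --stamp.
    tag = "{STABLE_GIT_SHA}",
)
```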
The second option is to employ Makefile-style variables:
```python
container_bundle(
    name = "bundle",
    images = {
        "gcr.io/$(project)/frontend:latest": "//frontend:image",
        "gcr.io/$(project)/backend:latest": "//backend:image",
    },
)
```
These variables are specified on the CLI using:
bazel build --define project=blah //path/to:bundle
Debugging lang_image rules

By default the lang_image rules use the distroless base runtime images,
which are optimized to be the minimal set of things your application needs
at runtime. That can make debugging these containers difficult because they
lack even a basic shell for exploring the filesystem.
To address this, we publish variants of the distroless runtime images tagged
:debug, which are the exact-same images, but with additions such as busybox
to make debugging easier.
For example (in this repo):
```
$ bazel run -c dbg testdata:go_image
...
INFO: Build completed successfully, 5 total actions
INFO: Running command line: bazel-bin/testdata/go_image
Loaded image ID: sha256:9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24
Tagging 9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24 as bazel/testdata:go_image
Hello, world!

$ docker run -ti --rm --entrypoint=sh bazel/testdata:go_image -c "echo Hello, busybox."
Hello, busybox.
```
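If you prefer to build against a :debug base explicitly, one option is to pull it yourself and pass it as the base; a sketch with hypothetical names (pin a digest rather than a tag in real builds):

```python
# WORKSPACE
load("@io_bazel_rules_docker//container:container.bzl", "container_pull")

container_pull(
    name = "go_debug_base",
    registry = "gcr.io",
    repository = "distroless/base",
    # The :debug variants add busybox; prefer a digest for reproducibility.
    tag = "debug",
)
```

In a BUILD file, pass base = "@go_debug_base//image" to the go_image target.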
```python
container_image(
    name = "app",
    # References container_pull from WORKSPACE (above)
    base = "@java_base//image",
    files = ["//java/com/example/app:Hello_deploy.jar"],
    cmd = ["Hello_deploy.jar"],
)
```
Hint: if you want to put files in specific directories inside the image,
use the <a href="https://docs.bazel.build/versions/master/be/pkg.html">pkg_tar rule</a>
to create the desired directory structure and pass that to container_image via the
tars attribute. Note you might need to set strip_prefix = "." or strip_prefix = "{some directory}"
in your rule for the files not to be flattened, as sketched below.
See <a href="https://github.com/bazelbuild/bazel/issues/2176">Bazel upstream issue 2176</a> and
<a href="https://github.com/bazelbuild/rules_docker/issues/317">rules_docker issue 317</a>
for more details.
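A hedged sketch of that hint, assuming pkg_tar comes from rules_pkg (names and paths are illustrative):

```python
load("@rules_pkg//pkg:tar.bzl", "pkg_tar")
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

# Put config.yaml at /etc/app/config.yaml inside the image instead of at the root.
pkg_tar(
    name = "app_config",
    srcs = ["config.yaml"],
    package_dir = "/etc/app",
    strip_prefix = ".",  # keep the layout rather than flattening the file
)

container_image(
    name = "app_with_config",
    base = "@java_base//image",
    tars = [":app_config"],
)
```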
To use cc_image, add the following to WORKSPACE:
```python
load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
```