This is an ingress controller for Kubernetes — the open-source container deployment, scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.
This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to discover additional details about the cluster that Kubernetes provisioned on top of AWS. This information is used to manage AWS resources for each ingress object in the cluster.
Related controller flags:

- `--cluster-local-domain=cluster.local`
- `--nlb-cross-zone=false`
- `--nlb-zone-affinity=availability_zone_affinity` (see also NLB attributes and NLB zonal DNS affinity)
- `--cert-filter-tag=key=value`
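A minimal sketch, assuming a standard Deployment of the controller (the name, namespace, labels, image, and the `--cert-filter-tag` value are illustrative), of how these flags are passed as container arguments:

```yaml
# Sketch: passing the flags above to the controller container.
# Deployment name, namespace, labels, and image are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-ingress-aws-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      application: kube-ingress-aws-controller
  template:
    metadata:
      labels:
        application: kube-ingress-aws-controller
    spec:
      containers:
        - name: controller
          image: registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest
          args:
            - --cluster-local-domain=cluster.local
            - --nlb-cross-zone=false
            - --nlb-zone-affinity=availability_zone_affinity
            - --cert-filter-tag=environment=production  # example key=value tag filter
```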
Version v0.15.0 removes support for the deprecated Ingress versions `extensions/v1beta1` and `networking.k8s.io/v1beta1`.
Version v0.14.0 makes the `--target-access-mode` flag required, to make upgrading users aware of the issue. New deployments of the controller should use `--target-access-mode=HostPort` or `--target-access-mode=AWSCNI`. To upgrade from <v0.12.17, use `--target-access-mode=Legacy`: it is the same as `HostPort` but does not set the target type and relies on CloudFormation to use `instance` as the default value. Note that changing later from `--target-access-mode=Legacy` will change the target type in CloudFormation and trigger target group recreation and downtime. To upgrade from >=v0.12.17 when `--target-access-mode` is not set, use the explicit `--target-access-mode=HostPort`.
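For a new deployment this is a single extra argument on the controller container, as in the sketch above. A minimal fragment (the AWSCNI comment refers to the AWS CNI mode mentioned in the v0.13.0 note below and is an assumption about your networking setup):

```yaml
# Sketch: explicitly choosing the target access mode (required since v0.14.0).
args:
  - --target-access-mode=HostPort  # sets target type "instance"; use AWSCNI for AWS CNI mode
```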
Version v0.13.0 uses Ingress version v1 as the default. You can downgrade to an earlier Ingress version via a flag. You will also need to allow the access via RBAC; see more information in <v0.11.0 to >=0.11.0 below. Please see the release note and the linked issue: this update can cause 30s of downtime if you don't use AWS CNI mode. Please upgrade to >=v0.14.0.
Version v0.12.0 changes Network Load Balancer type handling if the Application Load Balancer type feature is requested. See the Load Balancer types notes for details.
Version v0.11.0 changes the default `apiVersion` used for fetching/updating ingresses from `extensions/v1beta1` to `networking.k8s.io/v1beta1`. For this to work the controller needs permissions to `list` `ingresses` and to `update` and `patch` `ingresses/status` in the `networking.k8s.io` `apiGroup`. See the deployment example. To fall back to the old behavior, you can set the apiVersion via the `--ingress-api-version` flag. The value must be `extensions/v1beta1`, `networking.k8s.io/v1beta1` (default), or `networking.k8s.io/v1`.
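A minimal RBAC sketch granting the permissions described above; the ClusterRole name is an assumption, and the linked deployment example remains the authoritative reference:

```yaml
# Sketch: RBAC rules matching the permissions listed above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller  # illustrative name
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update", "patch"]
```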
Version v0.9.0 changes the internal flag parsing library to kingpin. This means flags are now defined with `--` (two dashes) instead of a single dash. You need to change all flags accordingly, e.g. `-stack-termination-protection` -> `--stack-termination-protection`, before running v0.9.0 of the controller.
Version v0.8.0 adds a certificate verification check to automatically ignore self-signed certificates and certificates from internal CAs. The IAM role used by the controller now needs the `acm:GetCertificate` permission. The `acm:DescribeCertificate` permission is no longer needed and can be removed from the role.
Version v0.7.0 deletes the annotation `zalando.org/aws-load-balancer-ssl-cert-domain`, which we no longer consider a feature since we have SNI-enabled ALBs.
Version v0.6.0 introduced support for Multiple TLS Certificates per ALB (SNI). When upgrading, your ALBs will automatically be aggregated into a single ALB with multiple certificates configured. It also adds support for attaching single EC2 instances and multiple AutoScalingGroups to the ALBs; therefore you must ensure you have the correct instance filter defined before upgrading. The default filter is `tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node`; see How it works for more information on how to configure this.
Version v0.5.0 introduced support for both internet-facing and internal load balancers. For this change we had to change the naming of the CloudFormation stacks created by the controller. To upgrade from v0.4.* to v0.5.0 no changes are needed, but because of the stack naming change, migrating back down to a v0.4.* version will be disruptive, as the older version is unable to manage the stacks with the new naming scheme. Deleting the stacks manually will allow for a working downgrade.
In versions before v0.4.0 we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.

In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

- `kubernetes:application=kube-ingress-aws-controller`
- `kubernetes.io/cluster/<cluster-id>=owned`

Additionally, you must ensure that the instance where the ingress controller is running has the clusterID tag `kubernetes.io/cluster/<cluster-id>=owned` set (it was `ClusterID=<cluster-id>` before v0.4.0).
Overview of the configuration that can be set via Ingress annotations.
| Name | Value | Default |
|---|---|---|
| `alb.ingress.kubernetes.io/ip-address-type` | `ipv4` \| `dualstack` | `ipv4` |
| `zalando.org/aws-load-balancer-ssl-cert` | string | N/A |
| `zalando.org/aws-load-balancer-scheme` | `internal` \| `internet-facing` | `internet-facing` |
| `zalando.org/aws-load-balancer-shared` | `true` \| `false` | `true` |
| `zalando.org/aws-load-balancer-security-group` | string | N/A |
| `zalando.org/aws-load-balancer-ssl-policy` | string | `ELBSecurityPolicy-2016-08` |
| `zalando.org/aws-load-balancer-type` | `nlb` \| `alb` | `alb` |
| `zalando.org/aws-load-balancer-http2` | `true` \| `false` | `true` |
| `zalando.org/aws-waf-web-acl-id` | string | N/A |
| `kubernetes.io/ingress.class` | string | N/A |
The defaults can also be configured globally via a flag on the controller.
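As a sketch, an Ingress combining a few of these annotations (the hostname, service name, and certificate ARN are illustrative):

```yaml
# Sketch: an Ingress using some of the annotations from the table above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    zalando.org/aws-load-balancer-scheme: internal
    zalando.org/aws-load-balancer-type: nlb
    zalando.org/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:123456789012:certificate/example  # illustrative ARN
spec:
  rules:
    - host: my-app.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```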
The controller supports both [Application Load Balancers][alb] and [Network Load Balancers][nlb]. Below is an overview of which features can be used with the individual Load Balancer types.
| Feature | Application Load Balancer | Network Load Balancer |
|---|---|---|
| HTTPS | :heavy_check_mark: | :heavy_check_mark: |
| HTTP | :heavy_check_mark: | :heavy_check_mark: `--nlb-http-enabled` |
| HTTP -> HTTPS redirect | :heavy_check_mark: `--redirect-http-to-https` | :heavy_multiplication_x: |
| Cross Zone Load Balancing | :heavy_check_mark: (only option) | :heavy_check_mark: `--nlb-cross-zone` |
| Zone Affinity | :heavy_multiplication_x: | :heavy_check_mark: `--nlb-zone-affinity` |
| Dualstack support | :heavy_check_mark: `--ip-addr-type=dualstack` | :heavy_multiplication_x: |
| Idle Timeout | :heavy_check_mark: `--idle-connection-timeout` | :heavy_multiplication_x: |
| Custom Security Group | :heavy_check_mark: | :heavy_multiplication_x: |
| Web Application Firewall (WAF) | :heavy_check_mark: | :heavy_multiplication_x: |
| HTTP/2 Support | :white_check_mark: | (not relevant) |
To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network (`--load-balancer-type="network"`) and a Custom Security Group (`zalando.org/aws-load-balancer-security-group`) or Web Application Firewall (`zalando.org/aws-waf-web-acl-id`) annotation is present, the controller configures an Application Load Balancer instead. If the `zalando.org/aws-load-balancer-type: nlb` annotation is also present, the controller ignores the configuration and logs an error.
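For example, the following annotation combination is contradictory when the default load balancer type is Network (a sketch; the WAF ACL id is illustrative): the WAF annotation can only be satisfied by an ALB, while the type annotation demands an NLB, so the controller logs an error and ignores the configuration:

```yaml
# Sketch: a contradictory combination that the controller rejects
# when running with --load-balancer-type="network".
metadata:
  annotations:
    zalando.org/aws-load-balancer-type: nlb     # demands an NLB...
    zalando.org/aws-waf-web-acl-id: my-web-acl  # ...but WAF is ALB-only
```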
SecurityGroup auto detection needs the following AWS tags on the SecurityGroup:

- `kubernetes.io/cluster/<cluster-id>=owned`
- `kubernetes:application=<controller-id>`, where controller-id defaults to `kube-ingress-aws-controller` and can be set by the flag `--controller-id=<my-ctrl-id>`.

AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.

In case you want to attach/detach single EC2 instances to the ALB TargetGroup, they must have the same `<cluster-id>` tag set as on the running kube-ingress-aws-controller. Normally this would be `kubernetes.io/cluster/<cluster-id>=owned`.
This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone running Kubernetes. Jump down to the Quickstart to try it out, and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.
As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.
We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.
The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.
We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper's also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.
This controller continuously polls the API server to check for ingress resources. It runs an infinite loop: in each cycle it creates load balancers for new ingress resources and deletes the load balancers for obsolete/removed ingress resources. This is achieved using AWS CloudFormation. For more details, check our CloudFormation Documentation.
The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.
During the startup phase, EC2 filters are constructed as follows:

- If the `CUSTOM_FILTERS` environment variable is set, it is used to generate the filters that are later used to fetch instances from EC2.
- If the `CUSTOM_FILTERS` environment variable is not set or could not be parsed, the default filters are `tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node`, where `<cluster-id>` is determined from the EC2 tags of the instance on which the Ingress Controller pod is started.

`CUSTOM_FILTERS` is a list of filters separated by spaces. Each filter has the form `name=value`, where name can be a `tag:` or `tag-key:` prefixed expression, as would be recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.

For example, `tag-key=test` will filter instances that have a tag named `test`, ignoring the value.
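A sketch of setting `CUSTOM_FILTERS` on the controller container (the filter values are illustrative; they mirror the default filters described above):

```yaml
# Sketch: overriding the default EC2 instance filters via CUSTOM_FILTERS.
env:
  - name: CUSTOM_FILTERS
    # Space-separated filters; each has the form name=value, with
    # comma-separated lists allowed in the value.
    value: "tag:kubernetes.io/cluster/my-cluster=owned tag-key=k8s.io/role/node"
```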