This is an ingress controller for Kubernetes — the open-source container deployment, scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.
This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to discover additional details about the cluster that Kubernetes provisioned on top of AWS. This information is used to manage AWS resources for each ingress object in the cluster.
Version v0.15.0 removes support for the deprecated Ingress versions
extensions/v1beta1 and networking.k8s.io/v1beta1. It also introduces new flags:

- --cluster-local-domain=cluster.local for cluster-local domains
- --nlb-cross-zone=false
- --nlb-zone-affinity=availability_zone_affinity, see also NLB attributes and NLB zonal DNS affinity
- --cert-filter-tag=key=value
Version v0.14.0 makes the --target-access-mode flag required to make upgrading users aware of the issue.
New deployments of the controller should use --target-access-mode=HostPort or --target-access-mode=AWSCNI.
To upgrade from <v0.12.17, use --target-access-mode=Legacy: it is the same as HostPort but does not set the target type and
relies on CloudFormation to use instance as the default value.
Note that changing away from --target-access-mode=Legacy later will change the target type in CloudFormation and trigger target group recreation and downtime.
To upgrade from >=v0.12.17 when --target-access-mode is not set, use the explicit --target-access-mode=HostPort.
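For a new deployment, the flag is passed as a container argument. A minimal sketch (the container name is illustrative; image and other fields are omitted):

```yaml
# Excerpt from the controller's Deployment spec; other fields omitted.
containers:
  - name: kube-ingress-aws-controller    # illustrative name
    args:
      - --target-access-mode=HostPort    # or AWSCNI; Legacy only when upgrading from <v0.12.17
```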
Version v0.13.0 uses Ingress version v1 as the default. You can downgrade
to an earlier Ingress version via the --ingress-api-version flag. You will also need to
allow the access via RBAC; see more information in <v0.11.0 to >=0.11.0 below.
Please see the release note and the linked issue: this update can cause 30s of downtime if you don't use AWS CNI mode.
Please upgrade to >=v0.14.0.
Version v0.12.0 changes how the Network Load Balancer type is handled when an Application Load Balancer-only feature is requested. See the Load Balancer types notes for details.
Version v0.11.0 changes the default apiVersion used for fetching/updating
ingresses from extensions/v1beta1 to networking.k8s.io/v1beta1. For this to
work the controller needs to have permissions to list ingresses and
update, patch ingresses/status from the networking.k8s.io apiGroup.
See the deployment example. To fall back to
the old behavior you can set the apiVersion via the --ingress-api-version
flag. The value must be extensions/v1beta1, networking.k8s.io/v1beta1
(default), or networking.k8s.io/v1.
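A minimal sketch of the required RBAC additions (the ClusterRole name is illustrative; see the deployment example for the full manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller    # illustrative name
rules:
  # The controller lists ingresses from the networking.k8s.io apiGroup...
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  # ...and updates/patches their status.
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update", "patch"]
```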
Version v0.9.0 changes the internal flag parsing library to
kingpin; this means flags are now defined with -- (two dashes)
instead of a single dash. You need to change all flags accordingly, e.g.
-stack-termination-protection -> --stack-termination-protection, before
running v0.9.0 of the controller.
Version v0.8.0 added a certificate verification check to automatically ignore
self-signed certificates and certificates from internal CAs. The IAM role used by the controller
now needs the acm:GetCertificate permission. The acm:DescribeCertificate permission
is no longer needed and can be removed from the role.
Version v0.7.0 deletes the annotation
zalando.org/aws-load-balancer-ssl-cert-domain, which we no longer
consider a feature since we have SNI-enabled ALBs.
Version v0.6.0 introduced support for Multiple TLS Certificates per ALB
(SNI). When upgrading, your ALBs will automatically be aggregated into a single
ALB with multiple certificates configured.
It also adds support for attaching single EC2 instances and multiple
AutoScalingGroups to the ALBs; therefore you must ensure you have the correct
instance filter defined before upgrading. The default filter is
tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node; see
How it works for more information on how to configure this.
Version v0.5.0 introduced support for both internet-facing and internal
load balancers. For this change we had to change the naming of the
CloudFormation stacks created by the controller. To upgrade from v0.4.* to
v0.5.0 no changes are needed, but because of the stack naming change,
migrating back down to a v0.4.* version will be disruptive, as the older
version is unable to manage the stacks with the new naming scheme. Deleting the stacks
manually will allow for a working downgrade.
In versions before v0.4.0 we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.
In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

- kubernetes:application=kube-ingress-aws-controller
- kubernetes.io/cluster/<cluster-id>=owned

Additionally you must ensure that the instance where the ingress controller is
running has the cluster ID tag kubernetes.io/cluster/<cluster-id>=owned set
(was ClusterID=<cluster-id> before v0.4.0).
Overview of configuration which can be set via Ingress annotations.
| Name | Value | Default |
|---|---|---|
| alb.ingress.kubernetes.io/ip-address-type | ipv4 \| dualstack | ipv4 |
| zalando.org/aws-load-balancer-ssl-cert | string | N/A |
| zalando.org/aws-load-balancer-scheme | internal \| internet-facing | internet-facing |
| zalando.org/aws-load-balancer-shared | true \| false | true |
| zalando.org/aws-load-balancer-security-group | string | N/A |
| zalando.org/aws-load-balancer-ssl-policy | string | ELBSecurityPolicy-2016-08 |
| zalando.org/aws-load-balancer-type | nlb \| alb | alb |
| zalando.org/aws-load-balancer-http2 | true \| false | true |
| zalando.org/aws-waf-web-acl-id | string | N/A |
| kubernetes.io/ingress.class | string | N/A |
The defaults can also be configured globally via a flag on the controller.
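For example, an Ingress that requests an internal NLB might look like this (a minimal sketch; the name, host, and service are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                      # illustrative name
  annotations:
    zalando.org/aws-load-balancer-type: nlb         # provision a Network Load Balancer
    zalando.org/aws-load-balancer-scheme: internal  # not reachable from the Internet
spec:
  rules:
    - host: myapp.example.org                       # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                        # illustrative service
                port:
                  number: 80
```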
The controller supports both [Application Load Balancers][alb] and [Network Load Balancers][nlb]. Below is an overview of which features can be used with the individual Load Balancer types.
| Feature | Application Load Balancer | Network Load Balancer |
|---|---|---|
| HTTPS | :heavy_check_mark: | :heavy_check_mark: |
| HTTP | :heavy_check_mark: | :heavy_check_mark: --nlb-http-enabled |
| HTTP -> HTTPS redirect | :heavy_check_mark: --redirect-http-to-https | :heavy_multiplication_x: |
| Cross Zone Load Balancing | :heavy_check_mark: (only option) | :heavy_check_mark: --nlb-cross-zone |
| Zone Affinity | :heavy_multiplication_x: | :heavy_check_mark: --nlb-zone-affinity |
| Dualstack support | :heavy_check_mark: --ip-addr-type=dualstack | :heavy_multiplication_x: |
| Idle Timeout | :heavy_check_mark: --idle-connection-timeout | :heavy_multiplication_x: |
| Custom Security Group | :heavy_check_mark: | :heavy_multiplication_x: |
| Web Application Firewall (WAF) | :heavy_check_mark: | :heavy_multiplication_x: |
| HTTP/2 Support | :heavy_check_mark: | (not relevant) |
To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network
(--load-balancer-type="network") and a Custom Security Group (zalando.org/aws-load-balancer-security-group) or
Web Application Firewall (zalando.org/aws-waf-web-acl-id) annotation is present, the controller configures an Application Load Balancer instead.
If the zalando.org/aws-load-balancer-type: nlb annotation is also present, the controller ignores the configuration and logs an error.
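A minimal sketch of this fallback (the name, security group ID, and backend are illustrative): with --load-balancer-type="network" set on the controller, the following Ingress still gets an Application Load Balancer because it requests a custom security group:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                                          # illustrative name
  annotations:
    # A custom security group is ALB-only, so the controller provisions an ALB
    # even though the controller-wide default type is network.
    zalando.org/aws-load-balancer-security-group: sg-0123456789abcdef0  # illustrative ID
    # Adding zalando.org/aws-load-balancer-type: nlb here as well would make the
    # controller ignore the configuration and log an error.
spec:
  defaultBackend:
    service:
      name: my-app                                                      # illustrative service
      port:
        number: 80
```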
SecurityGroup auto detection needs the following AWS Tags on the SecurityGroup:
- kubernetes.io/cluster/<cluster-id>=owned
- kubernetes:application=<controller-id>, where controller-id defaults to kube-ingress-aws-controller and can be set by the flag --controller-id=<my-ctrl-id>.

AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.
In case you want to attach/detach single EC2 instances to the ALB
TargetGroup, they must have the same <cluster-id> tag set as the
running kube-ingress-aws-controller. Normally this would be
kubernetes.io/cluster/<cluster-id>=owned.
This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone running Kubernetes. Jump down to the Quickstart to try it out—and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.
As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.
We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.
The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.
We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper is also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.
This controller continuously polls the API server to check for ingress resources. It runs in an infinite loop: on each cycle it creates load balancers for new ingress resources and deletes the load balancers for obsolete/removed ingress resources.
This is achieved using AWS CloudFormation. For more details, check our CloudFormation Documentation.
The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.
During the startup phase, EC2 filters are constructed as follows:

- If the CUSTOM_FILTERS environment variable is set, it is used to generate the filters that are later used to fetch instances from EC2.
- If the CUSTOM_FILTERS environment variable is not set or could not be parsed, the default filters are tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node, where <cluster-id> is determined from the EC2 tags of the instance on which the ingress controller pod is started.

CUSTOM_FILTERS is a list of filters separated by spaces. Each filter has the form name=value, where name can be a tag: or tag-key: prefixed expression, as recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.
For example:
tag-key=test will filter instances that have a tag named test, ignoring the value.
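To set CUSTOM_FILTERS, pass it as an environment variable on the controller container; a minimal sketch (the filter values and container name are illustrative):

```yaml
# Excerpt from the controller's Deployment spec; other fields omitted.
containers:
  - name: kube-ingress-aws-controller    # illustrative name
    env:
      - name: CUSTOM_FILTERS
        # Space-separated filters; each is name=value with a tag: or tag-key: name.
        value: "tag:kubernetes.io/cluster/my-cluster=owned tag-key=k8s.io/role/node"
```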
