
Kubernetes Ingress Controller for AWS

This is an ingress controller for Kubernetes — the open-source container deployment, scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.


This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to find additional details about the cluster provisioned by Kubernetes on top of AWS. This information is used to manage AWS resources for each ingress object in the cluster.

Features

  • Uses CloudFormation to guarantee consistent state
  • Automatic discovery of SSL certificates
  • Automatic forwarding of requests to all Worker Nodes, even with auto scaling
  • Automatic cleanup of unnecessary managed resources
  • Support for both [Application Load Balancers][alb] and [Network Load Balancers][nlb].
  • Support for internet-facing and internal load balancers
  • Support for ignoring cluster-internal ingresses that only have --cluster-local-domain=cluster.local domains
  • Support for denying traffic for internal domains.
  • Support for multiple Auto Scaling Groups
  • Support for instances that are not part of Auto Scaling Group
  • Support for SSLPolicy, settable as a default and per ingress
  • Support for CloudWatch Alarm configuration
  • Can be used in clusters created by Kops, see our deployment guide for Kops
  • Support for Multiple TLS Certificates per ALB (SNI)
  • Support for AWS WAF and WAFv2
  • Support for AWS CNI pod direct access
  • Support for Kubernetes CRD RouteGroup
  • Support for zone aware traffic (defaults to cross zone traffic and no zone affinity)
    • enable and disable cross zone traffic: --nlb-cross-zone=false
    • set zone affinity to resolve DNS to same zone: --nlb-zone-affinity=availability_zone_affinity, see also NLB attributes and NLB zonal DNS affinity
  • Support for explicitly enabling certificates by using certificate tags: --cert-filter-tag=key=value
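As a sketch of how these options fit together, a hypothetical controller invocation enabling a few of the flags listed above might look like the following (the flag values, and the environment=production tag, are illustrative assumptions, not recommendations):

```
kube-ingress-aws-controller \
  --nlb-cross-zone=false \
  --nlb-zone-affinity=availability_zone_affinity \
  --cert-filter-tag=environment=production
```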

Upgrade

<v0.15.0 to >=v0.15.0

Version v0.15.0 removes support for deprecated Ingress versions extensions/v1beta1 and networking.k8s.io/v1beta1.

<v0.14.0 to >=v0.14.0

Version v0.14.0 makes the --target-access-mode flag required, to make upgrading users aware of the issue.

New deployments of the controller should use --target-access-mode=HostPort or --target-access-mode=AWSCNI.

To upgrade from <v0.12.17, use --target-access-mode=Legacy - it is the same as HostPort but does not set the target type, relying on CloudFormation to use instance as the default value.

Note that changing later from --target-access-mode=Legacy will change the target type in CloudFormation and trigger target group recreation and downtime.

To upgrade from >=v0.12.17 when --target-access-mode is not set, use the explicit --target-access-mode=HostPort.
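In a Kubernetes Deployment manifest, the flag is passed as a container argument. A minimal sketch for a new installation might look like this (the container name and image are placeholder assumptions):

```yaml
containers:
  - name: kube-ingress-aws-controller
    image: registry.example.com/kube-ingress-aws-controller:latest  # placeholder image
    args:
      - --target-access-mode=HostPort  # or AWSCNI for pod direct access
```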

<v0.13.0 to >=v0.13.0

Version v0.13.0 uses Ingress version v1 as the default. You can downgrade the ingress version to earlier versions via a flag. You will also need to allow access via RBAC; see more information in <v0.11.0 to >=v0.11.0 below.

<v0.12.17 to <v0.14.0

Please see the release notes and the issue: this update can cause 30s of downtime if you don't use AWS CNI mode.

Please upgrade to >=v0.14.0.

<v0.12.0 to <=v0.12.16

Version v0.12.0 changes Network Load Balancer type handling if Application Load Balancer type feature is requested. See Load Balancers types notes for details.

<v0.11.0 to >=v0.11.0

Version v0.11.0 changes the default apiVersion used for fetching/updating ingresses from extensions/v1beta1 to networking.k8s.io/v1beta1. For this to work, the controller needs permissions to list ingresses and to update and patch ingresses/status in the networking.k8s.io apiGroup. See the deployment example. To fall back to the old behavior, you can set the apiVersion via the --ingress-api-version flag. The value must be extensions/v1beta1, networking.k8s.io/v1beta1 (default), or networking.k8s.io/v1.
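The required access can be sketched as a ClusterRole fragment; the verbs follow the paragraph above, while the metadata name and the exact rule layout are assumptions (see the project's deployment example for the authoritative manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller  # placeholder name
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update", "patch"]
```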

<v0.9.0 to >=v0.9.0

Version v0.9.0 changes the internal flag-parsing library to kingpin; this means flags are now defined with -- (two dashes) instead of a single dash. You need to change all flags accordingly, e.g. -stack-termination-protection -> --stack-termination-protection, before running v0.9.0 of the controller.

<v0.8.0 to >=v0.8.0

Version v0.8.0 added a certificate verification check to automatically ignore self-signed certificates and certificates from internal CAs. The IAM role used by the controller now needs the acm:GetCertificate permission. The acm:DescribeCertificate permission is no longer needed and can be removed from the role.
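In IAM policy terms, the newly required permission can be sketched as the following policy fragment (the wildcard Resource is an assumption; scope it down to your certificates as appropriate):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "acm:GetCertificate",
      "Resource": "*"
    }
  ]
}
```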

<v0.7.0 to >=v0.7.0

Version v0.7.0 removes the annotation zalando.org/aws-load-balancer-ssl-cert-domain, which we no longer consider a feature now that we have SNI-enabled ALBs.

<v0.6.0 to >=v0.6.0

Version v0.6.0 introduced support for Multiple TLS Certificates per ALB (SNI). When upgrading, your ALBs will automatically be aggregated into a single ALB with multiple certificates configured. It also adds support for attaching single EC2 instances and multiple AutoScalingGroups to the ALBs; therefore you must ensure you have the correct instance filter defined before upgrading. The default filter is tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node; see How it works for more information on how to configure this.

<v0.5.0 to >=v0.5.0

Version v0.5.0 introduced support for both internet-facing and internal load balancers. For this change we had to change the naming of the CloudFormation stacks created by the controller. To upgrade from v0.4.* to v0.5.0 no changes are needed, but because of the naming change, migrating back down to a v0.4.* version will be disruptive, as the older version is unable to manage the stacks with the new naming scheme. Deleting the stacks manually will allow for a working downgrade.

<v0.4.0 to >=v0.4.0

In versions before v0.4.0, we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.

In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

  • kubernetes:application=kube-ingress-aws-controller
  • kubernetes.io/cluster/<cluster-id>=owned

Additionally, you must ensure that the instance where the ingress controller is running has the cluster ID tag kubernetes.io/cluster/<cluster-id>=owned set (this was ClusterID=<cluster-id> before v0.4.0).
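One way to apply these tags is with the AWS CLI; in this sketch the security group ID, instance ID, and my-cluster cluster ID are placeholders you must substitute:

```
aws ec2 create-tags --resources sg-0123456789abcdef0 \
  --tags Key=kubernetes:application,Value=kube-ingress-aws-controller \
         Key=kubernetes.io/cluster/my-cluster,Value=owned

aws ec2 create-tags --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned
```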

Ingress annotations

Overview of configuration which can be set via Ingress annotations.

Annotations

| Name | Value | Default |
| --- | --- | --- |
| alb.ingress.kubernetes.io/ip-address-type | ipv4 \| dualstack | ipv4 |
| zalando.org/aws-load-balancer-ssl-cert | string | N/A |
| zalando.org/aws-load-balancer-scheme | internal \| internet-facing | internet-facing |
| zalando.org/aws-load-balancer-shared | true \| false | true |
| zalando.org/aws-load-balancer-security-group | string | N/A |
| zalando.org/aws-load-balancer-ssl-policy | string | ELBSecurityPolicy-2016-08 |
| zalando.org/aws-load-balancer-type | nlb \| alb | alb |
| zalando.org/aws-load-balancer-http2 | true \| false | true |
| zalando.org/aws-waf-web-acl-id | string | N/A |
| kubernetes.io/ingress.class | string | N/A |

The defaults can also be configured globally via a flag on the controller.
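A hypothetical Ingress using some of these annotations might look like the following; the resource name, host, and backend service are placeholder assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example            # placeholder
  annotations:
    zalando.org/aws-load-balancer-scheme: internal
    zalando.org/aws-load-balancer-type: nlb
spec:
  rules:
    - host: app.example.org  # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service  # placeholder
                port:
                  number: 80
```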

Load Balancers types

The controller supports both [Application Load Balancers][alb] and [Network Load Balancers][nlb]. Below is an overview of which features can be used with the individual Load Balancer types.

| Feature | Application Load Balancer | Network Load Balancer |
| --- | --- | --- |
| HTTPS | :heavy_check_mark: | :heavy_check_mark: |
| HTTP | :heavy_check_mark: | :heavy_check_mark: --nlb-http-enabled |
| HTTP -> HTTPS redirect | :heavy_check_mark: --redirect-http-to-https | :heavy_multiplication_x: |
| Cross Zone Load Balancing | :heavy_check_mark: (only option) | :heavy_check_mark: --nlb-cross-zone |
| Zone Affinity | :heavy_multiplication_x: | :heavy_check_mark: --nlb-zone-affinity |
| Dualstack support | :heavy_check_mark: --ip-addr-type=dualstack | :heavy_multiplication_x: |
| Idle Timeout | :heavy_check_mark: --idle-connection-timeout | :heavy_multiplication_x: |
| Custom Security Group | :heavy_check_mark: | :heavy_multiplication_x: |
| Web Application Firewall (WAF) | :heavy_check_mark: | :heavy_multiplication_x: |
| HTTP/2 Support | :white_check_mark: | (not relevant) |

To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network (--load-balancer-type="network") and a Custom Security Group (zalando.org/aws-load-balancer-security-group) or Web Application Firewall (zalando.org/aws-waf-web-acl-id) annotation is present, the controller configures an Application Load Balancer instead. If the zalando.org/aws-load-balancer-type: nlb annotation is also present, the controller ignores the configuration and logs an error.

AWS Tags

SecurityGroup auto detection needs the following AWS Tags on the SecurityGroup:

  • kubernetes.io/cluster/<cluster-id>=owned
  • kubernetes:application=<controller-id>, controller-id defaults to kube-ingress-aws-controller and can be set by flag --controller-id=<my-ctrl-id>.

AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.

In case you want to attach/detach single EC2 instances to the ALB TargetGroup, they must have the same <cluster-id> set as the running kube-ingress-aws-controller. Normally this would be kubernetes.io/cluster/<cluster-id>=owned.

Development Status

This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone running Kubernetes. Jump down to the Quickstart to try it out, and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.

As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.

We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.

Why We Created This Ingress Controller

The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.

We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper is also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.

How It Works

This controller continuously polls the API server to check for ingress resources. It runs an infinite loop: in each cycle it creates load balancers for new ingress resources and deletes the load balancers for obsolete/removed ingress resources.

This is achieved using AWS CloudFormation. For more details check our CloudFormation Documentation

The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.

During the startup phase, EC2 filters are constructed as follows:

  • If CUSTOM_FILTERS environment variable is set, it is used to generate filters that are later used to fetch instances from EC2.
  • If CUSTOM_FILTERS environment variable is not set or could not be parsed, then default filters are tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node where <cluster-id> is determined from EC2 tags of instance on which Ingress Controller pod is started.

CUSTOM_FILTERS is a list of filters separated by spaces. Each filter has the form name=value, where name can be a tag: or tag-key: prefixed expression, as recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.

For example:

  • tag-key=test will filter instances that have a tag named test, ignoring the value.
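For instance, setting the variable to replicate the documented default filters explicitly can be sketched as follows; my-cluster is a placeholder cluster ID:

```shell
# Replicates the documented default filters; "my-cluster" stands in for <cluster-id>.
export CUSTOM_FILTERS="tag:kubernetes.io/cluster/my-cluster=owned tag-key=k8s.io/role/node"
echo "$CUSTOM_FILTERS"
```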
