
Jenkins Pipeline AWS Service Integration Plugin
This Jenkins plugin provides AWS API interaction, supporting operations on S3, CloudFront, CloudFormation, and other AWS services. Authorization is handled through the withAWS step, which supports several credential types as well as role switching, simplifying the integration of Jenkins with AWS. The plugin provides a rich set of steps, such as s3Upload, cfInvalidate, and cfnUpdate, so that AWS resource management and deployment can be automated from a Jenkins pipeline without writing AWS SDK code.
This plugin adds Jenkins pipeline steps to interact with the AWS API.
See the changelog for release information.
This plugin is not optimized for setups with a primary node and multiple agents: only steps that touch the workspace are executed on the agents, while everything else runs on the master.
For the best experience, make sure that the primary node and the agents have the same IAM permissions and networking capabilities.
By default, credentials lookup is done on the master node for all steps.
To enable credentials lookup on the current node, enable "Retrieve credentials from node" in the Jenkins global configuration. This setting applies globally and restricts all access to the master's credentials.
The withAWS step provides authorization for the nested steps. You can provide region and profile information or let Jenkins assume a role in another or the same AWS account.
You can mix all parameters in one withAWS block.
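For instance, a single block can combine region information with role assumption. A minimal sketch (the role name, account, and duration are illustrative values):
withAWS(region:'eu-west-1', role:'admin', roleAccount:'123456789012', duration: 900) {
    // nested steps run with the temporary credentials of the assumed role in eu-west-1
}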
Set region information (note that region and endpointUrl are mutually exclusive):
withAWS(region:'eu-west-1') {
    // do something
}
Use provided endpointUrl (endpointUrl is optional, however, region and endpointUrl are mutually exclusive):
withAWS(endpointUrl:'https://minio.mycompany.com', credentials:'nameOfSystemCredentials', federatedUserId:"${submitter}@${releaseVersion}") {
    // do something
}
Use Jenkins UsernamePassword credentials information (Username: AccessKeyId, Password: SecretAccessKey):
withAWS(credentials:'IDofSystemCredentials') {
    // do something
}
Use Jenkins AWS credentials information (AWS Access Key: AccessKeyId, AWS Secret Key: SecretAccessKey):
withAWS(credentials:'IDofAwsCredentials') {
    // do something
}
Use profile information from ~/.aws/config:
withAWS(profile:'myProfile') {
    // do something
}
Assume a role (account is optional and defaults to the current account; externalId, roleSessionName, and policy are optional; duration is optional and specifies the maximum time in seconds the session may persist, defaulting to 3600):
withAWS(role:'admin', roleAccount:'123456789012', externalId: 'my-external-id', policy: '{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1","Effect":"Deny","Action":"s3:DeleteObject","Resource":"*"}]}', duration: 3600, roleSessionName: 'my-custom-session-name') {
    // do something
}
Assume a federated user id (federatedUserId is optional; if specified, it generates a set of temporary credentials and pushes the federated user id into CloudTrail for auditing. duration is optional and specifies the maximum time in seconds the session may persist, defaulting to 3600):
withAWS(region:'eu-central-1', credentials:'nameOfSystemCredentials', federatedUserId:"${submitter}@${releaseVersion}", duration: 3600) {
    // do something
}
Authenticate with a SAML assertion (fetched from your company IdP) by assuming a role:
withAWS(role: 'myRole', roleAccount: '123456789012', principalArn: 'arn:aws:iam::123456789012:saml-provider/test', samlAssertion: 'base64SAML', region:'eu-west-1') {
    // do something
}
Authenticate by retrieving credentials from the node in scope:
node('myNode') {
    // Credentials will be fetched from this node
    withAWS(role: 'myRole', roleAccount: '123456789012', region:'eu-west-1', useNode: true) {
        // do something
    }
}
When you use Jenkins Declarative Pipelines you can also use withAWS in an options block:
options {
    withAWS(profile:'myProfile')
}
stages {
    ...
}
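Put together, a complete declarative pipeline using withAWS as an option could look like the following (a minimal sketch; the profile name, stage, and nested step are placeholders):
pipeline {
    agent any
    options {
        withAWS(profile:'myProfile')
    }
    stages {
        stage('Deploy') {
            steps {
                s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
            }
        }
    }
}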
Print the current AWS identity information to the log.
The step returns an object with the following fields: account (the AWS account number), user (the user id), and arn (the ARN of the current identity).
def identity = awsIdentity()
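The returned fields can be used in subsequent steps, for example (a sketch; the log message is illustrative):
def identity = awsIdentity()
echo "Running as ${identity.arn} in account ${identity.account}"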
Invalidate the given paths in a CloudFront distribution.
cfInvalidate(distribution:'someDistributionId', paths:['/*'])
cfInvalidate(distribution:'someDistributionId', paths:['/*'], waitForCompletion: true)
All s3* steps take optional pathStyleAccessEnabled and payloadSigningEnabled boolean parameters.
s3Upload(pathStyleAccessEnabled: true, payloadSigningEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Copy(pathStyleAccessEnabled: true, fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')
s3Delete(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
s3Download(pathStyleAccessEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
exists = s3DoesObjectExist(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
files = s3FindFiles(pathStyleAccessEnabled: true, bucket:'my-bucket')
Upload a file/folder from the workspace (or a String) to an S3 bucket.
If the file parameter denotes a directory, the complete directory including all subfolders will be uploaded.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/')
Another way to use it is with include/exclude patterns which are applied in the specified subdirectory (workingDir).
The option accepts a comma-separated list of patterns.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg,**/*.jpg')
User metadata can be added to uploaded files:
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', metadatas:['Key:SomeValue','Another:Value'])
A specific Cache-Control header can be added to uploaded files:
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', cacheControl:'public,max-age=31536000')
A specific content encoding can be added to uploaded files:
s3Upload(file:'file.txt', bucket:'my-bucket', contentEncoding: 'gzip')
A specific content type and content disposition can be added to uploaded files:
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.ttf', workingDir:'dist', contentType:'application/x-font-ttf', contentDisposition:'attachment')
Canned ACLs can be added to upload requests.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', acl:'PublicRead')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/', acl:'BucketOwnerFullControl')
A Server Side Encryption Algorithm can be added to upload requests.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', sseAlgorithm:'AES256')
A KMS alias or KMS id can be used to encrypt the uploaded file or directory at rest.
s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: 'alias/foo')
s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: '8e1d420d-bf94-4a15-a07a-8ad965abb30f')
s3Upload(file: 'bar-dir', bucket: 'my-bucket', path: 'path/to/target', kmsId: 'alias/bar')
A redirect location can be added to uploaded files.
s3Upload(file: 'file.txt', bucket: 'my-bucket', redirectLocation: '/redirect')
An S3 object can also be created directly from a string: the file's content is the provided text argument.
s3Upload(path: 'file.txt', bucket: 'my-bucket', text: 'Some Text Content')
s3Upload(path: 'path/to/targetFolder/file.txt', bucket: 'my-bucket', text: 'Some Text Content')
Tags can be added to uploaded files.
s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: '[tag1:value1, tag2:value2]')

def tags=[:]
tags["tag1"]="value1"
tags["tag2"]="value2"
s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: tags.toString())
Log messages can be made less verbose. Disable verbose output when you feel the logs are excessive, but note that you lose visibility into which files have been uploaded to S3.
s3Upload(path: 'source/path/', bucket: 'my-bucket', verbose: false)
Download a file/folder from S3 to the local workspace.
Set the optional parameter force to true to overwrite an existing file in the workspace.
If the path ends with a / the complete virtual directory will be downloaded.
s3Download(file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
s3Download(file:'targetFolder/', bucket:'my-bucket', path:'path/to/sourceFolder/', force:true)
Copy a file between S3 buckets.
s3Copy(fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')
Delete a file/folder from S3. If the path ends in a "/", then the path will be interpreted to be a folder, and all of its contents will be removed.
s3Delete(bucket:'my-bucket', path:'path/to/source/file.txt')
s3Delete(bucket:'my-bucket', path:'path/to/sourceFolder/')
Check whether an object exists in an S3 bucket.
exists = s3DoesObjectExist(bucket:'my-bucket', path:'path/to/source/file.txt')
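Since the step returns a boolean, it can guard later pipeline logic, for example (a sketch; the bucket and key names are placeholders):
if (s3DoesObjectExist(bucket:'my-bucket', path:'config/settings.json')) {
    s3Download(file:'settings.json', bucket:'my-bucket', path:'config/settings.json', force:true)
}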
This provides a way to query the files/folders in the S3 bucket, analogous to the findFiles step provided by "pipeline-utility-steps-plugin".
If specified, the path limits the scope of the operation to that folder only.
The glob parameter tells s3FindFiles what to look for. This can be a file name, a full path to a file, or a standard glob ("*", "*.ext", "path/**/file.ext", etc.).
If you do not specify path, then it will default to the root of the bucket.
The path is assumed to be a folder; you do not need to end it with a "/", but it is okay if you do.
The path property of the results will be relative to this value.
This works by enumerating every file/folder in the S3 bucket under path and then performing glob matching.
When possible, you should use path to limit the search space for efficiency purposes.
If you do not specify glob, then it will default to "*".
By default, this will return both files and folders.
To only return files, set the onlyFiles parameter to true.
files = s3FindFiles(bucket:'my-bucket')
files = s3FindFiles(bucket:'my-bucket', glob:'path/to/targetFolder/file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'*.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/', glob:'**/file.ext')
s3FindFiles returns an array of FileWrapper objects identical to those returned by findFiles.
Each FileWrapper object has the following properties:
name: the filename portion of the path (for "path/to/my/file.ext", this would be "file.ext")
path: the full path of the file, relative to the path specified (for path="path/to/", this property of the file "path/to/my/file.ext" would be "my/file.ext")
directory: true if this is a directory; false otherwise
length: the length of the file (this is always "0" for directories)
lastModified: the last modification timestamp, in milliseconds since the Unix epoch (this is always "0" for directories)
When used in a string context, a FileWrapper object returns the value of its path property.
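These properties can be used directly in pipeline logic, for example (a sketch; the bucket name, path, and glob are placeholders):
files = s3FindFiles(bucket:'my-bucket', path:'builds/', glob:'**/*.jar')
for (file in files) {
    if (!file.directory) {
        echo "${file.path} (${file.length} bytes, last modified ${file.lastModified})"
    }
}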
Presigns the given bucket/key combination and returns a URL. Defaults to a duration of 1 minute, using GET.
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey')
The duration can be overridden:
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', durationInSeconds: 300) //5 minutes
The method can also be overridden:
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', httpMethod: 'POST')
Validates the given CloudFormation template.
def response = cfnValidate(file:'template.yaml')
echo "template description: ${response.description}"
Create or update the given CloudFormation stack using the given template from the workspace.
You can specify an optional list of parameters, either as key/value pairs or as a map.
You can also specify a list keepParams of parameters that keep their previous value on stack updates.
Using timeoutInMinutes you can specify the amount of time that can pass before the stack status becomes CREATE_FAILED and the stack gets rolled back.
Due to limitations in the AWS API, this only applies to stack creation.
If you have many parameters, you can specify a paramsFile containing them. The format is either a standard JSON file as used with the CLI, or a YAML file for the cfn-params command line utility.
Additionally you can specify a list of tags that are set on the stack and all resources created by CloudFormation.
The step returns the outputs of the stack as a map. It also contains special values prefixed with jenkins, such as jenkinsStackUpdateStatus, which indicates whether the stack was modified.
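A typical invocation might look like the following (a minimal sketch; the stack name, parameter names, tag, and output key are illustrative, and EndpointURL assumes the template declares such an output):
def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', params:['InstanceType=t2.micro'], keepParams:['Version'], tags:['TagName=Value'], timeoutInMinutes:10)
echo "deployed endpoint: ${outputs['EndpointURL']}" // hypothetical output key declared by the template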