I have a bucket whose objects are named by commit ID. I want to pass these commit IDs to my CodePipeline and use them in Slack messages.
I am trying to trigger CodePipeline when a zip file is uploaded to S3. However, as far as I can see in the documentation, the S3 source action can only trigger on a static object key, and I want to trigger on any file name:
https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html
I am dealing with a use case where the uploaded S3 objects have dynamic object keys. How do I handle this situation?
I have read this question, so I know I could use S3 event notifications with a Lambda and have the Lambda start the pipeline, but that still does not work for me, because I also need to pass the zip file to CodeBuild.
CodePudding user response:
TL;DR: Have the Lambda record the ID in `commit_id.txt` and add it to the bundle.

I understand you want to execute a pipeline when an arbitrary object, say `a5bf8c1.zip`, is added to an S3 path, say `MyPipelineBucket/commits/`. The pipeline has an S3 source, say `MyPipelineBucket/source.zip`. Your pipeline executions also require the file name value (`a5bf8c1`).
- Set up S3 Event Notifications on the bucket, with object key name filtering on the `MyPipelineBucket/commits/` prefix.
- Set a Lambda function as the notification destination.
- The Lambda receives the Commit ID in the event notification payload as the triggering object's file name. Write it to a `commit_id.txt` file. Using the SDK, get the `MyPipelineBucket/commits/a5bf8c1.zip` bundle from S3, add `commit_id.txt` to the bundle, and put the new bundle to `MyPipelineBucket/source.zip`. This will trigger a pipeline execution.
- In your pipeline, your CodeBuild commands now have access to the Commit ID. For instance, you can set it as an environment variable:
```shell
COMMIT_ID=$(cat commit_id.txt)
echo $COMMIT_ID # -> a5bf8c1
```
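For reference, the Lambda could look something like the sketch below (Python, with `boto3` assumed available in the Lambda runtime). The bucket and key names (`MyPipelineBucket`, `commits/`, `source.zip`) just follow the example values above; adjust them to your setup.

```python
import io
import os
import zipfile

def add_commit_id(bundle_bytes: bytes, commit_id: str) -> bytes:
    """Return a new zip with commit_id.txt appended to the bundle."""
    buf = io.BytesIO(bundle_bytes)
    # Open in append mode so the existing entries are preserved.
    with zipfile.ZipFile(buf, "a") as zf:
        zf.writestr("commit_id.txt", commit_id)
    return buf.getvalue()

def handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")

    # The event notification payload carries the triggering object's key.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]          # e.g. MyPipelineBucket
    key = record["object"]["key"]              # e.g. commits/a5bf8c1.zip

    # The Commit ID is the file name without the .zip extension.
    commit_id = os.path.splitext(os.path.basename(key))[0]

    # Fetch the uploaded bundle, add commit_id.txt, and overwrite the
    # pipeline's static source object -- this starts the execution.
    bundle = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    s3.put_object(Bucket=bucket, Key="source.zip",
                  Body=add_commit_id(bundle, commit_id))
```

Because the repackaging logic is a pure function of the bundle bytes, you can test `add_commit_id` locally without touching AWS.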