I have a Lambda function on AWS that is storing its logs in AWS CloudWatch. I want to store ALL of these logs in S3 using the CLI. My Linux server is already configured with the AWS CLI and has all the necessary permissions to access AWS resources.
I want the logs that are displayed in my AWS CloudWatch console to end up in an S3 bucket.
Once these logs are stored in some location on S3, I can easily load them into a SQL table in Redshift.
Any idea how to bring these logs to S3?
Thanks for reading.
CodePudding user response:
You can use boto3 in a Lambda function to export the logs to S3: you need to write a Lambda function that subscribes to the CloudWatch log group and is triggered on CloudWatch Logs events.
AWS Doc:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3Export.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
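A minimal sketch of such a handler in Python/boto3 is below, assuming the function has been attached to the log group via a subscription filter; the bucket name and the object key layout are placeholders you would replace with your own:

import base64
import gzip
import json
import time

import boto3

s3 = boto3.client("s3")

# Placeholder: replace with your own destination bucket
TARGET_BUCKET = "my-log-archive-bucket"

def lambda_handler(event, context):
    # CloudWatch Logs delivers subscription data base64-encoded and gzip-compressed
    compressed = base64.b64decode(event["awslogs"]["data"])
    payload = json.loads(gzip.decompress(compressed))

    # Write the decoded log events to S3, keyed by log group, stream and time
    key = "{}/{}/{}.json".format(
        payload["logGroup"].strip("/"),
        payload["logStream"],
        int(time.time() * 1000),
    )
    s3.put_object(
        Bucket=TARGET_BUCKET,
        Key=key,
        Body=json.dumps(payload["logEvents"]).encode("utf-8"),
    )

Once the objects are in S3, they can be loaded into Redshift with a COPY command.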
CodePudding user response:
Your question does not specify whether you want to export the logs one time or on a regular basis, so there are two options for exporting the CloudWatch logs to an S3 location:
- Create an export task (one-time). You can create a task with the command below:
aws logs create-export-task \
--profile {PROFILE_NAME} \
--task-name {TASK_NAME} \
--log-group-name {CW_LOG_GROUP_NAME} \
--from {START_TIME_IN_MILLS} \
--to {END_TIME_IN_MILLS} \
--destination {BUCKET_NAME} \
--destination-prefix {BUCKET_DESTINATION_PREFIX}
You can refer to the AWS documentation for the details of this command.
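Note that --from and --to are epoch timestamps in milliseconds, the destination bucket needs a policy that allows CloudWatch Logs to write to it, and the export runs asynchronously. A rough boto3 equivalent that computes the time window and polls the task status could look like this (log group, bucket and prefix are placeholders):

import time
from datetime import datetime, timedelta, timezone

import boto3

logs = boto3.client("logs")

# Placeholders: replace with your own log group, bucket and prefix
LOG_GROUP = "/aws/lambda/my-function"
BUCKET = "my-log-archive-bucket"
PREFIX = "lambda-logs"

# --from / --to are epoch timestamps in milliseconds
end = datetime.now(timezone.utc)
start = end - timedelta(days=1)
from_ms = int(start.timestamp() * 1000)
to_ms = int(end.timestamp() * 1000)

task = logs.create_export_task(
    taskName="export-" + str(to_ms),
    logGroupName=LOG_GROUP,
    fromTime=from_ms,
    to=to_ms,
    destination=BUCKET,
    destinationPrefix=PREFIX,
)

# Export tasks run asynchronously; poll until this one finishes
while True:
    status = logs.describe_export_tasks(taskId=task["taskId"])["exportTasks"][0]["status"]["code"]
    if status in ("COMPLETED", "CANCELLED", "FAILED"):
        break
    time.sleep(5)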
- A Lambda to write the logs to S3 (event-based, via a CloudWatch Logs subscription):
const zlib = require('zlib');
const { S3 } = require('aws-sdk');
const s3 = new S3();
exports.lambdaHandler = async (event, context) => {
    // get the log content from the event (base64-encoded, gzip-compressed)
    const payload = JSON.parse(zlib.gunzipSync(Buffer.from(event.awslogs.data, 'base64')).toString());
    // write the log events to an S3 location (bucket name taken from an environment variable)
    await s3.putObject({ Bucket: process.env.BUCKET_NAME, Key: `${payload.logGroup}/${Date.now()}.json`, Body: JSON.stringify(payload.logEvents) }).promise();
};
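The handler above only fires once the log group is actually subscribed to the function: CloudWatch Logs needs permission to invoke the Lambda, and the log group needs a subscription filter pointing at it. A rough boto3 sketch of that wiring, with all names and ARNs as placeholders:

import boto3

logs = boto3.client("logs")
lmb = boto3.client("lambda")

# Placeholders: replace with your own names and ARNs
LOG_GROUP = "/aws/lambda/my-function"
LOG_GROUP_ARN = "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/my-function:*"
FUNCTION_NAME = "logs-to-s3"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:logs-to-s3"

# Allow CloudWatch Logs to invoke the exporting function
lmb.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="cloudwatch-logs-invoke",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=LOG_GROUP_ARN,
)

# Subscribe the log group to the function; an empty filter pattern forwards every log event
logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="all-events-to-s3",
    filterPattern="",
    destinationArn=FUNCTION_ARN,
)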
My recommendation would be to push the logs to an ELK stack or an equivalent logging system (Splunk, Loggly, etc.) for better analysis and visualization of the data.