I am developing a Python Lambda function.
The documentation suggests that we can download files like this:
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')
I have a bucket and a zip file inside the bucket. So what do I put as the object name when there's no folder?
I tried these:
s3.download_file('testunzipping','DataPump_10000838.zip','DataPump_10000838.zip')
s3.download_file('testunzipping','DataPump_10000838.zip')
But I get a time-out error in both cases.
"errorMessage": "2021-10-17T14:51:34.889Z 4257cbc1-2dd0-4fb9-b147-0dffce1f97a1 Task timed out after 3.06 seconds"
However, this works just fine:
lst = s3.list_objects(Bucket='testunzipping')['Contents']
There doesn't seem to be a permissions issue either, as the Lambda's execution role has a policy granting it the s3:GetObject permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::testunzipping"
            ]
        }
    ]
}
The role also has the AmazonS3FullAccess managed policy attached.
What is the issue?
CodePudding user response:
Your task is timing out because the default Lambda execution timeout is 3 seconds, and the download_file call is taking longer than that.
Go into the function's general configuration settings and increase the timeout to 10 seconds, which should be plenty of time to download a 17 KB file.
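If you'd rather not use the console, the same change can be made with the AWS CLI's update-function-configuration command (the function name below is a placeholder for your own):
aws lambda update-function-configuration --function-name my-unzipping-function --timeout 10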
With that fixed, you still won't be able to download the file, as you'll get an [Errno 13] Permission denied error.
In Lambda functions, you must download files to the /tmp directory, as that is the only writable path in the Lambda execution environment.
s3.download_file('testunzipping','DataPump_10000838.zip','/tmp/DataPump_10000838.zip')
The /tmp directory also has a default size of 512 MB, so keep that in mind when downloading larger objects.
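Putting it all together, here is a minimal sketch of a handler that downloads the archive to /tmp and extracts it there. The bucket name and key come from the question; the handler name and the extraction directory are just illustrative:
import os
import zipfile

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = 'testunzipping'
    key = 'DataPump_10000838.zip'
    # /tmp is the only writable path in the Lambda environment
    local_path = os.path.join('/tmp', key)

    s3.download_file(bucket, key, local_path)

    # Extract next to the archive; remember /tmp is limited to 512 MB by default
    extract_dir = '/tmp/extracted'
    with zipfile.ZipFile(local_path) as zf:
        zf.extractall(extract_dir)

    return {'extracted': os.listdir(extract_dir)}
Note that /tmp persists between invocations of a warm Lambda container, so if you download large files on every invocation you may want to clean them up at the end of the handler.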