I have an S3 bucket trigger set up to call a Lambda function when a new file is saved to the bucket. The Lambda function parses the event data and sends it in a POST request to an API running on an EC2 instance. When the API receives that POST request, it starts a file conversion pipeline that can take a minute or so to run. The Lambda seems to time out after 3 seconds, so it keeps retrying and then reporting 'timed out' because it never receives a response from the API call. The actual conversion process kicks off and completes just fine, so that isn't the issue, but I didn't think this was good practice. Is there a fix for this, or some other way I should be configuring this?
My Lambda function is:
import json
import requests

def lambda_handler(event, context):
    # Get the bucket name and object key from the event
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # Encode the data as JSON
    data = {
        'bucket_name': bucket_name,
        'object_key': object_key
    }
    json_data = json.dumps(data)

    # Make the POST request
    url = "http://<domain>/api/new/file"
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.post(url, headers=headers, data=json_data)

    # Print the response status code
    print(f'response code: {response.status_code}')
    # return response.status_code
    print(data)
    return data
The API on the EC2 instance is:
import os

from flask import Flask, request

app = Flask(__name__)

@app.route('/api/new/file', methods=['POST'])
def handle_request():
    content_type = request.headers.get('Content-Type')
    if content_type == 'application/json':
        data = request.json
        file_name = os.path.basename(data['object_key'])
        try:
            <my_long_function>(file_name)
        except Exception as e:
            print(e)
            return "False"
        return "True"
    else:
        return 'Content-Type not supported!'
CodePudding user response:
That's because Lambda runs for 3 seconds by default. All you have to do is increase the timeout value, which can be set as high as 15 minutes (900 seconds), under Lambda > Configuration > General configuration > Timeout.
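If you manage the function outside the console, the same setting can be changed with the AWS CLI; a minimal example (the function name below is a placeholder):

```shell
# Raise the Lambda timeout to 2 minutes (maximum allowed is 900 seconds)
aws lambda update-function-configuration \
    --function-name <your-function-name> \
    --timeout 120
```

Separately, it's worth passing an explicit `timeout=` argument to `requests.post` in the Lambda, so that a hung API call fails fast with a clear exception instead of silently running the function up to its limit.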