In our team's infrastructure we have a Databricks job that sends data to an SQS queue, which triggers a Lambda function. The Databricks job runs once every 30 minutes. A week ago the Databricks job kept failing, so it was not sending any data and the Lambda function was never triggered. Is there any way to set up an alert so that I get notified if the Lambda function is not triggered for a period of 2 hours?
When I searched for a solution, I only found ways to get an alert when a Lambda fails or when a specific log pattern appears in its CloudWatch logs, but nothing that covers the scenario above.
CodePudding user response:
You can create a CloudWatch alarm on the Invocations metric for that Lambda and configure it so that, if there are no invocations over a timespan of two hours, it goes into the ALARM state. Note that Lambda publishes no Invocations data points at all while the function is never called, so the alarm should treat missing data as breaching; otherwise it will sit in INSUFFICIENT_DATA instead of alarming.
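A minimal boto3 sketch of such an alarm, assuming the Lambda is called my-consumer-lambda (the function and alarm names are placeholders): Invocations is summed over two consecutive one-hour periods, and missing data counts as breaching.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm fires when the Lambda has had zero invocations in each of two
# consecutive one-hour periods, i.e. it has not been triggered for ~2 hours.
cloudwatch.put_metric_alarm(
    AlarmName="databricks-consumer-no-invocations",  # placeholder name
    AlarmDescription="Lambda not triggered for 2 hours",
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-consumer-lambda"}],  # placeholder
    Statistic="Sum",
    Period=3600,              # one-hour evaluation period
    EvaluationPeriods=2,      # two periods back to back = 2 hours
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    # Lambda emits no Invocations data points while it is never called,
    # so missing data has to count as breaching or the alarm never fires.
    TreatMissingData="breaching",
    # AlarmActions=[topic_arn],  # SNS topic ARN for notifications, see the sketch below
)
```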
If you wish to be notified, you can also configure the CloudWatch alarm to publish to an SNS topic and subscribe your email address to that topic so that it sends you an email (for example). SNS can also fan out to other targets, such as a Lambda that sends mail through SES, but a plain email subscription is the simplest route.
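For the notification side, a sketch of creating the topic and the email subscription (topic name and address are placeholders); the resulting topic ARN is what goes into the AlarmActions parameter of the put_metric_alarm call above:

```python
import boto3

sns = boto3.client("sns")

# Topic the alarm publishes to when it enters the ALARM state.
topic_arn = sns.create_topic(Name="lambda-not-triggered")["TopicArn"]  # placeholder name

# Plain email subscription on the topic; AWS sends a confirmation mail
# to this address that must be accepted before deliveries start.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="you@example.com",  # placeholder address
)

print(topic_arn)  # pass this ARN in AlarmActions when creating the alarm
```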