In our stack, we use many separate Lambda functions (each function is an HTTP or WS handler), and each of those functions imports utils that are quite large. Those shared utils also handle the connections to external services (cache and database), so every time we call a function there's an init phase that lasts between 1 and 2 seconds. Since different functions have separate runtimes (even though we connect to the same database and cache everywhere), the init phase happens separately for every function. Is it possible (perhaps with Lambda layers) to share the "init phase" among many functions if they require the same dependency? For example, based on the JS code below:
import utils from "../common"
const db = utils.get_db()
export const handler_A = async (event) => {
// do sth with db here
}
export const handler_B = async (event) => {
// do sth with db here
}
Let's say we have the execution order as follows: handler_A ----> handler_A ----> handler_B
In that case, the second execution of handler_A should be able to reuse db from the first execution, but that won't work for handler_B.
Is it possible to reuse the "init phase", or in other words, keep the warm start for handler_B if handler_A was used a couple of seconds before handler_B? Any help will be highly appreciated!
CodePudding user response:
As far as my knowledge goes, each Lambda function runs in its own Firecracker microVM, so even though the code is in the same zip file, the execution environments of different functions might not even be on the same VM (AWS does some optimization depending on the nature of the workload). So the answer would be: no, not possible, since AWS treats each Lambda function (and each function version) as a completely isolated environment.
From the docs:
Execution environments are isolated from one another using several container-like technologies built into the Linux kernel, along with AWS proprietary isolation technologies.
They also have their own copy of the code and their own /tmp directory, so nothing is really shared.
The best you can do is probably check why it takes so long to set up the connection in the first place (2 seconds is more than average). Sometimes simply increasing the Lambda's memory size to >1 GB also drastically reduces start-up times, although that might not work for your case, of course.
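One way to narrow down where those seconds go is to time each step of the init at module scope so it shows up in the CloudWatch logs. A minimal sketch, assuming (as in the question) that utils exposes a get_db() helper; the timing labels are just illustrative, and the top-level await assumes the function is packaged as an ES module:
import utils from "../common"
// Time the cold-start init so the logs show which step dominates
console.time("init:db")
const db = await utils.get_db() // assuming get_db() returns a promise; drop the await if it is synchronous
console.timeEnd("init:db")
export const handler_A = async (event) => {
// warm invocations reuse db, so only cold starts pay the init cost
}
If most of the time turns out to be loading the large ../common module rather than the network handshake, bundling or trimming those utils will often help more than a bigger memory size.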
CodePudding user response:
You wouldn't be able to share network connections, because discrete functions run in separate Firecracker microVMs, so their invocations (and thus their network connections) may not even be on the same underlying host.
You've got two options here:
- Provisioned concurrency. This will keep your functions "warm" and thus avoid the connection latency during cold starts (see the CLI sketch just after this list).
- Mono-lambda API pattern. This will combine your functions into one Lambda function, and you'd be responsible for routing inside your handler code.
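For the first option, provisioned concurrency is configured per published function version or alias (not on $LATEST). A minimal sketch using the AWS CLI; the function name, alias, and count below are placeholders:
aws lambda put-provisioned-concurrency-config \
  --function-name handler_A \
  --qualifier live \
  --provisioned-concurrent-executions 5
Keep in mind that provisioned concurrency is billed for as long as it is configured, whether or not the function is invoked.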
The mono-lambda API pattern has advantages as well as disadvantages; you can read more about that here.
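For the second option, a minimal sketch of what the routing inside a single handler could look like, reusing the db from the question; the route keys and the routes map are just placeholders, assuming an API Gateway HTTP API (payload v2) event that exposes event.routeKey:
import utils from "../common"
// Initialised once per execution environment; every route below reuses it,
// so invocations that used to hit handler_A and handler_B now share the same warm connection
const db = utils.get_db()
const routes = {
  "GET /a": async (event) => {
    // former handler_A logic, using db
  },
  "POST /b": async (event) => {
    // former handler_B logic, using db
  },
}
export const handler = async (event) => {
  const route = routes[event.routeKey]
  if (!route) return { statusCode: 404, body: "Not found" }
  return route(event)
}
The trade-off is coarser granularity: the whole API then shares one function configuration, one IAM role, and one concurrency pool.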