While going through the Postgres architecture, one of the things mentioned was that Postgres has a connection limit of 500 (which can be modified). And to fetch any data from the database, we first need to make a connection to it. So what happens if 10k simultaneous requests come to the DB? How do the requests map to the connection limit, since we have a limit of 500? Do we need to increase the limit, create more instances of Postgres, or is concurrency in play?
CodePudding user response:
If there are 10000 concurrent statements running on a single database, any hardware will be overloaded. You just cannot do that.
Even 500 is far too many concurrent requests, so that value is too high for max_connections (or, to be precise, for the number of concurrently active sessions).
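To see the value currently in effect, you can query it from any client. Here is a minimal sketch in Python, assuming psycopg 3 and a hypothetical connection string:

```python
import psycopg  # psycopg 3

# Hypothetical DSN -- adjust to your environment.
with psycopg.connect("dbname=mydb user=postgres host=localhost") as conn:
    # SHOW reports the setting currently in effect.
    print(conn.execute("SHOW max_connections;").fetchone()[0])
```

Note that changing max_connections (e.g. with ALTER SYSTEM SET) only takes effect after a server restart.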
The good thing is that you don't have to do that. You use a connection pool that acts as a proxy between the application and the database. If your database statements are sufficiently short, you can easily handle thousands of concurrent application users with a few dozen database connections. This protects the database from getting overloaded and avoids opening database connections frequently, which is expensive.
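As a minimal sketch of the idea, assuming Python with the psycopg_pool package (the DSN, pool sizes, and table are illustrative assumptions; a standalone pooler such as PgBouncer plays the same proxy role outside the application):

```python
from concurrent.futures import ThreadPoolExecutor

from psycopg_pool import ConnectionPool

# A few dozen real database connections serve far more application threads.
# The DSN is hypothetical -- adjust to your environment.
pool = ConnectionPool(
    "dbname=mydb user=app host=localhost",
    min_size=5,
    max_size=20,  # stays far below max_connections
    open=True,    # open connections eagerly at startup
)

def handle_request(user_id: int):
    # Borrow a connection only for the duration of the statement;
    # it returns to the pool when the block exits.
    with pool.connection() as conn:
        return conn.execute(
            "SELECT name FROM users WHERE id = %s", (user_id,)
        ).fetchone()

# A thousand logical requests are funneled through at most 20 connections.
with ThreadPoolExecutor(max_workers=200) as executor:
    results = list(executor.map(handle_request, range(1000)))
```

No matter how many application threads run, at most max_size sessions ever reach the server.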
If you try to open more database connections than max_connections allows, you will get an error ("FATAL: sorry, too many clients already"). If more processes request a database connection from the pool than its limit allows, those sessions simply wait until a connection becomes available. Yet another point in favor of using a connection pool!
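Continuing the sketch above, psycopg_pool exposes this waiting behavior through its timeout parameter and PoolTimeout exception (the one-second value is purely illustrative):

```python
from psycopg_pool import ConnectionPool, PoolTimeout

# Same hypothetical DSN; wait at most 1 second for a free connection.
pool = ConnectionPool(
    "dbname=mydb user=app host=localhost",
    min_size=1,
    max_size=2,
    timeout=1.0,
    open=True,
)

def guarded_query(sql: str):
    try:
        with pool.connection() as conn:
            return conn.execute(sql).fetchone()
    except PoolTimeout:
        # Every connection stayed busy for the whole timeout window:
        # shed the request instead of piling more sessions onto the server.
        return None
```

The database itself never sees the excess demand; only the pool's waiting queue grows.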