Database connection in a running Microservice process


I have run into some issues with an open database connection in a Flask process.

Setup

The running Flask app:

import psycopg2, json
from flask_cors import CORS
from flask import Flask, Response, request

app = Flask(__name__)
CORS(app)
# A single connection, opened at import time and shared by every request handled by this worker.
db_connection = psycopg2.connect(host="some_host_address", port="some_port", database="some_db_name",
                                 user="some_user_name", password="some_password")

@app.get("/user")
def get_user_data():
    user = request.args.get('user')
    cursor = db_connection.cursor()
    # Parameterized query to avoid SQL injection.
    cursor.execute("SELECT * FROM user.data WHERE user = %s", (user,))
    user_data = cursor.fetchall()
    # Note: neither the cursor nor the connection is ever closed here.
    return Response(json.dumps(user_data), mimetype="application/json")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=4000)

Deployed via Gunicorn

python3 -m gunicorn -b :4000 --limit-request-line 0 --timeout 300 -w 4 example.flask_app:app

I think that not closing the DB connection on every call to the /user endpoint is what leads to the issues.

After the app has been running for some time, maybe 10-12 hours and several requests to the endpoint, the DB connection seems to go stale: I no longer get any data back from the DB.

If it would be better to open and close the DB connection with every request, could someone explain why that is necessary?

So, more or less: why can't we keep a long-running TCP connection open, given that TCP is what is used here to connect to Postgres?

PS: I started with this design because opening and closing the DB connection to Postgres is a performance concern; it obviously takes more time to open and close the connection with every request than to keep it open.
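For comparison, the per-request variant I am weighing this against would look roughly like this (same placeholder connection parameters as above):

import psycopg2, json
from flask import Flask, Response, request

app = Flask(__name__)

@app.get("/user")
def get_user_data():
    user = request.args.get('user')
    # Open a fresh connection for this request only.
    connection = psycopg2.connect(host="some_host_address", port="some_port",
                                  database="some_db_name", user="some_user_name",
                                  password="some_password")
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT * FROM user.data WHERE user = %s", (user,))
            user_data = cursor.fetchall()
    finally:
        # Close it again once the response data has been fetched.
        connection.close()
    return Response(json.dumps(user_data), mimetype="application/json")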

CodePudding user response:

Without seeing your exact setup and network topology it's hard to say why the connections fail after roughly 12 hours. A common cause is that idle TCP connections get dropped silently by firewalls, NAT devices, or the database server's idle timeout, and psycopg2 will not reconnect on its own.

You could write your own connection pooling layer that keeps long-running connections alive, reconnects when they drop, and avoids the startup-time hit on every request, as sketched below.
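You don't have to start entirely from scratch: psycopg2 ships a basic pool in psycopg2.pool. A minimal sketch of wiring it into the endpoint from the question, reusing the placeholder connection parameters (the pool sizes are arbitrary and the error handling is illustrative, not production-ready):

import psycopg2, psycopg2.pool, json
from flask import Flask, Response, request

app = Flask(__name__)

# One pool per worker process; each Gunicorn worker gets its own copy.
pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=1, maxconn=5,
    host="some_host_address", port="some_port", database="some_db_name",
    user="some_user_name", password="some_password")

@app.get("/user")
def get_user_data():
    user = request.args.get('user')
    # Borrow a connection from the pool instead of opening a new one.
    connection = pool.getconn()
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT * FROM user.data WHERE user = %s", (user,))
            user_data = cursor.fetchall()
    finally:
        # Hand the connection back so other requests can reuse it.
        # A real implementation would also detect broken connections and replace them.
        pool.putconn(connection)
    return Response(json.dumps(user_data), mimetype="application/json")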

You could also use a library like SQLAlchemy, which has connection pooling built in.
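For example, SQLAlchemy's engine-level pooling can test each connection with a lightweight ping before handing it out (pool_pre_ping) and proactively recycle old connections (pool_recycle), which addresses exactly the "stale after many hours" symptom. A rough sketch, where the URL, credentials, and pool settings are placeholders:

import json
from flask import Flask, Response, request
from sqlalchemy import create_engine, text

app = Flask(__name__)

# pool_pre_ping checks pooled connections before use and replaces dead ones;
# pool_recycle retires connections older than the given number of seconds.
engine = create_engine(
    "postgresql+psycopg2://some_user_name:some_password@some_host_address:5432/some_db_name",
    pool_size=5, pool_pre_ping=True, pool_recycle=1800)

@app.get("/user")
def get_user_data():
    user = request.args.get('user')
    with engine.connect() as connection:
        result = connection.execute(
            text("SELECT * FROM user.data WHERE user = :user"), {"user": user})
        user_data = [list(row) for row in result]
    return Response(json.dumps(user_data), mimetype="application/json")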
