How do I analyze what is hanging my Flask application


I have a Python Flask web application that uses a PostgreSQL database.

When I put load on my application, it stops responding. This only happens when I request pages that use the database.

My setup:

  • nginx frontend (although in my test environment, skipping this tier doesn't make a difference), connecting via UNIX socket to:
  • gunicorn application server with 3 child processes, connecting via UNIX socket to:
  • pgbouncer, connection pooler for PostgreSQL, connecting via TCP/IP to:
    • I need pgbouncer, because SQLAlchemy has connection pooling per process. If I don't use pgbouncer, my database gets overloaded with connection requests very quickly. (A sketch of how the application side connects through pgbouncer is shown right after this list.)
  • postgresql 13, the database server.
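
(For reference, the application side talks to pgbouncer as if it were a normal Postgres server. A minimal sketch with placeholder credentials, shown over TCP on pgbouncer's default port 6432; in my setup the host is actually a UNIX socket directory.)

from sqlalchemy import create_engine

# The engine points at pgbouncer (default port 6432), not directly at PostgreSQL;
# pgbouncer then forwards the connections to PostgreSQL over TCP/IP.
# For a UNIX socket, point "host" at the socket directory instead, e.g.
# "postgresql+psycopg2://app_user:secret@/app_db?host=/var/run/pgbouncer"
engine = create_engine("postgresql+psycopg2://app_user:secret@127.0.0.1:6432/app_db")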

I have a test environment on Debian Linux (with nginx) and on my iMac, and the application hang occurs on both machines.

I put load on the application with hey, an HTTP load generator. I use the defaults, which generate 200 requests with 50 concurrent workers. The test page issues two queries to the database.
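
(For anyone without hey installed, roughly the same load can be generated in plain Python. A sketch assuming the requests package is available; the URL is a placeholder for my test page.)

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://127.0.0.1:8000/test-page"   # placeholder for the page that hits the database

def fetch(_):
    start = time.monotonic()
    r = requests.get(URL, timeout=60)
    return r.status_code, time.monotonic() - start

# hey's defaults: 200 requests total, 50 concurrent workers
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(fetch, range(200)))

codes = [c for c, _ in results]
print("responses:", {c: codes.count(c) for c in set(codes)})
print("slowest request: %.2fs" % max(d for _, d in results))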

When I run my load test, I see gunicorn hitting worker timeouts. It kills the timed-out workers and starts new ones. Eventually (after a lot of timeouts) everything is fine again. To narrow this down, I lowered PostgreSQL's statement_timeout setting: first to 30 seconds, later to 15 seconds. Gunicorn's worker timeouts now happen more quickly. (I don't understand this behaviour; why would gunicorn recycle a worker when a query times out?)
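
(For reference, statement_timeout can be set in postgresql.conf, per role, or per connection. A sketch of the per-connection variant from the SQLAlchemy side, with a placeholder URL; note that the options trick may not pass through pgbouncer, so setting it on the server side is the safer route there.)

from sqlalchemy import create_engine

# Ask libpq/psycopg2 to set statement_timeout (in milliseconds) on every
# connection this engine opens; 15000 ms matches the 15 seconds mentioned above.
engine = create_engine(
    "postgresql+psycopg2://app_user:secret@127.0.0.1:6432/app_db",
    connect_args={"options": "-c statement_timeout=15000"},
)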

When I look at pgbouncer with the SHOW CLIENTS; command, I see some waiting clients. I think this is a hint of the problem: my web application is waiting on pgbouncer, and pgbouncer seems to be waiting for Postgres. When the waiting lines are gone, the application behaves normally again (trying a few requests). Also, when I restart the gunicorn process, everything goes back to normal.
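
(For reference, SHOW CLIENTS; runs against pgbouncer's admin console, e.g. via psql -p 6432 pgbouncer, or from Python. A sketch with placeholder connection details; the user must be listed in pgbouncer's admin_users or stats_users.)

import psycopg2

# Connect to pgbouncer's special "pgbouncer" admin database (placeholders).
conn = psycopg2.connect(host="127.0.0.1", port=6432, user="pgbouncer", dbname="pgbouncer")
conn.autocommit = True   # the admin console doesn't accept BEGIN/COMMIT

with conn.cursor() as cur:
    cur.execute("SHOW CLIENTS;")
    for row in cur.fetchall():
        print(row)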

But with my application under stress, when I look at PostgreSQL (querying over a direct connection, bypassing pgbouncer), I can't see anything wrong or waiting. When I query pg_stat_activity, all I see are idle connections (except for the connection I use to query the view).

How do I debug this? I'm a bit stuck. pg_stat_activity should show running queries, but that doesn't seem to be the case. Is something else wrong? How do I get my application to work under load, and how do I analyze this?
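
(For completeness, this is roughly what I run against PostgreSQL directly when checking for stuck queries; connection details are placeholders, and the wait_event columns are available in PostgreSQL 13.)

import psycopg2

# Direct connection to PostgreSQL, bypassing pgbouncer (placeholders).
conn = psycopg2.connect(host="127.0.0.1", port=5432, user="postgres", dbname="app_db")

with conn.cursor() as cur:
    cur.execute("""
        SELECT pid, state, wait_event_type, wait_event,
               now() - query_start AS runtime, query
        FROM pg_stat_activity
        WHERE state <> 'idle' AND pid <> pg_backend_pid()
        ORDER BY query_start;
    """)
    for row in cur.fetchall():
        print(row)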

CodePudding user response:

So, I solved my own question.

As it turned out, not being able to see what SQLAlchemy was doing was the most confusing part. I could see what Postgres was doing (pg_stat_activity), and also what pgbouncer was doing (SHOW CLIENTS;).

SQLAlchemy does have echo and echo_pool settings, but for some reason these didn't help me.
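
(For reference, these are the engine flags I mean, with a placeholder URL: echo logs the SQL statements, echo_pool logs connection pool activity.)

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://app_user:secret@127.0.0.1:5432/app_db",
    echo=True,         # log all SQL statements
    echo_pool="debug", # log pool checkouts, checkins and invalidations
)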

What helped me was the realization that SQLAlchemy uses standard Python logging. For me, the best way to inspect it was to attach the default Flask logging handler to the relevant SQLAlchemy loggers, something like this:

import logging

log_level = "INFO"
app.logger.setLevel(log_level)

# Attach Flask's default log handler to the SQLAlchemy loggers, so their
# dialect/engine/ORM/pool messages end up in the application log.
for log_name in ["sqlalchemy.dialects", "sqlalchemy.engine", "sqlalchemy.orm", "sqlalchemy.pool"]:
    additional_logger = logging.getLogger(log_name)
    additional_logger.setLevel(log_level)
    additional_logger.addHandler(app.logger.handlers[0])

(Of course I can control this via a config file, but I left that part out for clarity.)

Now I could see what was actually happening. There are still no statistics like the other tiers provide, but this helped.

Eventually I found the problem. I was using two (slightly) different connection strings to the same database. I had them because the first was for authentication (used by Flask-Session and Flask-Login via the ORM), and the other for application queries (used by my own queries via PugSQL). In the end, the different connection strings were not necessary, but they made SQLAlchemy do strange things under stress.

I'm still not sure what the actual problem was (probably two connection pools fighting each other), but this solved it.
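
(To illustrate the fix: everything now uses one and the same connection string. A sketch assuming Flask-SQLAlchemy on the ORM side; the URL and the queries/ path are placeholders for my real ones.)

import pugsql
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

DATABASE_URL = "postgresql+psycopg2://app_user:secret@127.0.0.1:5432/app_db"

# One and the same connection string for the ORM side (Flask-Session / Flask-Login) ...
app.config["SQLALCHEMY_DATABASE_URI"] = DATABASE_URL
db = SQLAlchemy(app)

# ... and for the PugSQL side, instead of two slightly different strings.
queries = pugsql.module("queries/")   # placeholder path to my SQL files
queries.connect(DATABASE_URL)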

Nice benefit: I don't need pgbouncer in my situation anymore, so that removes a lot of complexity.
