How to set up a FastAPI WebSocket properly with NGINX?

I have set up a FastAPI WebSocket endpoint. It works without a problem on my local machine, but I can't get it to work properly behind the NGINX server in production.

I can actually connect to the WebSocket, but the updates are not being received, so the essential feature of the WebSocket is not working.

But the weird thing is, sometimes when I connect, I do get updates. Excited that the connection finally worked, I connected another user from another machine, and that user doesn't receive the updates, while the first user keeps receiving them.

I'm really confused here; what might be the problem?

My endpoint:

from fastapi import APIRouter, Depends, WebSocket, WebSocketDisconnect

# router, manager (my connection manager) and check_token are defined elsewhere in the project

@router.websocket("/ticket-ws/{uuid}")
async def ticket_ws(websocket: WebSocket, token: str = Depends(check_token)):
    await manager.connect(websocket)

    try:
        while True:
            data = await websocket.receive_text()
            if data == "command: __ShutDownTicket" and token["isOwner"]:
                await manager.disconnect_everyone()
            else:
                await manager.broadcast(data)

    except WebSocketDisconnect:
        manager.disconnect(websocket)
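
For context, manager is a plain in-memory connection manager along the lines of the FastAPI tutorial example; a simplified sketch consistent with how I use it above (my real class has a bit more to it):

class ConnectionManager:
    def __init__(self):
        # Connections are kept in this worker's memory only
        self.active_connections: list[WebSocket] = []

    async def connect(self, websocket: WebSocket):
        await websocket.accept()
        self.active_connections.append(websocket)

    def disconnect(self, websocket: WebSocket):
        self.active_connections.remove(websocket)

    async def disconnect_everyone(self):
        for connection in list(self.active_connections):
            await connection.close()
        self.active_connections.clear()

    async def broadcast(self, message: str):
        for connection in self.active_connections:
            await connection.send_text(message)

manager = ConnectionManager()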

My NGINX configuration:

location /api {
    proxy_pass http://localhost:8000;
    
    include /etc/nginx/proxy_params;
    proxy_redirect off;
}

location /api/kwl/ticket-ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_pass http://localhost:8000;
}

The NGINX error_log shows:

http upstream request: "/api/kwl/ticket-ws/4c7d82f4-0606-4107-b788-a116830d30a2?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwaW4iOjMyNjM5N>
2022/08/12 13:22:55 [debug] 25963#25963: *14 http upstream process upgraded, fu:1
2022/08/12 13:22:55 [debug] 25963#25963: *14 recv: eof:0, avail:-1
2022/08/12 13:22:55 [debug] 25963#25963: *14 recv: fd:21 7 of 4096
2022/08/12 13:22:55 [debug] 25963#25963: *14 SSL to write: 7
2022/08/12 13:22:55 [debug] 25963#25963: *14 SSL_write: 7
2022/08/12 13:22:55 [debug] 25963#25963: *14 event timer del: 21: 89445363
2022/08/12 13:22:55 [debug] 25963#25963: *14 event timer add: 21: 60000:89446675
2022/08/12 13:22:55 [debug] 25963#25963: *14 http upstream request: "/api/kwl/ticket-ws/4c7d82f4-0606-4107-b788-a116830d30a2?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwaW4iOjMyNjM5N>
2022/08/12 13:22:55 [debug] 25963#25963: *14 http upstream process upgraded, fu:0
2022/08/12 13:22:55 [debug] 25963#25963: *14 event timer: 21, old: 89446675, new: 89446675
2022/08/12 13:22:55 [debug] 25963#25963: *7 http upstream request: "/api/kwl/ticket-ws/4c7d82f4-0606-4107-b788-a116830d30a2?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwaW4iOjMyNjM5N>
2022/08/12 13:22:55 [debug] 25963#25963: *7 http upstream process upgraded, fu:1
2022/08/12 13:22:55 [debug] 25963#25963: *7 recv: eof:0, avail:-1
2022/08/12 13:22:55 [debug] 25963#25963: *7 recv: fd:17 7 of 4096
2022/08/12 13:22:55 [debug] 25963#25963: *7 SSL to write: 7
2022/08/12 13:22:55 [debug] 25963#25963: *7 SSL_write: 7
2022/08/12 13:22:55 [debug] 25963#25963: *7 event timer: 17, old: 89446675, new: 89446675
2022/08/12 13:22:55 [debug] 25963#25963: *7 http upstream request: "/api/kwl/ticket-ws/4c7d82f4-0606-4107-b788-a116830d30a2?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwaW4iOjMyNjM5N>
2022/08/12 13:22:55 [debug] 25963#25963: *7 http upstream process upgraded, fu:0
2022/08/12 13:22:55 [debug] 25963#25963: *7 event timer: 17, old: 89446675, new: 89446675

I wonder if NGINX treats it as a plain HTTP connection. Here is my Gunicorn setup (run under Supervisor) as well:

[program:kwl] 
directory=/myapi/kwl/backend/app/ 
command=/myapi/kwl/backend/env/bin/gunicorn run:app --workers 5 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
stderr_logfile=/myapi/log/backend.err.log 
stdout_logfile=/myapi/log/backend.out.log

Maybe it's an issue related to the Uvicorn workers? I really can't figure out the problem. Any help is welcome.

EDIT: I realized that if I run the app with the following command (I was using --workers 5 before), everything works without a problem:

gunicorn main:app --workers 1 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

But that shouldn't be the solution, right? What might be causing this behaviour when the app is run with multiple workers?

CodePudding user response:

Let's try this NGINX config:

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
 
    upstream websocket {
        server 0.0.0.0:8010;
    }
 
    server {
        listen 8020;
        location / {
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
        }
    }
}
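
The map block goes in the http context: it sets Connection: upgrade only when the client actually sends an Upgrade header and falls back to close otherwise, which is the standard NGINX WebSocket proxying pattern. In your setup you can keep your existing location /api/kwl/ticket-ws/ block and just replace the hard-coded Connection "Upgrade" with $connection_upgrade; the upstream and ports above are only an example.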

CodePudding user response:

I figured out the problem. Multiple workers can't share in-memory data between them, so my connection manager only sees the clients that happen to land on the same worker; that's why it only works with a single worker. I need to use a Pub/Sub service such as Redis to pass messages between the workers.
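
A rough sketch of the direction, using redis-py's asyncio client (the channel name, Redis URL and endpoint here are placeholders for illustration, not my actual code): each WebSocket connection subscribes to a shared Redis channel and forwards published messages to its client, so a message received by any worker reaches clients connected to every worker.

import asyncio

import redis.asyncio as aioredis
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
CHANNEL = "ticket-updates"  # placeholder channel name


@app.websocket("/ticket-ws/{uuid}")
async def ticket_ws(websocket: WebSocket, uuid: str):
    await websocket.accept()
    redis = aioredis.from_url("redis://localhost:6379", decode_responses=True)
    pubsub = redis.pubsub()
    await pubsub.subscribe(CHANNEL)

    async def forward_to_client():
        # Everything published on the channel (by any worker) goes to this client
        async for message in pubsub.listen():
            if message["type"] == "message":
                await websocket.send_text(message["data"])

    forward_task = asyncio.create_task(forward_to_client())
    try:
        while True:
            data = await websocket.receive_text()
            # Publish to Redis instead of broadcasting from in-process memory
            await redis.publish(CHANNEL, data)
    except WebSocketDisconnect:
        forward_task.cancel()
        await pubsub.unsubscribe(CHANNEL)
        await redis.close()

With this approach the number of Gunicorn workers no longer matters, because the shared state lives in Redis instead of each worker's memory.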
