I'm building an API with FastAPI and SQLAlchemy, and I'm seeing some strange behaviour when the SQLite database is in-memory that doesn't occur when it's stored in a file. Essentially, the in-memory database doesn't seem to be created properly, while the file-backed one works just fine.
Here is a minimal working example to reproduce my error.
First I declare a model/table.
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
Base = declarative_base()
class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String)
Now I will create two global engine objects. One with an SQLite database stored in a file, and the other with an in-memory SQLite database.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
args = dict(echo=True, connect_args={"check_same_thread": False})
engine1 = create_engine("sqlite:///db.sqlite", **args)
engine2 = create_engine("sqlite:///:memory:", **args)
Session1 = sessionmaker(bind=engine1)
Session2 = sessionmaker(bind=engine2)
Now we will create our FastAPI app, and create a path that will add an object to the database.
from fastapi import FastAPI
app = FastAPI()
@app.get("/{x}")
def foo(x: int):
    with {1: Session1, 2: Session2}[x]() as session:
        session.add(Thing(name="foo"))
        session.commit()
Now we'll write a short main to simulate requests and check that everything is working as expected.
from fastapi.testclient import TestClient
if __name__ == "__main__":
    Base.metadata.create_all(engine1)
    Base.metadata.create_all(engine2)
    client = TestClient(app)
    assert client.get("/1").status_code == 200
    assert client.get("/2").status_code == 200
However, when running this we can see that first the thing table is created in engine1 and committed, then the same happens with engine2. When the first request is made, "foo" is successfully inserted into engine1's database (stored in a file), but the second request raises an sqlite3.OperationalError claiming that there's no such table: thing.
Question
Why is there different behaviour between the two engines? Why does the in-memory database claim that the table doesn't exist, even though the SQLAlchemy engine logs show that the CREATE TABLE statement ran successfully and was committed?
Edit: I originally reproduced this with SQLModel (which wraps SQLAlchemy), but found that the bug occurs in vanilla SQLAlchemy as well.
CodePudding user response:
The SQLAlchemy docs explain this here: https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#using-a-memory-database-in-multiple-threads
To use a :memory: database in a multithreaded scenario, the same connection object must be shared among threads, since the database exists only within the scope of that connection. The StaticPool implementation will maintain a single connection globally, and the check_same_thread flag can be passed to Pysqlite as False
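You can see why this matters without SQLAlchemy or FastAPI at all: with the stdlib sqlite3 module, every new connection to ":memory:" opens its own brand-new, empty database. A minimal standalone sketch:

```python
import sqlite3

# First connection: create a table in its private in-memory database.
c1 = sqlite3.connect(":memory:")
c1.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, name TEXT)")
c1.commit()

# Second connection: a completely separate in-memory database,
# so the table created above does not exist here.
c2 = sqlite3.connect(":memory:")
tables = c2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # []
```

This is effectively what happens in your app: FastAPI runs sync endpoints in a worker threadpool, and without StaticPool the request thread gets a different connection than the main thread that ran create_all, so it sees a fresh, empty in-memory database.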
The same page also shows how to get the intended behaviour, so in your case:

from sqlalchemy.pool import StaticPool

args = dict(
    echo=True,
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,
)

Strictly speaking, only the in-memory engine2 needs StaticPool; the file-backed engine1 can keep the default pool.
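A quick way to verify the fix, independent of the app above (assumes SQLAlchemy 1.4+ for text() and the engine.begin() context manager):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import StaticPool

engine = create_engine(
    "sqlite:///:memory:",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # one shared connection for every checkout
)

# Create a table through one checked-out connection...
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE thing (id INTEGER PRIMARY KEY, name TEXT)"))

# ...then read it back through what looks like a new connection.
# StaticPool hands back the same underlying DBAPI connection,
# so the in-memory database survives between checkouts.
with engine.connect() as conn:
    names = [row[0] for row in conn.execute(
        text("SELECT name FROM sqlite_master WHERE type='table'")
    )]
print(names)  # ['thing']
```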