SQLAlchemy INSERT stmt with OUTPUT return multiple rows


When I insert multiple rows with SQLAlchemy, I do not get back the inserted id values.

For example, when I execute the plain SQL in MS SQL Server, it displays 2 rows in the result set (the records I inserted: 1 and 2).

INSERT INTO dbo.emp (id, fname, lname, dob)
OUTPUT inserted.id
VALUES (1, 'Test1','Rec1','01/01/1990'), (2, 'Test2','Rec2','01/01/1990')

When I execute the same query in SQLAlchemy using the code below, the print statement shows only the value 2 (not 1). It looks like the result set contains only the last row inserted. Is there an issue with the code below?

try:
    sql = """\
    INSERT INTO dbo.emp (id, fname, lname, dob)
    OUTPUT inserted.id
    VALUES (:id, :fname, :lname, :dob)
    """
    stmt = text(sql)
    data = [
         {'id': 1, 'fname':'Test1', 'lname':'rec1', 'dob':'05/05/2000'}
        ,{'id': 2, 'fname': 'Test2', 'lname': 'rec2', 'dob': '05/05/2001'}
    ]
    result = conn.execute(stmt, data)
    for id in result:
        print(id)

    conn.commit()
except exc.IntegrityError as e:
    print('Code3 - Integrity error raised', e)
    conn.rollback()
except exc.SQLAlchemyError as e:
    print('Code3 - Something else went wrong', e)
    conn.rollback()

CodePudding user response:

With multi-row parameter data like a list of dicts, SQLAlchemy uses pyodbc's .executemany() method, which is not suitable for returning multiple results from an OUTPUT clause.

If you are using SQL Server 2016 or later, you can get the results by using OPENJSON and passing a single string that represents the multi-row data:

import json

import sqlalchemy as sa

engine = sa.create_engine("mssql+pyodbc://@mssqlLocal")

with engine.begin() as conn:
    conn.exec_driver_sql("DROP TABLE IF EXISTS emp")
    conn.exec_driver_sql(
        "CREATE TABLE emp ("
            "id int identity primary key, "
            "fname nvarchar(50), "
            "lname nvarchar(50)"
        ")"
    )

sql = """\
INSERT INTO emp (fname, lname)
OUTPUT inserted.id
SELECT fname, lname FROM 
OPENJSON(:json_str)
    WITH (
        fname nvarchar(50) '$.fname',
        lname nvarchar(50) '$.lname'
    )
"""
data = [
    {"fname": "Homer", "lname": "Simpson"},
    {"fname": "Ned", "lname": "Flanders"},
]
with engine.begin() as conn:
    results = conn.execute(
        sa.text(sql), {"json_str": json.dumps(data)}
    ).fetchall()
    print(results)  # [(1,), (2,)]

For older versions of SQL Server, you could upload the row data to a temporary table and then use an INSERT that selects from it (a fuller sketch follows the snippet):

INSERT INTO emp (fname, lname)
OUTPUT inserted.id
SELECT fname, lname FROM #temp_table
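
Here is a minimal sketch of that approach, assuming the same emp table and engine as in the OPENJSON example above; the #emp_stage temp table name and column sizes are just illustrative:

import sqlalchemy as sa

engine = sa.create_engine("mssql+pyodbc://@mssqlLocal")

data = [
    {"fname": "Homer", "lname": "Simpson"},
    {"fname": "Ned", "lname": "Flanders"},
]

with engine.begin() as conn:
    # session-scoped temp table, visible only on this connection
    conn.exec_driver_sql(
        "CREATE TABLE #emp_stage (fname nvarchar(50), lname nvarchar(50))"
    )
    # executemany is fine for this step because we don't need any rows back
    conn.execute(
        sa.text("INSERT INTO #emp_stage (fname, lname) VALUES (:fname, :lname)"),
        data,
    )
    # a single INSERT ... OUTPUT ... SELECT, so every generated id comes back
    # in one result set
    results = conn.execute(
        sa.text(
            "INSERT INTO emp (fname, lname) "
            "OUTPUT inserted.id "
            "SELECT fname, lname FROM #emp_stage"
        )
    ).fetchall()
    print(results)  # e.g. [(1,), (2,)]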

CodePudding user response:

I am pretty sure conn.execute runs the INSERT command twice, once per parameter dict. Your SQL command inserts a single row, so it does not get translated into this:

INSERT INTO dbo.emp (id, fname, lname, dob)
OUTPUT inserted.id
VALUES (1, 'Test1','Rec1','01/01/1990'), (2, 'Test2','Rec2','01/01/1990')

Instead, two INSERT commands are run consecutively, and SQLAlchemy seems to keep only the second result. (I tried to dig into the SQLAlchemy code but could not find the place where this happens.)
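
If that is what is happening, one straightforward (if less efficient, since you give up executemany batching) workaround is to execute the statement once per row and collect each OUTPUT value yourself. A minimal sketch, reusing stmt and data from the question:

inserted_ids = []
for row in data:
    # each single-row execute returns its own one-row result set
    inserted_ids.append(conn.execute(stmt, row).scalar())
print(inserted_ids)  # e.g. [1, 2]
conn.commit()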

If you really want to use the strengths of SQLAlchemy, I would recommend at least using the SQLAlchemy expression language and letting SQLAlchemy construct your SQL commands. Look up the SQL Expression Language Tutorial to get started.
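
As a rough sketch of what that could look like with a Core Table object: the table definition below is an assumption based on the question, and getting the OUTPUT/RETURNING values back from a multi-row execute like this relies on SQLAlchemy 2.0's "insertmanyvalues" support for mssql+pyodbc.

import datetime
import sqlalchemy as sa

metadata = sa.MetaData()

# assumed definition of the question's dbo.emp table
emp = sa.Table(
    "emp",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True, autoincrement=False),
    sa.Column("fname", sa.String(50)),
    sa.Column("lname", sa.String(50)),
    sa.Column("dob", sa.Date),
    schema="dbo",
)

stmt = sa.insert(emp).returning(emp.c.id)
data = [
    {"id": 1, "fname": "Test1", "lname": "rec1", "dob": datetime.date(2000, 5, 5)},
    {"id": 2, "fname": "Test2", "lname": "rec2", "dob": datetime.date(2001, 5, 5)},
]

with engine.begin() as conn:  # engine created as shown earlier
    result = conn.execute(stmt, data)
    print(result.scalars().all())  # e.g. [1, 2]

On older SQLAlchemy versions this kind of multi-row execute does not bring the OUTPUT rows back, so the OPENJSON or temp-table approaches above remain the safer bet there.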
