I have a dataframe called df2 which has the following columns:
IN: df2.columns
Out:
Index(['Section Names', 'Job Code', 'Job Title', 'Footnotes', 'Rates',
'Wage Determination', 'Revision Date', 'Revision Number'],
dtype='object')
I count this as 8 columns.
I have saved the column names into cols:
IN: cols = "','".join([str(i) for i in df2.columns.tolist()])
IN: print(cols)
OUT: Section Names','Job Code','Job Title','Footnotes','Rates','Wage Determination','Revision Date','Revision Number
Again, I'm seeing 8 columns.
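For reference, a minimal sketch (using a hypothetical two-column list in place of the real frame) of what this join produces once spliced into the SQL string:

```python
# Illustrative sketch: a hypothetical two-column list standing in for
# df2.columns, to show the string the join actually builds.
columns = ['Section Names', 'Job Code']
cols = "','".join(str(i) for i in columns)
sql = "INSERT INTO `test_table` (`" + cols + "`) VALUES (?,?)"
print(sql)
# INSERT INTO `test_table` (`Section Names','Job Code`) VALUES (?,?)
```

Note the mixed delimiters in the assembled column list: single quotes from the join sitting inside the backticks from the statement.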
And, in Access I have a table that has these columns (none of which are assigned as a primary key, for now):
Section Name
Job Code
Job Title
Footnotes
Rates
Wage Determination
Revision Date
Revision Number
which is 8 columns. I tried to execute this code to populate test_table with the data from df2:
for i, row in df2.iterrows():
    sql = "INSERT INTO `test_table` (`" + cols + "`) VALUES (?,?,?,?,?,?,?,?)"
    cursor.execute(sql, tuple(row))
    conn.commit()
but I am getting this error:
Error: ('21S01', '[21S01] [Microsoft][ODBC Microsoft Access Driver] Number of query values and destination fields are not the same. (-3520) (SQLExecDirectW)')
I was thinking it was because it is trying to insert the index, so I passed index_col=0
when reading the CSV file (which is assigned to df2). However, that decreased the column count by 1, and after I adjusted
the cols value and the VALUES placeholders in the SQL statement accordingly, I got the same error.
Any ideas? TIA.
EDIT I thought it might be a data types issue. I changed all the data types in Access to 'short text' and the datatype in df2
to object. Still the same error.
CodePudding user response:
In MS Access, you need to enclose table or column names that contain spaces and/or special characters in square brackets or backticks. Right now, you use single quotes in your str.join
but backticks in the SQL statement, so the delimiters do not match. Simply use backticks or square brackets in both strings:
# DELIMIT WITH BACK TICKS
cols = "`, `".join(i for i in df2.columns)
qmarks = ", ".join('?' for _ in df2.columns)

# USING F-STRING
sql = f"INSERT INTO `test_table` (`{cols}`) VALUES ({qmarks})"

for i, row in df2.iterrows():
    cursor.execute(sql, tuple(row))
conn.commit()
Consider even executemany:
cols = "`, `".join(i for i in df2.columns)
qmarks = ", ".join('?' for _ in df2.columns)
sql = f"INSERT INTO `test_table` (`{cols}`) VALUES ({qmarks})"
vals = df2.to_numpy().tolist()
cursor.executemany(sql, vals)
conn.commit()
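One caveat worth checking, a sketch under the assumption your CSV contains missing values: pandas stores them as NaN, which the Access ODBC driver can choke on, whereas None is sent as SQL NULL. Converting before executemany avoids that (the frame below is hypothetical, standing in for df2):

```python
import numpy as np
import pandas as pd

# Hypothetical frame with missing values standing in for df2.
df2 = pd.DataFrame({"Job Code": ["01010", None], "Rates": [17.5, np.nan]})

# Cast to object first so None survives, then replace every NaN with None.
vals = df2.astype(object).where(df2.notna(), None).to_numpy().tolist()
print(vals)
# [['01010', 17.5], [None, None]]
```

The resulting vals list can be passed to cursor.executemany exactly as above.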