Python: CREATE TABLE permission denied in database 'master'


I am trying to get the field numbers from NAV using the approach I found on: https://redthree.com/nav-object-field-and-option-numbers-in-the-sql-server-database/?utm_source=rss&utm_medium=rss&utm_campaign=nav-object-field-and-option-numbers-in-the-sql-server-database

And it works great, right up until I have to get the data back into the MSSQL database. Then I get the following exception on the last two lines:

(pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]CREATE TABLE permission denied in database 'master'. (262) (SQLExecDirectW)")

The weird thing is that I am not trying to connect to the master database, but a database where I am db_owner and have permission to create tables...

I am using Visual Studio and Python 3.9.
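
To double-check which database the connection actually ends up in, this is the kind of quick sanity check I can run (a minimal sketch reusing the same placeholder Driver/Server/Database values as the full script further down):

import pyodbc

# Same placeholder connection values as in the full script below
ConnectionsString = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=Server;"
    "Database=DB;"
    "Trusted_Connection=yes;"
)

conn = pyodbc.connect(ConnectionsString)
cursor = conn.cursor()

# DB_NAME() returns the database this session is actually using;
# if it prints 'master', the Database keyword never took effect.
print(cursor.execute("SELECT DB_NAME()").fetchval())
conn.close()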

Below is the code I am trying to run:

import zlib, sys, struct
import xml.etree.ElementTree as ET
import numpy as np       # 1.21.4
import pandas as pd      # 1.3.4
import pyodbc            # 4.0.32
import sqlalchemy as sa  # 1.4.27
import urllib.parse
from urllib.parse import quote


Driver = "ODBC Driver 17 for SQL Server"
Server = "Server"
Database = "DB"
ConnectionsString = "Driver={" + Driver + "};Server=" + Server + ";Database=" + Database + ";Trusted_Connection=yes;"
conn = pyodbc.connect(ConnectionsString)


SQLQuery = """
Select 
    meta.[Object ID] AS tableNo 
,   obj.Name as TableName
,   meta.Metadata
From DB.dbo.[Object Metadata] as meta
    Join DB.dbo.Object as obj
        On meta.[Object Type] = obj.Type
            And meta.[Object ID] = obj.ID
Where obj.Type = 1
And meta.[Object ID] = 18"""

ObjectMetaData = pd.read_sql(sql=SQLQuery, con=conn)

# itertuples() yields (index, tableNo, TableName, Metadata) for each row
for name, tableNo, TableName, Metadata in ObjectMetaData.itertuples():
    MetaDataToDecompress = bytearray(Metadata)

    # Drop the first four header bytes before the compressed payload
    MetaDataToDecompress = MetaDataToDecompress[4:]

    # -15 tells zlib to expect a raw deflate stream with no header
    output = zlib.decompress(MetaDataToDecompress, -15)

    print(output.decode("utf-8"))

    OutputFileName = "NAV Table - " + str(tableNo) + ".xml"
    Outputfile = open(OutputFileName, "w+")
    Outputfile.write(output.decode("utf-8"))
    Outputfile.close()


with open(OutputFileName, 'r') as xml_file:
    tree = ET.parse(xml_file)
root = tree.getroot()

for child in root.iter('{urn:schemas-microsoft-com:dynamics:NAV:MetaObjects}Field'):
    print(
        child.attrib.get('ID'),
        child.attrib.get('Name'),
        child.attrib.get('Datatype'),
        child.attrib.get('OptionString')
    )

# Create one empty dataframe for each table we will eventually write
dfNAVFieldNames = pd.DataFrame(columns=['TableNo','TableName','FieldNo',
                                        'FieldName'])
dfNAVFieldOptions = pd.DataFrame(columns=['TableNo','TableName','FieldNo',
                                           'FieldName','OptionNo','OptionName'])

with open(OutputFileName, 'r') as xml_file:
    tree = ET.parse(xml_file)
root = tree.getroot()
TableNo = 18
TableName = "Customer"

# This gets me all the field-level attributes from the XML and creates rows in our
# dataframes: one row for each distinct field name and one row for each option.

for child in root.iter('{urn:schemas-microsoft-com:dynamics:NAV:MetaObjects}Field'):
    dfNAVFieldNames = dfNAVFieldNames.append(
             {'TableNo'   : TableNo,
              'TableName' : TableName,
              'FieldNo'   : child.attrib.get('ID'),
              'FieldName' : child.attrib.get('Name')}, ignore_index=True)
    OptionString = child.attrib.get('OptionString')
    if OptionString is not None:
        # print(OptionString.split(","))
        OptionNo = 0
        for Option in OptionString.split(","):
            dfNAVFieldOptions = dfNAVFieldOptions.append(
             {'TableNo'    : TableNo,
              'TableName'  : TableName,
              'FieldNo'    : child.attrib.get('ID'),
              'FieldName'  : child.attrib.get('Name'),
              'OptionNo'   : OptionNo,
              'OptionName' : Option}, ignore_index=True)
            OptionNo += 1
# The lines below just show example output
print(dfNAVFieldNames.head())
print(dfNAVFieldOptions.head())

# We have our dataframes and we want to load them into SQL Server.
# Create our connection string - update these parameters for your database.
Driver = "ODBC Driver 17 for SQL Server"
Server = "Server"
Database = "DB"
ConnectionsString = "Driver={" + Driver + "};Server=" + Server + ";Database=" + Database + ";Trusted_Connection=yes;"
conn = pyodbc.connect(ConnectionsString)
Schema = "dbo"
params = urllib.parse.quote_plus(ConnectionsString)
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params, fast_executemany=True)


# here we take our dataframes and put them into SQL
dfNAVFieldNames.to_sql("NAVFieldNames",con=engine,schema=Schema,if_exists="append",index=False)
dfNAVFieldOptions.to_sql("NAVFieldOptions",con=engine,schema=Schema,if_exists="append",index=False)

I hope someone can point me in the right direction! :)

CodePudding user response:

I found the solution: I was using the driver "ODBC Driver 17 for SQL Server" when I should have been using "SQL Server Native Client 11.0". It worked in my case :)
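
In case it helps, the change amounts to swapping the driver name in the connection string (this sketch assumes the SQL Server Native Client 11.0 driver is installed on the machine; everything else stays the same as in the question):

import urllib.parse
import sqlalchemy as sa

Driver = "SQL Server Native Client 11.0"   # was "ODBC Driver 17 for SQL Server"
Server = "Server"
Database = "DB"
ConnectionsString = "Driver={" + Driver + "};Server=" + Server + ";Database=" + Database + ";Trusted_Connection=yes;"

params = urllib.parse.quote_plus(ConnectionsString)
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params, fast_executemany=True)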
