How to import a .csv file into an existing table on SQL Server using Azure Data Studio?

I am using a Mac M1, so I can't use SSMS, but I can use Azure Data Studio. I want to import a .csv file into my existing table on SQL Server. If I use a BULK INSERT SQL query, it gives an error because Azure Data Studio cannot see local file storage, only Azure Blob Storage. If I use the SQL Server Import extension, it gives an error saying the table already exists. How do I solve this? I also must write a .NET application that uses the .csv data.

CodePudding user response:

Azure Data Studio is a client application, just like SSMS. Both are IDEs, which makes them unsuitable for scripted imports and exports. The BULK INSERT error is expected: the statement runs on the server, so it can only read files the server itself can access, not files on your local machine.

Importing with bcp

You can install the SQL Server command-line tools on Linux or macOS and use the bcp utility to import or export data from the command line or a script. The tools can be installed with Homebrew:

brew tap microsoft/mssql-release https://github.com/Microsoft/homebrew-mssql-release
brew update
brew install mssql-tools

The documentation section Import data from the source data file shows how to import data and explains the options; the full syntax is in the bcp utility reference. In the following example, -c uses character (text) mode and -t ',' sets the field terminator to a comma:

bcp TestEmployees in ~/test_data.txt -S localhost -U sa -P <your_password> -d BcpSampleDB -c -t ','
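
One detail to watch: if the .csv file starts with a header row, bcp will try to load it as data. The -F option sets the first row to import, so -F 2 skips a single header line. A minimal variation, assuming the same hypothetical file and table as above:

bcp TestEmployees in ~/test_data.txt -S localhost -U sa -P <your_password> -d BcpSampleDB -c -t ',' -F 2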

Importing in .NET

If you have to write a .NET application, you can use the SqlBulkCopy class to import data directly into the database. The data can be supplied through any IDataReader implementation.

One way to read CSV data through an IDataReader is to use the CsvDataReader class from the CsvHelper package:

// Requires the CsvHelper and Microsoft.Data.SqlClient NuGet packages:
// using System.Globalization; using System.IO;
// using CsvHelper; using Microsoft.Data.SqlClient;
using (var reader = new StreamReader(pathToCSV))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
using (var dr = new CsvDataReader(csv))
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        using (var bcp = new SqlBulkCopy(connection))
        {
            bcp.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";

            // Pass the CsvDataReader (an IDataReader), not the StreamReader
            bcp.WriteToServer(dr);
        }
    }
}

This works if the file's columns and the table's columns match by position. If not, the mapping from source to target columns must be specified through the ColumnMappings property, as sketched below.
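
A minimal sketch with hypothetical column names, where the CSV header columns first_name and last_name feed the table columns FirstName and LastName (connection and dr as in the example above):

using (var bcp = new SqlBulkCopy(connection))
{
    bcp.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";

    // Map CSV columns (as named in the header row) to table columns.
    // These names are illustrative; use your own.
    bcp.ColumnMappings.Add("first_name", "FirstName");
    bcp.ColumnMappings.Add("last_name", "LastName");

    bcp.WriteToServer(dr);
}

Note that once any explicit mapping is added, source columns without a mapping are no longer transferred.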

CodePudding user response:

import pandas as pd
import pyodbc
import sqlalchemy

SERVERNAME = "your server name"
DB = "your database name"

# fast_executemany batches the INSERTs that pandas generates
engine_mssql = sqlalchemy.create_engine(
    f"mssql+pyodbc://{SERVERNAME}/{DB}?driver=ODBC+Driver+17+for+SQL+Server",
    fast_executemany=True)

df = pd.read_csv("your csv file")
df.to_sql('table name', engine_mssql, if_exists='replace', index=False)
print("success")




     
