How to use Azure Key Vault in Apache Spark Connector for SQL Server


The following example from the Azure team on using the Apache Spark connector for SQL Server uses a hard-coded user name and password. However, I am storing the password in Azure Key Vault to meet security requirements.

Question: In the following example code, instead of using a hard-coded password, how can we use a secret (password) stored in an Azure Key Vault? For example, suppose I have created an Azure Key Vault secret called mySQLSecret with a value myPassword. How would I use mySQLSecret in the code below?

    server_name = "jdbc:sqlserver://{SERVER_ADDR}"
    database_name = "database_name"
    url = server_name + ";" + "databaseName=" + database_name + ";"
    
    table_name = "table_name"
    username = "username"
    password = "password123!#" # Please specify password here
    
    try:
      df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("overwrite") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .save()
    except ValueError as error:
        print("Connector write failed", error)

CodePudding user response:

The simplest way is to link the Azure Key Vault as a Databricks secret scope (doc), and then fetch the password from it using the dbutils.secrets.get function. This function accepts two parameters: the name of the secret scope that you specified when linking, and the name of the secret inside the Key Vault.
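
For example, assuming the Key Vault was linked as a secret scope named my-keyvault-scope (a hypothetical name; use whatever scope name you registered when linking), a minimal sketch of fetching the mySQLSecret value and passing it to the connector could look like this:

    # Fetch the password from the linked Key Vault.
    # "my-keyvault-scope" is a hypothetical scope name; replace it with the
    # scope you registered. "mySQLSecret" is the secret name from the question.
    password = dbutils.secrets.get(scope="my-keyvault-scope", key="mySQLSecret")

    # The rest of the write is unchanged; the fetched value simply replaces
    # the hard-coded password.
    df.write \
      .format("com.microsoft.sqlserver.jdbc.spark") \
      .mode("overwrite") \
      .option("url", url) \
      .option("dbtable", table_name) \
      .option("user", username) \
      .option("password", password) \
      .save()

Note that Databricks redacts secret values in notebook output, so printing the fetched password shows a redacted placeholder rather than the plain-text value.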
