Azure Databricks job fails with error message


When a node restarts, the job fails with the following message:

ImportError: No module named mlflow

I have installed mlflow from the Databricks cluster UI, but I am still facing this issue.

Cluster configuration: Databricks Runtime 10.4 LTS (Scala 2.12, Spark 3.2.1)

CodePudding user response:

The Cluster Manager is the part of the Azure Databricks service that manages customer Apache Spark clusters. When a cluster restarts, it sends commands to each node to install the configured Python and R libraries. Installing a library, or downloading its artifacts from the internet, can take longer than expected because of network latency or because the attached library pulls in many dependent libraries. If the job starts running before installation finishes, imports of the library fail with an error like the one above.
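
To confirm this is a timing issue rather than a genuinely missing library, a job can retry the import briefly while the cluster finishes attaching libraries. A minimal sketch; the attempt count, delay, and helper name are illustrative assumptions, not part of the linked article:

import importlib
import time

def wait_for_module(name, attempts=10, delay=30):
    # Poll until the cluster-installed library becomes importable,
    # giving the Cluster Manager time to finish installing it.
    for _ in range(attempts):
        try:
            return importlib.import_module(name)
        except ImportError:
            time.sleep(delay)
    raise ImportError(f"{name} still not importable after {attempts * delay} seconds")

mlflow = wait_for_module("mlflow")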

Solution:

Use notebook-scoped library installation commands in the notebook. Enter the following commands in one cell; this ensures that the specified library is installed before the rest of the job's code runs.

# Install mlflow as a notebook-scoped library on the cluster.
dbutils.library.installPyPI("mlflow")
# Restart the Python interpreter so the newly installed library can be imported.
dbutils.library.restartPython()
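
Note that dbutils.library.installPyPI is deprecated and removed in newer runtimes; on Databricks Runtime 7.1 and above, including 10.4 LTS, the %pip magic command is the documented replacement for notebook-scoped installation. A minimal equivalent, run as the first line of its own cell:

%pip install mlflow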

Reference: https://docs.microsoft.com/en-us/azure/databricks/kb/libraries/library-install-latency
