I trained a KernelDensity model and dumped it with joblib. I then wrote a function that loads the same .pkl file. It works fine on my local machine, but when I build a Docker image and deploy it on a cloud machine, I get one of the following errors:
ModuleNotFoundError: No module named 'sklearn.neighbors._dist_metrics'
or
ModuleNotFoundError: No module named 'sklearn.neighbors._kde'
What might be causing this issue, and how can I solve it?
The code for the initial training is:
import numpy as np
import pandas as pd
from sklearn.neighbors import KernelDensity
import joblib

# df_trim is a pandas DataFrame of GPA values, prepared earlier (not shown)
arr = df_trim.values
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(arr)
joblib.dump(kde, 'kde.pkl')  # serialize the fitted model
# This is the array that was used for training:
# array([[3.5, 3.5, 3.5, 3.5],
#        [4. , 4. , 3.5, 4. ],
#        [3.5, 3. , 2.5, 3. ],
#        ...,
#        [2.5, 2.5, 2. , 2. ],
#        [1.5, 1.5, 2. , 2.5],
#        [3. , 3. , 2.5, 3. ]])
The following code is for the function that invokes this saved model:
from itertools import combinations
import joblib

filename = 'kde.pkl'  # filename of the model's pickle file
model = joblib.load(filename)  # load the pre-trained model using joblib

def rSubset(arr, r):
    # return a list of all subsets of length r;
    # to deal with duplicate subsets, use set(combinations(arr, r))
    return list(combinations(arr, r))
def datapred(*args):
    no_args = len(args)
    args = list(args)
    pred_data = []
    model_score = []
    arr = [3.5, 4, 3, 2.5, 1.5, 2, 1, 0.5, 0.25]
    n = 4 - no_args  # number of GPA slots left to fill
    comb_arr = rSubset(arr, n)
    if no_args == 1:
        gpa1 = args[0]
        for comb in comb_arr:
            var = [gpa1] + list(comb)
            output = model.score_samples([var])[0]  # log-density of this candidate row
            model_score.append(output)
            pred_data.append(var)
        position = model_score.index(max(model_score))
        return pred_data[position]
    elif no_args == 2:
        gpa1, gpa2 = args
        for comb in comb_arr:
            var = [gpa1, gpa2] + list(comb)
            output = model.score_samples([var])[0]
            model_score.append(output)
            pred_data.append(var)
        position = model_score.index(max(model_score))
        return pred_data[position]
    elif no_args == 3:
        gpa1, gpa2, gpa3 = args
        for comb in comb_arr:
            var = [gpa1, gpa2, gpa3] + list(comb)
            output = model.score_samples([var])[0]
            model_score.append(output)
            pred_data.append(var)
        position = model_score.index(max(model_score))
        return pred_data[position]
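For reference, calls look like the following (the GPA values here are hypothetical, and the returned row depends entirely on the trained model, so no output is shown):
>>> datapred(3.5)       # one known GPA; returns the best-scoring 4-value row starting with 3.5
>>> datapred(3.5, 4.0)  # two known GPAs; the remaining two slots are filled in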
The following is the requirements.txt file for the Docker image:
logger
Flask==1.1.2
Flask-RESTful==0.3.8
joblib==0.15.1
MarkupSafe==1.1.1
pandas==1.0.3
scikit-learn==0.19
sklearn >= 0.0
threadpoolctl==2.0.0
gunicorn==20.0.4
xgboost ==1.5.2
scipy >= 0.0
CodePudding user response:
The scikit-learn library on your cloud machine is a different version from the one the model was trained with. Specifically, sklearn.neighbors._dist_metrics was removed around version 1.0.2. Perhaps your Docker container is not actually using your requirements.txt properly.
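Whatever the root cause, you can make this failure mode explicit instead of letting it surface as a ModuleNotFoundError from deep inside the unpickler. A minimal sketch (not part of the original setup; the sidecar filename kde.pkl.version is an arbitrary choice) that records the training-time version and checks it before loading:
import joblib
import sklearn

# At training time: save the model and record the scikit-learn version in a
# plain-text sidecar file, which stays readable even when the pickle itself
# cannot be loaded.
joblib.dump(kde, 'kde.pkl')
with open('kde.pkl.version', 'w') as f:
    f.write(sklearn.__version__)

# At load time: compare versions before touching the pickle.
with open('kde.pkl.version') as f:
    trained_with = f.read().strip()
if trained_with != sklearn.__version__:
    raise RuntimeError(
        f'kde.pkl was written with scikit-learn {trained_with}, '
        f'but {sklearn.__version__} is installed'
    )
model = joblib.load('kde.pkl')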
Here's an example of different versions:
This one doesn't throw an error:
>>> import sklearn
>>> sklearn.__version__
'0.24.2'
>>> from sklearn.neighbors import _dist_metrics
This one throws an error:
>>> import sklearn
>>> sklearn.__version__
'1.0.2'
>>> from sklearn.neighbors import _dist_metrics
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name '_dist_metrics' from 'sklearn.neighbors'
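The practical fix is to make the container match the training environment: pin scikit-learn in requirements.txt to the exact version that produced kde.pkl (and drop the redundant sklearn >= 0.0 line; the sklearn package on PyPI is just a wrapper around scikit-learn), rebuild the image, and run a quick smoke test inside the container. A sketch, using a row from the training array shown in the question:
import joblib
import sklearn

print(sklearn.__version__)  # should match the version on the training machine
model = joblib.load('kde.pkl')  # unpickles cleanly once the versions match
print(model.score_samples([[3.5, 3.5, 3.5, 3.5]]))  # log-density of a known training row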