I am working in pandas in Python with a data frame df. I am carrying out a classification task and have two imbalanced classes, df['White'] and df['Non-white']. For this reason, I have built a pipeline that includes both SMOTE and RandomUnderSampler.
This is what my pipeline looks like:
model = Pipeline([
    ('preprocessor', preprocessor),
    ('smote', over),
    ('random_under_sampler', under),
    ('classification', knn)
])
And these are the exact steps:
Pipeline(steps=[('preprocessor',
                 ColumnTransformer(remainder='passthrough',
                                   transformers=[('knnimputer', KNNImputer(),
                                                  ['policePrecinct']),
                                                 ('onehotencoder-1',
                                                  OneHotEncoder(), ['gender']),
                                                 ('standardscaler',
                                                  StandardScaler(),
                                                  ['long', 'lat']),
                                                 ('onehotencoder-2',
                                                  OneHotEncoder(),
                                                  ['neighborhood',
                                                   'problem'])])),
                ('smote', SMOTE()),
                ('random_under_sampler', RandomUnderSampler()),
                ('classification', KNeighborsClassifier())])
I would like to evaluate different values of sampling_strategy within SMOTE and RandomUnderSampler. Can I do this directly within GridSearchCV when tuning the parameters? For now, I have written the following for loop, which does not work (ValueError: too many values to unpack (expected 2)):
strategy_sm = [0.1, 0.3, 0.5]
strategy_un = [0.15, 0.30, 0.50]
best_strat = []
for k, n in strategy_sm, strategy_un:
    over = SMOTE(sampling_strategy=k)
    under = RandomUnderSampler(sampling_strategy=n)
    model = Pipeline([
        ('preprocessor', preprocessor),
        ('smote', over),
        ('random_under_sampler', under),
        ('classification', knn)
    ])
    model.fit(X_train, y_train)
    best_strat.append(model.score(X_train, y_train))
I am not very proficient in Python, and I suspect there is a better way to do this. Also, I'd like the for loop (if this is indeed the way to do it) to visualize the performance differences across combinations of sampling_strategy. Any ideas?
CodePudding user response:
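A quick note on the error first: in for k, n in strategy_sm, strategy_un: the loop iterates over the tuple (strategy_sm, strategy_un), so on the first pass Python tries to unpack the three-element list strategy_sm into k and n, hence the ValueError. If you did want a plain loop over every combination of the two strategies, a sketch using itertools.product would look like this:
from itertools import product

# try every (SMOTE strategy, undersampler strategy) pairing
for k, n in product(strategy_sm, strategy_un):
    over = SMOTE(sampling_strategy=k)
    under = RandomUnderSampler(sampling_strategy=n)
    # ... build, fit and score the pipeline as before
That said, a grid search handles both the looping and the evaluation for you.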
Below is an example of how you could compare the classifier's accuracy for different parameter combinations using 3-fold cross-validation and visualize the results.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
# generate some data
X, y = make_classification(n_classes=2, weights=[0.1, 0.9], n_features=20, random_state=42)
# define the pipeline
estimator = Pipeline([
    ('smote', SMOTE()),
    ('random_under_sampler', RandomUnderSampler()),
    ('classification', KNeighborsClassifier())
])
# define the parameter grid; the keys follow the pipeline's
# '<step name>__<parameter>' naming convention
param_grid = {
    'smote__sampling_strategy': [0.3, 0.4, 0.5],
    'random_under_sampler__sampling_strategy': [0.5, 0.6, 0.7]
}
# run a grid search to calculate the cross-validation
# accuracy associated to each parameter combination
clf = GridSearchCV(
    estimator=estimator,
    param_grid=param_grid,
    cv=StratifiedKFold(n_splits=3)
)
clf.fit(X, y)
# organize the grid search results in a data frame
res = pd.DataFrame(clf.cv_results_)
res = res.rename(columns={
    'param_smote__sampling_strategy': 'smote_strategy',
    'param_random_under_sampler__sampling_strategy': 'random_under_sampler_strategy',
    'mean_test_score': 'accuracy'
})
res = res[['smote_strategy', 'random_under_sampler_strategy', 'accuracy']]
print(res)
# smote_strategy random_under_sampler_strategy accuracy
# 0 0.3 0.5 0.829471
# 1 0.4 0.5 0.869578
# 2 0.5 0.5 0.899881
# 3 0.3 0.6 0.809269
# 4 0.4 0.6 0.819370
# 5 0.5 0.6 0.778669
# 6 0.3 0.7 0.708259
# 7 0.4 0.7 0.778966
# 8 0.5 0.7 0.768568
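# the best combination can also be read directly off the fitted search;
# given the scores above, this should be the 0.5 / 0.5 pair
print(clf.best_params_)
# {'random_under_sampler__sampling_strategy': 0.5, 'smote__sampling_strategy': 0.5}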
# plot the grid search results as a heatmap
res_ = res.pivot(index='smote_strategy', columns='random_under_sampler_strategy', values='accuracy')
sns.heatmap(res_, annot=True, cbar_kws={'label': 'accuracy'})
plt.show()
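One caveat: since your classes are imbalanced, plain accuracy can look good even when the minority class is poorly predicted. GridSearchCV accepts a scoring argument, so a variant of the search above (swapping in scikit-learn's built-in balanced_accuracy scorer, as a suggestion rather than a requirement) would be:
clf = GridSearchCV(
    estimator=estimator,
    param_grid=param_grid,
    cv=StratifiedKFold(n_splits=3),
    scoring='balanced_accuracy'  # averages recall over both classes
)
If you use a different scorer, keep in mind that mean_test_score in cv_results_ will then hold that metric rather than accuracy.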