How to create DataFrame with feature importance from XGBClassifier made by GridSearchCV?


I use scikit-learn's GridSearchCV to find the best parameters for my XGBClassifier model, using code like this:

import xgboost as xgb
from sklearn.model_selection import GridSearchCV

grid_params = {
      'n_estimators' : [100, 500, 1000],
      'subsample' : [0.01, 0.05]
}

est = xgb.XGBClassifier()
grid_xgb = GridSearchCV(param_grid = grid_params,
                        estimator = est,
                        scoring = 'roc_auc',
                        cv = 4,
                        verbose = 0)
grid_xgb.fit(X_train, y_train)

print('best estimator:', grid_xgb.best_estimator_)
print('best AUC:', grid_xgb.best_score_)
print('best parameters:', grid_xgb.best_params_)

I need a feature importance DataFrame with my variables and their importance, something like below:

variable | importance
---------|-------
x1       | 12.456
x2       | 3.4509
x3       | 1.4456
...      | ...

How can I get the above DataFrame from my XGBClassifier fitted with GridSearchCV?

I tried to achieve that using the code below:

f_imp_xgb = grid_xgb.get_booster().get_score(importance_type='gain')
keys = list(f_imp_xgb.keys())
values = list(f_imp_xgb.values())

df_f_imp_xgb = pd.DataFrame(data = values, index = keys, columns = ['score']).sort_values(by='score', ascending = False)

But I get this error:

AttributeError: 'GridSearchCV' object has no attribute 'get_booster'

What can I do?

CodePudding user response:

The error occurs because GridSearchCV is only a wrapper around your estimator; the refit best model is stored in its best_estimator_ attribute. You can use

clf.best_estimator_.get_booster().get_score(importance_type='gain')

where clf is the fitted GridSearchCV.

import pandas as pd
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
np.random.seed(42)

# generate some dummy data
df = pd.DataFrame(data=np.random.normal(loc=0, scale=1, size=(100, 3)), columns=['x1', 'x2', 'x3'])
df['y'] = np.where(df.mean(axis=1) > 0, 1, 0)

# find the best model
X = df.drop(labels=['y'], axis=1)
y = df['y']

parameters = {
    'n_estimators': [100, 500, 1000],
    'subsample': [0.01, 0.05]
}

clf = GridSearchCV(
    param_grid=parameters,
    estimator=XGBClassifier(random_state=42),
    scoring='roc_auc',
    cv=4,
    verbose=0
)

clf.fit(X, y)

# get the feature importances
importances = clf.best_estimator_.get_booster().get_score(importance_type='gain')
print(importances)
# {'x1': 1.7825901508331299, 'x2': 1.4209487438201904, 'x3': 1.5004568099975586}

After that, you can create the DataFrame as follows:

importances = pd.DataFrame(importances, index=[0]).transpose().reset_index()
importances.columns = ['variable', 'importance']
print(importances)
#   variable  importance
# 0       x1    1.782590
# 1       x2    1.420949
# 2       x3    1.500457
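
If you also want the rows ordered from most to least important, as in the layout from the question, you can sort the resulting DataFrame. The sketch below also shows an alternative using the scikit-learn style feature_importances_ attribute of the fitted XGBClassifier; note it reports a normalized score for every column of X (so the numbers are on a different scale than get_score), and the column names variable/importance simply follow the example above.

# sort the data frame from the answer so the most important variable comes first
importances = importances.sort_values(by='importance', ascending=False).reset_index(drop=True)
print(importances)
#   variable  importance
# 0       x1    1.782590
# 1       x3    1.500457
# 2       x2    1.420949

# alternative sketch: feature_importances_ gives one normalized value per column
# of X, so features never used in a split still appear (with importance 0)
alt = pd.DataFrame({
    'variable': X.columns,
    'importance': clf.best_estimator_.feature_importances_
}).sort_values(by='importance', ascending=False).reset_index(drop=True)
print(alt)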