I have seen SBS (sequential backward selection) explained on several sites, for example (https://www.analyticsvidhya.com/blog/2016/12/introduction-to-feature-selection-methods-with-an-example-or-how-to-select-the-right-variables/), which states:
- In backward elimination, we start with all the features and, at each iteration, remove the least significant feature, i.e. the one whose removal most improves the performance of the model. We repeat this until removing a feature no longer yields any improvement.
I wonder why I have to choose the number of features to select in the scikit-learn package, if SBS is supposed to stop removing features once the model no longer improves.
Am I missing something?
CodePudding user response:
There is a pull request aimed at adding exactly that stopping behaviour: https://github.com/scikit-learn/scikit-learn/pull/20145. (See also the linked issue #20137.)
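Until such a stopping criterion is available in the library, the behaviour you describe is easy to implement yourself. Below is a minimal, library-free sketch of sequential backward elimination with a plug-in scoring function (the `weights` scorer is a made-up toy, standing in for a real cross-validated model score): it greedily drops the feature whose removal most improves the score and stops as soon as no single removal helps.

```python
def backward_select(features, score_fn):
    """Sequential backward selection: start from the full feature set,
    greedily drop the feature whose removal most improves the score,
    and stop as soon as no single removal improves on the current score."""
    current = set(features)
    best_score = score_fn(current)
    while len(current) > 1:
        # Score every one-feature-removed candidate subset.
        candidates = [(score_fn(current - {f}), f) for f in current]
        cand_score, worst = max(candidates)
        if cand_score <= best_score:  # no improvement -> stop
            break
        current -= {worst}
        best_score = cand_score
    return current, best_score

# Toy scorer (hypothetical weights): features 'c' and 'd' actively hurt.
weights = {"a": 5, "b": 3, "c": -2, "d": -1}
score = lambda subset: sum(weights[f] for f in subset)

selected, final_score = backward_select(weights, score)
print(sorted(selected), final_score)  # ['a', 'b'] 8
```

In a real setting you would replace the toy `score` with something like the mean cross-validated score of your estimator fitted on the candidate feature subset; the stopping logic stays the same.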