I'm supposed to perform feature selection on my dataset (independent variables: some aspects of a patient; target variable: whether the patient is ill or not) using a decision tree. After that, with the selected features, I have to implement a different ML model.
My doubt is: when implementing the decision tree, is it necessary to have a train and a test set, or can I just fit the model on the whole dataset?
CodePudding user response:
It's necessary to split the dataset into a train and a test set, because otherwise you would measure performance on the same data used for training and could end up over-fitting without noticing it.
Over-fitting is when the training error keeps decreasing while the generalization error increases, where the generalization error is the model's ability to correctly classify new (never seen before) samples.
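As a minimal sketch with scikit-learn (assuming your data is in a pandas DataFrame loaded from a hypothetical patients.csv with a binary ill column; the importance threshold and the logistic regression used as the second model are illustrative choices, not the only option):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("patients.csv")          # hypothetical file name
X = df.drop(columns=["ill"])              # independent variables (patient aspects)
y = df["ill"]                             # target: patient ill or not

# Hold out a test set so performance is measured on unseen samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# Fit the decision tree on the training set only.
tree = DecisionTreeClassifier(max_depth=5, random_state=42)
tree.fit(X_train, y_train)
print("Tree test accuracy:", tree.score(X_test, y_test))

# Keep features whose importance exceeds an illustrative threshold.
importances = pd.Series(tree.feature_importances_, index=X.columns)
selected = importances[importances > 0.01].index.tolist()
print("Selected features:", selected)

# Train the second model on the selected features and evaluate on the same test set.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train[selected], y_train)
print("Second model test accuracy:", clf.score(X_test[selected], y_test))
```

Note that the test set is held out before any fitting, so both the feature selection step and the second model are evaluated on data they have never seen.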