Neural network (CNN) reaches 88% training-set accuracy but only 50% test-set accuracy. Why?

Time: 11-16

I am using a CNN to classify one-dimensional data: 206 samples divided into 5 classes. The training-set accuracy reaches 88%, but the test-set accuracy is only 50%. Why is the test accuracy so low? I can't see the reason. Can anyone explain?

CodePudding user response:

Hello, could you post your code for converting the two-dimensional CNN to a one-dimensional CNN?

CodePudding user response:

This feels like overfitting.

CodePudding user response:

This is normal, not overfitting.
You might as well use cross-validation.
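
Cross-validation, as suggested above, gives a much less noisy estimate of generalization on a dataset this small than a single train/test split. A minimal NumPy-only sketch of k-fold splitting; the nearest-centroid classifier here is just a stand-in for the CNN, and the toy data is only shaped like the thread's problem (206 samples, 5 classes):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle indices once, then yield (train_idx, val_idx) for each fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

def nearest_centroid_acc(X_tr, y_tr, X_va, y_va):
    """Tiny stand-in model: classify each sample by its nearest class mean."""
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = ((X_va[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == y_va).mean()

# Toy data shaped like the thread's problem: 206 samples, 5 classes.
rng = np.random.default_rng(42)
y = rng.integers(0, 5, size=206)
X = rng.normal(size=(206, 16)) + y[:, None]  # class-dependent shift

accs = [nearest_centroid_acc(X[tr], y[tr], X[va], y[va])
        for tr, va in kfold_indices(len(y), k=10)]
print(f"10-fold accuracy: {np.mean(accs):.2f} +/- {np.std(accs):.2f}")
```

The mean and spread across the 10 folds tell you far more than one lucky or unlucky split; scikit-learn's `cross_val_score` does the same thing for real models.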

CodePudding user response:

Has the original poster found a solution yet? I have the same kind of problem: over 90% accuracy on the training set but 50% on the test set.

CodePudding user response:

reference 3rd floor CNMHX response:
This is normal, not overfitting.
You might as well use cross-validation.

Could you tell me why this is normal?

CodePudding user response:

I'd like to ask about a BP network: testing with the training-set data itself, the accuracy changes every run, at most forty percent and at worst zero. What causes this?

CodePudding user response:

I have this problem too. I trained a model with SVM: over 90% accuracy on one set, 50% on the other. Can anyone explain?

CodePudding user response:

This should be overfitting.

CodePudding user response:

If it were divided into two classes, fifty percent would just be random guessing.
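
Worth quantifying: with balanced classes, uniform random guessing scores about 1/k, so 50% would be chance level for 2 classes, but for the original poster's 5 classes chance is only 20%, and 50% is well above it. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

for k in (2, 5):
    y_true = rng.integers(0, k, size=n)   # balanced ground-truth labels
    y_guess = rng.integers(0, k, size=n)  # uniform random guesses
    acc = (y_true == y_guess).mean()
    print(f"{k} classes: random-guess accuracy ~= {acc:.3f} (chance = {1/k:.2f})")
```

So the model is learning something, just far less than the training accuracy suggests.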

CodePudding user response:

Machine learning fits the training set, driving training accuracy toward a global or local optimum. Training too hard overfits; it is too idealistic. You can use ten-fold cross-validation.
If the training accuracy is low from the start, it just means the parameters are badly set or this data isn't suited to this algorithm. After all, there is no universal algorithm.

CodePudding user response:

Definitely not normal. Add dropout to a few layers.
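
Dropout randomly zeroes activations during training so the network cannot co-adapt to a small training set. A minimal NumPy sketch of inverted dropout, which is what layers like Keras's `Dropout` or PyTorch's `nn.Dropout` implement internally:

```python
import numpy as np

def dropout(a, rate, training, rng):
    """Inverted dropout: zero units with prob `rate`, rescale the survivors."""
    if not training or rate == 0.0:
        return a  # at test time the layer is the identity
    mask = rng.random(a.shape) >= rate      # keep with prob 1 - rate
    return a * mask / (1.0 - rate)          # rescale so E[output] == input

rng = np.random.default_rng(0)
a = np.ones((4, 8))
a_train = dropout(a, rate=0.5, training=True, rng=rng)
a_test = dropout(a, rate=0.5, training=False, rng=rng)
print(a_train)  # roughly half zeros, survivors scaled to 2.0
print(a_test)   # unchanged at inference time
```

Typical rates are 0.2-0.5, applied after the dense layers near the output of a small CNN.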

CodePudding user response:

The training and test sets need to be split randomly; don't just take one contiguous chunk as the training set and another as the test set.
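
Taking a contiguous slice is dangerous because samples are often ordered by class or by acquisition time, so the test set ends up drawn from a different distribution. A stratified random split keeps the class proportions in both sets; sketched here in NumPy (scikit-learn's `train_test_split(..., stratify=y)` does the same):

```python
import numpy as np

def stratified_split(y, test_frac=0.2, seed=0):
    """Return (train_idx, test_idx); each class contributes test_frac of its samples."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))  # shuffle within the class
        n_test = max(1, int(round(test_frac * len(idx))))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# Imbalanced toy labels: 206 samples across 5 classes, stored sorted by class.
y = np.repeat(np.arange(5), [50, 45, 40, 38, 33])
tr, te = stratified_split(y)
print(len(tr), len(te))  # 164 42
```

With only ~206 samples, stratification matters: a plain random split can easily leave a class almost absent from the test set.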

CodePudding user response:

My training-set performance is also very good, and I validated on the validation set 100 times during training with very good results, but the test-set performance is poor. That shouldn't be overfitting, right??

CodePudding user response:

Isn't this overfitting? You could try a simpler model, or add a regularization term to the loss.
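
The usual "regularization term in the loss" is an L2 penalty on the weights: minimize data_loss + lam * ||w||^2, which pushes weights toward zero and discourages memorization. A sketch of how the penalty and its gradient enter one SGD step (in Keras this is `kernel_regularizer=regularizers.l2(lam)`, in PyTorch the optimizer's `weight_decay`):

```python
import numpy as np

def l2_penalty(weights, lam):
    """lam * sum of squared weights over all weight arrays (biases usually excluded)."""
    return lam * sum(float(np.sum(w ** 2)) for w in weights)

def sgd_step_with_l2(w, grad_data, lam, lr):
    """Gradient of lam*||w||^2 is 2*lam*w, so L2 shrinks weights every step."""
    return w - lr * (grad_data + 2.0 * lam * w)

w = np.array([3.0, -2.0])
print(l2_penalty([w], lam=0.01))  # 0.01 * (9 + 4) = 0.13
w_new = sgd_step_with_l2(w, grad_data=np.zeros(2), lam=0.01, lr=0.1)
print(w_new)  # weights shrink toward zero: [2.994 -1.996]
```

Even with zero data gradient the weights decay by a factor of (1 - 2*lam*lr) per step, which is why this is also called weight decay.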

CodePudding user response:

206 samples...
With such a small sample size it's all too easy to overfit; in fact any kind of problem is likely to occur.
And it's divided into five classes...

Either expand the sample, or find another approach.
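
With only ~40 samples per class, data augmentation is often the cheapest way to "expand the sample". For 1-D signals, common label-preserving transforms are jitter (additive noise), amplitude scaling, and small time shifts; a sketch with illustrative parameter values:

```python
import numpy as np

def augment_1d(x, rng, noise_std=0.05, scale_range=(0.9, 1.1), max_shift=5):
    """Return a randomly jittered, scaled, and circularly shifted copy of x."""
    x = x * rng.uniform(*scale_range)                        # amplitude scaling
    x = x + rng.normal(0.0, noise_std, x.shape)              # additive Gaussian jitter
    x = np.roll(x, rng.integers(-max_shift, max_shift + 1))  # small time shift
    return x

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))  # stand-in for one 1-D sample
batch = np.stack([augment_1d(signal, rng) for _ in range(8)])
print(batch.shape)  # (8, 100): eight distinct variants of one sample
```

Whether each transform actually preserves the label depends on the data (e.g. a time shift is fine for periodic signals but not for aligned ones), so pick transforms that match the physics of the problem.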

CodePudding user response:

Better to get more training samples, or use fewer classes. You can also try adding dropout.

CodePudding user response:

A bit over 200 samples, split into 5 classes; that's far too little data per category. What do you think a neural network is? Go collect more data.

CodePudding user response:

In this situation the features fed into the network may be the problem: there may be no clear correlation between your features and the labels, or there may be too few features. The high training accuracy only shows the network memorized the training-set features, while 50% on the test set shows the model generalizes poorly. If tuning doesn't help, you may really need to change the features.
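
One quick sanity check on whether the features carry any label information is a per-feature class-separation score: between-class variance of the per-class means divided by the mean within-class variance (an ANOVA-F-like statistic; scikit-learn's `f_classif` computes the proper version). A NumPy sketch on synthetic data with one informative and one pure-noise feature:

```python
import numpy as np

def separation_score(X, y):
    """Per-feature ratio: variance of class means / mean within-class variance."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    within = np.stack([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(1)
y = rng.integers(0, 5, size=200)
informative = rng.normal(size=200) + y  # correlated with the label
noise = rng.normal(size=200)            # carries no label information
X = np.column_stack([informative, noise])
scores = separation_score(X, y)
print(scores)  # first feature scores far higher than the second
```

If all of your real features score near the noise level, no amount of architecture tuning will fix the test accuracy.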

CodePudding user response:

reference 19th floor Wang Fanchao response:
In this situation the features fed into the network may be the problem: there may be no clear correlation between your features and the labels, or too few features. The high training accuracy shows the network memorized the training-set features, while 50% on the test set shows poor generalization. If tuning doesn't help, you may really need to change the features.

Hello, I have a dataset of 30k images, of which 13k are positive samples and 17k are negative, two classes. Using the AlexeyAB version of darknet, after about 15,000 iterations the training-set mAP = 69.04% and the test-set mAP = 63.07%. Could you tell me whether this counts as overfitting? I once saw a blog saying that a gap within 20% is acceptable?