Deep neural network making wrong predictions on real-time videos


I have created a model using TensorFlow to detect violence in videos. I trained the model on approximately 2000 videos by splitting them into frames.
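Since the notebook is not shown here, a minimal sketch of the kind of frame-extraction step described above, assuming OpenCV is used; the directory layout, sampling rate, and frame size are illustrative only:

```python
import os
import cv2

def extract_frames(video_path, out_dir, every_n=5, size=(224, 224)):
    """Save every n-th frame of a video as a resized JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frame = cv2.resize(frame, size)
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```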

But when I use the model on an unseen or real-time video, it does not predict correctly.

I just wanted to ask whether I have chosen the right hidden layers and whether there are any tweaks I can make to get correct predictions.

The neural_v2.ipynb notebook is used to train the model. The test_v2.py script loads the model, captures video, and makes predictions.
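As test_v2.py is not included here, the following is only a sketch of the usual load-and-predict loop, assuming a Keras model saved as `model.h5`, 224x224 RGB input scaled to [0, 1], and a single sigmoid violence/no-violence output; it would need to be adjusted to match the actual preprocessing used during training:

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")  # hypothetical path

cap = cv2.VideoCapture(0)  # webcam; pass a file path for a recorded video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocessing must match training exactly (size, BGR vs RGB, scaling).
    x = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(x, (224, 224)).astype("float32") / 255.0
    prob = float(model.predict(x[np.newaxis], verbose=0)[0][0])
    label = "violence" if prob > 0.5 else "no violence"
    cv2.putText(frame, f"{label} ({prob:.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("prediction", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A common reason a model does well on held-out frames but fails on live video is that the live preprocessing (resize, colour order, scaling) does not match what was done at training time, so that is worth checking first.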

If you need any more technical clarification, please ask me.

If anyone can help in any way, I would really appreciate it.

Dataset Link

Code Link

CodePudding user response:

You could set epochs=50 and train again; that may give better results.
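As a sketch, assuming the notebook uses the standard Keras `fit` API (the dataset names are placeholders):

```python
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=50,  # more epochs than before; watch val_loss for overfitting
)
```

More epochs only help until the validation loss stops improving, so an `EarlyStopping` callback is often safer than a fixed count.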

CodePudding user response:

Ideally, you would split your data into three sets: training, validation, and test (at the moment you are using your test data as your validation data).
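A sketch of such a three-way split, assuming the frames and labels are already in NumPy arrays (the names and split ratios are placeholders):

```python
from sklearn.model_selection import train_test_split

# First carve off a test set, then split the remainder into train/validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15, stratify=y_trainval, random_state=42)
```

With video data it is usually better to split at the video level rather than the frame level, so that frames from the same clip never end up in both the training and test sets.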

As per @finko's answer, I would try more epochs, but more importantly a deeper model. Experiment with some state-of-the-art models (like VGG16, ResNet152, MobileNet, etc.). All of these are available as Keras applications (https://www.tensorflow.org/api_docs/python/tf/keras/applications), as sketched below.
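A sketch of using one of those Keras applications (MobileNetV2 here) as a frozen feature extractor for a binary violence classifier; the input size and classification head are assumptions, not the asker's actual architecture:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # violence / no violence
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# Note: frames should be preprocessed with
# tf.keras.applications.mobilenet_v2.preprocess_input before training/inference.
```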
