TensorFlow - batch size error thrown


I am following along with this tutorial (https://colab.research.google.com/github/khanhlvg/tflite_raspberry_pi/blob/main/object_detection/Train_custom_model_tutorial.ipynb) from Colab, running it on my own Windows machine.

When I debug my script, it throws this error:

    The size of the train_data (0) couldn't be smaller than batch_size (4). To solve this problem, set the batch_size smaller or increase the size of the train_data.

It throws on this snippet of my code:

    model = object_detector.create(train_data, model_spec=spec, batch_size=4,
                                   train_whole_model=True, epochs=20,
                                   validation_data=val_data)
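For context, train_data and val_data come from the Model Maker DataLoader earlier in the tutorial. Here is a minimal sketch of that loading step; the directory paths and label names are placeholders, not my real ones:

    from tflite_model_maker import model_spec, object_detector

    spec = model_spec.get('efficientdet_lite0')

    # from_pascal_voc scans the images directory and the matching Pascal VOC
    # XML annotations; the resulting DataLoader's size is the number of
    # annotated images it actually managed to parse.
    train_data = object_detector.DataLoader.from_pascal_voc(
        'train/images', 'train/annotations', ['label_1', 'label_2'])
    val_data = object_detector.DataLoader.from_pascal_voc(
        'validate/images', 'validate/annotations', ['label_1', 'label_2'])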

My own training data contains 101 images, while the example from Colab contains only 62 in its training folder.

I understand it's complaining that the training data can't be smaller than the batch size, but I don't understand why it's throwing the error in the first place, since my training data is not empty.

On my own machine I have TensorFlow version 2.8.0, just like in the Colab.

I've tried batch sizes all the way from 0 to 100-plus, but it still gives me the same error.

I've tried dropping one sample so there are 100 images and setting the batch size to 2, 4, etc., but it still throws the error.

I'm coming to the conclusion that it is not loading the data correctly, but why?
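A quick sanity check for that theory, assuming train_data is a Model Maker DataLoader as in the tutorial, is to print its size right after loading:

    # If the loader silently failed to parse the images/annotations,
    # this prints 0, which matches the (0) in the error message.
    print('train examples:', train_data.size)
    print('val examples:', val_data.size)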

CodePudding user response:

For anybody running into the same issue as I was, here is my solution.

The reason this happens is a difference in Python versions: I was trying to run this locally with Python 3.8.10, while Colab runs 3.7.12.

I loaded all of my data on Colab using version 3.7.12 and trained my model there with no further issues.
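If you want to confirm which interpreter each environment is actually running before retraining, a standard-library check works in both places:

    import sys
    print(sys.version)  # e.g. 3.7.12 on Colab vs. 3.8.10 locally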
