Getting same Output array for an image when predict using model.predict()


I am new to neural networks and I am working on prediction from images. I downloaded a Hand Gesture Recognition notebook from Kaggle (https://www.kaggle.com/code/benenharrington/hand-gesture-recognition-database-with-cnn) and saved the trained model using model.save('hand-guesture-model.h5').

When I use this model to predict on an image, I get the same output array no matter which image I pass in.

code:

from keras.models import load_model
import cv2
from PIL import Image
import numpy as np

model = load_model('hand-guesture-model.h5')

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

img = Image.open('test.jpg').convert('L')
img = img.resize((120, 320))
arr = np.array(img, dtype = 'float32')
arr = arr.reshape(1,120, 320,1)

prediction = model.predict(arr)
print(prediction)
np.argmax(prediction[0])

result:

1/1 [==============================] - 0s 77ms/step
[[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

No matter what image I use, I get the same answer.

CodePudding user response:

The model expects input pixel values in the range 0-1, not 0-255. Scaling the image by dividing it by 255 should fix your problem. Here is the full code:

from keras.models import load_model
from PIL import Image
import numpy as np

# load_model restores the compiled state; re-compiling is not needed for predict()
model = load_model('hand-guesture-model.h5')

img = Image.open('test.jpg').convert('L')  # convert to grayscale
img = img.resize((120, 320))  # note: PIL's resize takes (width, height) — check this matches the model's expected (height, width)
arr = np.array(img, dtype='float32')
arr = arr.reshape(1, 120, 320, 1)  # add batch and channel dimensions
arr = arr / 255.0  # scale pixel values to the 0-1 range the model was trained on

prediction = model.predict(arr)
print(prediction)
print(np.argmax(prediction[0]))
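To see why unscaled inputs freeze the output at a one-hot array: feeding 0-255 values into a network trained on 0-1 values inflates the logits by orders of magnitude, and softmax over very large logits collapses to 1 for the largest entry and 0 for the rest, regardless of the image. A minimal NumPy sketch (standalone illustration, not part of the model):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

# Logits of the magnitude produced by unscaled (0-255) inputs:
# softmax saturates to a one-hot vector, the same for every image.
large = np.array([900.0, 750.0, 600.0, 450.0, 300.0,
                  200.0, 120.0, 60.0, 20.0, 0.0])
print(softmax(large))  # ~[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]

# Logits 255x smaller (as with properly scaled inputs) give a
# spread-out probability distribution that actually varies.
small = large / 255.0
print(softmax(small))
```

This is the same symptom as in the question: the saturated softmax prints the constant [[1. 0. 0. ...]] array, while scaled inputs produce meaningful probabilities.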