I made an ML model that classifies images, and I want to use it in my Android application. The problem is that the model file is too heavy and requires an unreasonable amount of disk space for an Android app. So I was wondering if I could host the model's prediction function on a server, call it from my Python script, and have the server return the prediction to the client. I couldn't find any specific tutorials on hosting functions that return output back to the client. So how do I go about doing this?
Here is the function I wish to host:
from keras.models import load_model
from PIL import Image, ImageOps
import numpy as np

labels = ["Banana", "Fan", "Clock", "Coin", "Leaf",
          "Paper_airplane", "Pen", "Phone", "Spoon", "Tomato"]

model = load_model('keras_model.h5')
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)

# Here I wish to pass the image from my Python program and receive the prediction
def RunPrediction(img):
    size = (224, 224)
    # Image.LANCZOS replaces Image.ANTIALIAS, which was removed in Pillow 10
    image = ImageOps.fit(img, size, Image.LANCZOS)
    image_array = np.asarray(image)
    # Normalize pixel values to the [-1, 1] range the model expects
    normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
    data[0] = normalized_image_array
    # And I want to receive this output in my code
    prediction = model.predict(data)
    return prediction
CodePudding user response:
You can host the application as an HTTP(S) endpoint using any of the following frameworks:
- Flask - https://flask.palletsprojects.com/en/2.0.x/
- Bottle - https://bottlepy.org/docs/dev/
- FastAPI - https://fastapi.tiangolo.com/
The Flask documentation linked above covers building and hosting such an application.
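As a rough illustration, here is a minimal Flask sketch that wraps the question's preprocessing logic behind a POST endpoint. The endpoint path, port, and JSON response shape are my own choices, and the Keras `model.predict` call is stubbed out with zeros so the snippet runs without the `.h5` file; in a real deployment you would load the model once at startup and call it where indicated:

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image, ImageOps

app = Flask(__name__)

# In production, load the model once at startup, e.g.:
# from keras.models import load_model
# model = load_model('keras_model.h5')
labels = ["Banana", "Fan", "Clock", "Coin", "Leaf",
          "Paper_airplane", "Pen", "Phone", "Spoon", "Tomato"]

def run_prediction(image):
    """Preprocess a PIL image and return a (1, 10) score array.

    model.predict is stubbed with zeros here so the sketch runs
    without the .h5 file; swap in the real model in production.
    """
    image = ImageOps.fit(image, (224, 224))
    array = np.asarray(image).astype(np.float32)
    data = ((array / 127.0) - 1).reshape(1, 224, 224, 3)
    # prediction = model.predict(data)
    prediction = np.zeros((1, len(labels)), dtype=np.float32)  # stub
    return prediction

@app.route("/predict", methods=["POST"])
def predict():
    # The client sends the raw image bytes as the request body
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    scores = run_prediction(image)[0]
    best = int(np.argmax(scores))
    return jsonify({"label": labels[best], "score": float(scores[best])})

# To serve: app.run(host="0.0.0.0", port=5000)
```

The client (your Python script or the Android app) would then POST the image bytes to `/predict` and read the label from the JSON response, so the heavy model never ships with the app.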
CodePudding user response:
It is not a good idea to build or load the model on each request: the server should answer the user in under 1 second. It is better to create the model once, save it to disk, and load it when the server starts; then, after receiving a request, you only run the evaluation and return the predicted tag.
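The load-once idea above can be sketched like this. The Keras load is replaced by a `time.sleep` purely to simulate its cost, and `lru_cache` is one simple way to ensure the expensive load happens only on the first request; this is an illustration, not actual serving code:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1)
def get_model():
    # Stand-in for keras.models.load_model('keras_model.h5'):
    # the expensive load happens only on the first call.
    time.sleep(0.5)          # simulate slow model loading
    return {"name": "stub-model"}

def handle_request():
    model = get_model()      # cached: cheap after the first request
    # here you would run model.predict(...) and return the tag
    return model["name"]
```

The first request pays the loading cost; every later request reuses the cached model, which is what keeps response times under the one-second budget.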