My YOLOv5 model was trained on 416 * 416 images, and I need to detect objects in an input image of size 4008 * 2672. I split the image into 416 * 416 tiles and fed them to the model, and it detects objects fine. But when I stitch the predicted tiles back together to reconstruct the original image, objects at the tile edges get split: one half is detected in one tile and the other half in the neighboring tile. How can I merge these half detections into a single detection during reconstruction?
CodePudding user response:
Running a second detection pass after offsetting the tile grid would ensure that every object cut by the first split falls entirely inside a single tile (assuming objects are smaller than a tile). You could then combine the two sets of results to keep only the whole objects.
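A minimal sketch of that combining step, assuming all detections from both passes have already been mapped back to full-image coordinates as `(x1, y1, x2, y2, score)` tuples: run greedy non-maximum suppression over the union of the two passes, so the whole-object box (which normally scores higher) suppresses the two half-object boxes that overlap it. The function names and the IoU threshold are illustrative, not from any particular library.

```python
# Merge detections from two tiling passes (original grid + half-tile
# offset grid) with greedy NMS. Boxes are (x1, y1, x2, y2, score)
# in FULL-IMAGE coordinates.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_passes(boxes_pass1, boxes_pass2, iou_thr=0.45):
    """Greedy NMS over the union of both passes: keep the highest-scoring
    box first and drop any later box that overlaps a kept one."""
    candidates = sorted(boxes_pass1 + boxes_pass2,
                        key=lambda b: b[4], reverse=True)
    kept = []
    for box in candidates:
        if all(iou(box, k) < iou_thr for k in kept):
            kept.append(box)
    return kept
```

Note that a half box lying inside the full box has an IoU of roughly 0.5 with it (half the area), which is why the threshold here sits a little below 0.5; you would tune it for your data.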
CodePudding user response:
You wrote "I need to detect objects" but didn't explain why splitting the image is the solution you chose, so I have to ask: is splitting the image necessary? Here is the output of yolov4 on a (3840, 2160, 3) image. yolov4 resizes the image internally to the size specified as an argument (the YOLO family allows input dims (320, 320), (416, 416), (512, 512), (608, 608)); that resize should be transparent to the user.
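To illustrate what that internal resize does, here is a rough sketch of a letterbox resize: scale the image to fit the network input while preserving aspect ratio, then pad the remainder with a constant gray. This is a NumPy-only illustration (the nearest-neighbor sampling here stands in for the interpolation the real pipeline uses), not the actual yolov4/yolov5 code.

```python
import numpy as np

def letterbox(img, new_size=416, pad_value=114):
    """Resize img to fit a new_size x new_size square, keeping the
    aspect ratio and padding the rest with pad_value."""
    h, w = img.shape[:2]
    scale = min(new_size / h, new_size / w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via index sampling
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs]
    # paste the resized image centered on a gray canvas
    out = np.full((new_size, new_size) + img.shape[2:], pad_value,
                  dtype=img.dtype)
    top = (new_size - nh) // 2
    left = (new_size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out, scale, (left, top)
```

The returned `scale` and `(left, top)` offsets are what you would use to map the network's box predictions back to the original image coordinates, so no tiling or stitching is needed.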