I'm using the MTCNN network (https://towardsdatascience.com/face-detection-using-mtcnn-a-guide-for-face-extraction-with-a-focus-on-speed-c6d59f82d49) to detect faces and heads. For this I use the standard face-detection code: I get the coordinates of the top-left corner of the bounding box of the face (x, y) and the height and width of the box (h, w), then I expand the box to get the whole head in my crop:
import cv2
import mtcnn

detector = mtcnn.MTCNN()
img = cv2.imread('images/' + path_res)
faces = detector.detect_faces(img)  # list of detections
for result in faces:
    x, y, w, h = result['box']
    x1, y1 = x + w, y + h
    # expand the box by fixed pixel margins, clamped to the image bounds
    a = max(0, x - 100)
    b = max(0, y - 150)
    c = min(img.shape[1], x1 + 100)
    d = min(img.shape[0], y1 + 60)
    crop = img[b:d, a:c]  # <--- final crop of the head
The problem is that this solution works for some images, but for many others my crop contains the shoulders and the neck of the target person. I think it's because the pixel density differs between images (i.e. 150 pixels in one image doesn't cover the same physical distance as 150 pixels in another). So what can I do to extract the head properly? Many thanks.
CodePudding user response:
You can use relative sizes instead of absolute ones for the margins around the detected faces. For example, 50% of the box size on top, bottom, left and right:
import cv2
import mtcnn

detector = mtcnn.MTCNN()
img = cv2.imread('images/' + path_res)
faces = []
for result in detector.detect_faces(img):
    x, y, w, h = result['box']
    # margins of half the box size on each side, clamped to the image bounds
    b = max(0, y - (h // 2))
    d = min(img.shape[0], (y + h) + (h // 2))
    a = max(0, x - (w // 2))
    c = min(img.shape[1], (x + w) + (w // 2))
    face = img[b:d, a:c, :]
    faces.append(face)
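The relative-margin idea can also be factored into a small helper so it's easy to test without running the detector. This is just a sketch; `expand_box` and the `margin` parameter are illustrative names, not part of the MTCNN API:

```python
def expand_box(box, img_shape, margin=0.5):
    """Expand an (x, y, w, h) box by a fraction of its own size,
    clamped to the image bounds given as (height, width)."""
    x, y, w, h = box
    a = max(0, x - int(w * margin))
    b = max(0, y - int(h * margin))
    c = min(img_shape[1], x + w + int(w * margin))
    d = min(img_shape[0], y + h + int(h * margin))
    return a, b, c, d

# Example: a 40x60 face box at (100, 80) in a 480x640 image.
a, b, c, d = expand_box((100, 80, 40, 60), (480, 640))
# The resulting crop is twice the size of the detected box: 80x120 pixels.
```

Because the margin scales with the detected box, the crop covers roughly the same fraction of the head regardless of the image's resolution.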