I'm trying to build a face detection program with OpenCV from an open-source tutorial. The goal is to detect faces in a video and save each detected face to a folder, so the crops can later be used as a model for each face. I have two problems when I run the program:
- OpenCV saves the whole frame, not just the detected face
- Only one picture is saved, while I need one for every frame
[picture 1]
Are there any solutions?
Here's the code:

```python
model = cv2.CascadeClassifier("../model/haarcascade_frontalface_alt2.xml")
cap = cv2.VideoCapture('../video/videoplayback.mp4') #Video

while True:
    ret, image = cap.read()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    bounding_box = model.detectMultiScale(gray, scaleFactor=1.01,
        minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
    for (x, y, w, h) in bounding_box:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        image2 = image[y:(y + h), x:(x + w)]
        image3 = cv2.blur(image2, (40, 40))
        image[y:(y + h), x:(x + w)] = image3
        cv2.imwrite("../output_model/videos/image.jpg", image)
    cv2.imshow("hasil", image)
    if cv2.waitKey(1) and 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
CodePudding user response:
- To save only the face, crop the face region out of the frame (that is the `image2` variable) and write that crop to disk instead of the whole frame. Do it before blurring: in your loop the face is blurred first and only then is the frame saved, which is why the output contains the blurred face.

```python
image2 = image[y:(y + h), x:(x + w)]
cv2.imwrite("../output_model/videos/{}.jpg".format(counter), image2)
```
- To save more than one picture, use a counter that is incremented inside the detection loop and put it in the filename, so every detected face gets its own numbered file instead of overwriting the same image. Note that the counter goes up once per detected face, so a frame with two faces produces two files. Here's an example:

```python
counter = 0  # increment counter
while True:
    ret, image = cap.read()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    bounding_box = model.detectMultiScale(gray, scaleFactor=1.01,
        minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
    for (x, y, w, h) in bounding_box:
        counter += 1  # one more face detected
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        image2 = image[y:(y + h), x:(x + w)]
```
For completeness (with a few variable names changed), here is the full code:
```python
import cv2

model = cv2.CascadeClassifier("../model/haarcascade_frontalface_alt2.xml")
cap = cv2.VideoCapture('../video/videoplayback.mp4') #Video
counter = 0

while True:
    ret, image = cap.read()
    if not ret:  # stop when the video has no more frames
        break
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    bounding_box = model.detectMultiScale(gray, scaleFactor=1.01,
        minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
    for (x, y, w, h) in bounding_box:
        counter += 1
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        face = image[y:(y + h), x:(x + w)]
        cv2.imwrite("../output_model/videos/{}.jpg".format(counter), face)  # save the unblurred face
        blur_face = cv2.blur(face, (40, 40))
        image[y:(y + h), x:(x + w)] = blur_face  # put the blurred face back into the frame
    cv2.imshow("hasil", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```
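
One extra caveat, not part of the original answer: `cv2.imwrite` returns `False` and saves nothing if the target folder does not exist. A minimal sketch, assuming the same `../output_model/videos/` path as above, that creates the folder up front and checks the return value (the `dummy_face` array is just a stand-in for a real crop):

```python
import os

import cv2
import numpy as np

output_dir = "../output_model/videos"   # same output path as above (assumption)
os.makedirs(output_dir, exist_ok=True)  # create the folder if it is missing

# stand-in for a cropped face, just to exercise the write path
dummy_face = np.zeros((100, 100, 3), dtype=np.uint8)
ok = cv2.imwrite(os.path.join(output_dir, "0.jpg"), dummy_face)
if not ok:
    print("imwrite failed - check the output path")
```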