Blurring a face detected with the Vision framework


I'm following Apple's tutorial on tracking the user's face in real time with the Vision framework, using a live camera feed rather than a still image.

https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time

It detects the face and uses CAShapeLayer to draw an outline around it and lines between the different facial landmarks.

fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution
    
    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)
    
    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)
    
    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)
    
    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not properly initialized")
        return
    }
    
    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)
    
    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5
    
    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5
    
    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)
    
    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer
    
    self.updateLayerGeometry()
}

How can I fill the region inside those lines (the detected face area) with a blurred view? I need to blur the face.

CodePudding user response:

You could try placing a UIVisualEffectView on top of your video feed, then adding a masking CAShapeLayer to that UIVisualEffectView so the blur only shows through the face region. I don't know whether that will work.
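Here's a rough sketch of that idea. It assumes you already convert the Vision bounding box into the preview view's coordinate space; FaceBlurOverlay, previewView, and faceRectInPreview are stand-in names, not part of Apple's sample:

import UIKit

// Sketch only: overlay a UIVisualEffectView on the preview and restrict it
// to the face region with a CAShapeLayer mask. Per the caveat below, masking
// a visual effect view's layer may break the effect; test on real hardware.
final class FaceBlurOverlay {
    private let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
    private let maskLayer = CAShapeLayer()

    func install(over previewView: UIView) {
        blurView.frame = previewView.bounds
        blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        // Wherever the mask layer draws opaquely is where the blur shows.
        blurView.layer.mask = maskLayer
        previewView.addSubview(blurView)
    }

    // Call once per frame with the face rect already converted to
    // previewView's coordinate space.
    func update(faceRectInPreview: CGRect) {
        // An oval roughly covering the face; swap in the landmark path if
        // you want a tighter region.
        maskLayer.path = UIBezierPath(ovalIn: faceRectInPreview).cgPath
    }
}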

The docs on UIVisualEffectView say:

When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.

I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)

You could also try filling a shape layer with a visual hash or a "pixellated" pattern instead of using a real blur. That would be faster and would probably render more reliably; a rough sketch follows.
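Something like this, where makePixelPatternImage is a hypothetical helper that tiles a few flat gray blocks to fake a pixellated look:

import UIKit

// Sketch only: render a small tile of flat blocks and use it as a pattern
// fill for the face shape layer, instead of compositing a live blur.
func makePixelPatternImage(blockSize: CGFloat = 12) -> UIImage {
    let size = CGSize(width: blockSize * 2, height: blockSize * 2)
    return UIGraphicsImageRenderer(size: size).image { context in
        let grays: [UIColor] = [.darkGray, .gray, .lightGray, .gray]
        for (index, color) in grays.enumerated() {
            color.setFill()
            context.fill(CGRect(x: CGFloat(index % 2) * blockSize,
                                y: CGFloat(index / 2) * blockSize,
                                width: blockSize,
                                height: blockSize))
        }
    }
}

let faceCoverLayer = CAShapeLayer()
faceCoverLayer.fillColor = UIColor(patternImage: makePixelPatternImage()).cgColor
// Update faceCoverLayer.path each frame with the face rect or landmark path.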

Note that face detection tends to be a little jumpy. It can drop out for a few frames, or lag during quick pans or scene changes. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable: it would only take a few un-blurred frames for somebody's identity to be revealed.
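One partial hedge against dropout is to keep the cover in place for a short grace period after detection disappears, rather than removing it on the first missed frame. A minimal sketch, assuming a per-frame detection callback (didProcessFrame and its parameters are hypothetical names):

// Sketch only: hide the cover layer only after detection has been gone for
// a while, so brief dropouts don't expose the face.
var framesSinceLastDetection = 0
let gracePeriodFrames = 15 // roughly half a second at 30 fps; tune to taste

func didProcessFrame(faceDetected: Bool, coverLayer: CALayer) {
    if faceDetected {
        framesSinceLastDetection = 0
        coverLayer.isHidden = false
    } else {
        framesSinceLastDetection += 1
        coverLayer.isHidden = framesSinceLastDetection > gracePeriodFrames
    }
}

That helps with brief dropouts, but not with lag during fast motion, so the caveat above still applies.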
