Four years ago, I used this type method, which is now deprecated (along with its AVCaptureStillImageOutput class):

```swift
class func jpegStillImageNSDataRepresentation(_ jpegSampleBuffer: CMSampleBuffer) -> Data?
```

```swift
let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(jpegBuffer)
```
The AVCapturePhotoOutput class, which is designed to replace AVCaptureStillImageOutput, does not offer an equivalent method. What should I use instead of the deprecated one?
CodePudding user response:
You do indeed need to use AVCapturePhotoOutput, something like this (a very rough sketch of an implementation; follow Apple's guide for the full picture):
```swift
import AVFoundation

// NSObject is required because AVCapturePhotoCaptureDelegate
// (adopted in the extension below) conforms to NSObjectProtocol.
class PhotoCapturer: NSObject {
    let photoOutput = AVCapturePhotoOutput()

    func takePhoto() {
        // Request JPEG output. Note that a fresh settings object
        // is required for every capture.
        let photoSettings = AVCapturePhotoSettings(
            format: [AVVideoCodecKey: AVVideoCodecType.jpeg]
        )
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }
}
```
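For the output to deliver anything, it must also be attached to a running AVCaptureSession. A minimal sketch of that wiring (permission checks and error handling omitted for brevity; `capturer` is assumed to be a PhotoCapturer instance as above):

```swift
import AVFoundation

let capturer = PhotoCapturer()
let session = AVCaptureSession()

session.beginConfiguration()
session.sessionPreset = .photo

// Attach the default camera as input.
if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}

// Attach the photo output we will capture with.
if session.canAddOutput(capturer.photoOutput) {
    session.addOutput(capturer.photoOutput)
}

session.commitConfiguration()
session.startRunning()  // in production, start off the main thread
```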
As you may notice, we also need to conform to the AVCapturePhotoCaptureDelegate protocol, like so:
```swift
extension PhotoCapturer: AVCapturePhotoCaptureDelegate {
    /// Called whenever a captured image is ready.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Handle the captured photo here.
    }
}
```

When the image is taken, this method receives it as an AVCapturePhoto. From here you have a variety of options, depending on your needs:
- `.fileDataRepresentation()`: a flat data representation suitable for saving to a file. Since we requested the JPEG format above, the data will be JPEG-encoded as well.
- `.cgImageRepresentation()`: a CGImage representation, good if you want to display or edit the image. Not quite JPEG, but JPEG-convertible using `UIImage(cgImage:).jpegData(compressionQuality:)`.
- Finally, there is `.pixelBuffer`, which returns the familiar CVPixelBuffer format, should you wish to reuse parts of your existing processing.
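As one illustrative example, the delegate callback could persist the JPEG data using `.fileDataRepresentation()`; the helper name and file name below are arbitrary, and error handling is simplified:

```swift
import AVFoundation
import Foundation

extension PhotoCapturer {
    // Illustrative sketch: write the captured JPEG to a temporary file.
    // Call this from photoOutput(_:didFinishProcessingPhoto:error:).
    func save(_ photo: AVCapturePhoto) {
        guard let data = photo.fileDataRepresentation() else { return }
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("capture.jpg")
        try? data.write(to: url)
    }
}
```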