CIFilter color cube data loading


I have around 50 3D LUTs (stored as PNG images, each about 900 KB) and use the CIColorCube filter to generate filtered images. I use a UICollectionView to display a 100x100 filtered thumbnail for each LUT (like in the Photos app). The problem is that UICollectionView scrolling becomes extremely slow (nowhere near the smoothness of the Photos app) when I generate the filtered images as the user scrolls. I thought of pre-generating the filtered images, but it takes around 150 milliseconds to generate the cube data from one LUT PNG, so preparing 50 thumbnails takes around 7-8 seconds, which is too long. That same cost is also exactly the culprit for the scrolling performance. I am wondering what I can do to make it smooth like the Photos app and other photo editing apps. Here is my code to generate cube data from a LUT PNG. I believe the fix is more of a Core Image/Metal trick than a UIKit/DispatchQueue/NSOperation one.

    public static func colorCubeDataFromLUTPNGImage(_ image: UIImage, lutSize: Int) -> Data? {

        let size = lutSize

        let lutImage    = image.cgImage!
        let lutWidth    = lutImage.width
        let lutHeight   = lutImage.height
        let rowCount    = lutHeight / size
        let columnCount = lutWidth / size

        if (lutWidth % size != 0) || (lutHeight % size != 0) || (rowCount * columnCount != size) {
            NSLog("Invalid colorLUT")
            return nil
        }

        let bitmap = getBytesFromImage(image: image)!
        let floatSize = MemoryLayout<Float>.size

        // Capacity is in Float elements, not bytes: size^3 entries x 4 channels.
        let cubeData = UnsafeMutablePointer<Float>.allocate(capacity: size * size * size * 4)
        var z = 0
        var bitmapOffset = 0

        for _ in 0 ..< rowCount {
            for y in 0 ..< size {
                let tmp = z
                for _ in 0 ..< columnCount {
                    for x in 0 ..< size {

                        let alpha   = Float(bitmap[bitmapOffset]) / 255.0
                        let red     = Float(bitmap[bitmapOffset + 1]) / 255.0
                        let green   = Float(bitmap[bitmapOffset + 2]) / 255.0
                        let blue    = Float(bitmap[bitmapOffset + 3]) / 255.0

                        let dataOffset = (z * size * size + y * size + x) * 4

                        cubeData[dataOffset + 3] = alpha
                        cubeData[dataOffset + 2] = red
                        cubeData[dataOffset + 1] = green
                        cubeData[dataOffset + 0] = blue
                        bitmapOffset += 4
                    }
                    z += 1
                }
                z = tmp
            }
            z += columnCount
        }

        let colorCubeData = Data(bytesNoCopy: cubeData, count: size * size * size * 4 * floatSize, deallocator: .free)

        return colorCubeData
    }
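The offset arithmetic in those nested loops can be checked in isolation, without any Core Graphics involvement. A minimal sketch (pure Swift, no UIKit; `cubeOffsets` is a hypothetical helper mirroring the loop above) for a tiny 4-point cube laid out as 2x2 tiles:

```swift
// Mirrors the loop structure of colorCubeDataFromLUTPNGImage and collects
// every dataOffset it would write, so the mapping can be verified.
func cubeOffsets(size: Int, rowCount: Int, columnCount: Int) -> [Int] {
    var offsets: [Int] = []
    var z = 0
    for _ in 0 ..< rowCount {
        for y in 0 ..< size {
            let tmp = z
            for _ in 0 ..< columnCount {
                for x in 0 ..< size {
                    offsets.append((z * size * size + y * size + x) * 4)
                }
                z += 1
            }
            z = tmp
        }
        z += columnCount
    }
    return offsets
}

// A 4-point cube stored as 2x2 tiles of 4x4 pixels: rowCount * columnCount == size.
let offsets = cubeOffsets(size: 4, rowCount: 2, columnCount: 2)
assert(offsets.count == 4 * 4 * 4)          // one write per cube entry
assert(Set(offsets).count == 4 * 4 * 4)     // every RGBA slot written exactly once
assert(offsets.max()! == (4 * 4 * 4 - 1) * 4)
```

If any of the three assertions fails for your tile layout, the LUT image and the index math disagree, which shows up as color banding rather than a crash.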


fileprivate static func getBytesFromImage(image:UIImage?) -> [UInt8]?
{
    var pixelValues: [UInt8]?
    if let imageRef = image?.cgImage {
        let width = Int(imageRef.width)
        let height = Int(imageRef.height)
        let bitsPerComponent = 8
        let bytesPerRow = width * 4
        let totalBytes = height * bytesPerRow

        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var intensities = [UInt8](repeating: 0, count: totalBytes)

        let contextRef = CGContext(data: &intensities, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        contextRef?.draw(imageRef, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height)))

        pixelValues = intensities
    }
    // Return the optional as-is; force-unwrapping here would crash when image is nil.
    return pixelValues
}

And here is my code for UICollectionViewCell setup:

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let lutPath = self.lutPaths[indexPath.item]

        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FilterCell", for: indexPath) as! FilterCell

        if let lutImage = UIImage(contentsOfFile: lutPath) {
            let renderer = CIFilter(name: "CIColorCube")!
            let lutData = ColorCubeHelper.colorCubeDataFromLUTPNGImage(lutImage, lutSize: 64)

            renderer.setValue(lutData!, forKey: "inputCubeData")
            renderer.setValue(64, forKey: "inputCubeDimension")
            renderer.setValue(inputCIImage, forKey: kCIInputImageKey)
            let outputImage = renderer.outputImage!

            let cgImage = self.ciContext.createCGImage(outputImage, from: outputImage.extent)!
            cell.configure(image: UIImage(cgImage: cgImage))

        } else {
            NSLog("LUT not found at \(indexPath.item)")
        }

        return cell
    }

CodePudding user response:

We found that you can render the LUT image directly into a float-based context to get the format that is needed by CIColorCube:

// render LUT into a 32-bit float context, since that's the data format needed by CIColorCube
let pixelData = UnsafeMutablePointer<simd_float4>.allocate(capacity: lutImage.width * lutImage.height)
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.floatComponents.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
guard let bitmapContext = CGContext(data: pixelData,
                                    width: lutImage.width,
                                    height: lutImage.height,
                                    bitsPerComponent: MemoryLayout<simd_float4.Scalar>.size * 8,
                                    bytesPerRow: MemoryLayout<simd_float4>.size * lutImage.width,
                                    space: lutImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
                                    bitmapInfo: bitmapInfo)
else {
    assertionFailure("Failed to create bitmap context for conversion")
    return nil
}
bitmapContext.draw(lutImage, in: CGRect(x: 0, y: 0, width: lutImage.width, height: lutImage.height))

let lutData = Data(bytesNoCopy: pixelData, count: bitmapContext.bytesPerRow * bitmapContext.height, deallocator: .free)

However, if I remember correctly, we had to swap the red and blue components in our LUT images since Core Image uses the BGRA format (as you do in your code as well).
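If you do need to reorder channels, the swap is straightforward to do over the float buffer before wrapping it in `Data`. A minimal sketch using the standard library's `SIMD4<Float>` (the helper name is illustrative, not part of any API):

```swift
// Swap the first and third channels of each RGBA/BGRA float pixel in place.
func swapRedBlue(_ pixels: inout [SIMD4<Float>]) {
    for i in pixels.indices {
        let p = pixels[i]
        pixels[i] = SIMD4<Float>(p.z, p.y, p.x, p.w)  // (r,g,b,a) <-> (b,g,r,a)
    }
}

var buffer: [SIMD4<Float>] = [SIMD4(1.0, 0.5, 0.0, 1.0)]  // R=1, G=0.5, B=0, A=1
swapRedBlue(&buffer)
assert(buffer[0] == SIMD4<Float>(0.0, 0.5, 1.0, 1.0))
```

Applying this once while building the cube data is far cheaper than authoring a second set of LUT images.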

Also, in your collection view data source, it will probably improve performance if you return the cell as fast as possible and offload thumbnail generation to a background queue that sets the cell's image when done. Like this:

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let lutPath = self.lutPaths[indexPath.item]

    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FilterCell", for: indexPath) as! FilterCell

    DispatchQueue.global(qos: .background).async {
        if let lutImage = UIImage(contentsOfFile: lutPath) {
            let renderer = CIFilter(name: "CIColorCube")!
            let lutData = ColorCubeHelper.colorCubeDataFromLUTPNGImage(lutImage, lutSize: 64)

            renderer.setValue(lutData!, forKey: "inputCubeData")
            renderer.setValue(64, forKey: "inputCubeDimension")
            renderer.setValue(inputCIImage, forKey: kCIInputImageKey)
            let outputImage = renderer.outputImage!

            let cgImage = self.ciContext.createCGImage(outputImage, from: outputImage.extent)!

            DispatchQueue.main.async {
                // Guard against cell reuse: only set the image if this cell
                // still represents the same index path.
                if collectionView.indexPath(for: cell) == indexPath {
                    cell.configure(image: UIImage(cgImage: cgImage))
                }
            }
        } else {
            NSLog("LUT not found at \(indexPath.item)")
        }
    }

    return cell
}
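Independent of the threading, caching the generated thumbnails (or at least the cube `Data` per LUT path) avoids redoing the 150 ms conversion on every scroll pass. A minimal sketch with `Data` standing in for the rendered thumbnail; the class and its API are hypothetical:

```swift
import Foundation

// Caches generated values keyed by LUT path, so each filter thumbnail
// (or cube data blob) is computed only once and reused while scrolling.
final class ThumbnailCache {
    private let cache = NSCache<NSString, NSData>()

    func value(for path: String, generate: () -> Data) -> Data {
        let key = path as NSString
        if let cached = cache.object(forKey: key) {
            return cached as Data  // cache hit: no recomputation
        }
        let fresh = generate()
        cache.setObject(fresh as NSData, forKey: key)
        return fresh
    }
}

let cache = ThumbnailCache()
var generations = 0
let make = { () -> Data in generations += 1; return Data([1, 2, 3]) }
_ = cache.value(for: "lut1.png", generate: make)
_ = cache.value(for: "lut1.png", generate: make)
assert(generations == 1)  // second request is served from the cache
```

`NSCache` also evicts entries under memory pressure, which matters with 50 LUTs in flight.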

CodePudding user response:

Here is what I think is a more efficient way to get the bytes out of the LUT image with the correct component ordering, while staying within Core Image all the way through until the image is rendered to the screen.

        guard let image = CIImage(contentsOf: url) else {
            return
        }
        let totalPixels = Int(image.extent.width) * Int(image.extent.height)
        let pixelData = UnsafeMutablePointer<simd_float4>.allocate(capacity: totalPixels)

        // [.workingColorSpace: NSNull()] below is important.
        // We don't want any color conversion when rendering pixels to buffer.
        let context = CIContext(options: [.workingColorSpace: NSNull()])
        context.render(image,
                       toBitmap: pixelData,
                       rowBytes: Int(image.extent.width) * MemoryLayout<simd_float4>.size,
                       bounds: image.extent,
                       format: .RGBAf, // Float32 per component in that order
                       colorSpace: nil)

        let dimension = Int(cbrt(Double(totalPixels)).rounded())
        let data = Data(bytesNoCopy: pixelData, count: totalPixels * MemoryLayout<simd_float4>.size, deallocator: .free)
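The dimension recovered from the pixel count via `cbrt` can be sanity-checked with plain arithmetic. For example, assuming a square 512x512 LUT image, which is the standard layout for a 64-point cube:

```swift
import Foundation

// A 64-point cube holds 64^3 = 262,144 entries, which is exactly
// the pixel count of a 512x512 LUT image.
let totalPixels = 512 * 512
let dimension = Int(cbrt(Double(totalPixels)).rounded())
assert(dimension == 64)
assert(dimension * dimension * dimension == totalPixels)
```

Rounding before truncating matters here: `cbrt` on a perfect cube can land a hair below the integer due to floating-point error.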

The assumption that Core Image uses BGRA format is incorrect. Core Image uses RGBA color formats (RGBA8, RGBAf, RGBAh, and so forth). The CIColorCube lookup table is laid out in BGR order, but the colors themselves are in RGBAf format, where each component is represented by a 32-bit floating-point number.

Of course, for the code above to work, the LUT image has to be laid out in a certain way. Here is an example of an identity LUT PNG: Identity LUT

BTW, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951. Fresh off the press. It doesn't have the feature of loading the lookup table from a LUT PNG (yet), but it has a host of other useful features and provides a rich playground to experiment with every single filter out there.
