
I'm trying to save a 16-bit depth PNG image in the Display P3 color space from a Metal texture on iOS. The texture has pixelFormat = .rgba16Unorm, and I extract its data with this code:

func dataProviderRef() -> CGDataProvider? {
    let pixelCount = width * height
    // bytesPerPixel and bytesPerRow come from a convenience extension on MTLTexture (see below).
    var imageBytes = [UInt8](repeating: 0, count: pixelCount * bytesPerPixel)
    let region = MTLRegionMake2D(0, 0, width, height)
    getBytes(&imageBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    // NSData(bytes:length:) copies the buffer, so the provider owns its own copy of the pixels.
    return CGDataProvider(data: NSData(bytes: &imageBytes, length: pixelCount * bytesPerPixel * MemoryLayout<UInt8>.size))
}

I figured out that the way to save a PNG image on iOS is to create a UIImage first, and to initialize it, I need to create a CGImage. The problem is that I don't know what to pass as the CGBitmapInfo. In the documentation I can see that you can specify the byte order for 32-bit formats, but not for 64-bit ones.
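For reference, here is a minimal sketch of that final saving step once the UIImage exists; the helper and its parameters are just an illustration, and whether pngData() preserves the full 16-bit depth is something I'd still have to verify:

import UIKit

func savePNG(_ image: UIImage, to url: URL) throws {
    // pngData() encodes the image's underlying CGImage as PNG data.
    guard let data = image.pngData() else {
        throw CocoaError(.fileWriteUnknown)
    }
    // Write the encoded bytes to disk.
    try data.write(to: url)
}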

The function I use to convert the texture to a UIImage is this:

extension UIImage {
    public convenience init?(texture: MTLTexture) {
        guard let rgbColorSpace = texture.defaultColorSpace else {
            return nil
        }
        let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]

        guard let provider = texture.dataProviderRef() else {
            return nil
        }
        guard let cgim = CGImage(
            width: texture.width,
            height: texture.height,
            bitsPerComponent: texture.bitsPerComponent,
            bitsPerPixel: texture.bitsPerPixel,
            bytesPerRow: texture.bytesPerRow,
            space: rgbColorSpace,
            bitmapInfo: bitmapInfo,
            provider: provider,
            decode: nil,
            shouldInterpolate: false,
            intent: .defaultIntent)
        else {
            return nil
        }
        self.init(cgImage: cgim)
    }
}

Note that "texture" is using a series of attributes that do not exist in MTLTexture. I created a simple extension for convenience. The only interesting bit I guess it's the color space, that at the moment is simply,

public extension MTLTexture {
    var defaultColorSpace: CGColorSpace? {
        switch pixelFormat {
        case .rgba16Unorm:
            return CGColorSpace(name: CGColorSpace.displayP3)
        default:
            return CGColorSpaceCreateDeviceRGB()
        }
    }
}
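For completeness, the remaining convenience properties referenced above (bitsPerComponent, bitsPerPixel, bytesPerPixel, bytesPerRow) would look roughly like this for an .rgba16Unorm texture. This is a sketch rather than the exact code in the repository:

import Metal

public extension MTLTexture {
    // Assumes .rgba16Unorm: 4 components, 16 bits (2 bytes) each.
    var bitsPerComponent: Int { return 16 }
    var bitsPerPixel: Int { return 64 }
    var bytesPerPixel: Int { return bitsPerPixel / 8 }      // 8 bytes per pixel
    var bytesPerRow: Int { return width * bytesPerPixel }
}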

It looks like the image I'm creating with the code above is being read at 4 bytes per pixel instead of 8, so I obviously end up with a funny-looking image...

How do I create the appropriate CGBitmapInfo? Is it even possible?

P.S. If you want to see the full code with an example, it's all in github: https://github.com/endavid/VidEngine/tree/master/SampleColorPalette


1 Answer


The answer was to use byteOrder16Little. For instance, I've replaced the bitmapInfo in the code above with this:

    // 16 bits per component: tell Quartz each 16-bit chunk is little-endian.
    let is16Bit = texture.bitsPerComponent == 16
    let bitmapInfo: CGBitmapInfo = [is16Bit ? .byteOrder16Little : .byteOrder32Big, CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]

(The alpha can be premultiplied as well).

The SDK documentation does not give many hints about why this is, but the book Programming with Quartz has a nice explanation of what these 16 bits mean:

The value byteOrder16Little specifies to Quartz that each 16-bit chunk of data supplied by your data provider should be treated in little endian order [...] For example, when using a value of byteOrder16Little for an image that specifies RGB format with 16 bits per component and 48 bits per pixel, your data provider supplies the data for each pixel where the components are ordered R, G, B, but each color component value is in little-endian order [...] For best performance when using byteOrder16Little, either the pixel size or the component size of the image must be 16 bits.

So for a 64-bit image in rgba16, the pixel size is 64 bits, but the component size is 16 bits. It works nicely :)
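Putting it all together, this is roughly what the corrected CGImage creation looks like for an .rgba16Unorm texture (16 bits per component, 64 bits per pixel). It just combines the snippets above, so treat it as a sketch:

import Metal
import CoreGraphics

func makeCGImage(from texture: MTLTexture) -> CGImage? {
    guard let colorSpace = texture.defaultColorSpace,
          let provider = texture.dataProviderRef() else {
        return nil
    }
    // 16 bits per component: each 16-bit chunk is little-endian in memory.
    let byteOrder: CGBitmapInfo = texture.bitsPerComponent == 16 ? .byteOrder16Little : .byteOrder32Big
    let bitmapInfo: CGBitmapInfo = [byteOrder, CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)]
    return CGImage(
        width: texture.width,
        height: texture.height,
        bitsPerComponent: texture.bitsPerComponent,  // 16
        bitsPerPixel: texture.bitsPerPixel,          // 64
        bytesPerRow: texture.bytesPerRow,            // width * 8
        space: colorSpace,
        bitmapInfo: bitmapInfo,
        provider: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent)
}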

(Thanks @warrenm !)
