
I'm able to convert a UIImage to an ARGB CVPixelBuffer, but now I'm trying to convert the UIImage to a grayscale buffer. I thought I had it, since the code runs without errors, but the Core ML model complains:

"Error Domain=com.apple.CoreML Code=1 "Image is not expected type 8-Gray, instead is Unsupported (40)"

Here is the grayscale CGContext I have so far:

public func pixelBufferGray(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attributes = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                      kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]

    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_8IndexedGray_WhiteIsZero,
                                     attributes as CFDictionary, &pixelBuffer)

    guard status == kCVReturnSuccess, let imageBuffer = pixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let imageData = CVPixelBufferGetBaseAddress(imageBuffer)

    guard let context = CGContext(data: imageData, width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
        return nil
    }

    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)

    UIGraphicsPushContext(context)
    self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))

    return imageBuffer
}

Any help would be greatly appreciated

user1988824
  • When you're creating a context in this code, have you tried replacing the space argument with kCVPixelFormatType_8IndexedGray_WhiteIsZero or kCVPixelFormatType_8Indexed? – haik.ampardjian Jun 13 '17 at 20:02
  • Thanks for your help, the correct pixel format is kCVPixelFormatType_OneComponent8 for a grayscale UIImage – user1988824 Jun 13 '17 at 20:31
  • You could've up vote my comment at least lol – haik.ampardjian Jun 13 '17 at 20:54
  • i just did, cheers! – user1988824 Jun 13 '17 at 21:42
  • Glad you got it working... but you don't need to do your own pixel buffer conversion to feed images into Core ML models: Vision framework does that for you. See https://stackoverflow.com/q/44400741/957768 – rickster Jun 14 '17 at 04:00
  • Great question @Rickster!! I already know about the Vision framework; I actually tried it first, and believe it or not, it did not work with the type of input I had from this custom vision model (the VNRequest result was nil). Using a pixel buffer, I got the expected result, though. Cheers! – user1988824 Jun 14 '17 at 14:14
  • Error message could be better, agreed. – Alex Brown Jun 14 '17 at 17:09

1 Answer


Even though the image is called grayscale, the correct pixel format is kCVPixelFormatType_OneComponent8.

Hope this complete code snippet helps someone along:

extension UIImage {

    public func pixelBufferGray(width: Int, height: Int) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let attributes = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                          kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]

        // One 8-bit component per pixel: the format Core ML expects for 8-Gray input.
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_OneComponent8,
                                         attributes as CFDictionary, &pixelBuffer)

        guard status == kCVReturnSuccess, let imageBuffer = pixelBuffer else {
            return nil
        }

        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let imageData = CVPixelBufferGetBaseAddress(imageBuffer)

        // Draw directly into the buffer's memory through a grayscale CGContext.
        guard let context = CGContext(data: imageData, width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
            return nil
        }

        // Flip the coordinate system so the UIImage draws right side up.
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)

        UIGraphicsPushContext(context)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))

        return imageBuffer
    }
}
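Since pixelBufferGray calls self.draw(in:), it belongs in an extension on UIImage, and you call it on the image you want to feed the model. A minimal usage sketch follows; the model class MyGrayscaleModel, its image input name, and the 224×224 input size are assumptions you should replace with your own model's generated class and expected dimensions:

```swift
import UIKit
import CoreML

// Hypothetical usage: MyGrayscaleModel stands in for the Swift class
// Xcode generates from your .mlmodel file.
func classify(_ image: UIImage) {
    // Size must match the model's declared grayscale image input.
    guard let buffer = image.pixelBufferGray(width: 224, height: 224) else {
        print("Could not create grayscale pixel buffer")
        return
    }
    let model = MyGrayscaleModel()
    if let output = try? model.prediction(image: buffer) {
        print(output)
    }
}
```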
user1988824
  • I don't understand how to use this code snippet. Where does the context get the image info from? Is this a UIImage extension? Please provide an example, thanks – Juan Boero Apr 15 '19 at 21:14