Apple's new CoreML framework has a prediction function that takes a CVPixelBuffer. In order to classify a UIImage, a conversion must be made between the two. Here is conversion code I got from an Apple engineer:
1 // image has been defined earlier
2
3 var pixelbuffer: CVPixelBuffer? = nil
4
5 CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelbuffer)
6 CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))
7
8 let colorspace = CGColorSpaceCreateDeviceGray()
9 let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: 0)!
10
11 bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
12
13 CVPixelBufferUnlockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))
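Once the buffer is filled, it can be passed straight to the model's generated prediction method. A minimal sketch, assuming a hypothetical wrapper class MyClassifier that Xcode generated from a .mlmodel whose image input is labelled "image" (your generated class name, input label, and output properties will differ):

import CoreML

// MyClassifier and its `image` parameter label are placeholders for the
// wrapper class Xcode generates from your own .mlmodel file.
let model = MyClassifier()
if let buffer = pixelbuffer {
    // The generated prediction method throws, so handle or discard the error.
    let output = try? model.prediction(image: buffer)
}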
The solution above is in Swift and is for a grayscale image. Changes that must be made depending on the type of image are:

- Line 5 | kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
- Line 8 | colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB for RGB)
- Line 9 | bitsPerComponent stays at 8: CGContext takes the bits per single component, not per pixel, and kCVPixelFormatType_32ARGB packs four 8-bit components into 32 bits per pixel
- Line 9 | bitmapInfo to a nonzero CGBitmapInfo value that carries alpha information (CGImageAlphaInfo.noneSkipFirst.rawValue is one option that matches 32ARGB)

An RGB version that applies all four changes is sketched below.
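A rough sketch of that RGB variant, wrapped in a helper function (the function name pixelBuffer(from:) is my own, and CGImageAlphaInfo.noneSkipFirst is one reasonable bitmapInfo choice for 32ARGB, not the only one):

import UIKit
import CoreVideo

func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    var pixelbuffer: CVPixelBuffer? = nil

    // Line 5 change: 32-bit ARGB instead of the one-component grayscale format
    CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, nil, &pixelbuffer)
    guard let buffer = pixelbuffer, let cgImage = image.cgImage else { return nil }

    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))

    // Line 8 change: device RGB color space instead of device gray
    let colorspace = CGColorSpaceCreateDeviceRGB()

    // Line 9 changes: bitsPerComponent stays 8 (four 8-bit components = 32 bits
    // per pixel), and bitmapInfo must carry alpha information for an RGB context
    let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(buffer), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer), space: colorspace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

    bitmapContext?.draw(cgImage, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    return buffer
}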