
I have this code for building a pixel buffer, but there are two problems. First, for whatever reason it requires that the images be 224 by 224. Second, the CoreML predictions are not very accurate, so there must be a more accurate way of setting up the pixel buffer.

import CoreVideo
import CoreImage
import UIKit

struct PixelBufferGenerator {
    /// Renders a UIImage into a newly allocated 32ARGB CVPixelBuffer of the same size.
    static func buffer(from image: UIImage) -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        let width = Int(image.size.width)
        let height = Int(image.size.height)

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32ARGB, attributes, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        // Lock the buffer while CoreGraphics writes into its backing memory.
        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }

        // Flip the context: CoreGraphics uses a bottom-left origin, while
        // UIImage.draw(in:) assumes UIKit's top-left origin.
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context)
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        UIGraphicsPopContext()

        return buffer
    }
}

This solution seems a bit verbose.
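
One workaround I've tried is scaling the image down to the model's input size before building the buffer. A minimal sketch of that idea (the resized(to:) helper and the photo variable are just illustrations I wrote for this question; 224 x 224 is the size from the error message):

extension UIImage {
    // Illustration only: redraws the image at the given size.
    func resized(to size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

// Usage: scale to 224x224 (the size from the error) before buffering.
// `photo` stands in for whatever UIImage is being classified.
let resized = photo.resized(to: CGSize(width: 224, height: 224))
let buffer = resized.flatMap { PixelBufferGenerator.buffer(from: $0) }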

paralaxbison
  • what issue are you having exactly? CoreML should take care of the image resizing itself – Guig Jul 17 '17 at 22:14
  • It gives me an error saying that I have to resize the image to 224 x 224 and the program crashes. – paralaxbison Jul 17 '17 at 22:35
  • What model are you using? That's where the 224x224 requirement is. That's a pretty standard ML rule. I have two functions that resize a UIImage and return a CVPixelBuffer. You already are using one - and the other is even more "verbose" by about 3 times the lines of code. As for how accurate CoreML is or not, it's not *CoreML* that's the issue, it's the model and how well it was trained *before* it was imported into CoreML. –  Jul 18 '17 at 00:28
  • The two models I am using are GoogLeNetPlaces() and VGG16(). – paralaxbison Jul 18 '17 at 13:55
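
Following up on Guig's comment above: the Vision framework will scale the input to whatever size the model expects, so the manual resize can be skipped entirely. A rough sketch, assuming GoogLeNetPlaces is the class Xcode generated from the model named in the comments:

import UIKit
import Vision
import CoreML

// Sketch: classify a UIImage via Vision so it handles the 224x224 scaling.
// GoogLeNetPlaces is assumed to be the Xcode-generated model class.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    // Vision crops and scales the pixels to the model's input size for us.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}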

0 Answers