
I am trying to develop an app that classifies an image taken with the camera or chosen from the photo library, using a model trained with Apple's Core ML. The model is properly trained and tested: it showed no problems when I tested it in the Preview tab after adding it to the Xcode project. But when I try to get a prediction from Swift code, the results are wrong and completely different from what Preview showed. It feels as if the model were untrained.

This is my code to access the prediction made by the model:

let pixelImage = buffer(from: (image ?? UIImage(named: "imagePlaceholder"))!)
self.imageView.image = image

guard let result = try? imageClassifier!.prediction(image: pixelImage!) else {
    fatalError("unexpected error happened")
}
        
let className: String = result.classLabel
let confidence: Double = result.classLabelProbs[result.classLabel] ?? 0.0
classifier.text = "\(className)\nWith Confidence:\n\(confidence)"

print("the classification result is: \(className)\nthe confidence is: \(confidence)")
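When the top label looks wrong, it can help to inspect the whole probability distribution rather than only the winning class. A small sketch, building on the result value from the snippet above (classLabelProbs is the [String: Double] dictionary Core ML generates for classifier models):

```swift
// Sort the per-class probabilities descending and print the top five,
// to see whether the model is confidently wrong or just undecided.
let topMatches = result.classLabelProbs
    .sorted { $0.value > $1.value }
    .prefix(5)
for (label, prob) in topMatches {
    print(String(format: "%@: %.3f", label, prob))
}
```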

imageClassifier is the model instance I create with this line, before the code segment above:

let imageClassifier = try? myImageClassifier(configuration: MLModelConfiguration())

myImageClassifier is the name of the ML model I created with Core ML.
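As an aside, loading the generated class with an explicit do/catch instead of try? surfaces any loading error rather than silently yielding nil. A sketch, using the same myImageClassifier class name from above:

```swift
// Load the generated model class and report failures loudly.
let imageClassifier: myImageClassifier?
do {
    imageClassifier = try myImageClassifier(configuration: MLModelConfiguration())
} catch {
    imageClassifier = nil
    print("Failed to load model: \(error)")
}
```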

The image itself is correct, yet it produces a different result from Preview even when I feed in the same image. The UIImage has to be converted to a CVPixelBuffer first, since prediction only accepts input of type CVPixelBuffer; pixelImage in the code segment above is the image after that conversion. I used the solution from this Stack Overflow question for the conversion. Here is the code, in case something is wrong with it:

func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.size.width), Int(image.size.height),
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }
    let pixelData = CVPixelBufferGetBaseAddress(buffer)

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: pixelData,
                                  width: Int(image.size.width), height: Int(image.size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: rgbColorSpace,
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
        return nil
    }

    // Flip the coordinate system so UIKit's top-left origin draws correctly.
    context.translateBy(x: 0, y: image.size.height)
    context.scaleBy(x: 1.0, y: -1.0)

    UIGraphicsPushContext(context)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()

    return buffer
}

I don't think there is anything wrong with the model itself, only with the way I have implemented it in the app.
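For comparison, the Vision framework can drive the same Core ML model and handles scaling, cropping, and orientation itself, which takes the hand-written pixel-buffer conversion out of the equation. A hedged sketch, assuming the same generated myImageClassifier class (orientation handling is omitted for brevity):

```swift
import Vision
import CoreML
import UIKit

func classify(_ image: UIImage, completion: @escaping (String, Double) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? myImageClassifier(configuration: MLModelConfiguration()),
          let visionModel = try? VNCoreMLModel(for: coreMLModel.model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The first observation carries the best label and its confidence.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        completion(top.identifier, Double(top.confidence))
    }
    // Center-crop matches how most image classifiers are trained.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

If Vision produces results matching Preview while the manual CVPixelBuffer path does not, that points at the conversion (size, pixel format, or scaling) rather than the model.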

Edit: I downloaded a sample project from Apple's tutorial and dropped its MobileNet model into my project. The code executed without error and the result was correct. Could something be wrong with the model I created?

Sheldon Li

1 Answer


I am getting correct results now. The only thing I changed was to run the app on a real device (my iPad mini 6, iPadOS 15.0) instead of the simulator; the model and the code remained unchanged. I can only assume nothing is actually wrong with my code or my model, and that some issue in the simulator caused this error, though I have no idea why it happens. My Xcode is version 13.1 and the simulator runs iOS 15. If this is indeed a bug, then Apple needs to fix it; it really makes life harder for us developers.
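If the discrepancy is tied to the simulator's GPU path, forcing CPU-only inference is a cheap experiment. A sketch, assuming the same generated myImageClassifier class:

```swift
let configuration = MLModelConfiguration()
// Restrict Core ML to the CPU; GPU (and Neural Engine) code paths
// can produce different numerics than a physical device.
configuration.computeUnits = .cpuOnly
let cpuOnlyClassifier = try? myImageClassifier(configuration: configuration)
```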

  • On the Simulator, please try: let configuration = MLModelConfiguration(); configuration.computeUnits = .cpuOnly. In some cases I get different inference results for GPU and CPU. – Dmytro Hrebeniuk Nov 12 '21 at 14:39
  • @DmytroHrebeniuk I tried this, but I still get different and wrong predictions even when I input the same image multiple times. I also tried .cpuAndGPU and .all, but nothing changed. It still works correctly when I run it on a real device. – Sheldon Li Nov 14 '21 at 02:35
  • Thanks for sharing, this saved my day. – Young Jan 22 '22 at 12:28