
I'm working on a machine learning app that classifies hand-drawn numbers. I made a model using CreateML that supposedly has 100% accuracy (I will admit my sample size was only about 50 images per number). When I run it in my app, however, it does not work. To see if the problem was with my app, I downloaded Apple's Vision+CoreML example Xcode project and replaced the MobileNet classifier with my own. I loaded in images saved on my phone from my own app, and the classifications were still inaccurate. What makes this interesting is that when I tested the exact same images in the CreateML UI in the playground, where you can test images, the classification worked.

TL;DR: The image classification works in the CreateML live view in playgrounds but not in an exact copy of Apple's Vision+CoreML example project.

Here is an example of an image that I tried to classify, along with what shows up in the app and in the playground for a 7 and a 5 (screenshots omitted): the playground classifies both correctly, while the app does not.

  • Did you split your data into about 2/3 training data and 1/3 test data when building the model in the playground? – dktaylor Mar 20 '19 at 01:47
  • @dktaylor yeah I did, the model works on the UI for other new images not used in the training/testing as well – Anirudh Mar 20 '19 at 02:06
  • My best guess is that the difference is the crop of the image. I’m pretty sure the Apple vision example takes a center crop of the image passed in – dktaylor Mar 20 '19 at 02:09
  • 1
    @dktaylor Apple does run this line of code `request.imageCropAndScaleOption = .centerCrop` but I tried commenting it out and it gives the same result. – Anirudh Mar 20 '19 at 02:16
  • How different are the images you've trained the model on versus the images you're using in the app (image sizes, colors, etc)? What if you use one of the training images in the app, does it work then? – Matthijs Hollemans Mar 20 '19 at 10:25
  • @MatthijsHollemans The training images are directly from the app so the images are the same size, color, etc, and if I use one of the training images on the app it doesn't work but it works on the createML UI – Anirudh Mar 20 '19 at 23:02
  • My first debugging step would be to use the CheckInputImage app from my Core ML Survival Guide repo (https://github.com/hollance/coreml-survival-guide) to verify that the input image that Core ML / Vision sees really is what you expect it is. – Matthijs Hollemans Mar 21 '19 at 09:43

1 Answer


I had a similar issue for days. The problem is that CreateML may create a neural network that expects BGR input, while the Xcode project feeds it images in RGB color space. You can test your model in Python with the coremltools and PIL libraries.

Diagnosing The Problem

Get the metadata of your model

import coremltools
from PIL import Image

# Load your model.
mlmodel = coremltools.models.MLModel('Path/To/Your/Model.mlmodel')

# Print the model's metadata; the input description includes its colorSpace.
print(mlmodel)

The input description might look like this:

input {
  name: "image"
  shortDescription: "Input image to be classified"
  type {
    imageType {
      width: 299
      height: 299
      colorSpace: BGR
      imageSizeRange {
        widthRange {
          lowerBound: 299
          upperBound: -1
        }
        heightRange {
          lowerBound: 299
          upperBound: -1
        }
      }
    }
  }
}
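
If the metadata declares BGR but your network was really trained on RGB pixels (or the other way around), one thing you can try, shown below as a sketch that is not part of the original answer, is to re-declare the input color space in the spec with coremltools; the `Model_rgb.mlmodel` output name is just an example.

import coremltools
from coremltools.proto import FeatureTypes_pb2 as ft

# Load the raw protobuf spec so the declared input color space can be edited.
spec = coremltools.utils.load_spec('Path/To/Your/Model.mlmodel')
input_type = spec.description.input[0].type.imageType
print(ft.ImageFeatureType.ColorSpace.Name(input_type.colorSpace))

# Hypothetical fix: re-declare the input as RGB so Vision/Core ML feeds the
# channels in the order the network actually expects, then save a copy.
input_type.colorSpace = ft.ImageFeatureType.RGB
coremltools.utils.save_spec(spec, 'Model_rgb.mlmodel')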

Convert the color space of your input

import numpy as np

# Open the image and make sure it has four channels (RGBA).
img = Image.open("Path/To/Your/Image")
img = img.convert("RGBA")

# Swap the red and blue channels so RGBA becomes BGRA.
data = np.array(img)
red, green, blue, alpha = data.T
data = np.array([blue, green, red, alpha])
data = data.transpose()

# Rebuild a PIL image from the channel-swapped array.
PIL_image = Image.fromarray(data)

Predict from your model with the converted image

print(str(mlmodel.predict({'image': PIL_image})) + '\n')

This time your predictions should be correct.

My Solution

Unfortunately I had to give up on CreateML. On the app side I tried converting the color space in the pixel buffer, and even imported the OpenCV library to convert the color space by casting UIImage to cv::Mat and back, but none of it worked for me. I solved my problem using another easy ML creation platform by Apple called Turi Create. You have to use Python to interact with this API, but the documentation is very clear and the ML templates are the same as CreateML's. This API is better than CreateML because you can interact with your model before and after training, whereas CreateML can be a very closed box; even with coremltools you cannot interact with it much. The API is very accessible and easy for everyone, and there are really good code examples and scenarios in its documentation.
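
As a rough illustration, and not part of the original answer, a minimal Turi Create image classifier could look like the sketch below; the `digits/` folder layout with one subfolder per label is an assumption.

import turicreate as tc

# Load labeled images, assuming one subfolder per digit, e.g. digits/7/img1.png.
data = tc.image_analysis.load_images('digits/', with_path=True)
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# Split into training and test sets, then train an image classifier.
train, test = data.random_split(0.8)
model = tc.image_classifier.create(train, target='label')

# Evaluate on the held-out data and export to Core ML for use in the app.
print(model.evaluate(test)['accuracy'])
model.export_coreml('Digits.mlmodel')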
