
I used Keras to train a model on the MNIST dataset. After training, I tested it on the MNIST test data. Now I have drawn a ZERO, ONE, TWO and THREE on paper, uploaded the pictures to my Jupyter notebook, and would like to predict the digits I drew. I tried to preprocess those images, but I am still getting errors when predicting them.

Here is the code, plus one of the pictures I drew.

img = np.random.rand(224,224,3)
img_path = "0_a.jpg"
img = image.load_img(img_path, target_size=(224, 224))
print(type(img))

x = image.img_to_array(img)
print(type(x))
print(x.shape)
plt.imshow(x/255.)
model.predict(x)

Here is the error I get; I am not sure what to do about it.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-166-2648d9cfd8aa> in <module>()
----> 1 model.predict(x)

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/models.py in predict(self, x, batch_size, verbose, steps)
   1025             self.build()
   1026         return self.model.predict(x, batch_size=batch_size, verbose=verbose,
-> 1027                                   steps=steps)
   1028 
   1029     def predict_on_batch(self, x):

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps)
   1780         x = _standardize_input_data(x, self._feed_input_names,
   1781                                     self._feed_input_shapes,
-> 1782                                     check_batch_axis=False)
   1783         if self.stateful:
   1784             if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    108                         ': expected ' + names[i] + ' to have ' +
    109                         str(len(shape)) + ' dimensions, but got array '
--> 110                         'with shape ' + str(data_shape))
    111                 if not check_batch_axis:
    112                     data_shape = data_shape[1:]

ValueError: Error when checking : expected dense_13_input to have 2 dimensions, but got array with shape (224, 224, 3)

  [1]: https://i.stack.imgur.com/SvBta.jpg

1 Answer


Your images have three channels (RGB), while the MNIST images have only one channel (grayscale), so your images need to be opened in grayscale mode. The error message also says the model expects a 2-dimensional input, i.e. a batch of flattened vectors (presumably the usual 28×28 MNIST images flattened to 784 values), so the image additionally has to be resized to 28×28 and reshaped to match that input, rather than being passed in as a (224, 224, 3) array. More info on the grayscale conversion can be found at How can I convert an RGB image into grayscale in Python?, assuming you're using PIL.
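As a minimal sketch of that preprocessing (assuming the model was trained on flattened 28×28 MNIST images scaled to the 0–1 range, and using PIL directly on the 0_a.jpg file from the question), it could look something like this:

import numpy as np
from PIL import Image

# Open the drawn digit, convert it to grayscale ("L") and resize it to MNIST's 28x28
img = Image.open("0_a.jpg").convert("L").resize((28, 28))

x = np.array(img, dtype="float32")   # shape (28, 28), values 0-255
x = 255.0 - x                        # invert: MNIST digits are white on a black background
x = x / 255.0                        # scale to [0, 1] like the training data
x = x.reshape(1, 28 * 28)            # flatten and add the batch dimension -> (1, 784)

pred = model.predict(x)              # `model` is your trained Keras model
print(np.argmax(pred))               # index of the most likely digit

The inversion and scaling have to match whatever was done to the training data, so adjust those two lines if your preprocessing differed.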
