I tried the following, expecting to see the grayscale version of the source image:

from PIL import Image
import numpy as np
img = Image.open("img.png").convert('L')
arr = np.array(img.getdata())
field = np.resize(arr, (img.size[1], img.size[0]))
out = field
img = Image.fromarray(out, mode='L')
img.show()

But for some reason, the whole image comes out as little more than scattered dots with black in between. Why does this happen?


1 Answer


When you create the NumPy array from your Pillow object's image data, be advised that the default dtype of the array is a platform-dependent integer type such as int32, not uint8. I'm assuming that your data is actually uint8, as most images seen in practice use 8 bits per pixel. Therefore, you must explicitly ensure that the array has the same type as the pixels in your image. Simply put, make the array uint8 right when you get the image data, so that would be the fourth line in your code [1]:

arr = np.array(img.getdata(), dtype=np.uint8) # Note the dtype input

1. Take note that I've counted the two import lines at the beginning of your code; they are necessary for the snippet to run on its own (with a local image file).
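
For completeness, here is the whole script with that one change applied. This is a minimal sketch that assumes an image file named img.png sits next to the script; everything except the dtype argument is your original code.

from PIL import Image
import numpy as np

img = Image.open("img.png").convert('L')            # load and convert to 8-bit grayscale
arr = np.array(img.getdata(), dtype=np.uint8)       # force uint8 so each pixel is one byte
field = np.resize(arr, (img.size[1], img.size[0]))  # reshape the flat data to (height, width)
out = field
img = Image.fromarray(out, mode='L')                # buffer layout now matches mode 'L'
img.show()

As an aside, np.asarray(img) already returns a (height, width) uint8 array for a mode 'L' image, so the getdata/resize steps can be skipped entirely.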

  • Otherwise numpy puts four pixels into one element? – RomaValcer Aug 24 '16 at 09:05
  • It's not that. The dynamic range of the grayscale values is quite small, so you don't see the true picture. In addition, because each array element is a 4-byte int32, the byte buffer that is read out to build the image is four times as long as expected: each pixel value is followed by three zero bytes, which is what produces the dots with black in between. – rayryeng Aug 24 '16 at 15:01
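
To see what that last comment describes, you can view the raw bytes of an int32 array as uint8. A small sketch (the printed output assumes a little-endian machine):

import numpy as np

a = np.array([200, 150, 100], dtype=np.int32)
print(a.view(np.uint8))
# [200   0   0   0 150   0   0   0 100   0   0   0]
# Each 32-bit element contributes one data byte followed by three zero
# bytes, so an image built from this buffer shows one bright pixel and
# then three black ones: exactly the dots with black in between.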