
I'm new to Python and currently playing around with creating masks for a word cloud using Pillow and NumPy.

I've encountered an issue between an original image and a cropped version of it (cropping done in MS Paint, where I also inverted the colours). When I run the following code:

from PIL import Image
import numpy as np

mask = Image.open("C:/Users/d-j-h/downloads/original.png")
mask = np.array(mask)

mask2 = Image.open("C:/Users/d-j-h/downloads/cropped.png")
mask2 = np.array(mask2)

The original mask displays as expected (dtype uint8, shape (137, 361), and if I look at the array I can make out the original image), whereas the cropped image has an additional dimension (dtype uint8, shape (70, 294, 3)) and looks nothing like the image. When I attempt a transformation (replacing every 0 in the image with 255) with the following code

def transform_format(val):
    if val == 0:
        return 255
    else:
        return val

transformed_mask = np.ndarray((mask.shape[0],mask.shape[1]), np.int32)

for i in range(len(mask)):
    transformed_mask[i] = list(map(transform_format, mask[i])) 

it works perfectly for mask (the original image) but not for mask2, even if I change the code (mask → mask2) and add an extra dimension to the np.ndarray. I get the following error message:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Any help is greatly appreciated.

m4kr0
    `val` is here a numpy array, so `if val == 0` does not make much sense. – Willem Van Onsem Aug 04 '19 at 14:17
  • Related: [Use a.any() or a.all()](https://stackoverflow.com/questions/34472814/use-a-any-or-a-all). ; https://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html#numpy.all – wwii Aug 04 '19 at 14:22
  • `transformed_mask` can be *created* with `np.where(mask == 0, 255, mask)` : [numpy.where()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html?highlight=where#numpy.where) – wwii Aug 04 '19 at 14:35
  • Is your question `Why did MS Paint mess up my image when I cropped it?` What is `Image`? – wwii Aug 04 '19 at 14:38
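A minimal sketch of the `np.where()` approach suggested in the comments, using a tiny stand-in array. Because `np.where()` operates element-wise on arrays of any shape, the same one-liner handles both the 2-D original mask and the 3-D cropped one:

```python
import numpy as np

# Toy stand-in for the mask array; zeros mark the background.
mask = np.array([[0, 12, 0],
                 [255, 0, 7]], dtype=np.uint8)

# Replace every 0 with 255, leaving other values untouched.
transformed_mask = np.where(mask == 0, 255, mask)

print(transformed_mask)
# [[255  12 255]
#  [255 255   7]]
```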

1 Answer


Some images are read as grayscale by default, but the cropped image, as it appears, is being read as RGB (3 channels).

Why doesn't it look like the original? That depends; you may need to upload the images to answer that.

As a solution, you can crop after reading the original image and converting it to a NumPy array, to get what you need:

mask = Image.open("C:/Users/d-j-h/downloads/original.png")
mask = np.array(mask)

# Slice out the region you want; supply your own row/column bounds.
mask2 = mask[new_rows_start:rows_end, new_cols_start:cols_end]

This will result in a grayscale image; you need to know the new dimensions, though.
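Alternatively, if you want to keep cropping in an external editor, you can force Pillow to load any image as single-channel grayscale with `convert("L")`. A minimal sketch, building a small RGB image in memory to stand in for the cropped PNG (substitute `Image.open(...)` on your own file):

```python
import numpy as np
from PIL import Image

# Stand-in for the 3-channel cropped image.
rgb_image = Image.new("RGB", (294, 70), color=(0, 0, 0))

# convert("L") collapses the image to one grayscale channel, so
# np.array() yields a 2-D (rows, cols) array instead of (rows, cols, 3).
mask2 = np.array(rgb_image.convert("L"))

print(mask2.shape)  # (70, 294)
print(mask2.dtype)  # uint8
```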

Yasin Yousif