I have the following Python code, where I save images of shape 326x490x3 as NumPy arrays for pre-processing at a later stage. I want to store the images in a 4D numpy array so that I can process them in batches later. The code runs without errors, but I found that when I convert a 3D slice of the 4D array back to an RGB image, all I get is an image of random noise, like TV static.
CODE:
import numpy as np
from PIL import Image

data = np.zeros((129, 326, 490, 3))
image_path = '0.jpg'
img = Image.open(image_path)
data[0,:,:,:] = np.asarray(img)
im = Image.fromarray(data[0], 'RGB')
im.show()
OUTPUT:
But when I display the same 3D slice of the 4D array as a grayscale image, it works fine.
CODE:
import numpy as np
from PIL import Image

data = np.zeros((129, 326, 490, 3))
image_path = '0.jpg'
img = Image.open(image_path)
data[0,:,:,:] = np.asarray(img)
im = Image.fromarray(np.dot(data[0], [0.299, 0.587, 0.114]))
im.show()
OUTPUT:
The solution given here works as expected when I convert the image to a standalone 3D numpy array and switch back to a PIL image.
CODE:
import numpy as np
from PIL import Image

data = np.zeros((129, 326, 490, 3))
image_path = '0.jpg'
img = Image.open(image_path)
im = Image.fromarray(np.asarray(img), 'RGB')
im.show()
OUTPUT:
Can someone please explain this behavior? I don't understand why the code works as expected for a standalone 3D numpy array, but behaves differently for a 3D slice of a 4D numpy array.
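For what it's worth, the dtypes of the two arrays differ: np.zeros defaults to float64, and a slice inherits the dtype of its container, whereas np.asarray on a PIL RGB image yields uint8. A minimal sketch of that check, using tiny placeholder shapes instead of the real (129, 326, 490, 3) data:

```python
import numpy as np

# Stand-in for np.asarray(img): PIL gives uint8 data for an RGB image.
# Tiny shapes used here for brevity.
img_array = np.zeros((4, 4, 3), dtype=np.uint8)

data = np.zeros((2, 4, 4, 3))   # np.zeros defaults to float64
data[0, :, :, :] = img_array    # values are cast into the float64 container

print(img_array.dtype)  # uint8
print(data[0].dtype)    # float64 - the slice keeps the container's dtype
```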