I'm really confused about this. The image is made up of squares, and each square spans several pixels, but the squares are not all the same size: one square might be 9 pixels wide and 8 pixels high while another is 7 pixels wide and 8 pixels high. What I'm trying to do is create a smaller image from this initial image in which each square is represented by a single pixel, so that all the resulting pixels are the same size.
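To make the goal concrete, here is a toy example (the values and block sizes below are made up, they just stand in for the squares of the real image): every constant-valued block, whatever its size in pixels, should become a single pixel in the output.

import numpy as np

# Hypothetical toy grid: four "squares" with unequal widths (3 and 2 columns)
# and unequal heights (2 and 3 rows), each filled with a single grey value.
grid = np.array([
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [90, 90, 90,  30,  30],
    [90, 90, 90,  30,  30],
    [90, 90, 90,  30,  30],
], dtype=np.uint8)

# What I want to end up with: one pixel per square.
wanted = np.array([
    [10, 200],
    [90,  30],
], dtype=np.uint8)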
I've done it for the greyscale image, but unfortunately I get the wrong result, i.e. the resulting image is not an exact copy of the input image.
Input image
Output image
Code for greyscale image
from PIL import Image
import numpy as np
name1 = raw_input("What is the name of the .png file you want to open? ")
filename1 = "%s.png" % name1
img = Image.open(filename1).convert('L') # convert image to 8-bit grayscale
WIDTH, HEIGHT = img.size
a = list(img.getdata()) # convert image data to a list of integers
# convert that to 2D list (list of lists of integers)
a = np.array([a[offset:offset+WIDTH] for offset in range(0, WIDTH*HEIGHT, WIDTH)])
print " "
print "Intial array from image:" #print as array
print " "
print a
# Keep the first row plus every row where the first-column value changes from the row above,
# and the first column plus every column where the first-row value changes from the column to its left
rows_mask = np.insert(np.diff(a[:, 0]).astype(np.bool), 0, True)
columns_mask = np.insert(np.diff(a[0]).astype(np.bool), 0, True)
b = a[np.ix_(rows_mask, columns_mask)]
print " "
print "Subarray from Image:" #print as array
print " "
print b
print " "
print "Subarray from Image (clearer format):" #print as array
print " "
for row in b: # print in a table-like format
    print(' '.join('{:3}'.format(value) for value in row))
img = Image.fromarray(b, mode='L')
img.save("chocolate.png")
#print np.mean(b) #finding mean
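As far as I can tell, the mask / np.ix_ part by itself does what I want; a minimal check on the same kind of made-up toy grid as above gives the 2x2 result I expect, so the selection itself seems to behave as intended on clean data.

import numpy as np

# Same made-up toy grid: each constant block stands for one square of the real image.
a = np.array([
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [90, 90, 90,  30,  30],
    [90, 90, 90,  30,  30],
    [90, 90, 90,  30,  30],
])

rows_mask = np.insert(np.diff(a[:, 0]).astype(bool), 0, True)    # keep first row + rows where the first column changes
columns_mask = np.insert(np.diff(a[0]).astype(bool), 0, True)    # keep first column + columns where the first row changes
b = a[np.ix_(rows_mask, columns_mask)]
print b    # [[ 10 200]
           #  [ 90  30]] -- one value per square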