I won't use ImageChops.difference here, since it can't handle different image modes, as the following example shows:
from PIL import Image, ImageChops
# Read images; img1 has mode 'L', img2 has mode 'F'
img1 = Image.open('image.png').convert('L')
img2 = Image.open('image.png').convert('L').convert('F')

# Fails due to the mode mismatch
diff = ImageChops.difference(img1, img2)
Although both images are identical with respect to the pixels' intensities, we get the following ValueError:
Traceback (most recent call last):
File "...", line 7, in <module>
diff = ImageChops.difference(img1, img2)
File "...\lib\site-packages\PIL\ImageChops.py", line 102, in difference
return image1._new(image1.im.chop_difference(image2.im))
ValueError: images do not match
In general, I'd agree that NumPy's vectorization abilities speed up such calculations (see the sketch after the following list), but there's also this quite simple, Pillow-only approach:
- Check the number of bands. If they don't match, the images must be different.
- Manually calculate the absolute intensity differences for each pixel (i.e. what ImageChops.difference actually does), but be sure to support any two image modes. This is slightly different for single and multi channel images.
- Sum the differences over all pixels as suggested before. If that sum is greater than 0, the images must be different.
That'd be my code:
from PIL import Image

# Read images
img1 = Image.open('path/to/your/image.png').convert('RGB')
img2 = Image.open('path/to/your/image.png').convert('RGB')

# Check for different number of channels
if len(img1.getbands()) != len(img2.getbands()):
    print('Images are different; number of channels does not match.')
    exit(-1)

# Get image (pixel) data
imdata1 = list(img1.getdata())
imdata2 = list(img2.getdata())

# Calculate pixel-wise absolute differences, and sum those differences
if len(img1.getbands()) == 1:
    # Single channel: pixel data is a flat sequence of intensities
    diff = sum(abs(float(i1) - float(i2)) for i1, i2 in zip(imdata1, imdata2))
else:
    # Multi channel: pixel data is a sequence of tuples, so flatten first
    diff = sum(abs(float(i1) - float(i2)) for i1, i2 in
               zip((i for p in imdata1 for i in p),
                   (i for p in imdata2 for i in p)))

if diff > 0:
    print('Images are different; pixel-wise difference > 0.')
    exit(-1)

print('Images are the same.')
For any input image, the code as-is (reading the same file twice) will return:
Images are the same.
We also get this output for the case mentioned in the beginning. Nevertheless, for input like
img1 = Image.open('image.png').convert('L')
img2 = Image.open('image.png').convert('F')
the output will most likely be:
Images are different; pixel-wise difference > 0.
The direct conversion to mode F results in fractional parts for the single intensity values, so there is a difference to plainly converting to mode L.
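A tiny, hypothetical example illustrates those fractional parts (the 1x1 image and its pixel value are made up for demonstration):

from PIL import Image

# Hypothetical 1x1 RGB image with an arbitrary pixel value
rgb = Image.new('RGB', (1, 1), (200, 100, 50))

# Mode 'L' yields a rounded integer luma, mode 'F' keeps the fractional part
print(rgb.convert('L').getpixel((0, 0)))  # 124
print(rgb.convert('F').getpixel((0, 0)))  # roughly 124.2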
Please let me know if you have use cases for which this code fails. I'm curious whether I missed some edge cases here!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Pillow: 8.2.0
----------------------------------------