If you open your first image, you'll see that the canvas is larger than the visible image: it has a transparent frame made of pixels with rgba = (255, 255, 255, 0). When you remove the alpha channel by converting RGBA to RGB, that transparency disappears; only rgb = (255, 255, 255) remains, which is exactly the white you see in the second image.
So you want to do something similar to what's suggested here: trim the transparent border first, then convert to RGB.
from PIL import Image, ImageChops

def trim_and_convert(im):
    # Background of the same mode, filled with transparent white
    bg = Image.new(im.mode, im.size, (255, 255, 255, 0))
    # Non-zero wherever the image differs from the background
    diff = ImageChops.difference(im, bg)
    # Amplify the difference and clip small values to ignore near-matches
    diff = ImageChops.add(diff, diff, 2.0, -100)
    bbox = diff.getbbox()
    if bbox:
        return im.crop(bbox).convert('RGB')
    # Whole image matches the background: nothing to trim
    return im.convert('RGB')

im = Image.open("path.png")
rgb_im = trim_and_convert(im)
rgb_im.save("rgb_im.png")