
EDIT: While waiting to see what solutions might be possible using Image.point(), I would like to add that I now understand why my expression doesn't work in ImageMath.eval.

ImageMath.eval performs each operation on the whole image; it does not evaluate the entire expression pixel by pixel. So when I ask if (a-b) > 255, what I'm really asking is whether the image that results from (a-b) is > 255, which is a ridiculous question. And Pillow auto-clips values with no option in the source to allow for wrapping. If I had a better understanding of C++ and compiling Python libraries I'd gladly fork and change that xD.

But until I gain that knowledge I'll just have to do it manually outside of the library.
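The manual route needn't be slow, though. Here is a sketch of the wrap-capable subtraction using NumPy (the approach I eventually settled on, per the comments below); the function name is mine, and it assumes 8-bit "L" channels and integer division to match the original code:

```python
import numpy as np
from PIL import Image

def sub_channels_np(c1, c2, divisor=1, bias=0, clip=True):
    # Widen to a signed dtype so intermediates can go below 0 or above 255
    a = np.asarray(c1, dtype=np.int16)
    b = np.asarray(c2, dtype=np.int16)
    value = (a // divisor) - (b // divisor)
    # Same two-stage wrap/clip as wrapOrClip in the question:
    # once after the subtraction, once after the bias
    fix = (lambda v: np.clip(v, 0, 255)) if clip else (lambda v: v % 256)
    value = fix(fix(value) + bias)
    return Image.fromarray(value.astype(np.uint8), mode="L")
```

NumPy's `%` follows Python's sign convention, so `v % 256` wraps negative results up into range, and the whole thing runs vectorized instead of looping per pixel in Python.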

ORIGINAL QUESTION: I have this subtraction routine to subtract two channels in an image from each other. (I use a middleman image because I occasionally got weird results if I added the bias in too soon)

def subChannels(c1, c2, divisor=1, bias=0):
    expression = "(a/%s)-(b/%s)" % (divisor, divisor)
    middleman = ImageMath.eval(expression, a=c1, b=c2).convert("L")
    return ImageMath.eval("a + b", a=middleman, b=bias).convert("L")

Exceedingly simple. And it works. Except in cases where I have a bias/offset that pushes the value over 255 or under 0. Then it just clips the results.
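The clipping is easy to demonstrate in isolation. A minimal sketch (assuming 8-bit "L" inputs): ImageMath promotes "L" operands to 32-bit "I" images, so the subtraction itself can go negative, and it's the final convert("L") that clamps:

```python
from PIL import Image, ImageMath

a = Image.new("L", (1, 1), 10)
b = Image.new("L", (1, 1), 100)

# eval promotes "L" operands to 32-bit "I" images, so the raw
# subtraction result really is negative here
raw = ImageMath.eval("a - b", a=a, b=b)

# it's the convert("L") that clamps the result into 0..255
clipped = raw.convert("L")
```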

Now I have tried adding a ridiculous if else evaluation statement that attempts to do the wrapping for me.

def subChannels(c1, c2, divisor=1, bias=0):
    overflow = "((%s) - 256) if (%s) > 255 else (256 + (%s)) if (%s) < 0 else (%s)"
    sub = "(a/%s)-(b/%s)" % (divisor, divisor)
    expression = overflow % (sub, sub, sub, sub, sub)
    middleman = ImageMath.eval(expression, a=c1, b=c2).convert("L")
    add = "a + b"
    expression = overflow % (add, add, add, add, add)
    return ImageMath.eval(expression, a=middleman, b=bias).convert("L")

And that failed spectacularly. The resulting image only got a few of the pixels correct. The rest were blown out, pure white. It was really weird.

Then I tried performing the equation manually, knowing it would be slower for me to loop over every pixel in the image inside Python.

def subChannels(c1,c2,divisor=1, bias=0, clip=True):
    #images have already been resized to match before sending here
    pixels1 = c1.load()
    pixels2 = c2.load()
    for x in range(c1.size[0]):
        for y in range(c1.size[1]):
            r = pixels1[x,y]
            g = pixels2[x,y]
            value = wrapOrClip((r/divisor)-(g/divisor), clip)
            value = wrapOrClip((value + bias), clip)
            pixels2[x,y] = value
    return c2

def wrapOrClip(value, clip = True):
    if clip:
        return max(min(value,255),0)
    else:
        return value - 256 if value > 255 else value + 256 if value < 0 else value
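As an aside, the wrapping branch collapses to a single modulo, because Python's `%` always returns a result with the sign of the divisor. A sketch of an equivalent helper (the snake_case name is mine):

```python
def wrap_or_clip(value, clip=True):
    if clip:
        return max(min(value, 255), 0)
    # % 256 wraps both overflow and underflow in one step,
    # e.g. -50 % 256 == 206 and 300 % 256 == 44
    return value % 256
```

Unlike the chained conditional, which only corrects a single overflow of at most 256, the modulo form handles values of any magnitude.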

Oddly enough, the result was exactly what I expected it to be. So, for some reason I don't understand, my equation works on a pixel-by-pixel basis, but used as the expression inside ImageMath.eval it produces a drastically different result.

I would prefer to use eval because in a time test eval took 1 second to do the work, while my manual method took 9 seconds.

Any idea where my obvious flaw in reasoning is?
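My best guess at the flaw: Python's `if/else` inside the eval expression tests the truthiness of a whole-image operand once, rather than branching per pixel. If ImageMath comparisons return 0/1 images (an assumption I'd want to verify), the wrap can instead be written as branchless arithmetic, which eval does apply per pixel. A sketch (function name is mine):

```python
from PIL import Image, ImageMath

# Assumption: ImageMath comparisons yield 0/1 images, so the if/else
# can become arithmetic that eval evaluates per pixel
WRAP = "(%s) - 256*((%s) > 255) + 256*((%s) < 0)"

def sub_channels_wrap(c1, c2, divisor=1, bias=0):
    sub = "(a/%d)-(b/%d)" % (divisor, divisor)
    middleman = ImageMath.eval(WRAP % (sub, sub, sub), a=c1, b=c2).convert("L")
    add = "a + b"
    return ImageMath.eval(WRAP % (add, add, add), a=middleman, b=bias).convert("L")
```

If the 0/1 comparison assumption holds, this keeps everything inside eval, so it should run at the ~1-second speed rather than the 9-second Python loop.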

EDIT: Including a full working version of the code I'm working with.

from PIL import ImageGrab, Image, ImageMath, ImageTk
from io import StringIO
import sys

#One stop shop for resizing, splitting, and combining the channels
#for all methods on the dialog

def evalImages(image1,image2, channel1 = -1, channel2 = -1, divisor=1, bias=0, clip=True, mode="add"):
    modes = {"add": addChannels, 
             "sub": subChannels, 
             "mul": mulChannels,
             "and": andChannels,
             "or": orChannels,
             "xor": xorChannels,
             "average": averageChannels,
             "dark": darkChannels,
             "light": lightChannels,
             "diff": diffChannels}
    #resize image 2 to fit image 1         
    if image1.size != image2.size:
        image2 = image2.resize(image1.size, Image.BICUBIC)
    if channel1 == -1:
        #if we are combining all channels
        r1,g1,b1 = image1.split()
        r2,g2,b2 = image2.split()
        r3 = modes[mode](r1,r2,d = divisor, b = bias, clip=clip)
        g3 = modes[mode](g1,g2,d = divisor, b = bias, clip=clip)
        b3 = modes[mode](b1,b2,d = divisor, b = bias, clip=clip)
        return Image.merge("RGB", (r3,g3,b3))
    else:
        #0 = Red, 1 = Green, Blue = 2
        c1 = image1.split()[channel1]
        c2 = image2.split()[channel2]
        return modes[mode](c1, c2, d = divisor, b = bias, clip=clip)

#Function to either wrap the value around or clip it at 0/255
def wrapOrClip(value, clip = True):
    if clip:
        return max(min(value,255),0)
    else:
        return value - 256 if value > 255 else value + 256 if value < 0 else value

#Function subtracts two single band images from each other
#divisor divides the values before they are subtracted
#bias is an offset applied to each pixel after subtraction and clipping or wrapping        
def subChannels(c1,c2,d=1, b=0, clip=True):
    #images have already been resized to match before sending here
    pixels1 = c1.load()
    pixels2 = c2.load()
    for x in range(c1.size[0]):
        for y in range(c1.size[1]):
            r = pixels1[x,y]
            g = pixels2[x,y]
            value = wrapOrClip((r/d)-(g/d), clip)
            value = wrapOrClip((value + b), clip)
            pixels2[x,y] = value
    return c2

#Multiply channels    
def mulChannels(c1,c2,d=1, b=0, clip=True):
    pass

#Bitwise AND channels
def andChannels(c1,c2, d=1, b=0, clip=True):
    pass

#Bitwise OR channels
def orChannels(c1,c2, d=1, b=0, clip=True):
    pass

#Bitwise XOR channels
def xorChannels(c1,c2, d=1, b=0, clip=True):
    pass

#Average the two pixels in each channel together
def averageChannels(c1,c2, d=1, b=0, clip=True):
    pass

# Choose the darkest pixels from both images (min(c1,c2))
def darkChannels(c1,c2, d=1, b=0, clip=True):
    pass

# Choose the lightest pixels from both channels (max(c1, c2))
def lightChannels(c1,c2, d=1, b=0, clip=True):
    pass

# Absolute value after Subtract
def diffChannels(c1,c2, d=1, b=0, clip=True):
    pass

#add both channels together to get a lighter image
def addChannels(c1, c2, d=1, b=0, clip=True):
    #bias is added to end result
    #divisor is applied to each channel individually
    pass

if __name__ == '__main__':
    #I normally have code here to put two images on the clipboard
    #then grab them down with ImageGrab.grabclipboard
    #But that's specific to the program that runs this code.
    #so instead here are two simple open functions
    if len(sys.argv) >= 3:
        print("args are good")
        image1 = Image.open(sys.argv[1])
        image2 = Image.open(sys.argv[2])
        print("files opened")
        #In this example, I'm combining the Red and Green channels of the two images together
        image3 = evalImages(image1, image2, channel1=0, channel2=1, divisor=1, bias=0, mode="sub")
        print("channels merged")
        image3.save("3.jpg")
        print("file saved")
  • Instead of using `ImageMath.eval()` I would suggest using the [`Image.point()`](http://pillow.readthedocs.io/en/4.3.x/reference/Image.html#PIL.Image.Image.point) method. You can do this because the results of applying your expression to every possible input value are small enough to store in a lookup table indexed by all possible values. It'll take some cycles to create the table, but it will then be used to convert every pixel in the image it is applied to very quickly. – martineau Dec 15 '17 at 22:58
  • I used the `Image.point()` technique in answering the question [Colorize image while preserving transparency with PIL?](https://stackoverflow.com/questions/12251896/colorize-image-while-preserving-transparency-with-pil). The code's complicated by the fact that there's three channels involved at once, but it's an application of the same concept. If you [edit] your question and add enough code to it to be runnable, I'll see if I can show you how to do this. – martineau Dec 15 '17 at 23:01
  • Thank you for your reply. I hadn't considered Image.point as I couldn't think of a way to make a table that would work. I have updated my question with a functional version of the code I'm working with. – LeviFiction Dec 16 '17 at 01:29
  • OK, I'll take a look at the updated material. – martineau Dec 16 '17 at 01:59
  • Please clarify what you mean by "wrapping" the computed values. I understand clipping, but am not sure what you mean by the former (and can't tell for sure from your code). – martineau Dec 16 '17 at 21:09
  • Wrapping means the value wraps around. So if, after subtracting the two values or adding a negative bias to the end result, the result moves below the minimum value (in this case 0), it wraps around to the maximum value (in this case 255). A quick example: 75 - 125 = -50, so we add 256 to the negative and it becomes 206. – LeviFiction Dec 16 '17 at 22:17
  • Not to leave you hanging: I was wrong; this can't be accomplished with the `Image.point()` method because it's a function of two images. I think there may be a way to do it with `ImageMath.eval()`, but that's beyond me at this point. Sorry. – martineau Dec 18 '17 at 16:14
  • Thank you very much for looking. I do greatly appreciate it. Unfortunately I don't think it is possible as PIL manually clips all values. I could be wrong, it's possible I just need to convert it to the correct format. "I" instead of "L". But from what I've read so far I don't think it'll make a difference. I've solved it for now by using Numpy when value wrapping is necessary and eval when clipping is necessary. It seems to be working perfectly. Again, thank you very much for looking into it. – LeviFiction Dec 18 '17 at 17:36
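A postscript on the comment thread's lookup-table idea: while the full two-image operation can't go through `Image.point()`, the bias-plus-wrap stage is a function of a single pixel value, so a 256-entry table could handle that half after the subtraction. A sketch (the function name and the split into a separate bias step are mine):

```python
from PIL import Image

def apply_bias(channel, bias=0, clip=True):
    # 256-entry lookup table covering every possible "L" pixel value;
    # Image.point() then maps each pixel through it at C speed
    if clip:
        lut = [max(min(v + bias, 255), 0) for v in range(256)]
    else:
        lut = [(v + bias) % 256 for v in range(256)]
    return channel.point(lut)
```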
