
I want to create functionality similar to PIL's Image.blend, but using a different blending algorithm. To do this, would I need to (1) directly modify the PIL source and compile my own custom PIL, or (2) write a Python C module that imports and extends PIL?

I have unsuccessfully tried:

#include "_imaging.c"

I also tried just pulling out the parts I need from the PIL source and putting them in my own file, but the further I got, the more I had to pull in, so that doesn't seem like the ideal solution.

UPDATE: edited to add the blending algorithm implemented in python (this emulates the overlay blending mode in Photoshop):

def overlay(upx, lpx):
    return (2 * upx * lpx / 255) if lpx < 128 else (255 - 2 * (255 - upx) * (255 - lpx) / 255)

def blend_images(upper=None, lower=None):
    upixels = upper.load()
    lpixels = lower.load()
    width, height = upper.size
    pixeldata = [0] * len(upixels[0, 0])
    for x in range(width):
        for y in range(height):
            # the inner loop handles images with any number of bands
            for i in range(len(upixels[x, y])):
                pixeldata[i] = overlay(upixels[x, y][i], lpixels[x, y][i])
            upixels[x, y] = tuple(pixeldata)
    return upper
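
For reference, this is how I call it (the file names are just placeholders, and both images need to be the same size and mode):

import Image  # PIL

upper = Image.open("upper.png").convert("RGB")  # placeholder file names
lower = Image.open("lower.png").convert("RGB")
blended = blend_images(upper, lower)  # note: this also modifies upper in place
blended.save("blended.png")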

I have also unsuccessfully tried implementing this using scipy's weave.inline:

def blend_images(upper=None, lower=None):
    upixels = numpy.array(upper)
    lpixels = numpy.array(lower)
    width, height = upper.size
    nbands = len(upixels[0,0])
    code = """
        #line 120 "laplace.py" (This is only useful for debugging)
        int upx, lpx;
        for (int i = 0; i < width-1; ++i) {
            for (int j=0; j<height-1; ++j) {
                for (int k = 0; k < nbands-1; ++k){
                    upx = upixels[i,j][k];
                    lpx = lpixels[i,j][k];
                    upixels[i,j][k] = ((lpx < 128) ? (2 * upx * lpx / 255):(255 - 2 * (255 - upx) * (255 - lpx) / 255));
                }
            }
        }
        return_val = upixels;
        """
    # compiler keyword only needed on Windows with MSVC installed
    upixels = weave.inline(code,
                           ['upixels', 'lpixels', 'width', 'height', 'nbands'],
                           type_converters=converters.blitz,
                           compiler = 'gcc')
    return Image.fromarray(upixels)

I'm doing something wrong with the upixels and lpixels arrays, but I'm not sure how to fix it. I'm a bit confused about the type of upixels[i,j][k] and what I can assign to it.
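
Going by the laplace.py example in the weave docs, my guess is that the blitz converters want parenthesized indexing (upixels(i, j, k) rather than upixels[i,j][k]) and that the array should be modified in place instead of being passed back through return_val, so I suspect something like the following, but this is an untested sketch:

import Image, numpy
from scipy import weave
from scipy.weave import converters

def blend_images(upper=None, lower=None):
    # cast to C ints to keep the blitz type conversion simple (this is a guess)
    upixels = numpy.asarray(upper).astype(numpy.intc)
    lpixels = numpy.asarray(lower).astype(numpy.intc)
    # numpy puts rows (height) first, so take the loop bounds from the array itself
    height, width, nbands = upixels.shape
    code = """
        int upx, lpx;
        for (int i = 0; i < height; ++i) {
            for (int j = 0; j < width; ++j) {
                for (int k = 0; k < nbands; ++k) {
                    upx = upixels(i, j, k);
                    lpx = lpixels(i, j, k);
                    upixels(i, j, k) = (lpx < 128) ? (2 * upx * lpx / 255)
                                                   : (255 - 2 * (255 - upx) * (255 - lpx) / 255);
                }
            }
        }
        """
    # modifies upixels in place; no return_val needed
    weave.inline(code,
                 ['upixels', 'lpixels', 'height', 'width', 'nbands'],
                 type_converters=converters.blitz,
                 compiler='gcc')
    return Image.fromarray(upixels.astype('uint8'))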

  • If you are willing to post your algorithm, I bet we can recreate it in NumPy and get all the speed of the compiled C code. Similar to this Q/A: http://stackoverflow.com/questions/2034037/image-embossing-in-python-with-pil-adding-depth-azimuth-etc Otherwise, look into cython: http://cython.org/ – Paul Mar 01 '11 at 23:45
  • I just edited my question and put an example in. It does what I want in pure python but I am looking to speed it up. – joshcartme Mar 02 '11 at 02:59

1 Answer


Here's my implementation in NumPy. I have no unit tests, so I don't know if it contains bugs; I assume I'll hear from you if it fails. An explanation of what is going on is in the comments. It processes a 200x400 RGBA image in 0.07 seconds.

import Image, numpy

def blend_images(upper=None, lower=None):
    # convert to arrays
    upx = numpy.asarray(upper).astype('uint16')
    lpx = numpy.asarray(lower).astype('uint16')
    # do some error-checking
    assert upper.mode==lower.mode
    assert upx.shape==lpx.shape
    # calculate the results of the two conditions
    cond1 = 2 * upx * lpx / 255
    cond2 = 255 - 2 * (255 - upx) * (255 - lpx) / 255
    # make a new array that is defined by condition 2
    arr = cond2
    # this is a boolean array that defines where in the array lpx<128
    mask = lpx<128
    # populate the parts of the new array that meet the criteria for condition 1
    arr[mask] = cond1[mask]
    # prevent overflow (may not be necessary)
    arr.clip(0, 255, arr)
    # convert back to image
    return Image.fromarray(arr.astype('uint8'), upper.mode)
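
The boolean-mask step can also be written with numpy.where; here is the same function in that style (the name blend_images_where is just for illustration):

def blend_images_where(upper=None, lower=None):
    # same approach as above, just using numpy.where instead of mask assignment
    upx = numpy.asarray(upper).astype('uint16')
    lpx = numpy.asarray(lower).astype('uint16')
    cond1 = 2 * upx * lpx / 255
    cond2 = 255 - 2 * (255 - upx) * (255 - lpx) / 255
    # take cond1 where lpx < 128, cond2 elsewhere, then clamp to the 0-255 range
    arr = numpy.where(lpx < 128, cond1, cond2).clip(0, 255)
    return Image.fromarray(arr.astype('uint8'), upper.mode)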
  • That works great and is definitely way faster. Thanks! What do you use for benchmarking? For example how did you figure out it takes exactly 0.07 seconds? And do you have any ideas on how to use scipy's weave.inline? (I updated my question) – joshcartme Mar 03 '11 at 01:01
  • @Loarfatron: I use timeit (http://docs.python.org/library/timeit.html) to time short, fast pieces of code. Or sometimes (as in this case) I simply use time (http://docs.python.org/library/time.html#module-time). I don't know much about `scipy.weave.inline`, sorry. – Paul Mar 03 '11 at 02:38