4

What is the simplest/cleanest way to rescale the intensities of a PIL Image?

Suppose that I have a 16-bit image from a 12-bit camera, so only the values 0–4095 are in use. I would like to rescale the intensities so that the entire range 0–65535 is used. What is the simplest/cleanest way to do this when the image is represented as PIL's Image type?

The best solution I have come up with so far is:

pixels = img.getdata()
img.putdata(pixels, 16)  # putdata(data, scale=1.0, offset=0.0): each value is multiplied by 16

That works, but always leaves the four least significant bits blank. Ideally, I would like to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits. I don't know how to do that fast.

Vebjorn Ljosa

5 Answers

3

Since you know that the pixel values are 0-4095, I can't find a faster way than this:

new_image = image.point(lambda value: value << 4 | value >> 8)

According to the documentation, the lambda function will be called at most 4096 times, whatever the size of your image.

EDIT: Since the function given to point must be of the form argument * scale + offset for an "I" (32-bit integer) mode image, this is the best that can be done with the point function:

new_image = image.point(lambda argument: argument * 16)

The maximum output pixel value will be 65520.

A second take:

A modified version of your own solution, using itertools for improved efficiency:

import itertools as it  # for brevity
import operator

def scale_12to16(image):
    # Compute (value << 4) | (value >> 8) for every pixel, lazily,
    # via itertools.imap (Python 2) so no intermediate list is built.
    new_image = image.copy()
    new_image.putdata(
        it.imap(operator.or_,
            it.imap(operator.lshift, image.getdata(), it.repeat(4)),
            it.imap(operator.rshift, image.getdata(), it.repeat(8))
        )
    )
    return new_image

This avoids the restriction on the function that point accepts for "I" images.
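
If itertools.imap is not available (it was removed in Python 3), the same per-pixel mapping can be written with a plain list comprehension. A rough sketch, assuming a single-band "I" image; scale_12to16_py3 is just an illustrative name:

def scale_12to16_py3(image):
    # Same (value << 4) | (value >> 8) mapping, without itertools.imap.
    new_image = image.copy()
    new_image.putdata([(v << 4) | (v >> 8) for v in image.getdata()])
    return new_image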

tzot
  • That is a very good idea, but because 16-bit images are of PIL type "I", the argument to `point` must be a function of the form `argument * scale + offset`. – Vebjorn Ljosa Sep 16 '09 at 18:52
  • You are correct, of course. Hopefully the "second take" should be efficient enough for you. – tzot Sep 16 '09 at 22:56
2

Why would you want to copy the 4 msb back into the 4 lsb? You only have 12 significant bits of information per pixel; nothing you do will improve that. If you are OK with only 4096 intensity levels, which is fine for most applications, then your solution is correct and probably optimal. If you need more levels of shading, then, as David posted, recompute using a histogram. But that will be significantly slower.

But copying the 4 msb into the 4 lsb is NOT the way to go :)

  • 2
    The reason I thought to copy the 4 MSB is that it would use the entire range 0–(2^16-1), so saturated pixels in the original image will appear saturated in the rescaled image as well. – Vebjorn Ljosa Aug 25 '09 at 13:58
  • Indeed, this is the standard way of upscaling to a higher number of bits per channel. – bobince Aug 25 '09 at 14:20
2

You need to do a histogram stretch (link to a similar question I answered), not histogram equalization.

(Illustration of a linear stretch, image source: http://cct.rncan.gc.ca/resource/tutor/fundam/images/linstre.gif)

In your case you only need to multiply all the pixel values by 16, which is the factor between the two dynamic ranges (65536/4096).
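
A minimal sketch of such a stretch with PIL, assuming img is the single-band mode "I" image from the question and that it contains at least two distinct values (the names lo, hi, scale, offset, stretched are illustrative only); the function is kept in the argument * scale + offset form that point accepts for "I" images:

lo, hi = img.getextrema()        # smallest and largest pixel values present
scale = 65535.0 / (hi - lo)      # about 16 when lo = 0 and hi = 4095
offset = -lo * scale
stretched = img.point(lambda v: v * scale + offset)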

Ivan
1

What you need to do is histogram equalization; there are examples of how to do it with Python and PIL.

EDIT: Code to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits...

def f(n):
    # Shift left by four bits, then copy the four most significant bits
    # of the 12-bit value into the four least significant bits.
    return (n << 4) | (n >> 8)

print(f(0))          # 0
print(f(2**12 - 1))  # 65535, the top of the 16-bit range
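
To apply that mapping to the whole image, one option (a sketch, assuming img is the mode "I" image from the question) is to run it through getdata/putdata:

img.putdata([f(p) for p in img.getdata()])
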
Pratik Deoghare
0

Maybe you should pass 16.0 (a float) instead of 16 (an int). I tried to test it, but for some reason putdata did not multiply at all for me, so I hope it just works for you.
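
For reference, that would turn the call from the question into the following (untested, a sketch only):

img.putdata(img.getdata(), 16.0)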

Paul