Using OpenCV you could do the following:
- scale both source images (1 and 2) by a factor of 0.5. Both images are now in the range [0..127]
- shift image 1 by 128. It is now in the range [128..255]
- subtract image 2 from image 1
This way, no range conversion is needed and the result uses the full 8-bit range. Use cvConvertScale
for the first two operations; it applies the scale and the shift in a single call.
Something like this:
//...
// assuming src1, src2, tmp1, tmp2 and dst are 8-bit, single-channel images of the same size
cvConvertScale(src1, tmp1, 0.5, 128); // tmp1 = src1 * 0.5 + 128, range [128..255]
cvConvertScale(src2, tmp2, 0.5, 0);   // tmp2 = src2 * 0.5,       range [0..127]
cvSub(tmp1, tmp2, dst);               // dst  = tmp1 - tmp2 = (src1 - src2) / 2 + 128
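If you are using the newer C++ API, the same idea could look roughly like this. This is just a sketch, assuming src1 and src2 are 8-bit, single-channel cv::Mat images of the same size (the function name is only a placeholder):

// same technique with the C++ API instead of the old C functions
#include <opencv2/opencv.hpp>

cv::Mat diff8bit(const cv::Mat& src1, const cv::Mat& src2)
{
    cv::Mat tmp1, tmp2, dst;
    src1.convertTo(tmp1, CV_8U, 0.5, 128); // tmp1 = src1 * 0.5 + 128
    src2.convertTo(tmp2, CV_8U, 0.5, 0);   // tmp2 = src2 * 0.5
    cv::subtract(tmp1, tmp2, dst);         // dst  = tmp1 - tmp2
    return dst;
}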
EDIT:
To your comment on losing information (precision): you are right, but you always lose some when dividing with integer math, and the scaling in your case is exactly that. Simply think of it as shifting all the bits one place to the right, so the last bit of information is lost.
On the other hand, the order of the applied operations also matters. Dividing by 2 introduces a rounding (or truncation) error of up to 0.5 per pixel. If you scale both input images before subtracting them, the rounding errors can add up to 1.0. This shows up in the result image as some pixels being off by 1 compared to the result you would get with your initial approach or Alexander's approach. But that is the trade-off for the simpler solution that avoids expanding the images to 16 bits or floating point.
See this example:
real numbers:
(200 - 101) / 2 = 99 / 2 = 49.5
Alexander's solution (integer math):
(200 - 101) / 2 = 99 / 2 = 49
my solution (integer math):
(200 / 2) - (101 / 2) = 100 - 50 = 50
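If you want to check this yourself, a minimal standalone snippet with the same two example values looks like this (plain C++ integer division truncates, which corresponds to the truncation case mentioned above):

#include <iostream>

int main()
{
    int a = 200, b = 101;                                     // example pixel values from above
    std::cout << "subtract first: " << (a - b) / 2 << "\n";   // 49: one truncating division
    std::cout << "halve first:    " << a / 2 - b / 2 << "\n"; // 50: two truncating divisions
    return 0;
}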