
I want to create a depth histogram of an image to see how the distribution of the depth values varies. But I don’t know how to do it, because there are too many possible depths and counting each one would result in a histogram with a lot of bins. Like 307,200 bins from an image of (480*640).

In the following webpage:

http://www.i-programmer.info/programming/hardware/2714-getting-started-with-microsoft-kinect-sdk-depth.html?start=2

They divided the depth values by 4 and then performed a bit-shift adjustment on the data to create a reasonable-looking display:

for (int i = 0; i < PImage.Bits.Length; i += 2)
{
    int temp = (PImage.Bits[i + 1] << 8 | PImage.Bits[i]) & 0x1FFF;
    count[temp >> 2]++;
    temp <<= 2;
    PImage.Bits[i] = (byte)(temp & 0xFF);
    PImage.Bits[i + 1] = (byte)(temp >> 8);
}

I understand the operations they performed, but I don’t understand how this method shrinks the data to 1/4.

So, how can I display that information as a reasonable-looking histogram without using too many bins?

Any ideas?

Best regards,

andrestoga
  • Side note: as mclaassen pointed out in the (+1) answer, your statement "Like 307,200 bins from an image of (480*640)" is very strange in relation to "create depth histogram"... The number of bins relates *only* to the range of values, not the number of values (i.e. for an 8bpp image there will be at most 256 bins - one per value 0-255). – Alexei Levenkov Aug 05 '14 at 01:59

1 Answer


This part explains it:

There are too many possible depths and counting each one would result in a histogram with a lot of bins so we divide the distance by four which means we only need a quarter of the number of bins:

int[] count = new int[0x1FFF / 4 + 1];

By dividing the depth values by 4, you reduce the number of bins by lowering the resolution at which you distinguish different depths: four adjacent depth values all land in the same bin. This allows the count array to be 4 times smaller (2048 bins instead of 8192).
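To make the binning concrete, here is a minimal sketch (in Java rather than the C# of the Kinect SDK code, and with made-up depth readings) showing how `depth >> 2` maps the 8192 possible 13-bit depth values into 2048 bins:

```java
public class DepthBins {
    public static void main(String[] args) {
        // 8191/4 + 1 = 2048 bins, one per group of 4 depth values
        int[] count = new int[0x1FFF / 4 + 1];

        // hypothetical raw depth readings (13-bit values, 0..8191)
        int[] depths = {100, 101, 102, 103, 104, 8191};

        for (int d : depths) {
            count[d >> 2]++;   // d / 4 selects the bin
        }

        // depths 100..103 all fall into bin 25; 104 into bin 26; 8191 into bin 2047
        System.out.println(count.length);  // 2048
        System.out.println(count[25]);     // 4
        System.out.println(count[26]);     // 1
        System.out.println(count[2047]);   // 1
    }
}
```

Note how four consecutive depths (100–103) collapse into a single bin, which is exactly why the histogram needs only a quarter of the bins.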

Based on your comment

Like 307,200 bins from an image of (480*640).

I think you may be misunderstanding what the histogram is. The image size has nothing to do with the number of bins; the number of bins depends only on the range of possible depth values. Each bin counts how many pixels have a given depth, so bins are not correlated to screen position at all.


Explanation of code:

for (int i = 0; i < PImage.Bits.Length; i += 2)
{
    // Combine 2 adjacent bytes (little-endian) from the data into a
    // 2-byte depth value and mask it to a maximum of 8191 (2^13 - 1)
    int temp = (PImage.Bits[i + 1] << 8 | PImage.Bits[i]) & 0x1FFF;

    // Divide the value by 4 and increment the counter for that bin
    count[temp >> 2]++;

    // Multiply the depth value by 4; I assume this scales the values up
    // so depth differences are more visible when the value is written
    // back to the image data
    temp <<= 2;

    // Write the scaled depth value back to the image buffer as two bytes
    PImage.Bits[i] = (byte)(temp & 0xFF);
    PImage.Bits[i + 1] = (byte)(temp >> 8);
}
mclaassen
  • Yes, you're right, I misunderstood what the histogram is. And I understand that they divided the depth values by 4 to use fewer bins, but what I don't understand is how the bit adjustment on the data works and what they are trying to achieve by doing it. I posted the code on the main post. Would you mind explaining the code? – andrestoga Aug 06 '14 at 00:04