
Is it possible to somehow get the depth pixels from a Kinect at different angles? Say the Kinect is recording me from above, and I would like to fetch the depth pixels as if it were seeing me from the front.

I have seen examples of people using point clouds, and from that data you can rotate the mesh created from those points. So even though the Kinect is, say, looking down on the person from above, you can still rotate the mesh as if it were seen from the front, or from beneath the person's feet (which is really cool!).

So can I perhaps create my own depth pixels from such a point cloud? Any pointers would be greatly appreciated.

Placeable
  • It's possible. The easiest way to do so is to turn the point cloud data into a 3D object, then draw that and just move the camera around it. Unfortunately, I'm not very versed in openFrameworks, but I know how you would go about it in Processing. – Timothy Groote Jul 08 '13 at 09:02
  • @Timothy I am trying to get ofxKinect's getDepthPixels function to return pixels relative to an arbitrary angle, perhaps built up from the point cloud, as I am using the depth pixels for rendering later on. Could you explain a bit more how you would do it in Processing? Perhaps I can figure this out with some more information. Thanks! – Placeable Jul 08 '13 at 09:08
  • Daniel Shiffman's Kinect library is probably very similar to ofxKinect. Try taking a look here: https://github.com/shiffman/libfreenect/blob/master/wrappers/java/processing/KinectProcessing/src/librarytests/PointCloud.java – Timothy Groote Jul 08 '13 at 09:13
  • The only thing that isn't shown in this example is rotating the OpenGL matrix so you can look at the point cloud from different angles, but there are probably OpenGL "camera" libs for oF that can help you with that (a rough sketch of this appears after the comment thread). – Timothy Groote Jul 08 '13 at 09:16
  • Well, I can build up a point cloud pretty easily with ofxKinect, but from this data I would like to fetch the depth pixels. Say I have a scene recorded by the Kinect and create a point cloud from it. I rotate the mesh built from the point cloud to fit my needs. From this data I would like to get the depth pixels so I can use them in another step. Is this at all possible? – Placeable Jul 08 '13 at 09:25
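For context, here is a minimal openFrameworks sketch of the point-cloud approach the comments describe: build an ofMesh from the depth data and let an ofEasyCam orbit it. The function name buildPointCloud, the kinect and cam instances, and the step size are my own assumptions, not taken from the thread.

ofMesh buildPointCloud(ofxKinect& kinect)
{
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_POINTS);
    int step = 2; // sample every 2nd depth pixel to keep the mesh light
    for (int y = 0; y < kinect.getHeight(); y += step)
    {
        for (int x = 0; x < kinect.getWidth(); x += step)
        {
            if (kinect.getDistanceAt(x, y) > 0)
            {
                // world-space position (in mm) of this depth pixel
                mesh.addVertex(kinect.getWorldCoordinateAt(x, y));
            }
        }
    }
    return mesh;
}

// In draw(), an ofEasyCam named "cam" handles the view rotation:
//   cam.begin();
//   ofPushMatrix();
//   ofScale(1, -1, -1);       // flip into a screen-friendly orientation
//   ofTranslate(0, 0, -1000); // push the cloud in front of the camera
//   glPointSize(3);
//   buildPointCloud(kinect).draw();
//   ofPopMatrix();
//   cam.end();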

1 Answer


Okay, I kind of solved this, in what I think is a rather ugly way. If anyone has a better idea, please post it!

So what I did was create a mesh from the point cloud (as seen in the ofxKinect example), draw that to an FBO, and wrap it in a shader that writes each fragment's depth value as its colour. This way I get a grayscale range of [0, 1].
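As a rough sketch of that render pass (the names depthFbo, depthShader and pointCloudMesh, and the shader file names, are my own assumptions, not from the original code):

ofFbo     depthFbo;
ofShader  depthShader;
ofMesh    pointCloudMesh; // the mesh built from the Kinect point cloud
ofEasyCam cam;            // or whatever camera gives the viewpoint you want

void setupDepthPass()
{
    depthFbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);
    // the fragment shader simply writes the fragment's depth as a grey
    // value in [0, 1]; the vertex shader is a plain pass-through
    depthShader.load("depth.vert", "depth.frag");
}

void renderDepthFromNewAngle()
{
    depthFbo.begin();
    ofClear(0, 0, 0, 0); // alpha 0 marks "no geometry here"
    cam.begin();
    depthShader.begin();
    pointCloudMesh.draw();
    depthShader.end();
    cam.end();
    depthFbo.end();
}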

After that, back on the CPU, I fetched the pixels from the FBO using readToPixels and drew them to an ofImage. From the ofImage I could then sample the colour of each pixel (now a grayscale depth value).
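That readback might look roughly like this (sampleImg matches the name used in the loop below; depthFbo comes from the sketch above):

ofPixels pix;
depthFbo.readToPixels(pix); // pull the FBO contents back to CPU memory

ofImage sampleImg;
sampleImg.setFromPixels(pix); // getColor(x, y) can now be used per pixel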

Then, looping over each pixel (x, y), I grab its colour and do some calculations to map that value into the 0-255 range (like the regular kinect.getDepthPixels(...) data):

// one byte per pixel, same size and layout as kinect.getDepthPixels()
int size = sizeof(unsigned char) * (ofGetHeight() * ofGetWidth());
unsigned char* p = (unsigned char*)malloc(size);
for (int x = 0; x < ofGetWidth(); x++)
{
    for (int y = 0; y < ofGetHeight(); y++)
    {
        ofColor col = sampleImg.getColor(x, y);
        float d = 0.0f;

        // alpha is 0 where the shader drew nothing, so those pixels stay at 0
        if (col.a != 0)
        {
            // the image is grayscale, so r == g == b and r * 3 / 765 is just
            // r / 255, i.e. the depth colour mapped back into [0, 1]...
            d = (float)(col.r * 3) / 765.0f;
            // ...then scaled up to the familiar [0, 255] depth range
            d = d * 255.0f;
        }

        // row-major index into the flat array (row y, column x)
        int id = y * ofGetWidth() + x;
        p[id] = (unsigned char)d;
    }
}
// (remember to free(p) once you are done with it)

From p I get an unsigned char array with values in the [0, 255] range, like the one kinect.getDepthPixels() returns, but based on the rendered depth texture, so it now contains the depth data of the point cloud as seen from the new angle.
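A quick way to eyeball the result (this preview step is my own addition, not part of the original answer) is to load p back into an ofImage and draw it:

ofImage depthPreview;
depthPreview.setFromPixels(p, ofGetWidth(), ofGetHeight(), OF_IMAGE_GRAYSCALE);
depthPreview.draw(0, 0);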

This is not fully tested, but I think it is a step in the right direction. I am not too fond of this solution, but I hope it helps someone else, as I have been googling like crazy all day with not much luck. I might have just overcomplicated things, but we'll see.

Placeable