Okay I kind of solved this in a very ugly way. At least that is what I think. If anyone else has a better idea please post it!
So what I did was create a mesh from the point cloud (as seen in the ofxKinect example), draw that into an FBO, and wrap the draw call in a shader that colors each fragment by its depth value. This way I get depth as a color in the [0-1] range.
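Roughly, the draw pass looks something like this. This is just a minimal sketch: depthFbo, depthShader and pointCloudMesh are my own placeholder names, and the shader itself is assumed to write a normalized depth value into the red channel (not shown here).

// sketch of the FBO + shader pass, placeholder names
ofFbo depthFbo;
ofShader depthShader;

void setup() {
    depthFbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);
    depthShader.load("depthShader"); // vert/frag pair that outputs normalized depth as gray
}

void draw() {
    depthFbo.begin();
    ofClear(0, 0, 0, 0);       // alpha 0, so empty areas can be skipped later
    depthShader.begin();
    pointCloudMesh.draw();     // the mesh built from the kinect point cloud
    depthShader.end();
    depthFbo.end();
}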
After that, back on the CPU, I fetch the pixels from the FBO using readToPixels() and write them into an ofImage. From the ofImage I can sample the color of each pixel (which now encodes depth as a grayscale value).
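The readback itself is only a couple of lines (again a sketch, with sampleImg being the ofImage I sample from below):

ofPixels pixels;
depthFbo.readToPixels(pixels);   // pull the FBO contents back to the CPU
sampleImg.setFromPixels(pixels); // sampleImg is an ofImage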
Sigh. Now, looping over every pixel (x, y), I grab its color and do a small calculation to map that value back into a 0-255 range (like the regular kinect.getDepthPixels() data):
// width/height of the sample image (here assumed to match the window)
int w = ofGetWidth();
int h = ofGetHeight();
int size = sizeof(unsigned char) * (w * h);
unsigned char* p = (unsigned char*)malloc(size); // remember to free(p) when done
for (int x = 0; x < w; x++)
{
    for (int y = 0; y < h; y++)
    {
        ofColor col = sampleImg.getColor(x, y);
        float d = 0.0f;
        if (col.a != 0)
        {
            // r, g and b are equal in the grayscale depth image,
            // so (r * 3) / 765 is the 0-1 depth value again
            d = (float)(col.r * 3) / 765.0f;
            d = d * 255.0f;
        }
        // row-major index into the flat array
        int id = (y * w) + x;
        p[id] = (unsigned char)d;
    }
}
From p I get an unsigned char array with values in the [0-255] range, just like the kinect.getDepthPixels() function, except that instead of being based on the depth texture it is now based on the depth data from the point cloud.
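If it helps, this is roughly how I would feed p back into something drawable or into ofxOpenCv, just to sanity-check the values. I haven't run this exact snippet, so treat it as a sketch:

// draw the result as a one-channel texture
ofTexture depthTex;
depthTex.allocate(ofGetWidth(), ofGetHeight(), GL_LUMINANCE);
depthTex.loadData(p, ofGetWidth(), ofGetHeight(), GL_LUMINANCE);
depthTex.draw(0, 0);

// or hand it to ofxOpenCv the same way you would with kinect.getDepthPixels()
ofxCvGrayscaleImage grayImage;
grayImage.allocate(ofGetWidth(), ofGetHeight());
grayImage.setFromPixels(p, ofGetWidth(), ofGetHeight());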
This is not fully tested, but I think it is a step in the right direction. I am not too fond of this solution, but I hope it helps someone else, as I have been googling like crazy all day long without much luck. I might have just overcomplicated things for myself, but we'll see.