
I'm developing a C++ fall detection application using this data set: http://fenix.univ.rzeszow.pl/~mkepski/ds/uf.html From those images I can obtain the depth distance in millimetres, but I want to calculate (if it's possible) the real-world X and Y coordinates of the pixels.

Is it possible? How can I do it?

– Eadhun Di
  • Create a transform matrix for your Kinect device and simply transform the points by it ... of course, if your data is not yet transformed from camera space then you need to do that first. Be aware that the IR and regular (RGB) cameras sometimes have different FOVs and an offset between them; look here: http://stackoverflow.com/a/19905805/2521214 So you need all the info about the device that obtained the data. Also look here: http://stackoverflow.com/a/28084380/2521214 (a minimal back-projection sketch follows these comments) – Spektre Apr 10 '15 at 06:44
  • The problem is that I'm using the data set mentioned above; I didn't capture those images with my own camera. – Eadhun Di Apr 10 '15 at 08:52
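
Since the dataset was recorded with a Kinect-class depth sensor, each depth pixel can be back-projected through the pinhole camera model: X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy, where Z is the depth at pixel (u, v). A minimal C++ sketch is below; the intrinsics (fx, fy, cx, cy) and the helper name depthPixelToWorld are assumptions, with focal lengths around 585 px and the principal point at the centre of a 640x480 depth frame being typical Kinect v1 values rather than calibration constants published with this dataset.

    #include <cstdint>

    // Assumed Kinect v1 depth-camera intrinsics (placeholders, not
    // calibration values from this dataset).
    const float fx = 585.0f;   // focal length in pixels (x)
    const float fy = 585.0f;   // focal length in pixels (y)
    const float cx = 320.0f;   // principal point x (640x480 depth frame)
    const float cy = 240.0f;   // principal point y

    struct Point3D { float x, y, z; };   // millimetres, depth-camera frame

    // Back-project depth pixel (u, v) whose depth is given in millimetres.
    Point3D depthPixelToWorld(int u, int v, uint16_t depthMm)
    {
        Point3D p;
        p.z = static_cast<float>(depthMm);   // Z is the measured depth
        p.x = (u - cx) * p.z / fx;           // X = (u - cx) * Z / fx
        p.y = (v - cy) * p.z / fy;           // Y = (v - cy) * Z / fy
        return p;
    }

Looping this over every pixel of a depth frame yields a point cloud in the depth camera's coordinate frame; if the dataset (or your own calibration) provides actual intrinsics, substitute them for the placeholder constants.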

1 Answer


If you are using the Windows SDK then it is possible by capturing the skeleton frame; it gives real-world x, y, z coordinates.

– Arshad
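
If you do have a live sensor rather than just the pre-recorded dataset, a rough sketch of the per-pixel mapping with the Kinect for Windows SDK v1 could look like the following; the use of NuiTransformDepthImageToSkeleton and its packed-depth convention (raw depth shifted left by 3 bits) are assumptions to verify against the SDK documentation.

    // Sketch only, assuming the Kinect for Windows SDK v1 and a live sensor
    // whose depth stream is already initialised.
    #include <Windows.h>
    #include <NuiApi.h>

    // Map one depth pixel (x, y) with depth in millimetres to skeleton
    // space (metres). The SDK expects the packed depth value, i.e. the raw
    // depth shifted left by 3 bits -- check your SDK version's docs.
    Vector4 depthPixelToSkeleton(LONG x, LONG y, USHORT depthMm)
    {
        USHORT packedDepth = static_cast<USHORT>(depthMm << 3);
        return NuiTransformDepthImageToSkeleton(x, y, packedDepth);
    }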