
I have a 480*640 depth image, and from it I computed the surface normals of each pixel (a 480*640*3 matrix). Does anyone know how I could find edges based on this normal information?

Thanks a lot!

Shai
guodi
  • posting example image/depth info would help you get better answers. – Shai Dec 02 '14 at 06:33
  • @Shai I get the depth image from a point cloud obtained from the Kinect. I was thinking of comparing the angles between neighbouring normals and setting a threshold to pick out the edges (a pixel where the angle between normals is bigger than the threshold would be defined as an edge). Would this work, and is there similar work? Actually I am a little confused about how to get the edges by comparing the angles... Thanks! – guodi Dec 02 '14 at 21:08
  • angle between vectors is easy to compute simply using the [dot-product](http://en.wikipedia.org/wiki/Dot_product#Geometric_definition). – Shai Dec 02 '14 at 21:15
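The dot-product approach discussed in these comments can be sketched as follows. This is a NumPy illustration (not from the original thread); the function name `normal_angle_edges`, the 30-degree default threshold, and the choice of comparing only right/bottom neighbours are all assumptions for the sketch, and `normals` is assumed to be an H×W×3 array of unit-length surface normals.

```python
import numpy as np

def normal_angle_edges(normals, thresh_deg=30.0):
    """Mark pixels where the normal direction changes sharply.

    normals: H x W x 3 array of unit surface normals.
    A pixel is flagged as an edge if the angle between its normal and a
    neighbouring pixel's normal exceeds thresh_deg degrees.
    """
    # Dot products with the right and bottom neighbours; for unit
    # vectors the dot product equals the cosine of the angle.
    dot_x = np.sum(normals[:, :-1] * normals[:, 1:], axis=2)
    dot_y = np.sum(normals[:-1, :] * normals[1:, :], axis=2)
    # Clamp to [-1, 1] before arccos to guard against rounding error.
    ang_x = np.degrees(np.arccos(np.clip(dot_x, -1.0, 1.0)))
    ang_y = np.degrees(np.arccos(np.clip(dot_y, -1.0, 1.0)))
    edges = np.zeros(normals.shape[:2], dtype=bool)
    edges[:, :-1] |= ang_x > thresh_deg
    edges[:-1, :] |= ang_y > thresh_deg
    return edges
```

For example, an image whose left half faces the camera and whose right half faces sideways (a 90-degree jump) gets edges flagged exactly along the dividing column.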

1 Answer


An intuitive definition of an edge in a depth image is where the surface normal turns away from the viewer, i.e. becomes nearly perpendicular to the viewing direction. Assuming a viewing direction of [0 0 -1] (looking into the XY plane), any normal with a nearly vanishing z component can be characterized as an edge pixel.

e = abs( normals(:,:,3) ) < 1e-3; %// a nice starting point

You will need to tune the threshold to your data.

After that you might consider applying some non-maximal suppression or other morphological cleaning operations.
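A NumPy translation of the one-liner above, followed by a crude cleanup step, might look like this. The helper names (`depth_edges`, `remove_isolated`) are made up for the sketch, and the cleanup is a deliberately simple numpy-only stand-in for proper non-maximal suppression or morphological operations:

```python
import numpy as np

def depth_edges(normals, z_thresh=1e-3):
    """Edge mask: pixels whose normal has a (nearly) zero z component,
    i.e. normals almost perpendicular to the viewing direction."""
    return np.abs(normals[:, :, 2]) < z_thresh

def remove_isolated(mask):
    """Crude cleanup: drop edge pixels that have no edge pixel among
    their 4-connected neighbours (a stand-in for real morphology)."""
    m = mask.astype(np.uint8)
    neigh = np.zeros_like(m)
    neigh[1:, :] += m[:-1, :]   # neighbour above
    neigh[:-1, :] += m[1:, :]   # neighbour below
    neigh[:, 1:] += m[:, :-1]   # neighbour to the left
    neigh[:, :-1] += m[:, 1:]   # neighbour to the right
    return mask & (neigh > 0)
```

An isolated noisy pixel whose normal happens to have a small z component is removed by the cleanup, while a run of adjacent edge pixels survives.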

Shai