
Hello, I have a depth image and I want to extract the person's (human) silhouette from it. I used pixel thresholding like this:

c = zeros(240, 320);            % preallocate the output image
for i = 1:240
  for j = 1:320
    if b(i,j) > 2400 || b(i,j) < 1900
      c(i,j) = 5000;            % push out-of-band depths to a background value
    else
      c(i,j) = b(i,j);          % keep depths inside the 1900-2400 band
    end
  end
end

but some parts are left over. Is there any way to remove them?
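As an aside, the same band-pass threshold can be written without the double loop. Here is a sketch in Python/NumPy; the random array `b` is only a stand-in for the real depth frame, while the 1900-2400 band and the 5000 background value come from the question:

```python
import numpy as np

# stand-in depth frame, same size as in the question (values assumed in mm)
rng = np.random.default_rng(0)
b = rng.integers(500, 4500, size=(240, 320))

# vectorized band-pass threshold: keep the 1900-2400 band,
# push everything else to a background value of 5000
c = b.copy()
c[(b > 2400) | (b < 1900)] = 5000
```

A boolean mask indexes all out-of-band pixels at once, which is both shorter and much faster than the elementwise loop.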

Original image: [depth image]

Extracted silhouette: [thresholded result]

Neil Slater
Frq Khan
  • This is a broad subject area. But perhaps someone can explain why this is harder than you would hope, or give a useful next step. Please show your pixel-thresholding code, because there may be less work to do for any person writing an answer. – Neil Slater Jan 12 '15 at 14:30
  • @NeilSlater: Here is my code `for i=1:240 for j=1:320 if b(i,j)>2400 || b(i,j)<1900 c(i,j)=5000; else c(i,j)=b(i,j); end end end` – Frq Khan Jan 13 '15 at 01:35
  • You might find [this thread](http://stackoverflow.com/q/27241945/1714410) useful. – Shai Jan 13 '15 at 10:03
  • @Shai: The mentioned thread uses surface-normal information, but in my case the depth image size is 320x240 and I don't have normals to find edges. Is there any other way to find the edges around the person? – Frq Khan Jan 13 '15 at 13:39
  • You can get a rough estimate of the surface normals from depth. Consider `[dzx dzy] = gradient( depth_map );` (horizontal and vertical derivatives of the depth map); then a rough estimate of the surface normals is `n = cat( 3, dzx, dzy, ones(size(dzx)) )`. Now you need to normalize to unit length: `n = bsxfun( @rdivide, n, sqrt( sum( n.^2, 3 ) ) );` – Shai Jan 13 '15 at 14:04
  • OK @Shai, I tried this code: `[dzx dzy] = gradient( depth_img ); n = cat( 3, dzx, dzy, ones(size(dzx))); n = bsxfun( @rdivide, n, sqrt( sum( n.^2, 3 ) ) ); figure;imagesc(b); e = abs( n(:,:,3) ) < 1e-2;` and I got the edges of the background that are connected to the person. [Edge_image](https://www.dropbox.com/s/gim3v44e8yxe24x/edge.jpg?dl=0) – Frq Khan Jan 13 '15 at 16:10
  • @FrqKhan Looks not bad. Have you tried different thresholds? Have you looked at the boundary image before thresholding? – Shai Jan 14 '15 at 06:26
  • You might find [this answer](http://stackoverflow.com/a/19510366/1714410) applicable to your task. – Shai Jan 14 '15 at 07:42
  • @Shai Yes, I tried different threshold values, and with this threshold I can see the full boundary edges of the human. Thank you for suggesting the link, but it seems a complex way to solve this. I just want to extract the silhouette as a pre-processing step, in a shorter way. – Frq Khan Jan 14 '15 at 08:19

3 Answers


According to this thread, depth-map boundaries can be found from the direction of the estimated surface normals.
To estimate the direction of the surface normals, you can:

[dzx dzy] = gradient( depth_map ); %// horizontal and vertical derivatives of depth map
n = cat( 3, dzx, dzy, ones(size(dzx)) );
n = bsxfun( @rdivide, n, sqrt( sum( n.^2, 3 ) ) ); %// normalize to unit length

A simple way to extract the boundaries would then be to threshold the z component:

e = abs( n(:,:,3) ) < 1e-2;

Resulting in: [boundary image]

A more sophisticated method of deriving silhouette from boundary can be found in this answer.
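For readers outside MATLAB, the same normal-based boundary estimate can be sketched in Python/NumPy; the synthetic `depth_map` below is only a stand-in for a real depth frame:

```python
import numpy as np

# synthetic depth map: flat background plus a raised block, standing in
# for a real 240x320 depth frame
depth_map = np.zeros((240, 320))
depth_map[60:180, 100:220] = 1000.0

# np.gradient returns the derivative along axis 0 (rows) first,
# then axis 1 (columns)
dzy, dzx = np.gradient(depth_map)

# rough surface-normal estimate, normalized to unit length
n = np.dstack((dzx, dzy, np.ones_like(dzx)))
n /= np.linalg.norm(n, axis=2, keepdims=True)

# at depth discontinuities the normal turns nearly horizontal,
# so its z component drops toward zero
e = np.abs(n[:, :, 2]) < 1e-2
```

Note the axis order: `np.gradient` yields the row derivative first, the opposite of MATLAB's `gradient`, which returns the column (x) derivative first.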

Shai
  • Is there any simple way of extracting the edges of the sitting person? – Frq Khan Jan 14 '15 at 09:21
  • @FrqKhan I don't think there is a simple way to distinguish between the person and the sofa using only the depth map. Try and think about it for a second: what makes the depth information in the sofa region between the person's legs different than the person itself? there's no clear boundary, no value difference, the orientation of the surface normal might be informative, but only locally... – Shai Jan 14 '15 at 09:43
  • Is there any possible way to detect ROI? – Frq Khan Jan 21 '15 at 01:31
  • @FrqKhan this might be suitable for a new question ;) – Shai Jan 21 '15 at 06:17

This is hard to do by thresholding, because the couch is at the same depth as the person's upper body. Do you need to segment out the entire person, or would it be sufficient to segment out the upper body? In the latter case, you can try using vision.CascadeObjectDetector in the Computer Vision System Toolbox to detect the person's upper body in the RGB image.

Dima
  • Hello Dima, yes, I want to extract the entire person. Is there any possible way to extract the boundary of the person? – Frq Khan Jan 13 '15 at 01:34
  • Try the Image Segmenter app in the Image Processing Toolbox: http://www.mathworks.com/help/images/image-segmentation-using-the-image-segmenter-app.html?refresh=true – Dima Jan 13 '15 at 14:42

Building on the work of @Shai above, you could take the output of the thresholding and then apply a boundary search to THAT image. Below is an example: feed in [YOUR_IMAGE] from the output of the previous steps, then adjust the value of [ADJUST] until only the person is selected.

This code checks how many boundaries were found and will not plot anything if the count is bigger than the value you enter. Simple, but it works for me. Hope this helps.

boundaries = bwboundaries([YOUR_IMAGE]);   % cell array of boundary traces
numberOfBoundaries = size(boundaries, 1);
if numberOfBoundaries < [ADJUST]           % only plot when few enough boundaries remain
    hold on
    for k = 1 : numberOfBoundaries
        thisBoundary = boundaries{k};
        plot(thisBoundary(:,2), thisBoundary(:,1), 'b', 'LineWidth', 2);
    end
end
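An equivalent size-based cleanup can be sketched in Python with `scipy.ndimage`: label the connected components of the binary mask and drop those below a minimum area. The toy `mask` and the `min_area` threshold below are hypothetical:

```python
import numpy as np
from scipy import ndimage

# toy binary mask: one large blob (the "person") and one small speck
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 30:70] = True    # large component, 60*40 = 2400 px
mask[5:8, 5:8] = True        # small component, 9 px

# label connected components and measure each one's pixel area
labels, num = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, num + 1))

# keep only components of at least min_area pixels
min_area = 100
keep_ids = np.flatnonzero(sizes >= min_area) + 1
cleaned = np.isin(labels, keep_ids)
```

Working on component area rather than boundary count makes the filter independent of how many stray specks the threshold leaves behind.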
Adam893
  • From the above code, the image (e) is used to calculate `bwboundaries(e)`, but the size of (boundaries) is `[16 1]`, so what values should I choose? It shows a box after I put `[ADJUST] = 18`. Is it correct? – Frq Khan Feb 12 '15 at 11:29
  • Looking at your code, are your images of resolution 240x320? If so, then the value you put into ADJUST needs to reflect how many pixels the person takes up. At a rough estimate, I would say the person occupies around 1/5 of the image. This gives a total of 320x240 = 76,800 pixels in the image, of which about 15,360 (one fifth) is roughly the pixel area of the person. Working with these numbers should help. – Adam893 Feb 12 '15 at 11:40
  • Actually I extracted the ROI of the above image and now I only have the person and part of the sofa at his sides. The resolution of the image is now `211x111`; 1/5 of the image will be 4684, so I used [ADJUST] = [70 60], which shows me this [result](http://www.dropbox.com/s/pzf3ahwpvl8lerd/Capture.PNG?dl=0). – Frq Khan Feb 12 '15 at 12:12
  • The shadowing of the boundary here is interesting, not sure what is causing this. It might be useful to work with the normal (intensity) image and use the depth image as a reference for the position of the person? – Adam893 Feb 12 '15 at 12:16