
Given an image to which I applied an edge detection filter, what would be the way (hopefully an efficient/performant one) to achieve a mask of the "sum" of the points in a marked segment?

Image for illustration: [image: edge-detected input with the marked segment]

Thank you in advance.

UPDATE:

Added an example of a lighter image (https://i.stack.imgur.com/hGgLg.jpg). As you'll see in the image below, we assume that when the user marks a region (ROI), there will be an object that "stands out" from its background. Our end goal is to get the most accurate "mask" of this object, so we can use it for ML processing.

[image: the marked ROI and the resulting mask of the object (newspaper)]

Roi Mulia
  • One suggestion would be to threshold your original image and crop that region. Use morphology to fill regions and clean up any outliers. Then get the contour and draw the contour filled as white on a black background (see the sketch after this comment thread). – fmw42 Jun 28 '20 at 17:54
  • @fmw42 Hey Fred, thank you for replying. I did something similar, and it works for high-contrast edge detection. What do you recommend for a scenario where there are far lighter edges? Also, do you do paid consulting by chance? – Roi Mulia Jun 29 '20 at 07:21
  • @RoiMulia I think you should add an example image where the ROI contains lighter edges. – Burak Jun 29 '20 at 13:37
  • I agree with `@Burak`. Please post an example with which you are having difficulty. – fmw42 Jun 29 '20 at 19:13
  • What do you call the "sum"? – Yves Daoust Jul 06 '20 at 10:00
  • Please see the updated question with the relevant scenario, @fmw42 – Roi Mulia Jul 07 '20 at 10:47
  • Please see the updated question with the relevant scenario, @Burak – Roi Mulia Jul 07 '20 at 10:48
  • @YvesDaoust I updated the question; please see the bottom-most image in my update. The "sum", from my perspective, is the area enclosed by all the relevant edges, which yields the mask (in our example, the newspaper). – Roi Mulia Jul 07 '20 at 10:48
  • Well, there is an area function for contours. – Yves Daoust Jul 07 '20 at 12:11
  • @YvesDaoust Interesting. But how can we get the relevant area only from the paper edges (which will result in the newspaper's mask)? – Roi Mulia Jul 07 '20 at 12:31
  • How did you select this mask? – Yves Daoust Jul 07 '20 at 12:52
  • @YvesDaoust Manually with photoshop, just for the example – Roi Mulia Jul 07 '20 at 14:24
  • How can an algorithm read your mind? You need to give objective criteria. – Yves Daoust Jul 07 '20 at 14:26
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/217391/discussion-between-roi-mulia-and-yves-daoust). – Roi Mulia Jul 07 '20 at 14:54
  • It is hard to distinguish objects like the one in the example. So far, the best method I know is [active contours without edges](https://www.mathworks.com/matlabcentral/fileexchange/23445-chan-vese-active-contours-without-edges). The function requires an initial mask; you may try giving the green rectangle (the relevant ROI) as input. – Burak Jul 07 '20 at 15:58
  • I suggested the "without edges" method, but it may be the case that the "snakes" method works better. I am interested in the results, so please let us know of any updates and I will be following. – Burak Jul 07 '20 at 16:02
  • What have you tried so far? Maybe we can give you a few tips to improve a "working" example. – karlphillip Jul 07 '20 at 22:51
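
For reference, here is a minimal sketch of the threshold → morphology → contour-fill pipeline fmw42 describes in the comments above; the file name, the Otsu threshold, and the kernel size are assumptions, not taken from the thread:

```python
import cv2
import numpy as np

# Hypothetical input: the original (not edge-detected) image, cropped to the marked ROI.
roi = cv2.imread("roi.png")
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

# Threshold the region (Otsu picks the threshold automatically).
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing/opening to fill gaps and remove small outliers.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
clean = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_OPEN, kernel)

# Keep the largest contour and draw it filled as white on a black background.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(gray)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    cv2.drawContours(mask, [biggest], -1, 255, thickness=cv2.FILLED)
cv2.imwrite("mask.png", mask)
```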

2 Answers


From the two examples you've uploaded, I assume you are thresholding based on differences in color/intensity. I suggest GrabCut as a basic foreground-separation step: use the edges in the mask within that ROI as input to the algorithm. Even better, if your thresholding is as good as in the first image, just skip the edge detection part and use the threshold result as the input to GrabCut.
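
A minimal sketch of that idea with OpenCV's mask-initialized GrabCut; the file names and the way the rough edge/ROI mask is seeded are assumptions:

```python
import cv2
import numpy as np

# Hypothetical inputs: the original image and a rough binary mask derived from
# the edges inside the user's ROI (non-zero = likely object).
img = cv2.imread("input.jpg")
rough = cv2.imread("edge_mask.png", cv2.IMREAD_GRAYSCALE)

# Seed GrabCut: probable background everywhere, probable foreground where the rough mask is set.
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
mask[rough > 0] = cv2.GC_PR_FGD

bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)

# Mask-initialized GrabCut (the rect argument is ignored in this mode).
cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

# Definite/probable foreground pixels form the output mask.
result = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("grabcut_mask.png", result)
```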

======= EDIT =======

@RoiMulia if you need production-level results, I suggest you leave the threshold + edge detection direction completely and try background removal techniques (the current SOTA are neural networks such as Background Matting: The World is Your Green Screen (example)).

You can also try some ready-made background removal APIs such as https://www.remove.bg/ or https://clippingmagic.com/.

YoniChechik
  • I did a quick and dirty tryout of this; the results are better but still not in the production-level range. Results: https://imgur.com/a/MgfjH1z . Do we have a way to optimize it? – Roi Mulia Jul 09 '20 at 11:41

1.
Given the "ROI" supervision you have, I strongly recommend you explore GrabCut (as proposed by YoniChechik):
Rother C, Kolmogorov V, Blake A. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG). 2004.

To get a feeling of how this works, you can use PowerPoint's "background removal" tool:
[image: PowerPoint's background removal tool]
which is based on the GrabCut algorithm.

This is how it looks in PowerPoint:
[image: background removal applied in PowerPoint]

GrabCut segments the foreground object in a selected ROI based mainly on its foreground/background color distributions, and less on edge/boundary information, though this extra information can be integrated into the formulation.

It seems like OpenCV has a basic implementation of GrabCut; see here.
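
A minimal sketch using that OpenCV implementation, initialized from the user's ROI rectangle (the file name and rectangle coordinates are placeholders):

```python
import cv2
import numpy as np

# Hypothetical input image and user-marked ROI rectangle (x, y, width, height).
img = cv2.imread("input.jpg")
rect = (50, 50, 300, 400)

mask = np.zeros(img.shape[:2], dtype=np.uint8)
bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)

# A few GrabCut iterations initialized from the rectangle: everything outside
# the rect is treated as background, and the inside is refined iteratively.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

object_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("object_mask.png", object_mask)
```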


2.
If you are seeking a method that uses only the boundary information, you might find this answer useful.


3.
An alternative method is to use NCuts: Shi J, Malik J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000.

If you have a very reliable edge map, you can modify the "affinity matrix" NCuts works with to be a binary matrix:

         0  if there is a boundary between i and j
w_ij =   1  if there is no boundary between i and j
         0  if i and j are not neighbors of each other

NCuts can be viewed as a way to estimate "robust connected components".
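
A minimal sketch of building that binary affinity matrix from an edge map and running a normalized-cuts-style spectral clustering on it; the edge-map file, the 4-neighborhood, the downscaling, and the cluster count are assumptions:

```python
import cv2
import numpy as np
from scipy.sparse import lil_matrix
from sklearn.cluster import SpectralClustering

# Hypothetical binary edge map (non-zero = boundary pixel), downscaled to keep N = h*w small.
edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.resize(edges, (80, 80), interpolation=cv2.INTER_NEAREST)
h, w = edges.shape
n = h * w

# Binary affinity: 1 between 4-neighbors not separated by a boundary, 0 otherwise.
W = lil_matrix((n, n))
for y in range(h):
    for x in range(w):
        i = y * w + x
        for dy, dx in ((0, 1), (1, 0)):
            ny, nx = y + dy, x + dx
            if ny < h and nx < w and edges[y, x] == 0 and edges[ny, nx] == 0:
                j = ny * w + nx
                W[i, j] = W[j, i] = 1
W.setdiag(1)  # self-loops keep isolated (boundary) pixels well-defined

# Spectral clustering on the precomputed affinity approximates a normalized cut.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="discretize").fit_predict(W.tocsr())
segmentation = labels.reshape(h, w)
```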

Shai
  • Thank you for responding, Shai. GrabCut gives good results, but unfortunately not with the accuracy we need for the inpainting process. I'll give NCuts a try as well. I'm giving you the bounty as I feel this answer maxes out my CV options before moving to DL methods. Thank you! – Roi Mulia Jul 13 '20 at 11:49