
I'm new to EMGU and image processing, and I have a project in C# that needs to detect a transparent object, specifically a moth's wing inside a plastic bottle. Here are some examples.

[two example images of the moth's wing inside the bottle]

I tried using YCbCr in EMGU, but I cannot detect it or differentiate it from the background.

Another thing I tried was enclosing it in a "controlled environment" (inside a box where no outside light can come in) and using an LED backlight. Is this advisable? Or will ambient light (fluorescent) do? Will this affect the detection rate? Does lighting play a factor in this kind of problem?

Here is the idea behind my project and what I use. Basically, it is a proof of concept: detecting a transparent object in an image using a webcam (Logitech C910). It is based on an old industrial problem here in our country, where a bottling plant overstocked its plastic bottles and they became contaminated before use. The moth body and moth wing are the contaminants we were given. The project is also meant to test whether a webcam can suffice as an alternative to an industrial camera for this application.

I placed it inside a controlled environment and used LED lights as a backlight (built with a prototyping board and high-intensity LEDs, diffused with bond paper). The object (the moth wing) is placed inside a plastic bottle with water and will be tested in two parts: first with the bottle stationary, then with the bottle moving on a conveyor in the same controlled environment. I have built all the required hardware, so that is no longer an issue. The moth body seems manageable to detect, but the moth wing has left me scratching my head.

Any help would be very much appreciated. Thank you in advance!

  • Well, first of all, lighting plays a very important role in this case. I would also ask you to upload images with real color data under different lighting conditions instead of the grayscale variants; that would provide the maximum amount of information to work with. Off the top of my head, use a Hough line detector (one part of the wing is straight) to localize the search. – scap3y Mar 07 '14 at 16:32
  • Hi scap3y, here are some of the color pictures I took with the lighting available to me: http://i89.photobucket.com/albums/k218/lololovelola/2014030821072_zps76f75eed.jpg and http://i89.photobucket.com/albums/k218/lololovelola/2014030821084_zpsb92a30e3.jpg – lololovelola Mar 09 '14 at 04:17

2 Answers


Consider using as many visual cues as possible:

  • blur/focus
  • shape: you can use active contours or findContours() on a clean image
  • location, intensity, and texture in a GrabCut framework
  • IR illumination, in case the moth and glass react to it differently
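As a rough illustration of the shape cue: on a clean binary mask, connected blobs can be extracted and filtered by size. The sketch below is dependency-free Python (not EMGU C#); the BFS blob scan is a stand-in for OpenCV's findContours, and the mask values are made up for illustration.

```python
from collections import deque

def find_blobs(img):
    """Label 4-connected foreground blobs in a binary image (list of lists
    of 0/1) and return one bounding box (top, left, bottom, right) per blob."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                # BFS flood fill from this unvisited foreground pixel
                q = deque([(y, x)])
                seen[y][x] = True
                t, l, b, r = y, x, y, x
                while q:
                    cy, cx = q.popleft()
                    t, l = min(t, cy), min(l, cx)
                    b, r = max(b, cy), max(r, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((t, l, b, r))
    return boxes
```

Tiny speckle blobs can then be discarded by bounding-box area before any further shape analysis.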

Vlad
  • I will try to see if I can do that. As for IR lights, I did use them, but my webcam cannot see IR. – lololovelola Mar 07 '14 at 23:55
  • Can your webcam see light from a remote control? Usually webcams don't have strong IR filters. Note that the IR spectrum is very broad. – Vlad Mar 08 '14 at 00:13
  • Hi Vlad, no, I haven't tried the remote, but I did test it using the only IR LEDs available in the shops I checked. It turns out the webcam can only see a very tiny speck of red light, and everything else was dark. – lololovelola Mar 09 '14 at 04:00
  • Good color images. Your solution is obvious now. Whatever initial and possibly imprecise blob you can get at the location of the moth should be fed into the GrabCut algorithm; see the OpenCV library. The latter will refine the boundaries between foreground and background by building statistical models and running probabilistic inference. This is the best you can do without extra hardware. – Vlad Mar 09 '14 at 20:29
  • Hi Vlad, I tried the idea of the autofocus algorithm without using the red LED backlight. I subtracted a "background" image (grayscale) from the grayscale image in which I want to detect the object, then applied an adaptive threshold (MEAN_C). After that I used erode(2), dilate(2), close(1), and finally inverted the image, which produced these blobs: http://i89.photobucket.com/albums/k218/lololovelola/blob1_zpsfa1c65d6.jpg and http://i89.photobucket.com/albums/k218/lololovelola/blob2_zpsd562820b.jpg . This seems to work, but before I use contours, is there a way to clean up this kind of image? Or is it unnecessary? – lololovelola Mar 25 '14 at 13:11
  • As for the location of the moth, it varies within the bubble of the bottled water. My idea is that it should be within the red box. – lololovelola Mar 25 '14 at 13:12
  • You skipped a step. Typically you would compute "the sum of absolute horizontal and vertical gradients", as specified in the blur/focus link above, BEFORE applying an adaptive threshold. The goal is to weaken blurry pixels by taking a gradient. Look at the link again: it shows a fairly clear separation of background and foreground based on the blur cue alone, while your images are a bit overcrowded with noise and background elements (probably beyond any repair). – Vlad Mar 25 '14 at 17:36
  • Hi Vlad, I redid my work, trying to follow your blur/focus steps, and this is what I got up to the image subtraction: http://i89.photobucket.com/albums/k218/lololovelola/sobelImage_zps23980023.jpg . However, I got stuck at "assemble pieces using the convex hull"; how do I gather the points in this image? – lololovelola Mar 26 '14 at 10:23
  • A convex hull can be used only if you don't have noise in your image and want to get a convex outline of your parts; see http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/hull/hull.html?highlight=convexhull – Vlad Mar 26 '14 at 18:40
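For reference, the MEAN_C adaptive threshold discussed in these comments is simple to state exactly. This is an illustrative dependency-free Python sketch, not EMGU code (OpenCV's adaptiveThreshold does the same thing much faster); the block size and constant are arbitrary defaults.

```python
def adaptive_mean_threshold(img, block=3, c=2):
    """Binarize a grayscale image (list of lists of ints): a pixel becomes
    foreground (1) when it is darker than its local mean minus C, mirroring
    ADAPTIVE_THRESH_MEAN_C with an inverted binary output."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Local mean over a (block x block) window, clipped at borders
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] < mean - c else 0
    return out
```

On a bright backlit background, only pixels noticeably darker than their neighborhood (the wing edges) survive.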
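The "sum of absolute horizontal and vertical gradients" step mentioned above is also easy to sketch. A minimal Python version using central differences (borders left at zero; illustrative, not the exact kernel used in any library):

```python
def gradient_energy(img):
    """Sum of absolute horizontal and vertical central differences at each
    pixel. Sharp, in-focus edges produce large values while blurred
    background stays low, which is the blur/focus cue."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = abs(img[y][x + 1] - img[y][x - 1])
            gy = abs(img[y + 1][x] - img[y - 1][x])
            out[y][x] = gx + gy
    return out
```

Running this before the adaptive threshold suppresses blurry pixels, as suggested.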
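And for "assemble pieces using the convex hull": OpenCV's convexHull (linked above) implements this; a self-contained Python version of the same idea, Andrew's monotone chain, looks like this:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: returns the hull vertices of a
    set of (x, y) points in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point is the other chain's first, so drop it
    return lower[:-1] + upper[:-1]
```

The "points" here would be the foreground pixel coordinates left after thresholding a noise-free mask.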

You should try adjusting the brightness/contrast and color balance.

[image: result after brightness/contrast and color balance adjustment]

Another idea is to use an automatic threshold such as Sauvola or other auto local thresholds. It gives interesting results such as this one (I directly converted the image to grayscale): [image: Sauvola binarization result]
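Sauvola's method is just a formula applied per local window, so the core is easy to sketch. A minimal Python version for a single window (k and R below are the common defaults for 8-bit images; this is illustrative, not ImageJ's implementation):

```python
import statistics

def sauvola_threshold(vals, k=0.5, R=128):
    """Sauvola's local threshold for the pixels in one window:
    T = m * (1 + k * (s / R - 1)), where m and s are the window's mean and
    standard deviation and R is the dynamic range of s (128 for 8-bit).
    Pixels below T are classified as foreground (for dark objects)."""
    m = statistics.mean(vals)
    s = statistics.pstdev(vals)
    return m * (1 + k * (s / R - 1))
```

In a flat window (s near 0) the threshold drops well below the mean, which is why Sauvola suppresses smooth background so aggressively compared with a plain local mean.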

I did these tests very quickly using ImageJ.

[image: comparison of binarization algorithms in ImageJ]

Click the link to the image to see which image corresponds to which binarization algorithm.

Olivier A
  • Hi Olivier, I'm using C# for my project, and I think ImageJ is Java. I also looked up Sauvola, but I cannot find it in the EMGU or OpenCV documentation. Do you have any idea whether EMGU has it? Or can you suggest a similar threshold? Thanks! – lololovelola Mar 10 '14 at 09:34
  • Yes, it was just for testing quickly with a UI. In OpenCV you should look at [adaptiveThreshold](http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html). Maybe it is also in EMGU. If you don't find what you want in EMGU, you can also try AForge.NET, which is also in C# and has both [adaptive binarization](http://www.aforgenet.com/framework/features/adaptive_binarization.html) and [adaptive local thresholding](http://www.aforgenet.com/framework/docs/html/0ad5b988-5613-d62a-22a9-cf41e39c139f.htm). – Olivier A Mar 10 '14 at 10:03
  • Hi Olivier, I tried to use the adaptive threshold, but the result is nowhere near yours. This is what I was able to do: http://i89.photobucket.com/albums/k218/lololovelola/processedimage_zpsd1f81c5b.jpg I am not really well versed in image processing, so I am clueless about how to do this correctly. – lololovelola Mar 13 '14 at 12:24
  • Your picture is very strange. In the OpenCV Python documentation you can check a simple example: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html – Olivier A Mar 23 '14 at 10:24
  • Hi Olivier, I tried it again based on the tutorial. This is what I got for Gaussian using a block size of 7: http://i89.photobucket.com/albums/k218/lololovelola/OutputGauss_zpsfb37beec.jpg and this is what I got for mean using a block size of 5: http://i89.photobucket.com/albums/k218/lololovelola/OutputMean_zps4844a7e4.jpg . I just converted the image to grayscale and then applied the adaptive threshold. How did you clean your image so nicely? I tried morphological erode, dilate, and close, but the image still has noise. – lololovelola Mar 24 '14 at 14:40
  • I didn't use any filtering method, only Sauvola binarization. You're right that it does not work with Gaussian or mean. I have added another picture to my answer. – Olivier A Mar 24 '14 at 15:47
  • Hi Olivier, it seems thresholding alone won't solve my current problem, then. But thank you for the help and the images you showed me; they gave me an idea of how to see the transparent part, and I'm currently working on blob detection. I'll try what Vlad suggested, since what I am doing right now is similar to his first suggestion. – lololovelola Mar 25 '14 at 12:51
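For completeness, the erode/dilate clean-up discussed in these comments (erosion followed by dilation is a morphological "open", which removes speckle smaller than the kernel) can be sketched in dependency-free Python. This is illustrative only; EMGU and OpenCV provide erode/dilate as built-ins.

```python
def erode(img):
    """3x3 binary erosion: a pixel stays 1 only if its entire 3x3
    neighbourhood is 1 (borders are left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """3x3 binary dilation: a pixel becomes 1 if any in-bounds
    neighbour (including itself) is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out
```

Opening (erode then dilate) deletes isolated noise pixels while blobs at least as large as the kernel survive with their shape intact.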