
I am pretty new to CV, so forgive my stupid questions...

What I want to do:
I want to recognize an RC plane in live video (for now it's only a recorded video).

What I have done so far:

  • Computed the difference between consecutive frames
  • Converted it to grayscale
  • Applied a GaussianBlur
  • Applied a threshold
  • Ran findContours

Here are some example frames:

But there are also frames with noise, so there are more objects in the frame.

I thought I could do something like this:

Run some object recognition algorithm on every contour that has been found, and compute the feature vector only for each of these bounding rectangles.

Is it possible to compute SURF/SIFT/... only for a specific patch (smaller part) of the image?

Since it will be important that the algorithm can process video in real time, I think this will only be possible if I don't look at the whole image all the time. Or maybe I could decide, for example, that if there are more than 10 bounding rectangles I check the whole image instead of every rectangle.

Then I will look at the next frame and try to match its feature vectors with those from the previous one. That way I will be able to track my objects. Once these objects cross the red line in the middle of the picture, it will trigger another event. But that's not important here.

I need to make sure that not every object which crosses or sits behind that red line triggers the event. The object needs to appear in at least 2 or 3 consecutive frames, and only if it crosses the line after that should the event be triggered.
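That consecutive-frame rule is just a small piece of bookkeeping, independent of OpenCV. A sketch (the class name and the id-to-position input format are made up for illustration):

```python
class CrossingTrigger:
    """Fire only for objects seen in at least min_frames consecutive frames
    that then cross a vertical line at x = line_x."""

    def __init__(self, min_frames=3, line_x=320):
        self.min_frames = min_frames
        self.line_x = line_x
        self.streaks = {}  # object id -> consecutive-frame count

    def update(self, tracked):
        """tracked: dict mapping object id -> x position in this frame.
        Returns the ids whose crossing should trigger the event."""
        fired = []
        for oid, x in tracked.items():
            self.streaks[oid] = self.streaks.get(oid, 0) + 1
            if self.streaks[oid] >= self.min_frames and x >= self.line_x:
                fired.append(oid)
        # Objects missing from this frame lose their streak,
        # so one-frame noise blobs never accumulate enough history to fire.
        for oid in list(self.streaks):
            if oid not in tracked:
                del self.streaks[oid]
        return fired
```

A noise blob that appears past the line for a single frame is ignored, while a plane tracked over several frames triggers as soon as it crosses.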

There are so many variations of object recognition algorithms that I am a bit overwhelmed: SIFT, SURF, ORB, ... you get what I am saying.

Can anyone give me a hint as to which one I should choose, or whether what I am doing even makes sense?

Bernhard Barker
user2175762

1 Answer


Assuming the plane's location doesn't change much from one frame to the next, I think you should look at object tracking instead of trying to estimate the location independently in each frame: http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html

Rosa Gronchi
  • What is a lot? The planes are pretty fast. – user2175762 Jan 02 '14 at 14:31
  • Ok, I tried to implement it. Now I get a bunch of dots on my plane. How does that help me to track only ONE specific object? In this case the plane. Example frames are: http://www7.pic-upload.de/02.01.14/6rz82ffyvbc.jpg and http://www7.pic-upload.de/02.01.14/vl55mj1mt1py.jpg and http://www7.pic-upload.de/02.01.14/onbwephku9j.jpg – user2175762 Jan 02 '14 at 14:53
  • Are the example frames the actual input or the product of some kind of processing? How to track a plane is not a theoretical question, the answer will heavily depend on the setup. – Rosa Gronchi Jan 02 '14 at 22:49
  • The example frames have undergone a lot of post-processing. Here are some example frames from the actual video: http://www7.pic-upload.de/03.01.14/znovaqbk27v3.png http://www7.pic-upload.de/03.01.14/z59renvgl6fe.png http://www7.pic-upload.de/03.01.14/trdml1l7wwig.png http://www7.pic-upload.de/03.01.14/3nrdh75n9ckj.png http://www7.pic-upload.de/03.01.14/yfbbb1za7upr.png So what kind of information do you need? Please, just ask. Thanks for the help so far! – user2175762 Jan 03 '14 at 02:43
  • Is the camera static? If it is, the problem should not be too difficult and I would use some kind of robust background subtraction. If it isn't, you may still want to use background subtraction, but you'll need to register the frames first. Generally speaking the problem is not simple (as you have probably started to find out) and you'll probably end up using a combination of different solutions for different conditions. If you want something that works for a general unknown setup, you may want to try training a Haar classifier, and not use frame-to-frame information, just to make life simpler. – Rosa Gronchi Jan 03 '14 at 03:16
  • Thanks again. The camera is stationary on the ground. No movement. The only thing that changes is the planes which are used. They probably change in colour, but not too much in shape. So you think I'd be better off with a background subtractor instead of all these steps I listed in my initial post? Because I think the plane is found very well already. – user2175762 Jan 03 '14 at 11:16
  • Here is my source code for easier understanding: https://www.dropbox.com/s/q3rqe5kfq8nfe49/main.cpp I have read about background subtractors, and I think they do about the same as I am already doing: differences between frames and then a threshold. – user2175762 Jan 03 '14 at 12:45
  • Please get back to me Rosa :) – user2175762 Jan 07 '14 at 21:37
  • I am not sure what the exact nature of the system you are trying to build is, or what the design considerations should be. But I am quite sure what you are looking for is a system, not an algorithm. If the camera is indeed static, background subtraction will help a lot. Back to your original question, OpenCV's feature detection supports a mask: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#featuredetector, so you can filter out background pixels. Try reading a bit about object detection and tracking, and mainly experiment and see what works. – Rosa Gronchi Jan 07 '14 at 22:22
  • Thanks Rosa! I know, I am starting to get annoying :D... Here is a video. I hope this explains it better: https://www.dropbox.com/s/gmjlqcnwq3tezos/sample.mp4 Basically, what I want to do is to identify whether what is moving is (roughly) a plane and, if so, to track it, which means I always want to know the position of my possible match. Does this help you any further? – user2175762 Jan 10 '14 at 00:24