After discovering the power of OpenCV, I decided to use that library to develop the natural marker tracking engine I am currently working on. My problem is that I have no idea what a proper approach to implementing such a tracker looks like.
I have devised the following plan:
- Use one of the feature detection and description algorithms (e.g. SIFT, SURF, etc.) to detect and describe keypoints from a live camera feed.
- Build a histogram from the extracted keypoint descriptors and compare it against the histograms of the stored markers (a rough sketch of these first two steps follows this list).
- Once a match is found, convert the position information and pass it to the engine responsible for rendering the 3D objects.
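
Here is a minimal sketch of what I have in mind for the first two steps. It is not a working implementation: the marker path, the histogram binning, and the Bhattacharyya comparison are all placeholders I picked for illustration (and SIFT requires OpenCV >= 4.4 for `cv2.SIFT_create`; older builds expose it as `cv2.xfeatures2d.SIFT_create`).

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def descriptor_histogram(image, bins=32):
    """Detect keypoints, then collapse all descriptor values into one normalized histogram."""
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:
        return None
    hist, _ = np.histogram(descriptors.ravel(), bins=bins, range=(0, 256))
    hist = hist.astype(np.float32)
    return hist / (hist.sum() + 1e-6)

# "marker.png" is just a placeholder for one of my stored marker images.
marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
if marker is None:
    raise SystemExit("marker image not found")
marker_hist = descriptor_histogram(marker)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_hist = descriptor_histogram(gray)
    if frame_hist is not None and marker_hist is not None:
        # Smaller Bhattacharyya distance means more similar histograms.
        score = cv2.compareHist(marker_hist, frame_hist, cv2.HISTCMP_BHATTACHARYYA)
        print("similarity score:", score)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```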
I tried both SIFT and SURF for detecting and describing keypoints, and the end result is an extremely low fps for both algorithms. I've noticed that SIFT and SURF are quite computationally expensive; are they suitable for this kind of tracking on a live camera feed?
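
For reference, the kind of per-frame timing I did looks roughly like the sketch below (the detector choice, camera index, and frame count are just the values I happened to use; I swap in SURF via `cv2.xfeatures2d.SURF_create` for the other test):

```python
import time
import cv2

detector = cv2.SIFT_create()  # or cv2.xfeatures2d.SURF_create(400) for the SURF test
cap = cv2.VideoCapture(0)

frames, total = 0, 0.0
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    start = time.time()
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    total += time.time() - start
    frames += 1

cap.release()
if total > 0:
    print(f"avg detection-only rate over {frames} frames: {frames / total:.1f} fps")
```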
Thanks.