
I'm planning to do the Final Year Project of my degree on Augmented Reality. It will use markers, and there will also be interaction between virtual objects (a sort of simulation).

Do you recommend using libraries like ARToolkit, NyARToolkit, or osgART for such a project, since they come with all the functions for tracking, detection, calibration, etc.? Will there be much work left from the programmer's point of view?

What do you think about using OpenCV and doing the marker detection, recognition, calibration and the other steps from scratch? Would that be too hard to handle?

coder9

3 Answers


I suggest you use OpenCV: you will find high-quality algorithms, and it is fast. The library is continuously gaining new methods, so running it in real time on mobile devices will soon be possible.

You can start with this tutorial here.
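As a very first step, something along these lines is usually where marker work starts: grab frames from the camera and binarize them before looking for marker candidates. This is only a minimal sketch; the camera index and the adaptive-threshold parameters (block size 11, offset 7) are placeholder values you would tune.

    // Minimal OpenCV starting point: capture frames and binarize them,
    // the usual first step before searching for marker candidates.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);               // default webcam (assumed index)
        if (!cap.isOpened()) return 1;

        cv::Mat frame, gray, binary;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            // Adaptive thresholding copes with uneven lighting better than a fixed threshold.
            cv::adaptiveThreshold(gray, binary, 255,
                                  cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv::THRESH_BINARY_INV, 11, 7);
            cv::imshow("binary", binary);
            if (cv::waitKey(1) == 27) break;   // Esc to quit
        }
        return 0;
    }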

Jav_Rock

I don't know how familiar you are with image or video processing, but writing a tracker from scratch will be very time-consuming if you want it to return reliable results. The effort also depends on which kind of markers you plan to use.

ARToolkit, for example, compares the marker content detected in the video stream to images you defined as markers beforehand. It tries to match images and returns a probability that a certain part of the video stream is a predefined marker. Depending on the threshold you choose and the lighting conditions, markers are not always recognized correctly.

Then there are markers like Data Matrix, QR codes, and frame markers (used by QCAR) that encode an ID optically, so no image matching is required and all the necessary data can be read directly from the video stream. Finally, there are more complex approaches like natural feature tracking, where you can use predefined images as targets, provided they offer enough contrast and interest points for the tracker to recognize them later.

So if you are more interested in the actual application or interaction than in understanding how trackers work, you should base your work on an existing library.
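If you do decide to go the from-scratch route with OpenCV anyway, the candidate-detection stage that square-marker trackers of either kind share looks roughly like the sketch below: find quadrilateral contours in a binarized frame. The approximation epsilon (3% of the perimeter) and the minimum area are assumptions you would tune for your camera.

    // Rough sketch of the marker-candidate stage: find convex quadrilaterals
    // in a binarized frame; each candidate would then be unwarped and either
    // matched against templates (ARToolkit-style) or decoded (id markers).
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    std::vector<std::vector<cv::Point>> findQuadCandidates(const cv::Mat& binary) {
        std::vector<std::vector<cv::Point>> contours, quads;
        // clone() because older OpenCV versions modify the input image
        cv::findContours(binary.clone(), contours,
                         cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours) {
            std::vector<cv::Point> approx;
            cv::approxPolyDP(c, approx, 0.03 * cv::arcLength(c, true), true);
            // Keep convex 4-corner shapes that are large enough to decode.
            if (approx.size() == 4 && cv::isContourConvex(approx) &&
                std::fabs(cv::contourArea(approx)) > 1000.0)
                quads.push_back(approx);
        }
        return quads;
    }

From there, each candidate's four corners can be fed to cv::getPerspectiveTransform and cv::warpPerspective to extract a fronto-parallel view of the marker for matching or ID decoding.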

Pedro
  • +1 for the reply. Hi, it's been quite some time since I posted this. The current situation is that I'm working on marker detection code using OpenCV. I'm wondering in what form we can pass the position and rotation details to the OpenGL API to position a 3D object. What should the matrix look like? In case I fail to achieve success with OpenCV, I should be ready to use something like ARToolkit for marker tracking and detection. – coder9 Oct 07 '11 at 16:27
  • Once you're able to determine the correct position and rotation of your markers, it shouldn't be too much trouble converting the data. OpenGL expects a float[16] (or double[16]) array. I am not sure about the exact order; just search for "opengl matrix order" — I guess it shouldn't take too long to find out (a rough sketch follows below). – Pedro Oct 07 '11 at 16:57
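A minimal sketch of that conversion, assuming the pose comes from cv::solvePnP (rvec/tvec as double matrices in OpenCV's camera frame: x right, y down, z forward) and that you want a column-major modelview matrix for OpenGL (y up, camera looking down -z, hence the axis flip):

    #include <opencv2/opencv.hpp>

    // Build a column-major OpenGL modelview matrix from an OpenCV pose.
    void poseToGLModelView(const cv::Mat& rvec, const cv::Mat& tvec, float gl[16]) {
        cv::Mat R;
        cv::Rodrigues(rvec, R);                        // 3x3 rotation from axis-angle

        for (int col = 0; col < 3; ++col) {
            for (int row = 0; row < 3; ++row) {
                double v = R.at<double>(row, col);
                if (row == 1 || row == 2) v = -v;      // OpenCV -> OpenGL axis flip (y, z)
                gl[col * 4 + row] = static_cast<float>(v);   // column-major storage
            }
            gl[col * 4 + 3] = 0.0f;
        }
        gl[12] = static_cast<float>(tvec.at<double>(0));     // translation column
        gl[13] = static_cast<float>(-tvec.at<double>(1));
        gl[14] = static_cast<float>(-tvec.at<double>(2));
        gl[15] = 1.0f;
    }

The resulting array can then be loaded with glLoadMatrixf in the fixed-function pipeline or passed as a uniform in a shader-based setup.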

Mastering OpenCV with Practical Computer Vision Projects

I did the exact same thing and found Chapter 2 of this book immensely helpful. The authors provide source code for the marker tracking project, and I've written a frame-marker generator tool on top of it. There is still quite a lot to figure out in terms of OpenGL, camera calibration, projection matrices, markers, and extending it, but it is a great foundation for the marker tracking portion.
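For the projection-matrix part specifically, here is a rough sketch (mine, not from the book) of turning the intrinsics returned by cv::calibrateCamera into a column-major OpenGL projection matrix. It assumes the principal point is measured from the top-left pixel and pairs with a modelview matrix that already flips OpenCV's y/z axes to OpenGL's, as in the comment thread above; the near/far planes are whatever suits your scene.

    // Build a column-major OpenGL projection matrix from OpenCV intrinsics
    // (fx, fy in pixels; cx, cy from the top-left corner of the image).
    void intrinsicsToGLProjection(double fx, double fy, double cx, double cy,
                                  int width, int height,
                                  double znear, double zfar, float gl[16]) {
        for (int i = 0; i < 16; ++i) gl[i] = 0.0f;
        gl[0]  = static_cast<float>(2.0 * fx / width);           // x focal scale
        gl[5]  = static_cast<float>(2.0 * fy / height);          // y focal scale
        gl[8]  = static_cast<float>(1.0 - 2.0 * cx / width);     // principal point offset (x)
        gl[9]  = static_cast<float>(2.0 * cy / height - 1.0);    // principal point offset (y)
        gl[10] = static_cast<float>(-(zfar + znear) / (zfar - znear));
        gl[11] = -1.0f;                                          // perspective divide by -z
        gl[14] = static_cast<float>(-2.0 * zfar * znear / (zfar - znear));
    }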

Cameron Lowell Palmer