Let's say I have a video from a drive recorder (dashcam). I want to reconstruct a point cloud of the recorded scene using structure from motion. As a first step I need to track some points across frames.
Which algorithm would yield a better result: sparse optical flow (the Kanade-Lucas-Tomasi tracker) or dense optical flow (Farneback)? I have experimented a bit with both but cannot really decide; each has its own strengths and weaknesses.
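For context, this is roughly how I am comparing the two on a pair of consecutive frames (a minimal sketch with OpenCV's Python bindings; the video path and all parameter values are just placeholders):

```python
import cv2

cap = cv2.VideoCapture("dashcam.mp4")        # placeholder path
ok, frame0 = cap.read()
ok, frame1 = cap.read()
prev_gray = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# Sparse: detect Shi-Tomasi corners, then track them with pyramidal Lucas-Kanade (KLT)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                             qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None,
                                           winSize=(21, 21), maxLevel=3)
good_old = p0[status.reshape(-1) == 1].reshape(-1, 2)
good_new = p1[status.reshape(-1) == 1].reshape(-1, 2)

# Dense: Farneback returns an (H, W, 2) array of per-pixel (dx, dy) displacements
# (arguments after None: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
```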
The ultimate goal is to get a point cloud of the cars recorded in the scene. With sparse optical flow I can track interesting points on the cars, but which points get picked up is quite unpredictable. One idea is to divide the image into a grid and force the tracker to follow one interesting point in each cell (see the sketch below), but I suspect this would be quite hard to get right.
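What I mean by the grid idea is something like this (a rough sketch, again assuming OpenCV; the cell size and detector parameters are arbitrary): detect at most one strong corner per grid cell, then feed the combined point set to the KLT tracker.

```python
import cv2
import numpy as np

def grid_features(gray, cell=40, quality=0.01):
    """Detect at most one Shi-Tomasi corner in each cell x cell block."""
    h, w = gray.shape
    points = []
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            roi = gray[y0:y0 + cell, x0:x0 + cell]
            if roi.shape[0] < 8 or roi.shape[1] < 8:   # skip tiny edge cells
                continue
            corner = cv2.goodFeaturesToTrack(roi, maxCorners=1,
                                             qualityLevel=quality,
                                             minDistance=5)
            if corner is not None:
                x, y = corner[0, 0]                    # coordinates inside the ROI
                points.append([x + x0, y + y0])        # shift back to image coordinates
    return np.array(points, dtype=np.float32).reshape(-1, 1, 2)

# p0 = grid_features(prev_gray)
# p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
```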
With dense flow I get the movement of every pixel, but it does not reliably pick up cars that move only slightly between frames. I also doubt that the per-pixel flow it produces is accurate everywhere. On top of that, dense flow only gives me pixel displacements between two consecutive frames, unlike sparse optical flow, where I can follow the same interesting point and collect its coordinates over time t.
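The only workaround I can think of is to chain the per-frame displacements myself: sample the flow at each tracked position and add it, frame after frame. A rough sketch of what I mean (again OpenCV; nearest-pixel sampling only, no interpolation, no drift or occlusion handling):

```python
import cv2
import numpy as np

def chain_dense_flow(frames_gray, seed_points):
    """Propagate (x, y) seed points through a list of grayscale frames
    by accumulating Farneback flow between consecutive frame pairs."""
    pts = np.asarray(seed_points, dtype=np.float32)    # shape (N, 2)
    tracks = [pts.copy()]
    for prev, nxt in zip(frames_gray[:-1], frames_gray[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = flow.shape[:2]
        # sample the flow at the nearest pixel of each current position
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]                       # add (dx, dy) per point
        tracks.append(pts.copy())
    return np.stack(tracks)                            # shape (num_frames, N, 2)
```

I expect errors to accumulate quickly when chaining like this, which is part of my doubt about the dense approach.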