I have an image sequence (e.g. KITTI) and I want to estimate the motion between consecutive frames up to scale (i.e., the translation vector has unit norm). I could detect ORB features in both images, compute their descriptors, match them with a matcher (e.g. OpenCV's Brute-Force Matcher), estimate the fundamental matrix from the matches, and recover the motion from it.
However, several papers use the KLT tracker on "good features to track" keypoints instead. Here is the problem: these keypoints have no descriptors that can be matched in order to obtain the motion between the two frames. One paper says they "track" the features by directly estimating the motion model, yet several other papers state that they obtain correspondences between images using the KLT tracker. I am not sure how this is done. An example of a paper that says this is here. So the main question is: how do I obtain point correspondences between images using good features to track?
I have tried computing ORB descriptors for the keypoints returned by OpenCV's goodFeaturesToTrack() function, but this did not work well, for several reasons:
- ORB keypoints normally carry an orientation that is assigned during detection. When I compute descriptors directly on non-ORB keypoints, that orientation is missing, and matching does not work.
- I suspect other keypoint attributes also mess up the descriptor computation (the octave attribute of the keypoints, for example).