
I want to compute the affine transformation parameters between two images. The images come from different sensors, so they differ in translation, rotation, and scale; because of the different sensors I expect the two images also have different distortions. I have at least 10 points with known image coordinates in both images. I tried to set up a system of equations (like this, but for three dimensions) using syms and then fsolve, but I didn't get any reasonable results. Everything I have found so far is about applying the transformation, not about computing the parameters from the observations. Is there any function out there that computes that? Thank you in advance!
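For reference, the affine parameters can be estimated directly by linear least squares instead of `fsolve`: each correspondence (x, y) → (x', y') contributes two linear equations in the six unknowns of x' = ax + by + c, y' = dx + ey + f, so 10 points give a well-overdetermined system. A minimal sketch of this idea in Python/NumPy (the same construction works in MATLAB with the backslash operator; the function name `estimate_affine` is my own, not from any library):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of the 2-D affine transform mapping src to dst.

    src, dst: (N, 2) arrays of matched points, N >= 3.
    Returns a 2x3 matrix M so that dst ~= src @ M[:, :2].T + M[:, 2].
    """
    n = src.shape[0]
    # Design matrix: each correspondence yields two rows,
    # one for x' = a*x + b*y + c, one for y' = d*x + e*y + f.
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2] = src   # coefficients a, b
    A[0::2, 2] = 1.0     # coefficient c
    A[1::2, 3:5] = src   # coefficients d, e
    A[1::2, 5] = 1.0     # coefficient f
    b[0::2] = dst[:, 0]
    b[1::2] = dst[:, 1]
    # Solve the overdetermined system in the least-squares sense.
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```

With noise-free correspondences this recovers the exact parameters; with real matched points the residuals of the fit indicate how well a pure affine model explains the data.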

HliasK
  • I think you're looking to do affine image *registration*: http://www.mathworks.com/discovery/image-registration.html – Dan Nov 11 '15 at 14:12
  • Don't let the title of the duplicate fool you. It finds the parameters in the general case, solving for not only translation and rotation but scale too. If you want the more automated case where you detect matched points between two images, see here: http://stackoverflow.com/questions/29127181/matching-images-with-different-orientations-and-scales-in-matlab/29128507#29128507 – rayryeng Nov 11 '15 at 16:01
  • Thank you Dan and rayryeng for your answers. @rayryeng: The points I have came from applying the SIFT detector as proposed by A. Vedaldi and B. Fulkerson in the [vl_feat toolbox](http://www.vlfeat.org/). The solution you propose seems to work, and thank you for that. However, it computes the coordinates with only 3-4 pixel accuracy. I believe this is because it doesn't take into account the parameters of the sensors (the internal orientation, in photogrammetry terms). Do you know any method that solves this? – HliasK Nov 11 '15 at 17:12
  • @rayryeng: Moreover, I would like to include your implementation in my report. Could I do that? If yes, please let me know how I could cite you. Thank you again :) – HliasK Nov 11 '15 at 17:19
  • @HliasK mmm yeah, this certainly doesn't model the intrinsic and extrinsic parameters of the cameras. This purely does this on a pixel basis. I'll have to take a look and get back to you. Also, yes, you certainly can. Take a look here for how to cite a Stack Overflow post: http://meta.stackexchange.com/questions/49760/citing-stack-overflow-discussions - Thanks so much! Let me know if I can do anything more to help. Until then, let me do some research for you on how to resolve your particular issue. – rayryeng Nov 11 '15 at 17:23
  • Thank you very much again. One image is WorldView-2 (13.3 m focal length, 770 km altitude) and the other is SPOT 6 (3.76036 m focal length, 694 km altitude). Do you believe the distortion could lead to such a large error (4 pixels)? Because the points themselves are very good; they are sub-pixel accurate. – HliasK Nov 11 '15 at 17:35
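If only translation, rotation, and uniform scale are expected, as discussed in the comments, the model is a similarity transform rather than a general affine one, and there is a closed-form least-squares solution based on the SVD of the cross-covariance of the centered point sets (Umeyama's method). A sketch in Python/NumPy, assuming noise-free or near-noise-free matches; the helper name `estimate_similarity` is my own:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Closed-form least-squares fit of dst ~= s * src @ R.T + t
    (Umeyama's method): scale s, 2x2 rotation R, translation t."""
    n = src.shape[0]
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d          # centered point sets
    # Cross-covariance between the two centered sets.
    H = dc.T @ sc / n
    U, S, Vt = np.linalg.svd(H)
    # Correction to guarantee a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_src = (sc ** 2).sum() / n            # variance of the source cloud
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Constraining the fit to four parameters instead of six makes it more robust to a few mismatched points, which may matter when SIFT correspondences between different sensors are noisy.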

0 Answers