
I've been working on a project that stitches images from multiple cameras, but I've hit a bottleneck and have some questions about it.

I want to mount the cameras on a vehicle in the future, which means their relative positions and orientations are FIXED.

Also, since I'm using multiple cameras and stitching their images with a HOMOGRAPHY, I'll place the cameras as close together as possible to reduce the errors caused by their optical centers not coinciding (making them coincide exactly is impossible, since the cameras occupy physical space).

Here's a short experiment video of mine. http://www.youtube.com/watch?v=JEQJZQq3RTY

As shown there, the stitching result is terrible... Even though the scene captured by the cameras is static, the homography keeps varying.

The following link contains the code I've written so far; code1.png and code2.png show part of the code in Stitching_refined.cpp.

https://docs.google.com/folder/d/0B2r9FmkcbNwAbHdtVEVkSW1SQW8/edit?pli=1

A few days ago I changed the code so that Steps 2, 3, and 4 (please check the two PNG pictures mentioned above) are performed JUST ONCE.


To sum up, my questions are:

1. Is it possible to determine the overlapping regions before computing features? I don't want to compute features on the entire images, since that increases computation time and produces more mismatches. I wonder if it's possible to compute features ONLY in the overlapping region of 2 adjacent images?

2. What can I do to make the obtained homography more accurate? Some people have suggested CAMERA CALIBRATION and trying other matching methods. I'm still new to computer vision... I've tried to study some material about camera calibration, but I still have no idea what it is for.

About two months ago I asked a similar question here: Having some difficulty in image stitching using OpenCV, where one of the answerers, Chris, said:

It sounds like you are going about this sensibly, but if you have access to both of the cameras, and they will remain stationary with respect to each other, then calibrating offline, and simply applying the transformation online will make your application more efficient.

What does "calibrate offline" mean, and how does it help?

Thanks for any advice and help.

SilentButDeadly JC
  • Sorry for my late reply, I've been away for a couple of weeks. I'm afraid I have not actually gone through the process of aligning two cameras in this way, I just know the general principle. However, the function you want to be looking into is [stereoCalibrate](http://opencv.itseez.com/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereocalibrate), with stereo calibration as your search term for Google. – Chris Jul 23 '12 at 15:28

1 Answer


As Chris wrote:

However, your points are not restricted to a specific plane as they are imaging a 3D scene. If you wanted to calibrate offline, you could image a chessboard with both cameras, and the detected corners could be used in this function.

Calibrating offline means that you use a calibration pattern that is easy to detect and compute the transformation matrix from it once. After this calibration, you apply the (previously computed) matrix to the acquired images; it should work for you.

krzych
  • Excuse me, I still don't get it... 1. What does calibration have to do with computing the homography? 2. "Calibrate offline means that you use some calibration pattern easy to detect. Then compute transformation matrix. After this calibration you apply this (previously computed) matrix to acquired images." I know there's camera calibration code in OpenCV: http://opencv.itseez.com/doc/tutorials/calib3d/camera_calibration/camera_calibration.html So what I should do is run such calibration code just ONCE and then run my own stitching code? – SilentButDeadly JC Jul 16 '12 at 05:27
  • The link you added is about storing camera parameters: you compute them once on a chessboard pattern and then use them for acquisition. Here are also some references about homography: http://www.ics.uci.edu/~majumder/vispercep/cameracalib.pdf http://www.epixea.com/research/multi-view-coding-thesisse9.html http://people.scs.carleton.ca/~c_shu/Courses/comp4900d/notes/homography.pdf – krzych Jul 16 '12 at 06:10
  • Does that mean I have to prepare a chessboard for calibration? I have a set of chessboard pictures, which were used for "stereo calibration" on my lab mate's PC. I uploaded them to my Google Drive just now. https://docs.google.com/folder/d/0B2r9FmkcbNwAbHdtVEVkSW1SQW8/edit – SilentButDeadly JC Jul 16 '12 at 07:40
  • I still don't have a clue :( (sorry, I'm still a beginner in the CV field), but I wonder whether I have to do camera calibration every time I execute my multi-camera stitching code (the Stitching_refined.cpp on my Google Drive), or whether I only have to do it JUST ONCE, once and for all? – SilentButDeadly JC Jul 16 '12 at 07:49
  • Calibration once, then you should apply computed parameters. – krzych Jul 16 '12 at 08:09
  • I found an older version of Camera calibration code of OpenCV here http://dsynflo.blogspot.tw/2010/03/camera-calibration-using-opencv.html – SilentButDeadly JC Jul 16 '12 at 09:09
  • I compiled it and it executed successfully. So what do I have to do now? There's also a large chessboard picture within that .rar. Should I open the chessboard picture, hold the camera in my hand, and point it at the chessboard picture on the screen? – SilentButDeadly JC Jul 16 '12 at 09:12
  • I used the calibration code and held a chessboard pattern in front of the camera so it could take snapshots (12 pictures) for calibration. I calibrated the 2 cameras SEPARATELY and got Distortion.xml and Intrinsics.xml for both cameras. – SilentButDeadly JC Jul 16 '12 at 17:33
  • I'm not sure I did things right. I took the chessboard pattern and kept moving it IN FRONT OF THE CAMERA THAT WAS TO BE CALIBRATED, and finally I got the two XML files mentioned above. Also, what's my next step, please? – SilentButDeadly JC Jul 16 '12 at 17:34
  • Maybe I'm being annoying. Sorry for that. – SilentButDeadly JC Jul 18 '12 at 05:30
  • I am trying to do exactly the same as you and would also like to know some answers. First: camera calibration produces intrinsic and extrinsic parameters, which, as defined by the OpenCV documentation, are the intrinsic matrix, the distortion coefficients, and the rotation+translation vectors. Now, as the poster describes his problem, the positions of his cameras are fixed relative to each other. So if camera calibration only provides something he already knows, what is the use of it? Can you clarify, please? – user573014 Aug 01 '12 at 10:46