I'm trying to perform Bundle Adjustment (BA) on a sequence of stereo images (class Step) taken with the same camera.
Each Step holds the left & right images (rectified and synchronized), the generated depth map, the keypoints + descriptors of the left image, and two 4x4 matrices: one mapping local (image plane) to global (3D world) coordinates and its inverse (T_L2G and T_G2L, respectively).
The steps are registered with respect to the first image.
I'm trying to run BA on the result to refine the transformations, using PBA (https://grail.cs.washington.edu/projects/mcba/).
Code for setting up the cameras:
for (int i = 0; i < steps.size(); i++)
{
    Step& step = steps[i];
    cv::Mat& T_G2L = step.T_G2L;
    cv::Mat R;
    cv::Mat t;
    T_G2L(cv::Rect(0, 0, 3, 3)).copyTo(R); // World-to-camera rotation
    T_G2L(cv::Rect(3, 0, 1, 3)).copyTo(t); // World-to-camera translation

    CameraT camera;
    // Camera parameters
    camera.SetFocalLength((float)m_focalLength); // Same camera, global focal length
    camera.SetTranslation((float*)t.data);       // Note: t and R must be CV_32F for these raw-pointer casts
    camera.SetMatrixRotation((float*)R.data);
    if (i == 0)
    {
        camera.SetConstantCamera(); // First camera is the fixed reference
    }
    camera_data.push_back(camera);
}
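A side note on conventions, since this is where I may be going wrong: T_L2G is a rigid transform, so T_G2L is its analytic inverse, and the R and t blocks I extract from T_G2L above are (as far as I can tell) the world-to-camera rotation and translation that PBA expects. A minimal OpenCV-free sketch of that inverse relationship (the struct and names are mine, not pba's):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// A rigid transform given as row-major R (3x3) and translation t,
// representing the map p_out = R * p_in + t.
struct Rigid {
    std::array<float, 9> R; // row-major 3x3 rotation
    std::array<float, 3> t; // translation
};

// Invert a rigid transform analytically:
// if p_g = R * p_l + t, then p_l = R^T * p_g - R^T * t.
Rigid invertRigid(const Rigid& T) {
    Rigid inv;
    // R^T
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            inv.R[r * 3 + c] = T.R[c * 3 + r];
    // -R^T * t
    for (int r = 0; r < 3; r++) {
        inv.t[r] = 0.f;
        for (int c = 0; c < 3; c++)
            inv.t[r] -= inv.R[r * 3 + c] * T.t[c];
    }
    return inv;
}
```

So feeding PBA the blocks of T_G2L should be equivalent to feeding it invertRigid applied to the blocks of T_L2G.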
Then I build global keypoints by matching the per-image keypoints across all image pairs (currently using SURF).
Then I generate the BA point data:
for (size_t i = 0; i < globalKps.size(); i++)
{
    cv::Point3d& globalPoint = globalKps[i].AbsolutePoint;
    cv::Point3f globalPointF((float)globalPoint.x, (float)globalPoint.y, (float)globalPoint.z);
    std::vector<std::pair<int /*stepID*/, int /*KP_ID*/>>& localKps = globalKps[i].LocalKeypoints;
    if (localKps.size() >= 2) // Only keep points observed in at least two steps
    {
        Point3D pointData;
        pointData.SetPoint((float*)&globalPointF);
        // For this point, set all the measurements
        for (size_t j = 0; j < localKps.size(); j++)
        {
            int stepID = localKps[j].first;
            int kpID = localKps[j].second;
            int cameraID = stepsLUT[stepID];
            Step& step = steps[cameraID];
            cv::Point3d p3d = step.KeypointToLocal(kpID);
            Point2D measurement(p3d.x, p3d.y);
            measurements.push_back(measurement);
            camidx.push_back(cameraID);
            ptidx.push_back((int)point_data.size()); // Index of the point about to be pushed
        }
        point_data.push_back(pointData);
    }
}
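One thing I'm not sure about here: my understanding is that pba expects each Point2D measurement to be a 2D image observation in pixels, centered on the principal point (with the focal length also in pixels), rather than the x/y of a 3D camera-space point like my KeypointToLocal result. A sketch of the two equivalent ways I believe such a measurement is formed (cx, cy, f are hypothetical intrinsics, not pba names):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Form a centered measurement from a raw pixel keypoint:
// subtract the principal point (cx, cy).
std::pair<float, float> pixelToMeasurement(float u, float v, float cx, float cy) {
    return {u - cx, v - cy};
}

// Equivalently, project a 3D point in camera coordinates (x, y, z)
// with focal length f in pixels; this yields the same centered convention.
std::pair<float, float> cameraPointToMeasurement(float x, float y, float z, float f) {
    return {f * x / z, f * y / z};
}
```

If that convention is right, passing p3d.x / p3d.y directly (without dividing by depth and scaling by f) would give PBA inconsistent projections even when the initial error looks small.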
Then I run BA:
ParallelBA pba(ParallelBA::PBA_CPU_FLOAT);
pba.SetFixedIntrinsics(true); // Same camera with known intrinsics
pba.SetCameraData(camera_data.size(), &camera_data[0]); // Set camera parameters
pba.SetPointData(point_data.size(), &point_data[0]);    // Set 3D point data
pba.SetProjection(measurements.size(), &measurements[0], &ptidx[0], &camidx[0]); // Set the projections
pba.SetNextBundleMode(ParallelBA::BUNDLE_ONLY_MOTION);
pba.RunBundleAdjustment(); // Run bundle adjustment; camera_data/point_data are updated in place
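To sanity-check what the reported error actually measures, I also compute the mean squared reprojection error myself on plain structs mirroring my camera/point/measurement arrays (this is my own sketch of the standard pinhole residual under the centered-measurement assumption above, not pba code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Cam { float R[9]; float t[3]; float f; }; // world-to-camera, row-major R
struct Pt3 { float x, y, z; };
struct Obs { float x, y; int cam, pt; };         // centered pixel measurement

// Mean squared reprojection error over all observations:
// project each point into its camera and compare against the measurement.
float meanSquaredError(const std::vector<Cam>& cams,
                       const std::vector<Pt3>& pts,
                       const std::vector<Obs>& obs) {
    double sum = 0.0;
    for (const Obs& o : obs) {
        const Cam& c = cams[o.cam];
        const Pt3& p = pts[o.pt];
        float xc = c.R[0]*p.x + c.R[1]*p.y + c.R[2]*p.z + c.t[0];
        float yc = c.R[3]*p.x + c.R[4]*p.y + c.R[5]*p.z + c.t[1];
        float zc = c.R[6]*p.x + c.R[7]*p.y + c.R[8]*p.z + c.t[2];
        float dx = c.f * xc / zc - o.x;
        float dy = c.f * yc / zc - o.y;
        sum += dx * dx + dy * dy;
    }
    return obs.empty() ? 0.f : (float)(sum / obs.size());
}
```

If this manual error disagrees badly with what PBA reports before optimization, my measurement or pose convention is off.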
Then comes the part where I'm facing problems: extracting the data back from PBA:
for (int i = 1 /*First camera is stationary*/; i < camera_data.size(); i++)
{
    Step& step = steps[i];
    CameraT& camera = camera_data[i];
    int type = CV_32F;
    cv::Mat t(3, 1, type);
    cv::Mat R(3, 3, type);
    cv::Mat T_L2G = cv::Mat::eye(4, 4, type);
    cv::Mat T_G2L = cv::Mat::eye(4, 4, type);
    camera.GetTranslation((float*)t.data);
    camera.GetMatrixRotation((float*)R.data);
    t.copyTo(T_G2L(TranslationRect)); // TranslationRect == cv::Rect(3, 0, 1, 3), as above
    R.copyTo(T_G2L(RotationRect));    // RotationRect == cv::Rect(0, 0, 3, 3), as above
    cv::invert(T_G2L, T_L2G);
    step.SetTransformation(T_L2G); // Step expects a local-to-global transformation
}
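For completeness, by "dumping the keypoints using the newly found transformations" I mean applying each step's refined T_L2G to its local keypoints to place them in the global cloud. A minimal sketch of that application (row-major 4x4, homogeneous w = 1; the helper is mine):

```cpp
#include <array>
#include <cassert>

// Apply a row-major 4x4 rigid transform to a 3D point (homogeneous w = 1),
// i.e. map a local keypoint into the global cloud with T_L2G.
std::array<float, 3> applyTransform(const std::array<float, 16>& T,
                                    const std::array<float, 3>& p) {
    std::array<float, 3> out;
    for (int r = 0; r < 3; r++)
        out[r] = T[r*4+0]*p[0] + T[r*4+1]*p[1] + T[r*4+2]*p[2] + T[r*4+3];
    return out;
}
```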
Everything runs the way I expect it to. PBA reports a relatively small initial error (I'm currently testing with a small number of pair-wise registered images, so the error shouldn't be too large), and after the run it reports a smaller one. It converges quickly, usually in fewer than 3 iterations.
However, when I dump the keypoints using the newly found transformations, the clouds seem to have moved further apart from each other.
(I've also tried swapping T_G2L and T_L2G to "bring them closer". That doesn't work.)
I'm wondering if there's something I'm missing in how I'm using PBA.