
So I need help with OpenCV in C++

Basically I have a camera that has some radial distortion and I am able to undistort it using the provided examples/samples in OpenCV.

But currently I have to recalibrate the camera each time the program is run. The example generates an XML file for a reason, right? To make use of those values later...

My problem is I'm not sure which values to take from the XML file, and how to use them, to undistort the camera image without going through the entire calibration again.

I tried finding examples of this online, but for some reason nothing related to my problem came up...

Supposedly we should be able to take the values from the output XML file and use them directly in the program so that we don't have to recalibrate the camera each time.

But currently recalibrating every run is exactly what my program is doing :/

I really hope someone can help me with this

Thanks a lot :)

yeowjoon99

2 Answers


First, create the camera matrix from the camera_matrix values in the XML file (the snippets below use OpenCV's Java bindings; the C++ API is analogous).

    // 3x3 intrinsic matrix: [fx 0 cx; 0 fy cy; 0 0 1]
    Mat cameraMatrix = new Mat(new Size(3,3), CvType.CV_64FC1);
    cameraMatrix.put(0,0,3275.907);   // fx
    cameraMatrix.put(0,1,0);
    cameraMatrix.put(0,2,2069.153);   // cx
    cameraMatrix.put(1,0,0);
    cameraMatrix.put(1,1,3270.752);   // fy
    cameraMatrix.put(1,2,1139.271);   // cy
    cameraMatrix.put(2,0,0);
    cameraMatrix.put(2,1,0);
    cameraMatrix.put(2,2,1);

Second, create the distortion matrix from the distortion_coefficients values in the XML file.

    // Distortion coefficients (k1, k2, p1, p2)
    Mat distortionMatrix = new Mat(new Size(4,1), CvType.CV_64FC1);
    distortionMatrix.put(0,0,-0.006934);
    distortionMatrix.put(0,1,-0.047680);
    distortionMatrix.put(0,2,0.002173);
    distortionMatrix.put(0,3,0.002580);

Finally, build the undistortion maps with OpenCV's initUndistortRectifyMap.

    Mat map1 = new Mat();
    Mat map2 = new Mat();
    Mat temp = new Mat();   // empty rectification matrix (no rectification)
    Imgproc.initUndistortRectifyMap(cameraMatrix, distortionMatrix, temp, cameraMatrix, src.size(), CvType.CV_32FC1, map1, map2);

This gives you two matrices, map1 and map2, which are used for undistortion. Once you have these two matrices, you don't have to recalibrate every time. Just use remap and the undistortion is done.

    Imgproc.remap(mat, undistortPicture, map1, map2, Imgproc.INTER_LINEAR);


seokrae.kim
  • Yeah, that's a great way of using the values, but it seems limited in the sense that it's only realistic if the number of pictures used to calibrate is small, right? Because I'm quite new to this and I'm not sure if more pictures = better undistortion. Currently I'm trying to use 300 images (via a live feed from the camera). If using more pictures _doesn't_ yield more accurate undistortion, do let me know, as using 300 images takes a long time to calibrate (and also introduces a frame rate drop, since each frame uses **many** matrix points) – yeowjoon99 Apr 25 '19 at 01:10
  • Oh yeah, forgot to mention: in my (incomplete) answer, I used the function cv::undistort instead of initUndistortRectifyMap > remap, because cv::undistort would crop the image for me instead of leaving the weird stretched image that appeared when using initUndistortRectifyMap followed by remap – yeowjoon99 Apr 25 '19 at 03:36

Alright, so I was able to extract the four things I think are necessary from the output XML file. Essentially I made a new class named CalibSet and extracted the data from the XML file via the `tfs["..."] >> xxx;` calls at the bottom of the code.

class CalibSet
{
public:
    Size Boardsize;              // The size of the board -> Number of items by width and height
    Size image;                 // image size
    String calibtime;
    Mat CamMat;                 // camera matrix
    Mat DistCoeff;              // distortion coefficient
    Mat PViewReprojErr;         // per view reprojection error
    float SqSize;            // The size of a square in your defined unit (point, millimeter,etc).
    float avg_reproj_error;
    int NrFrames;                // The number of frames to use from the input for calibration
    int Flags;
    bool imagePoints;            // Write detected feature points
    bool ExtrinsicParams;        // Write extrinsic parameters
    bool GridPoints;              // Write refined 3D target grid points
    bool fisheyemodel;             // use fisheye camera model for calibration

    void write(FileStorage& fs) const                        //Write serialization for this class
    {
        fs << "{"
            <<"nr_of_frames" << NrFrames
            <<"image_width" << image.width
            <<"image_height" << image.height
            <<"board_width" << Boardsize.width
            <<"board_height" << Boardsize.height
            <<"square_size" << SqSize
            <<"flags" << Flags
            <<"fisheye_model" << fisheyemodel
            <<"camera_matrix" << CamMat
            <<"distortion_coefficients" << DistCoeff
            <<"avg_reprojection_error" << avg_reproj_error
            <<"per_view_reprojection_errors" << PViewReprojErr
            <<"extrinsic_parameters" << ExtrinsicParams
            << "}";
    }

    void read(const FileNode& node)                          //Read serialization for this class
    {
        node["calibration_time"] >> calibtime;
        node["nr_of_frames"] >> NrFrames;
        node["image_width"] >> image.width;
        node["image_height"] >> image.height;
        node["board_width"] >> Boardsize.width;
        node["board_height"] >> Boardsize.height;
        node["square_size"] >> SqSize;
        node["flags"] >> Flags;
        node["fisheye_model"] >> fisheyemodel;
        node["camera_matrix"] >> CamMat;
        node["distortion_coefficients"] >> DistCoeff;
        node["avg_reprojection_error"] >> avg_reproj_error;
        node["per_view_reprojection_errors"] >> PViewReprojErr;
        node["extrinsic_parameters"] >> ExtrinsicParams;
    }
};

CalibSet CS;
FileStorage tfs(inputCalibFile, FileStorage::READ);     // Read the settings
if (!tfs.isOpened())
{
    cout << "Could not open the calibration file: \"" << inputCalibFile << "\"" << endl;
    return -1;
}
tfs["camera_matrix"] >> CS.CamMat;
tfs["distortion_coefficients"] >> CS.DistCoeff;
tfs["image_width"] >> CS.image.width;
tfs["image_height"] >> CS.image.height;
tfs.release();                                         // close Settings file

After this I use the function `undistort` to correct the live camera frames stored in `frame`, putting the corrected image in `rframe`.

flip(frame, frame, -1);     // flipCode < 0 flips around both axes, so the image isn't upside down
cv::undistort(frame, rframe, CS.CamMat, CS.DistCoeff);
flip(rframe, rframe, +1);   // flipCode > 0 flips horizontally

It's important that the orientation of the photos taken for calibration is exactly the same as the orientation used later on (including any vertical or horizontal mirroring), or the image will still be distorted after calling `undistort`.

After this I get an undistorted image as intended, BUT the frame rate is extremely low (around 10-20 FPS), and I'd appreciate any help in optimising the process to allow a higher frame rate from the live camera feed.

yeowjoon99