
Given a set of 3D points in the camera frame corresponding to a planar surface (the ground), is there a fast, efficient method to find the orientation of the plane relative to the camera's image plane? Or is it only possible by running heavier "surface matching" algorithms on the point cloud?

I've tried to use estimateAffine3D and findHomography, but my main limitation is that I don't have the point coordinates on the surface plane - I can only select a set of points from the depth images and thus must work from a set of 3D points in the camera frame.

I've written a simple geometric approach that takes a couple of points and computes vertical and horizontal angles from the depth measurements, but I fear this is neither very robust nor very precise.
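For illustration, such a geometric approach might look like the sketch below (the function name and the camera convention of x right, y down, z forward are assumptions on my part): it estimates the tilt about the camera's horizontal axis from two points sampled along the vertical image direction; the same idea with two horizontally separated points gives the other angle.

```cpp
#include <cmath>

// Hypothetical sketch of the naive approach: estimate the plane's tilt
// about the camera's horizontal axis from two ground points sampled
// along the vertical image direction (camera frame: x right, y down,
// z forward/depth). For a fronto-parallel plane dz == 0, so the tilt is 0.
double verticalTiltFromPointPair(double y1, double z1, double y2, double z2) {
    return std::atan2(z2 - z1, y2 - y1);
}
```

This only uses two samples per angle, which is exactly why it is fragile: any depth noise on either point goes straight into the estimate, whereas a plane fit averages over all selected points.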

EDIT: Following the suggestion by @Micka, I've attempted to fit a plane to the points in the camera frame, with the following function:

#include <iostream>

#include <opencv2/opencv.hpp>

//------------------------------------------------------------------------------
/// @brief      Fits a plane to a set of 3D points, by solving a linear system of the form aX + bY + cZ + d = 0
///
/// @param[in]  points             The points
///
/// @return     3x1 Mat with plane equation coefficients [a, b, d] (c is fixed at -1)
///
cv::Mat fitPlane(const std::vector< cv::Point3d >& points) {
    // plane equation: aX + bY + cZ + d = 0
    // assuming c=-1 ->  aX + bY + d = z

    const int n_points = static_cast< int >(points.size());
    cv::Mat xys = cv::Mat::ones(n_points, 3, CV_64FC1);
    cv::Mat zs(n_points, 1, CV_64FC1);

    // populate left and right hand matrices
    for (int idx = 0; idx < n_points; idx++) {
        xys.at< double >(idx, 0) = points[idx].x;
        xys.at< double >(idx, 1) = points[idx].y;
        zs.at< double >(idx, 0)  = points[idx].z;
    }

    // coeff mat
    cv::Mat coeff(3, 1, CV_64FC1);

    // problem is now xys * coeff = zs
    // solving using SVD should output coeff
    cv::SVD svd(xys);
    svd.backSubst(zs, coeff);

    // alternative approach -> requires mat with 3D coordinates & additional col
    // solves xyzs * coeff = 0
    // cv::SVD::solveZ(xyzs, coeff);  // @note: data type must be double (CV_64FC1)

    // sanity check against input coordinates (residuals aX + bY + d - Z should be zero or very small)
    double a = coeff.at< double >(0);
    double b = coeff.at< double >(1);
    double d = coeff.at< double >(2);
    for (auto& point : points) {
        std::cout << a * point.x + b * point.y + d - point.z << std::endl;
    }

    return coeff;

}

For simplicity, it is assumed that the camera is properly calibrated and that the 3D reconstruction is correct - something I have already validated, and therefore out of the scope of this question. I use the mouse to select points on a depth/color frame pair, reconstruct the 3D coordinates, and pass them into the function above.

I've also tried other approaches beyond cv::SVD::solveZ(), such as inverting the system with cv::invert() and with cv::solve(), but they always ended in either ridiculously small values or runtime errors regarding matrix size and/or type.
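For reference, one way to turn fitted coefficients into an orientation is to rotate the plane normal onto the camera's optical axis with Rodrigues' formula. The sketch below assumes the [a, b, d] parametrization from the fitPlane function above (the helper name rotationFromPlaneNormal is hypothetical); plain C++ is used so the snippet stands alone, but cv::Rodrigues() produces the same rotation when given axis * angle.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array< double, 3 >;
using Mat3 = std::array< std::array< double, 3 >, 3 >;

// Hypothetical helper: build the rotation aligning the fitted plane's
// normal with the camera's optical (z) axis. With coefficients [a, b, d]
// from aX + bY + d = Z (i.e. c = -1), the plane normal in the camera
// frame is (a, b, -1); it is flipped here so it points along +z.
Mat3 rotationFromPlaneNormal(double a, double b) {
    // unit normal, flipped towards the optical axis
    Vec3 n{ -a, -b, 1.0 };
    const double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    for (auto& v : n) v /= len;

    // rotation axis k = n x z (with z = (0, 0, 1)) and angle between n and z
    Vec3 k{ n[1], -n[0], 0.0 };
    const double s = std::sqrt(k[0] * k[0] + k[1] * k[1]);  // sin(theta)
    const double c = n[2];                                  // cos(theta)

    Mat3 R{ { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } } };
    if (s < 1e-12) return R;  // plane already fronto-parallel
    for (auto& v : k) v /= s;

    // Rodrigues' formula: R = I + sin(t) * K + (1 - cos(t)) * K^2
    const Mat3 K{ { { 0, -k[2], k[1] }, { k[2], 0, -k[0] }, { -k[1], k[0], 0 } } };
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            double K2 = 0.0;
            for (int m = 0; m < 3; ++m) K2 += K[i][m] * K[m][j];
            R[i][j] += s * K[i][j] + (1.0 - c) * K2;
        }
    }
    return R;
}
```

This gives the plane's orientation only; translation is not recoverable from the normal alone (any point on the plane, e.g. (0, 0, d) from the fit, can serve as the origin of the plane frame).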

joaocandre
  • so you know plane points in your depth image? Fitting a plane to those should be simple? – Micka Dec 14 '20 at 19:50
  • OpenCV's calib3d module has several "decompose" methods (decomposeHomographyMat). those might work, but the problem might be underconstrained. – Christoph Rackwitz Dec 14 '20 at 21:05
  • @Micka while I could try to find the plane equation coefficients, it's not entirely clear how I would get the transformation matrix from them. – joaocandre Dec 14 '20 at 23:42
  • @ChristophRackwitz the homography matrix is (I assume) what I am looking for. – joaocandre Dec 14 '20 at 23:44
  • once you have the plane equation you can generate any number of points on that plane and use the solvePnP function from OpenCV to get the pose of the plane, or use findHomography or getPerspectiveTransform. You could use those functions directly if you had well-known "marker points" on the plane. – Micka Dec 15 '20 at 05:56
  • @Micka I have been looking at OpenCV's solvers ever since I posted the question, and while I have been able to get coefficient estimates, they do not seem to be particularly accurate (plugging the estimated coefficients back into the input coordinates gives inconsistent results) - could you perhaps write an answer with that approach? – joaocandre Dec 15 '20 at 17:04
  • I'll edit the question anyway to provide more details. – joaocandre Dec 15 '20 at 17:11
  • 1
    please show samples of how you used solvePnP and give information about how you computed the camera calibration, etc. – Micka Dec 15 '20 at 17:18
  • What do you mean by `don't have the point coordinates on the surface plane` ? and if you do not have the coordinates, how do you `work from a set of 3D points in the camera frame` ? – Pe Dro Dec 15 '20 at 17:55
  • @PeDro I meant that I only have 3D coordinates on the camera frame, not in any external reference, as I would need to pass to OpenCV's transform/homography estimation functions. – joaocandre Dec 15 '20 at 18:12
  • The answer [here](https://stackoverflow.com/a/23897549/9625777) shares how to get "real-world coordinates" May help – Pe Dro Dec 15 '20 at 18:15
  • I am trying to look for a solution similar to yours. Given the (gray-scale) depth image, Are you able to get the orientation/pose of the camera in **any** reference frame ? Do let us know of updates. – Pe Dro Dec 15 '20 at 18:22
  • @PeDro I do not, otherwise I would use any of `estimateAffine3D`, `findHomography` or `solvePnP` to calculate the transformation matrix. – joaocandre Dec 16 '20 at 00:22
  • I have updated the question with some additional information. At this stage I don't exactly understand how to get the transformation matrix from the plane coefficients. – joaocandre Dec 16 '20 at 01:04

0 Answers