
From a camera with OpenCV I can get a red cross (see the picture below), but I do not know the best method to calculate the cross center coordinates (x, y). We can assume that the laser is red.

(image: the red laser cross as captured by the camera)

Probably I will have to use some kind of object recognition, but I need to calculate its center, and performance is important.

Can anyone help?

I have found how to locate a laser pointer (red dot coordinates) by searching for the most-red pixel in the picture, but in this case the center is not always the most red: the whole line is red, and sometimes OpenCV decides another point on the line is more red than the center.

Bart

3 Answers

5

Here is how I did it using the goodFeaturesToTrack function:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;


int main(int argc, char* argv[])
{
    Mat laserCross = imread("laser_cross.png");

    vector<Mat> laserChannels;
    split(laserCross, laserChannels);

    vector<Point2f> corners;
    // only using the red channel since it contains the interesting bits...
    goodFeaturesToTrack(laserChannels[2], corners, 1, 0.01, 10, Mat(), 3, false, 0.04);

    circle(laserCross, corners[0], 3, Scalar(0, 255, 0), -1, 8, 0);

    imshow("laser red", laserChannels[2]);
    imshow("corner", laserCross);
    waitKey();

    return 0;
}

This results in the following output:
(image: the input with the detected cross center marked by a green dot)

You could also look at using cornerSubPix to refine the result to sub-pixel accuracy.
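
For instance, a minimal sketch of that refinement (reusing laserChannels and corners from the snippet above; the window size and termination criteria are arbitrary choices of mine, not from the original answer):

cornerSubPix(laserChannels[2], corners, Size(5, 5), Size(-1, -1),
             TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 40, 0.001));
// corners[0] now holds the refined, sub-pixel estimate of the cross center
cout << "refined center: " << corners[0].x << ", " << corners[0].y << endl;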

EDIT: I was curious about implementing vasile's answer, so I sat down and tried it out. It works quite well! Here is my implementation of what he described. For segmentation, I decided to use Otsu's method for automatic threshold selection. This will work well as long as there is high separation between the laser cross and the background; otherwise you might want to switch to an edge detector like Canny. I did have to deal with some angle ambiguities for the vertical lines (i.e., 0 and 180 degrees), but the code seems to work (there may be a better way of handling those ambiguities).

Anyway, here is the code:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

Point2f computeIntersect(Vec2f line1, Vec2f line2);
vector<Point2f> lineToPointPair(Vec2f line);
bool acceptLinePair(Vec2f line1, Vec2f line2, float minTheta);

int main(int argc, char* argv[])
{
    Mat laserCross = imread("laser_cross.png");

    vector<Mat> laserChannels;
    split(laserCross, laserChannels);

    namedWindow("otsu", CV_WINDOW_NORMAL);
    namedWindow("intersect", CV_WINDOW_NORMAL);

    Mat otsu;
    threshold(laserChannels[2], otsu, 0.0, 255.0, THRESH_OTSU);
    imshow("otsu", otsu);

    vector<Vec2f> lines;
    HoughLines( otsu, lines, 1, CV_PI/180, 70, 0, 0 );

    // compute the intersection from the lines detected...
    int lineCount = 0;
    Point2f intersect(0, 0);
    for( size_t i = 0; i < lines.size(); i++ )
    {
        for(size_t j = 0; j < lines.size(); j++)
        {
            Vec2f line1 = lines[i];
            Vec2f line2 = lines[j];
            if(acceptLinePair(line1, line2, CV_PI / 4))
            {
                intersect += computeIntersect(line1, line2);
                lineCount++;
            }
        }

    }

    if(lineCount > 0)
    {
        intersect.x /= (float)lineCount; intersect.y /= (float)lineCount;
        Mat laserIntersect = laserCross.clone();
        circle(laserIntersect, intersect, 1, Scalar(0, 255, 0), 3);
        imshow("intersect", laserIntersect);
    }

    waitKey();

    return 0;
}

bool acceptLinePair(Vec2f line1, Vec2f line2, float minTheta)
{
    float theta1 = line1[1], theta2 = line2[1];

    if(theta1 < minTheta)
    {
        theta1 += CV_PI; // dealing with 0 and 180 ambiguities...
    }

    if(theta2 < minTheta)
    {
        theta2 += CV_PI; // dealing with 0 and 180 ambiguities...
    }

    return abs(theta1 - theta2) > minTheta;
}

// the long nasty wikipedia line-intersection equation...bleh...
Point2f computeIntersect(Vec2f line1, Vec2f line2)
{
    vector<Point2f> p1 = lineToPointPair(line1);
    vector<Point2f> p2 = lineToPointPair(line2);

    float denom = (p1[0].x - p1[1].x)*(p2[0].y - p2[1].y) - (p1[0].y - p1[1].y)*(p2[0].x - p2[1].x);
    Point2f intersect(((p1[0].x*p1[1].y - p1[0].y*p1[1].x)*(p2[0].x - p2[1].x) -
                       (p1[0].x - p1[1].x)*(p2[0].x*p2[1].y - p2[0].y*p2[1].x)) / denom,
                      ((p1[0].x*p1[1].y - p1[0].y*p1[1].x)*(p2[0].y - p2[1].y) -
                       (p1[0].y - p1[1].y)*(p2[0].x*p2[1].y - p2[0].y*p2[1].x)) / denom);

    return intersect;
}

vector<Point2f> lineToPointPair(Vec2f line)
{
    vector<Point2f> points;

    float r = line[0], t = line[1];
    double cos_t = cos(t), sin_t = sin(t);
    double x0 = r*cos_t, y0 = r*sin_t;
    double alpha = 1000;

    points.push_back(Point2f(x0 + alpha*(-sin_t), y0 + alpha*cos_t));
    points.push_back(Point2f(x0 - alpha*(-sin_t), y0 - alpha*cos_t));

    return points;
}

Hope that helps!

mevatron
  • The poster did say performance is important - goodFeaturesToTrack is Harris corner detection = very slow. – Martin Beckett Nov 20 '11 at 20:39
  • For this image, it took approximately 740 microseconds on a Core 2 Duo 2.66 GHz machine. Granted, a small image, but that's fast enough to keep up with any camera I've seen :) – mevatron Nov 20 '11 at 20:50
  • I generated an 800x600 version in GIMP, and it took about 35 ms. Not great, but still close to camera real-time. – mevatron Nov 20 '11 at 21:00
  • OpenCV must have improved! I thought it did a Hough transform first to find corners (?), which gets slow with large images. Of course it depends on what you mean by performance: if this has to run at 60 fps on an embedded microprocessor, then it's best to start with the simple, dumb approach. – Martin Beckett Nov 20 '11 at 21:00
  • Yeah definitely! Your approach is far better for an embedded build! I also did tell `goodFeaturesToTrack` to use `cornerMinEigenVal` instead of `cornerHarris`...maybe that is why it's faster. I'll go try it out. – mevatron Nov 20 '11 at 21:07
  • Speed aside, I don't think corner detection is the best approach; it would fail if there is a tiny gap or noise where the lines cross. I think Hough, then searching for two lines with slopes crossing at near 90 degrees, would be a more robust solution. – Martin Beckett Nov 20 '11 at 21:18
  • Thanks very much for the code snippet, it is very useful. Everything works fine in a dark environment. I will be using it for distance calculation with a web cam. You two are cool guys (mevatron and vasile), thanks for the HELP! – Audrius Gailius Nov 23 '11 at 06:05
  • No problem! Glad we could help! :) – mevatron Nov 23 '11 at 14:13
3

Scan across some row of the image, e.g. 1/4 of the way down, looking for the center of the red pixels. Then repeat for a row near the bottom, e.g. 3/4 of the way down. This gives you two points on the vertical bar.

Now repeat for two columns near the edges of the image, e.g. 1/4 and 3/4 of the way across; this gives you two points on the horizontal bar.

A simple simultaneous equation gives you the crossing point.
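
A minimal sketch of that approach (my illustration, not Martin's code; the "red" test and the 1/4 and 3/4 scan positions are arbitrary choices, and rows or columns with no red pixels are not handled):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

// Centroid of "red enough" pixels along one row (or along one column when byColumn is true).
static float redCentroid(const Mat& bgr, int index, bool byColumn)
{
    double sum = 0, count = 0;
    int len = byColumn ? bgr.rows : bgr.cols;
    for (int i = 0; i < len; i++)
    {
        Vec3b px = byColumn ? bgr.at<Vec3b>(i, index) : bgr.at<Vec3b>(index, i);
        if (px[2] > 150 && px[2] > px[1] + 50 && px[2] > px[0] + 50) // crude "red" test
        {
            sum += i;
            count++;
        }
    }
    return count > 0 ? (float)(sum / count) : -1.0f;
}

Point2f crossCenter(const Mat& bgr)
{
    // Two points on the vertical bar: red centroid of a row 1/4 and 3/4 of the way down.
    Point2f a(redCentroid(bgr, bgr.rows / 4, false),     bgr.rows / 4.0f);
    Point2f b(redCentroid(bgr, 3 * bgr.rows / 4, false), 3.0f * bgr.rows / 4.0f);
    // Two points on the horizontal bar: red centroid of a column 1/4 and 3/4 of the way across.
    Point2f c(bgr.cols / 4.0f,        redCentroid(bgr, bgr.cols / 4, true));
    Point2f d(3.0f * bgr.cols / 4.0f, redCentroid(bgr, 3 * bgr.cols / 4, true));

    // Intersect line AB with line CD (the "simple simultaneous equation").
    float denom = (a.x - b.x) * (c.y - d.y) - (a.y - b.y) * (c.x - d.x);
    float x = ((a.x * b.y - a.y * b.x) * (c.x - d.x) - (a.x - b.x) * (c.x * d.y - c.y * d.x)) / denom;
    float y = ((a.x * b.y - a.y * b.x) * (c.y - d.y) - (a.y - b.y) * (c.x * d.y - c.y * d.x)) / denom;
    return Point2f(x, y);
}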

If this is a video sequence and you are really tight for time, you can take the point you found in the previous frame and search only a small window around it, assuming the cross hasn't moved much.
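
For example (a rough sketch; prev is the center from the previous frame, frame is the current frame, crossCenter is the hypothetical helper from the sketch above, and the 80x80 window is an arbitrary size):

Rect window(cvRound(prev.x) - 40, cvRound(prev.y) - 40, 80, 80);
window &= Rect(0, 0, frame.cols, frame.rows);         // clip the search window to the image
Point2f center = crossCenter(frame(window));          // scan only the small ROI
center += Point2f((float)window.x, (float)window.y);  // convert back to full-image coordinates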

P.S. If the lines aren't straight, or move to random angles between frames, or you need fraction-of-a-pixel accuracy, there are better techniques.

Martin Beckett
2

Hough Lines should help you there, and it is also good enough in more challenging situations.

So, you can

  • Filter with a Gaussian or median blur (optional).
  • Canny or segmentation. I recommend segmentation: it will give you many more lines, and the next steps will take longer, but the precision will be subpixel.
  • Hough lines (classical): cv::HoughLines(). It will return a number of lines described by rho and theta (there can be hundreds of them if you use segmentation).

  • For each pair of lines that do not belong to the same red bar (abs(theta1 - theta2) > minTheta), calculate the intersection. Some geometry is needed here; see the sketch after this list.

  • Average those intersections by x and y, or use some other statistic to obtain the final center point.
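
For the intersection step, here is a minimal sketch (my illustration, not part of the answer) of intersecting two Hough lines given in (rho, theta) form, using the fact that every point (x, y) on such a line satisfies x*cos(theta) + y*sin(theta) = rho:

#include <opencv2/core/core.hpp>
#include <cmath>

using namespace cv;
using namespace std;

// Solve the 2x2 linear system formed by the two line equations (Cramer's rule).
Point2f intersectPolarLines(Vec2f l1, Vec2f l2)
{
    float r1 = l1[0], t1 = l1[1];
    float r2 = l2[0], t2 = l2[1];
    double c1 = cos(t1), s1 = sin(t1);
    double c2 = cos(t2), s2 = sin(t2);
    double det = c1 * s2 - s1 * c2; // equals sin(t2 - t1); near zero for almost-parallel lines,
                                    // which the abs(theta1 - theta2) > minTheta check filters out
    return Point2f((float)((r1 * s2 - r2 * s1) / det),
                   (float)((r2 * c1 - r1 * c2) / det));
}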

Here is an example of usage you can start with. Make sure to change the preprocessor #if 0 to #if 1 so that you will use the classical transform.

Sam