
I would like to have the coordinates of the corners of a rectangle object from a greyscale image with some noise.

I start with this image: https://i.stack.imgur.com/cLADI.jpg. The central region has a checkered rectangle with different grey intensities. What I want are the coordinates of the rectangle shown in green: https://i.stack.imgur.com/YWZhc.jpg.

With the code below:

import cv2
import numpy as np
from matplotlib import cm
from matplotlib.pyplot import imshow

im = cv2.imread("opencv_frame_0.tif", 0)             # read as greyscale
data = np.array(im)
edg = cv2.Canny(data, 120, 255)                      # edge map (not used below)
ret, thresh = cv2.threshold(data, 140, 255, cv2.THRESH_BINARY_INV)
imshow(thresh, interpolation='none', cmap=cm.gray)

I am able to get https://i.stack.imgur.com/3vpPT.jpg, which looks quite good, but I don't know how to efficiently get the corner coordinates of the central white frame. I will have other images like this later where the central grey rectangle can be of a different size, so I want the code to be general enough to work for those cases too.

I tried the examples from other questions, such as "OpenCV - How to find rectangle contour of a rectangle with round corner?" and "OpenCV/Python: cv2.minAreaRect won't return a rotated rectangle". The latter gives me https://i.stack.imgur.com/jcmA5.jpg at its best settings.

Any help is appreciated! Thanks.

Amit Solanki
  • Does the rectangle always look more or less oriented like the example provided, or will it appear at 45-degree angles? What's the maximum tolerance to rotation you want to have? – DanyAlejandro Jun 20 '18 at 22:10
  • It might be +/- 5 degrees inclined at most. In that case, too, I just want the corner coordinates, as before. – Amit Solanki Jun 21 '18 at 14:07

2 Answers


If you are looking for Python code that does quite a bit more, you can find it in this repo:

https://github.com/DevashishPrasad/Angle-Distance

To solve your problem, this code might be helpful:

# import the necessary packages
from imutils import perspective
from imutils import contours
import numpy as np
import imutils
import cv2

# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread("test.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)

# perform edge detection, then perform a dilation + erosion to
# close gaps in between object edges
edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)

# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)   # handles the differing return values of OpenCV 2/3/4

# loop over the contours individually
for c in cnts:
    # ignore small contours (noise) that are not big enough to be the rectangle
    if cv2.contourArea(c) < 1000:
        continue

    # compute the rotated bounding box of the contour
    box = cv2.minAreaRect(c)
    box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
    box = np.array(box, dtype="int")

    # order the points in the contour such that they appear
    # in top-left, top-right, bottom-right, and bottom-left
    # order, then draw the outline of the rotated bounding
    # box
    box = perspective.order_points(box)
    # draw the contours on the image
    orig = image.copy()
    cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 5)

    # loop over the original points
    for (xA, yA) in list(box):
        # draw circles corresponding to the current points and
        cv2.circle(orig, (int(xA), int(yA)), 9, (0,0,255), -1)
        cv2.putText(orig, "({},{})".format(xA, yA), (int(xA - 50), int(yA - 10) - 20),
            cv2.FONT_HERSHEY_SIMPLEX, 1.8, (255,0,0), 5)

    # show the output image, resize it as per your requirements
    cv2.imshow("Image", cv2.resize(orig, (800, 600)))

    cv2.waitKey(0)

The comments explain it all.

Output - 4 corners and their coordinates

Devashish Prasad
  • Thanks for sharing the GitHub link, +1. Amazing work!! – Jeru Luke Jun 21 '18 at 13:17
  • Thanks Devashish, your script works well and does exactly what I wanted. In the case where I have https://imgur.com/VCW5drq, I get two images. How do I change the program so that it only selects the largest rectangle? Also, in the case of tilt, I would like to extract the rectangle region and show the rotated image without tilt. Thanks! – Amit Solanki Jun 21 '18 at 19:53
  • You're welcome @AmitSolanki and @JeruLuke. Regarding your first problem, you need to alter the area condition `if cv2.contourArea(c) < 1000` in the code above. It checks the area of the contours (rectangles); if the area is below 1000 (which means the contour is small), it skips that contour. So change 1000 to 2000 or more as needed. – Devashish Prasad Jun 22 '18 at 08:23
  • Now, about the second problem: you first need to find the angle of rotation of the rectangle. You can use the code from my GitHub repository, then use this code to rotate the image [https://codeshare.io/5XmJe8] (this link will expire in 24 hrs). Then, to extract your rectangle, find contours again and use `x,y,w,h = cv2.boundingRect(c)`, then use these values to extract the region of interest: `roi = img[y:y+h, x:x+w]`. This roi is your extracted rectangle. – Devashish Prasad Jun 22 '18 at 08:26
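
A minimal Python sketch combining the two suggestions in the comments above, under a few assumptions that are not part of the original code: the same pre-processing and placeholder file name ("test.png") as in the answer, keeping only the largest contour instead of using a fixed area threshold, and warping the rotated box upright with a perspective transform rather than rotating the whole image first, which sidesteps OpenCV's minAreaRect angle conventions:

import cv2
import imutils
import numpy as np
from imutils import perspective

# same pre-processing as in the answer above ("test.png" is only a placeholder name)
image = cv2.imread("test.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)

# keep only the largest contour instead of filtering by a fixed area threshold
cnts = imutils.grab_contours(cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE))
c = max(cnts, key=cv2.contourArea)

# rotated bounding box of the largest contour, corners ordered tl, tr, br, bl
box = perspective.order_points(cv2.boxPoints(cv2.minAreaRect(c)))
(tl, tr, br, bl) = box

# warp the rotated box to an upright rectangle; this removes the tilt and
# extracts the region of interest in one step
width = int(max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl)))
height = int(max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr)))
dst = np.array([[0, 0], [width - 1, 0],
                [width - 1, height - 1], [0, height - 1]], dtype="float32")
M = cv2.getPerspectiveTransform(box.astype("float32"), dst)
roi = cv2.warpPerspective(image, M, (width, height))

cv2.imshow("Extracted rectangle", roi)
cv2.waitKey(0)

The "Extracted rectangle" window then shows only the largest rectangle, with its tilt removed.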

From pre-processing, I get the following output:

From here you can easily find the 4 corners any way you like (using things like Harris corners, vectorizing the image and taking a geometrical approach, your own corner-detection algorithm, etc.). It really depends on your own needs.

Here's my code. All I'm doing is:

1. Blur
2. Threshold
3. Find connected components
4. Find the biggest one and separate it
5. Find the contour

Please modify as needed and take this only as a reference (the code is in C++, but OpenCV's API is essentially the same in C++ and Python, and the examples you provide show that you know what you're doing):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <mutex>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    Mat img2, img = imread("pic.png");
    cvtColor(img, img, cv::COLOR_BGR2GRAY);
    blur(img, img, Size(7, 7));

    threshold(img, img2, 0, 255, THRESH_OTSU | THRESH_BINARY_INV);

    Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(img2, labels, stats, centroids, 8, CV_16U);
    ushort labelBig = 0;
    int area, maxArea = 0;  // CC_STAT_AREA values are stored as int
    // find the connected component with the largest area (label 0 is the background)
    for (int i = 1; i < n; i++) {
        area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area > maxArea) {
            maxArea = area;
            labelBig = i;
        }
    }

    // binary mask containing only the largest component
    Mat img3 = Mat(img2.rows, img2.cols, CV_8U, Scalar(0));

    std::mutex mtx;
    labels.forEach<ushort>([&img3, labelBig, &mtx](ushort &label, const int pos[]) -> void {
        if (label == labelBig) {
            lock_guard<mutex> guard(mtx);
            img3.at<uchar>(pos) = 255;
        }
    });
    Mat img4;
    Canny(img3, img4, 50, 100, 3);
    imshow("Frame", img4);
    waitKey();
    return 0;
}

Notice that I'm using Otsu thresholding, which gives it a bit of robustness, and that I'm also inverting your image; after that, the biggest white area is what I consider your rectangle.
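
If you would rather do the corner-extraction step in Python, here is a minimal sketch, assuming the Canny output above has been written to a file (the name "edges.png" is a placeholder) and assuming the OpenCV 4 findContours return signature:

import cv2

# edge image produced by the pipeline above ("edges.png" is only a placeholder)
edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)

# outer contour of the rectangle outline
cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(cnts, key=cv2.contourArea)

# approximate the contour with a tolerance of 2% of its perimeter; for a clean
# rectangular outline this usually leaves exactly the four corner points
peri = cv2.arcLength(c, True)
corners = cv2.approxPolyDP(c, 0.02 * peri, True)
print(corners.reshape(-1, 2))

If more than four points remain because the outline is ragged, increasing the tolerance (e.g. 0.05 * peri) usually collapses them to the corners.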

DanyAlejandro
  • Thanks Dany, your workflow was helpful for understanding how to go about feature identification, although the Python solution was easier to apply. – Amit Solanki Jun 21 '18 at 20:03