2

According to this Python tutorial, there are two contour approximation methods for OpenCV's cv2.findContours function: cv2.CHAIN_APPROX_NONE and cv2.CHAIN_APPROX_SIMPLE. The first one stores all boundary points, and the second removes all redundant points.

import cv2

im = cv2.imread('simple.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
_, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

img = cv2.drawContours(im, contours, -1, (0, 255, 0), 3)


cv2.imshow('Output', img)
wk = cv2.waitKey(0) & 0xFF
if wk == 27:
    cv2.destroyAllWindows()

But this code outputs the same image no matter which method I use.

Here is the image after code execution: [Output image]

The green lines are the contours. As you can see, the rectangle is surrounded by green lines, but with cv2.CHAIN_APPROX_SIMPLE it should be defined by only 4 points, one per corner.

rayryeng
masan
  • Thanks for accepting my answer. Please note that I marked your question as a duplicate of a question asked a while ago, but that method originally was incorrect as it didn't display the actual points that you wanted. That has changed using the information from my answer and it is now correct. The duplicate is used as a beacon for other people to see that this problem has been solved in the past. – rayryeng Apr 03 '18 at 19:18
  • @masan https://docs.opencv.org/3.3.0/d3/dc0/group__imgproc__shape.html#gga4303f45752694956374734a03c54d5ffa5f2883048e654999209f88ba04c302f5 look at this. this might be helpful – Krishna Apr 05 '18 at 07:58
  • @krishna Thank you! I've been searching for it. – masan Apr 05 '18 at 08:04
  • @masan welcome. don't be afraid of downvotes :) – Krishna Apr 05 '18 at 08:06

2 Answers

2

I guess it's because you are using drawContours(), which literally draws the contours as connected lines. If you want to see the difference, i.e. just the points, you should plot the returned points instead of using drawContours().
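For example, a minimal sketch (assuming the same simple.jpg and threshold as in the question, and OpenCV 3.x's three-value findContours return) that only counts the returned points, so the difference shows up as numbers rather than as a drawing:

import cv2

im = cv2.imread('simple.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(imgray, 127, 255, 0)

# Count how many boundary points each approximation method keeps.
_, c_none, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
_, c_simple, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

print('CHAIN_APPROX_NONE  :', len(c_none[0]), 'points')   # every boundary pixel
print('CHAIN_APPROX_SIMPLE:', len(c_simple[0]), 'points')  # e.g. 4 for the rectangle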

Mehul Jain
  • So, although I don't see a difference, there is actually a huge difference in performance, it's just not visible using drawContours()? – masan Apr 03 '18 at 18:51
  • It would be great if you can show the output. And how are you plotting the points? – Mehul Jain Apr 03 '18 at 18:54
1

If you look at the tutorial, specifically towards the bottom of the text it reads:

Just draw a circle on all the coordinates in the contour array (drawn in blue color).

Mehul Jain is correct: if you use cv2.drawContours, it simply connects the points together when drawing the contour, so you will not visibly see a difference between the two approximation methods. What you need to do is draw circles instead.

Therefore, once you run cv2.findContours, you can use the contours output, which is a list of all contours found in the image. Because the shape is a simple square, there should be only one detected contour, so the list should be one element long. Also note that each contour is an (N, 1, 2) 3D NumPy array, so it is best to reshape it into a 2-column array before continuing.
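For instance, a quick sketch (assuming the thresh image from your code above and the OpenCV 3.x three-output findContours signature used in the rest of this answer) to inspect that shape:

_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print(contours[0].shape)            # (N, 1, 2), e.g. (4, 1, 2) for the square
pts = contours[0].reshape(-1, 2)    # now an (N, 2) array of (x, y) points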

Next, you can use cv2.circle to take each point and draw an individual circle instead of the connected contour. Only then will you actually see a difference.

Taking your code above and modifying it so that we run both methods, we should do this instead:

### Your code
import numpy as np
import cv2

im = cv2.imread('simple.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)

### Step #1
# Detect contours using both methods on the same image
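# Note: the three-value unpacking below matches OpenCV 3.x, where findContours
# returns (image, contours, hierarchy); in OpenCV 4.x it returns only
# (contours, hierarchy), so the leading underscore would be dropped there.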
_, contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
_, contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

### Step #2 - Reshape to 2D matrices
contours1 = contours1[0].reshape(-1,2)
contours2 = contours2[0].reshape(-1,2)

### Step #3 - Draw the points as individual circles in the image
img1 = im.copy()
img2 = im.copy()

for (x, y) in contours1:
    cv2.circle(img1, (x, y), 1, (255, 0, 0), 3)

for (x, y) in contours2:
    cv2.circle(img2, (x, y), 1, (255, 0, 0), 3)

### Step #4 - Stack the two images side by side and show it
out = np.hstack([img1, img2])
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()

Let's walk through the code slowly. The first part is the code you provided; now we move on to what is new.

Step #1 - Detect contours using both methods

Using the thresholded image, we detect contours using both the full and simple approximations. This gets stored in two lists, contours1 and contours2.

Step #2 - Reshape to 2D matrices

The contours themselves are stored as a list of NumPy arrays. For the simple image provided, there should only be one contour detected, so extract the first element of the list, then use numpy.reshape to reshape the (N, 1, 2) 3D array into a 2D form where each row is an (x, y) point.

Step #3 - Draw the points as individual circles in the image

The next step is to take each (x, y) point from each set of contours and draw it on the image. We make two copies of the original colour image, then use cv2.circle while iterating through the (x, y) points of both sets of contours, populating two different images, one per approximation method.

Step #4 - Stack the two images side by side and show it

We finally use numpy.hstack to stack the two images horizontally and show the result. The left image is the full contour approximation, while the right image is the simplified version: for the rectangle, only the four corner points are drawn.

rayryeng