
This question is related to Transformation between two set of points. However, this one is better specified, and some assumptions have been added.

I have an image of an element and a model of it.

I've detected contours in both:

# find contours in the model and in the edge image, then simplify them
contoursModel0, hierarchyModel = cv2.findContours(model.copy(), cv2.RETR_LIST,
                                                  cv2.CHAIN_APPROX_SIMPLE);
contoursModel = [cv2.approxPolyDP(cnt, 2, True) for cnt in contoursModel0];
contours0, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_LIST,
                                        cv2.CHAIN_APPROX_SIMPLE);
contours = [cv2.approxPolyDP(cnt, 2, True) for cnt in contours0];

Then I matched every image contour against every model contour:

modelMassCenters = [];
imageMassCenters = [];
for cnt in contours:
    for cntModel in contoursModel:
        result = cv2.matchShapes(cnt, cntModel, cv2.cv.CV_CONTOURS_MATCH_I1, 0);
        if(result != 0):
            if(result < 0.05):
                # here are matched contours: store each shape's mass center
                momentsModel = cv2.moments(cntModel);
                momentsImage = cv2.moments(cnt);
                massCenterModel = (momentsModel['m10']/momentsModel['m00'],
                                   momentsModel['m01']/momentsModel['m00']);
                massCenterImage = (momentsImage['m10']/momentsImage['m00'],
                                   momentsImage['m01']/momentsImage['m00']);
                modelMassCenters.append(massCenterModel);
                imageMassCenters.append(massCenterImage);

The matched contours act as something like features.

Now I want to detect the transformation between these two sets of points. Assumptions: the element is a rigid body, and only rotation, displacement and scale change are possible.
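
Put concretely, these assumptions mean every model point p should map to an image point p' by a 2D similarity transform:

p' = s * R(theta) * p + t,  with  R(theta) = [ cos(theta)  -sin(theta) ]
                                             [ sin(theta)   cos(theta) ]

That is only four degrees of freedom (s, theta and the two components of t), so two correct correspondences already determine the transform; every additional feature is redundancy that can be spent on rejecting mismatches and scoring the result.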

Some features may be misdetected; how can I eliminate them? I've used cv2.findHomography once: it takes two vectors of points and calculates the homography between them even when there are some mismatches.

cv2.getAffineTransform takes only three points (so it can't cope with mismatches), and here I have multiple features. The answer to my previous question says how to calculate this transformation, but it does not handle mismatches. I also think the algorithm could return some quality level (by computing the transformation from part of the points and then checking how many of the remaining points are mismatched).
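
For what it's worth, OpenCV 3.2+ has a function that does exactly this robust fit: cv2.estimateAffinePartial2D estimates rotation + scale + translation with RANSAC and returns an inlier mask, which directly yields a quality level. A sketch assuming that newer API (the 2.x API used above lacks it; cv2.estimateRigidTransform is the closest 2.x substitute, but it returns no inlier mask):

import numpy as np
import cv2

src = np.float32(modelMassCenters)    # paired mass centers, shape (N, 2)
dst = np.float32(imageMassCenters)

# RANSAC fits rotation+scale+translation; mismatched pairs come back as outliers
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
if M is not None:
    scale = np.hypot(M[0, 0], M[1, 0])        # M = [[s*cos, -s*sin, tx], [s*sin, s*cos, ty]]
    rotation = np.arctan2(M[1, 0], M[0, 0])   # radians
    displacement = (M[0, 2], M[1, 2])
    quality = inliers.sum() / float(len(inliers))   # inlier ratio as match quality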

And the last question: should I use all of the contour points to compute the transformation, or treat only the mass centers of these shapes as features?

To illustrate, I've added a simple image. Features in green are good matches, in red bad matches. Here the match should be computed from the 3 green features, and the red mismatches should lower the match quality.

[image: three green (correctly matched) features and red mismatched ones]

I'm adding a fragment of the solution I've worked out so far (but I think it could be done much better):

rotations = [];
scales = [];
for i in range(0, len(modelMassCenters) - 1):
    for j in range(i + 1, len(modelMassCenters)):
        x1, y1 = modelMassCenters[i];
        x2, y2 = modelMassCenters[j];
        modelVec = (x2 - x1, y2 - y1);
        x1, y1 = imageMassCenters[i];
        x2, y2 = imageMassCenters[j];
        imageVec = (x2 - x1, y2 - y1);
        # angle() and length() are small vector helpers defined elsewhere
        rotation = angle(modelVec, imageVec);
        rotations.append((i, j, rotation));
        scale = length(modelVec)/length(imageVec);
        scales.append((i, j, scale));

After computing the scale and rotation given by each pair of corresponding segments, I'm going to take the median value and then average the rotations which do not differ from the median by more than some delta. The same with scale. The point pairs which contributed to those averaged values will then be used to compute the displacement.
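
A minimal sketch of that filtering and averaging step, assuming angle() returns the model-to-image rotation in radians and using placeholder delta thresholds:

import numpy as np

def filter_by_median(samples, delta):
    # keep (i, j, value) triples whose value lies within delta of the median
    med = np.median([v for (_, _, v) in samples])
    return [(i, j, v) for (i, j, v) in samples if abs(v - med) <= delta]

goodRot = filter_by_median(rotations, delta=0.05)   # radians, assumed threshold
goodScl = filter_by_median(scales, delta=0.05)      # assumed threshold

rotation = np.mean([v for (_, _, v) in goodRot])
scale = np.mean([v for (_, _, v) in goodScl])

# displacement from the points behind the surviving pairs; note that
# scale = |modelVec| / |imageVec| above, so model -> image divides by it
goodIdx = ({i for (i, j, _) in goodRot} | {j for (i, j, _) in goodRot}) \
          & ({i for (i, j, _) in goodScl} | {j for (i, j, _) in goodScl})
c, s = np.cos(rotation) / scale, np.sin(rotation) / scale
offsets = []
for k in goodIdx:
    mx, my = modelMassCenters[k]
    ix, iy = imageMassCenters[k]
    offsets.append((ix - (c * mx - s * my), iy - (s * mx + c * my)))
displacement = tuple(np.mean(offsets, axis=0))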

krzych
  • Just to get it over with - someone will come along and point out that your code is a bit C/Java-ish in style. Although not a syntax error - parentheses are not required around an `if` and lines need not be terminated by `;` – Jon Clements Sep 24 '12 at 09:49
  • I know, I don't know Python at all. Only use it for testing some vision algorithms with OpenCV – krzych Sep 24 '12 at 09:51
  • When you used findHomography did you use the ransac option? What exactly was the problem with it? – Hammer Sep 24 '12 at 14:54
  • findHomography assumes shrinking and transformations in 3D. I've only added it as an example of solving the mismatch problem. It is not suited to this one. – krzych Sep 24 '12 at 14:58

1 Answer


Your second step (match contours to each other by doing a pairwise shape comparison) sounds very vulnerable to errors if features have a similar shape, e.g., you have several similar-sized circular contours. Yet if you have a rigid body with 5 circular features in one quadrant only, you could get a very robust estimate of the affine transform if you consider the body and its features as a whole. So don't discard information like a feature's range and direction from the center of the whole body when matching features. Those are at least as important in correlating features as size and shape of the individual contour.

I'd try something like (untested pseudocode):

"""
Convert from rectangular (x,y) to polar (r,w)
    r = sqrt(x^2 + y^2)
    w = arctan(y/x) = [-\pi,\pi]
"""
def polar(x, y):        # w in radians
    from math import hypot, atan2, pi
    return hypot(x, y), atan2(y, x)

model_features = []
model = params(model_body_contour)    # return tuple (center_x, center_y, area)
for contour in model_feature_contours:
    f = params(countour)
    range, angle = polar(f[0]-model[0], f[1]-model[1])
    model_features.append((angle, range, f[2]))

image_features = []
image = params(image_body_contour)
for contour in image_feature_contours:
    f = params(countour)
    range, angle = polar(f[0]-image[0], f[1]-image[1])
    image_features.append((angle, range, f[2]))

# sort image_features and model_features by angle, range
#
# correlate image_features against model_features across angle offsets
#    rotation = angle offset of max correlation
#    scale = average(model areas and ranges) / average(image areas and ranges)
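
The correlation step sketched in those comments could look like the brute-force version below. It assumes both lists are sorted by angle, have equal length (no mis-detections), and that image ranges/areas have already been divided by the scale estimate so they are comparable to the model's:

import math

def rotation_by_circular_match(model_features, image_features):
    # Both lists: (angle, range, area) tuples sorted by angle, equal length,
    # with image ranges/areas already rescaled into model units.
    n = len(model_features)
    best_err, best_rot = float('inf'), 0.0
    for shift in range(n):                      # try every cyclic pairing
        pairs = [(model_features[k], image_features[(k + shift) % n])
                 for k in range(n)]
        diffs = [m[0] - i[0] for m, i in pairs]
        # circular mean of the angle differences -> candidate rotation
        rot = math.atan2(sum(math.sin(d) for d in diffs),
                         sum(math.cos(d) for d in diffs))
        # residual: angular spread around rot (2 - 2*cos(x) ~ x^2, handles wrap)
        err = sum(2 - 2 * math.cos(d - rot) for d in diffs)
        # plus range/area disagreement under this pairing
        err += sum((m[1] - i[1]) ** 2 + (m[2] - i[2]) ** 2 for m, i in pairs)
        if err < best_err:
            best_err, best_rot = err, rot
    return best_rot, best_err

The returned residual doubles as the "quality level" asked about in the question: near zero for a clean match, growing as the feature sets disagree.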

If you have very challenging images, such as a ring of 6 equally-spaced similar-sized features, 5 of which have the same shape and one is different (e.g. 5 circles and a star), you could add extra parameters such as eccentricity and sharpness to the list of feature parameters, and include them in the correlation when searching for the rotation angle.

Dave
  • Thanks for a very nice solution. Sometimes the problem is that model_body_contour and image_body_contour are not detected or have occlusions. Maybe I will try treating the mass center of all detected features as the body center, on both the model and the image. What about a correctness measure in your approach? I see that something with correlations can be done there, but how? I want a more systematic and reliable way of outputting matching correctness. – krzych Sep 24 '12 at 21:29
  • A cross-correlation (between image and model) of 1.0 would be a perfect match; anything less than 1.0 indicates a lower-quality match. – Dave Sep 24 '12 at 23:46
  • Ok, thanks for the complete answer. I will wait for some other solutions, as this problem can be solved in many ways, and then accept the best one. – krzych Sep 25 '12 at 07:40