
I have an image of a green, from which I want to retrieve the coordinates of the boundary.

[image of the green]

import cv2
green = cv2.imread(PATH) 
h, w = green.shape[:2]      #  644 × 600

I assume that the center of the green is located in the middle of the image.

center_x = int(w / 2)
center_y = int(h / 2)

The image above is read with cv2. I also have a list of xy-points (in meters) that represent the actual boundary, but at a different scale. I plot these with plt; the center of this shape is at (0, 0). These are the x-points:

boundary_x = [-13.66073755, -12.43520159, -11.04384843, -9.98332564, -7.11192784,
              -6.02621612, -4.71880321, -3.4512191, -1.52076807, -0.65083554, 
              0.32848671, 1.4180397, 2.61079625, 3.83598381, 4.56332455, 
              5.48307616, 7.15070888, 8.93250768, 9.84145255, 10.73775437, 
              11.16832875, 11.43360639, 11.58926395, 11.55000714, 11.02709822, 
              9.15382469, 7.74414845, 6.88892632, 5.59966095, 4.19472711, 
              3.71286054, 3.02690551, 2.01293197, 0.86352913, -0.76604073, 
              -1.69607303, -3.58023856, -7.05304689, -9.60330676, -11.42212883, 
              -12.0259103, -12.84910541, -13.43989501, -14.09124513, -14.44340792, 
              -14.58033385, -14.59593576, -14.47816773, -13.66073755]

and these are the y-points:

boundary_y = [0.39403631, -1.31464213, -2.90484677, -3.84934066, -6.01857721, 
              -7.03637054, -8.43781414, -10.07816775, -13.30072487, -14.56989554, 
              -15.64505512, -16.44305838, -17.06025494, -17.55754934, -17.72334407, 
              -17.65167922, -17.04824173, -16.16004512, -15.46884952, -14.36737278, 
              -13.45924195, -12.42784091, -10.96036943, -10.11943893, -7.62184824, 
              -0.31126985, 6.26614772, 8.57353953, 11.08919661, 13.3806628, 
              13.97784888, 14.58332691, 15.09121417, 15.37007017, 15.48016319, 
              15.33632452, 14.5088819, 12.73280014, 11.31790031, 10.12753062, 
              9.63155725, 8.7406104, 7.86794279, 6.59555651, 5.51401991, 
              4.56545763, 3.39174778, 2.29248212, 0.39403631]

I plot the image and the boundary one below the other as follows:

import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(2)

# upper plot: the image with its assumed center marked
ax1.imshow(green)
ax1.scatter(center_x, center_y, marker='o', color='red')

# lower plot: the boundary points (in meters), centered at (0, 0)
ax2.axis('equal')
ax2.plot(boundary_x, boundary_y, color='red')
ax2.scatter(0, 0, marker='o', color='red')
plt.show()

The output looks as follows:

[image of the two plots, one below the other]

My goal is to find, for each boundary point, the corresponding pixel in the upper image. This is relatively straightforward for the x-axis. For the y-axis it is different, however, because the y-axis of the upper image runs from zero at the top down to h, whereas the y-axis of the lower plot runs from positive to negative. Please advise!
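What I picture is something like the following, a minimal sketch that assumes a fixed scale in pixels per meter and that the green's center coincides with the image center (PIXELS_PER_METER is a placeholder value, not a measured one):

import numpy as np

PIXELS_PER_METER = 18.0   # placeholder, to be replaced with the real scale

bx = np.asarray(boundary_x)
by = np.asarray(boundary_y)

# x grows to the right in both coordinate systems, but the image's y-axis
# points down, so the sign of y has to be flipped
pixel_x = center_x + bx * PIXELS_PER_METER
pixel_y = center_y - by * PIXELS_PER_METER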

Edit:

I tried the following unsuccessfully:

I know that the original image is 600 × 644 pixels. With the help of this topic I was able to create a plt.Figure of those exact dimensions:

px = 1.0 / plt.rcParams['figure.dpi']  # pixel in inches

Next, I need to use only the content area, i.e. exclude everything outside of the plot area. Using this answer, I write the following:

fig = plt.figure(frameon=False)
fig.set_size_inches(w * px, h * px)   # figure of exactly w × h pixels
ax = plt.Axes(fig, [0, 0, 1, 1])      # axes spanning the whole figure
ax.set_axis_off()
fig.add_axes(ax)
ax.axis('equal')

Next, I plot and save it:

border, = ax.plot(boundary_x, boundary_y, color='red')
plt.show()
fig.savefig('border.png', dpi=fig.dpi) 

The figure looks as follows:

[image of the saved border figure]

When I inspect the file, its dimensions are 644 × 600 pixels.

Now, I want to find the red contour, plot it on the original image, and see if it's a match.

green = cv2.imread('green.png', cv2.IMREAD_UNCHANGED)
gray = cv2.imread('border.png', cv2.IMREAD_GRAYSCALE)
# binarize the saved border image and extract its contours
thresh = cv2.threshold(gray, 100, 255, 0)[1]
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw the found contours onto the original image
cv2.drawContours(green, contours, -1, (0, 255, 0), 1)

Unfortunately, it's not a match. Any ideas?

[image of the contours drawn on the original image, not matching]

HJA24
  • so you need to scale and translate the points, which means multiplication and addition. give it a try! -- if that doesn't use some specific constants, and you *really* have to *align* these shapes... you need to extract points from the larger shape too, and then implement Iterative Closest Point (ICP) – Christoph Rackwitz Jun 21 '22 at 19:35
  • thanks, although I am not 100% sure I believe that 1 pixel is equal to 0.054883645824956 meter. Does this make the process easier? – HJA24 Jun 21 '22 at 19:54
  • thanks for that constant. it is almost equal to 1 / (.3048 * 3 * 20) = 1/18.288, and you know the length of a foot, a yard, and 20 yards. I believe it's exactly what you need. – Christoph Rackwitz Jun 21 '22 at 22:08
  • I think the units are mixed up in your constant. it can't be meters/pixel. perhaps it's pixels/meter? then you would have 1 pixel = 20 yards... perhaps with another factor in there, but units definitely seem to have been upside down – Christoph Rackwitz Jun 21 '22 at 22:43
  • @ChristophRackwitz I am pretty sure its pixels to meter/something. If we multiply the width of the image, 644, with the constant, we get +- 35 which is similar to the width of the green. I need to check whether the unit is actually meters or yards – HJA24 Jun 22 '22 at 08:20
  • oh ok, possibly what I interpreted into that constant is just coincidence. anyway, a diversion from the question, that was how to align the shapes – Christoph Rackwitz Jun 22 '22 at 08:32
  • try your approaches with synthetic data, data for which you _know_ the relationship. – Christoph Rackwitz Jun 22 '22 at 12:56

1 Answer


I would start with computing the OBB (oriented bounding box) for both shapes and then just computing the transform between them. Note that some shapes have infinitely many possible OBBs; for such shapes this will not work...
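For what it's worth, a quick sketch of the OBB route could look like this; the use of cv2.minAreaRect, the names, and the assumption that the green is available as a binary mask and the boundary as an N×2 float array are mine, and the 90°/180° ambiguity of the rectangle angle and the flipped image y-axis are not handled:

import cv2
import numpy as np

def obb(points):
    # returns ((cx, cy), (width, height), angle in degrees)
    return cv2.minAreaRect(np.asarray(points, dtype=np.float32))

def obb_transform(mask, boundary):
    ys, xs = np.nonzero(mask)
    (icx, icy), (iw, ih), iang = obb(np.column_stack([xs, ys]))
    (bcx, bcy), (bw, bh), bang = obb(boundary)
    scale = max(iw, ih) / max(bw, bh)           # pixels per meter
    rotation = iang - bang                      # degrees, up to the ambiguity
    translation = (icx - bcx * scale, icy - bcy * scale)
    return scale, rotation, translation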

Another possibility is to do this for both shapes:

  1. compute centroid
  2. select most distant point to centroid
  3. compute the distance of this point to centroid

Now the scale is just the ratio between the two distances and the translation is the difference between the centroids, so the only thing left is to fit the angle of rotation until the rendered shapes overlap best (i.e. a minimal number of mismatched pixels).
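A rough sketch of this second approach, assuming `mask` is a binary (0/255) image of the green and `boundary` is an N×2 array of (x, y) points in meters; the names and the brute-force one-degree angle search are my own assumptions:

import cv2
import numpy as np

def centroid_and_radius(points):
    c = points.mean(axis=0)
    r = np.linalg.norm(points - c, axis=1).max()   # distance of farthest point
    return c, r

def fit_transform(mask, boundary):
    ys, xs = np.nonzero(mask)
    img_pts = np.column_stack([xs, ys]).astype(np.float64)
    c_img, r_img = centroid_and_radius(img_pts)
    c_bnd, r_bnd = centroid_and_radius(boundary)

    scale = r_img / r_bnd                          # pixels per meter
    best_angle, best_diff = None, None
    for deg in range(360):
        a = np.deg2rad(deg)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        # scale, rotate and translate the boundary into image coordinates
        pts = (boundary - c_bnd) @ rot.T * scale + c_img
        render = np.zeros_like(mask)
        cv2.fillPoly(render, [np.round(pts).astype(np.int32)], 255)
        # count pixels set in one shape but not the other
        diff = np.count_nonzero(cv2.bitwise_xor(render, mask))
        if best_diff is None or diff < best_diff:
            best_angle, best_diff = deg, diff
    return scale, c_img - c_bnd * scale, best_angle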

Spektre