I took the code from the answer https://stackoverflow.com/a/10374811/4828720 to "Image transformation in OpenCV" and tried to adapt it to an image of mine.
In it, I identified the pixel coordinates of the centers of the checkered bricks, illustrated here:
My target resolution is 784×784 pixels, and I calculated the destination coordinates of those centers accordingly. My resulting code is this:
import cv2
import numpy as np
from scipy.interpolate import griddata

# Pixel coordinates of the checkerboard centers in the source image
source = np.array([
    [315, 15],
    [962, 18],
    [526, 213],
    [754, 215],
    [516, 434],
    [761, 433],
    [225, 701],
    [1036, 694],
], dtype=int)

# Where those centers should end up in the 784x784 output
destination = np.array([
    [14, 14],
    [770, 14],
    [238, 238],
    [546, 238],
    [238, 546],
    [546, 546],
    [14, 770],
    [770, 770],
], dtype=int)

source_image = cv2.imread('frames.png')

# One sample position for every pixel of the 784x784 output
grid_x, grid_y = np.mgrid[0:783:784j, 0:783:784j]

# Interpolate source coordinates over the destination grid
grid_z = griddata(destination, source, (grid_x, grid_y), method='cubic')

# Split the interpolated coordinates into the two maps cv2.remap expects
map_x = np.append([], [ar[:, 1] for ar in grid_z]).reshape(784, 784)
map_y = np.append([], [ar[:, 0] for ar in grid_z]).reshape(784, 784)
map_x_32 = map_x.astype('float32')
map_y_32 = map_y.astype('float32')

warped_image = cv2.remap(source_image, map_x_32, map_y_32, cv2.INTER_CUBIC)
cv2.imwrite("/tmp/warped2.png", warped_image)
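To rule out the interpolation step itself, I checked that the fitted mapping reproduces the source coordinates at the eight destination points (a minimal sketch using the same point pairs; a cubic griddata interpolant should pass exactly through its data points):

```python
import numpy as np
from scipy.interpolate import griddata

# Same eight point pairs as above
source = np.array([[315, 15], [962, 18], [526, 213], [754, 215],
                   [516, 434], [761, 433], [225, 701], [1036, 694]], dtype=float)
destination = np.array([[14, 14], [770, 14], [238, 238], [546, 238],
                        [238, 546], [546, 546], [14, 770], [770, 770]], dtype=float)

# Evaluate the interpolant at the destination points themselves.
# Cubic griddata (Clough-Tocher) interpolates exactly at its data
# points, so this should return `source` up to floating-point error.
recovered = griddata(destination, source, destination, method='cubic')
print(np.allclose(recovered, source, atol=1e-6))  # True
```

So the interpolation at the control points looks correct; the problem seems to appear somewhere between the maps and cv2.remap.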
If I run this, none of the source points end up at their intended destinations; instead I get a warped mess. I added the destination points on top here:
Where am I going wrong? I noticed that my grid and map arrays are not as nicely distributed as the ones in the example. Do I have too few points? Do I need them in a regular grid? I tried using only the four outer corner points as well, with no luck.
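For completeness, the four-corner variant I tried looked roughly like this (a sketch reconstructed from memory, using the same approach with only the outer corner pairs):

```python
import numpy as np
from scipy.interpolate import griddata

# Only the four outer corner correspondences
source = np.array([[315, 15], [962, 18], [225, 701], [1036, 694]], dtype=float)
destination = np.array([[14, 14], [770, 14], [14, 770], [770, 770]], dtype=float)

grid_x, grid_y = np.mgrid[0:783:784j, 0:783:784j]
grid_z = griddata(destination, source, (grid_x, grid_y), method='cubic')

# One thing I noticed: griddata fills NaN for query points outside the
# convex hull of `destination`, so the 14-pixel border of the output
# grid has no valid mapping at all here.
print(grid_z.shape)            # (784, 784, 2)
print(np.isnan(grid_z).any())  # True
```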