Outline

I'm trying to warp an image (of a spectral peak in a time series, but this is not important) by generating a polynomial based on some 'centroid' data that is associated with the image (the peak at each time step) and augmenting the polynomial. These original and augmented polynomials make up my 'source' and 'destination' points, respectively, which I use to warp the image via skimage.transform.warp().

The goal of this warping is to produce two warped images (i.e. repeat the process twice). These images would then be positively correlated with one another, or negatively correlated if one of the two warped images were to be horizontally flipped (again, not that important here).

Here is an example output for comparison: Simple Image Comparison

(Note that the polynomial augmentation is performed by adding/subtracting noise at each polynomial peak/trough, proportional to the magnitude (pixel) at each point, then generating a new polynomial of the same order through these augmented points, with additional fixed points in place to prevent the augmented polynomial from inverting).
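The augmentation step described above can be sketched roughly as follows. This is a simplified stand-in, not my actual code: the centroid data is synthetic, the noise scale is arbitrary, every point is perturbed (rather than only peaks/troughs with fixed anchor points), and the names `centroid`, `poly`, and `aug_poly` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical centroid data: peak pixel position at each time step
time = np.arange(100)
centroid = 50 + 10 * np.sin(time / 15)

# Fit a 4th-order polynomial through the centroids (order = num_poly_degrees - 1)
coeffs = np.polyfit(time, centroid, deg=4)
poly = np.polyval(coeffs, time)

# Perturb each point proportionally to its magnitude, then refit a
# polynomial of the same order through the perturbed points
noise = rng.normal(0, 0.05, size=poly.shape) * np.abs(poly)
aug_coeffs = np.polyfit(time, poly + noise, deg=4)
aug_poly = np.polyval(aug_coeffs, time)

# Source/destination (N, 2) arrays for estimate_transform: (x=pixel, y=time)
source = np.column_stack([poly, time])
destination = np.column_stack([aug_poly, time])
```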


Code Snippet

I achieve this in code by creating a GeometricTransform and applying this to warp() as an inverse_map, as follows:

from skimage import transform

# Create the transformation object using the source and destination (N, 2) arrays in reverse order
# (as there is no explicit way to do an inverse polynomial transformation).
t = transform.estimate_transform('polynomial', src=destination, dst=source, order=4)  # order = num_poly_degrees - 1

# Warp the original image using the transformation object
warped_image = transform.warp(image, t, order=0, mode='constant', cval=float('nan'))

Problems

I have two main problems with the resulting warp:

  1. There are white spaces left behind due to the image warp. I know that this can be solved by changing the mode within transform.warp() from 'constant' to 'reflect', for example. However, that would repeat existing data, which is related to my next problem...
  2. Assuming I have implemented the warp correctly, it seems to have raised the 'zig-zag' feature seen at time step 60 to ~50 (red circles). My goal is to warp the images horizontally so that each feature remains within its own time step (give or take a very small amount), while its 'pixel' position (x-axis) is augmented. This is also why I am unsure about using 'reflect' or another mode within transform.warp(): it would artificially add data, which would cause problems later in my pipeline where I compare pairs of warped images to see how they are correlated (relating back to the second paragraph of the Outline).

My Attempts

I have tried using RANSAC to improve the warping, as in this question, which also uses a polynomial transformation: Robustly estimate Polynomial geometric transformation with scikit-image and RANSAC. I had hoped that this method would leave behind only smaller white spaces, in which case I would have been satisfied with switching to another mode within transform.warp(); however, it did not fix either of my issues, as the performance was about the same.

I have also looked into using a piecewise affine transformation and Delaunay triangulation (using cv2) as a means of both preserving the correct image dimensions (without repeating data) and having minimal y-component warping. The results do solve the two stated problems; however, the warping effect is almost imperceptible, and I am not sure if I should continue down this path by adding more triangles and trying more separated source and destination points (though this line of thought may require another post).
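For reference, the piecewise affine idea can also be done directly in scikit-image, which is closer to my existing snippet than the cv2 route. This is a minimal sketch with a toy image and an arbitrary sinusoidal x-only displacement, not my actual control points; it follows the same reversed-coordinates convention as the snippet above, since warp() treats the transform as the map from output to input coordinates.

```python
import numpy as np
from skimage import transform

# Toy image standing in for the spectral-peak track
image = np.random.rand(100, 100)
rows, cols = image.shape

# Regular control grid over the image (these act as the output coordinates)
src_cols, src_rows = np.meshgrid(np.linspace(0, cols, 10),
                                 np.linspace(0, rows, 10))
src = np.dstack([src_cols.flat, src_rows.flat])[0]

# Displace control points in x only; y (the time axis) stays fixed,
# so each feature remains within its own time step
dst = src.copy()
dst[:, 0] += 5 * np.sin(src[:, 1] / rows * 2 * np.pi)

tform = transform.PiecewiseAffineTransform()
tform.estimate(src, dst)

# Output shape defaults to the input shape, so image dimensions are preserved
warped = transform.warp(image, tform, mode='constant', cval=np.nan)
```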


Summary

I would like a way to warp my images horizontally using a polynomial transformation (any other suggestions for a transformation method are also welcome!), which does its best to preserve the image's features within their original time steps.

Thank you for your time.


Edit

Here is a link to a shared Google Drive directory containing a .py file and the data necessary to run an example of this process.

    Could you share a complete example with a sample image (ie full script to get to that figure)? I *suspect* that you can zero out a row/column of the transform in order to remove any y component of the transformation. Whether that is now the best transform or it's a bad way to estimate it is another story, but it might be good enough for your purposes. ie try inspecting the `t` object that you created in your code snippet and playing with the values. – Juan May 27 '21 at 02:55
  • @Juan Thank you for the response. I'll edit my original post with a link to a google drive containing the necessary files. To run it, you may have to edit the filepath string variable found on line 19 to whatever directory you may use. – AlexP May 27 '21 at 15:20
  • @Juan Also, that is a good suggestion to inspect `t` to see if we can suppress any y-component transformations! – AlexP May 27 '21 at 15:23
  • Please post the original image for us to test with. – Red Jun 10 '21 at 12:53
  • @AnnZen You should be able to access a shared google drive folder with the image inside of it in the edit section of the post. The image file is called "track.npy". – AlexP Jun 10 '21 at 14:13
  • I'm having trouble downloading them. – Red Jun 10 '21 at 14:40
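Juan's suggestion in the comments (zeroing out part of `t` to suppress the y-component) can be sketched like this. The point sets here are random stand-ins for my real source/destination arrays; the key assumption, taken from skimage's PolynomialTransform, is that `t.params` has shape (2, N) with row 1 producing the output y, and monomials ordered 1, x, y, x², xy, y², ..., so index 2 is the pure-y term.

```python
import numpy as np
from skimage import transform

# Stand-in (N, 2) point sets in place of the real centroid-derived arrays
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
dst = src + rng.normal(0, 2, size=(20, 2))

t = transform.estimate_transform('polynomial', src=src, dst=dst, order=4)

# Force row 1 (the y output) to the identity y_out = y:
# zero every coefficient except the pure-y monomial at index 2
t.params[1, :] = 0
t.params[1, 2] = 1

# The transform now warps x but leaves y (time steps) untouched
pts = np.array([[10.0, 30.0], [50.0, 70.0]])
out = t(pts)
```

Whether the resulting transform is still a good fit after this surgery is a separate question, but it does guarantee zero vertical warping.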
