
I use Python OpenCV to register images, and once I've found the homography matrix H, I use cv2.warpPerspective to compute the final transformation.
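For context, a minimal sketch of that setup, with a synthetic image and a made-up homography (all values below are illustrative, not my real data):

import cv2
import numpy as np

# Synthetic stand-ins for the real registration data (illustrative values only)
image = np.random.randint(0, 256, (1000, 1500), dtype=np.uint8)
height, width = image.shape
H = np.array([[1.0,  0.05, 10.0],   # homography estimated during registration
              [0.02, 1.0,  20.0],
              [1e-6, 0.0,   1.0]])

warped_img = cv2.warpPerspective(image, H, (width, height))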

However, it seems that cv2.warpPerspective is limited to short (16-bit) coordinate encoding for performance reasons, see here. I did some tests, and indeed the limit on image dimensions is 32,767 pixels, i.e. 2^15 - 1, which is consistent with the explanation given in the other discussion.

Is there an alternative to cv2.warpPerspective? I already have the homography matrix, I just need to do the transformation.

  • Does this answer your question? [Alternative to opencv warpPerspective](https://stackoverflow.com/questions/56325847/alternative-to-opencv-warpperspective) – mapf Sep 28 '21 at 19:03
  • Thank you but no, I saw this discussion before posting. – FiReTiTi Sep 28 '21 at 19:05
  • What is the issue with the suggested approach? – mapf Sep 28 '21 at 19:06
  • I have the homography matrix, not its decomposition into transformations. If I did, what function could perform such a matrix-based transformation in OpenCV? – FiReTiTi Sep 28 '21 at 19:10
  • Ah I see. Well I'm not sure about the relationship between those matrices, but I assume that the transformation/application should be easily doable in numpy once you figure out what it is you need to do. – mapf Sep 28 '21 at 19:15
  • Thanks, I'll look into Numpy transformations. Likely Numpy can do it. – FiReTiTi Sep 28 '21 at 19:18
  • Perhaps you can convert the homography matrix into a fractional linear equation and do the warping using cv2.remap(). I do not know the limitations of remap. – fmw42 Sep 28 '21 at 19:39
  • Thanks, but any idea how to make such conversion? – FiReTiTi Sep 28 '21 at 20:41
  • no need to decompose the homography. I don't know what you are talking about there. -- you can recreate warpPerspective from numpy primitives. span a grid/mgrid/... so you get a big array that contains the coordinates for every **output** pixel. **invert** your homography, then apply it to these values (and normalize/dehomogenize). now you have the **input** coordinates for every pixel. use those to sample in the input picture. try `remap` for this (also for interpolation). remap takes such xmap/ymap as you just calculated. I don't know if it'll work on arrays larger than 2^15 along a side (a sketch of this approach follows the comments below) – Christoph Rackwitz Sep 28 '21 at 20:45
  • 1
    Can you please post a reproducible code sample? Create a synthetic large image (but not too large), define homography matrix, and apply `cv2.warpPerspective` that causes a failure. – Rotem Sep 29 '21 at 16:32
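Following up on Christoph Rackwitz's comment above, here is a rough sketch of what that approach could look like. The image and homography below are made up, and I have not verified whether `cv2.remap` accepts maps larger than 2^15 along a side:

import cv2
import numpy as np

def warp_perspective_manual(image, H, size):
    """Re-create cv2.warpPerspective(image, H, size) by hand: for every output
    pixel, compute the corresponding input coordinate with the inverted
    homography, then sample the input image with cv2.remap."""
    width, height = size

    # Coordinates (x, y, 1) of every output pixel; mgrid yields rows first
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)   # (3, h*w)

    # Map output coordinates back to input coordinates, then dehomogenize
    src = np.linalg.inv(H) @ coords
    src /= src[2]

    xmap = src[0].reshape(height, width).astype(np.float32)
    ymap = src[1].reshape(height, width).astype(np.float32)

    # remap interpolates the input image at (xmap, ymap); it may have its own
    # 16-bit coordinate limit, which I have not checked
    return cv2.remap(image, xmap, ymap, cv2.INTER_LINEAR)

# Quick sanity check on a small synthetic image (illustrative values only)
image = np.random.randint(0, 256, (400, 600), dtype=np.uint8)
H = np.array([[1.0,  0.1,  5.0],
              [0.05, 1.0, 10.0],
              [1e-5, 0.0,  1.0]])
a = cv2.warpPerspective(image, H, (600, 400))
b = warp_perspective_manual(image, H, (600, 400))
print(np.abs(a.astype(int) - b.astype(int)).mean())   # should be close to 0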

1 Answer


After looking at alternative libraries, I found a solution using skimage.

If H is the homography matrix, then this OpenCV code:

warped_img = cv2.warpPerspective(image, H, (width, height))

Becomes:

warped_img = skimage.transform.warp(image, numpy.linalg.inv(H), output_shape=(height, width)) * 255.0
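Two notes on this, assuming a uint8 input image: skimage.transform.warp expects the **inverse** mapping (output coordinates back to input coordinates), which is why numpy.linalg.inv(H) is passed rather than H itself, and it works on floating-point images scaled to [0, 1], which is why the result is multiplied by 255.0. If an 8-bit image is needed afterwards, a cast along these lines should do:

warped_img = warped_img.astype(numpy.uint8)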