
I have an image patch that I want to insert into another image at a floating-point location. In effect, what I need is the opposite of what the OpenCV getRectSubPix function does.

I guess I could implement it by doing a subpixel warp of the patch into another patch and inserting that patch into the target image at an integer location. However, it's not clear to me what to do with the empty fraction of the pixels in the warped patch, or how I would blend the border of the new patch with the target image.

I'd rather use a library function than implement this operation myself. Does anybody know of a library function that can do this type of operation, in OpenCV or any other image processing library?

UPDATE:

I discovered that OpenCV's warpPerspective can be used with borderMode = BORDER_TRANSPARENT, which means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function. So I thought I could implement this subpixel patch insertion with just a warpPerspective call and an appropriate transformation matrix. I wrote this function in Python to perform the operation:

import numpy as np
import cv2

def insert_patch_subpixel(im, patch, p):
    """
    im: numpy array with the source (grayscale) image.
    patch: numpy array with the patch to be inserted into the source image.
    p: tuple with the center position (can be float) where the patch is to be inserted.
    """
    ths = patch.shape[0] / 2.0  # half the patch size
    xpmin = p[0] - ths
    ypmin = p[1] - ths
    # Pure-translation homography that moves the patch to (xpmin, ypmin)
    Ho = np.array([[1, 0, xpmin],
                   [0, 1, ypmin],
                   [0, 0,     1]], dtype=float)

    h, w = im.shape
    im2 = cv2.warpPerspective(patch, Ho, (w, h), dst=im,
                              flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_TRANSPARENT)
    return im2

Unfortunately, the interpolation doesn't seem to work for the outlier pixels when BORDER_TRANSPARENT is used. I tested this function with a small 10x10 image (filled with value 30), inserting a 4x4 patch (filled with value 100) at p=(5,5) (left figure) and p=(5.5,5.5) (middle figure); the figures below show that there is no interpolation at the border. However, if I change the borderMode to BORDER_CONSTANT the interpolation works (right figure), but that also fills the destination image with 0s for the outlier values.

It's a shame that interpolation doesn't work with BORDER_TRANSPARENT. I'll suggest this as an improvement to the opencv project.

martinako
  • Warping the sub-image to the other can be done using `cv2.findHomography()` – Jeru Luke Jan 27 '17 at 18:17
  • It may be overkill for your problem, but you can warp the patch in the Fourier domain, then inverse-transform and merge it into your target image. Done correctly, that operation gives subpixel accuracy – Matt-Mac-Muffin Jan 28 '17 at 11:17
  • @Matt-Mac-Muffin could you elaborate a bit more on this approach? – martinako Jan 28 '17 at 14:09
  • A spatial shift in an image is a phase difference in the frequency domain. You can learn more about the FFT here http://www.robots.ox.ac.uk/~az/lectures/ia/lect2.pdf and this post might also help http://stackoverflow.com/questions/25827916/matlab-shifting-an-image-using-fft. – Matt-Mac-Muffin Jan 28 '17 at 15:17
  • Do the edges interpolate nicely if you're modifying a sub-image in the fourier domain? – Utkarsh Sinha Jan 30 '17 at 17:16
  • I didn't try the frequency domain solution as I found something using opencv warpPerspective that works for me, maybe @Matt-Mac-Muffin can comment on that. – martinako Jan 30 '17 at 17:29
  • I haven't tried myself but yes. I expect you would get results similar to the ones shown below. – Matt-Mac-Muffin Jan 30 '17 at 17:53
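The phase-shift idea from the comment thread can be sketched in plain NumPy: a spatial shift of (dx, dy) pixels corresponds to multiplying the image's Fourier transform by a linear phase ramp. This is only an illustrative sketch of the approach (the function name is made up), and note that the shift wraps around because the FFT assumes a periodic image:

```python
import numpy as np

def fft_subpixel_shift(img, dx, dy):
    """Shift a 2D image by a (possibly fractional) (dx, dy) using the
    Fourier shift theorem: a spatial shift is a linear phase ramp in
    the frequency domain. The result wraps around (periodic boundary)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)   # vertical frequencies
    fx = np.fft.fftfreq(w).reshape(1, -1)   # horizontal frequencies
    ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))
```

For integer shifts this reproduces np.roll exactly; for fractional shifts it interpolates using all frequencies, which is where the subpixel accuracy mentioned in the comment comes from.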

3 Answers


Resize the patch image to the size you want in the destination. Then set alpha along the edges based on 1.0 - fraction for the left edge, fraction for the right edge. Then blend.

It's not quite perfect, because you're not resampling all the pixels properly, but that would also damage resolution. It's probably your best compromise.

Malcolm McLean
  • Thanks, I see how this would work to make the patch blend with its surroundings, but the pixels inside the patch would still be placed at integer locations, unless I shift my patch first by the fractional part of the location. – martinako Jan 28 '17 at 14:13
  • I think I would have to implement the alpha blend manually; I think OpenCV only has the addWeighted function to blend full images, and it doesn't accept a mask – martinako Jan 28 '17 at 14:15
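The manual mask-weighted blend mentioned in the comment (which cv2.addWeighted indeed does not offer) only takes a few NumPy lines. A minimal sketch, assuming a grayscale image and a hypothetical `alpha_blend` helper:

```python
import numpy as np

def alpha_blend(dst, patch, top, left, alpha):
    """Blend `patch` into `dst` at integer offset (top, left) using a
    per-pixel `alpha` mask in [0, 1] with the same shape as `patch`."""
    h, w = patch.shape[:2]
    roi = dst[top:top + h, left:left + w].astype(float)
    blended = alpha * patch + (1.0 - alpha) * roi
    dst[top:top + h, left:left + w] = blended.astype(dst.dtype)
    return dst
```

To realize the answer's suggestion, `alpha` would be 1 inside the patch and ramp down to the fractional coverage along the edge rows and columns.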

Actually, you should use getRectSubPix(). Use it to extract your patch from the source image with the fractional part of your desired offset, then just set it into the destination image with a simple copy (or blend as needed).

You might want to add a 1-pixel border around the patch where you can do the blend.

This function essentially does a translation-only (subpixel) warp.
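For illustration, here is a pure-NumPy sketch of the bilinear sampling that getRectSubPix performs (the function name and signature here are hypothetical, not the OpenCV API; it assumes the sampled window stays inside the image):

```python
import numpy as np

def get_rect_subpix(img, patch_size, center):
    """Extract a patch_size x patch_size patch centred at a float
    (x, y) `center` using bilinear interpolation, mimicking the idea
    behind cv2.getRectSubPix on a grayscale image."""
    cx, cy = center
    half = (patch_size - 1) / 2.0
    xs = np.arange(patch_size) + cx - half   # float sample columns
    ys = np.arange(patch_size) + cy - half   # float sample rows
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    fx = xs - x0                             # fractional weights
    fy = ys - y0
    f = img.astype(float)
    # blend the four integer neighbours with bilinear weights
    top = (1 - fx)[None, :] * f[np.ix_(y0, x0)] + fx[None, :] * f[np.ix_(y0, x0 + 1)]
    bot = (1 - fx)[None, :] * f[np.ix_(y0 + 1, x0)] + fx[None, :] * f[np.ix_(y0 + 1, x0 + 1)]
    return (1 - fy)[:, None] * top + fy[:, None] * bot
```

Because the sampling is bilinear, it is exact on images that vary linearly, which makes the behaviour easy to verify.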

Adi Shavit

I found a solution based on what I found in my question update. Since I could see the interpolation happening when using borderMode = BORDER_CONSTANT in the warpPerspective function, I thought I could use that as a weighting mask for blending the original image with the subpixel-inserted patch on a black background. See the new function and test code:

import numpy as np
import cv2
import matplotlib.pyplot as plt

def insert_patch_subpixel2(im, patch, p):
    """
    im: numpy array with the source (grayscale) image.
    patch: numpy array with the patch to be inserted into the source image.
    p: tuple with the center position (can be float) where the patch is to be inserted.
    """
    ths = patch.shape[0] / 2.0  # half the patch size
    xpmin = p[0] - ths
    ypmin = p[1] - ths
    # Pure-translation homography that moves the patch to (xpmin, ypmin)
    Ho = np.array([[1, 0, xpmin],
                   [0, 1, ypmin],
                   [0, 0,     1]], dtype=float)

    h, w = im.shape
    im2 = cv2.warpPerspective(patch, Ho, (w, h),
                              flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_CONSTANT)

    # Warp an all-ones mask the same way; its interpolated border values
    # give the blending weights between the original image and the patch.
    patch_mask = np.ones_like(patch, dtype=float)
    blend_mask = cv2.warpPerspective(patch_mask, Ho, (w, h),
                                     flags=cv2.INTER_LINEAR,
                                     borderMode=cv2.BORDER_CONSTANT)

    # im2 is not multiplied by blend_mask because im2 has already
    # been interpolated against a zero background.
    im3 = im * (1 - blend_mask) + im2
    im4 = cv2.convertScaleAbs(im3)
    return im4

if __name__ == "__main__":
    x,y = np.mgrid[0:10:1, 0:10:1]
    im =(x+y).astype('uint8')*5
    #im = np.ones((10,10), dtype='uint8')*30
    patch = np.ones((4,4), dtype='uint8')*100
    p=(5.5,5.5)
    im = insert_patch_subpixel2(im, patch, p)
    plt.gray()
    plt.imshow(im, interpolation='none',  extent = (0, 10, 10, 0))
    ax=plt.gca()
    ax.grid(color='r', linestyle='-', linewidth=1)
    ax.set_xticks(np.arange(0, 10, 1));
    ax.set_yticks(np.arange(0, 10, 1));
    def format_coord(x, y):
        col = int(x)
        row = int(y)
        z = im[row,col]
        return 'x=%1.4f, y=%1.4f %s'%(x, y, z)
    ax.format_coord = format_coord
    plt.show()

In the images below we can see the results of a test with a small 10x10 image (filled with value 30), inserting a 4x4 patch (filled with value 100) at p=(5,5) (left figure) and p=(5.5,5.5) (middle figure); the figures now show bilinear interpolation at the border. To show that the interpolation works with an arbitrary background, I also include a test with a 10x10 gradient background (right figure). The test script creates a figure that lets you inspect the pixel values and verify that the correct interpolation is done at each border pixel.

martinako