
Constraints: cannot call convolve, correlate, fftconvolve, or any other similar functions.

Query: Is there a way to get rid of the for loops I used to perform the convolution? For larger images they slow the program down considerably. Or is there a better method altogether for the same task?

Pre-Context: img0 is a 2D grayscale image and h is the 2D convolution mask, which has odd dimensions.

Here is the code I used:

import numpy as np

def myImageFilter(img0, h):
    img0_r, img0_c = img0.shape
    m, n = h.shape
    img1 = np.zeros((img0_r, img0_c))
    row_pad = m // 2
    col_pad = n // 2
    # Replicate the border so the output keeps the input's size.
    f = np.pad(img0, ((row_pad, row_pad), (col_pad, col_pad)), 'edge')
    # Flip the mask so the sliding product computes convolution,
    # not correlation.
    h_prime = np.flip(h)

    for i in range(row_pad, img0_r + row_pad):
        for j in range(col_pad, img0_c + col_pad):
            window = f[i - row_pad : i + row_pad + 1,
                       j - col_pad : j + col_pad + 1]
            img1[i - row_pad, j - col_pad] = np.sum(window * h_prime)
    return img1
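One way to move the loops out of Python (not part of the original post, and a sketch rather than a definitive answer) is to build a view of every kernel-sized window with np.lib.stride_tricks.sliding_window_view (NumPy 1.20+) and contract all windows against the flipped mask in a single einsum call. This does the same arithmetic as the loops above, only inside NumPy's compiled code:

```python
import numpy as np

def myImageFilter_vec(img0, h):
    # Same padding and kernel flip as the loop version.
    m, n = h.shape
    f = np.pad(img0, ((m // 2, m // 2), (n // 2, n // 2)), 'edge')
    h_prime = np.flip(h)
    # windows has shape (rows, cols, m, n): one m-by-n patch per output
    # pixel, created as a strided view with no data copied.
    windows = np.lib.stride_tricks.sliding_window_view(f, (m, n))
    # Multiply each patch by the flipped kernel and sum over the patch
    # axes, yielding the (rows, cols) filtered image in one step.
    return np.einsum('ijmn,mn->ij', windows, h_prime)
```

Note that the windows array is a view, so peak memory stays close to the padded image, but the einsum still performs the full O(rows * cols * m * n) multiplies; it removes the interpreter overhead, not the arithmetic.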
  • You seem to be expecting magic. A 2D convolution necessarily involves nested loops; that's just what it does. Any convolution package you called would do the same thing. – Tim Roberts Sep 17 '22 at 00:11
  • You can reduce the amount of work if your filter is separable. Otherwise, for larger filters you can reduce work by using the FFT. But there is no way to reduce the number of loops for the generic, spatial-domain case. – Cris Luengo Sep 17 '22 at 00:30
  • @TimRoberts The default Python implementation is CPython, a slow interpreter, so loops are pretty expensive (several orders of magnitude slower than native C/C++ code in this case). That is why the OP asks about removing loops. Vectorization (i.e., using package functions that do the loops in C/C++) can be used to mitigate this problem. – Jérôme Richard Sep 17 '22 at 00:40
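To illustrate the separable-filter point from the comments: if the mask factors as h = np.outer(v, w), the 2D convolution equals a 1-D pass along the rows with w followed by a 1-D pass along the columns with v, cutting the per-pixel cost from O(m*n) to O(m + n) multiplies. A minimal sketch (the function names here are illustrative, not from the original post) that stays within the "no convolve" constraint by using windowed views:

```python
import numpy as np

def separable_filter(img0, v, w):
    # Filters img0 with the separable kernel h = np.outer(v, w)
    # as two 1-D passes, using the same edge padding as the question.
    rp, cp = len(v) // 2, len(w) // 2
    f = np.pad(img0, ((rp, rp), (cp, cp)), 'edge')

    def conv1d_last_axis(a, k):
        # Valid 1-D convolution along the last axis: each window of
        # length len(k) is dotted with the flipped kernel.
        win = np.lib.stride_tricks.sliding_window_view(a, len(k), axis=-1)
        return win @ np.flip(k)

    tmp = conv1d_last_axis(f, w)         # filter the rows with w
    return conv1d_last_axis(tmp.T, v).T  # then the columns with v
```

Many common masks (Gaussian, box, Sobel) are separable; for a generic non-separable mask this decomposition does not apply, which is where the FFT approach mentioned above becomes attractive for large kernels.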
