I'm trying to build the block Toeplitz matrix for a 2D convolution with padding="same" (as in keras). I have read and searched a lot of material, but I can't find an implementation of it.
Some references I have used (I'm also reading papers, but none of them discusses convolution with padding="same", only "full" or "valid"):
McLawrence's answer: answer. He says literally: "This is for padding = 0 but can easily be adjusted by changing h_blocks and w_blocks and W_conv[i+j, :, j, :]." But I don't know how to implement these changes.
Warren Weckesser's answer: answer. Explains what a block matrix is.
Salvador Dali's answer: answer. Explains the method to build the block Toeplitz matrix for padding="valid", and in the same thread Ali Salehi explains the method for padding="full".
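For comparison, the padding="valid" construction from those answers can be sketched as below. Each kernel row becomes a rectangular (o_w × i_w) Toeplitz band, and block row m of the big matrix holds band u at block column m + u. Note this sketch uses cross-correlation (what keras Conv2D actually computes); for true convolution you would flip the kernel first. The function name conv2d_valid_matrix is mine, not from the linked answers:

```python
import numpy as np
from scipy import linalg
from scipy.signal import correlate2d

def conv2d_valid_matrix(kernel, in_shape):
    # doubly block Toeplitz matrix for padding="valid" (cross-correlation)
    k_h, k_w = kernel.shape
    i_h, i_w = in_shape
    o_h, o_w = i_h - k_h + 1, i_w - k_w + 1
    # one (o_w x i_w) Toeplitz band per kernel row
    bands = [linalg.toeplitz(c=(kernel[u, 0], *np.zeros(o_w - 1)),
                             r=(*kernel[u], *np.zeros(i_w - k_w)))
             for u in range(k_h)]
    W = np.zeros((o_h, o_w, i_h, i_w))
    for u, B in enumerate(bands):
        for m in range(o_h):
            W[m, :, m + u, :] = B   # band u sits at block column m + u
    return W.reshape(o_h * o_w, i_h * i_w)

kernel = np.array([[1., 2.], [3., 4.]])
x = np.arange(9.).reshape(3, 3)

out = (conv2d_valid_matrix(kernel, x.shape) @ x.ravel()).reshape(2, 2)
print(np.allclose(out, correlate2d(x, kernel, mode='valid')))  # → True
```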
Modifying the code of McLawrence's answer, I achieved the same result as keras conv2d with padding="same", but only for a 2x2 kernel and a square input matrix. The code is:
import numpy as np
from scipy import linalg

def conv2d_same_matrix(kernel, input):
    k_h, k_w = kernel.shape
    i_h, i_w = input.shape
    o_h, o_w = input.shape  # padding="same": output shape equals input shape
    # construct a 1d conv toeplitz matrix for each row of the kernel;
    # the trailing zero columns provide the zero padding at the right edge
    toeplitz = []
    for r in range(k_h):
        toeplitz.append(linalg.toeplitz(c=(kernel[r, 0], *np.zeros(i_w - 1)),
                                        r=(*kernel[r], *np.zeros(i_w - k_w))))
    # construct the doubly block toeplitz matrix:
    # block row j holds toeplitz[i] at block column i + j
    h_blocks, w_blocks = input.shape
    h_block, w_block = toeplitz[0].shape
    W_conv = np.zeros((h_blocks, h_block, w_blocks, w_block))
    for i, B in enumerate(toeplitz):
        for j in range(o_h):
            if i + j >= w_blocks:  # blocks past the bottom edge stay zero
                continue
            W_conv[j, :, i + j, :] = B
    W_conv.shape = (h_blocks * h_block, w_blocks * w_block)
    return W_conv
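As a sanity check on the code above, the same matrix can also be built entry by entry (row index = output pixel, column index = input pixel), which avoids the Toeplitz machinery entirely and makes the padding explicit. This is a minimal self-contained sketch, assuming the bottom/right zero padding that keras applies to even kernel sizes with padding="same":

```python
import numpy as np

kernel = np.array([[1., 2.], [3., 4.]])
x = np.arange(9.).reshape(3, 3)
i_h, i_w = x.shape
k_h, k_w = kernel.shape

# entry-by-entry construction: row = output pixel (a, b), col = input pixel
# (a+u, b+v); input pixels outside the image are the "same" zero padding
W = np.zeros((i_h * i_w, i_h * i_w))
for a in range(i_h):
    for b in range(i_w):
        for u in range(k_h):
            for v in range(k_w):
                if a + u < i_h and b + v < i_w:
                    W[a * i_w + b, (a + u) * i_w + (b + v)] = kernel[u, v]

out = (W @ x.ravel()).reshape(i_h, i_w)

# direct cross-correlation on the zero-padded input gives the same result
xp = np.pad(x, ((0, k_h - 1), (0, k_w - 1)))
ref = sum(kernel[u, v] * xp[u:u + i_h, v:v + i_w]
          for u in range(k_h) for v in range(k_w))
print(np.allclose(out, ref))  # → True
```

The entry-by-entry form also makes it easier to see what would have to change for larger kernels: "same" padding then needs top/left zeros as well, not just bottom/right.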
Is there any paper or reference that may be helpful?