What would be the fastest and most pythonic way to perform element-wise operations on arrays of different sizes without oversampling the smaller array?
For example: I have a small array A (10x10) and a large array B (1000x1000), and I want each element of A to correspond to a 100x100 block of B. There is no need for any interpolation; I just want to reuse the same element of A for all 10000 operations on the corresponding elements of B.
I can adjust the sizes of the two arrays so that the shape of the larger one is an exact multiple of the smaller one. Typically the ratio will be 1:1000 or 1:10000 in each dimension. The arrays represent data samples with different resolutions but the same extent.
I know that I can oversample the small array, e.g. with a Kronecker product, but it would be nicer to keep it small, especially since some of my arrays are already hard to handle and store because of their size. I'm using xarray and dask, but any numpy solution would also work.
I hope this snippet explains what I want to do:
import numpy as np
A = np.random.rand(10,10)
B = np.random.rand(1000,1000)
res = B.shape[0]//A.shape[0]
# I want to add A and B so that each element of A is added to a 100x100 block of B.
# This doesn't work, for obvious reasons (the shapes don't broadcast):
#C = A+B
# This works, but it sacrifices the resolution of B:
C = A+B[::res,::res]
# These solutions create an unnecessarily large intermediate array for the operation (don't they?):
K = np.ones((res,res))
C = np.kron(A, K) + B
C = np.repeat(np.repeat(A,res, axis=0), res, axis=1)+B
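A blocked reshape-and-broadcast version along these lines (just a sketch, assuming the shapes divide evenly as above) should avoid materializing an oversampled copy of A, but I don't know whether it is the fastest or most pythonic option, or how well it carries over to xarray/dask:
# Sketch: view B as a 10x10 grid of res x res blocks, then broadcast one
# element of A over each block; no enlarged copy of A is created.
nA, mA = A.shape
C_blocked = (B.reshape(nA, res, mA, res) + A[:, None, :, None]).reshape(B.shape)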
I have a feeling that this problem must have come up before, but I couldn't find an answer that works for this particular case.
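For reference, a quick check that the blocked version gives the same result as the oversampled versions above:
print(np.allclose(C, C_blocked))  # expected: True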