Suppose the convolution of an arbitrary number of discrete probability distributions needs to be calculated. In the example below there are four distributions, each taking the values 0, 1, 2 with the specified probabilities:
import numpy as np
pdfs = np.array([[0.6,0.3,0.1],[0.5,0.4,0.1],[0.3,0.7,0.0],[1.0,0.0,0.0]])
The convolution can be found like this:
pdf = pdfs[0]
for i in range(1, pdfs.shape[0]):
    pdf = np.convolve(pdfs[i], pdf)
The probabilities of seeing 0,1,...,8 are then given by
array([ 0.09 , 0.327, 0.342, 0.182, 0.052, 0.007, 0. , 0. , 0. ])
This part is the bottleneck in my code, and it seems there should be a way to vectorize the operation. Does anyone have a suggestion for making it faster?
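One idea I have considered is doing the whole thing in frequency space: zero-pad every row to the length of the final result, FFT each row, take the product of the spectra across rows, and transform back. This is only a sketch (not benchmarked), and it assumes all the distributions have the same support length:

```python
import numpy as np

pdfs = np.array([[0.6, 0.3, 0.1],
                 [0.5, 0.4, 0.1],
                 [0.3, 0.7, 0.0],
                 [1.0, 0.0, 0.0]])

n, k = pdfs.shape
L = n * (k - 1) + 1                      # length of the full convolution
# FFT each row zero-padded to L, multiply the spectra across rows,
# then transform back; convolution becomes a single product along axis 0
spectra = np.fft.rfft(pdfs, L, axis=1)
pdf = np.fft.irfft(spectra.prod(axis=0), L)
```

Up to floating-point roundoff this reproduces the loop's result, and the per-row FFTs and the product are fully vectorized.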
Alternatively, a solution where you could use
pdf1 = np.array([[0.6,0.3,0.1],[0.5,0.4,0.1]])
pdf2 = np.array([[0.3,0.7,0.0],[1.0,0.0,0.0]])
convolve(pdf1, pdf2)
and get the pairwise convolutions
array([[ 0.18, 0.51, 0.24, 0.07, 0. ],
[ 0.5, 0.4, 0.1, 0. , 0. ]])
would also help tremendously.
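To make the pairwise behaviour concrete, here is a sketch of what I mean (the name `convolve_rows` is just a placeholder, and it assumes both inputs are 2-D with the same number of rows):

```python
import numpy as np

def convolve_rows(a, b):
    """Row-wise convolution of two 2-D arrays: row i of the result is
    np.convolve(a[i], b[i]), computed for all rows at once via FFT."""
    L = a.shape[1] + b.shape[1] - 1      # full convolution length per row
    fa = np.fft.rfft(a, L, axis=1)
    fb = np.fft.rfft(b, L, axis=1)
    return np.fft.irfft(fa * fb, L, axis=1)

pdf1 = np.array([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]])
pdf2 = np.array([[0.3, 0.7, 0.0], [1.0, 0.0, 0.0]])
result = convolve_rows(pdf1, pdf2)
```

This matches the pairwise output shown above up to floating-point roundoff.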