I am not very familiar with tensor algebra, and I am having trouble understanding how to make `numpy.tensordot` do what I want.

The example I am working with is simple: given a tensor `a` with shape `(2,2,3)` and another tensor `b` with shape `(2,1,3)`, I want a result tensor `c` with shape `(2,1)`. This tensor would be the result of the following, equivalent Python code:
```python
n = a.shape[2]
c = np.zeros((2, 1))
for k in range(n):
    c += a[:,:,k] @ b[:,:,k]  # each term is (2,2) @ (2,1) -> (2,1)
```
The documentation says the following about the optional parameter `axes`:
If an int N, sum over the last N axes of a and the first N axes of b in order. The sizes of the corresponding axes must match.
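As a sanity check of the integer form, here is a small experiment I tried (the shapes are my own toy example, not the ones above):

```python
import numpy as np

# Toy example of the integer-axes rule quoted above (shapes are mine).
# With axes=1, the last axis of x is summed against the first axis of y,
# so for 2-D arrays tensordot(x, y, axes=1) is just matrix multiplication.
x = np.arange(6.0).reshape(2, 3)   # last axis: size 3
y = np.arange(12.0).reshape(3, 4)  # first axis: size 3
print(np.tensordot(x, y, axes=1).shape)  # (2, 4), same values as x @ y
```

That much I can follow, since it reduces to an ordinary matrix product.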
But I don't understand which "axes" are needed here (and when `axes` is a tuple, or a tuple of tuples, it gets even more confusing). The examples in the documentation aren't very clear to me either.
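To show where the tuple-of-tuples form loses me, here is one guess I tried with my actual shapes (the array contents are arbitrary, just so there is something to run):

```python
import numpy as np

# One guess at the tuple-of-tuples form, using my shapes (data is arbitrary).
# axes=([1, 2], [0, 2]) should, if I read the documentation correctly,
# sum over axes 1 and 2 of a paired with axes 0 and 2 of b.
a = np.arange(12.0).reshape(2, 2, 3)
b = np.arange(6.0).reshape(2, 1, 3)

guess = np.tensordot(a, b, axes=([1, 2], [0, 2]))
print(guess.shape)  # (2, 1)

# Compare against my loop:
c = np.zeros((2, 1))
for k in range(a.shape[2]):
    c += a[:,:,k] @ b[:,:,k]
print(np.allclose(guess, c))  # True on this example
```

It does produce the `(2,1)` shape I want and matches my loop here, but I can't tell whether I just got lucky with these shapes or whether this is the right way to think about which axes go where.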