
I'm currently trying to compute a tensor product of vectors using numpy's tensordot. For example, let's say I have the following variables:

import numpy as np

a = [np.array([1, 2]), np.array([3, 4])]
b = [np.array([5, 6]), np.array([7, 8])]

and I want to compute the "tensor product of the vectors", i.e. [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]], which in our example would give:

a x b = [[5,12], [7,16], [15, 24], [21, 32]]
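Spelling that out explicitly (with the a and b above), the product I'm after is just every pairwise elementwise product, flattened into one list:

prod = [ai * bj for ai in a for bj in b]
# [array([ 5, 12]), array([ 7, 16]), array([15, 24]), array([21, 32])]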

I've tried many combinations of tensordot along different axes, but it never gives me the result I want :((

For example, I tried the following:

np.tensordot(a,b)

which gives me array(70)

or np.tensordot(a,b, axes = 0) which gives me

array([[[[ 5,  6],
         [ 7,  8]],

        [[10, 12],
         [14, 16]]],


       [[[15, 18],
         [21, 24]],

        [[20, 24],
         [28, 32]]]])

I also tried using different axes such as np.tensordot(a,b, axes = ([0], [1])) with no success...

Can someone please help me? :) I'm sure it's pretty trivial, but I seem to be missing something.

Thanks.

  • Show us a few of those `tensordot` applications, and what is wrong. – hpaulj Feb 10 '19 at 18:28
  • I tried to explain the `tensordot` results. But what makes you think that `tensordot` should give you the desired result? – hpaulj Feb 10 '19 at 22:12
  • Another way to put my question - what's `a tensordot`? I am aware of this `numpy` function. But it sounds like you are trying to produce a product that is defined elsewhere (math theory?). If so, where? – hpaulj Feb 10 '19 at 22:24
  • Thanks a lot for your answer below, that's exactly what I was looking for! – Kevin Richard Feb 11 '19 at 15:45
  • As for my aim, I'm defining f and g, two functions with values in a tensor space, by their values at all x, i.e. [f(x_1), f(x_2), ..., f(x_n)], with x_1, ..., x_n being the finite set of points on which f and g are defined. The product I'm defining can then be seen as the flattened vector of values of the function h defined as the tensor product f x g. – Kevin Richard Feb 11 '19 at 15:48

1 Answer

In [663]: a = np.array([[1, 2], [3,4]]); b = np.array([[5,6], [7,8]])

A simple dot (matrix product) of these 2 arrays:

In [664]: a.dot(b)
Out[664]: 
array([[19, 22],
       [43, 50]])

Your desired array:

In [665]: [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]] 
Out[665]: [array([ 5, 12]), array([ 7, 16]), array([15, 24]), array([21, 32])]
In [666]: np.array(_)
Out[666]: 
array([[ 5, 12],
       [ 7, 16],
       [15, 24],
       [21, 32]])

np.tensordot is an attempt to generalize np.dot; for 2d arrays like this it can't do anything that a few added transposes can't.

Your result isn't a tensordot in that sense. dot involves sums of products; you aren't doing any sums. Rather it looks more like an outer product, or maybe a variation on kron.
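To make the outer-product connection concrete, here's a quick check I'm adding (np.multiply.outer is the standard ufunc outer method; a and b are restated from In [663] for completeness): it forms every pairwise product with no summation at all, which is exactly what tensordot with axes=0 gave you:

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

outer = np.multiply.outer(a, b)          # outer[i, j, k, l] = a[i, j] * b[k, l]
print(outer.shape)                                          # (2, 2, 2, 2)
print(np.array_equal(outer, np.tensordot(a, b, axes=0)))    # True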

With a couple of trials I reproduced your array with einsum:

In [673]: np.einsum('ij,kj->ikj',a,b)
Out[673]: 
array([[[ 5, 12],
        [ 7, 16]],

       [[15, 24],
        [21, 32]]])
In [674]: _.reshape(-1,2)
Out[674]: 
array([[ 5, 12],
       [ 7, 16],
       [15, 24],
       [21, 32]])

einsum, like dot and tensordot, is built around sums of products, but it gives us finer control over which axes are multiplied and which are summed. Here, we don't sum over any.
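To make the "which axes are summed" point concrete, here is a small added example (using the same a and b): keeping j in the output subscript skips the summation entirely, while dropping j sums over it, which is just an ordinary matrix product with b transposed:

no_sum = np.einsum('ij,kj->ikj', a, b)      # shape (2, 2, 2); nothing is summed
summed = np.einsum('ij,kj->ik', a, b)       # sums over j; same as a.dot(b.T)
print(np.array_equal(summed, a.dot(b.T)))   # True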

I can get the same 3d array with:

In [675]: a[:,None,:]*b[None,:,:]
Out[675]: 
array([[[ 5, 12],
        [ 7, 16]],

       [[15, 24],
        [21, 32]]])
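Reshaping that broadcasted 3d result flattens the first two axes and gives the same (4, 2) array as Out[674]:

(a[:,None,:]*b[None,:,:]).reshape(-1,2)
# array([[ 5, 12],
#        [ 7, 16],
#        [15, 24],
#        [21, 32]])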

tensordot

According to the docs, the default value for axes is 2:

In [714]: np.tensordot(a,b)
Out[714]: array(70)
In [715]: np.tensordot(a,b,axes=2)
Out[715]: array(70)
  • axes = 2 : (default) tensor double contraction a : b

In other words, multiply the arrays, and sum over all axes. This is clearer, in my mind, with einsum notation:

In [719]: np.einsum('ij,ij',a,b)
Out[719]: 70
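Equivalently, without einsum, that double contraction is just the elementwise product summed over everything:

print((a * b).sum())    # 70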


In [718]: np.tensordot(a,b,axes=0).shape
Out[718]: (2, 2, 2, 2)
  • axes = 0 : tensor product a ⊗ b

The einsum equivalent is np.einsum('ij,kl',a,b).

I can see your desired result, or at least the Out[673] version of it, within your (2,2,2,2) array, as some sort of diagonal subset.
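One way to make that "diagonal subset" idea concrete (a quick check, using the a and b from In [663]): einsum can pick out a diagonal of the full (2,2,2,2) product by repeating an index in the subscripts, which recovers the Out[673] array:

full = np.tensordot(a, b, axes=0)       # full[i, j, k, l] = a[i, j] * b[k, l]
diag = np.einsum('ijkj->ikj', full)     # keep only the entries where l == j
print(np.array_equal(diag, np.einsum('ij,kj->ikj', a, b)))   # True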

I don't use these scalar-like axes modes of tensordot much. In a previous post or two I've puzzled over them, but I don't have a good feel for them. I much prefer the clarity of einsum.

How does numpy.tensordot function works step-by-step?
