
I want to compute the following operation over a matrix:

import numpy as np

x = np.arange(9).reshape((3,3))
result = np.zeros((3,3,3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            result[i,j,k] = x[j,i] * x[j,k]

Which gives

array([[[  0.,   0.,   0.],
        [  9.,  12.,  15.],
        [ 36.,  42.,  48.]],

       [[  0.,   1.,   2.],
        [ 12.,  16.,  20.],
        [ 42.,  49.,  56.]],

       [[  0.,   2.,   4.],
        [ 15.,  20.,  25.],
        [ 48.,  56.,  64.]]])

As expected.

Question

How can I perform this calculation with tensor products (without loops) in numpy?

Edit

If the elements of x are vectors, the operation is instead:

result[i,j,k] = np.dot(x[j,i], x[j,k])
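
For concreteness, a minimal loop version of this variant (a sketch; the trailing axis length d is illustrative, not part of the original problem):

import numpy as np

d = 4  # illustrative vector length
x = np.arange(9*d).reshape((3,3,d))  # each x[j,i] is now a length-d vector
result = np.zeros((3,3,3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            result[i,j,k] = np.dot(x[j,i], x[j,k])  # dot over the trailing axis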

What would be the appropriate numpy operation for this calculation?

Toool

1 Answer


A straightforward one, using the loop iterators as a string expression with np.einsum, would be -

np.einsum('ji,jk->ijk',x,x)
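
The subscripts can be read straight off the loop: the inputs are indexed `ji` and `jk`, the output `ijk`, and since every index appears in the output, nothing is summed. A quick check against the loop result from the question:

np.allclose(np.einsum('ji,jk->ijk',x,x), result)  # True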

Another with broadcasting and swapping axes -

(x[:,None,:]*x[:,:,None]).swapaxes(0,1)
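
Here `x[:,:,None]` has shape `(3,3,1)` and `x[:,None,:]` has shape `(3,1,3)`, so their broadcast product has shape `(3,3,3)` indexed as `[j,i,k]`, and `swapaxes(0,1)` reorders that to `[i,j,k]`. This also matches the loop result:

np.allclose((x[:,None,:]*x[:,:,None]).swapaxes(0,1), result)  # True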
Divakar
  • Thank you! I forgot to mention that I am not looking for a solution with `np.einsum`. The broadcasting solution is neat! But I am wondering if there is a solution using `np.tensordot` (since the true problem involves dot products - see my edit), or something similar. I am actually implementing this in _theano_, which puts constraints on broadcasting. – Toool Jul 14 '17 at 20:05
  • @Tool So, does theano support broadcasting? What are the constraints? Since we need to keep one axis aligned (in this case the first axis), numpy.dot or numpy.tensordot won't work without excessive memory usage. `numpy.matmul` might work (see the sketch after these comments), but again I would like to know if there's any equivalent of it in theano. – Divakar Jul 14 '17 at 20:08
  • I just checked, _theano_ does support broadcasting (same as numpy) :) Isn't `tensordot` supposed to use broadcasting? What would be the equivalent of your current answer with a dot product instead of a regular product? – Toool Jul 14 '17 at 20:13
  • 1
    @Tool See [`here`](https://stackoverflow.com/a/41871402/3293881) for some explanation with a sample and why tensordot won't work/at least without more workarounds and no performance benefit. In your case, there's no sum-reduction, its all expansion. As such, `broadcasting` would be the way to go. – Divakar Jul 14 '17 at 20:17
  • 1
    A `dot` product is a sum of products. `tensordot` reduces the problem to a `dot` call, using transposes and reshapes. But you aren't summing anything. This is a kind of outer product, which broadcasting does nicely. Even the `einsum` solution doesn't sum. – hpaulj Jul 14 '17 at 20:20
  • Thank you for your comments. I will go for the broadcasting solution as it is the most efficient one! – Toool Jul 14 '17 at 20:26
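
For the vector-element case from the edit, the dot product is just the elementwise product followed by a sum over the vector axis, so the broadcasting answer extends with one extra reduction. A sketch of that idea, and of the `np.matmul` route mentioned in the comments, assuming `x` has shape `(3,3,d)` (this is an extrapolation, not code from the original answer):

import numpy as np

d = 4  # illustrative vector length
x = np.arange(9*d).reshape((3,3,d))  # each x[j,i] is a length-d vector

# Broadcast to shape (3,3,3,d), sum over the vector axis, then reorder:
# before the swap, entry [j,i,k] equals dot(x[j,i], x[j,k]).
res_bc = (x[:,None,:,:]*x[:,:,None,:]).sum(-1).swapaxes(0,1)

# np.matmul batches over the first axis: x[j] @ x[j].T gives the (i,k) grid
# of dot products for row j; swapaxes again reorders [j,i,k] to [i,j,k].
res_mm = np.matmul(x, x.transpose(0,2,1)).swapaxes(0,1)

assert np.allclose(res_bc, res_mm)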