
I can use tf.matmul(A, B) to do batch matrix multiplication when:

  • A.shape == (..., a, b) and
  • B.shape == (..., b, c),

where the ... are the same.
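For reference, the standard matching-batch case can be sketched in plain NumPy (not part of the original question), whose `@` accepts the same shapes:

```python
import numpy as np

# Standard batched matmul: the leading (batch) dimensions match exactly,
# and matmul acts on the last two axes.
A = np.random.uniform(0, 1, (3, 4, 2, 6))  # (..., a, b) with ... = (3, 4)
B = np.random.uniform(0, 1, (3, 4, 6, 5))  # (..., b, c) with the same ...
C = A @ B
print(C.shape)  # (3, 4, 2, 5)
```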

But I want an additional broadcasting:

  • A.shape == (a, b, 2, d) and
  • B.shape == (a, 1, d, c)

  • result.shape == (a, b, 2, c)

I expect the result to be a × b batches of matrix multiplications between the (2, d) and (d, c) matrices.

How can I do this?


Test code:

import tensorflow as tf
import numpy as np

a = 3
b = 4
c = 5
d = 6

x_shape = (a, b, 2, d)
y_shape = (a, d, c)
z_shape = (a, b, 2, c)

x = np.random.uniform(0, 1, x_shape)
y = np.random.uniform(0, 1, y_shape)
z = np.empty(z_shape)

with tf.Session() as sess:
    # Work around the missing broadcast: loop over the b axis and
    # run a standard (a, 2, d) x (a, d, c) batch matmul each time.
    for i in range(b):
        x_now = x[:, i, :, :]      # shape (a, 2, d)
        z[:, i, :, :] = sess.run(
            tf.matmul(x_now, y)    # shape (a, 2, c)
        )

print(z)
R zu
  • `B` and `y` have different shapes? I don't know about `tf`, but `numpy` `A@B` works. – hpaulj Jun 19 '19 at 22:51
  • Yup. For `numpy`, `x @ y[:, np.newaxis, :, :]` works. And that works for tensorflow too. I don't know how efficient `@` is in tensorflow on a gpu. – R zu Jun 19 '19 at 23:50
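To make the broadcasting trick from the comments concrete, here is a minimal NumPy sketch (my addition, not from the thread) using the question's shapes:

```python
import numpy as np

a, b, c, d = 3, 4, 5, 6
x = np.random.uniform(0, 1, (a, b, 2, d))
y = np.random.uniform(0, 1, (a, d, c))

# Insert a singleton axis so y has shape (a, 1, d, c); matmul then
# broadcasts it across the b dimension of x.
z = x @ y[:, np.newaxis, :, :]
print(z.shape)  # (3, 4, 2, 5)

# Equivalent explicit loop over b, for comparison.
z_loop = np.stack([x[:, i] @ y for i in range(b)], axis=1)
assert np.allclose(z, z_loop)
```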

2 Answers


tf.einsum, a generalized contraction between tensors of arbitrary dimensions, is your friend for this kind of problem. See the tf documentation here.

There is a great tutorial on Stack Overflow: Understanding NumPy's einsum.


import tensorflow as tf
import numpy as np

a = 3
b = 4
c = 5
d = 6

x_shape = (a, b, 2, d)
y_shape = (a, d, c)
z_shape = (a, b, 2, c)

x = tf.constant(np.random.uniform(0, 1, x_shape))
y = tf.constant(np.random.uniform(0, 1, y_shape))
z = tf.constant(np.empty(z_shape))  # only used to check the expected shape

# 'abzd,adc->abzc': contract over d, share the a axis,
# and carry the b and z (= 2) axes through unchanged.
v = tf.einsum('abzd,adc->abzc', x, y)
print(z.shape, v.shape)

with tf.Session() as sess:
    print(sess.run(v))


RESULT:

(3, 4, 2, 5) (3, 4, 2, 5)
[[[[ 1.8353901   1.29175219  1.49873967  1.78156638  0.79548786]
   [ 2.32836196  2.01395003  1.53038244  2.51846521  1.65700572]]

  [[ 1.76139921  1.78029925  1.22302866  2.18659201  1.51694413]
   [ 2.32021949  1.98895703  1.7098903   2.21515966  1.33412172]]

  [[ 2.13246675  1.63539287  1.64610271  2.16745158  1.02269943]
   [ 1.75559616  1.6715972   1.26049591  2.14399714  1.34957603]]

  [[ 1.80167636  1.91194534  1.3438773   1.9659323   1.25718317]
   [ 1.4379158   1.31033243  0.71024123  1.62527415  1.31030634]]]


 [[[ 2.04902039  1.59019464  1.32415689  1.59438659  2.02918951]
   [ 2.23684642  1.27256603  1.63474052  1.73646679  2.42958829]]
  ....
  ....
greeness
  • Thanks for the answer. How fast and memory efficient is tensorflow's implementation of `einsum` if I run it on a gpu? How about tensorflow's `@`? I remember numpy can use parallel mkl routines for `dot` and `tensordot` but not for `einsum`. – R zu Jun 20 '19 at 01:18
  • Not sure for GPU. For CPU, they might be the same. https://stackoverflow.com/questions/43100679/tensorflow-einsum-vs-matmul-vs-tensordot. Also for TPU, einsum is faster from my personal experience (5%-10%). – greeness Jun 20 '19 at 01:21
  • Speed of einsum in tf relies on the optimization by the `opt_einsum` package. https://github.com/tensorflow/tensorflow/issues/16835 The @ operator uses another code path: https://github.com/tensorflow/tensorflow/issues/1062 – R zu Jun 20 '19 at 15:17

This only needs tf.reshape and tf.matmul; no transpose is required.

import tensorflow as tf
import numpy as np

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

a = 3
b = 4
c = 5
d = 6

x_shape = (a, b, 2, d)
y_shape = (a, d, c)

x = tf.constant(np.random.uniform(0, 1, x_shape))
y = tf.constant(np.random.uniform(0, 1, y_shape))

# Merge the b and 2 axes so a standard batch matmul applies.
x2 = tf.reshape(x, (a, b * 2, d))

with jit_scope():
    # Reshape approach: (a, b*2, d) @ (a, d, c), then split back to (a, b, 2, c).
    z = tf.reshape(tf.matmul(x2, y), (a, b, 2, c))
    # Broadcasting approach: insert a singleton axis so y broadcasts over b.
    z2 = x @ (y[:, np.newaxis, :, :])
    # einsum approach, for comparison.
    z3 = tf.einsum('abzd,adc->abzc', x, y)

with tf.Session() as sess:
    z_, z2_, z3_ = sess.run([z, z2, z3])

assert np.allclose(z_, z2_)
assert np.allclose(z_, z3_)
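The reshape trick is framework-independent and can be checked in plain NumPy as well (a sketch I added under the same shapes, not part of the original answer):

```python
import numpy as np

a, b, c, d = 3, 4, 5, 6
x = np.random.uniform(0, 1, (a, b, 2, d))
y = np.random.uniform(0, 1, (a, d, c))

# Merge the b and 2 axes into one batch of rows, multiply, then split back.
x2 = x.reshape(a, b * 2, d)        # (a, b*2, d)
z = (x2 @ y).reshape(a, b, 2, c)   # (a, b, 2, c)

# Same result as the broadcasting approach from the other answer.
assert np.allclose(z, x @ y[:, np.newaxis, :, :])
```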
R zu
  • 2,034
  • 12
  • 30