
In IPython I imported tensorflow as tf and numpy as np and created a TensorFlow InteractiveSession. When I run or initialize a normal distribution with NumPy input, everything works fine:

some_test = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(2, 2)))
session.run(some_test)

Returns:

array([[-0.04152317,  0.19786302],
       [-0.68232622, -0.23439092]])

Just as expected.

...but when I use the TensorFlow normal distribution function:

some_test = tf.constant(tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))
session.run(some_test)

...it raises a TypeError:

(...)
TypeError: List of Tensors when single Tensor expected

What am I missing here?

The output of:

sess.run(tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))

on its own returns exactly the same kind of result that np.random.normal generates: a matrix of shape (2, 2) with values drawn from a normal distribution.


1 Answer


The tf.constant() op takes a numpy array (or something implicitly convertible to a numpy array), and returns a tf.Tensor whose value is the same as that array. It does not accept a tf.Tensor as its argument.

On the other hand, the tf.random_normal() op returns a tf.Tensor whose value is generated randomly according to the given distribution each time it runs. Since it returns a tf.Tensor, it cannot be used as the argument to tf.constant(). This explains the TypeError (which is unrelated to the use of tf.InteractiveSession, since it occurs when you build the graph).
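
To make the distinction concrete, here is a minimal sketch (assuming TensorFlow 1.x, where tf.random_normal() and tf.InteractiveSession() are available): the same tensor yields a fresh sample on every run, while wrapping it in tf.constant() fails as the graph is being built, before any session.run() call:

    import tensorflow as tf  # assumes TensorFlow 1.x

    sess = tf.InteractiveSession()

    rand = tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32)

    # The op is re-executed on every run, so each call returns a fresh sample.
    print(sess.run(rand))
    print(sess.run(rand))

    # tf.constant() expects a numpy array (or something convertible to one),
    # so handing it a tf.Tensor fails at graph-construction time.
    try:
        tf.constant(rand)
    except TypeError as err:
        print(err)  # "List of Tensors when single Tensor expected"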

I'm assuming you want your graph to include a tensor that (i) is randomly generated on its first use, and (ii) constant thereafter. There are two ways to do this:

  1. Use NumPy to generate the random value and put it in a tf.constant(), as you did in your question:

    some_test = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(2, 2)).astype(np.float32))
    
  2. (Potentially faster, as it can use the GPU to generate the random numbers) Use TensorFlow to generate the random value and put it in a tf.Variable, as in the sketch after this list:

    some_test = tf.Variable(
        tf.random_normal([2, 2], mean=0.0, stddev=1.0, dtype=tf.float32))
    sess.run(some_test.initializer)  # Must run this before using `some_test`
    
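Once the variable has been initialized, every subsequent read returns the same stored sample, which is what makes it behave like a random "constant". A minimal sketch, assuming TensorFlow 1.x and an active session named `sess`:

    sess.run(some_test.initializer)  # samples the random value once and stores it
    a = sess.run(some_test)
    b = sess.run(some_test)
    print((a == b).all())            # True: the stored value is reused, not re-sampled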
    Thanks for the explanation! So I have to use `tf.Variable` when I want the GPU acceleration aka "pure" tensorflow for getting a random "constant"?! – daniel451 Feb 26 '16 at 21:30
  • Yes, it's counterintuitive, isn't it? :) The issue is really that, in TF, the concepts of "is variable" and "is initializable" are combined in the same type - we've occasionally discussed better ways to do initialization (e.g. some equivalent of static initialization in C-like languages), but haven't settled on a design yet. (One could imagine how such a thing would be useful for optimizations like constant folding, etc.) – mrry Feb 26 '16 at 21:50
    Thanks for the response @mrry. If I am trying to do the same thing but I don't want to keep `some_test` constant thereafter would I do the same thing as option 2 but not include `sess.run(some_test.initializer)`? – bnorm Aug 25 '17 at 21:55