
TF1 had sess.run() and .eval() to get values of tensors, and Keras had K.get_value(); now, none of them work the same way (the former two don't work at all).

K.eager(K.get_value)(tensor) appears to work inside the Keras graph by exiting it, and K.get_value(tensor) works outside the graph; both rely on TF2's default eager execution (which is off in the former case). However, this fails if tensor is a Keras backend operation:

import keras.backend as K

def tensor_info(x):
    print(x)
    print("Type: %s" % type(x))
    try:
        x_value = K.get_value(x)
    except Exception:
        try:    x_value = K.eager(K.get_value)(x)
        except Exception: x_value = x.numpy()
    print("Value: %s" % x_value)  # three methods

ones = K.ones(1)
ones_sqrt = K.sqrt(ones)

tensor_info(ones); print()
tensor_info(ones_sqrt)
<tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([1.], dtype=float32)>
Type: <class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>
Value: [1.]

Tensor("Sqrt:0", shape=(1,), dtype=float32)
Type: <class 'tensorflow.python.framework.ops.Tensor'>
# third print fails w/ below
AttributeError: 'Tensor' object has no attribute 'numpy' 


This is a non-issue in TF < 2.0. GitHub's been silent. I'm aware of ways to rewrite the code as a workaround, but that would eliminate Keras' backend-neutrality and make it work akin to tf.keras. Is there a way to get Keras 2.3 tensor values in TensorFlow 2.0 while retaining backend-neutrality?
OverLordGoldDragon

4 Answers


I think you want K.eval:

>>> v = K.ones(1)
>>> K.eval(v)
array([1.], dtype=float32)
>>> K.eval(K.sqrt(v))
array([1.], dtype=float32)

Note that K.get_value is reserved for use with variables (e.g. v here) while K.eval works with any tensor.
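As a minimal, self-contained illustration of that distinction (a sketch assuming Keras 2.3 with the TF2 backend, as in the question), K.get_value succeeds on the variable but raises on the symbolic backend op, while K.eval handles both:

import keras.backend as K

v = K.ones(1)
print(K.get_value(v))       # [1.] - v is a variable
print(K.eval(K.sqrt(v)))    # [1.] - K.eval evaluates any tensor
try:
    K.get_value(K.sqrt(v))  # symbolic backend op, not a variable
except Exception as e:
    print("K.get_value failed: %s" % type(e).__name__)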

Sergei Lebedev
  • Thank you; however, it only works with `import keras.backend as K` and fails for `tensorflow.keras.backend` and `tensorflow.python.keras.backend`. As other functionality may depend on the latter two, this isn't a complete answer. It may, however, be a bug - can you confirm? – OverLordGoldDragon Oct 06 '19 at 23:27
  • I've checked the snippet with the imports you've listed, and all three produce the same result. Could you update your question with the outputs you get in each of these cases? – Sergei Lebedev Oct 07 '19 at 09:56
  • All's good - turns out `tf.python` isn't meant to be used anyway (see [here](https://stackoverflow.com/questions/58279628/what-is-the-difference-between-tf-keras-and-tf-python-keras)), or not always. – OverLordGoldDragon Oct 08 '19 at 02:54

Per my PR, this is the more reliable (but not guaranteed) workaround:

def K_eval(x):
    try:
        # works for variables and for tensors that can be made dense
        return K.get_value(K.to_dense(x))
    except Exception:
        # fallback: evaluate via a backend function with no inputs
        eval_fn = K.function([], [x])
        return eval_fn([])[0]

Update: mind the distribution context in which a tensor is to be evaluated; in TF 2.2, a tf.Variable or tf.Tensor created while tf.python.distribute.distribution_strategy_context.in_replica_context() == True will fail any K.eval-style attempt. It looks like tensors simply aren't meant to be evaluated there.
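One way to guard against this (a sketch, and my own assumption on the check - it uses the public tf.distribute API rather than the internal tf.python path; K_eval is the helper defined above):

import tensorflow as tf

def K_eval_guarded(x):
    # Skip evaluation inside a strategy's replica context, where, per the
    # note above, fetching tensor values fails.
    if tf.distribute.has_strategy() and not tf.distribute.in_cross_replica_context():
        return None
    return K_eval(x)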

OverLordGoldDragon

I think what you are looking for is the tf.keras.backend.get_value API.

print(x)
>>tf.Tensor([1.], shape=(1,), dtype=float32)
print(tf.keras.backend.get_value(x))
>>[1.]
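
For reference, a self-contained sketch of the same call (assuming TF2 with eager execution on, so the backend op returns an eager tensor):

import tensorflow as tf

x = tf.keras.backend.sqrt(tf.keras.backend.ones(1))  # eager tensor under tf.keras
print(x)                               # tf.Tensor([1.], shape=(1,), dtype=float32)
print(tf.keras.backend.get_value(x))   # [1.]
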
DesiKeki

In my case (TensorFlow 2.0), this works when printing the loss:

import tensorflow as tf
from tensorflow import keras

...

print(loss_value)
print(float(loss_value))

output:

tf.Tensor(2.3782592, shape=(), dtype=float32)
2.3782591819763184
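
For reference, a self-contained sketch of the same idea (the loss here is a made-up scalar; float() works on any scalar eager tensor):

import tensorflow as tf

loss_value = tf.reduce_mean(tf.square(tf.constant([1.0, 2.0])))  # hypothetical scalar loss
print(loss_value)         # tf.Tensor(2.5, shape=(), dtype=float32)
print(float(loss_value))  # 2.5
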
Ibraheem