Problem Description

I am trying to write a custom loss function in TensorFlow 2.3.0. To calculate the loss, I need the y_pred parameter to be converted to a numpy array. However, I can't find a way to convert it from <class 'tensorflow.python.framework.ops.Tensor'> to a numpy array, even though there seem to be TensorFlow functions to do so.

Code Example

def custom_loss(y_true, y_pred):
    print(type(y_pred))
    npa = y_pred.make_ndarray()
    ...
    

if __name__ == '__main__':
    ...
    model.compile(loss=custom_loss, optimizer="adam")
    model.fit(x=train_data, y=train_data, epochs=10)

gives the error message AttributeError: 'Tensor' object has no attribute 'make_ndarray' after printing the type of the y_pred parameter: <class 'tensorflow.python.framework.ops.Tensor'>

What I have tried so far

Looking for a solution, I found that this seems to be a common issue, and there are a couple of suggestions, but none of them has worked for me so far:

1. " ... so just call .numpy() on the Tensor object.": How can I convert a tensor into a numpy array in TensorFlow?

so I tried:

def custom_loss(y_true, y_pred):
    npa = y_pred.numpy()
    ...

giving me AttributeError: 'Tensor' object has no attribute 'numpy'

2. "Use tensorflow.Tensor.eval() to convert a tensor to an array": How to convert a TensorFlow tensor to a NumPy array in Python

so I tried:

def custom_loss(y_true, y_pred):
    npa = y_pred.eval(session=tf.compat.v1.Session())
    ...

giving me one of the longest traces of error messages I have ever seen, with the core being:

InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: You must feed a value for placeholder tensor 'functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource' with dtype resource
         [[node functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource (defined at main.py:303) ]]
         [[functional_1/cropping2d/strided_slice/_1]]
      (1) Invalid argument: You must feed a value for placeholder tensor 'functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource' with dtype resource
         [[node functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource (defined at main.py:303) ]]

Also, having to call TensorFlow compatibility functions from version 1.x does not feel very future-proof, so I do not like this approach much anyway.

3. Looking at the TensorFlow docs, the function I needed seemed to be just waiting: tf.make_ndarray, "Create a numpy ndarray from a tensor."

so I tried:

def custom_loss(y_true, y_pred):
    npa = tf.make_ndarray(y_pred)
    ...

giving me AttributeError: 'Tensor' object has no attribute 'tensor_shape'

Looking at the example in the TF documentation they use this on a proto_tensor, so I tried converting to a proto first:

def custom_loss(y_true, y_pred):
    proto_tensor = tf.make_tensor_proto(y_pred)
    npa = tf.make_ndarray(proto_tensor)
    ...

but tf.make_tensor_proto(y_pred) already raises the error: TypeError: Expected any non-tensor type, got a tensor instead.

Trying to make a constant tensor first also gives the same error:

def custom_loss(y_true, y_pred):
    a = tf.constant(y_pred)
    proto_tensor = tf.make_tensor_proto(a)
    npa = tf.make_ndarray(proto_tensor)
    ...

There are many more posts around this, but it seems they all come back to these three basic ideas. Looking forward to your suggestions!

Frank Jacob

1 Answer


y_pred.numpy() works in TF 2, but AttributeError: 'Tensor' object has no attribute 'make_ndarray' indicates that there are parts of your code not running in Eager mode; otherwise you would not have a Tensor object but an EagerTensor.

First, to enable Eager Mode, put this at the beginning of your code, before anything in the graph is built:

tf.config.experimental_run_functions_eagerly(True)

Second, when you compile your model, add this parameter:

model.compile(..., run_eagerly=True, ...)

Now you're executing in Eager Mode and all variables actually hold values that you can both print and work with. Be aware that switching to Eager mode might require additional adjustments to your code (see here for an overview).
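Putting both changes together, a minimal end-to-end sketch might look like the following. The model architecture and data here are placeholders, not taken from the question, and the sketch uses the non-experimental tf.config.run_functions_eagerly (available from TF 2.3, as noted in the comments below). Note that the numpy array is used only for inspection; the loss value itself is kept in tensor operations, because a loss computed purely from the detached numpy array would provide no gradients.

```python
import numpy as np
import tensorflow as tf

# Run all tf.functions eagerly (non-experimental name of the flag from TF 2.3 on).
tf.config.run_functions_eagerly(True)

def custom_loss(y_true, y_pred):
    # In Eager mode y_pred is an EagerTensor, so .numpy() is available,
    # e.g. for inspecting or logging the raw predictions.
    npa = y_pred.numpy()
    assert isinstance(npa, np.ndarray)
    # Keep the actual loss in tensor operations so gradients still flow.
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Placeholder model and data, just to make the sketch runnable.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(loss=custom_loss, optimizer="adam", run_eagerly=True)

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)
```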

runDOSrun
  • Thanks for your quick answer, but it did not solve my problem: First I got the message that the experimental function is deprecated, so I used `tf.config.run_functions_eagerly` instead, fine. After that I got an Out Of Memory error; it seems eager execution uses a lot more memory? So I switched from GPU to CPU to have more memory, then I got the error `AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'make_ndarray'` This does not seem to solve it ... – Frank Jacob Sep 13 '20 at 11:33
  • @FrankJacob Do you have a colab notebook with your code by any chance? Also, to test Eager mode it might be useful to reduce the batch size to a very low number. Just to debug the code. You can always add a generator later. – runDOSrun Sep 13 '20 at 11:41
  • With `npa = y_pred.numpy()` this works now on CPU! Thanks! But running on CPU is not an option for me. There must be a way to convert with better memory efficiency than this? – Frank Jacob Sep 13 '20 at 11:50
  • @FrankJacob Unfortunately, Eager does indeed consume more memory. I think you will have to use a generator with model.fit and load your data batch-wise. Of course, this is assuming that you really *need* a numpy array and can't design the loss function with regular tensor operations (where we wouldn't *depend* on Eager). If you need advice on your loss function design and want to open a new question about it, you can link it here. Otherwise, you can explore if reducing the batch size significantly helps for GPU. – runDOSrun Sep 13 '20 at 11:58
  • Thanks, I have opened a new question for the [loss function](https://stackoverflow.com/questions/63874265/keras-custom-loss-function-error-no-gradients-provided) – Frank Jacob Sep 13 '20 at 18:37
  • Thank you a lot. You saved my day. I updated that we should use tf.executing_eagerly() because tf.config.experimental_run_functions_eagerly(True) will be deprecated. – M.Vu Aug 20 '21 at 09:59
  • AttributeError: 'Tensor' object has no attribute 'numpy' – dixhom Jan 07 '23 at 00:01
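As the answerer points out in the comments, if the loss can be expressed with regular tensor operations, no numpy conversion (and no Eager mode, with its memory overhead) is needed at all. A hypothetical sketch of such a loss (the L1/L2 combination and the 0.1 weight are arbitrary examples, not from the question):

```python
import tensorflow as tf

def tensor_only_loss(y_true, y_pred):
    # Everything stays in tensor operations: differentiable, graph-compatible,
    # and with no dependence on Eager execution.
    diff = y_true - y_pred
    return tf.reduce_mean(tf.abs(diff)) + 0.1 * tf.reduce_mean(tf.square(diff))
```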