
I have a few tensors in my code and need to get the values of those tensors. This is one of them. How do I print the values of tensor OA?

Input: OA
Output: <tf.Tensor 'Sum_1:0' shape=(1, 600) dtype=float32>

Input: type(OA)
Output: tensorflow.python.framework.ops.Tensor

I have tried all the available functions like tf.print(), eval(), and tensor.numpy(). None of them worked for me in TensorFlow 2.0; they seem to work only for 'EagerTensor' and not for 'ops.Tensor'.

1) OA.eval(session=sess) Error: ValueError: Cannot use the given session to evaluate tensor: the tensor's graph is different from the session's graph.

2) tf.print(OA) Output: (nothing is printed)

3) print (OA.numpy()) Output: AttributeError: 'Tensor' object has no attribute 'numpy'

Is there any way to convert an ops.Tensor to an EagerTensor so I can try the above functions? Or is there any other way to print the values of an ops.Tensor? Please advise.

-- Adding minimal code to reproduce the example ops.Tensor in TF 2.0:

!pip install tensorflow==2.0.0

import tensorflow as tf
from tensorflow.keras import regularizers

tf.__version__

EMBEDDING_DIM = 300
max_length = 120
batch_size = 512
vocab_size = 1000
units = 300

input_text = tf.keras.Input(shape=(max_length,), batch_size=batch_size)

embedding_layer = tf.keras.layers.Embedding(vocab_size, EMBEDDING_DIM, input_length=max_length, name="Embedding_Layer_1")
embedding_sequence = embedding_layer(input_text)

HQ = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units, recurrent_dropout=0.5, kernel_regularizer=regularizers.l2(0.001), return_sequences=True, name='Bidirectional_1'))(embedding_sequence)
HQ = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units, recurrent_dropout=0.5, kernel_regularizer=regularizers.l2(0.001), name='Bidirectional_2'))(HQ)

print (HQ)

Output: Tensor("bidirectional_3/concat:0", shape=(512, 600), dtype=float32)

type(HQ)

Output: tensorflow.python.framework.ops.Tensor

How to check the actual values of this tensor?

Raghu
  • Please provide a [minimal, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). How do you create `OA`? – jkr Feb 21 '20 at 12:54
  • @jakub - Can you please check now? I updated the question with a minimal reproducible example. – Raghu Mar 06 '20 at 11:12
  • @Raghu - I had a similar need. I wanted to inspect the embedding tensor of a constant sequence of indices using: MyEmbedding = tf.keras.layers.Embedding(tf.constant([[[0], [1], [5], [500]]])). As a result, MyEmbedding was a tensorflow.python.framework.ops.Tensor object. The only way I could print the array was by following the first example here: https://stackoverflow.com/questions/52215711/tensorflow-tensor-to-numpy-array-conversion-without-running-any-session/52216282, using tf.enable_eager_execution() and tensor.numpy(). Hope it helps. – George Sep 25 '20 at 01:40
  • `tf.config.run_functions_eagerly(True)` executed once at the beginning solved the problem for me for a similar problem – Lostefra Oct 25 '21 at 10:13

4 Answers


Your graph is not complete at the point where you are printing HQ. You need to finish creating the model, presumably something like:

output = tf.keras.layers.xyz()(HQ)
model = tf.keras.models.Model(input_text, output)

The trick to printing an intermediate layer is simply to make it an output. You can make it an additional output of your existing model temporarily, or just make a new model.

inspection_model = tf.keras.models.Model(input_text, [output, HQ])

Now run inference on inspection_model to get the value of the intermediate activation HQ:

print(inspection_model(xyz))
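
As a concrete, hedged sketch that continues the question's code (the Dense head and the random token ids below are placeholders I am assuming, since the real output layer and input data are not shown):

import numpy as np

# Hypothetical output head standing in for the xyz() layer above.
output = tf.keras.layers.Dense(1, activation="sigmoid")(HQ)

# A model that exposes both the final output and the intermediate tensor HQ.
inspection_model = tf.keras.models.Model(input_text, [output, HQ])

# Dummy token ids matching the Input definition: (batch_size=512, max_length=120).
dummy_tokens = np.random.randint(0, vocab_size, size=(batch_size, max_length))

final_out, hq_values = inspection_model(dummy_tokens)
print(hq_values)          # EagerTensor with shape (512, 600)
print(hq_values.numpy())  # the actual values as a NumPy array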
Yaoshiang

You cannot directly print the values of tensors the way you are trying to. TensorFlow 2.x runs in eager mode by default, but your model is incomplete and you have not given it any input.

One way to do it is with a custom training loop. Suppose the layers mentioned in your code are stacked into a model my_model:

from tensorflow.keras import Model

my_model = Model(inputs=input_text, outputs=HQ)

with tf.GradientTape() as t:
    HQ_predictions = my_model(input_data)
print(HQ_predictions)
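
For completeness, input_data is not defined in the question; assuming it should match the Input layer from the question's code, a hypothetical batch of random token ids would look like this:

import numpy as np

# Hypothetical input batch shaped like the Input layer: (batch_size=512, max_length=120).
input_data = np.random.randint(0, vocab_size, size=(batch_size, max_length))

HQ_predictions = my_model(input_data)
print(HQ_predictions.numpy())  # concrete values as a (512, 600) NumPy array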
MSS

This video explains in detail what is going on under the hood. To get around this, try one of the following:

  • run your function in eager mode using tf.config.run_functions_eagerly(True) (you can find an example in the TensorFlow documentation), or enable eager mode for all operations using tf.compat.v1.enable_eager_execution(). After that you can access your values with something like the following (see also the sketch after this list):
OA.numpy()
  • run your operation in a TF1-style session, tf.compat.v1.Session():
with tf.compat.v1.Session() as sess:
    print(sess.run(OA))
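
As a minimal, hedged sketch of the first option (a standalone toy function, not the question's model):

import tensorflow as tf

# Force tf.function-decorated code to run eagerly so .numpy() is available inside it.
tf.config.run_functions_eagerly(True)

@tf.function
def debug_step(x):
    # Without run_functions_eagerly(True), x would be a symbolic graph tensor here
    # and x.numpy() would raise AttributeError.
    print(x.numpy())
    return x * 2

debug_step(tf.constant([1.0, 2.0, 3.0]))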

EDIT:

Since TensorFlow 2, you can pass your data to each layer of your model without any problems, but while fitting the model the values inside a tensor cannot be retrieved unless you use one of the following fixes:

  • apply model.compile(...., run_eagerly=True) (see the sketch at the end of this answer)

  • you can call tf.compat.v1.enable_eager_execution() at the beginning of your code to avoid any issues

  • if you want to access the values inside the tensor, you can run it in a TF1-style session:

with tf.compat.v1.Session() as sess:
    t = sess.run(HQ)
    print(t)  # prints the values of HQ

Note that sess.run already returns a plain NumPy array, so there is no need to call .numpy() on t; t already contains all the values of HQ, and you can access each individual element the same way you handle NumPy arrays.

You can also check this thread for more information.
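
As a hedged sketch of the run_eagerly fix (a hypothetical tiny model, not the one from the question, just to show that .numpy() becomes available inside a layer during fit()):

import numpy as np
import tensorflow as tf

class PrintValuesLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # During functional model construction `inputs` is symbolic, so only
        # touch .numpy() when the code is actually executing eagerly.
        if tf.executing_eagerly():
            print("first row of this batch:", inputs.numpy()[0])
        return inputs

inp = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8)(inp)
x = PrintValuesLayer()(x)
out = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inp, out)

# Without run_eagerly=True the layer runs inside a graph and .numpy() is unavailable.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)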

hafedh
  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From Review](/review/late-answers/34510278) – Chenmunka Jun 12 '23 at 15:25
  • That's a great idea, I'll edit my response to include all the necessary information. – hafedh Jun 12 '23 at 17:16

Use the .numpy() attribute, like:

your_tensor.numpy()
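
Note that this only works for an EagerTensor, i.e. a tensor produced by actually executing ops eagerly; it is exactly what fails for the symbolic ops.Tensor in the question. A small sketch:

import tensorflow as tf

eager_t = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # EagerTensor in TF 2.x
print(eager_t.numpy())                           # works: prints the underlying NumPy array

symbolic_t = tf.keras.Input(shape=(2,))          # symbolic Keras/graph tensor
# symbolic_t.numpy()  # fails: a symbolic tensor has no concrete values to fetch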
DachuanZhao