
I encountered the error 'Tensor' object has no attribute 'assign_add' when I tried to use the assign_add or assign_sub function. The code is shown below:

I defined two tensors, t1 and t2, with the same shape and the same data type.

>>> t1 = tf.Variable(tf.ones([2,3,4],tf.int32))
>>> t2 = tf.Variable(tf.zeros([2,3,4],tf.int32))
>>> t1
<tf.Variable 'Variable_4:0' shape=(2, 3, 4) dtype=int32_ref>
>>> t2
<tf.Variable 'Variable_5:0' shape=(2, 3, 4) dtype=int32_ref>

Then I used assign_add on t1 and t2 to create t3:

>>> t3 = tf.assign_add(t1,t2)
>>> t3
<tf.Tensor 'AssignAdd_4:0' shape=(2, 3, 4) dtype=int32_ref>

Then I tried to create a new tensor t4 using t1[1] and t2[1], which are tensors with the same shape and the same data type.

>>> t1[1]   
<tf.Tensor 'strided_slice_23:0' shape=(3, 4) dtype=int32>
>>> t2[1]
<tf.Tensor 'strided_slice_24:0' shape=(3, 4) dtype=int32>
>>> t4 = tf.assign_add(t1[1],t2[1])

but got this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/admin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 245, in assign_add
    return ref.assign_add(value)
AttributeError: 'Tensor' object has no attribute 'assign_add'

I get the same error when using assign_sub:

>>> t4 = tf.assign_sub(t1[1],t2[1])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/admin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 217, in assign_sub
return ref.assign_sub(value)
AttributeError: 'Tensor' object has no attribute 'assign_sub'

Any idea what is wrong? Thanks.

X. L

4 Answers


The error occurs because t1 is a tf.Variable object, while t1[1] is a tf.Tensor (you can see this in the outputs of your print statements). The same goes for t2 and t2[1].

As it happens, a tf.Tensor can't be mutated (it's read-only), whereas a tf.Variable can be (read as well as written); see here. Since tf.assign_add does an in-place addition, it doesn't work with t1[1] and t2[1] as inputs, while there's no such problem with t1 and t2 as inputs.
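
To make the distinction concrete, here is a minimal sketch (assuming the TensorFlow 1.x graph API used in the question) that checks the types involved:

import tensorflow as tf

t1 = tf.Variable(tf.ones([2,3,4], tf.int32))
t2 = tf.Variable(tf.zeros([2,3,4], tf.int32))

# The variable itself is mutable, so it can be the ref argument of assign_add.
print(isinstance(t1, tf.Variable))     # True
# Slicing a variable produces a plain, read-only tf.Tensor.
print(isinstance(t1[1], tf.Variable))  # False
print(isinstance(t1[1], tf.Tensor))    # True

t3 = tf.assign_add(t1, t2)      # works: the first argument is a tf.Variable
# tf.assign_add(t1[1], t2[1])   # fails: 'Tensor' object has no attribute 'assign_add'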

Kanchan Kumar

What you are trying to do here is a little confusing. I don't think you can update slices and create a new tensor at the same time, i.e. in a single line.

If you want to update slices before creating t4, use tf.scatter_add() (or tf.scatter_sub() or tf.scatter_update() accordingly) as suggested here. For example:

sa = tf.scatter_add(t1, [1], t2[1:2])

Then, if you want to get a new tensor t4 using the updated t1[1] and t2[1], you can do:

with tf.control_dependencies([sa]):
    t4 = tf.add(t1[1],t2[1])
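
Putting this together, here is a minimal, self-contained sketch of the pattern (assuming the TensorFlow 1.x graph/session API used in the question); the control dependency ensures the scatter update runs before t4 reads the slice:

import tensorflow as tf

t1 = tf.Variable(tf.ones([2,3,4], tf.int32))
t2 = tf.Variable(tf.zeros([2,3,4], tf.int32))

# Add t2[1] into row 1 of the variable t1 (scatter_add needs the Variable itself).
sa = tf.scatter_add(t1, [1], t2[1:2])

# Build t4 so that it only runs after the scatter update has been applied.
with tf.control_dependencies([sa]):
    t4 = tf.add(t1[1], t2[1])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(t4))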
Y. Luo

Here are some examples of using tf.scatter_add and tf.scatter_sub:

>>> t1 = tf.Variable(tf.ones([2,3,4],tf.int32))
>>> t2 = tf.Variable(tf.zeros([2,3,4],tf.int32))
>>> sess = tf.InteractiveSession()
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> t1.eval()
array([[[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]],

       [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]], dtype=int32)
>>> t2.eval()
array([[[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]],

       [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]], dtype=int32)


>>> t3 = tf.scatter_add(t1,[0],[[[2,2,2,2],[2,2,2,2],[2,2,2,2]]])
>>> sess.run(t3)
array([[[3, 3, 3, 3],
        [3, 3, 3, 3],
        [3, 3, 3, 3]],

       [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]], dtype=int32)

>>> t4 = tf.scatter_sub(t1,[0,0,0],[t1[1],t1[1],t1[1]])

Because the index 0 is repeated three times, this subtracts t1[1] from t1[0] three times.

The following is another example, which can be found at https://blog.csdn.net/efforever/article/details/77073103

Because few examples illustrating scatter_xxx can be found on the web, I paste it below for reference.

import tensorflow as tf
import numpy as np


with tf.Session() as sess1:

    c = tf.Variable([[1,2,0],[2,3,4]], dtype=tf.float32, name='biases')
    cc = tf.Variable([[1,2,0],[2,3,4]], dtype=tf.float32, name='biases1')
    ccc = tf.Variable([0,1], dtype=tf.int32, name='biases2')

    # the centers-diff corresponding to the labels
    centers = tf.scatter_sub(c,ccc,cc)
    #centers = tf.scatter_sub(c,[0,1],cc)
    #centers = tf.scatter_sub(c,[0,1],[[1,2,0],[2,3,4]])
    #centers = tf.scatter_sub(c,[0,0,0],[[1,2,0],[2,3,4],[1,1,1]])
    # i.e. c[0]-[1,2,0], c[0]-[2,3,4], c[0]-[1,1,1]; every update is subtracted:
    # indices and updates must have the same number of elements

    a = tf.Variable(initial_value=[[0, 0, 0, 0],[0, 0, 0, 0]])
    b = tf.scatter_update(a, [0, 1], [[1, 1, 0, 0], [1, 0, 4, 0]])
    #b = tf.scatter_update(a, [0, 1,0], [[1, 1, 0, 0], [1, 0, 4, 0],[1, 1, 0, 1]])

    init = tf.global_variables_initializer()
    sess1.run(init)

    print(sess1.run(centers))
    print(sess1.run(b))


The output is:

[[ 0.  0.  0.]
 [ 0.  0.  0.]]
[[1 1 0 0]
 [1 0 4 0]]

With the commented-out three-index variants enabled instead (tf.scatter_sub(c,[0,0,0],...) and tf.scatter_update(a, [0, 1,0], ...)), the output is:

[[-3. -4. -5.]
 [ 2.  3.  4.]]
[[1 1 0 1]
 [1 0 4 0]]
X. L

You can also use tf.assign() as a workaround, since sliced assignment was implemented for it (unlike for tf.assign_add() or tf.assign_sub()) as of TensorFlow version 1.8. Please note that you can only do one slicing operation (a slice of a slice is not going to work), and that this is not atomic: if multiple threads read and write the same variable, you don't know which operation will be the last one to write unless you explicitly code for it; tf.assign_add() and tf.assign_sub() are guaranteed to be thread-safe. Still, this is better than nothing. Consider this code (tested):

import tensorflow as tf

t1 = tf.Variable(tf.zeros([2,3,4],tf.int32))
t2 = tf.Variable(tf.ones([2,3,4],tf.int32))

assign_op = tf.assign( t1[ 1 ], t1[ 1 ] + t2[ 1 ] )

init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run( init_op )
    res = sess.run( assign_op )
    print( res )

will output:

[[[0 0 0 0]
  [0 0 0 0]
  [0 0 0 0]]

 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]

as desired.

Peter Szoldan