Is it possible to minimise a loss function by changing only some elements of a variable? In other words, if I have a variable X of length 2, how can I minimise my loss function by changing X[0] while keeping X[1] constant?
Hopefully the following code, which I have attempted, will illustrate my problem:
import tensorflow as tf
import tensorflow.contrib.opt as opt

X = tf.Variable([1.0, 2.0])
X0 = tf.Variable([3.0])
Y = tf.constant([2.0, -3.0])

# Write X0 into X[0] before the loss is evaluated.
scatter = tf.scatter_update(X, [0], X0)
with tf.control_dependencies([scatter]):
    loss = tf.reduce_sum(tf.squared_difference(X, Y))

# Optimize only X0; X itself is not in the variable list.
opt = opt.ScipyOptimizerInterface(loss, [X0])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    opt.minimize(sess)
    print("X: {}".format(X.eval()))
    print("X0: {}".format(X0.eval()))
which outputs:
INFO:tensorflow:Optimization terminated with:
Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
Objective function value: 26.000000
Number of iterations: 0
Number of functions evaluations: 1
X: [3. 2.]
X0: [3.]
where I would like it to find the optimal value X0 = 2, and thus X = [2, 2].
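For what it's worth, here is a minimal sketch of the kind of rewiring I imagine might work instead, assembling X with tf.concat from a trainable piece and a constant. This is only my guess, not verified code, and I don't know whether it generalises to tweaking arbitrary elements of an existing variable:

import tensorflow as tf
import tensorflow.contrib.opt as opt

X0 = tf.Variable([1.0])           # the single trainable element
X1 = tf.constant([2.0])           # the element to hold constant
X = tf.concat([X0, X1], axis=0)   # X is now a derived tensor, not a Variable
Y = tf.constant([2.0, -3.0])

loss = tf.reduce_sum(tf.squared_difference(X, Y))
optimizer = opt.ScipyOptimizerInterface(loss, var_list=[X0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    optimizer.minimize(sess)
    print("X: {}".format(sess.run(X)))  # hoping for [2. 2.]

If the scatter_update route is the right one, I would also like to understand why the optimizer reports zero iterations in the output above.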
Edit:
Motivation for doing this: I would like to import a trained graph/model and then tweak various elements of some of the variables depending on some new data I have.