I am working with the code for linear regression in TensorFlow from https://github.com/nlintz/TensorFlow-Tutorials/blob/master/1_linear_regression.py. This code computes a regression function that I will call y_estimate. As this is linear regression, the following formula holds:

y_estimate = m * x

The coefficient m is equal to the weights of the neural net's layer. Those weights are extracted, and we get a perfectly working regression formula.
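For illustration, this is roughly how I use the extracted weight (a minimal sketch; w and sess are the variable and session from the tutorial):

m = sess.run(w)       # fetch the trained weight as a plain float
y_estimate = m * 0.5  # predicted y for an example input x = 0.5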
However, I want to take another approach: First, I want to sample many different values for y_estimate. For example, I want to pass the neural net 101 values for x and get 101 values for y_estimate back from it. Second, I want to plot those values, as sketched below.
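Concretely, the plot I have in mind would look roughly like this (a sketch using matplotlib; estimates stands for the array of 101 values I am trying to obtain, and trX, trY are the training data from the tutorial):

import matplotlib.pyplot as plt

plt.scatter(trX, trY, label='training data')        # the noisy samples
plt.plot(trX, estimates, 'r-', label='y_estimate')  # the 101 estimated values
plt.legend()
plt.show()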
Unfortunately, I fail to obtain the values for y_estimate. In the neural net, those values are calculated through y_model = model(X, w). As X and w both contain many elements (101, to be precise), y_model should also contain 101 elements. I tried different approaches to print all the values of y_model, but each failed.
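For context, the relevant definitions from the tutorial look roughly like this (slightly abbreviated; treat it as a sketch of the original code):

trX = np.linspace(-1, 1, 101)                       # 101 input values
trY = 2 * trX + np.random.randn(*trX.shape) * 0.33  # y = 2x plus some noise

X = tf.placeholder("float")
Y = tf.placeholder("float")

def model(X, w):
    return tf.mul(X, w)  # the linear model is simply X * w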
In the following, I will show my approaches. I only copy the relevant code; the rest is exactly the same as the code in the GitHub repository linked above.
1) First, I just try to print y_model naively:
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
    print('y_model: ', y_model)
    print(sess.run(w))
Output: ('y_model: ', <tf.Tensor 'Mul_8:0' shape=<unknown> dtype=float32>)
2) Second, I try to fetch the tensor y_model first and then print the result:
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
    estimates = sess.run(y_model, feed_dict={X: x, Y: y})
    print('estimates: ', estimates)
    print(sess.run(w))
Output: ('estimates: ', 2.0016618)
3) Third, I tried the op tf.Print(), which should print the values of the tensor once it is evaluated. Note that calling tf.get_default_session().run(t) is equivalent to calling t.eval(), as elaborated here.
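As a minimal side illustration of that equivalence (assuming a default session is active, which is the case inside a with tf.Session() block):

t = tf.constant([1.0, 2.0, 3.0])
with tf.Session() as sess:
    print(sess.run(t))  # run the tensor via the session
    print(t.eval())     # same result: eval() uses tf.get_default_session() internally

The code for this third approach then looks as follows: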
w = tf.Variable(0.0, name="weights")  # create a shared variable (like theano.shared) for the weight matrix
y_model = model(X, w)
cost = tf.square(Y - y_model)  # use square error for cost function
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)  # construct an optimizer to minimize cost and fit line to my data

temp1 = tf.Print(y_model, [y_model])

with tf.Session() as sess:
    # you need to initialize variables (in this case just variable W)
    tf.initialize_all_variables().run()
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
    estimates = sess.run(temp1, feed_dict={X: x, Y: y})
    print('estimates: ', estimates)
    print(sess.run(w))
Output: ('estimates: ', 2.0458241)
4) A fourth approach would be to use, for example, tf.reduce_max() to obtain only the maximum value:
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(100):
        for (x, y) in zip(trX, trY):
            sess.run(train_op, feed_dict={X: x, Y: y})
    temp1 = tf.reduce_max(y_model)
    estimate_max = sess.run(temp1, feed_dict={X: x, Y: y})
    print('estimate_max: ', estimate_max)
    print(sess.run(w))
Output: ('estimate_max: ', 1.887839)
Now my concrete question: Why do I only get one value instead of 101 values for y_model (regarding my approaches 2 and 3)? Shouldn't y_model return one value for each x in the input? What did I compute instead, and how can I obtain the 101 values I desire?
Thank you very much for your help!