I do all of my TensorFlow data generation AND training on the GPU. This means that my data sets are represented as TensorFlow constants. For example:
import tensorflow as tf
# Generate x and y on the GPU
x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32)
y_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32)
This means that when I go to train my models, it looks something like the following code. I have greatly simplified it so that I can reproduce my problem on other computers (it is based on the code at the bottom of the TensorFlow low-level API tutorial, where it says "Complete Program"):
import tensorflow as tf
# Declare constants
x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32)
y_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32)
# Create model to train
linear_model = tf.layers.Dense(units=1)
y_pred = linear_model(x) # <---- my model
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
optimizer = tf.train.AdamOptimizer(.2)
train = optimizer.minimize(loss)
# Boilerplate code
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
# Train the model to give the correct y given the x
for i in range(100):
    _, loss_value = sess.run((train, loss))
    print(loss_value, end=", ")
# Test our output to make sure it looks good
print()
print(sess.run(y_pred))
# Generate new data to test or train in the model
new_x = tf.constant([[0], [1], [2]], dtype=tf.float32)
new_y_true = tf.constant([[10], [11], [12]], dtype=tf.float32)
# Test the model on new_x and see if it looks similar to new_y_true and/or train based on new_x and new_y_true
# ??
When I finish training my model (where y_pred is declared), I want to reuse that linear model and test it on my new_x and new_y_true data, or even train on it. How would I do this?
I tried replacing x and y_true with placeholders, but you cannot put tf.Tensor objects into the feed_dict parameter.