I am writing a reinforcement learning agent with the Python API of TensorFlow. During execution I need to call TensorFlow millions of times to evaluate tensors (with no training) that are fed from outside TensorFlow, and much less often I send in a large training set. Each evaluation call takes more than 1 millisecond (GPU: GTX 970, CPU: i7-4790K), and together they account for more than half of my code's running time. I would like to know whether I can somehow reduce the time TensorFlow takes to evaluate data without training, or reduce the overhead of calling TensorFlow.

Something that may slow down the whole evaluation process is that I use two inputs: I apply a convolutional layer to one and then concatenate it to the other with tf.concat before applying some dense layers. Would this make evaluation particularly slow?

Some example code:

    # weight_variable, bias_variable and conv2d are helper functions defined elsewhere
    # (tutorial-style variable initialization and a stride-1 convolution)

    x1 = tf.placeholder(tf.float32, shape=[None, self.conv_size_0*self.conv_size_1], name="x1")
    x2 = tf.placeholder(tf.float32, shape=[None, self.noconv_size], name="x2")
    y_ = tf.placeholder(tf.float32, shape=[None, 1], name="y_")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")  # dropout, used in the dense layers below

    # Convolution on a 1D image of length self.conv_size_0 with
    # self.conv_size_1 channels (both fixed at graph-creation time)
    x_image = tf.reshape(x1, [-1, self.conv_size_0, 1, self.conv_size_1])

    W_conv1 = weight_variable([3, 1, self.conv_size_1, self.conv_depth])
    b_conv1 = bias_variable([self.conv_depth])

    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

    # Concatenating
    x1convsize = self.conv_depth*self.conv_size_0
    x1conv = tf.reshape(h_conv1, [-1, x1convsize])

    sa_size = x1convsize + self.noconv_size
    x = tf.concat([x1conv, x2], 1)

    # Here I omitted code defining some dense layers acting on x
    y_f = ...

    predDiff = tf.subtract(y_f, y_)
    loss = tf.nn.l2_loss(predDiff)
    train_step = tf.train.AdamOptimizer(5e-4).minimize(loss)

Then, to evaluate my function, I call this millions of times:

    [Q] = sess.run([self.y_f], feed_dict={
            self.x1: x1, self.x2: x2, self.keep_prob: 1.0})
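One mitigation I am considering (assuming many of these evaluations can be accumulated before their results are needed, which may not hold for my agent) is to batch pending inputs and pay the per-call overhead once per batch instead of once per sample. A numpy-only sketch of the accumulation, with hypothetical example sizes; the final sess.run is the same call as above, just fed with the stacked arrays:

```python
import numpy as np

# Hypothetical sizes and 200 pending single-row evaluations
conv_size_0, conv_size_1, noconv_size = 8, 4, 5
pending_x1 = [np.random.rand(1, conv_size_0 * conv_size_1) for _ in range(200)]
pending_x2 = [np.random.rand(1, noconv_size) for _ in range(200)]

# Stack the pending rows into one batch per input, so TensorFlow is
# called once with 200 samples instead of 200 times with 1 sample
batch_x1 = np.concatenate(pending_x1, axis=0)  # shape (200, conv_size_0*conv_size_1)
batch_x2 = np.concatenate(pending_x2, axis=0)  # shape (200, noconv_size)

# Single call replacing 200 per-sample calls (same placeholders as above):
# [Q] = sess.run([self.y_f], feed_dict={
#         self.x1: batch_x1, self.x2: batch_x2, self.keep_prob: 1.0})
# Q would then have shape (200, 1), with Q[i] belonging to pending sample i.
```

This only helps if the agent can tolerate the latency of collecting a batch, so I am unsure whether it applies here.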
