I am coding a TensorFlow project in which I edit each weight and bias manually, so I set up the weights and biases in old-TensorFlow style with dictionaries rather than using tf.layers.dense and letting TensorFlow take care of updating the weights. (This is the cleanest way I came up with, although it might not be ideal.)
I feed a fixed model the same data in each iteration, yet the running time increases throughout program execution.
I cut almost everything out of my code so I could see where the issue lies, but I cannot understand what is causing the running time to grow.
---Games took 2.6591222286224365 seconds ---
---Games took 3.290001153945923 seconds ---
---Games took 4.250034332275391 seconds ---
---Games took 5.190149307250977 seconds ---
Edit: I have managed to reduce the running time by using a placeholder, which doesn't add additional nodes to the graph, but the running time still increases, at a slower rate. I'd like to eliminate this growth entirely. (It goes from 0.1 seconds to over 1 second after a while.)
Here is my whole code:
import numpy as np
import tensorflow as tf
import time
n_inputs = 9
n_class = 9
n_hidden_1 = 20
population_size = 10
weights = []
biases = []
game_steps = 20 #so we can see performance loss faster
# 2 games per individual
games_in_generation = population_size/2
def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []
    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases

weights, biases = generate_initial_population(population_size)
data = tf.placeholder(dtype=tf.float32)  # will add shape later

def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']), biases[1]['out'])
    return out_layer

def play_game():
    model_input = [0] * 9
    model_out = model(data)
    for game_step in range(game_steps):
        move = sess.run(model_out, feed_dict={data: model_input})[0]

sess = tf.Session()
sess.run(tf.global_variables_initializer())
while True:
    start_time = time.time()
    for _ in range(int(games_in_generation)):
        play_game()
    print("---Games took %s seconds ---" % (time.time() - start_time))
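For context on the growth pattern I'm seeing: play_game() calls model(data) on every invocation, and in TF 1.x each such call adds fresh matmul/add nodes to the default graph, so every sess.run operates on a larger graph than the last. Here is a toy stand-in (plain Python, no TensorFlow; Graph and build_model are hypothetical names, not TF APIs) that sketches the difference between building per call and building once:

```python
class Graph:
    """Toy stand-in for a TF graph: just records the ops added to it."""
    def __init__(self):
        self.ops = []

    def add_op(self, name):
        self.ops.append(name)

def build_model(graph):
    # Mimics model() above: every call creates fresh matmul/add nodes.
    graph.add_op("matmul")
    graph.add_op("add")
    return len(graph.ops) - 1  # handle to the output "op"

# Pattern in the code above: build inside the game loop -> graph keeps growing.
growing = Graph()
for _ in range(5):
    build_model(growing)
assert len(growing.ops) == 10  # 2 new ops per call

# Suspected fix: build once, reuse the same handle inside the loop.
flat = Graph()
out = build_model(flat)
for _ in range(5):
    pass  # sess.run(out, ...) would go here; no new ops are created
assert len(flat.ops) == 2
```

If this diagnosis is right, the equivalent change in the real code would be hoisting model_out = model(data) out of play_game() and passing the handle in; TF 1.x also offers sess.graph.finalize(), which raises an error if anything still tries to add nodes afterwards, making any remaining growth easy to catch.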