
I have a custom Keras callback implemented, and I'm executing two consecutive training stages on the same model.

In the callback, I create several placeholders to feed some metric values for evaluation at the end of training. This is fine for the first training stage, since the placeholders don't exist yet. In the second training stage, however, it leads to an error, because TensorFlow creates a second set of the placeholders with indexed names.

Therefore, I'm looking for a way either to feed the values into the placeholders from the first training stage (for example, by finding a placeholder by name and then feeding values into it), or to delete certain placeholders by name so that I can create new ones.

Edit:

To clarify my current situation: I have this custom Keras callback implemented (I'll leave the calculation of the metric out):

import tensorflow as tf
import keras


class Metric(keras.callbacks.Callback):

    def __init__(self, log_dir):
        # log_dir is assumed to be passed in; its origin was omitted
        # from the original snippet.
        self.log_dir = log_dir

        self.val_prec_ph = tf.placeholder(shape=(), dtype=tf.float64, name="prec")
        tf.summary.scalar("val_precision", self.val_prec_ph)

        self.merged = tf.summary.merge_all()
        self.writer = tf.summary.FileWriter(self.log_dir)
        self.session = keras.backend.get_session()

    def on_train_begin(self, logs={}):
        self.precision = []

    def on_train_end(self, logs={}):
        # do some calculations (metric computation omitted)

        self.precision.append(calculation)

        summary = self.session.run(self.merged,
                                   feed_dict={self.val_prec_ph: self.precision[-1]})

        self.writer.add_summary(summary)
        self.writer.flush()

That's basically my framework for the placeholder. Due to the consecutive runs, TensorFlow does the following: the first training runs without problems and names the placeholder "prec". In the second run, however, TensorFlow names the self.val_prec_ph placeholder something like "prec_1", which then leads to the error that the "prec" placeholder has not been fed, although it still exists in the graph.

Therefore I either want to write directly into the "prec" placeholder or delete it after the first run so that I don't have duplicates.
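For reference, here is a minimal sketch of the name uniquification that I believe causes this (TensorFlow 1.x behaviour; the variable names are just for illustration):

import tensorflow as tf

a = tf.placeholder(tf.float64, shape=(), name="prec")
b = tf.placeholder(tf.float64, shape=(), name="prec")

print(a.name)  # prec:0
print(b.name)  # prec_1:0 -- TF appends an index to deduplicate the name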

The reason why I'm doing this at the end of the training process, yadda, yadda ... is a different story, which has its own problem.

TheDude

1 Answer


Here is a possible solution to your specific question: search for the placeholder in the graph by name (using tf.Graph.get_tensor_by_name()), and create it if it can't be found:

class Metric(keras.callbacks.Callback):

    def __init__(self, log_dir, ph_name="prec"):
        # log_dir is assumed to be passed in, as in the question's snippet.
        self.log_dir = log_dir
        try:
            # Reuse the placeholder from a previous run if it already exists
            # (tensor names carry an output index, hence the ':0'):
            self.val_prec_ph = tf.get_default_graph().get_tensor_by_name(
                ph_name + ':0')
            # Check this solution by @rvinas to cover possible suffix/scope errors:
            # https://stackoverflow.com/a/38935343/624547
        except KeyError:
            # Not found in the graph, so create it:
            self.val_prec_ph = tf.placeholder(shape=(), dtype=tf.float64,
                                              name=ph_name)

        tf.summary.scalar("val_precision", self.val_prec_ph)

        self.merged = tf.summary.merge_all()
        self.writer = tf.summary.FileWriter(self.log_dir)

    # ...
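For instance, a usage sketch across your two consecutive training stages (model, x_train and y_train are assumed to be defined elsewhere):

# First stage: no "prec" tensor exists yet, so the callback creates it.
model.fit(x_train, y_train, epochs=5, callbacks=[Metric(log_dir="./logs")])

# Second stage: the callback finds the existing "prec" placeholder by name
# and reuses it, instead of letting TF create "prec_1".
model.fit(x_train, y_train, epochs=5, callbacks=[Metric(log_dir="./logs")])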
benjaminplanche