
I'm trying to generate text using a previously trained LSTM. I found an existing solution, but it throws some exceptions, apparently because it was written against an older version of the library. After some fixes, here's my final function for text generation:

def generate_text(train_path, num_sentences, rnn_data):
    gen_config = get_config()
    gen_config.num_steps = 1
    gen_config.batch_size = 1

    with tf.Graph().as_default(), tf.Session() as session:
        initializer = tf.random_uniform_initializer(-gen_config.init_scale,
                                                    gen_config.init_scale)

        with tf.name_scope("Generate"):
            rnn_input = PTBInput(config=gen_config, data=rnn_data, name="GenOut")
            with tf.variable_scope("OutModel", reuse=None, initializer=initializer):
                mout = PTBModel(is_training=False, config=gen_config, input_=rnn_input)

                # Restore variables from disk. TODO: save/load trained models
                # saver = tf.train.Saver()
                # saver.restore(session, model_path)
                # print("Model restored from file " + model_path)

            print('Getting Vocabulary')
            words = reader.get_vocab(train_path)

            mout.initial_state = tf.convert_to_tensor(mout.initial_state)

            state = mout.initial_state.eval()
            # state = session.run(mout.initial_state)
            x = 0  # the id for '<eos>' from the training set //TODO: fix this
            word_input = np.matrix([[x]])  # a 2D numpy matrix

            text = ""
            count = 0
            while count < num_sentences:
                output_probs, state = session.run([mout.output_probs, mout.final_state],
                                                  {mout.input.input_data: word_input,
                                                   mout.initial_state: state})

                print('Output Probs = ' + str(output_probs[0]))
                x = sample(output_probs[0], 0.9)
                if words[x] == "<eos>":
                    text += ".\n\n"
                    count += 1
                else:
                    text += " " + words[x]
                # now feed this new word as input into the next iteration
                word_input = np.matrix([[x]])
            print(text)
    return
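(The `sample` helper isn't defined in the snippet above; it's presumably a temperature-based sampling function along these lines — a sketch, not the asker's actual code:)

```python
import numpy as np

def sample(probs, temperature=1.0):
    """Draw a word id from a probability vector; temperature < 1 makes
    the distribution sharper, temperature > 1 makes it flatter."""
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-12) / temperature  # rescale in log space
    exp = np.exp(logits - np.max(logits))         # subtract max for stability
    return int(np.random.choice(len(probs), p=exp / exp.sum()))
```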

But I get an exception:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value OutModel/softmax_b [[Node: OutModel/softmax_b/read = Identity[T=DT_FLOAT, _class=["loc:@OutModel/softmax_b"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

How can I fix it? And is there any other problems with my code?

  • Is there any other output information that might help us? – pypypy May 22 '17 at 14:06
  • I'm not sure there's anything else that would be helpful... As I understand it, this exception is raised when the program reaches: `output_probs, state = session.run([mout.output_probs, mout.final_state], {mout.input.input_data: word_input, mout.initial_state: state}) ` – Anton Kamolins May 22 '17 at 14:22
  • try `tf.global_variables_initializer()` see if that fixes anything – pypypy May 22 '17 at 14:31
  • It feels like this fixed my problem! Thank you! :) What can you say about this text-generation method? Should I fix something? Is it correct? – Anton Kamolins May 22 '17 at 14:58

1 Answer


The problem is an uninitialised variable. You can fix this either by initialising each variable individually or by using the helper `tf.global_variables_initializer()`.
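A minimal sketch of the failure and the fix, using the TF 1.x API as in the question (the variable name here just mirrors the one from the error message):

```python
import tensorflow as tf  # TF 1.x, as used in the question

with tf.Graph().as_default(), tf.Session() as session:
    b = tf.get_variable("softmax_b", shape=[2],
                        initializer=tf.zeros_initializer())
    # Without the next line, session.run(b) raises FailedPreconditionError:
    # "Attempting to use uninitialized value softmax_b".
    session.run(tf.global_variables_initializer())
    print(session.run(b))
```

In your function, that means running `tf.global_variables_initializer()` once after the graph is built and before the generation loop (or restoring the variables from a checkpoint with `tf.train.Saver`, which also counts as initialisation).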

pypypy