My goal is to: (1) load a pre-trained word embedding matrix from a file as the initial value; (2) fine-tune the word embedding during training instead of keeping it fixed; (3) each time I restore the model, load the fine-tuned word embedding instead of the pre-trained one.
I have tried something like:
```python
class Model(object):
    def __init__(self):
        # ...

    def _add_word_embed(self):
        W = tf.get_variable('W', [self._vsize, self._emb_size],
                            initializer=tf.truncated_normal_initializer(stddev=1e-4))
        W.assign(load_and_read_w2v())
        # ...

    def _add_seq2seq(self):
        # ...

    def build_graph(self):
        self._add_word_embed()
        self._add_seq2seq()
```
But this approach overwrites the fine-tuned word embedding whenever I stop training and restart it. I also tried `sess.run(W.assign(...))` after calling `model.build_graph`, but it threw an error saying the graph has been finalized and can no longer be modified. Could you please tell me the right way to achieve this? Thanks in advance!
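For reference, the second attempt looked roughly like this (a sketch of the TF 1.x graph-mode failure, not exactly my code; `load_and_read_w2v()` is my own helper that returns a `[vsize, emb_size]` numpy array):

```python
model = Model()
model.build_graph()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.graph.finalize()  # e.g. done implicitly by Supervisor / MonitoredTrainingSession

    # Raises "Graph is finalized and cannot be modified":
    # W.assign(...) creates a *new* assign op, which cannot be added
    # to the graph once it has been finalized.
    sess.run(W.assign(load_and_read_w2v()))
```

As I understand it, the root problem is that `W.assign(...)` builds a new op at call time rather than feeding a value into an existing op.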
EDIT:
This question is not a duplicate. It has an additional requirement: use the pre-trained word embedding at the beginning of training and fine-tune it afterwards, and I am also asking how to do this efficiently. The accepted answer in the linked question does not satisfy this requirement. Please think twice before marking a question as a duplicate.