I have changed the model described here to perform multi-class text classification instead of binary classification: http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
My model is overfitting even after applying L2 regularization, so I want to initialize the embeddings from a pre-trained word2vec model instead of learning them from scratch. But I am extremely new to TensorFlow and deep learning and am not sure where to start.
Code: https://github.com/dennybritz/cnn-text-classification-tf/blob/master/text_cnn.py#L27
Here is the relevant code that I want to change to use the Google pre-trained word2vec model:
# Embedding layer
with tf.device('/cpu:0'), tf.name_scope("embedding"):
    W = tf.Variable(
        tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
        name="W")
    self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)
    self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
It would be very helpful if someone could point me to how I can incorporate this into the code. I looked at the embedding_lookup docs, but they don't seem to have the information I am looking for.
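From what I have read so far, I think the idea is to build an initial weight matrix from the pre-trained vectors and use it to initialize W, but I am not sure this is right. Here is a rough sketch of what I am imagining (only the NumPy part; the word2vec loading and the tf.Variable line are shown as comments, and names like `vocab` are placeholders for however the vocabulary maps words to indices):

```python
import numpy as np

# In the real code I would load Google's vectors, e.g. with gensim:
#   from gensim.models import KeyedVectors
#   w2v = KeyedVectors.load_word2vec_format(
#       "GoogleNews-vectors-negative300.bin", binary=True)
# Here a tiny dict stands in for the loaded model:
w2v = {"good": np.array([0.1, 0.2, 0.3], dtype=np.float32),
       "bad":  np.array([0.4, 0.5, 0.6], dtype=np.float32)}

# Hypothetical word -> index map; in the linked code this would come
# from the VocabularyProcessor built over the training data.
vocab = {"<UNK>": 0, "good": 1, "bad": 2, "movie": 3}
vocab_size, embedding_size = len(vocab), 3

# Start from the same random init as the original code, then overwrite
# the rows for words that have a pre-trained vector.
init_W = np.random.uniform(-1.0, 1.0,
                           (vocab_size, embedding_size)).astype(np.float32)
for word, idx in vocab.items():
    if word in w2v:
        init_W[idx] = w2v[word]

# Then, inside the embedding scope, initialize W from this matrix
# instead of from tf.random_uniform:
#   W = tf.Variable(init_W, name="W")
```

Is this the right approach, or is there a better-supported way to feed pre-trained embeddings into that embedding layer?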