I am trying to use the weights from my word2vec model as the weights for the Embedding layer of my neural network in Keras. The example code I'm following does:
import gensim
from keras.layers import Embedding

word_model = gensim.models.Word2Vec(sentences, size=100, min_count=1,
                                    window=5, iter=100)
pretrained_weights = word_model.wv.syn0  # matrix of learned word vectors
keras_model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size,
                          weights=[pretrained_weights]))
I understand that word2vec creates vectors for each word, in this case of size 100.
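For example, looking up a single word gives a vector of that size (I use 'test' here because I know it is in the vocabulary; I look up its index further below):

word_model.wv['test'].shape  # returns (100,): one 100-dimensional vector per word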
pretrained_weights.shape
returns (1350, 100), but I am not sure what the 1350 means.
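My guess is that 1350 is the number of distinct words the Word2Vec model kept in its vocabulary, so that pretrained_weights has one row per word, but I have not verified this. This is the check I would run:

print(len(word_model.wv.vocab))     # number of words in the word2vec vocabulary
print(pretrained_weights.shape[0])  # 1350: do these two numbers match?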
keras_model.predict(np.array([word_model.wv.vocab['test'].index]))
returns a vector of size 1350, which I am not sure how to interpret (the output the model was trained on is a vector of size 7200).
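In case it helps, here is a minimal stand-in for my setup that reproduces these shapes. My real model is larger, and the LSTM layer, loss, and optimizer below are just my guesses at a typical configuration, not necessarily what the example uses:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size, embedding_size = pretrained_weights.shape  # (1350, 100)

keras_model = Sequential()
keras_model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size,
                          weights=[pretrained_weights]))
keras_model.add(LSTM(units=embedding_size))                   # guessed layer
keras_model.add(Dense(units=vocab_size, activation='softmax'))
keras_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy')   # guessed loss

# predicting on a single word index gives back shape (1, 1350)
print(keras_model.predict(np.array([word_model.wv.vocab['test'].index])).shape)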
I can run the example code and it produces sensible results, but I would like to understand why it works.