Is nn.Embedding() essential for an LSTM to learn?
I am using an LSTM in PyTorch to predict NER tags; an example of a similar task is here: https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
Code-wise, I am using code almost identical to the code in the tutorial above.
The only difference is that I am using pre-trained word2vec embeddings instead of nn.Embedding().
So I removed the nn.Embedding() layer and pass the word2vec features directly to the forward function, roughly as in the sketch below. With this change, the RNN does not learn.
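For reference, this is a minimal sketch of the modification I mean, based on the tutorial's LSTMTagger. The names `sentence_embeds` and `w2v` (a gensim KeyedVectors model) are placeholders for my actual setup, not the tutorial's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMTagger(nn.Module):
    # Same as the tutorial's LSTMTagger, but with the nn.Embedding
    # layer removed: forward() receives pre-computed vectors directly.
    def __init__(self, embedding_dim, hidden_dim, tagset_size):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence_embeds):
        # sentence_embeds: tensor of shape (seq_len, embedding_dim),
        # one word2vec vector per token
        lstm_out, _ = self.lstm(
            sentence_embeds.view(len(sentence_embeds), 1, -1))
        tag_space = self.hidden2tag(
            lstm_out.view(len(sentence_embeds), -1))
        return F.log_softmax(tag_space, dim=1)

# Building the input from word2vec vectors instead of word indices
# (w2v is a hypothetical gensim KeyedVectors model):
# embeds = torch.tensor(
#     np.stack([w2v[w] for w in sentence]), dtype=torch.float)
# tag_scores = model(embeds)
```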
Hence my question: is nn.Embedding() essential for an LSTM to learn?