
I know we can sum all the word vectors and take their average to represent a sentence, but is there any better way to represent a sentence?
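
For reference, here is a minimal sketch of that averaging baseline, assuming a recent gensim (4.x) Word2Vec API; parameter names differ in older versions, and the toy corpus is only illustrative.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy corpus; any list of tokenized sentences works.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["dogs", "chase", "cats"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50)

def sentence_vector(tokens, model):
    """Average the vectors of the tokens the model knows about."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)

print(sentence_vector(["the", "cat", "sat"], model).shape)  # (100,)
```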

Nomiluks
  • You should use doc2vec for sentence vectors. http://deeplearning4j.org/doc2vec.html – racknuf Nov 19 '15 at 18:05
  • Okay, if I train using doc2vec, is it possible for me to get the word vectors as well? Can I perform both things at the same time? I have edited my question. – Nomiluks Nov 19 '15 at 19:19
  • I think you'd need to vectorize them separately. What are you trying to do exactly? We've got some experienced NLP and word2vec people sharing tips on our gitter channel: https://gitter.im/deeplearning4j/deeplearning4j – racknuf Nov 19 '15 at 20:30
  • I want to train a model on vector representations using deep neural networks. The end goal is to predict class labels (y) for any input sentence (x), where a single sentence (x) can be assigned more than one label (y). I want to experiment with feeding in the whole sentence, some word features, or both at the same time, so it would be good to have both representations in one model; that might help me extract more interesting features. Right now I am using the gensim Python library and have working word2vec and doc2vec models (see the sketch after these comments). – Nomiluks Nov 19 '15 at 21:02
  • Possible duplicate of [How to get vector for a sentence from the word2vec of tokens in sentence](http://stackoverflow.com/questions/29760935/how-to-get-vector-for-a-sentence-from-the-word2vec-of-tokens-in-sentence) – kampta Nov 27 '15 at 16:34
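
On the point raised in the comments, a single gensim Doc2Vec model can in fact supply both kinds of vectors. Below is a minimal sketch assuming gensim 4.x: with dm=1 (PV-DM), word vectors and document vectors are learned in the same training run. The tags, corpus, and parameters are illustrative only.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; tags identify each document/sentence.
docs = [
    TaggedDocument(words=["the", "cat", "sat", "on", "the", "mat"], tags=["d0"]),
    TaggedDocument(words=["dogs", "chase", "cats"], tags=["d1"]),
]

# dm=1 (PV-DM) trains word vectors and document vectors together.
model = Doc2Vec(docs, vector_size=100, dm=1, min_count=1, epochs=50)

doc_vec = model.dv["d0"]                       # trained sentence/document vector
word_vec = model.wv["cat"]                     # word vector from the same model
new_vec = model.infer_vector(["cats", "sat"])  # vector for an unseen sentence
print(doc_vec.shape, word_vec.shape, new_vec.shape)
```

Either the trained document vectors or vectors from infer_vector can then be fed to a multi-label classifier, alongside the word vectors if desired.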

0 Answers