I was wondering why tf.nn.embedding_lookup accepts a list of tensors, whereas tf.gather only performs a lookup on a single tensor. Why would I ever need to perform the lookup across multiple embedding tensors?
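To make the question concrete, here is a minimal sketch of the two calls as I understand them (TF 1.x; the values are made up, and I am relying on the default "mod" partition strategy to decide which rows live in which shard):

```python
import tensorflow as tf

ids = tf.constant([3, 0, 2])

# Single tensor: tf.gather looks up rows directly.
params = tf.constant([[0.0, 0.1], [1.0, 1.1], [2.0, 2.1], [3.0, 3.1]])
gathered = tf.gather(params, ids)

# Same matrix split into two shards; with the default "mod" strategy,
# id i lives in shard i % 2, at row i // 2 within that shard.
shard0 = tf.constant([[0.0, 0.1], [2.0, 2.1]])  # ids 0 and 2
shard1 = tf.constant([[1.0, 1.1], [3.0, 3.1]])  # ids 1 and 3
looked_up = tf.nn.embedding_lookup([shard0, shard1], ids)

with tf.Session() as sess:
    print(sess.run(gathered))   # rows 3, 0, 2
    print(sess.run(looked_up))  # identical output
```

As far as I can tell, both produce exactly the same rows, so the list form seems redundant.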
I think I read somewhere that the list form is useful for saving memory with large embeddings, but I am not sure how that would work, since splitting the embedding into shards does not obviously reduce the total number of parameters stored.
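For example, if I create a sharded variable with tf.fixed_size_partitioner (which I may be misusing), the shards together still hold every row of the matrix:

```python
import tensorflow as tf

vocab_size, embed_dim = 1000000, 128

# Split into 4 shards of roughly 250k rows each, but the total is
# still vocab_size * embed_dim parameters.
embeddings = tf.get_variable(
    "embeddings",
    shape=[vocab_size, embed_dim],
    partitioner=tf.fixed_size_partitioner(num_shards=4))

ids = tf.placeholder(tf.int32, shape=[None])
vectors = tf.nn.embedding_lookup(embeddings, ids)
```

So what exactly is being saved by splitting the embedding up like this?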