I've been researching text generation with RNNs, and it seems as though the common technique is to input text character by character, and have the RNN predict the next character.
Why wouldn't you apply the same technique using words instead of characters? That seems like a much better approach to me, because the RNN won't make any typos and training would be faster.
Am I missing something?
Furthermore, is it possible to build a word-prediction RNN that takes pre-trained word2vec embeddings as input, so that the RNN has some notion of the words' meanings?
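To make the question concrete, here's a minimal sketch of what I have in mind, in PyTorch: an embedding layer initialized from pre-trained vectors (a random matrix stands in for real word2vec vectors here; the model name and sizes are just placeholders), feeding an LSTM that predicts the next word.

```python
import torch
import torch.nn as nn

# Stand-in for real word2vec vectors, shape (vocab_size, embed_dim).
# In practice these would be loaded from a trained word2vec model.
vocab_size, embed_dim, hidden_dim = 1000, 100, 128
pretrained = torch.randn(vocab_size, embed_dim)

class WordRNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Initialize the embedding layer from the pre-trained vectors;
        # freeze=True keeps the word2vec embeddings fixed during training.
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # next-word logits

    def forward(self, word_ids):
        h, _ = self.lstm(self.embed(word_ids))
        return self.out(h)  # (batch, seq_len, vocab_size)

model = WordRNN()
batch = torch.randint(0, vocab_size, (2, 5))  # 2 sequences of 5 word ids
logits = model(batch)
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Is this the right way to wire pre-trained embeddings into the input, or is there a standard approach I'm missing?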