In Keras, if you want to add an LSTM layer with 10 units, you use model.add(LSTM(10)). I've heard that number 10 referred to as the number of hidden units here, and as the number of output units (line 863 of the Keras code here).
My question is, are those two things the same? Is the dimensionality of the output the same as the number of hidden units? I've read a few tutorials (like this one and this one), but none of them state this explicitly.
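For concreteness, here is a minimal sketch of the setup I'm asking about (the input shape of 20 timesteps with 8 features is just an arbitrary example I made up), where I inspect the model's output shape:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM

# Toy model: 20 timesteps, 8 features per timestep (arbitrary numbers)
model = Sequential([
    Input(shape=(20, 8)),
    LSTM(10),  # the "10" in question
])

# Prints (None, 10) -- the last dimension matches the units argument
print(model.output_shape)
```

So empirically the output dimensionality seems to equal the units argument, but I'd like to understand whether that is the same thing as the number of hidden units.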