In TensorFlow, there are two functions called softmax_cross_entropy_with_logits and sampled_softmax_loss.
I read the TensorFlow documentation and searched Google for more information, but I couldn't find the difference. It looks to me like both calculate the loss using the softmax function.
Using sampled_softmax_loss to calculate the loss:

loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(...))
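The ... above hides the arguments; for reference, here is a rough sketch of what a full call would look like (all the variable names below are placeholders rather than my actual code, and 64 sampled classes is just an arbitrary choice):

# weights: [num_classes, dim], biases: [num_classes]
# labels:  [batch_size, 1] true class ids, inputs: [batch_size, dim]
loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=softmax_weights,       # output projection matrix
    biases=softmax_biases,         # output bias vector
    labels=train_labels,           # true class ids for the batch
    inputs=hidden_output,          # activations feeding the softmax layer
    num_sampled=64,                # number of negative classes sampled per batch
    num_classes=vocabulary_size))  # total number of output classes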
Using softmax_cross_entropy_with_logits to calculate the loss:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=P, labels=Q))
To me, calculating the softmax loss is the same as calculating the softmaxed cross entropy (e.g. cross_entropy(softmax(train_x))).
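To make that assumption concrete, here is a small sketch of the two computations I would expect to give the same result (logits and onehot_labels are placeholder names, and tf.log assumes TF 1.x):

# built-in op: applies softmax and cross entropy together in one step
loss_builtin = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits))

# manual version: softmax first, then cross entropy against one-hot labels
probs = tf.nn.softmax(logits)
loss_manual = tf.reduce_mean(-tf.reduce_sum(onehot_labels * tf.log(probs), axis=1))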
Could somebody tell me why there are two different methods, and which method I should use in which case?