I'm using a small custom function as the softmax_loss_function argument of tf.contrib.seq2seq.sequence_loss(softmax_loss_function=[...]):
def reduced_softmax_loss(self, labels, logits):
    top_logits, indices = tf.nn.top_k(logits, self.nb_top_classes, sorted=False)
    top_labels = tf.gather(labels, indices)
    return tf.nn.softmax_cross_entropy_with_logits_v2(labels=top_labels,
                                                      logits=top_logits)
But even though labels and logits should have the same dimensions, execution raises an InvalidArgumentError:

indices[1500,1] = 2158 is not in [0, 1600)

with the numbers varying due to my random seed.
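For context, here is a minimal sketch of what I think is going wrong, using NumPy stand-ins for the TensorFlow ops and made-up shapes: top_k returns class indices in [0, num_classes) for each row, but tf.gather with its default axis indexes the first (batch) axis of labels, so any class index >= batch size is out of range. A per-row gather (np.take_along_axis here) does not fail:

```python
import numpy as np

batch_size, num_classes, k = 4, 10, 2  # made-up shapes for illustration

rng = np.random.default_rng(0)
logits = rng.standard_normal((batch_size, num_classes))
labels = rng.standard_normal((batch_size, num_classes))

# top_k equivalent: per-row indices of the k largest logits.
# These values lie in [0, num_classes) and can exceed batch_size.
indices = np.argsort(-logits, axis=1)[:, :k]            # shape (batch_size, k)
top_logits = np.take_along_axis(logits, indices, axis=1)

# tf.gather(labels, indices) would index axis 0 of labels, so any
# index >= batch_size triggers the InvalidArgumentError above.
# A per-row gather selects the matching label entries instead:
top_labels = np.take_along_axis(labels, indices, axis=1)

print(top_labels.shape)  # same shape as top_logits: (4, 2)
```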
Is there another function like tf.gather that I could use instead? Or is the returned value in the wrong shape?
Everything works fine if I pass in the usual TensorFlow loss functions.
Thanks in advance!