
Given a set of examples for training a neural network, we want to weight individual examples more or less heavily during training. We apply a weight between 0.0 and 1.0 to each example based on some criterion for the "value" (e.g. validity or confidence) of the example. How can this be implemented in TensorFlow, in particular when using tf.nn.sparse_softmax_cross_entropy_with_logits()?

Ron Cohen

1 Answer


In the most common case, where you call tf.nn.sparse_softmax_cross_entropy_with_logits with logits of shape [batch_size, num_classes] and labels of shape [batch_size], the function returns a tensor of shape [batch_size] containing one loss value per example. You can multiply this tensor by a weight tensor before reducing it to a single loss value:

import tensorflow as tf

# One weight per example in the batch, fed in at run time.
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)
# Keyword arguments are required in recent versions of TensorFlow.
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Scale each example's loss by its weight before reducing to a scalar.
loss = tf.reduce_mean(weights * loss_per_example)
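
Note that `tf.reduce_mean` divides the weighted sum by the batch size; if you want a true weighted average instead, normalize by the sum of the weights, e.g. `loss = tf.reduce_sum(weights * loss_per_example) / tf.reduce_sum(weights)`.

For completeness, here is a minimal sketch of feeding the weights at training time. It assumes TF 1.x graph mode; the names `train_op`, `x`, `batch_inputs`, and `batch_labels` are hypothetical stand-ins for your own training op and input tensors:

import numpy as np
import tensorflow as tf

# One weight per example in the batch; a weight of 0.0 effectively drops an example.
example_weights = np.array([1.0, 0.5, 0.0, 0.8], dtype=np.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # `train_op`, `x`, `batch_inputs`, `batch_labels` are hypothetical names
    # for your optimizer step and input placeholders/data.
    _, batch_loss = sess.run(
        [train_op, loss],
        feed_dict={x: batch_inputs, labels: batch_labels, weights: example_weights})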
GeertH
  • When we specify the weights as a part of `tf.train.Example`, do we have to specify the weights as a one-hot encoded list, or simply as a list that matches the bboxes list in that image? Example: `'image/object/weight': dataset_util.float_list_feature(weights)` – Pratik Khadloya Apr 09 '20 at 03:09