The TensorFlow function tf.nn.weighted_cross_entropy_with_logits() takes the argument pos_weight. The documentation defines pos_weight as "A coefficient to use on the positive examples." I assume this means that increasing pos_weight increases the loss from false positives and decreases the loss from false negatives. Or do I have that backwards?

Ron Cohen
1 Answer
Actually, it's the other way around. Citing the documentation:

"The argument pos_weight is used as a multiplier for the positive targets."

So, assuming you have 5 positive examples in your dataset and 7 negative, if you set pos_weight=2, then your loss would be as if you had 10 positive examples and 7 negative.

Assume you got all of the positive examples wrong and all of the negative ones right. Originally you would have 5 false negatives and 0 false positives. When you increase pos_weight, the loss coming from those false negatives artificially increases, as if there were more of them. Note that the loss value coming from false positives doesn't change.
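A minimal sketch of this behaviour, assuming TensorFlow 2.x with eager execution: one confidently wrong positive example (a false negative) and one confidently wrong negative example (a false positive) are passed to tf.nn.weighted_cross_entropy_with_logits, and only the false-negative term scales with pos_weight.

    import tensorflow as tf

    # One positive example predicted negative (a false negative) and one
    # negative example predicted positive (a false positive), both confidently wrong.
    labels = tf.constant([1.0, 0.0])
    logits = tf.constant([-3.0, 3.0])

    for w in (1.0, 2.0):
        loss = tf.nn.weighted_cross_entropy_with_logits(
            labels=labels, logits=logits, pos_weight=w)
        print(f"pos_weight={w}: per-example loss = {loss.numpy()}")

    # pos_weight=1.0: per-example loss is roughly [3.05 3.05]
    # pos_weight=2.0: per-example loss is roughly [6.10 3.05]
    # The false-negative term doubles; the false-positive term is unchanged.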

sygi
- Thanks. So if using a mutually exclusive classifier with more than 2 classes and 1-hot truth labels, increasing pos_weight has the effect of amplifying the losses in all cases with wrong estimates, while cases with correct estimates are unchanged (because the loss in the correct-estimate cases is zero)? – Ron Cohen Nov 20 '16 at 16:30
- Amplifying the losses in all cases with *false negatives*, but yes, I think so. – sygi Nov 20 '16 at 16:56
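To make sygi's correction concrete in the multi-class, 1-hot case, here is a small sketch under the same assumption (TensorFlow 2.x, eager execution); the three-class example is hypothetical, with true class 0 and a confident prediction of class 2, so only the missed-true-class (false-negative) component grows with pos_weight.

    import tensorflow as tf

    # Hypothetical 3-class example with a 1-hot label: the true class is 0,
    # but the model confidently predicts class 2.
    labels = tf.constant([1.0, 0.0, 0.0])
    logits = tf.constant([-3.0, -3.0, 3.0])

    for w in (1.0, 2.0):
        loss = tf.nn.weighted_cross_entropy_with_logits(
            labels=labels, logits=logits, pos_weight=w)
        print(f"pos_weight={w}: per-class loss = {loss.numpy()}")

    # Only the first component (the missed true class, i.e. the false negative)
    # scales with pos_weight; the false-positive component on class 2 and the
    # small loss on the correctly rejected class 1 stay the same.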