I am running some experiments which require use of the following function:
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))
The function returns a number between 0.0 and 1.0 (see https://en.wikipedia.org/wiki/Sigmoid_function). If I give it small values for the input x, the result is as expected, e.g.
>>> sigmoid(1)
0.7310585786300049
>>> sigmoid(5)
0.9933071490757153
>>> sigmoid(7)
0.9990889488055994
>>> sigmoid(10)
0.9999546021312976
>>> sigmoid(20)
0.9999999979388463
>>> sigmoid(30)
0.9999999999999065
However, when x gets larger, the function always returns 1.0:
>>> sigmoid(40)
1.0
>>> sigmoid(45)
1.0
>>> sigmoid(50)
1.0
I suspect this has to do with the part of the function that adds 1.0 to a potentially very small number (1.0 + math.exp(-x)), e.g.
>>> 1.0 + math.exp(-30)
1.0000000000000935
>>> 1.0 + math.exp(-40)
1.0
>>> 1.0 + math.exp(-50)
1.0
>>> 1.0 + math.exp(-60)
1.0
How do I prevent Python from making such rounding errors? I think it's an overflow (or underflow?) issue. Any tips? Thanks in advance.