Fundamentally you are limited by the finite range of floating point numbers. For example, if you are using 64 bit floats:

```python
import numpy as np

fmax_64 = np.finfo(np.float64).max  # the largest representable 64 bit float
print(np.log(fmax_64))
# 709.782712893
```
If `x` is larger than about 709 then you simply won't be able to represent `np.exp(x)` (or `1. / (1 + np.exp(x))`) using a 64 bit float.
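A quick sketch of what happens just past that threshold (illustrative, not part of the original snippets):

```python
import numpy as np

x = 710.0  # just beyond log(fmax_64) ~= 709.78
print(np.exp(x))              # inf (numpy raises a RuntimeWarning about overflow)
print(1. / (1. + np.exp(x)))  # 0.0, since 1 / (1 + inf) == 0
```

Here `0.0` happens to be the correct limiting value, but the intermediate `inf` shows that `np.exp(x)` itself was not representable.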
You could use an extended precision float (i.e. `np.longdouble`):
```python
fmax_long = np.finfo(np.longdouble).max
print(np.log(fmax_long))
# 11356.5234063
```
The precision of `np.longdouble` may vary depending on your platform - on x86 it is usually 80 bit, which would allow you to work with `x` values up to about 11356:
```python
# cast x to extended precision first, so np.exp is evaluated as a
# longdouble rather than overflowing as a 64 bit float
func = lambda x: 1. / (1. + np.exp(np.longdouble(x)))
print(func(11356))
# 1.41861159972e-4932
```
Beyond that you would need to rethink how you're computing your expansion, or else use something like mpmath, which supports arbitrary precision arithmetic. However, this usually comes at the cost of much worse runtime performance compared with numpy, since vectorization is no longer possible.
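One way to "rethink the computation" for this particular function: for large positive `x`, `1. / (1. + np.exp(x))` agrees with `np.exp(-x)` to within 64 bit precision, and `np.exp(-x)` underflows quietly to `0.0` rather than overflowing. If you actually need those tiny values, here is a minimal sketch of the mpmath route (illustrative; the choice of 30 digits and the test value are my own):

```python
import mpmath as mp

mp.mp.dps = 30  # work with 30 significant decimal digits

x = mp.mpf(100000)  # far beyond the range of even np.longdouble
print(1 / (1 + mp.exp(x)))
# roughly 3.56e-43430
```

Because every mpmath operation runs in software, evaluating this elementwise over a large array will be orders of magnitude slower than the vectorized numpy version.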