After searching around a bit, I'm still struggling with division by zero in numpy. I am puzzled by the following contradiction:
from numpy import *

seterr(all='ignore')  # Trying to avoid ZeroDivisionError, but unsuccessful.

def f(x):
    return 1. / (x - 1.)
With this, when I execute f(1.), I get ZeroDivisionError: float division by zero.
However, when I define z = array([1., 1.]) and execute f(z), I do not get any error, but array([ inf,  inf]).
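For reference, here is a minimal self-contained script that reproduces both behaviours (I use import numpy as np here instead of the star import, but the effect is the same):

import numpy as np

np.seterr(all='ignore')  # intended to suppress numpy's floating-point errors

def f(x):
    return 1. / (x - 1.)

# Plain Python float: raises ZeroDivisionError and stops the calculation.
try:
    print(f(1.))
except ZeroDivisionError as exc:
    print("scalar input:", exc)   # "float division by zero"

# numpy array: no exception, the result is inf elementwise.
z = np.array([1., 1.])
print("array input:", f(z))       # -> [inf inf]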
As you can see, the two outputs contradict each other. My first question is: why?
Ideally, I would like f(1.) to return inf, or at least nan, rather than raise an error (which stops the calculation).
My second question is how to manage this; note my failed attempt with seterr above.
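To show what that attempt does and does not do, here is a quick check (the 'warn' setting and the print calls are only for illustration): seterr is clearly picked up for the array case, yet it makes no difference for the plain-float call.

import numpy as np

def f(x):
    return 1. / (x - 1.)

z = np.array([1., 1.])

# With 'warn', numpy emits a RuntimeWarning about the division by zero,
# so the error-state setting clearly reaches the array case...
np.seterr(all='warn')
print(f(z))   # RuntimeWarning + [inf inf]

# ...and 'ignore' silences that warning again. It just has no visible
# effect on f(1.) with a plain Python float, which still raises.
np.seterr(all='ignore')
print(f(z))   # [inf inf], no warning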