I have a function that takes an array-like argument and a value argument as inputs. During unit testing of this function (I use hypothesis), if a very large value is generated (one whose magnitude exceeds what np.float128 can resolve against the array elements), the function fails.
What is a good way to detect such values and handle them properly?
Below is the code for my function:
import numpy as np

def find_nearest(my_array, value):
    """Find the value nearest to `value` in an unsorted array."""
    # Convert to a NumPy array and drop NaN values.
    my_array = np.asarray(my_array, dtype=np.float128)
    my_array = my_array[~np.isnan(my_array)]
    return my_array[(np.abs(my_array - value)).argmin()]
Example that triggers the problem:
find_nearest([0.0, 1.0], 1.8446744073709556e+19)
Returns 0.0, but the correct answer is 1.0.
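If I understand the failure correctly, it is not overflow but floating-point spacing: near 2**64 the gap between adjacent representable floats is far larger than the distance between the array elements, so the two subtractions inside the function are indistinguishable and argmin falls back to index 0. The same effect can be reproduced at plain float64 precision (the magnitudes at which it kicks in differ for np.float128, but the mechanism is the same):

```python
import numpy as np

# Illustration at float64 precision; the same effect occurs at
# long-double precision at this magnitude, just with a smaller gap.
value = 1.8446744073709556e+19  # roughly 2**64

# Gap between adjacent representable float64 values at this magnitude:
print(np.spacing(value))  # far larger than |1.0 - 0.0|

# So the two distances computed inside find_nearest are identical:
print(abs(value - 0.0) == abs(value - 1.0))  # True -> argmin picks index 0
```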
If I cannot return the correct answer, I would at least like to raise an exception. The problem is that I currently do not know how to identify such bad inputs. A more general answer that would fit other cases is preferable, as I see this as a recurring issue.
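For illustration, a guard along these lines is the kind of thing I have in mind: compare the floating-point spacing at `value` with the smallest gap between array elements, and raise when the spacing is too coarse for the subtraction to be meaningful. The threshold logic and the use of np.longdouble (for portability, instead of np.float128) are my own assumptions, not a known-good solution:

```python
import numpy as np

def find_nearest_checked(my_array, value):
    """Like find_nearest, but raise when `value` is so large that the
    subtraction cannot distinguish the array elements.

    The guard and its threshold are illustrative assumptions.
    """
    my_array = np.asarray(my_array, dtype=np.longdouble)
    my_array = my_array[~np.isnan(my_array)]

    # Smallest gap between array elements (np.inf for < 2 elements).
    if my_array.size > 1:
        resolution = np.min(np.diff(np.sort(my_array)))
    else:
        resolution = np.inf

    # np.spacing gives the gap to the next representable long double at
    # `value`; if that exceeds the array's resolution, the distances
    # computed below all round to the same number.
    if np.spacing(np.longdouble(value)) > resolution:
        raise ValueError(
            f"value {value!r} is too large to compare against this array "
            "at long-double precision"
        )

    return my_array[np.abs(my_array - value).argmin()]
```

With this, `find_nearest_checked([0.0, 1.0], 1.8446744073709556e+19)` raises ValueError, while ordinary inputs such as `find_nearest_checked([0.0, 1.0], 0.7)` behave as before.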