I have a complex function that involves very (very) large numbers, and I optimize it with `scipy.optimize.minimize`. A long time ago, when I implemented the function, I used `numpy.float128()` numbers, because I thought that type could handle big numbers better.
However, I attended a course and learned that Python ints (and floats, I guess) can be arbitrarily large.
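A quick sanity check of the int part (this holds regardless of my optimization problem):

```python
# Python ints are exact and grow without bound
x = 10**400
print(x.bit_length())  # 1329 -- far wider than any fixed-width type
print(x + 1 - x)       # 1 -- no rounding, the arithmetic stays exact
```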
I changed my code to use plain integers (I changed the initialization from `a = np.float128()` to `a = 0`), and surprisingly the very same function has a different optimum with `a = 0` than with `a = np.float128()`. The results are deterministic: if I run the minimization with, e.g., `a = np.float128()` 10 times, I get the same result every time. I use the SLSQP method for the optimization, with bounds.
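To make the pattern concrete, here is a stripped-down sketch of the setup (the objective and the numbers are made up, and `np.float128` assumes a platform where NumPy provides it; this is not a reproducer of the discrepancy, just the accumulator pattern I mean):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, acc_init):
    # The dtype of the accumulator sets the precision of the running sum:
    # starting from np.float128(0) keeps extended precision during the loop,
    # starting from 0 promotes every partial sum to plain float64.
    acc = acc_init
    for xi in x:
        acc = acc + np.exp(xi) * 1e30  # stand-in for my very large terms
    # SciPy works in float64 internally anyway; any difference comes from
    # how the partial sums were rounded before this final conversion.
    return float(acc)

x0 = np.array([1.0, 2.0])
bounds = [(0.0, 20.0), (0.0, 20.0)]

res_f128 = minimize(objective, x0, args=(np.float128(0),),
                    method='SLSQP', bounds=bounds)
res_int = minimize(objective, x0, args=(0,),
                   method='SLSQP', bounds=bounds)

print(res_f128.x)  # in my real code these two end up at different optima
print(res_int.x)
```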
The code is complex, and I don't think it is required to answer my question, but I can provide it if needed.

So how can this happen? Which type should I use? Is this some kind of precision error?