
I have a complex function that involves very (very) large numbers, and I optimize it with `scipy.optimize.minimize`.

A long time ago, when I implemented the function, I used `numpy.float128()` numbers, because I thought they could handle big numbers better.

However, I attended a course and learned that Python ints (and floats, I guess) can be arbitrarily large.

I changed my code to use plain integers (changed the initialization from `a = np.float128()` to `a = 0`), and surprisingly the very same function has a different optimum with `a = 0` than with `a = np.float128()`. If I run the minimization with, e.g., `a = np.float128()` 10 times in a row, I get the same result every time. I use the SLSQP method for the optimization, with bounds. A minimal sketch of what I mean is below.
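Here `f` is just a hypothetical stand-in for my real objective (which is far more complex); the sketch only shows the shape of the setup:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical stand-in for the real objective, which involves
    # very large intermediate values.
    a = 0  # previously: a = np.float128(0)
    for xi in x:
        a = a + xi ** 10  # large powers grow very quickly
    return float(a)

x0 = np.array([2.0, 3.0])
res = minimize(f, x0, method='SLSQP', bounds=[(1.0, 10.0), (1.0, 10.0)])
print(res.x, res.fun)
```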

The actual code is complex, and I don't think it is required to answer my question, but I can provide it if needed.

So how can this happen? Which type should I use? Is this some kind of precision error?

Gábor Erdős
    *"(and floats i guess)"* No, python floats are not arbitrary precision. They are standard 64 bit floating point values. – Warren Weckesser Nov 22 '16 at 15:11
  • @WarrenWeckesser So using np.float128() is actually better? The interesting thing is the optimum found with `a = 0` is better (lower) than the one found with `a = np.float128()`. Also the `a = 0` version is slower, as it uses more iteration cycles to get the optimum. – Gábor Erdős Nov 22 '16 at 15:12
  • Not really. In scipy, the underlying code that implements SLSQP is written in Fortran, and the Fortran code uses double precision, i.e. 64-bit floating point. So whatever your function uses, the values are eventually converted to 64-bit floating point in the SLSQP implementation. – Warren Weckesser Nov 22 '16 at 15:18
  • @WarrenWeckesser In that case, I don't really understand why I get different results. If my `float128()` gets converted into `float64()`, I should get the same result. Or do I lose precision in the conversion, creating different results? – Gábor Erdős Nov 22 '16 at 15:19
  • @WarrenWeckesser: Just curious, so if I use np.float128() I don't get 128-bit but 64-bit numbers? Nice to know. No doubt it will be somewhere in the docs, but I've never seen it. – Jacques de Hooge Nov 22 '16 at 15:20
  • @JacquesdeHooge Here is a nice thread about this, but I could not figure out a solution for myself from it: http://stackoverflow.com/questions/9062562/what-is-the-internal-precision-of-numpy-float128 – Gábor Erdős Nov 22 '16 at 15:22
  • `float128` uses 16 bytes to store values, but in general it isn't true quadruple precision. Search stackoverflow for "numpy float128" (or google it) for many discussions of `float128`. (A short snippet illustrating this follows the comments.) – Warren Weckesser Nov 22 '16 at 15:28
  • I've seen some of them, and it all looks quite confusing. Could it be the case that Python ints are represented by C++ 'very long' ints, using all the bytes for the significand and none for an exponent? This would explain why in some cases (i.e. in certain number ranges) an int would be more precise. – Jacques de Hooge Nov 22 '16 at 15:31
  • *"...why i get different results"* I don't know, but if your function has multiple local minima, it is not unusual for small changes in the inputs to give different results. – Warren Weckesser Nov 22 '16 at 15:32
  • @WarrenWeckesser It surely has multiple minima, but I think this is not the problem, because I can run it 10 times with the same results, yet I get different results 100% of the time if I use the other method. The main question is: can this error come from using `float128`, or must this be another error? – Gábor Erdős Nov 22 '16 at 15:34
  • I don't know--I haven't looked too closely at the SLSQP code. (Of course, if there are multiple local minima, different results aren't really an "error". Try different values of `x0` to see how the results vary.) – Warren Weckesser Nov 22 '16 at 15:54
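To illustrate the points made in the comments above, here is a short snippet (not from the original thread; it assumes a platform where `np.float128` is available, e.g. most x86 Linux builds). Python floats are 64-bit doubles, `np.float128` is typically 80-bit extended precision rather than true quadruple precision, and casting back to 64 bits discards the extra digits:

```python
import sys
import numpy as np

# Python floats are IEEE 754 doubles: 53 significand bits.
print(sys.float_info.mant_dig)        # 53

# np.float128 occupies 16 bytes, but on most x86 platforms it is
# 80-bit extended precision, not true quadruple precision.
print(np.finfo(np.float64).eps)       # ~2.2e-16
print(np.finfo(np.float128).eps)      # ~1.1e-19 on x86

# Values passed into the Fortran SLSQP routine are converted to
# float64, so float128's extra digits are discarded on the way in:
x = np.float128(1) + np.finfo(np.float128).eps
print(x == 1)                         # False: float128 resolves the difference
print(np.float64(x) == 1.0)           # True: the downcast rounds it away

# Python ints, by contrast, really are arbitrary precision:
print(2 ** 200)                       # exact integer, 61 digits
print(float(2 ** 200))                # rounded to 53 significant bits
```

Because every value handed to the Fortran SLSQP routine is rounded to float64, computing the objective in `float128` can change the float64 values the optimizer sees by an ulp or so, which may be enough to send the iteration down a different path and, if there are multiple local minima, to a different optimum.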

0 Answers