If we want to sum over a big range of numbers, does the fact that the data type used is a double-precision floating point make the running sum less susceptible to overflow?
Should we still explicitly write code to detect overflow and stop the summation?
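For concreteness, here is a minimal sketch of the kind of running sum I have in mind, with an explicit check for leaving the finite `double` range (the names and values are placeholders, not real code):

```java
public class RunningSum {
    public static void main(String[] args) {
        double sum = 0.0;
        for (long i = 0; i < 1_000_000_000L; i++) {
            sum += someValue(i);
            // A double does not wrap around like int/long; on overflow it
            // saturates to Infinity, which can be detected explicitly.
            if (Double.isInfinite(sum)) {
                throw new ArithmeticException("running sum overflowed at i=" + i);
            }
        }
        System.out.println(sum);
    }

    // Placeholder for whatever values are actually being summed.
    private static double someValue(long i) {
        return i * 1.0;
    }
}
```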

Cratylus
- What language is this? It doesn't matter what data type is used; if you sum over enough numbers it will overflow unless you use a library to handle it for you. – Eiyrioü von Kauyf Jul 26 '13 at 19:55
- I originally thought this question was language agnostic, but I added `Java` since that's what I normally use. – Cratylus Aug 03 '13 at 13:30
- http://stackoverflow.com/questions/3413448/double-vs-bigdecimal – Eiyrioü von Kauyf Aug 03 '13 at 18:05
- double covers a range from 4.94065645841246544e-324d to 1.79769313486231570e+308d (positive or negative). That is a huge range, but the precision is only about 17 decimal digits. This means that long before the overflow, your sum will be only a rough estimate with hundreds of zero digits at the end. So dealing with big numbers requires BigInteger or BigDecimal if more than an estimate is needed. – Bernd Ebertz Aug 08 '13 at 17:07