
I came across something this morning which made me think...

If you store a variable in Python (and, I assume, in most languages) as x = 0.1, and then display this value to 30 decimal places, you get: '0.100000000000000005551115123126'
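You can reproduce this directly. A minimal sketch, assuming CPython with standard IEEE-754 64-bit floats:

```python
# 0.1 cannot be represented exactly in binary floating point;
# formatting it to 30 decimal places reveals the value actually stored.
x = 0.1
print(f"{x:.30f}")  # 0.100000000000000005551115123126
```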

I read an article online which explained that the number is stored in binary on the computer and the discrepancy is due to base conversion.

My question is how do physicists and nano-scientists get around this problem when they do computation?

I always thought that if I input data into my calculator using scientific notation it would give me a reliably accurate result, but now I am wondering whether that is really the case.

There must be a simple solution?

Thanks.

Naz
  • "My question is how do physicists and nano-scientists get around this problem when they do computation?" - it's not a problem at all for those domains. Decimal isn't special. Physics doesn't need decimal. – user2357112 Sep 06 '20 at 13:05
  • I know it doesn't need decimal but I think computers do and hence computations involving these numbers will introduce some level of error? That was my thinking. From reading the replies it seems that the level of error is there but not significant given that there is an error bar anyway in measurements. That is what I took from reading the contributions. – Naz Sep 06 '20 at 14:55

3 Answers


Well, as a physicist I'd say these trailing decimals are redundant in most cases; it really doesn't matter what is in the 16th decimal place. The precision of measurements doesn't reach that level (not even in QED). The highest-precision measurements are at the level of about 1 part in 10^13 - 10^14.

Applying this to your example:

Rounded to 14 decimal places, 0.100000000000000005551115123126 becomes 0.10000000000000, which doesn't introduce any error at all at that level of precision.
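The rounding described above can be checked in Python; a small sketch:

```python
# At 14 decimal places the binary-representation noise disappears.
print(f"{0.1:.14f}")           # 0.10000000000000

# Even the classic 0.1 + 0.2 artifact vanishes after rounding.
print(round(0.1 + 0.2, 14) == 0.3)  # True
```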

Péter Leéh

There is the `decimal` module in Python that can help you deal with this problem.

But personally, when I work with money transactions, I don't want to end up with extra fractions of a cent, like €1.99000012. So I convert amounts to cents and only ever manipulate and store integers.
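Both approaches can be sketched briefly; the prices and quantities below are made-up illustrations:

```python
from decimal import Decimal

# decimal.Decimal stores the exact decimal value when built from a string,
# so decimal arithmetic behaves the way pencil-and-paper arithmetic does.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Integer-cents approach: no fractional values anywhere.
price_cents = 199   # €1.99
quantity = 3
total_cents = price_cents * quantity
print(f"{total_cents // 100}.{total_cents % 100:02d} EUR")  # 5.97 EUR
```

Note that `Decimal("0.1")` (from a string) is exact, while `Decimal(0.1)` (from a float) inherits the binary rounding error.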

AlexisG

As always, it depends on the context (hence all the "normally" and "usually" below). It also depends on your definition of "scientific". Below is an example of "physical", not "purely mathematical", modeling.

Normally, using computer for scientific / engineering calculations:

  1. you have reality
  2. you have an analytical mathematical model of the reality
  3. to solve the analytical model, usually you have to use some numerical approximation (e.g. finite element method, some numerical scheme for time integration, ...)
  4. you solve 3. using floating-point arithmetic

Now, in the "model chain":

  • you lose accuracy going from 1) reality to 2) the analytical mathematical model
    • most theories make some assumptions (neglecting relativity and using classical Newtonian mechanics, neglecting the effect of gravity, neglecting ...)
    • you don't know exactly all the boundary and initial conditions
    • you don't know exactly all the material properties
    • you don't know ... and have to make some assumption
  • you lose accuracy going from 2) the analytical to 3) the numerical model
    • by definition: the analytical solution is exact, but usually practically unachievable
    • in the limit of infinite computational resources, numerical methods usually converge to the analytical solution (which is itself somewhat limited by finite floating-point accuracy), but usually the resources are the limiting factor
  • you lose some accuracy using floating-point arithmetic
    • in some cases it influences the numerical solution
    • there are approaches using exact numbers, but they are (usually much) more computationally expensive

You face a lot of trade-offs in the "model chain" (between accuracy, computational cost, amount and quality of input data, ...). From a practical point of view, floating-point arithmetic is not fully negligible, but it is usually one of the least of the problems in the "model chain".
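The last trade-off above, between floating-point artifacts and exact-number approaches, can be sketched with Python's standard `fractions` module:

```python
from fractions import Fraction

# Floating-point artifact: binary representation error accumulates.
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic avoids it entirely, at a higher
# computational cost (each number is a pair of arbitrary-size integers).
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```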

Jan Stránský
  • Thank you Jan. I don't have a practical problem to solve. I just stumbled across the issue when I was reading about floating point numbers in Python and it struck me that there would be computing ramifications. I'll have a look at 'floating point arithmetic'. – Naz Sep 06 '20 at 15:01
  • By floating-point arithmetic, I mean artifacts like `0.1+0.2!=0.3`, `(0.1+0.2)+0.3!=0.1+(0.2+0.3)` (important in parallel computing, where numbers are summed in random order), printing the actually stored value for `0.1`, etc. Definitely good to know (although not everybody needs it). – Jan Stránský Sep 06 '20 at 15:10