
I'm pretty sure this has something to do with numerical precision (or its representation). But in this case I find it especially unintuitive, and I want to understand what happens and why it happens.

Suppose you have a simple text file (test.txt) which contains the following:

test.txt:

4.12448729726923

scan("test.txt", what = 0)
Read 1 item
[1] 4.124487

So far, so good. Standard output precision "cuts off" some of the decimal places. But now, let's increase output precision and do the same:

options(digits = 20)
scan("test.txt", what = 0)
Read 1 item
[1] 4.1244872972692299129

See what happens here: I would have expected R to stop its internal representation (and output) after ...6923 (that's what's in test.txt, after all!). But no, R "makes up" some additional decimal places and returns a number which is different from (and apparently more "precise" than) the one in test.txt. Could you explain to me what's going on here? Thanks a lot, and please excuse any false terminology in my question – this is a pretty new problem for me.
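(For reference – not part of the original question – the same behavior can be reproduced outside R. The sketch below assumes plain Python, whose `float` is the same IEEE 754 double type R uses, and inspects the exact value that actually gets stored when the text is parsed:)

```python
from decimal import Decimal

# Parse the same text R reads from test.txt into a double.
x = float("4.12448729726923")

# Decimal(float) shows the stored binary value exactly, with no
# output rounding. Its decimal expansion begins 4.124487297269229912...
# which is where R's "extra" digits come from: they are simply more
# digits of the nearest representable double, not invented precision.
print(Decimal(x))
```

So the digits beyond ...6923 were always there in memory; `options(digits = 20)` just asks R to print more of them.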

swolf
  • This is just a variant of [Why are these numbers not equal?](https://stackoverflow.com/questions/9508518/why-are-these-numbers-not-equal), also in the R FAQ that is automatically installed. Basically, many decimal numbers cannot be represented exactly by binary computers. – dcarlson Nov 10 '21 at 17:12
  • Ah, ok. So basically that means that R cannot represent the number in test.txt and "jumps" to the next representable number, right? – swolf Nov 10 '21 at 17:25
  • @swolf That's basically it. It gets as close as it can with a binary representation. Note this isn't specific to R; it happens in nearly all languages that use floating-point arithmetic. This is just how computers work in general. – MrFlick Nov 10 '21 at 17:55

0 Answers