Disclaimer
I was not sure whether to post this here or on CV, but after reading what is on topic on CV, I think it is more R specific than purely statistical. Thus, I posted it here.
Problem
Citing from ?.Machine:

double.eps: the smallest positive floating-point number x such that 1 + x != 1. It equals double.base ^ ulp.digits if either double.base is 2 or double.rounding is 0; otherwise, it is (double.base ^ double.ulp.digits) / 2. Normally 2.220446e-16.
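To make sure I am reading this correctly, I did a quick sanity check in my session (the values below are from my machine, which uses standard IEEE 754 doubles):

.Machine$double.eps            # 2.220446e-16 on my machine
.Machine$double.eps == 2 ^ -52 # TRUE: double.base ^ ulp.digits, with base 2 and ulp.digits -52
1 + .Machine$double.eps != 1   # TRUE: the defining property quoted above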
Thus, I would assume that all.equal(1 + .Machine$double.eps, 1.0) returns FALSE, which it does not. Reading the documentation of all.equal, I see that the default tolerance is .Machine$double.eps ^ 0.5.
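For reference, here is what the default tolerance looks like in my session (the 1e-9 and 1e-7 values are just my own illustrations of differences below and above that tolerance):

.Machine$double.eps ^ 0.5        # 1.490116e-08, the default tolerance
isTRUE(all.equal(1 + 1e-9, 1.0)) # TRUE,  relative difference below the default tolerance
isTRUE(all.equal(1 + 1e-7, 1.0)) # FALSE, relative difference above the default tolerance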
Fair enough, but I observe some odd results which I do not understand:
isTRUE(all.equal(1.0 + .Machine$double.eps, 1.0, tolerance = .Machine$double.eps)) # TRUE
isTRUE(all.equal(1.0 - .Machine$double.eps, 1.0, tolerance = .Machine$double.eps)) # FALSE
isTRUE(all.equal(0.9 + .Machine$double.eps, 0.9, tolerance = .Machine$double.eps)) # FALSE
isTRUE(all.equal(2.0 + .Machine$double.eps, 2.0, tolerance = .Machine$double.eps)) # TRUE
Thus, all.equal picks up the difference correctly only for numbers below 1. The last explanation I could think of is that all.equal works on a relative difference scale by default, so I tried to override this behaviour, but with no success either:
isTRUE(all.equal(1.0 + .Machine$double.eps, 1.0,
                 tolerance = .Machine$double.eps, scale = 1)) # TRUE
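If it helps, this is my (possibly wrong) understanding of the relative difference that all.equal.numeric compares against the tolerance; rel_diff below is just my own sketch, not the actual implementation:

eps <- .Machine$double.eps

# mean relative difference, as I understand it from ?all.equal
rel_diff <- function(target, current) {
  mean(abs(target - current)) / mean(abs(target))
}

rel_diff(1.0 + eps, 1.0) # just under eps, because we divide by 1 + eps
rel_diff(0.9 + eps, 0.9) # roughly eps / 0.9, i.e. larger than eps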
Apparently, I have a massive misunderstanding of how floating-point numbers work in R, which leads me to these
Questions
- How do I correctly compare two numbers in R with "maximum" precision (with respect to floating-point precision)?
- Why are the results of all.equal different for numbers below and above 1?
- [Bonus Question]: What is the rationale for using .Machine$double.eps ^ 0.5 as the default tolerance instead of the un-square-rooted version? Is it simply to relax the test a bit?