
Does R store the smallest possible representable scientific value?

To clarify: on my current machine:

>1e-300
[1] 1e-300

While

>1e-400
[1] 0

Through trial and error I know it is somewhere around the e-324 mark on my machine (where it also starts losing precision).

>1e-324
[1] 0
>5e-324
[1] 4.940656e-324

I've searched through the .Machine list and none of the values it stores contains either the value or the exponent I'm looking for.

Edit:

Linked threads on the side indicate that this should be .Machine$double.eps, which is 2.220446e-16. Clearly this is no longer the case?

Scott Ritchie
  • So you're going to limit "size" to the quantum foam level? Or would you like to go smaller? I hereby declare 2^(-(factorial(googolplex))) to be of scientific interest! Anyway, seriously, take a look at `gmp` before asking for max/min size of numbers. – Carl Witthoft Nov 05 '13 at 01:24
  • If you must know, I'm attempting to render a manhattan plot of -log10 p-values for covariates in my data. So for genes on the Y chromosome, obviously, the p-values are 0. `plot` doesn't like `Inf` in its ylim, so I'm substituting the 0s with the "minimum". – Scott Ritchie Nov 05 '13 at 01:28
  • There are an estimated `10^80` atoms in the observable universe. Let's just ignore all numbers larger than that too! – Scott Ritchie Nov 05 '13 at 01:29
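On the plotting problem raised in the comments above: a common workaround is to clamp the underflowed zeros to the smallest positive subnormal before taking logs, so `-log10` stays finite. A minimal sketch (the `pvals` vector here is illustrative, not from the actual data):

```r
pvals <- c(1e-8, 5e-310, 0, 0.03)   # illustrative p-values; the 0 came from underflow
pvals[pvals == 0] <- 5e-324          # clamp zeros to the smallest positive subnormal
neglog <- -log10(pvals)              # all finite now; max is about 323.3
```

The clamped points then plot at roughly y = 323, visually flagging "smaller than a double can represent" without breaking `ylim`.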

2 Answers


The smallest normalised value is `.Machine$double.xmin`, as described on this page. The Wikipedia entry is very interesting: it also gives the subnormal limit, 2^-1074, which is approximately 4.9406564584124654 x 10^-324 (from Wikipedia, as Ben Bolker mentioned in the comments). Your output in R matches this value.

`double.eps` is not what you think. It is the smallest number you can add to 1 such that the result is not recognised as 1.

I suggest you read about how doubles are stored in memory and the basics of double operations. Once you understand how a double is stored, the lower limit becomes obvious.
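To see the three limits side by side, here is a short R session (the results assume an IEEE 754 double platform, which covers essentially every machine R runs on):

```r
# Smallest *normalised* positive double: 2^-1022
.Machine$double.xmin          # 2.225074e-308

# Smallest *subnormal* positive double: 2^-1074
2^-1074                       # 4.940656e-324
2^-1074 == 5e-324             # TRUE: the literal 5e-324 rounds to this subnormal

# Machine epsilon: smallest x with 1 + x != 1, NOT the smallest double
.Machine$double.eps           # 2.220446e-16
1 + .Machine$double.eps > 1   # TRUE
1 + .Machine$double.eps/2 > 1 # FALSE: eps/2 is absorbed by rounding
```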

BlueTrin
  • This seems to only hold true for addition and subtraction: `> 5e-320 + 2.225074e-308; [1] 2.225074e-308` While ordering, multiplication, and division work for smaller values: `> 5e-310/2; [1] 2.5e-310; > 5e-310*2; [1] 1e-309; > 5e-310 < 5e-309; [1] TRUE; ` – Scott Ritchie Nov 05 '13 at 00:41
  • This is the smallest normalised value. I added a link to Wikipedia, which contains the smallest subnormalised value. Read also http://stackoverflow.com/questions/8341395/what-is-a-subnormal-floating-point-number and http://en.wikipedia.org/wiki/Denormal_number – BlueTrin Nov 05 '13 at 00:46
  • Thanks! I'm confused why `R`'s limit seems to be in between the normal and subnormal limits. `4.940656e-324` seems to be the minimum value. Anything below `8e-324` evaluates to that number, and `2.47e-324` is approximately the point where it evaluates to 0. – Scott Ritchie Nov 05 '13 at 00:59
  • From `?.Machine`: `Note that on most platforms smaller positive values than ‘.Machine$double.xmin’ can occur. On a typical R platform the smallest positive double is about ‘5e-324’.` (This doesn't say *why*, exactly, but it does document the phenomenon.) – Ben Bolker Nov 05 '13 at 03:32
  • The number you quote `4.940656e-324` is exactly `2^(-1074)`, as pointed out in the wiki link in the answer ... – Ben Bolker Nov 05 '13 at 03:38
  • Thanks for clarifying @BenBolker, my brain skipped over the `2^` part and instead interpreted it as `e-1074`. – Scott Ritchie Nov 08 '13 at 01:24

The accepted answer remains correct for base R, but the `Rmpfr` package enables arbitrary-precision arithmetic. Example:

First, note the issue in base R:

> p <- c("5e-600","2e-324","3e-324","4e-324", "5e-324","6e-324","7.1e-324","8e-324")
> as.numeric(p)
[1]  0.000000e+00  0.000000e+00 4.940656e-324 4.940656e-324 4.940656e-324 4.940656e-324
[7] 4.940656e-324 9.881313e-324

Observe that as we near the limit, precision becomes an issue: the values from 3e-324 through 7.1e-324 all collapse to 4.940656e-324.

Now use the `mpfr` function from the `Rmpfr` package to cast the strings as floats:

> library(Rmpfr)
> .N <- function(.) mpfr(., precBits = 20)
> .N(p)
8 'mpfr' numbers of precision  20   bits 
[1] 5.0000007e-600   2.00000e-324 2.9999979e-324   4.00000e-324 4.9999966e-324 5.9999959e-324
[7]   7.09999e-324   8.00000e-324
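Since the original motivation was -log10 of extremely small p-values, it is worth noting that arithmetic on mpfr numbers also works well past the double range, so the log can be taken before ever converting back to a double. A sketch (assuming `Rmpfr` is installed; `mpfr` accepts a string so the value never passes through a double):

```r
library(Rmpfr)
x <- mpfr("5e-600", precBits = 120)  # far below the smallest double
y <- -log10(x)                        # ~599.3, computed without underflow
as.numeric(y)                         # back to a plain double, fine for plotting
```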
Vince
  • And if you want even bigger numbers you can use the `Brobdingnag` package (i.e. my answer to https://stackoverflow.com/questions/22466328/how-to-work-with-large-numbers-in-r/22467504#22467504). It seems every few years I forget then relearn the issues with floating point numbers through stackoverflow – Scott Ritchie Oct 26 '17 at 00:03
  • Thanks @ScottRitchie. Do you know if `Brobdingnag` works with very small numbers? From the package doc, it seems that it works only for very large numbers; however, it seems like both should work. – Vince Oct 26 '17 at 15:50
  • I've just tried to play around a bit, and it seems to lose the negative in the exponent for extremely small numbers, so I think the answer is no. – Scott Ritchie Oct 27 '17 at 01:20