
In Python 3 I can easily represent and use fairly large integers such as 2**128. In R, however, I run into problems at much smaller values, with 2^53 being the upper limit (and why that limit?). For example, the following problem can occur:

x11 <- 2^54 - 11
x12 <- 2^54 - 12
print(x11, digits = 22)
# [1] 18014398509481972
print(x12, digits = 22)
# [1] 18014398509481972
x11 == x12
# [1] TRUE

I know that I could scale values or use floating point and then deal with machine error, but I'm wondering if there is a library or some other workaround for using large integers directly. Note that the L suffix (e.g., 2L) does not solve this problem, as the check below shows.
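Poking at .Machine suggests where the 2^53 figure comes from (doubles carry a 53-bit significand, so above 2^53 not every integer is representable) and confirms that the L suffix only buys 32-bit integers:

.Machine$double.digits      # significand bits in a double
# [1] 53
2^53 == 2^53 + 1            # adjacent integers collide above 2^53
# [1] TRUE
.Machine$integer.max        # native (L) integers are 32-bit
# [1] 2147483647
.Machine$integer.max + 1L   # integer overflow gives NA with a warning
# [1] NA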

Versions and hardware matter here, so for the record: this is R 4.0.5 on macOS 11.5.1.

John
  • There are several packages concerning this limitation, for example `gmp`. https://stackoverflow.com/questions/2053397/long-bigint-decimal-equivalent-datatype-in-r could be a useful resource. – Martin Gal Aug 07 '21 at 10:16
  • Thanks @MartinGal. That looks like a highly related thread. One issue not mentioned there, and implicit here, is that R *seems* to represent substantially larger integers than 32 bit; it just can't do anything reasonable with them. This illusion can be problematic. – John Aug 16 '21 at 13:51

1 Answer


You can use the gmp package (see https://www.r-bloggers.com/2019/08/really-large-numbers-in-r/), which provides arbitrary-precision integers:

library(gmp)

num <- as.bigz(2)   # bigz: arbitrary-precision integer
x11 <- num^54 - 11  # the arithmetic happens in bigz, so nothing is rounded
x12 <- num^54 - 12
print(x11)
# Big Integer ('bigz') :
# [1] 18014398509481973
print(x12)
# Big Integer ('bigz') :
# [1] 18014398509481972
x11 == x12
# [1] FALSE
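Note that the power is computed as num^54, entirely in bigz; converting an already-rounded double such as 2^54 - 11 after the fact would keep the error. A small sketch along the same lines, also covering the 2**128 case from the question (gmp's as.bigz accepts strings, which avoids any double round-trip):

as.bigz(2)^128                 # arbitrary precision, like Python's 2**128
# Big Integer ('bigz') :
# [1] 340282366920938463463374607431768211456
as.bigz(2^54 - 11)             # too late: the double has already rounded
# Big Integer ('bigz') :
# [1] 18014398509481972
as.bigz("18014398509481973")   # parsing from a string stays exact
# Big Integer ('bigz') :
# [1] 18014398509481973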
iago