I'm using dynamic programming (in R) to fill about 20,000 cells in an array. I need the exact integer in the final cell. The function only requires addition, multiplication, and logical expressions.
If I run my function with double-precision numbers, it does the calculations in about 30 s, but the exact answer is roughly 50 digits long, and of course a double can't represent it exactly.
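A quick illustration of the problem with doubles:

# R doubles cannot distinguish consecutive integers beyond 2^53 (~16 digits),
# so a ~50-digit result cannot possibly be exact
2^53 == 2^53 + 1   # TRUE: the "+ 1" is silently lost
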
When I try it with mpfr numbers or big integers ('bigz' from the gmp package), the function runs at least an order of magnitude slower. Is there a way to get large numbers, high precision, and speed all at once?
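(For reference, by 'bigz' I mean gmp's arbitrary-precision integers; a table of them can be set up along these lines. This is just a sketch of the layout, not my actual function:)

library(gmp)
n <- 201                               # nchar(s) + 1 for a 200-character sequence
d_bigz <- as.bigz(rep(0, n * n))       # flat vector holding an (n x n) table of exact integers
idx <- function(i, j) (j - 1) * n + i  # column-major 2-D -> 1-D index
d_bigz[idx(3, 2)] <- as.bigz(1)        # e.g. write 1 into cell [3, 2]
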
(For context, I'm calculating Motzkin numbers to count how many stable structures a given RNA sequence can form. Here's the code, using mpfr numbers; "s" is a string about 200 characters long.)
library(Rmpfr)  # provides mpfr numbers and the mpfrArray class

Motzkin_v7 <- function(s) {
  # allowed base pairings: each base maps to the bases it may pair with
  b <- list(A = c("U"), C = c("G"), G = c("C", "U"), U = c("A", "G"))

  # (nchar(s)+1) x (nchar(s)+1) table of 256-bit mpfr zeros
  d <- new("mpfrArray",
           rep(mpfr(0, precBits = 256), times = (nchar(s) + 1)^2),
           Dim = c(as.integer(nchar(s) + 1), as.integer(nchar(s) + 1)))
  # double-precision alternative (fast, but can't hold the exact result):
  # d <- array(0, dim = c(nchar(s) + 1, nchar(s) + 1))

  # base cases: set the main diagonal and the four diagonals below it to 1
  for (m in 1:5) {
    for (i in m:nrow(d)) {
      d[i, (i + 1 - m)] <- mpfr(1, precBits = 256)
    }
  }

  # main recurrence
  for (i in 6:nrow(d)) {
    cat(i, nrow(d), "\n")  # progress indicator
    for (j in (i - 5):1) {
      d[i, j] <- d[i, (j + 1)]
      for (k in 4:(i - j - 1)) {
        # add the split term only when the bases at positions j and j + k can
        # pair; the %in% test gives TRUE/FALSE, which multiplies as 1/0
        d[i, j] <- d[i, j] +
          (d[i, (j + 1 + k)] * d[(j + k), (j + 1)] *
             (u(s, j, j) %in% b[[u(s, (j + k), (j + k))]]))
      }
    }
  }

  return(d[nrow(d), 1])
}
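
(A call looks roughly like this, with a short made-up sequence in place of the real 200-character one, and with u() filled in as a plain one-character substring accessor, which is essentially what the function above needs:)

u <- function(s, from, to) substr(s, from, to)  # assumed single-character accessor; not shown above
s <- "GGGAAAUCCC"                               # short placeholder sequence
Motzkin_v7(s)                                   # prints progress, returns the count as a 256-bit mpfr number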