I have implemented the Metropolis-Hastings algorithm for one of my projects, and I use it together with the "snowfall" package to run several MCMC chains on different cores.
But here is what happens: as soon as I start a run of, say, 30,000 iterations, each core immediately grabs a 130 MB chunk of RAM, and with every iteration each of them takes roughly 300 KB more. So pretty soon they have used up all of my RAM and my code stops working. I never grow my matrices dynamically (everything is preallocated), and I remove unnecessary variables at the end of each iteration.
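This is roughly how I see the growth; just a minimal sketch (not my actual code) of a per-iteration memory check using base R's gc():

## Sketch only: report the total memory R currently has in use,
## every 1,000 iterations, using the "(Mb)" column of gc()'s summary.
track_mem <- function(i, every = 1000) {
  if (i %% every == 0) {
    used_mb <- sum(gc()[, 2]) # Mb used by Ncells + Vcells
    cat(sprintf("iteration %d: %.1f Mb in use\n", i, used_mb))
  }
}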
So what I am suspicious about is the variables that I overwrite in each iteration (because basically there is nothing else left), but I have not seen or read anywhere that overwriting a variable can increase the total memory used by R. Here is an example of my code:
mcmc.func <- function(...) {
  nitr <- 30000
  out <- matrix(NA, nitr, 6) # preallocated output containing all the parameters' posterior draws
  j <- 1
  for (i in 1:nitr) {
    # propose candidates until one satisfies the constraints
    repeat {
      news <- t(as.matrix(mvrnorm(n = 1, (means), (diag(x = sd, 6, 6) ^ 2) * scale))) # mvrnorm() from the MASS package
      if (check.sample(news)) break # check the constraints
    }
    ... doing some stuff ... (finding prior (pratio) and likelihood (cost))
    # Metropolis-Hastings acceptance step
    if (runif(1, 0, 1) < ((exp(-cost_c) / exp(-oldCost)) * pratio)) {
      out[j, ] <- c(news, cost_c) # filling up my matrix
      ... other stuff ...
    }
    j <- j + 1
  }
} # end of the function
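For what it's worth, the preallocated output matrix itself is tiny; a quick back-of-the-envelope check (just an illustration, 30,000 x 6 doubles at 8 bytes each):

## Size of one chain's preallocated output matrix
out <- matrix(NA_real_, 30000, 6)
print(object.size(out), units = "Mb") # roughly 1.4 Mb
## so the ~300 KB growth per iteration seems to come from somewhere else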
And here is the snowfall code:

library(snowfall)
sfInit(cpus = 15, parallel = TRUE, type = "SOCK") # initiate the cluster
sfSource("C:/Users/hamzed/Dropbox/Var_setup.R") # shared variables
sfSource("C:/Users/hamzed/Dropbox/Utility.r") # all the functions I use
sfSource("C:/Users/hamzed/Dropbox/Metropolis_func.R") # the Metropolis algorithm
sfExportAll() # export all global variables from the master to all slaves

wrapper <- function(c) {
  ## run the MCMC for different files on different cores
  return(mcmc.func(c))
}

outputNB <- sfClusterApplyLB(c(1:3, 5, 7:8), wrapper)
sfCat()
sfStop() ## stop the cluster
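To see where the memory goes, one thing I could do is poll the workers from the master. A minimal sketch (assuming snowfall's sfClusterCall, which runs a function on every slave and returns the results as a list):

## Sketch: ask every slave how much memory its R process currently uses.
mem_per_worker <- sfClusterCall(function() sum(gc()[, 2])) # total Mb in use on each slave
print(unlist(mem_per_worker))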
Does anybody have any idea?