
I have a problem that I want to speed up with threading or multiprocessing. I've tried to set up a simple test to see if I can get an understanding of the code process. However, when I tried it with a simple loop, it takes 17.79 sec with the parallel programming compared to 0.39 sec with normal R programming. Could anyone have a look at the code and comment on what I might be doing wrong?

```r


# load foreach so foreach() and %dopar% are available
library(foreach)

# use all but three of the available cores
n.cores <- parallel::detectCores() - 3

#create the cluster
my.cluster <- parallel::makeCluster(
  n.cores, 
  type = "PSOCK"
  )

#check cluster definition (optional)
print(my.cluster)

#register it to be used by %dopar%
doParallel::registerDoParallel(cl = my.cluster)

#check if it is registered (optional)
foreach::getDoParRegistered()

#how many workers are available? (optional)
foreach::getDoParWorkers()


m <- 100000

# baseline: plain sequential loop
start_time <- Sys.time()
x <- vector()
for(i in 1:m){
  x[i] <- sqrt(i)
  }
x
end_time <- Sys.time()
end_time - start_time



# same computation, sent to the cluster one iteration at a time via %dopar%
start_time1 <- Sys.time()

x <- foreach(
  i = 1:m, 
  .combine = 'c'
) %dopar% {
    sqrt(i)
  }
x
end_time1 <- Sys.time()
end_time - start_time
end_time1 - start_time1


parallel::stopCluster(cl = my.cluster)

```
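A side note on the measurement itself: printing `x` inside the timed region adds console output to the measurement when run interactively, and subtracting `Sys.time()` values works but is easy to mix up. Below is a minimal sketch, assuming a doParallel cluster is registered as in the code above (before `stopCluster()` is called), of the same comparison wrapped in `system.time()`:

```r
library(foreach)

m <- 100000

# sequential baseline; "elapsed" is wall-clock time
seq_time <- system.time({
  x1 <- vector()
  for (i in 1:m) x1[i] <- sqrt(i)
})

# parallel version, one tiny task per iteration
par_time <- system.time({
  x2 <- foreach(i = 1:m, .combine = 'c') %dopar% sqrt(i)
})

seq_time["elapsed"]
par_time["elapsed"]
```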

end_time - start_time: Time difference of 0.3915679 secs

end_time1 - start_time1: Time difference of 17.79209 secs

I was hoping that the parallel programming would increase run speed, not decrease it.

  • Related: https://stackoverflow.com/questions/10411871/why-is-foreach-do-sometimes-slower-than-for – MrFlick Nov 17 '22 at 14:35
  • See also: https://stackoverflow.com/questions/14614306/why-is-the-parallel-package-slower-than-just-using-apply. Running code in parallel isn't "free". There's a cost. If the code you are running is very simple, the extra cost of doing it in parallel isn't worth it. – MrFlick Nov 17 '22 at 14:36

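Following the comments above, the slowdown is consistent with the per-task cost of a PSOCK cluster: each of the 100,000 `%dopar%` iterations is serialized, sent to a worker over a socket, and sent back, which costs far more than one `sqrt()` call. Here is a minimal sketch (not from the original post) of amortizing that overhead by handing each worker one block of indices via `parallel::splitIndices()`, so every task does thousands of square roots instead of one:

```r
library(foreach)
library(doParallel)

n.cores <- parallel::detectCores() - 3
my.cluster <- parallel::makeCluster(n.cores, type = "PSOCK")
doParallel::registerDoParallel(cl = my.cluster)

m <- 100000

# split 1:m into one contiguous chunk per worker
blocks <- parallel::splitIndices(m, n.cores)

# each %dopar% task now computes a whole block of square roots,
# so the communication overhead is paid n.cores times, not m times
x <- foreach(idx = blocks, .combine = 'c') %dopar% {
  sqrt(idx)
}

parallel::stopCluster(cl = my.cluster)
```

Even then, for work this light the fully vectorized `sqrt(1:m)` will almost certainly remain the fastest option; parallelism pays off when each task carries substantial computation.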
0 Answers