
I am new to foreach and am trying to do parallel calculations. My Windows 8.1 laptop has 4 cores. The benchmark below shows that %dopar% is slower than %do%. Why? What am I missing here?

Reproducible example

library(doParallel)
#> Loading required package: foreach
#> Warning: package 'foreach' was built under R version 3.6.1
#> Loading required package: iterators
#> Loading required package: parallel
library(microbenchmark)
#> Warning: package 'microbenchmark' was built under R version 3.6.1

#find number of cores
parallel::detectCores()
#> [1] 4

no_cores <- detectCores() - 1  
cl <- makeCluster(no_cores)  
registerDoParallel(cl)

microbenchmark(x = foreach(i=1:10) %do% (i+1),
               y = foreach(i=1:10) %dopar% (i+1))
#> Unit: milliseconds
#>  expr       min        lq     mean   median       uq       max neval
#>     x  8.393377  9.389422 12.89608 12.60476 15.14176  32.61851   100
#>     y 14.840381 16.625099 21.05566 18.30625 23.38002 109.54543   100

## go back to sequential calculations
stopCluster(cl) 
registerDoSEQ()
umair durrani
  • Parallel processing has all kinds of overhead. On a simple computation like this, the overhead outweighs any potential benefit from the parallelization. – Axeman Nov 29 '19 at 21:19
  • Thanks @Axeman. So, my code is correct, but I should see benefits with a more complicated function? – umair durrani Nov 29 '19 at 21:24
  • Potentially, depending on how much data has to be moved between the processes. Have a go and try it out. – Axeman Nov 29 '19 at 21:30
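To illustrate the point made in the comments: with tasks like `i+1`, the cost of shipping each task to a worker process and collecting the result dwarfs the computation itself, so %dopar% loses. When each iteration does substantial work, the parallel backend can pull ahead. A minimal sketch, reusing the same cluster setup as the question (the `heavy()` helper and the iteration/size counts are invented for illustration; actual timings depend on your machine):

```r
library(doParallel)      # also loads foreach and parallel
library(microbenchmark)

cl <- makeCluster(parallel::detectCores() - 1)
registerDoParallel(cl)

# A deterministic, CPU-bound task: each iteration now does real work,
# so the per-task compute cost can outweigh the communication overhead.
heavy <- function(i) sum(sqrt(seq_len(2e6) + i))

microbenchmark(
  sequential = foreach(i = 1:8, .combine = c) %do%    heavy(i),
  parallel   = foreach(i = 1:8, .combine = c) %dopar% heavy(i),
  times = 5
)

## go back to sequential calculations
stopCluster(cl)
registerDoSEQ()
```

Note that both variants return the same values; only the timing should differ. If the gap is still small, increasing the work per iteration (e.g. the `2e6` above) should make the crossover point visible.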

0 Answers