
I'm trying to run a small test of %dopar%, and it always comes out slower. Here's some dummy code and output. I'm running this on Windows 7 x64 with a Core i7-2600K.

library(foreach)
library(doSNOW)
registerDoSNOW(makeCluster(3, type = "SOCK"))  # 3-worker socket cluster

N <- 3*(10^4)

system.time(foreach(i = 1:N) %do% {sum(rnorm(N))})     # sequential
system.time(foreach(i = 1:N) %dopar% {sum(rnorm(N))})  # parallel

Here's the output:

> system.time(foreach(i = 1:N) %do% {sum(rnorm(N))})
   user  system elapsed 
  90.39    0.00   90.42 

> system.time(foreach(i = 1:N) %dopar% {sum(rnorm(N))})
   user  system elapsed 
  17.00    0.89  177.11 
user1357015
    Parallel processing is not magic pixie dust that automatically makes everything faster. If each task is very simple, the overhead of splitting/tracking the parallelism may outweigh the gains. – joran Apr 22 '13 at 19:14
  • The thing is, I'm only splitting across three cores. Further, as can be seen at the link, the author gets a faster result: http://www.r-bloggers.com/simple-examplehow-to-use-foreach-and-dosnow-packages-for-parallel-computation/ – user1357015 Apr 22 '13 at 19:17
  • @user1357015 the overhead problem already occurs at two threads, the problem just becomes more pronounced when using more threads. – Paul Hiemstra Apr 22 '13 at 19:20
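
To illustrate the overhead point from the comments: each task above is tiny relative to the cost of dispatching it to a worker, so 30,000 tiny tasks spend most of their elapsed time on scheduling and communication rather than computation. Below is a minimal sketch (the chunking scheme is my own illustration, not from the original post) that keeps the same total work but hands each of the three workers one coarse task, so the per-task dispatch overhead is paid 3 times instead of 30,000:

library(foreach)
library(doSNOW)

cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)

N <- 3*(10^4)

# Same 30,000 calls to sum(rnorm(N)), but wrapped into 3 coarse tasks:
# each worker receives one chunk, so scheduling/communication happens
# 3 times rather than once per iteration.
system.time(
  foreach(chunk = 1:3, .combine = c) %dopar% {
    sapply(1:(N/3), function(i) sum(rnorm(N)))
  }
)

stopCluster(cl)  # shut the workers down when finished

With tasks this coarse, the elapsed time should fall well under the 90 s sequential run; the exact speedup depends on the machine, but the pattern is general: %dopar% pays off only when each task's compute time dwarfs its dispatch cost.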

0 Answers