The task is to speed up a summation using parMap, parListChunk, or something better. With the parallel code it actually runs slower.
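
For reference, a minimal sketch of the pattern in question, assuming a hypothetical euler :: Int -> Int as the expensive per-element function and an upper bound of 15000 (both guessed from the comments below); the real code may differ:

    import Control.Parallel.Strategies (parListChunk, rseq, using)

    -- Hypothetical stand-in for the expensive per-element work
    -- (Euler's totient, counted naively).
    euler :: Int -> Int
    euler n = length [k | k <- [1 .. n], gcd n k == 1]

    -- Sequential version: GHC compiles this down to a tight, non-allocating loop.
    sumEulerSeq :: Int -> Int
    sumEulerSeq n = sum (map euler [1 .. n])

    -- Parallel version: evaluate the mapped list in chunks across cores.
    -- The chunk size is a tuning knob: too small and spark overhead
    -- dominates, too large and the load balances poorly.
    sumEulerPar :: Int -> Int
    sumEulerPar n = sum (map euler [1 .. n] `using` parListChunk 1000 rseq)

    main :: IO ()
    main = print (sumEulerPar 15000)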

Edit: Facepalm... I overlooked how to run the application correctly.

Don't forget to enable the extra cores when running:

./myHaskellApp parameters +RTS -N4 -sstderr

where -N4 sets the number of cores.

Jeremiah Willcock
gorn
    Are these also slower for really big lists? – gspr Mar 06 '12 at 07:55
  • Depending on how expensive `euler` is, `5` may be much too small a chunk size. What if you try something more like 2000? (Although considering the smallitude of 15000, `euler` is probably fairly expensive, in which case this wouldn't work) – luqui Mar 06 '12 at 08:15
  • The best tool to analyze your parallel program is http://www.haskell.org/haskellwiki/ThreadScope – Chris Kuklewicz Mar 06 '12 at 13:48
  • Considering that the list doesn't even exist for the non parallel case (it compiles down to a non-allocating loop) but for the parallel case you must allocate a large list, chop it into chunks, and spark work. Are you really surprised? If you post the full code I might give it a look later. – Thomas M. DuBuisson Mar 06 '12 at 16:43
  • I will try larger chunks later. – gorn Mar 06 '12 at 20:23
  • Ah jeez! I didn't run the application with correct arguments to add more cores to the execution. – gorn Mar 07 '12 at 00:11
  • Performance Hint ;-} tag your haskell questions "Haskell" – Gene T Mar 07 '12 at 10:36

1 Answer

Always make sure that you're actually compiling with -threaded, optimizing with -O2, and running on a reasonable number of cores (e.g. +RTS -N4). Furthermore, check your garbage-collector statistics.
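
For example (a sketch only; the file name and program arguments are placeholders):

    ghc -O2 -threaded -rtsopts --make myHaskellApp.hs
    ./myHaskellApp parameters +RTS -N4 -sstderr

The -rtsopts flag lets the compiled binary accept the +RTS options, and -sstderr dumps the runtime's garbage-collector statistics to stderr so you can see how much time is spent in GC versus actual work.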

Don Stewart