
I have read that if the body of a foreach is very simple, the overhead of using Parallel.ForEach is not worth the cost. So I made a simple WPF application to do some tests. I have this code:

//Parallel.ForEach
txtLog.Text = txtLog.Text + "\r\n\r\n\r\nParallel.ForEach starts at " + DateTime.Now;
miSw.Restart();
Parallel.ForEach(miLstInt,
    (iteradorInt, state) =>
    {
        if (iteradorInt >= 500000)
        {
            state.Stop();
        }
    });
miSw.Stop();

txtLog.Text = txtLog.Text + "\r\nTotal Parallel.ForEach time: " + miSw.ElapsedMilliseconds.ToString();



//Foreach
txtLog.Text = txtLog.Text + "\r\n\r\nforeach starts at " + DateTime.Now;
miSw.Restart();
foreach (int i in miLstInt)
{
    if (i >= 500000)
    {
        break;
    }
}
miSw.Stop();
txtLog.Text = txtLog.Text + "\r\nTotal foreach time: " + miSw.ElapsedMilliseconds.ToString();

I have a button that, when I click it, runs both loops and shows the results in a TextBox.

The first time I run it, the Parallel.ForEach takes about 29 ms and the foreach about 3 ms. But the second time and every time after that, the Parallel.ForEach takes 0 ms while the foreach takes 2 or 3 ms (more often 3 than 2), and the results are more stable.

So my doubt is: why is it slower the first time but faster afterwards? Should I take this into account? If I am going to run the same operation many times, is Parallel.ForEach worth it even though the first run is slower, given that the following runs are faster?

I strongly suspect this is just regular JIT time, because you are measuring time in a less-than-optimal manner - https://stackoverflow.com/questions/457605/how-to-measure-code-performance-in-net... Consider just closing as a duplicate if you agree. – Alexei Levenkov Oct 19 '17 at 17:37
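For context, a minimal sketch of one common measurement approach (the MeasureAverageMs helper is hypothetical, not from the question or the linked post): do one untimed warm-up run, which pays the one-time JIT and thread-spawn costs, then average many timed runs.

using System;
using System.Diagnostics;

// Hypothetical helper: warm up once, then average many runs.
static double MeasureAverageMs(Action action, int iterations = 100)
{
    action(); // warm-up run pays one-time costs (JIT, thread pool spin-up) outside the timing

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        action();
    }
    sw.Stop();
    return (double)sw.ElapsedMilliseconds / iterations;
}

// Usage with the question's list, e.g.:
// double parallelMs  = MeasureAverageMs(() => Parallel.ForEach(miLstInt, n => { /* work */ }));
// double sequentialMs = MeasureAverageMs(() => { foreach (int n in miLstInt) { /* work */ } });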

1 Answer


Parallel.ForEach uses the managed thread pool, so that first-run cost may represent the initial spawning of threads.

The threads would be left in the pool and re-used on subsequent runs.
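If that one-time cost matters in practice, one possible mitigation (a hypothetical suggestion, not something the answer states) is to pre-size the thread pool at application startup so worker threads are created without the pool's usual injection delay. A minimal sketch:

using System;
using System.Threading;

// Hypothetical mitigation: raise the minimum worker-thread count at startup so the pool
// creates threads on demand without its usual injection delay, shrinking the first-run cost.
// The value 2 * ProcessorCount is purely illustrative.
ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
ThreadPool.SetMinThreads(2 * Environment.ProcessorCount, minIo);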

Statistically, you probably want to generate bigger workloads to measure performance differences - and you can't create a general rule for this, as different workloads benefit more or less from parallelization.
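To illustrate that point, a minimal sketch (assuming a .NET console app with top-level statements; DoWork is a placeholder for a CPU-bound body) where the per-item work is heavy enough for parallelism to have a chance to pay off:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

var data = Enumerable.Range(1, 1_000_000).ToArray();

var sw = Stopwatch.StartNew();
foreach (int n in data)
{
    DoWork(n);
}
sw.Stop();
Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

sw.Restart();
Parallel.ForEach(data, n => DoWork(n));
sw.Stop();
Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");

// Placeholder CPU-bound work; with a trivial body (like the comparison in the question)
// the parallel version often loses to the plain foreach.
static double DoWork(int n) => Math.Sqrt(n) * Math.Sin(n);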

Fenton
Though your statement may be true, the two pieces of code provided by the OP are not identical; the parallel loop may stop sooner, before iterating over all of the items. – M.kazem Akhgary Oct 19 '17 at 17:44
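To illustrate that comment, a minimal sketch reusing miLstInt, txtLog and the 500000 threshold from the question: with state.Stop() the number of items actually visited varies from run to run, whereas the sequential break is deterministic.

// Requires using System.Threading; (for Interlocked) and System.Threading.Tasks;
int visited = 0;
Parallel.ForEach(miLstInt, (n, state) =>
{
    Interlocked.Increment(ref visited);   // count items actually examined
    if (n >= 500000)
    {
        state.Stop();   // requests a stop; iterations already running still finish
    }
});
txtLog.Text = txtLog.Text + "\r\nParallel loop visited " + visited + " items this run.";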