
Since this question is about the increment operator and speed differences with prefix/postfix notation, I will describe the question very carefully lest Eric Lippert discover it and flame me!

(more detail on why I am asking can be found at http://www.codeproject.com/KB/cs/FastLessCSharpIteration.aspx?msg=3899456#xx3899456xx/)

I have four snippets of code as follows:-

(1) Separate, Prefix:

    for (var j = 0; j != jmax;) { total += intArray[j]; ++j; }

(2) Separate, Postfix:

    for (var j = 0; j != jmax;) { total += intArray[j]; j++; }

(3) Indexer, Postfix:

    for (var j = 0; j != jmax;) { total += intArray[j++]; }

(4) Indexer, Prefix:

    for (var j = -1; j != last;) { total += intArray[++j]; } // last = jmax - 1

What I was trying to do was prove or disprove whether there is a performance difference between prefix and postfix notation in this context (i.e. a local variable, so not volatile, not changeable from another thread, etc.) and, if there is, why that would be.
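
A minimal harness along these lines is enough to reproduce the comparison. This is just a sketch using `System.Diagnostics.Stopwatch`, not the actual code from the linked article (which unrolls the loops and differs in detail); the array size is a placeholder:

    using System;
    using System.Diagnostics;

    class IncrementTiming
    {
        static void Main()
        {
            var intArray = new int[16000000];   // placeholder size
            int jmax = intArray.Length;

            // (2) Separate, Postfix
            long total = 0;
            var sw = Stopwatch.StartNew();
            for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
            sw.Stop();
            Console.WriteLine("Separate, Postfix: {0} ms (total {1})", sw.ElapsedMilliseconds, total);

            // (3) Indexer, Postfix
            total = 0;
            sw = Stopwatch.StartNew();
            for (var j = 0; j != jmax;) { total += intArray[j++]; }
            sw.Stop();
            Console.WriteLine("Indexer, Postfix:  {0} ms (total {1})", sw.ElapsedMilliseconds, total);
        }
    }

(In a real run each case would be warmed up and repeated several times before trusting the numbers.)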

Speed testing showed that:

  • (1) and (2) run at the same speed as each other.

  • (3) and (4) run at the same speed as each other.

  • (3)/(4) are ~27% slower than (1)/(2).

Therefore I am concluding that there is no performance advantage to choosing prefix notation over postfix notation per se. However, when the Result of the Operation is actually used, the code is slower than when the result is simply thrown away.
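
To spell out what I mean by the Result of the Operation being used or thrown away, here is a trivial illustration (not from the test code):

    using System;

    class ResultOfTheOperation
    {
        static void Main()
        {
            int j = 0;

            j++;          // statement form: the result is thrown away, so ++j; compiles to the same IL
            ++j;          // j is 2 either way

            int a = j++;  // result used: a gets the old value (2), then j becomes 3
            int b = ++j;  // result used: j becomes 4 first, then b gets the new value (4)

            Console.WriteLine("j={0} a={1} b={2}", j, a, b);   // prints j=4 a=2 b=4
        }
    }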

I then had a look at the generated IL using Reflector and found the following:

  • The number of IL bytes is identical in all cases (one way to verify this without Reflector is sketched after this list).

  • The .maxstack varied between 4 and 6, but I believe that is used only for verification purposes and so is not relevant to performance.

  • (1) and (2) generated exactly the same IL, so it's no surprise that the timing was identical. So we can ignore (1).

  • (3) and (4) generated very similar code - the only relevant difference being the positioning of a dup opcode to account for the Result of the Operation. Again, no surprise about timing being identical.
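
As an aside, the IL byte counts can also be checked without Reflector. A rough sketch using reflection (the method names here are hypothetical stand-ins for my actual test methods):

    using System;
    using System.Reflection;

    class IlByteCount
    {
        static int SeparatePostfix(int[] intArray, int jmax)
        {
            var total = 0;
            for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
            return total;
        }

        static int IndexerPostfix(int[] intArray, int jmax)
        {
            var total = 0;
            for (var j = 0; j != jmax;) { total += intArray[j++]; }
            return total;
        }

        static void Main()
        {
            foreach (var name in new[] { "SeparatePostfix", "IndexerPostfix" })
            {
                MethodInfo method = typeof(IlByteCount).GetMethod(
                    name, BindingFlags.NonPublic | BindingFlags.Static);
                byte[] il = method.GetMethodBody().GetILAsByteArray();
                Console.WriteLine("{0}: {1} IL bytes", name, il.Length);
            }
        }
    }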

So I then compared (2) and (3) to find out what could account for the difference in speed:

  • (2) uses a ldloc.0 op twice (once as part of the indexer and then later as part of the increment).

  • (3) uses ldloc.0 followed immediately by a dup op.

So the relevant IL for incrementing j in (1) (and (2)) is:

    // ldloc.0 already used once for the indexer operation higher up
    ldloc.0
    ldc.i4.1
    add
    stloc.0

(3) looks like this:

    ldloc.0
    dup // j on the stack for the *Result of the Operation*
    ldc.i4.1
    add
    stloc.0

(4) looks like this:

    ldloc.0
    ldc.i4.1
    add
    dup // j + 1 on the stack for the *Result of the Operation*
    stloc.0

Now (finally!) to the question:

Is (2) faster because the JIT compiler recognises the pattern ldloc.0/ldc.i4.1/add/stloc.0 as simply incrementing a local variable by 1 and optimizes it? (And does the presence of a dup in (3) and (4) break that pattern, so that the optimization is missed?)

And a supplementary: If this is true then, for (3) at least, wouldn't replacing the dup with another ldloc.0 reintroduce that pattern?

Manishearth
Simon Hewitt
  • If this is what slows down your application, then it's perfect and you can retire. – Yochai Timmer May 20 '11 at 17:22
  • Have you looked at (measured) the differences in optimization of array-bounds checking? That's usually the major factor and all your samples are off-the-path. You should worry about `intArray[j]`. – H H May 20 '11 at 17:42
  • When you did your timings, did you compile with Release and run without debugging (i.e. Ctrl+F5)? – Jim Mischel May 20 '11 at 17:53
  • Why do you use `!= jmax` instead of `< jmax`? – CodesInChaos May 20 '11 at 18:15
  • And why don't you inspect the generated x86 assembly? Just put a `Debugger.Break();` in front of your code, attach the debugger, and get the asm code. As Jim said, you must not start from the debugger but need to attach later. – CodesInChaos May 20 '11 at 18:17
  • @Yochai At least the difference the bounds checks make is quite often significant. Putting my loops in a form where the compiler removed the bounds checks has given me massive speedups. – CodesInChaos May 20 '11 at 18:19
  • Thanks for the comments. The code is as it is because it is based around someone else's code (as mentioned in the article link). I am aware of array bounds checking optimization and the importance of running in Release mode. This code happens to be an array, but the article also compares lists and other structures. I'm not even looking to make the code faster, just to investigate why the IL is generated as it is in this exact scenario. – Simon Hewitt May 20 '11 at 18:21
  • + for managing the tricky process of asking a question that you know very well could be flame-bait because it's about a micro-optimization :) – Mike Dunlavey May 20 '11 at 20:58
  • @Mike Dunlavey: ya that too. ;-) – quentin-starin May 20 '11 at 22:12

3 Answers


OK, after much research (sad, I know!), I think I have answered my own question:

The answer is Maybe. Apparently the JIT compilers do look for patterns (see http://blogs.msdn.com/b/clrcodegeneration/archive/2009/08/13/array-bounds-check-elimination-in-the-clr.aspx) to decide when and how array bounds checking can be optimized, but whether it is the same pattern I was guessing at, I don't know.
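
For what it's worth, the shape that article singles out as easiest for the JIT to recognise is the canonical loop over array.Length. A quick sketch of the two shapes for comparison (my illustration, not code from the article):

    using System;

    class BoundsCheckShapes
    {
        // Canonical shape: 0 .. intArray.Length with <, indexing the same array.
        // This is the pattern the JIT recognises for bounds-check elimination.
        static int SumCanonical(int[] intArray)
        {
            int total = 0;
            for (int j = 0; j < intArray.Length; j++)
                total += intArray[j];
            return total;
        }

        // The != jmax shape from the question: whether the per-iteration check
        // is removed depends on what the JIT can prove about jmax.
        static int SumExternalLimit(int[] intArray, int jmax)
        {
            int total = 0;
            for (int j = 0; j != jmax; j++)
                total += intArray[j];
            return total;
        }

        static void Main()
        {
            var data = new int[1000];
            Console.WriteLine(SumCanonical(data) + SumExternalLimit(data, data.Length));
        }
    }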

In this case, it is a moot point because the relative speed increase of (2) was due to something more than that. It turns out that the x64 JIT compiler is clever enough to work out whether an array length is constant (and seemingly also a multiple of the number of unrolls in a loop): so the code only bounds-checked at the end of each iteration, and each unroll became just:-

        total += intArray[j]; j++;
00000081 8B 44 0B 10          mov         eax,dword ptr [rbx+rcx+10h] 
00000085 03 F0                add         esi,eax 

I proved this by changing the app to let the array size be specified on the command line and seeing the different assembler output.
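
The change was along these lines (a simplified sketch of the idea, not the full app): once the size comes from the command line, the JIT can no longer treat the array length as a compile-time constant.

    using System;

    class VariableSizeTest
    {
        static void Main(string[] args)
        {
            // Size supplied at run time, so the x64 JIT cannot fold the array
            // length into a constant when compiling the loop below.
            int size = args.Length > 0 ? int.Parse(args[0]) : 16000000;
            var intArray = new int[size];
            int jmax = intArray.Length;

            long total = 0;
            for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
            Console.WriteLine(total);
        }
    }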

Other things discovered during this exercise:-

  • For a standalone increment operation (i.e. the result is not used), there is no difference in speed between prefix and postfix.
  • When an increment operation is used in an indexer, the assembler shows that prefix notation is slightly more efficient (and so close in the original case that I assumed it was just a timing discrepancy and called them equal - my mistake). The difference is more pronounced when compiled as x86.
  • Loop unrolling does work. Compared to a standard loop with array-bounds optimization, unrolling by 4 always gave an improvement of 10%-20% (and 34% in the x64/constant case). Increasing the unroll factor gave varied timings, some very much slower in the case of a postfix in the indexer, so if unrolling I'll stick with 4 and only change that after extensive timing for a specific case. A sketch of the 4x unroll is below.
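
That 4x unroll looks roughly like this (my sketch, not the article's exact code; it assumes the array length is a multiple of 4, as it was in the tests, otherwise a remainder loop is needed):

    static int SumUnrolled4(int[] intArray)
    {
        // Manual 4x unroll of the summation loop.
        // Assumes intArray.Length is a multiple of 4; otherwise add a
        // remainder loop after this one.
        int jmax = intArray.Length;
        int total = 0;
        for (var j = 0; j != jmax;)
        {
            total += intArray[j]; j++;
            total += intArray[j]; j++;
            total += intArray[j]; j++;
            total += intArray[j]; j++;
        }
        return total;
    }
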
Simon Hewitt

Interesting results. What I would do is:

  • Rewrite the application to do the whole test twice.
  • Put a message box between the two test runs.
  • Compile for release, no optimizations, and so on.
  • Start the executable outside of the debugger.
  • When the message box comes up, attach the debugger.
  • Now inspect the code generated for the two different cases by the jitter.

And then you'll know whether the jitter is doing a better job with one than the other. The jitter might, for example, be realizing that in one case it can remove array bounds checks, but not realizing that in the other case. I don't know; I'm not an expert on the jitter.

The reason for all the rigamarole is that the jitter may generate different code when the debugger is attached. If you want to know what it does under normal circumstances then you have to make sure the code gets jitted under normal, non-debugger circumstances.
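
A minimal sketch of that procedure might look like this (illustration only, not code from the answer; Console.ReadLine() stands in for the message box, and NoInlining just keeps the test method easy to find):

    using System;
    using System.Runtime.CompilerServices;

    class JitInspection
    {
        [MethodImpl(MethodImplOptions.NoInlining)]
        static long SumSeparatePostfix(int[] intArray, int jmax)
        {
            long total = 0;
            for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
            return total;
        }

        static void Main()
        {
            var intArray = new int[16000000];
            int jmax = intArray.Length;

            // First run: the method gets jitted under normal, non-debugger conditions.
            SumSeparatePostfix(intArray, jmax);

            // Pause, attach the debugger to this process, then press Enter and
            // break in to inspect the machine code that was already generated.
            Console.WriteLine("Attach the debugger now, then press Enter...");
            Console.ReadLine();

            Console.WriteLine(SumSeparatePostfix(intArray, jmax));
        }
    }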

Eric Lippert
  • Thanks Eric. Whilst I didn't do it quite as you mentioned, I now have 8 copies of the generated assembly output (4 tests each for X64 and X86) based on Release mode, running externally and attaching the debugger. I'm not an assembly expert but I can now see some patterns. – Simon Hewitt May 22 '11 at 08:00
  • @Simon: Regarding Eric's suggestions, in my tests, knowing that optimization is disabled under the debugger, I did all my timings **outside** the debugger. Since all the timings agreed within a few percent, I didn't see a need to investigate the assembly language. If different people are going to be benchmarking the same code, we need it to be exactly the same code. You are doing loop unrolling **and** moving the increment operator around, and that's a different question. – Rick Sladkey May 22 '11 at 15:46
  • Just for clarification, all my timings were outside the debugger and outside VS too. I only used the debugger to attach to the already running app to get the JITted assembly. – Simon Hewitt May 22 '11 at 17:33

I love performance testing and I love fast programs so I admire your question.

I tried to reproduce your findings and failed. On my Intel i7 x64 system running your code samples on .NET4 framework in the x86|Release configuration, all four test cases produced roughly the same timings.

To do the test I created a brand new console application project and used the QueryPerformanceCounter API call to get a high-resolution CPU-based timer. I tried two settings for jmax:

  • jmax = 1000
  • jmax = 1000000

because locality of the array can often make a big difference in how the performance behaves as the size of the loop increases. However, both array sizes behaved the same in my tests.
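
(For reference, QueryPerformanceCounter can be declared via P/Invoke roughly like this; this is only a sketch, and System.Diagnostics.Stopwatch wraps the same API if you prefer it.)

    using System;
    using System.Runtime.InteropServices;

    static class HighResTimer
    {
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(out long count);

        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(out long frequency);

        // Times an action and returns the elapsed milliseconds.
        public static double TimeMs(Action action)
        {
            long frequency, start, end;
            QueryPerformanceFrequency(out frequency);
            QueryPerformanceCounter(out start);
            action();
            QueryPerformanceCounter(out end);
            return (end - start) * 1000.0 / frequency;
        }
    }

Each loop variant can then be timed by passing it in as an Action.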

I have done a lot of performance optimization and one of the things that I have learned is that you can very easily optimize an application so that it runs faster on one particular computer while inadvertently causing it to run slower on another computer.

I am not talking hypothetically here. I have tweaked inner loops and poured hours and days of work into making a program run faster, only to have my hopes dashed because I was optimizing it on my workstation and the target computer was a different model of Intel processor.

So the moral of this story is:

  • Code snippet (2) runs faster than code snippet (3) on your computer but not on my computer

This is why some compilers have special optimization switches for different processors, or why some applications come in different versions even though one version could easily run on all supported hardware.

So if you are going to do testing like this, you have to do it the same way that JIT compiler writers do: you have to perform your tests on a wide variety of hardware and then choose a blend, a happy medium that gives the best performance on the most ubiquitous hardware.

Rick Sladkey
  • Hi Rick. I intended this question to be mainly theoretical about generated IL so kept code to the absolute minimum but since you have gone to the trouble of trying to reproduce the timing difference, I'll give you some more detail. The array size was 16,000,000 but more importantly the code I had unrolled each loop 16 times (just copy the line 16 times). Nothing special in the IL - what I quoted above is simply repeated 16 times in the IL too so I didn't mention it originally. My machine is an i5 X64 running AnyCPU/Release mode. I will try using x86 mode too to see if that makes a difference. – Simon Hewitt May 21 '11 at 06:04
  • Like I said, it's a good question. Intuitively we can say that the compiler(s) **ought** to be able to treat them the same. So in theory the IL total loads and stores should be the same. In practice the JIT compiler may have better luck with order of operations of one than the other and it may differ depending on the machine. – Rick Sladkey May 21 '11 at 06:20
  • Hi Rick, I have now seen some of the assembly produced and the timings really should not be the same. Is there any chance I can send you a copy of the app (140 lines) to retest on your machine? I am coming to some conclusions now which are quite interesting. – Simon Hewitt May 22 '11 at 07:49
  • @Simon: Search for my name on Google. You will be able to find a way to contact me from the first hit. – Rick Sladkey May 22 '11 at 15:50