2

This question derives from the reason I asked my last question on foreach loops. I have a large string array (say in the thousands), and I want to iterate through the array and also be able to break out based on a certain condition, and I need optimal performance.

Some example code:

for(int i = 0; i < array.Length && flag; i++){
    //Processing and set flag
}

//..or

foreach(string item in array){
    //processing...set flag
    if(!flag)
        break;
}

Which way would be less expensive?

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Nick Rolando
  • 25,879
  • 13
  • 79
  • 119
  • 3
    honestly it won't matter one bit, your performance problems are going to be elsewhere – BrokenGlass Jan 11 '11 at 18:23
  • 1
    Have you tried measuring it yourself? – Matěj Zábský Jan 11 '11 at 18:24
  • Have a look at this: http://stackoverflow.com/questions/1124753/for-vs-foreach-loop-in-c – Larry Hipp Jan 11 '11 at 18:24
  • Have you tried it yourself? It is not that hard to write a program that tests this. Also, in the second approach you don't need a flag, just use `break` instead. – Tomas Jansson Jan 11 '11 at 18:25
  • @tomas: Oh! I had a logic error >< Fixed it, sorry about that. And I know, but it's more fun to ask, see what people have to say, and give points :P Plus, Dan Tao gave me a lot of relevant information I didn't know and couldn't have gotten by just bench testing – Nick Rolando Jan 11 '11 at 18:55

6 Answers

5

You can always benchmark them. Use a Stopwatch and iterate over, say, ten million iterations to see which goes faster.
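A minimal sketch of such a benchmark (the array size, contents, and iteration count here are placeholders, not anything from the question):

```csharp
using System;
using System.Diagnostics;

class LoopBenchmark
{
    static void Main()
    {
        // Sample data; in the real case this would be the string array in question.
        string[] array = new string[10000];
        for (int i = 0; i < array.Length; i++)
            array[i] = "item" + i;

        const int iterations = 1000;
        long total = 0;

        var sw = Stopwatch.StartNew();
        for (int n = 0; n < iterations; n++)
            for (int i = 0; i < array.Length; i++)
                total += array[i].Length;
        sw.Stop();
        Console.WriteLine("for:     " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int n = 0; n < iterations; n++)
            foreach (string item in array)
                total += item.Length;
        sw.Stop();
        Console.WriteLine("foreach: " + sw.ElapsedMilliseconds + " ms");

        // Use 'total' so the loops can't be optimized away entirely.
        Console.WriteLine(total);
    }
}
```

Run it as a Release build without the debugger attached; otherwise the JIT optimizations you are trying to compare are suppressed.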

What I think you'll find, though, is that the two are nearly identical since the JIT compiler optimizes foreach on an array to basically a for.

flevine100 is right that, in general, a for loop is slightly more efficient than a foreach for types whose GetEnumerator methods allocate a new object implementing IEnumerator or IEnumerator&lt;T&gt; (due to the allocation and the interface method-call overhead). That penalty largely disappears for most of the collections in System.Collections.Generic, however, because they expose value-type enumerators that foreach binds to directly (the foreach construct does not actually require an IEnumerable implementation in the first place; it only needs a suitable GetEnumerator method).

It's even less the case for arrays specifically because they are fixed-size and therefore trivial to optimize by the JIT compiler.
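For arrays specifically, the compiler doesn't use an enumerator at all: a foreach over an array is lowered to an index-based loop. A runnable sketch of roughly what the compiler produces (the equivalence, not the literal compiler output):

```csharp
using System;

class ForeachLowering
{
    static void Main()
    {
        string[] array = { "a", "bb", "ccc" };

        // The foreach version...
        int viaForeach = 0;
        foreach (string item in array)
            viaForeach += item.Length;

        // ...is compiled roughly as this: the array reference and an index
        // held in locals, no IEnumerator object involved.
        int viaFor = 0;
        string[] temp = array;
        for (int i = 0; i < temp.Length; i++)
        {
            string item = temp[i];
            viaFor += item.Length;
        }

        Console.WriteLine(viaForeach == viaFor); // True
    }
}
```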

Community
  • 1
  • 1
Dan Tao
  • 125,917
  • 54
  • 300
  • 447
  • For sure, I was thinking that foreach would have a little more overhead due to its flexibility. Stopwatch is a good idea; just wondering if someone knew offhand whether foreach contained extra avoidable overhead or not – Nick Rolando Jan 11 '11 at 18:29
  • Wow thx for the edit. kind of what i thought, thx :) – Nick Rolando Jan 11 '11 at 18:41
2

I have found that for(...) is faster than foreach(). I think it's because foreach() uses the IEnumerable plumbing.

Since you're concerned about speed: in .NET 4.0, if your loop body does not rely on shared state, you can use Parallel.For or Parallel.ForEach to scale your processing out onto multiple processors.
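The question's flag-and-break pattern maps onto ParallelLoopState.Stop in the parallel version. A sketch (the stop condition here is a stand-in for whatever the real check is):

```csharp
using System;
using System.Threading.Tasks;

class ParallelEarlyExit
{
    static void Main()
    {
        string[] array = new string[10000];
        for (int i = 0; i < array.Length; i++)
            array[i] = "item" + i;

        // Parallel.ForEach with an early exit: state.Stop() plays the role
        // that break plays in the sequential foreach. Iterations already
        // running may still complete, but no new ones are started.
        Parallel.ForEach(array, (item, state) =>
        {
            if (item == "item5000")   // stand-in for the real condition
            {
                state.Stop();
                return;
            }
            // ...processing...
        });

        Console.WriteLine("done");
    }
}
```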

fbl
  • 2,840
  • 3
  • 33
  • 41
  • 1
    Interesting - my testing (just for fun) shows the opposite - foreach *over an array* often outperforms for in my testing. That being said, from a practical standpoint, there's no "real" difference, IMO. (Also, these types of benchmarks are often horribly flawed, since so much depends on platform and build settings...) – Reed Copsey Jan 11 '11 at 18:27
  • Agreed, and it's usually dependent on the scenario. My scenarios showed for loops to be better than foreach, but that could be an artifact of my problem. I had huge arrays with a very small loop body. Usually the loop body will dominate the runtime, not the choice of loop construct. – fbl Jan 11 '11 at 18:31
2

I would not focus on this level of micro-optimization.

Chances are you have much better optimization opportunities, especially if you're working on strings. for/foreach differences will be such a small fraction of your overall runtime that it will perform essentially the same.

It would be much better to make the algorithm as "clean" as possible, and look for other performance opportunities if required, such as threading the entire routine.

Reed Copsey
  • 554,122
  • 78
  • 1,158
  • 1,373
  • Ok, thx. I'm not too familiar with multithreading.. go figure they wouldn't teach you that in a 4yr programming degree – Nick Rolando Jan 11 '11 at 18:40
  • @Nicklamort: In general, focus on optimizing your algorithm, not the actual lines of code - unless, of course, you measure and find a real bottleneck. – Reed Copsey Jan 11 '11 at 18:42
1

Without benchmarking, I would be very surprised if there was a noticeable difference between the two (the answer, of course, is highly dependent on the work being done within the loop and the type of collection).

These are the kinds of things that, in my experience, never create performance bottlenecks in production code. Any application that does anything significant undoubtedly involves some sort of I/O or network interaction that accounts for the majority of the performance penalty.

If you are concerned, though, I would highly recommend profiling the offending code to find out which is faster.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Andrew Hare
  • 344,730
  • 71
  • 640
  • 635
  • Stopwatch/benchmarking seems to be the way to go. Wondering if someone knew offhand if foreach had a lot of overhead or something – Nick Rolando Jan 11 '11 at 18:30
1

In the second example, you don't actually need the flag: the break alone gives you the early exit.

I'm unclear on the internals, except that foreach uses enumerators and the for loop depends on the cost of your element accessor. On a list, the two are effectively equal once you use that break.
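A sketch of the two equivalent forms on a list, using break for the early exit (the list contents are placeholders):

```csharp
using System;
using System.Collections.Generic;

class ListLoops
{
    static void Main()
    {
        var list = new List<string> { "keep", "keep", "stop", "keep" };

        // for: relies on the indexer, which is O(1) on List<T>
        int countFor = 0;
        for (int i = 0; i < list.Count; i++)
        {
            if (list[i] == "stop") break;
            countFor++;
        }

        // foreach: uses List<T>'s value-type enumerator
        int countForeach = 0;
        foreach (string item in list)
        {
            if (item == "stop") break;
            countForeach++;
        }

        Console.WriteLine(countFor == countForeach); // True
    }
}
```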

Reddog
  • 15,219
  • 3
  • 51
  • 63
0

For a simple bare array, the for loop will tend to produce slightly smaller IL. Compare

    static int[] array = new int[100];

    static void UseForLoop () {
        for (int i = 0; i < array.Length; ++i) {
            Console.WriteLine(array[i]);
        }
    }

    static void UseForeachLoop () {
        foreach (int i in array) {
            Console.WriteLine(i);
        }
    }

which produces the following sets of IL from VS 2010, default release configuration:

.method private hidebysig static void UseForLoop() cil managed
{
        .maxstack 2
        .locals init (
                [0] int32 i)
        L_0000: ldc.i4.0 
        L_0001: stloc.0 
        L_0002: br.s L_0014
        L_0004: ldsfld int32[] ConsoleApplication5.Program::array
        L_0009: ldloc.0 
        L_000a: ldelem.i4 
        L_000b: call void [mscorlib]System.Console::WriteLine(int32)
        L_0010: ldloc.0 
        L_0011: ldc.i4.1 
        L_0012: add 
        L_0013: stloc.0 
        L_0014: ldloc.0 
        L_0015: ldsfld int32[] ConsoleApplication5.Program::array
        L_001a: ldlen 
        L_001b: conv.i4 
        L_001c: blt.s L_0004
        L_001e: ret 
}

.method private hidebysig static void UseForeachLoop() cil managed
{
        .maxstack 2
        .locals init (
                [0] int32 i,
                [1] int32[] CS$6$0000,
                [2] int32 CS$7$0001)
        L_0000: ldsfld int32[] ConsoleApplication5.Program::array
        L_0005: stloc.1 
        L_0006: ldc.i4.0 
        L_0007: stloc.2 
        L_0008: br.s L_0018
        L_000a: ldloc.1 
        L_000b: ldloc.2 
        L_000c: ldelem.i4 
        L_000d: stloc.0 
        L_000e: ldloc.0 
        L_000f: call void [mscorlib]System.Console::WriteLine(int32)
        L_0014: ldloc.2 
        L_0015: ldc.i4.1 
        L_0016: add 
        L_0017: stloc.2 
        L_0018: ldloc.2 
        L_0019: ldloc.1 
        L_001a: ldlen 
        L_001b: conv.i4 
        L_001c: blt.s L_000a
        L_001e: ret 
}

...but the key parts there, the loops, are basically the same. As others have said, this is a micro-optimization, too. The JIT-compiled x86 from these two methods is probably going to be identical, and unless you're iterating over a complex collection with a complicated enumerator, the difference is not likely to be huge even in a practical example.

I'd use the one that is more readable -- if speed is really that much of a concern, favor a for loop, but you'd likely get better results from algorithmic optimizations.