
I have the following two pieces of code in C# and D; the goal was to compare the speed of a simple loop.

D:

import std.stdio;
import std.datetime;

void main() {
    StopWatch timer;
    long summer = 0;
    timer.start();
    for (long i = 0; i < 10000000000; i++){
        summer++;
    }
    timer.stop();
    long interval_t = timer.peek().msecs;
    writeln(interval_t);
}

Output: about 30 seconds

C#:

using System;
using System.Diagnostics;

class Program{
    static void Main(){
        Stopwatch timer = new Stopwatch();
        timer.Start();
        long summer = 0;
        for(long i = 0; i < 10000000000; i++){
            summer++;
        }
        timer.Stop();
        Console.WriteLine(timer.ElapsedMilliseconds);
    }
}

Output: about 8 seconds

Why is the C# code so much faster?

Dan Doe
  • @MarcinJuraszek Another simple question. Did you read the title of the question? – Jagannath Dec 29 '13 at 04:29
  • What he intended to ask is essentially: "Did you enable optimization when you ran the D compiler?" – Jerry Coffin Dec 29 '13 at 04:31
  • Surprisingly, you are not doing anything with the summer variable after the loop, and yet C# takes 8 seconds for it; I'm not sure why it can't eliminate the entire operation. – Jagannath Dec 29 '13 at 04:31
  • C# compiler is csc, D compiler is dmd. – Dan Doe Dec 29 '13 at 04:33
  • Try enabling optimizations in DMD if you haven't already. `-O -inline -release -noboundscheck`. – eco Dec 29 '13 at 04:36
  • Enabling optimization on the D compiler brings the result down to around 7 seconds as well. Thanks for the answer; I did not expect the difference to be so huge. – Dan Doe Dec 29 '13 at 04:39

3 Answers


There's a little more to this than just saying: "You didn't turn on the optimizer."

At least at a guess, you didn't (initially) turn on the optimizer in either case. Despite this, the C# version without optimization turned on ran almost as fast as the D version with optimization. Why would that be?

The answer stems from the difference in compilation models. D does static compilation, so the source is translated to an executable containing machine code, which then executes. The only optimization that happens is whatever is done during that static compilation.

C#, by contrast, translates the source code to MSIL, an intermediate language (basically a bytecode). That is then translated to machine language by the JIT compiler built into the CLR (the Common Language Runtime, Microsoft's virtual machine for MSIL). You can specify optimization when you run the C# compiler, but that only controls optimization during the initial compilation from source to bytecode. When you run the code, the JIT compiler does its thing, and it does its optimization whether or not you specified optimization in the initial translation from source to bytecode. That's why you get much faster results with C# than with D when you don't specify optimization for either one.
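
One rough way to see the JIT's contribution directly (a hypothetical experiment, not something from the original post; the class and method names below are made up for illustration) is to ask the JIT not to optimize a particular method via MethodImplAttribute and compare the timings:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class JitExperiment {
    // Ask the JIT to skip optimization (and inlining) for this method only,
    // so the loop runs roughly as the unoptimized IL describes it.
    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    static long CountSlowly(long n) {
        long summer = 0;
        for (long i = 0; i < n; i++) {
            summer++;
        }
        return summer;
    }

    static void Main() {
        Stopwatch timer = Stopwatch.StartNew();
        long summer = CountSlowly(10000000000);
        timer.Stop();
        Console.WriteLine(summer);
        Console.WriteLine(timer.ElapsedMilliseconds);
    }
}

If the JIT's optimizer is what closes the gap, this version should be noticeably slower than the code in the question, even when csc is run with optimization enabled.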

I feel obliged to add, however, that both results you got (7 and 8 seconds for D and C# respectively) are really pretty lousy. A decent optimizer should recognize that the final output didn't depend on the loop at all, and based on that it should eliminate the loop completely. Just for comparison, I did (about) the most straightforward C++ translation I could:

#include <iostream>
#include <time.h>

int main() {
    // Use long long: with VC++ a plain long is only 32 bits, too small for this loop bound.
    long long summer = 0;
    auto start = clock();
    for (long long i = 0; i < 10000000000; i++)
        summer++;
    std::cout << double(clock() - start) / CLOCKS_PER_SEC;
}

Compiled with VC++ using cl /O2b2 /GL, this consistently shows a time of 0.
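
Conversely, if the intent is for the loop itself to be measured rather than eliminated, the usual trick is to make the result observable, for example by printing it, so the computation cannot simply be discarded. A minimal C# sketch of the idea (note that a sufficiently clever optimizer could still fold the loop into a single assignment, as one of the comments on this question points out):

using System;
using System.Diagnostics;

class Program {
    static void Main() {
        Stopwatch timer = Stopwatch.StartNew();
        long summer = 0;
        for (long i = 0; i < 10000000000; i++) {
            summer++;
        }
        timer.Stop();
        // Printing summer makes it a live value, so the loop is not dead code.
        Console.WriteLine(summer);
        Console.WriteLine(timer.ElapsedMilliseconds);
    }
}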

Jerry Coffin
  • Thank you for taking the time to answer with such depth. Grade A answer. – Dan Doe Dec 29 '13 at 05:38
  • Someone file a bug. This is old hat for optimizers, and with all the CTFE stuff in D it should be rather trivial to do: just keep track of what is deterministic and CTFE (almost) anything whose inputs are so marked. – BCS Dec 29 '13 at 07:06
  • LDC also has 0 seconds. Which simply confirms the widely held opinion that one should not use DMD if optimizations are really important. – Mihails Strasuns Dec 30 '13 at 08:59

I believe your question should be titled:

Why are for loops compiled by <insert your D compiler here> so much slower than for loops compiled by <insert your C# compiler/runtime here>?

Performance can vary dramatically across implementations, and is not a trait of the language itself. You are probably using DMD, the reference D compiler, which is not known for having a highly optimizing backend. For best performance, try the GDC or LDC compilers.

You should also post the compilation options you used (optimizations may have been enabled with only one compiler).

See this question for more information: How fast is D compared to C++?

Vladimir Panteleev
  • A good, state-of-the-art compiler/optimizer should be able to reduce the whole loop to `summer += 10000000000;`. The fact that this doesn't devolve into a benchmark of `Stopwatch` indicates something in and of itself. – BCS Dec 29 '13 at 05:22

Several answers have suggested that an optimizer would optimize the entire loop away.

Mostly they explicitly don't do that, because they assume the programmer wrote the loop that way deliberately, as a timing loop.

This technique is often used in hardware drivers to wait for time periods shorter than the time taken to set a timer and handle the timer interrupt.

This is the reason for the "bogomips" calculation at Linux boot time: to calibrate how many iterations of a tight loop per second this particular CPU/compiler combination can do.
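
As a rough sketch of that kind of calibration (hypothetical C# code, made up for illustration; the kernel's actual bogomips measurement works differently): time a fixed number of iterations of an empty loop that the optimizer is asked to leave alone, and derive iterations per second from that.

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class LoopCalibration {
    // Keep the empty loop out of the optimizer's reach so it is not removed.
    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    static void Spin(long iterations) {
        for (long i = 0; i < iterations; i++) { }
    }

    static void Main() {
        const long trial = 100000000; // 1e8 iterations for the calibration run
        Stopwatch timer = Stopwatch.StartNew();
        Spin(trial);
        timer.Stop();
        double loopsPerSecond = trial / timer.Elapsed.TotalSeconds;
        Console.WriteLine("About " + loopsPerSecond + " loop iterations per second");
    }
}

A real driver busy-wait would be calibrated once and then reused; the sketch above only shows the measurement idea.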

John Carter