6

I'm evaluating the Visual C++ 10 optimizing compiler on trivial code samples to see how good the emitted machine code is, and so far I've run out of creative use cases.

Is there some sample codebase that is typically used to evaluate how good an optimizing C++ compiler is?
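For example, this is roughly the kind of trivial sample I've been feeding it so far (just a sketch, the function names are made up), compiled with /O2 and /FAs so I can read the assembly listing:

    // Trivial samples for eyeballing the optimizer's output (cl /O2 /FAs).
    #include <cstddef>

    // Does the optimizer unroll and/or vectorize a simple reduction?
    float dot(const float* a, const float* b, std::size_t n)
    {
        float sum = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    // How well does it handle a branchy loop body?
    int to_upper(const char* src, char* dst, std::size_t n)
    {
        int changed = 0;
        for (std::size_t i = 0; i < n; ++i)
        {
            char c = src[i];
            if (c >= 'a' && c <= 'z') { c = static_cast<char>(c - 32); ++changed; }
            dst[i] = c;
        }
        return changed;
    }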

sharptooth
  • 167,383
  • 100
  • 513
  • 979
  • 5
If I were evaluating a compiler, I'd look at some code that's representative of the sort of thing I am going to be compiling with it... – NPE Sep 12 '11 at 10:37
  • 3
    You really need quite a large and diverse code corpus to do this properly, since there are so many different classes of optimisation that a compiler might apply. Alternatively just benchmark code that is relevant to your own specific needs and use cases. – Paul R Sep 12 '11 at 10:37
  • 1
    Microsoft has a couple articles about their own tests FYI, http://blogs.msdn.com/b/vcblog/archive/2010/07/07/how-we-test-the-compiler-performance.aspx and http://blogs.msdn.com/b/vcblog/archive/2009/12/01/gl-and-pgo.aspx But as @aix said, the most relevant test is going to be one on a codebase representing what you're actually doing. – HostileFork says dont trust SE Sep 12 '11 at 10:44
  • 1
    Are boost regression tests good enough for it? – ks1322 Sep 12 '11 at 10:48
  • I have to agree with the other comments; the only useful test is whether it's good at compiling *your* code (vs. contrived examples). – Oliver Charlesworth Sep 12 '11 at 10:58
  • 1
    @aix: I'd propose to compile _the_ code instead of an _equivalent_. Saves a lot of effort and prevents potential unequivalence ;) – Sebastian Mach Sep 12 '11 at 11:14
  • 1
    @phresnel: *The* code is even better (provided it's already been written ;-)) – NPE Sep 12 '11 at 11:16

4 Answers

3

The only valid benchmark is one that simulates the type of code you're developing. Optimizers react differently to different applications and different coding styles, and the only test that really counts is the code you are actually going to compile with the compiler.

James Kanze
  • 150,581
  • 18
  • 184
  • 329
2

Try benchmarking such libraries as Eigen (http://eigen.tuxfamily.org/index.php?title=Main_Page).
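For example, even a small kernel along these lines (just a sketch, not one of Eigen's official benchmarks) exercises the compiler's inlining through Eigen's expression templates:

    // Minimal sketch of an Eigen workload; the matrix size is arbitrary.
    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
        const int n = 512;
        Eigen::MatrixXd a = Eigen::MatrixXd::Random(n, n);
        Eigen::MatrixXd b = Eigen::MatrixXd::Random(n, n);

        // Heavy use of expression templates: the quality of the generated
        // code depends strongly on how aggressively the compiler inlines.
        Eigen::MatrixXd c = a * b + a.transpose();

        std::cout << c.sum() << std::endl;  // keep the result observable
        return 0;
    }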

quant_dev
  • 6,181
  • 1
  • 34
  • 57
  • 2
    And then choose the best compiler for Eigen? (And not the one creating the best code for your problem). – Christopher Sep 12 '11 at 11:04
  • 1
    Correct me if I'm wrong, but I understand that the OP is not interested in benchmarking the compiler against one particular piece of code. – quant_dev Sep 12 '11 at 11:23
1

Quite a few benchmarks use SciMark: http://math.nist.gov/scimark2/download_c.html. However, you should be selective in what you test (test in isolation): a benchmark might score poorly because of, say, weak loop unrolling even though the rest of the generated code is excellent, while another compiler does better only because of its loop unrolling (i.e. the rest of its generated code is sub-par).
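Something like the following is what I mean by testing in isolation (my own sketch, not SciMark's actual harness): time one optimization-sensitive kernel on its own instead of relying on the composite score:

    // Sketch: time a single kernel in isolation so that, e.g., weak loop
    // unrolling shows up on its own rather than being hidden in a total.
    #include <cstdio>
    #include <ctime>
    #include <vector>

    double sum_kernel(const std::vector<double>& v)
    {
        double s = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i)  // unrolling/vectorization candidate
            s += v[i];
        return s;
    }

    int main()
    {
        std::vector<double> v(1 << 20, 1.0);
        double sink = 0.0;

        std::clock_t t0 = std::clock();
        for (int rep = 0; rep < 100; ++rep)
            sink += sum_kernel(v);
        std::clock_t t1 = std::clock();

        std::printf("%.0f ms (checksum %g)\n",
                    1000.0 * (t1 - t0) / CLOCKS_PER_SEC, sink);
        return 0;
    }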

Necrolis
  • 25,836
  • 3
  • 63
  • 101
1

As has already been said, you really need to measure optimisation within the context of typical use cases for your own applications, in typical target environments. I include timers in my own automated regression suite for this reason, and have found some quite unusual results, as documented in a previous question. FWIW, I'm finding VS2010 SP1 creates code about 8% faster on average than VS2008 on my own application, and about 13% faster with whole program optimization. This is not spread evenly across use cases. I also tend to see significant variations between long test runs which are not visible when profiling much smaller test cases. I haven't carried out platform comparisons yet, e.g. whether many of the gains are platform- or hardware-specific.
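The timers I mentioned are nothing fancy, by the way; roughly this kind of wrapper (names are illustrative, not from my actual suite):

    // Rough sketch of a per-test-case timer for a regression suite.
    #include <cstdio>
    #include <ctime>
    #include <string>

    class ScopedTimer
    {
    public:
        explicit ScopedTimer(const std::string& label)
            : label_(label), start_(std::clock()) {}

        ~ScopedTimer()
        {
            double ms = 1000.0 * (std::clock() - start_) / CLOCKS_PER_SEC;
            std::printf("%s: %.0f ms\n", label_.c_str(), ms);  // logged per test case
        }

    private:
        std::string  label_;
        std::clock_t start_;
    };

    void regression_case_42()  // hypothetical test case
    {
        ScopedTimer timer("case 42");
        // ... run the actual test work here ...
    }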

I would imagine that many optimisers will be fine tuned to give best results against well known benchmark suites, which could imply in turn that these are not the best pieces of code against which to test the benefits of optimisation. (Speculation of course)

SmacL
  • 22,555
  • 12
  • 95
  • 149