
I have a simple question about Eclipse CDT and the GNU GCC compiler.

The application is compiled in either

  • Debug mode, i.e., Optimization = None (-O0), Debugging = Maximum (-g3), or
  • Optimized mode, i.e., Optimization = Maximum (-O3), Debugging = None.
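
In plain GCC terms, and assuming a direct g++ invocation rather than the Eclipse CDT managed build (the source and output file names here are only placeholders), the two configurations correspond roughly to:

  g++ -O0 -g3 -o app_debug main.cpp      # Debug: no optimization, maximum debug info
  g++ -O3 -o app_release main.cpp        # Optimized: maximum optimization, no debug info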

Apart from the performance difference, is it guaranteed that the application compiled in these two modes produces exactly the same results?

I am about to release the application to end users. It is server-based and handles several multicast data feeds. Can anyone offer advice on which compilation mode I should choose for the final release?

Thanks.


1 Answer


It is only guaranteed that your program will produce the same results if your code is fully standards-compliant. There are many ways you can write code that has "undefined behaviour" that actually works on an unoptimized build, but may break when optimized.

For example, suppose I have:

#include <iostream>

struct A
{
   int i;
};

struct B
{
   int i;
};

int main()
{
    A a;
    a.i = 10;
    // Undefined behaviour: an A object is accessed through a pointer to the
    // unrelated type B, which violates the strict aliasing rules.
    B* b = reinterpret_cast<B*>(&a);
    std::cout << b->i << std::endl;
    return 0;
}

This will almost certainly print 10, but a compiler could legitimately generate code that does something else because of the strict aliasing rules.
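
If the bytes really do need to be reinterpreted, one standards-compliant alternative is to copy the object representation rather than alias it. A minimal sketch, assuming the same two structs and using std::memcpy (well-defined here because both types are trivially copyable):

#include <cstring>
#include <iostream>

struct A { int i; };
struct B { int i; };

int main()
{
    A a;
    a.i = 10;

    // Copy the bytes instead of reading an A through a B*; the compiler
    // cannot "optimize away" this access the way it can with aliased pointers.
    B b;
    std::memcpy(&b, &a, sizeof b);

    std::cout << b.i << std::endl;
    return 0;
}

An optimizer is allowed to assume that pointers to unrelated types never alias, so the explicit copy states the intent instead of relying on whatever an -O0 build happens to generate.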

  • Plus, compiling with -O3 tends to make compile times longer, which you don't want while developing the application. – ther Apr 22 '12 at 15:45