There are a few reasons.
1. Compilation takes more time
For small and even medium-sized projects, this is rarely an issue today. Modern computers are VERY fast. If compilation takes five or ten seconds, it usually does not matter. But for larger projects it does matter, especially if the build process is not set up properly. I remember when I was trying to add a feature to the game The Battle for Wesnoth. Compilation took around ten minutes. It's easy to see how much you would want to reduce that to five minutes or less if you could.
2. Optimized code is harder to debug
The reason optimization makes code harder to debug is that the generated machine code no longer corresponds to the source line by line. A debugger appearing to run the program line by line is just an illusion. Here is an example where it might be a problem:
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[] = "Hello, World!";
    int number_of_capital_letters = 0;
    for (size_t i = 0; i < strlen(str); i++) {
        if (isupper((unsigned char)str[i]))
            number_of_capital_letters++;
    }
    printf("%s\n", str);
    // Commented out for debugging reasons
    // printf("%d\n", number_of_capital_letters);
}
You fire up your debugger and wonder why it does not keep track of number_of_capital_letters. And then you find out that since you have commented out the last printf statement, the variable is not used for any observable behavior, so the optimizer changes your code to:
int main(void) {
    puts("Hello, World!");
}
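If you still need to watch the variable under optimization, one common trick is to declare it volatile, which forces the compiler to treat every access to it as observable. A minimal sketch; do this only while debugging, since it also changes the generated code:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[] = "Hello, World!";
    // volatile tells the compiler that every read and write of this
    // variable is observable, so the optimizer cannot remove it.
    volatile int number_of_capital_letters = 0;
    for (size_t i = 0; i < strlen(str); i++) {
        if (isupper((unsigned char)str[i]))
            number_of_capital_letters++;
    }
    printf("%s\n", str);
}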
One could argue that you could then just turn off the optimizer for debug builds. And that's true, in a world where a cow is a sphere. But a third reason is:
3. Sometimes bugs only show up at higher optimization levels.
Imagine that you have a big code base. When you upgrade the compiler, a bug suddenly emerges, and it seems to vanish when you turn off optimization. What's the problem here? Well, it could be a bug in the optimizer. But it could also be a bug in your code that only manifested itself with the new version of the optimizer. Very often, code with undefined behavior behaves differently when compiled with optimization.
So what do you do? You could try to figure out whether the bug is in the optimizer or in your code. That can be a VERY time-consuming task. Let's assume it's a bug in the optimizer. What to do? You could downgrade your compiler, which is not optimal for several reasons, especially if it's an open source project. Imagine downloading the source, running the build script, and scratching your head for hours trying to figure out what's wrong, only to find in some documentation (provided that the author documented it) that you need a specific version of a specific compiler.
Let's instead assume it's a bug in your code. The ideal thing is of course to fix it, but maybe you don't have the resources to do so. In that case too, you could require anyone who compiles the project to use a certain version of a specific compiler.
But if you can just edit a Makefile and replace -O3 with -O2, you can clearly see that this is sometimes a viable option in our non-ideal world where time is not an endless resource. With a bit of bad luck, such a bug can take a week or more to track down. That's time you can spend somewhere else.
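If only a single file or function is affected, gcc also lets you lower the optimization level for just that part instead of the whole project. A sketch using gcc-specific extensions (not portable to other compilers; the function name is just a placeholder):

// gcc-specific: everything below this pragma is compiled at -O2,
// regardless of the level given on the command line.
#pragma GCC optimize ("O2")

#include <stdio.h>

// Alternatively, override the level for one function only.
__attribute__((optimize("O0")))
static void function_that_triggers_the_bug(void) {
    puts("compiled at -O0");
}

int main(void) {
    function_that_triggers_the_bug();
}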
Here is an example of such a bug:
#include <stdio.h>

int main(void) {
    char str[] = "Hello";  // six bytes: 'H' 'e' 'l' 'l' 'o' '\0'
    str[5] = '!';          // overwrites the null terminator
    puts(str);             // reads past the end of the array: undefined behavior
}
When I compiled this with gcc 10.2, I got different results depending on the optimization level.
Without optimization:
Hello!
With optimization:
Hello!`@
Try it out yourself:
https://godbolt.org/z/5dcKKrEW1
https://godbolt.org/z/48bz5ae1d
And here I found a forum thread where the debug build works but not release: https://developer.apple.com/forums/thread/15112
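For completeness, here is a corrected version of the program above; a minimal sketch where the array is simply declared large enough for the extra character and the terminator:

#include <stdio.h>

int main(void) {
    char str[7] = "Hello";  // room for "Hello!" plus the null terminator
    str[5] = '!';           // now writes inside the array
    str[6] = '\0';          // keep the string terminated
    puts(str);
}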
4. Sometimes bugs only show up at LOWER optimization levels.
Yep, that may also happen. In this case, you could just increase the optimization level if you don't care that much about correctness. But if you do care, this can be a way to find bugs: if your code runs correctly both with and without optimization, it's less likely to contain bugs that will haunt you in the future than if you have only ever compiled it with optimization.
I did not find an example that worked in practice, but this might theoretically do:
#include <stdio.h>

int main(void) {
    if (1/0) // Division by zero: undefined behavior
        puts("An error has occurred");
    else
        puts("Everything is fine");
}
If this is compiled without optimization, there is a high probability that it will crash. But the optimizer may assume that undefined behavior (like division by zero) never occurs, so it optimizes the code to just:
int main(void) {
    puts("Everything is fine");
}
Assume that 1/0 is some kind of error check that is very unlikely to evaluate to true, so you would normally assume the program prints "Everything is fine". Here, the optimizer hides a bug.
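One pattern that can genuinely crash only in unoptimized builds is deep tail recursion. With optimization, gcc and clang will typically rewrite the tail call as a loop; without it, every call gets its own stack frame, and a deep enough recursion overflows the stack. A sketch (the depth needed to crash depends on your stack size):

#include <stdio.h>

// Tail-recursive counting. At -O2 the compiler typically turns the
// tail call into a loop, so this finishes fine. At -O0 each call
// keeps its own stack frame, and the program will likely die of a
// stack overflow.
static long count(long n, long acc) {
    if (n == 0)
        return acc;
    return count(n - 1, acc + 1);
}

int main(void) {
    printf("%ld\n", count(100000000L, 0));
}

Strictly speaking this is a resource limit rather than undefined behavior, but the effect is the same: the unoptimized build crashes while the optimized one runs.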
5. The optimizer might produce a binary that's bigger in size, uses more memory, or has some other undesirable property
This sometimes matters, especially in embedded systems. Usually (always, really) -O0 produces very big code, but you might want to use -Os (optimize for size instead of speed) instead of -O3 to get a small binary, and sometimes also to get faster code. See below.
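As a small illustration of the size effect, a loop like the following is the kind of code that -O3 often unrolls and vectorizes aggressively, emitting considerably more machine code than the compact version -Os generates; the exact difference depends on the compiler and target:

// At -O3, compilers typically vectorize and unroll this loop,
// producing much more machine code than -Os would for the same source.
void scale(float *dst, const float *src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = 2.0f * src[i];
}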
6. The optimizer might produce slower code
Yep, really. It's not common, but it may happen. A related but not equivalent example is illustrated in this question, where the compiler generates faster code when optimizing for executable size than for speed.
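If you suspect this is happening to you, the only reliable answer is to measure. A minimal sketch of a timing harness, assuming POSIX clock_gettime; work() is a hypothetical stand-in for the code you care about. Compile the same program at different optimization levels and compare:

#include <stdio.h>
#include <time.h>

// Hypothetical stand-in for the code whose speed you want to compare.
static long work(void) {
    long sum = 0;
    for (long i = 0; i < 100000000L; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    long result = work();
    clock_gettime(CLOCK_MONOTONIC, &end);
    double seconds = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    // Print the result so the optimizer cannot throw the work away.
    printf("result=%ld time=%.3f s\n", result, seconds);
}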