GCC treats int i=048; as an error because 048 should be an octal number, but 8 can't appear in an octal number.
But why couldn't GCC be more intelligent and treat it as a decimal number?
Because interpreting 048 as a decimal would go against the syntactic rules of the C language: any integer literal that starts with 0 is to be interpreted as octal.
And that's a good thing, because compilers strive hard to be standard-compliant.
Also, imagine writing a C parser for your own C compiler that could actually "understand" that you meant 048, 049, and so on as decimal numbers. How would you write that parser? It's possible, but unbelievably complicated, and a source of tonnes of bugs.
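As a minimal sketch of what the grammar rule means in practice (the variable names here are just throwaway examples), a leading zero changes how the very same digits are read:

int a = 48;    // decimal literal: value 48
int b = 060;   // octal literal: 6*8 + 0 = 48
int c = 010;   // octal literal: value 8, not 10
// int d = 048;   // rejected: '8' is not an octal digit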
It's not really a fault of GCC, since it strives to conform to the standard.
But imagine if it did make an exception of the form "If a numeric token begins with 0 but isn't valid octal, then treat it as decimal". Not only is this a relatively complicated rule -- "if the code doesn't make sense in the usual way, fall back to an alternative interpretation and see if it makes sense that way" -- but it presents all sorts of other unexpected behaviour:
/* My bearings */
int east = 000; // 0 in Octal = 0
int northeast = 045; // 45 in Octal = 37
int north = 090; // Starts with 0 but contains 9, must be decimal = 90
int northwest = 135; // Starts with 1, is decimal = 135
...
It's true that similar code with the existing behaviour could also pass through the compiler with unintended values for the variables. The point is that adding a special rule to help your case would still leave other cases uncovered. It is better to catch errors and treat them as such than to work out that some of them could be interpreted differently.
(FWIW, I've never used the Octal notation and find its presence scary, because in many other situations I'll pad decimal numbers with 0s for presentation. Remembering never to do that in C takes a little bit of extra brain power.)
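For instance, padding constants for alignment the way one might elsewhere silently changes their values (a small sketch of the pitfall, with made-up names):

int speed_limit = 100;   // decimal: 100
int max_speed   = 0100;  // padded for alignment, but now octal: 64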
The C language requires that a diagnostic (error) be issued in this case. GCC must comply.
Aside from that, the type of behavior you're advocating is extremely harmful. It would mask various bugs/typos, and introduce a very confusing inconsistency. For example, if some code contained:
int x = 01800;
and you changed the 18 to 20, the value of x would actually decrease!
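To make the decrease concrete: under the hypothetical rule 01800 would mean decimal 1800, while 02000 is already valid octal and means 1024. A minimal check of the octal half (which is real, standard C):

#include <stdio.h>

int main(void) {
    printf("%d\n", 02000);   // octal 2000 = 2*8*8*8 = 1024, less than 1800
    return 0;
}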
What would you expect the greater-intelligence-having gcc to do if you input
int i=047;
...and then, subsequently, what might you expect if one of your colleagues changed the program to read
int i=049;
...you and he or she would be floored to learn that the value had changed from 39 to 49 by adding only "2"!
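As a quick check of what standard C actually gives today, a minimal program confirms the starting value:

#include <stdio.h>

int main(void) {
    printf("%d\n", 047);   // octal 47 = 4*8 + 7 = 39
    // 049 is a compile error today; under the hypothetical rule it would be 49
    return 0;
}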
The Principle of Least Astonishment helps guide designers of all kinds, and no doubt this may have been a factor in the design of the C language.
That said, even less astonishing would be octal literals that are not merely a leading zero away from their decimal lookalikes. To that end, languages like Python and Rust write octal as "0o47" (similar to C et al's hexadecimal literals).