Many languages (dating back at least to C; I am not familiar with older ones) seem to have the convention that integer literals can be written in one of 3 bases with the following notation:
- `0x` or `0X` prefixes a base-16 (hex) number
- `0` prefixes a base-8 (octal) number
- No prefix means a base-10 (decimal) number
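For concreteness, here is a minimal C sketch of the three notations (the variable names are just mine for illustration):

```c
#include <stdio.h>

int main(void) {
    /* The same value, 255, written in the three notations. */
    int hex = 0xFF;   /* base 16: 15*16 + 15 = 255 */
    int oct = 0377;   /* base 8:  3*64 + 7*8 + 7 = 255 */
    int dec = 255;    /* base 10 */

    printf("%d %d %d\n", hex, oct, dec);  /* prints: 255 255 255 */
    return 0;
}
```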
I understand the use of hex and decimal. They are used all over the place and have distinct use cases where they make the code more readable (a bitwise AND with `0x20000` is much clearer than with `131072`), and they cannot be confused (a number with an 'x' or 'X' in it does not make sense as a decimal number).
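As a sketch of what I mean (the `flags` value here is made up):

```c
#include <stdio.h>

int main(void) {
    unsigned int flags = 0x2A005;  /* hypothetical flag word */

    /* The same test written both ways; the hex form makes the bit obvious. */
    unsigned int a = flags & 0x20000;
    unsigned int b = flags & 131072;

    printf("%#x %#x\n", a, b);  /* both print 0x20000 */
    return 0;
}
```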
However, the octal literals seem like a mistake to me. First of all, I have never seen them used, and I can't think of a single use case where they would make the code more readable than hex or decimal. In addition, they can potentially be confused with decimal numbers (`010` is a perfectly legal way to write the decimal number ten outside of the programming-language domain).
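A minimal sketch of the kind of confusion I mean:

```c
#include <stdio.h>

int main(void) {
    /* Looks like "ten" to anyone not thinking about C's literal rules,
       but the leading zero makes it an octal literal. */
    int n = 010;

    printf("%d\n", n);  /* prints 8, not 10 */
    return 0;
}
```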
So, do octal literals have a use, are they some form of historical leftover baggage from a time when they were useful, or are they just a mistake?