In C, prefixing an integer literal with 0x or 0 causes the compiler to interpret the number in base 16 (hexadecimal) or base 8 (octal), respectively.
Normally we read and write numbers in base 10 (decimal). However, these other bases are convenient because they are powers of 2 and map cleanly onto groups of bits (1 hexadecimal digit = 4 bits, 1 octal digit = 3 bits), and bit manipulation is something C is designed to support. This is why you'll often see a char written with 2 hexadecimal digits (e.g. 0x12) to set it to a specific bit pattern.
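As a small standalone sketch of that idea (the variable name is just for illustration, not from your code), each hexadecimal digit of the literal corresponds to one nibble, i.e. 4 bits, of the byte:

#include <stdio.h>
int main(void)
{
    /* 0x4F is 0100 1111 in binary: the '4' is the high nibble, the 'F' the low nibble */
    unsigned char byte = 0x4F;
    printf("0x4F => %d in decimal, %o in octal\n", byte, byte);
    return 0;
}

This prints 79 in decimal and 117 in octal; the bit pattern stored in the char is the same either way.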
If you want to verify this, printf can also output an int in hexadecimal or octal by using %x or %o respectively instead of %d.
#include <stdio.h>
int main(void)
{
    int a = 0100;  /* octal: 64 in decimal */
    int b = 010;   /* octal: 8 in decimal */
    int c = 1111;  /* decimal */
    int d = 01111; /* octal: 585 in decimal */
    printf("0100 => %o, 010 => %o, 1111 => %d, 01111=> %o\n", a, b, c, d);
    return 0;
}
If you run this program, you'll get the following:
gcc -ansi -O2 -Wall main.c && ./a.out
0100 => 100, 010 => 10, 1111 => 1111, 01111=> 1111
...which is exactly what you set the values to in the program, just without the prefixes. In the original code you simply used one integer base by accident when assigning the value (the leading 0 made it octal) and a different one when printing it, which is why the output looked wrong.
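To make the distinction concrete, here is a small sketch (the values are just examples, not taken from your code) that prints one and the same stored value in all three bases:

#include <stdio.h>
int main(void)
{
    /* 255, 0377 and 0xFF are three spellings of the same stored value */
    int n = 0xFF;
    printf("decimal: %d, octal: %o, hex: %x\n", n, n, n);
    return 0;
}

This prints decimal: 255, octal: 377, hex: ff. The base only affects how the number is written in source code and how it is displayed, not what is actually stored in the variable.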