From the standard draft N1256, section 6.4.4.1, clause 5:
The type of an integer constant is the first of the corresponding list in which its value can be represented.
From the table, it seems that octal and hexadecimal constants without suffixes will be assigned any standard integer type: the smallest suitable combination of signed/unsigned and int/long int/long long int. For decimal constants, only the signed types will be considered by default.
It makes sense to me why u/U exist: signed and unsigned arithmetic behave differently (unsigned arithmetic wraps modulo 2^N, while signed overflow is undefined behaviour), so I guess it may be necessary to specify the unsignedness of a literal in complicated expressions.
This leaves us with the l/L and ll/LL suffixes. They can only be used to select a lower bound on the type of an integer literal. What practical purpose does that serve in C99?