After reading the question "32 bit unsigned multiply on 64 bit causing undefined behavior?" here on Stack Overflow, I began to wonder whether typical arithmetic operations on small unsigned types could lead to undefined behavior according to the C99 standard.
For example, take the following code:
#include <limits.h>
...
unsigned char x = UCHAR_MAX;
unsigned char y = x + 1;
The x variable is initialized to the maximum value of the unsigned char data type. The next line is the issue: the value x + 1 is greater than UCHAR_MAX and cannot be represented by the unsigned char variable y.
I believe the following is what actually occurs.
- The variable x is first promoted to data type int (section 6.3.1.1/2), then x + 1 is evaluated as data type int.
Suppose there is an implementation where INT_MAX and UCHAR_MAX are the same -- x + 1 would result in a signed integer overflow. Does this mean that incrementing the variable x, despite being an unsigned integer type, can lead to undefined behavior due to a possible signed integer overflow?