I would use the properties of two's complement to compute the values.
unsigned int uint_max = ~0U;          /* all bits set: 2^w - 1 */
signed int int_max = uint_max >> 1;   /* 0111...111: 2^(w-1) - 1 */
signed int int_min1 = (-int_max - 1); /* 100...000: -2^(w-1) */
signed int int_min2 = ~int_max;       /* same bit pattern as int_min1 */
2^3 is 1000. 2^3 - 1 is 0111. 2^4 - 1 is 1111.
w is the length in bits of your data type. uint_max is 2^w - 1, or 111...111. This bit pattern is achieved by using ~0U, which flips every bit of zero.
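As a quick sanity check, a minimal snippet that prints uint_max and compares it against the UINT_MAX constant from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int uint_max = ~0U;  /* flips every bit of 0 */
    printf("uint_max = %u (0x%X)\n", uint_max, uint_max);
    printf("matches UINT_MAX? %d\n", uint_max == UINT_MAX);
    return 0;
}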
int_max is 2^(w-1) - 1, or 0111...111. This is achieved by shifting uint_max 1 bit to the right. Since uint_max is an unsigned value, the >> operator performs a logical shift, meaning it shifts in a leading zero instead of extending the sign bit.
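A minimal sketch of the difference; note that right-shifting a negative signed value is implementation-defined in C, though most compilers perform an arithmetic shift:

#include <stdio.h>

int main(void) {
    unsigned int u = ~0U;  /* 111...111 */
    int s = -1;            /* the same bit pattern in two's complement */

    /* Logical shift: a zero comes in from the left. */
    printf("u >> 1 = 0x%X\n", u >> 1);  /* 0x7FFFFFFF for w = 32 */

    /* Commonly an arithmetic shift: the sign bit is copied. */
    printf("s >> 1 = %d\n", s >> 1);    /* usually still -1 */
    return 0;
}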
int_min is -2^(w-1), or 100...000. In two's complement, the most significant bit has a negative weight!
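For example, with w = 4 the bit weights are -8, 4, 2, and 1: the pattern 1000 is -8 (the minimum), 1011 is -8 + 2 + 1 = -5, and 0111 is 4 + 2 + 1 = 7 (the maximum).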
This is how to visualize the first expression for computing int_min1:

...
011...111   int_max           +2^(w-1) - 1
100...000   (-int_max - 1)    -2^(w-1)      ==  -2^(w-1) + 1 - 1
100...001   -int_max          -2^(w-1) + 1  ==  -(2^(w-1) - 1)
...
Adding 1 moves down the table, and subtracting 1 moves up. First we negate int_max in order to generate a valid int value, then we subtract 1 to get int_min. We can't just negate (int_max + 1), because int_max + 1 would exceed int_max itself, the biggest int value, and signed overflow is undefined behavior in C and C++.
A related problem arises if you try to write int_min out as a literal such as -2147483648 (for w = 32): there are no negative integer literals, so this is really unary minus applied to 2147483648, a value that does not fit in an int. Depending on which version of C or C++ you are using, that literal would either become a signed 64-bit integer, keeping the signedness but sacrificing the original bit width, or an unsigned 32-bit integer, keeping the original bit width but sacrificing the signedness. We need to compute int_min programmatically in this roundabout way to keep it a valid int value.
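A rough way to observe this, assuming C99/C++11 or later on a typical platform where int is 32 bits:

#include <stdio.h>

int main(void) {
    /* 2147483648 does not fit in an int, so the compiler gives the
       literal a wider type before applying the unary minus. */
    printf("%zu\n", sizeof(-2147483648));     /* typically 8, not 4 */
    printf("%zu\n", sizeof(-2147483647 - 1)); /* 4: every operand is an int */
    return 0;
}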
If that's a bit (or byte) too complicated for you, you can just do ~int_max, observing that int_max is 011...111 and int_min is 100...000.
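To double-check both expressions, you can compare them against the constants from <limits.h>; a minimal sketch:

#include <assert.h>
#include <limits.h>

int main(void) {
    unsigned int uint_max = ~0U;
    signed int int_max = uint_max >> 1;
    signed int int_min1 = -int_max - 1;
    signed int int_min2 = ~int_max;

    assert(uint_max == UINT_MAX);  /* 2^w - 1 */
    assert(int_max == INT_MAX);    /* 2^(w-1) - 1 */
    assert(int_min1 == INT_MIN);   /* -2^(w-1) */
    assert(int_min2 == INT_MIN);   /* same bit pattern */
    return 0;
}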
These techniques can be used for any bit width w of an integer data type: char, short, int, long, and also long long. Keep in mind that an unsuffixed integer literal is almost always a 32-bit int by default, so you may have to cast the 0U to the data type with the appropriate bit width (or use a suffix such as 0ULL) before bitwise NOTing it, as shown below. Other than that, these techniques are based on the fundamental mathematical principles of two's complement integer representation. That said, they won't work if your computer uses a different way of representing integers, for example ones' complement or sign-magnitude.