I have two related questions:
What does the standard guarantee, and what do different compilers actually do, for a comparison of the form x * y == z (or x + y == z), where the result of x * y is too large to fit in the type of x or y, but not too large for the type of z?
What about comparisons between signed and unsigned integers of the same width that hold the same underlying bit pattern?
The example below may clarify what I mean:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main (void)
{
    uint8_t x = 250;
    uint8_t y = 5;
    uint16_t z = x*y;
    uint8_t w = x*y;

    if (x * y == z) // true
        puts ("x*y = z");
    if ((uint16_t)x * y == z) // true
        puts ("(uint16_t)x*y = z");
    if (x * y == (uint8_t)z) // false
        puts ("x*y = (uint8_t)z");
    if (x * y == w) // false
        puts ("x*y = w");
    if ((uint8_t)(x * y) == w) // true
        puts ("(uint8_t)x*y = w");
    if (x * y == (uint16_t)w) // false
        puts ("x*y = (uint16_t)w");

    int8_t X = x;

    if (x == X) // false
        puts ("x = X");
    if (x == (uint8_t)X) // true
        puts ("x = (uint8_t)X");
    if ((int8_t)x == X) // true
        puts ("(int8_t)x = X");
    if (memcmp (&x, &X, 1) == 0) // true
        puts ("memcmp: x = X");
}
The first part does not surprise me: as explained in Which variables should I typecast when doing math operations in C/C++?, the compiler implicitly promotes narrower integer types to wider ones during arithmetic operations (and I suppose the same applies to the operands of comparison operators). Is this behaviour guaranteed by the standard?
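For what it's worth, here is a minimal check that, as far as I understand, should confirm the promotion (assuming a typical platform where int is 32 bits and therefore wider than uint8_t):

#include <stdio.h>
#include <stdint.h>

int main (void)
{
    uint8_t x = 250;
    uint8_t y = 5;
    // Both operands should be promoted to int before the multiplication,
    // so the result is 1250 and has the size of int.
    printf ("x * y = %d\n", x * y);
    printf ("sizeof(x * y) = %zu\n", sizeof (x * y));
}

If I understand the promotion correctly, this should print 1250 and then sizeof(int).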
But the answer to that question, as well as the answer to Signed/unsigned comparisons, says that signed integers should be converted to unsigned. I was therefore expecting x == X above to be true, since the two variables hold the same byte (see the memcmp check). What seems to happen instead is that both operands are first promoted to a wider integer type, and only then does the signed-to-unsigned (or unsigned-to-signed) conversion take place.
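If that is the case, then x == X is effectively comparing the promoted values, something like the sketch below (the -6 I show for X is what I observe; I realise the conversion of 250 to int8_t is implementation-defined):

#include <stdio.h>
#include <stdint.h>

int main (void)
{
    uint8_t x = 250;
    int8_t X = x;                            // implementation-defined; -6 for me
    // If both operands are promoted to int first, this is the comparison
    // that actually takes place:
    printf ("%d == %d\n", (int)x, (int)X);   // 250 == -6
    if ((int)x == (int)X)                    // false, same as x == X
        puts ("x = X");
}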
EDIT 2:
In particular, I am interested in cases where a function returns an int that is -1 on error and otherwise represents, say, a number of bytes written, which should always be non-negative. Standard functions of this kind return ssize_t, which, if I'm not mistaken, is the same as int64_t on most platforms, yet the number of bytes written can go all the way up to UINT64_MAX. So if I want to compare the returned int or ssize_t to an unsigned value holding the expected number of bytes written, is an explicit cast to unsigned int or size_t (the same width as ssize_t but unsigned, if I'm not mistaken) required?
EDIT 1:
I can't make sense of the following:
#include <stdio.h>
#include <stdint.h>

int main (void)
{
    int8_t ssi = UINT8_MAX;
    uint8_t ssu = ssi;
    printf ("ssi = %hhd\n", ssi); // -1
    printf ("ssu = %hhu\n", ssu); // 255
    if (ssi == ssu) // false
        puts ("ssi == ssu");
    puts ("");

    int16_t si = UINT16_MAX;
    uint16_t su = si;
    printf ("si = %hd\n", si); // -1
    printf ("su = %hu\n", su); // 65535
    if (si == su) // false
        puts ("si == su");
    puts ("");

    int32_t i = UINT32_MAX;
    uint32_t u = i;
    printf ("i = %d\n", i); // -1
    printf ("u = %u\n", u); // 4294967295
    if (i == u) // true????
        puts ("i == u");
    puts ("");

    int64_t li = UINT64_MAX;
    uint64_t lu = li;
    printf ("li = %ld\n", li); // -1
    printf ("lu = %lu\n", lu); // 18446744073709551615
    if (li == lu) // true
        puts ("li == lu");
}
While the 64-bit example may be explained by the fact that there is no wider integer to go to, the 32-bit one is counter-intuitive. Shouldn't it be the same as the 8- and 16-bit cases?
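The only difference I can find on my own is how wide each operand ends up after promotion (unary + applies the integer promotions, so sizeof (+x) shows the promoted width; the sizes in the comments are from my machine, where int is 32 bits):

#include <stdio.h>
#include <stdint.h>

int main (void)
{
    int8_t  ssi = -1;  uint8_t  ssu = UINT8_MAX;
    int16_t si  = -1;  uint16_t su  = UINT16_MAX;
    int32_t i   = -1;  uint32_t u   = UINT32_MAX;
    int64_t li  = -1;  uint64_t lu  = UINT64_MAX;
    printf ("%zu %zu\n", sizeof (+ssi), sizeof (+ssu)); // 4 4
    printf ("%zu %zu\n", sizeof (+si),  sizeof (+su));  // 4 4
    printf ("%zu %zu\n", sizeof (+i),   sizeof (+u));   // 4 4
    printf ("%zu %zu\n", sizeof (+li),  sizeof (+lu));  // 8 8
}

That still doesn't explain to me why the 32-bit comparison behaves like the 64-bit one rather than like the 8- and 16-bit ones.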