According to the most-voted answer here: Is the literal 0xffffffff int or unsigned in C++? It says: "Assuming your implementation has 32-bit int, since 0xffffffff does not fit in an int, its type should be unsigned int." Unfortunately I can't comment there because I'm new here.
Question 1: why does 0xffffffff not fit in an int? Because its value is 4294967295? If so, then we are already interpreting it as unsigned. Why don't we interpret it as signed -1 instead?
#include <iostream>
#include <type_traits>
using namespace std;

int main() {
    cout << std::is_signed<decltype(0xffffffff)>::value << " " << 0xffffffff << endl;
    cout << std::is_signed<decltype(0xffffffe)>::value << " " << 0xfffffffe << endl;
    bool same = std::is_same_v<decltype(0xffffffff), decltype(0xfffffffe)>;
    std::cout << "0xFFFFFFFF and 0xFFFFFFFE are the same type? " << std::boolalpha << same << '\n';
    decltype(0xffffffff) a = -1;
    decltype(0xfffffffe) b = -1;
    cout << a << " " << b << endl;
}
The output is:
0 4294967295
1 4294967294
0xFFFFFFFF and 0xFFFFFFFE are the same type? true
4294967295 4294967295
Question 2: why do 0xffffffff and 0xfffffffe report different signedness but compare as the same type?
Question 3: if is_signed reports 1 for 0xfffffffe, then why isn't it a negative number?
Question 4: if is_signed reports 1 for 0xfffffffe, then why isn't b equal to -1?