
I have a question about signed numbers and hexadecimal numbers and their use in arithmetic in C. From what I understand, a signed type can store a smaller maximum value than an unsigned type of the same width.

For example, a signed 32-bit integer has a maximum value of 2,147,483,647, whereas an unsigned 32-bit integer has a maximum value of 4,294,967,295.

It appears that the value of these numbers overflows when performing addition on the highest possible values:

printf("My integer: %i\n", 2147483647 + 1);

The output that I get is:

My integer: -2147483648

However, despite this overflow, the hexadecimal representation of the result appears well-formed and correct for the addition:

printf("My hexadecimal: %#X\n", 0x7FFFFFFF + 0x1); // 2147483647 + 1 in Hex

The output that I get is:

My hexadecimal: 0X80000000

My question is, would there be any situation where performing this type of addition and then looking at the hexadecimal representation is beneficial?

At first glance, it appears that this method would give us access to the entire range of the 32-bit number for the operation of addition. Any thoughts or comments are appreciated. Cheers

Adam Bak

1 Answer


Overflow on signed integers is undefined behavior, which means you can't reliably predict what will happen when it occurs.

That being said, what you're seeing here is an illustration that your system is using 2's complement for representing negative values in signed integers.

While 2's complement is very common, it's not universal. Some systems may use sign-and-magnitude or ones' complement instead. So for maximum portability, you shouldn't depend on this behavior.

dbush