146

What is the difference between signed and unsigned int?

Moumita Das
  • This is a real question, and the answer is not so simple but rather subtle. – R.. GitHub STOP HELPING ICE Apr 21 '11 at 05:36
  • Voting to reopen. It might be a duplicate, but it's definitely a real question. – Brian Apr 21 '11 at 13:27
  • Re: "It might be a duplicate" - [What is a difference between unsigned int and signed int in C?](http://stackoverflow.com/questions/3812022/what-is-a-difference-between-unsigned-int-and-signed-int-in-c) – eldarerathis Apr 21 '11 at 21:27
  • More tags should be added, since many languages use them. – Juan Boero Jun 19 '15 at 14:02
  • This question may need a chapter to elaborate. If you want to know the ins and outs, check [Unsigned and Signed Integers](http://kias.dyndns.org/comath/13.html) for more explanation. – anonymous May 30 '17 at 02:20

5 Answers

189

As you are probably aware, ints are stored internally in binary. Typically an int contains 32 bits, but in some environments might contain 16 or 64 bits (or even a different number, usually but not necessarily a power of two).

But for this example, let's look at 4-bit integers. Tiny, but useful for illustration purposes.

Since there are four bits in such an integer, it can assume one of 16 values; 16 is two to the fourth power, or 2 times 2 times 2 times 2. What are those values? The answer depends on whether this integer is a signed int or an unsigned int. With an unsigned int, the value is never negative; there is no sign associated with the value. Here are the 16 possible values of a four-bit unsigned int:

bits  value
0000    0
0001    1
0010    2
0011    3
0100    4
0101    5
0110    6
0111    7
1000    8
1001    9
1010   10
1011   11
1100   12
1101   13
1110   14
1111   15

... and here are the 16 possible values of a four-bit signed int:

bits  value
0000    0
0001    1
0010    2
0011    3
0100    4
0101    5
0110    6
0111    7
1000   -8
1001   -7
1010   -6
1011   -5
1100   -4
1101   -3
1110   -2
1111   -1

As you can see, for signed ints the most significant bit is 1 if and only if the number is negative. That is why, for signed ints, this bit is known as the "sign bit".

Alexey Frunze
Bill Evans at Mariposa
  • Perhaps worth pointing out is that this is the two's-complement format, which admittedly is nowadays widely used. There are also other ways to represent signed integers, most notably one's complement. – Schedler Apr 26 '11 at 11:59
  • Correct. And the ISO9899 C standard does not even require that either one's complement or two's complement be used; any other convention that actually works is permissible. – Bill Evans at Mariposa Apr 26 '11 at 15:31
  • Although two's complement is not required, `(unsigned)(-1)` is required to be the maximum representable value for `unsigned` (independent of the binary representation), which is trivially true for 2's complement, but not other representations. – rubenvb Nov 23 '11 at 11:52
  • @BillEvansatMariposa: The standard says that for signed integers there are 3 allowed representations: sign+magnitude, 2's complement, 1's complement. Any other would have to be invisible to the program and be perceived as one of these 3. – Alexey Frunze Nov 23 '11 at 11:57
  • Ok, but under the hood, what is REALLY happening? What is the difference between a SIGNED and an UNSIGNED number? How does the machine manage the computation? Does it just subtract a value from another? How does it distinguish 1111 = 15 from 1111 = -1? – Mihail Georgescu Mar 18 '16 at 14:22
  • @Mihail Georgescu: I'm not sure what you're asking. Unsigned 1111 - 0001 = 1110: 15 - 1 = 14. Signed 1111 - 0001 = 1110: -1 - 1 = -2. Addition and subtraction work the same whether signed or unsigned. If the machine has both signed and unsigned integer overflow detection, then of course those would work differently from each other. – Bill Evans at Mariposa Mar 19 '16 at 01:32
  • @BillEvansatMariposa No sir! I didn't know how signed and unsigned actually work. Yesterday I found out that whatever the instruction yields, the answer is INTERPRETED by the language itself, in the case of negative numbers, using TWO's complement. That's what confused me; I didn't know how the CPU could make the difference, but it doesn't actually make any difference. Thank you for your answer! I salute you. – Mihail Georgescu Mar 19 '16 at 10:35
  • The difference between signed and unsigned integers is not just implemented in the language; it has support in the CPU itself. If you add two integers, you can check for overflow with one CPU flag (or set of flags) for unsigned, and with another CPU flag (or set of flags) for signed. Similarly, if you are comparing two numbers, you can check the result with one CPU flag (or set of flags) for unsigned, and with another CPU flag (or set of flags) for signed. – Bill Evans at Mariposa Mar 20 '16 at 04:01
89

In layman's terms, an unsigned int is an integer that can never be negative and therefore has a larger range of positive values it can assume. A signed int can be negative, but gives up part of that positive range in exchange for the negative values it can assume.

user2977636
25

int and unsigned int are two distinct integer types. (int can also be referred to as signed int, or just signed; unsigned int can also be referred to as unsigned.)

As the names imply, int is a signed integer type, and unsigned int is an unsigned integer type. That means that int is able to represent negative values, and unsigned int can represent only non-negative values.

The C language imposes some requirements on the ranges of these types. The range of int must be at least -32767 .. +32767, and the range of unsigned int must be at least 0 .. 65535. This implies that both types must be at least 16 bits. They're 32 bits on many systems, or even 64 bits on some. int typically has an extra negative value due to the two's-complement representation used by most modern systems.

Perhaps the most important difference is the behavior of signed vs. unsigned arithmetic. For signed int, overflow has undefined behavior. For unsigned int, there is no overflow; any operation that yields a value outside the range of the type wraps around, so for example UINT_MAX + 1U == 0U.

Any integer type, either signed or unsigned, models a subrange of the infinite set of mathematical integers. As long as you're working with values within the range of a type, everything works. When you approach the lower or upper bound of a type, you encounter a discontinuity, and you can get unexpected results. For signed integer types, the problems occur only for very large negative and positive values, exceeding INT_MIN and INT_MAX. For unsigned integer types, problems occur for very large positive values and at zero. This can be a source of bugs. For example, this is an infinite loop:

for (unsigned int i = 10; i >= 0; i --) {
    printf("%u\n", i);
}

because i is always greater than or equal to zero; that's the nature of unsigned types. (Inside the loop, when i is zero, i-- sets its value to UINT_MAX.)

Keith Thompson
15

Sometimes we know in advance that the value stored in a given integer variable will always be positive, for example when it is being used only to count things. In such a case we can declare the variable unsigned, as in unsigned int num_students;. With such a declaration, the range of permissible integer values (for a 32-bit compiler) shifts from -2147483648 .. +2147483647 to 0 .. 4294967295. Thus, declaring an integer as unsigned almost doubles the size of the largest value it can hold.

Alexey Frunze
imran
-2

In practice, there are two differences:

  1. printing (e.g. with cout in C++ or printf in C): the unsigned integer's bit representation is interpreted as a nonnegative integer by print functions.
  2. ordering: how values compare depends on whether the type is signed or unsigned.

The following code uses the ordering criterion to detect whether plain char is signed or unsigned in a given compiler:

#include <stdio.h>

int main(void) {
    char a = 0;
    a--;                /* all-ones bit pattern: 255 if char is unsigned, -1 if signed */
    if (0 < a)
        printf("unsigned");
    else
        printf("signed");
    return 0;
}

char is signed on some compilers and unsigned on others. The code above determines which a given compiler chooses, using the ordering criterion: if char is unsigned, then after a-- its value is greater than 0; if it is signed, its value is less than zero. In both cases the bit representation of a is the same; that is, a-- makes the same change to the bit representation either way.

Minimus Heximus
  • If this explained the difference (one dealing in negative numbers and the other not), it would help this post a lot. – Daniel Jackson Nov 18 '19 at 15:41
  • @DanielJackson It's unclear what you're saying. A char can be considered signed or unsigned depending on the compiler; the output of the code depends on which the compiler chooses, and this shows the difference between signed and unsigned. – Minimus Heximus Nov 23 '19 at 16:27