Historically, there were integer formats that could represent both −0 and +0: both sign-and-magnitude and one’s complement have two representations of zero. These proved less useful than two’s complement, which won favor and is ubiquitous today.
Two’s complement has some numerical properties that make it a little nicer to implement in hardware, and having two zeros caused some nuisance for programmers. (I heard of bugs such as an account balance being −0 instead of +0, resulting in a person being sent a bill when they should not have been.)
Floating-point uses sign-and-magnitude, so it can represent both −0 and +0. Due to the nature of floating-point, the arithmetic properties of two’s complement would not aid a floating-point implementation as much, and having two zeros allows a programmer to use a little extra information in some circumstances.
So the choices for integer and floating-point formats are motivated by utility, not mathematical necessity.
A Look At Integer Arithmetic
Let’s consider implementing some integer arithmetic in computer hardware, using four bits for study. Essentially the first thing we would do is implement unsigned binary arithmetic, so we design some logic gates to make adders and other arithmetic units. Then the inputs 0101 (5) and 0011 (3) to the adder produce the output 1000 (8).
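To make this concrete, here is a minimal sketch in C of such a four-bit unsigned adder, with the four-bit width imposed by masking; add4 is just an illustrative name, not anything from real hardware:

```c
#include <stdio.h>

/* Illustrative sketch: a four-bit unsigned adder, simulated by adding
   and then keeping only the low four bits. */
static unsigned add4(unsigned a, unsigned b)
{
    return (a + b) & 0xF;
}

int main(void)
{
    printf("%X\n", add4(0x5, 0x3));  /* 0101 + 0011: prints 8 (1000) */
    return 0;
}
```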
Next, we want to handle negative numbers. In writing, we handle negative numbers by putting a sign in front, so our first thought might be to do the same thing with bits: Use a bit in front to indicate negative. Now we have a sign-and-magnitude representation. 0001 represents +1, and 1001 represents −1. 0010 represents +2, and 1010 represents −2. 0111 represents +7, and 1111 represents −7. And, of course, 0000 represents +0, and 1000 represents −0.

That is an idea, and then we must implement it. We have already got an adder, and, if we feed it 0010 (2) and 0011 (3), it correctly outputs 0101 (5). But, if we feed it 0011 (3) and 1001 (−1), it outputs 1100 (−4). So we have to modify it. Well, that is not too bad; we have a subtraction unit for unsigned binary, so we can look at the first bit, and, if we are adding a negative number, we subtract instead of adding. That works for some operations; for 0011 and 1001, observing the leading 1 on the second operand and feeding 011 and 001 to the subtraction unit would produce 010 (2), which is correct.

But, if we have 0010 and 1011, feeding 010 and 011 to the subtraction unit might produce some error indication (it was originally designed for unsigned binary), or it might “wrap” and produce 111 (because such wrapping, along with a “borrow out” bit in the output, makes the subtraction unit work as part of a design for subtracting wider numbers). Either way, that is wrong for our signed numbers; we want the output of 0010 (2) plus 1011 (−3) to be 1001 (−1). So we have to design new arithmetic units that handle this. Perhaps, when adding numbers of mixed signs, they figure out which one is larger in magnitude, subtract the smaller from the larger, and then apply the sign bit of the larger, as in the sketch below. In any case, we have a fair amount of work to do just to design the addition and subtraction units.
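Here is a minimal sketch in C of that compare-and-subtract approach, assuming bit 3 is the sign and bits 0–2 are the magnitude; sm_add is a hypothetical name:

```c
#include <stdio.h>

/* Illustrative sketch of four-bit sign-and-magnitude addition:
   bit 3 is the sign, bits 0-2 are the magnitude. */
static unsigned sm_add(unsigned a, unsigned b)
{
    unsigned sa = a >> 3, sb = b >> 3;  /* sign bits */
    unsigned ma = a & 7,  mb = b & 7;   /* magnitudes */

    if (sa == sb)                            /* same signs: add magnitudes, keep sign */
        return (sa << 3) | ((ma + mb) & 7); /* (magnitude overflow ignored here) */
    else if (ma >= mb)                       /* mixed signs: subtract the smaller... */
        return (sa << 3) | (ma - mb);        /* ...and apply the sign of the larger */
    else
        return (sb << 3) | (mb - ma);
}

int main(void)
{
    printf("%X\n", sm_add(0x3, 0x9));  /* 3 + -1: prints 2 (0010) */
    printf("%X\n", sm_add(0x2, 0xB));  /* 2 + -3: prints 9 (1001, -1) */
    return 0;
}
```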
Another suggestion is, to make a number negative, invert every bit. This is called one’s complement. It is easy to understand and fits the notion of negation: just negate everything. Let’s consider how it affects our arithmetic units. For the combinations of +3 or −3 with +2 or −2, we would want these results:

- 0011 (3) + 0010 (2) = 0101 (5)
- 0011 (3) + 1101 (−2) = 0001 (1)
- 1100 (−3) + 0010 (2) = 1110 (−1)
- 1100 (−3) + 1101 (−2) = 1010 (−5)

Upon examination, there is a simple way to adapt our binary adder to make this work: Do the addition on all four bits as if they were unsigned binary, and, if there is a carry out of the leading bit, add it back to the low bit. In unsigned binary, 0011 + 0010 = 0101 with no carry, so the final output is 0101. 0011 + 1101 = 0000 with a carry, so the final result is 0001. 1100 + 0010 = 1110 with no carry, so the final result is 1110. And 1100 + 1101 = 1001 with a carry, so the final result is 1010.
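The end-around carry is easy to mimic in C: add as unsigned, then fold any carry out of the fourth bit back into the low bit. A minimal sketch, with oc_add as an illustrative name:

```c
#include <stdio.h>

/* Illustrative sketch of four-bit one's complement addition: add as
   unsigned binary, and if there is a carry out of the leading bit,
   add it back to the low bit (the end-around carry). */
static unsigned oc_add(unsigned a, unsigned b)
{
    unsigned sum = a + b;            /* at most five bits wide */
    return (sum & 0xF) + (sum >> 4); /* fold the carry back in */
}

int main(void)
{
    printf("%X\n", oc_add(0x3, 0xD));  /* 3 + -2: prints 1 (0001) */
    printf("%X\n", oc_add(0xC, 0xD));  /* -3 + -2: prints A (1010, -5) */
    return 0;
}
```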
This is nice; our one’s complement adder is simpler than the sign-and-magnitude adder. It does not need to compare magnitudes and does not need to do a subtraction to handle negative numbers. We can make it cheaper and make more profit.
Then somebody comes up with the idea of two’s complement. Instead of inverting every bit, we will conceptually subtract the number from 2^n, where n is the number of bits. So 10000 − 0001 = 1111 represents −1, and 1110 is −2, 1101 is −3, and so on.
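In bits, subtracting from 2^n is the same as inverting every bit and adding one, since 2^n − x = ((2^n − 1) − x) + 1 and 2^n − 1 is all ones. A minimal sketch, with negate4 as an illustrative name:

```c
#include <stdio.h>

/* Illustrative sketch of four-bit two's complement negation: 10000 - x,
   kept to four bits, which equals inverting the bits and adding one,
   (~x + 1) & 0xF. */
static unsigned negate4(unsigned x)
{
    return (0x10 - x) & 0xF;
}

int main(void)
{
    printf("%X\n", negate4(0x1));  /* prints F (1111, -1) */
    printf("%X\n", negate4(0x3));  /* prints D (1101, -3) */
    return 0;
}
```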
What does this do to our adder? In unsigned binary, 0010 (2) + 1101 (13) = 1111 (15). In two’s complement, 0010 (2) + 1101 (−3) = 1111 (−1). The bits are the same! This actually works for all two’s complement numbers; adding the bit patterns as unsigned numbers produces the same results we want for adding two’s complement numbers. We can use the exact same logic gates for unsigned binary and two’s complement. That is brilliant; give that employee a raise. That is what modern hardware does; the same arithmetic units are used for adding or subtracting two’s complement numbers as are used for adding or subtracting unsigned numbers.
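A minimal demonstration of that coincidence: one four-bit add, read two ways. The interpretation happens only in the printing; the arithmetic is identical:

```c
#include <stdio.h>

int main(void)
{
    unsigned sum = (0x2 + 0xD) & 0xF;  /* 0010 + 1101 = 1111 */

    printf("bits: %X\n", sum);                    /* F (1111) */
    printf("as unsigned: %u\n", sum);             /* 15 */
    printf("as two's complement: %d\n",
           sum & 0x8 ? (int)sum - 16 : (int)sum); /* -1 */
    return 0;
}
```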
This is a large part of why two’s complement won out for representing negative integers. It results in simpler, easier, cheaper, faster, and more efficient computers.
(There is a difference between unsigned addition and two’s complement addition: How overflow is detected. In unsigned addition, an overflow occurs if there is a carry out of the high bit. In two’s complement addition, an overflow occurs if the carry into the sign bit differs from the carry out of the sign bit; a carry into the sign alone is not enough, since, for example, 1111 (−1) + 1111 (−1) = 1110 (−2) carries both into and out of the sign without overflowing. Adder units commonly handle this by reporting both indications, in one form or another. That information, if desired, is tested in later instructions; it does not affect the addition itself.)
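Here is a minimal sketch in C of computing both indications for a four-bit add; add4_flags is just an illustrative name:

```c
#include <stdio.h>

/* Illustrative sketch: report both overflow indications for a four-bit add. */
static void add4_flags(unsigned a, unsigned b)
{
    unsigned sum      = (a + b) & 0xF;
    unsigned carryout = ((a + b) >> 4) & 1;             /* carry out of the sign bit */
    unsigned carryin  = (((a & 7) + (b & 7)) >> 3) & 1; /* carry into the sign bit */

    printf("%X + %X = %X  unsigned overflow: %u  signed overflow: %u\n",
           a, b, sum, carryout, carryin ^ carryout);
}

int main(void)
{
    add4_flags(0x5, 0x3);  /* 5 + 3 = 8: fits unsigned, overflows signed (max +7)  */
    add4_flags(0xF, 0xF);  /* 15 + 15 overflows unsigned; -1 + -1 = -2 fits signed */
    return 0;
}
```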