In order to avoid representing 0 in two different ways when representing signed integers in bytes, such as 10000000 and 00000000, one can say, by convention, that 10000000 = -128. Is this correct?
Yes, although it's not quite a convention - it follows from a formula: in two's complement the top bit of a byte contributes -2^7 = -128, and the remaining bits contribute their usual positive values. By the same formula, -1 is 11111111 (-128 + 127).
Read here for details http://en.wikipedia.org/wiki/Two%27s_complement
Obviously there are different possible representations, including the one you mentioned which has two different numbers for 0.
Two's complement is the only representation I know of that's used in computers, and for that one your assumption is correct. In the datatype sometimes known as signed char, binary 10000000 is indeed -128. See: