You can only interpret a bit pattern as a two's complement number once you have decided exactly how many bits hold the number. Do you have a three-bit signed integer type somewhere in which you are storing the bits 100? If so, then 100 would be interpreted as -4.
If you store it in a larger integer type, we would normally assume the bits to the left of the 1 are all 0s (since otherwise you would have shown them), and the value would be positive 4.
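To make that concrete, here is a minimal C sketch of the rule (the helper name from_twos_complement is made up for illustration, and it assumes the width is small enough to fit in an int):

```c
#include <stdio.h>

/* Interpret the low `width` bits of `raw` as a two's complement value.
 * Illustrative only; assumes 0 < width < number of bits in an int.   */
static int from_twos_complement(unsigned raw, unsigned width)
{
    unsigned value = raw & ((1u << width) - 1u);   /* keep only `width` bits */
    if (value & (1u << (width - 1)))               /* sign bit set?          */
        return (int)value - (int)(1u << width);    /* subtract 2^width       */
    return (int)value;
}

int main(void)
{
    /* The bits 100 mean -4 in a 3-bit field, but +4 in anything wider. */
    printf("%d\n", from_twos_complement(0x4u, 3)); /* prints -4 */
    printf("%d\n", from_twos_complement(0x4u, 8)); /* prints  4 */
    return 0;
}
```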
By the way, it would be very unusual nowadays to find a compiler whose C or C++ int type is only 8 bits wide, like the one in the question.
(OK, "unusual" is an understatement--as a comment notes, the standard doesn't allow this, and as far as I know it has never been considered legitimate to have a plain int
type as shown in the question that could be stored in fewer than 16 bits.
You can declare a signed integer bit-field of just 3 bits within a struct
, but the syntax for that is quite different from what the question shows.)
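A minimal sketch of that bit-field syntax might look like this (the struct and member names are invented for illustration):

```c
#include <stdio.h>

struct tiny {
    signed int b : 3;   /* a genuine 3-bit signed field, range -4 .. 3 */
};

int main(void)
{
    struct tiny t;
    t.b = -4;                   /* stored as the bit pattern 100           */
    printf("b = %d\n", t.b);    /* prints b = -4                           */

    t.b = ~1;                   /* ~1 is -2; its low three bits are 110    */
    printf("b = %d\n", t.b);    /* prints b = -2                           */
    return 0;
}
```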
So it is not really correct even to interpret int b = ~1; as storing the bit pattern 11111110; the actual bit pattern has at least 16 bits, and on most compilers in recent years it has 32.
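Assuming a typical 32-bit int with two's complement representation, you can see the full pattern for yourself:

```c
#include <stdio.h>

int main(void)
{
    int b = ~1;   /* flips every bit of 000...0001                        */
    /* Typical output with a 32-bit two's complement int:
     *   b = -2, bits = fffffffe                                          */
    printf("b = %d, bits = %x\n", b, (unsigned)b);
    return 0;
}
```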