The assumption that the highest bit is a simple sign bit is wrong. Java, like most modern programming languages (and hardware architectures), uses the so-called two's complement representation for integers. (The highest bit does, coincidentally, indicate the sign, but not in the way you might expect: the representations of 150 and -150 differ in more than just that one bit.)
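You can see this directly in Java, since `Integer.toBinaryString` prints the two's complement bit pattern of an `int`. A minimal sketch (the class name is just for illustration):

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(150));
        // -> 10010110 (leading zeros omitted)
        System.out.println(Integer.toBinaryString(-150));
        // -> 11111111111111111111111101101010
        // Flipping only the top bit of 150 would give a completely
        // different pattern, so negation is clearly not a single bit flip:
        // it inverts all the bits and adds 1.
    }
}
```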
This representation might seem like an odd choice at first, but it makes operations such as adding a positive number to a negative one (and variations thereof) work automagically, without requiring the processor to check for special cases.
According to the relevant Wikipedia article:
The system is useful in simplifying the implementation of arithmetic on computer hardware. Adding 0011 (3) to 1111 (−1) at first seems to give the incorrect answer of 10010. However, the hardware can simply ignore the left-most bit to give the correct answer of 0010 (2). Overflow checks still must exist to catch operations such as summing 0100 and 0100.
The system therefore allows addition of negative operands without a subtraction circuit and a circuit that detects the sign of a number. Moreover, that addition circuit can also perform subtraction by taking the two's complement of a number (see below), which only requires an additional cycle or its own adder circuit. To perform this, the circuit merely pretends an extra left-most bit of 1 exists.
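To make the quoted example concrete, here is a small sketch that simulates a 4-bit two's complement adder in Java (the helper names `add4` and `toSigned4` are made up for illustration):

```java
public class FourBitAdder {
    // Interpret the low 4 bits of x as a signed 4-bit value in -8..7.
    static int toSigned4(int x) {
        x &= 0b1111;                            // keep only 4 bits
        return (x & 0b1000) != 0 ? x - 16 : x;  // sign-extend the top bit
    }

    // Add two raw 4-bit patterns; any carry out of bit 3 is simply dropped,
    // just as the quote describes the hardware "ignoring the left-most bit".
    static int add4(int a, int b) {
        return toSigned4((a & 0b1111) + (b & 0b1111));
    }

    public static void main(String[] args) {
        System.out.println(add4(0b0011, 0b1111)); // 3 + (-1) -> 2, as in the quote
        System.out.println(add4(0b0100, 0b0100)); // 4 + 4 -> -8: genuine overflow
    }
}
```

The same silent wraparound happens at 32 bits with Java's `int`; if you want overflow to fail loudly instead, `Math.addExact(a, b)` throws an `ArithmeticException` when the sum does not fit.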
See this related answer for an even more in-depth explanation with lots of nice, easy-to-understand examples.