Bits is bits. What the bits mean is up to you.
Let's talk about 8-bit quantities to make it easier on us. Consider the bit pattern
1 0 0 0 0 0 0 0
What does that 'mean'?
If you want to consider it as an unsigned binary integer, it's 128 (equals 2 to the 7th power).
If you want to consider it as a signed binary integer in two's-complement representation, it's -128.
If you want to treat it as a signed binary integer in sign-and-magnitude representation (which nobody does any more), it's -0. Which is one reason we don't do that.
In short, large positive numbers are distinguished from negative numbers only by what the programmer intends the bits to mean. The distinction does not exist in the bits themselves.
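To make that concrete, here's a minimal C sketch (getting slightly ahead of myself; the variable names are mine, and it assumes an ordinary two's-complement machine, where the out-of-range cast behaves the usual way) that reads the very same byte both ways:

```c
#include <stdio.h>

int main(void) {
    unsigned char bits = 0x80;              /* the bit pattern 1000 0000 */

    /* Same bits, two interpretations -- the choice is the programmer's: */
    printf("as unsigned: %u\n", (unsigned)bits);          /* prints 128 */
    printf("as signed:   %d\n", (int)(signed char)bits);  /* prints -128 on a
                                                             two's-complement machine */
    return 0;
}
```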
Languages like C/C++ have signed and unsigned types to help (by defining whether, for example, 1000 0000 is greater or less than 0000 0000), but there will always be pitfalls you need to be aware of, because integers in computer hardware are finite, unlike the integers of mathematics.
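Here's a small example of the kind of pitfall I mean (again my own sketch; the cast of 0x80 to a signed type is technically implementation-defined, but on common two's-complement hardware the comparisons come out as commented):

```c
#include <stdio.h>

int main(void) {
    unsigned char u8 = 0x80;                /* 1000 0000 as unsigned: 128        */
    signed char   s8 = (signed char)0x80;   /* same bits as signed: -128 on
                                               two's-complement machines         */

    /* The declared type decides how 1000 0000 compares with 0000 0000: */
    printf("unsigned: 0x80 > 0 ? %s\n", u8 > 0 ? "yes" : "no");   /* yes */
    printf("signed:   0x80 > 0 ? %s\n", s8 > 0 ? "yes" : "no");   /* no  */

    /* A classic signedness pitfall: in a mixed comparison the int is
       converted to unsigned, so -1 becomes a huge positive value.     */
    unsigned int u = 0u;
    int i = -1;
    printf("-1 < 0u ? %s\n", i < u ? "yes" : "no");               /* no  */

    return 0;
}
```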