I'm starting to work with DER (Distinguished Encoding Rules) encoding and am having trouble understanding how integers are encoded.
In the reference document https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf this encoding is defined as follows:
8.3.1 The encoding of an integer value shall be primitive. The contents octets shall consist of one or more octets.
8.3.2 If the contents octets of an integer value encoding consist of more than one octet, then the bits of the first octet and bit 8 of the second octet:
a) shall not all be ones; and
b) shall not all be zero.
NOTE – These rules ensure that an integer value is always encoded in the smallest possible number of octets.
8.3.3 The contents octets shall be a two's complement binary number equal to the integer value, and consisting of bits 8 to 1 of the first octet, followed by bits 8 to 1 of the second octet, followed by bits 8 to 1 of each octet in turn up to and including the last octet of the contents octets.
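To check my reading of 8.3.1 to 8.3.3, here is how I would naively build the contents octets in Python (the helper name int_contents_octets is just mine for illustration; I'm relying on the fact that int.to_bytes with signed=True produces a big-endian two's complement representation and raises OverflowError when the value does not fit):

    def int_contents_octets(value: int) -> bytes:
        # Smallest big-endian two's complement encoding of `value`,
        # which is what I understand 8.3.1-8.3.3 plus the NOTE to require.
        length = 1
        while True:
            try:
                return value.to_bytes(length, "big", signed=True)
            except OverflowError:
                # The value does not fit into `length` octets as a two's
                # complement number, so try one octet more.
                length += 1

Is that roughly what the rules describe, or am I already off track here?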
On another site, https://learn.microsoft.com/en-us/windows/desktop/seccertenroll/about-integer, it is explained that for positive numbers whose binary representation starts with a 1 (i.e. whose most significant bit is set), a leading zero byte is prepended. The same is mentioned in the answers to a previous question on Stack Overflow: ASN Basic Encoding Rule of an integer.
Unfortunately, from these answers I cannot see how this latter instruction can be deduced from the rules of the reference document.
For example, if I want to encode the number 128, why can't I do this as
[tag byte] [length byte] 10000000?
I know that the correct encoding would be [tag byte] [length byte] 00000000 10000000, but which condition is violated by the variant above? It probably has something to do with the two's complement, but isn't the two's complement representation of 128 again 10000000?
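When I let Python interpret those octets (again using only the standard int.to_bytes / int.from_bytes conversions, which as far as I know follow plain two's complement), I get the following, which only adds to my confusion about where my reasoning goes wrong:

    >>> int.from_bytes(b"\x80", "big", signed=True)   # one octet 10000000 read as two's complement
    -128
    >>> (128).to_bytes(1, "big", signed=True)         # 128 apparently does not fit into one signed octet
    Traceback (most recent call last):
        ...
    OverflowError: int too big to convert
    >>> (128).to_bytes(2, "big", signed=True)         # the encoding everyone says is correct
    b'\x00\x80'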
I hope you can help me understand why the description on the Microsoft site is equivalent to the original definition. Thank you.