
I'm running into a problem: I have an implied unsigned hexadecimal number as a string, provided from user input, that needs to be converted into a BigInteger.

Because a BigInteger is signed, any input whose highest-order bit is set (0x8 / 1000b) is treated as negative. This can't be resolved by simply checking the sign bit and multiplying by -1 or taking the absolute value, because the two's-complement interpretation doesn't respect the intended notation, e.g. every string made up entirely of F digits ("F", "FF", "FFFF", ...) parses as -1.

Here are some example inputs and their outputs:

var style = NumberStyles.HexNumber | NumberStyles.AllowHexSpecifier;


BigInteger.TryParse("6", style) == 6   // 0110 bin
BigInteger.TryParse("8", style) == -8  // 1000 bin
BigInteger.TryParse("9", style) == -7  // 1001 bin
BigInteger.TryParse("A", style) == -6  // 1010 bin
...
BigInteger.TryParse("F", style) == -1  // 1111 bin
...
BigInteger.TryParse("FA", style) == -6 // 1111 1010 bin
BigInteger.TryParse("FF", style) == -1 // 1111 1111 bin
...
BigInteger.TryParse("FFFF", style) == -1 // 1111 1111 1111 1111 bin

What is the proper way to construct a BigInteger from an implied unsigned hexadecimal string?


1 Answer


Prefixing your hex string with a "0" should do it:

BigInteger.TryParse(string.Format("0{0}", "FFFF"), style, ...)

The resulting BigInteger is 65535 in the example above.
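
For reference, a minimal helper built around that idea (a sketch, not the only way to do it; the method name and the use of the invariant culture are my own choices):

using System.Globalization;
using System.Numerics;

static class HexBigInteger
{
    // Prepending "0" keeps the most significant bit of the first byte clear,
    // so the parser treats the value as positive rather than two's complement.
    public static BigInteger ParseUnsignedHex(string hex)
    {
        return BigInteger.Parse("0" + hex, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
    }
}

Calling HexBigInteger.ParseUnsignedHex("FFFF") then yields 65535 instead of -1, and HexBigInteger.ParseUnsignedHex("8") yields 8 instead of -8.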

Edit

Excerpt from the BigInteger documentation:

When parsing a hexadecimal string, the BigInteger.Parse(String, NumberStyles) and BigInteger.Parse(String, NumberStyles, IFormatProvider) methods assume that if the most significant bit of the first byte in the string is set, or if the first hexadecimal digit of the string represents the lower four bits of a byte value, the value is represented by using two's complement representation. For example, both "FF01" and "F01" represent the decimal value -255. To differentiate positive from negative values, positive values should include a leading zero. The relevant overloads of the ToString method, when they are passed the "X" format string, add a leading zero to the returned hexadecimal string for positive values.
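
To illustrate that last point, a quick round trip (a sketch; the values in the comments follow from the documented behavior):

using System.Globalization;
using System.Numerics;

var value = new BigInteger(65535);
var hex = value.ToString("X");  // "0FFFF" (the leading zero marks the value as positive)

// Parsing the string back recovers 65535 rather than -1, thanks to that leading zero.
var roundTripped = BigInteger.Parse(hex, NumberStyles.HexNumber, CultureInfo.InvariantCulture);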

  • In my case the input string is plain hex, however a "0x" prefix is also allowed. Sure, I can check for the existence of that substring, remove it, and prepend a zero (actually that's what I'm doing at the moment); I'm just hoping for a more correct answer. – rheone Nov 11 '14 at 18:59
  • Updated my answer. Adding a leading zero seems like the proper technique to use. – SuperOli Nov 11 '14 at 20:44
  • Sure enough. Marked as correct, and slightly disappointed there isn't an existing unsigned biginteger. – rheone Nov 12 '14 at 16:25