I am trying to manually implement multiplication between a double and a 128-bit integer that I have created myself using two ulongs.
My understanding is as follows:
1. Decompose the double into its significand and exponent, ensuring the significand is normalized.
2. Multiply the significand by my uint128. This will give me a 256-bit number.
3. Shift my 256-bit number by the exponent extracted from the double.
4. If the shifted value is over 128 bits, then I overflowed.
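To make those steps concrete, here is the whole pipeline modeled with System.Numerics.BigInteger standing in for my uint128/uint256 types (MultiplyByDouble and its layout are just names for this sketch, not my real code; it assumes a positive multiplier and a non-negative value):

using System;
using System.Numerics;

// BigInteger stand-in for the uint128 * double pipeline described above.
static BigInteger MultiplyByDouble(BigInteger value, double multiplier)
{
    // Step 1: decompose the double into significand and exponent.
    long bits = BitConverter.DoubleToInt64Bits(multiplier); // sign bit is 0 here
    int exponent = (int)((bits >> 52) & 0x7FF);
    long significand = bits & 0xFFFFFFFFFFFFFL;
    if (exponent == 0)
        exponent = 1;                 // subnormal: no implicit leading bit
    else
        significand |= 1L << 52;      // normal: restore the implicit leading 1
    exponent -= 1075;                 // unbias (1023) and account for the 52 fraction bits

    // Step 2: multiply; with the real types this is the 128 x 64 -> 256-bit multiply.
    BigInteger product = value * significand;

    // Step 3: shift by the exponent (a negative exponent means shift right).
    product = exponent >= 0 ? product << exponent : product >> -exponent;

    // Step 4: anything at or above 2^128 is an overflow.
    if (product >> 128 != 0)
        throw new OverflowException();
    return product;
}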
I feel like I am incredibly close, but I am missing something. Let's say I have the following example: I am storing a uint128 with the value 2^127 and I want to multiply it by 8E-6.
uint128 myValue = new uint128(1UL << 63, 0UL); // 2^127 (high ulong, low ulong)
double multiplier = 8E-6;
uint128 product = myValue * multiplier;
The real value, or correct answer, is 1361129467683753853853498429727072.845824, so I would like to get 1361129467683753853853498429727072 as my 128-bit integer.
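For reference, that expected value can be cross-checked with exact integer arithmetic (System.Numerics.BigInteger):

BigInteger expected = (BigInteger.One << 127) * 8 / 1_000_000; // 2^127 * 8E-6, truncated
// expected == 1361129467683753853853498429727072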
The problem is that my implementation is giving me 1361129467683753792259819967610881.
int exponent; // This value ends up being -69 for 8E-6
uint128 mantissa = GetMantissa(multiplier, out exponent); // This value ends up being 4722366482869645 after normalizing it.
uint256 productTemp = myValue * mantissa; // This value is something like 803469022129495101412490705402148357126451442021826560.
uint128 product = (uint128)(productTemp >> -exponent); // exponent is negative, so this is a right shift; the result is 1361129467683753792259819967610881.
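For completeness, the multiply in step 2 is equivalent to this limb-by-limb product (a sketch using .NET 5+'s Math.BigMul; the names and tuple layout are just for illustration, not my actual uint256 code):

// 128 x 64 -> 192-bit multiply, highest limb first.
static (ulong hi, ulong mid, ulong lo) Mul128x64(ulong valueHi, ulong valueLo, ulong m)
{
    ulong loCarry = Math.BigMul(valueLo, m, out ulong lo);   // low 64 x 64 -> 128 bits
    ulong hiHigh  = Math.BigMul(valueHi, m, out ulong hiLo); // high 64 x 64 -> 128 bits
    ulong mid = hiLo + loCarry;                              // middle limb
    if (mid < hiLo) hiHigh++;                                // propagate the carry
    return (hiHigh, mid, lo);
}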
I am using the code from "extracting mantissa and exponent from double in c#" to get my mantissa and exponent, and I can use those values to correctly get 8E-6 back as a double.
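That round-trip check looks like this (treating the mantissa as a plain ulong; the multiply is exact because 4722366482869645 fits in a double's 53-bit significand):

double roundTrip = 4722366482869645UL * Math.Pow(2, -69); // exactly the double 8E-6
Console.WriteLine(roundTrip == 8E-6); // True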
Does anyone know what I am getting wrong here? If I use .8 instead of 8E-6, my values are much closer to the expected answer.