There are a lot of similar questions asked on SO, but I've yet to find one that works and is easily portable to C#. Most involve C++ or similar, and the (presumably) working answers rely on either embedded assembly or native C/C++ functions that don't exist in C#. Several functions work for part of the range, but fail at other parts. I found one working answer I was able to port to C#, but it was very slow (turns out it's decently-fast when I compile to x64 instead of x86, so I posted it as the answer to beat).
Problem
I need a function that allows me to multiply any 64-bit integer by a fraction between 0 and 1 (or -1 and 1) that is derived from two 64-bit integers. Ideally, the answer would work for both Int64 and UInt64, but it's probably not hard to make one work from the other. In my case, I have a random 64-bit Int64/UInt64 (using the xoshiro256p algorithm, though that's likely irrelevant). I want to scale that number to an arbitrary range within the type's allowed values. For example, I might want to scale Int64 to the range [1000, 35000]. This is, conceptually, easy enough:
UInt64 minVal = 1000;
UInt64 maxVal = 35000;
UInt64 maxInt = UInt64.MaxValue;
UInt64 randInt = NextUInt64(); // Random value between 0 and maxInt.
UInt64 diff = maxVal - minVal + 1;
UInt64 scaledInt = randInt * diff / maxInt; // This line can overflow.
return scaledInt + minVal;
As noted by many other people, and in the comment above, the problem is that randInt * diff can potentially overflow.
On paper, I could simply store that intermediate result in a 128-bit integer, then store the result of the division in the 64-bit output. But 128-bit math isn't native to 64-bit systems, and I'd rather avoid arbitrary-precision libraries, since I'll be making lots of calls to this function and efficiency matters.
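For clarity, this is the shape of that 128-bit-intermediate approach (sketch only, assuming some 128-bit type is available, e.g. .NET 7+'s built-in System.UInt128 or the UInt128 library I tested below):
static UInt64 MulDiv64(UInt64 value, UInt64 numerator, UInt64 denominator)
{
    // The product is computed in 128 bits, so the multiplication can't overflow;
    // the quotient fits back into 64 bits whenever value * numerator / denominator <= UInt64.MaxValue.
    UInt128 product = (UInt128)value * numerator;
    return (UInt64)(product / denominator);
}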
I could multiply by a double to get 53 bits of precision, which is fine for what I'm currently doing, but I'd rather come up with a proper solution.
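That version looks roughly like this (sketch only; ScaleWithDouble is just an illustrative name, and the conversions lose precision once values exceed 2^53):
static UInt64 ScaleWithDouble(UInt64 randInt, UInt64 minVal, UInt64 maxVal)
{
    UInt64 diff = maxVal - minVal + 1;                     // assumes the range isn't the full 64-bit span
    double fraction = randInt / (double)UInt64.MaxValue;   // in [0, 1], only ~53 bits of precision
    return minVal + (UInt64)(fraction * (diff - 1));
}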
I could create a C++ library with one of the ASM solutions and call that library, but I'd like something that's pure C#.
Requirements
- Needs to be pure C#.
- Needs to work for any set of inputs such that randInt * diff / maxInt is in the range [0, maxInt] (and each value itself is in the same range).
- Shouldn't require an external library.
- Needs to be +-1 from the mathematically-correct answer.
- Needs to be reasonably quick. Maybe I'm just asking for miracles, but I feel like if doubles can do 5-10 ms, we should be able to hit 20 ms with purpose-built code that gets another 11 bits of precision.
- Ideally works relatively well in both release and debug modes. My code has about a 3:1 debug-to-release ratio, so I'd think we could get debug under roughly 5 times the release time.
My Testing
I've tested the following solutions for relative performance. Each test ran 1 million iterations of my random number generator, scaling using various methods. I started by generating random numbers and putting them in lists (one for signed, one for unsigned). Then I ran through each list and scaled it into a second list.
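In outline (the exact code is in the repo linked below), each test looks something like this, with ScaleToRange standing in for whichever scaling method is under test:
const int N = 1_000_000;
var inputs = new List<UInt64>(N);
for (int i = 0; i < N; i++)
    inputs.Add(NextUInt64());                              // xoshiro256p generator

var outputs = new List<UInt64>(N);
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < N; i++)
    outputs.Add(ScaleToRange(inputs[i], 1000, 35000));     // method under test
sw.Stop();
Console.WriteLine($"{sw.ElapsedMilliseconds} ms");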
I initially had a bunch of tests in debug mode. It mostly didn't matter (we're testing relative performance), but the Int128/UInt128 libraries fared much better in release mode.
Numbers in parentheses are the debug times. I include them here because I still want decent performance while debugging. The Int128 library, for example, is great in release mode but terrible in debug. It might be useful to use something with a better balance until you're ready for the final release. Because I'm testing a million samples, the time in milliseconds is also the time in nanoseconds per operation (all million UInt64s are generated in 33 ms, so each one is generated in 33 ns).
Source code for my testing can be found here, on GitHub.
- 86 ms (267): the Int64 random generator.
- 33 ms (80): the UInt64 random generator.
- 4 ms (5): using double conversion to Int64, with reduced precision.
- 8 ms (10): again for UInt64.
- 76 ms (197): this C Code for Int64, converted to C# (exact code in my answer below).
- 72 ms (187): again for UInt64.
- 54 ms (1458): this UInt128 library, for Int64.
- 40 ms (1476): again for UInt64.
- 1446 ms (1455): double128 library for Int64. Requires a paid license for commercial use.
- 1374 ms (1397): again for UInt64.
I couldn't get these to give proper results.
- this MulDiv64 library, linked to the main application with DllImport.
- QPFloat, compiled to x64; I created a MulDiv64 function in the C++ code.
- this Java code.
- the MFllMulDiv function from the Microsoft Media Foundation library. I tried to test it, but couldn't figure out how to get VS to link it into my C++ project properly.
Similar Questions
Most accurate way to do a combined multiply-and-divide operation in 64-bit?
- Answers by phuclv, Soonts, Mysticial, and 500 - Internal Server Error involve external libraries, assembly, or MSVC-specific functions.
- Answers by timos, AnT, Alexey Frunze, and Michael Burr don't actually answer anything.
- Answers by Serge Rogatch and Pubby aren't precise.
- Answer by AProgrammer works, but is very slow (and I have no idea how it works) -- I ended up using it anyway and got decent results when compiling to x64.
How can I descale x by n/d, when x*n overflows?
- The only answer, by Abhay Aravinda, isn't real code, I wasn't sure how to implement the last section, and the comments suggest it can overflow for large values anyway.
Fast method to multiply integer by proper fraction without floats or overflow
- Answers by Taron and chux - Reinstate Monica are approximations or MSVC-specific.
- Answer by R.. GitHub STOP HELPING ICE just uses 64-bit math since that question is about multiplying Int32.
(a * b) / c MulDiv and dealing with overflow from intermediate multiplication
- Answer by Jeff Penfold didn't work for me (I think I'm missing something in the logical operators when converting from Java to C#), and it was very slow.
- Answer by greybeard looks nice, but I wasn't sure how to translate it to C#.
- Answers by tohoho and Dave overflow.
- Answer by David Eisenstat requires BigInt libraries.
How to multiply a 64 bit integer by a fraction in C++ while minimizing error?
- All the answers overflow in different circumstances.