
I'm coding a C++/MFC application using Visual Studio 2010 and I need to maintain a cumulative running total that will be used to calculate an average transfer rate, as such:

//Let's assume that INT128 is a 128-bit integer type
static INT128 iRunningTotal = 0;
static INT128 iCounter = 0;

LONGLONG iteration_get_current_average(LONGLONG iRate)
{
    //May be called repeatedly...
    iRunningTotal += iRate;
    iCounter++;

    //Calculate the current average
    return iRunningTotal / iCounter;
}

I searched for "C++ 128-bit integer" and pretty much everywhere people suggest using the Boost library. Well, that's a possibility, but I'm not familiar with it and I don't use it anywhere else in my project.

So Boost aside, I'm curious, is there a way to do this with pure C/C++?

c00000fd
  • "I'm curious, is there a way to do this with pure C/C++?" - the answer is obviously yes since C++ is Turing Complete. But if you don't want to use a library, you'll need to make one yourself. – Mysticial May 21 '14 at 06:13
  • It is highly unlikely that you will actually need a 128-bit integer for this particular situation. Transfer rates are rarely, if ever, high enough to exceed the capacity of a 64-bit integer, which `LONGLONG` is. A 64-bit integer can hold values up to 16 *exabytes*. – Remy Lebeau May 21 '14 at 06:23
  • And by "highly unlikely" .. also 1) iRate could be scaled back (e.g. K/s, M/s) and 2) a double has a *much* wider range than a 128-bit integer, albeit with a loss of accuracy for "large values". (And there is absolutely *no* reason for iCounter to be that large.) – user2864740 May 21 '14 at 06:24
  • Transfer rates are usually expressed as floating point numbers to users, and rarely as raw bytes per time, so calculating rates as kb/mb/gb per time will drastically reduce the size of integer needed (the higher the unit, the smaller the bits needed). – Remy Lebeau May 21 '14 at 06:31
  • with a 64-bit unsigned integer you can count up to 18 exabits. According to [cisco](http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html) this is by 2017 about the expected *monthly* internet volume in many world regions, so it's likely something like the global monthly traffic volume right now. Do you really think you need more than that? – KillianDS May 21 '14 at 07:09

2 Answers


I shall leave aside the question of whether it's a good idea, or whether the physical quantity you're measuring could even in theory ever exceed a value of 2^63, or 10^19 or thereabouts. I'm sure you have your reasons. So what are your options in pure C/C++?

The answer is: not many.

  • 128-bit integers are not part of any standard. Some compilers (GCC and Clang) offer a non-standard __int128 extension, but Visual C++ does not.
  • A 64-bit double will give you the dynamic range (around 10^308). An excellent choice if you don't need exact answers: the significand holds only 53 bits, so once your total grows past 2^53 or so, adding a small number to it won't change it at all (see the short demonstration after this list).
  • An 80-bit long double is natively supported by the x87 floating-point unit, and that gives you a 64-bit significand together with the extended dynamic range (though note that Visual C++ maps long double to the plain 64-bit format).
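
To see the double rounding problem concretely, here is a minimal standalone demonstration (nothing project-specific, just standard C++):

#include <cstdio>

int main()
{
    //2^53 is the last point where double can represent every integer
    double total = 9007199254740992.0;
    double bumped = total + 1.0; //rounds straight back down to 2^53

    //Prints the same value twice: the +1 was silently lost
    printf("%.0f\n%.0f\n", total, bumped);
    return 0;
}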

So, how about roll-your-own 128 bit integer arithmetic? You would really have to be a masochist. It's easy enough to do addition and subtraction (mind your carries), and with a bit of thought it's not too hard to do multiplication. Division is another thing entirely. That is seriously hard, and the likely outcome is bugs similar to the Pentium FDIV bug of 1994.
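
That said, the easy part is genuinely easy. Here is a minimal sketch of the addition case, building an unsigned 128-bit accumulator from two 64-bit halves (the type and its name are made up for illustration, not from any library):

struct UInt128
{
    unsigned long long hi; //high 64 bits
    unsigned long long lo; //low 64 bits

    UInt128() : hi(0), lo(0) {}

    UInt128& operator+=(unsigned long long x)
    {
        lo += x;
        //Unsigned wrap-around means the addition carried
        if (lo < x)
            ++hi;
        return *this;
    }
};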

You could probably accumulate your counters in two (or more) 64-bit integers without much difficulty, then convert them into doubles for the calculations at the end. That shouldn't be too hard; a sketch of what it might look like follows.
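
Continuing the sketch above (again illustrative only, and using plain long long rather than the Windows LONGLONG typedef to keep it self-contained), the accumulate-then-divide version of the original function could look like this:

//2^64 as a double; multiplying the high half by it recombines the value
double to_double(const UInt128& v)
{
    return v.hi * 18446744073709551616.0 + v.lo;
}

long long iteration_get_current_average(long long iRate)
{
    static UInt128 iRunningTotal;
    static unsigned long long iCounter = 0;

    iRunningTotal += (unsigned long long)iRate;
    ++iCounter;

    //The division happens in doubles: inexact, but fine for an average
    return (long long)(to_double(iRunningTotal) / (double)iCounter);
}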

After that, I'm afraid it's off to library shopping. You mentioned Boost, but there are more specialised libraries around, such as cpp-bigint.

Not surprisingly, this question has been asked before and has a very good answer: Representing 128-bit numbers in C++.

david.pfx
  • Thanks. I appreciate you taking your time to actually address the question. – c00000fd May 21 '14 at 19:49
  • Hmm... implementing 128-bit division and other basic integer arithmetic is not that hard. You'd obviously do it in asm for speed. [Here's x64 example](https://github.com/JesperMikkelsen/Big-Numbers/blob/master/Lib/Src/Math/Int128x64.asm). – ahmd0 Jul 13 '17 at 22:55
  • @ahmd0: I disagree, quite strongly. As I said, implementing division is hard, and testing it for correctness is even harder. Intel failed, and many years ago so did we (who knew Knuth was buggy?). Your code may be correct, or not, who can tell? Widely used and tested libraries are the way to go. – david.pfx Jul 15 '17 at 07:22
  • @david.pfx People who take the time to test things can tell. Do you think widely used and tested libraries did their testing differently than you would? All these "accidents" are trivial incompetence due to a lack of actual care for testing. You can write good tests and catch all the bugs before the first 5 people ever notice your library exists. If you have some number and divide it in your code, but the result is clearly wrong according to what you did by hand, I wonder how you can tell whether it's wrong or not? –  Sep 07 '19 at 21:43
  • @Sahsahae: Then you've obviously never done it for real; I have. The testing space for numerical libraries is vast, way beyond 'good' testing practice. Do you think you're better than Intel? Would you have detected the Pentium bug before turning it into silicon? Obviously not. – david.pfx Sep 09 '19 at 00:47
  • @Sahsahae Your comment assumes that division is implemented with an algorithm similar to what you would do by hand, in which case it might be true that you would only have to test a few cases to ensure that it works properly. In reality, for better efficiency many techniques are used (lookup tables, replacing division by multiplication and shifts, ...), and you then have many algorithms where some might overflow only for some specific ranges. – Phil1970 Jan 03 '20 at 15:44
  • If we appeal to authority: had I been the one at OpenSSL working on it, Heartbleed would never have happened. That's how relevant the authority fallacy is. Have a nice day. –  Jan 03 '20 at 16:05

Let's try to compute the point at which your numbers could become large enough to overflow 64-bit numbers.

Let's assume you're doing your measurements once per microsecond. At a rate of 1 million increments per second, it'll take 2^64 / 1,000,000 seconds for a 64-bit number to overflow -- about 1.8 × 10^13 seconds, which works out to over half a million years of counting. Even if you increase the rate to once per nanosecond, it would still take well over 500 years.

For the running total, you could (theoretically) run out a little sooner. If, for example, you had 100 Gigabit Ethernet, and had it running at maximum theoretical bandwidth all the time, you'd run out in (a little) less than 47 years.

If you limit yourself to technologies most of us can actually afford, about the fastest transfer rates most of us deal with are to/from SSDs. Assuming you had drives that could handle it, the most recent SATA Express specification supports transfers at up to 16 Gb/s. You'd need to saturate that 24/7 for well over 200 years before you used up the full range of a 64-bit integer.

Hmm...maybe we should look at main memory. Let's assume 4 channels of the fastest DDR4 memory yet specified, and (as always) the grossly unrealistic assumption that you'll keep it operating at maximum theoretical bandwidth 24/7. With this, you could still count all transfers to and from memory for over 4 years at a time before you'd be in any danger of a 64-bit integer overflowing. A quick program to check these figures follows.
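
If you'd like to check those figures yourself, here's a quick back-of-the-envelope program. The rates are the same round numbers used above (12.5 GB/s for 100 Gb Ethernet, 2 GB/s for SATA Express, and an assumed 4 channels of DDR4-3200 at 25.6 GB/s each), not exact spec values:

#include <cstdio>

int main()
{
    const double range = 18446744073709551616.0; //2^64
    const double secsPerYear = 365.25 * 24.0 * 3600.0;

    //Bytes per second for each scenario discussed above
    const double rates[] = { 12.5e9, 2.0e9, 102.4e9 };
    const char* names[] = { "100 Gb Ethernet", "SATA Express", "4-channel DDR4" };

    for (int i = 0; i < 3; ++i)
        printf("%-16s %7.1f years to overflow\n",
               names[i], range / rates[i] / secsPerYear);
    return 0;
}

That prints roughly 46.8, 292.3 and 5.7 years respectively, matching the estimates above.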

Of course, you could try to over-clock the CPU and RAM to get there a little faster, but that would probably be a losing game--anything more than the very most modest overclock will probably reduce the life expectancy of the parts, so the machine would probably die before the 64-bit integer overflowed.

Bottom line: Your need for 128-bit integers seems questionable at best.

Jerry Coffin