
I am trying to calculate 1 + 1 * 2 + 1 * 2 * 3 + 1 * 2 * 3 * 4 + ... + 1 * 2 * ... * n, where n is the user input. It works for values of n up to 12, but I want to calculate the sum for n = 13, n = 14 and n = 15. How do I do that in C89? As far as I know, unsigned long long int is only available in C99 and C11.

  1. Input 13, result 2455009817, expected 6749977113
  2. Input 14, result 3733955097, expected 93928268313
  3. Input 15, result 1443297817, expected 1401602636313

My code:

#include <stdio.h>
#include <stdlib.h>
int main()
{
    unsigned long int n;
    unsigned long int P = 1;
    int i;
    unsigned long int sum = 0;
    scanf("%lu", &n);
    for(i = 1; i <= n; i++)
    {
        P *= i;
        sum += P;
    }
    printf("%lu", sum);
    return 0;
}
Timʘtei

3 Answers


In practice, you want an arbitrary-precision arithmetic (a.k.a. bigint or bignum) library. My recommendation is GMPlib, but there are others.

Don't try to code your own bignum library. Efficient and clever algorithms exist, but they are unintuitive and difficult to grasp (you can find entire books devoted to the question). In addition, existing libraries like GMPlib take advantage of specific machine instructions (e.g. ADC, add with carry) that a standard C compiler won't emit from pure C code.

If this is homework and you are not allowed to use external code, consider for example representing a number in base (or radix) 1000000000 (one billion) and coding the operations yourself in a very naive way, similar to the way you learned arithmetic as a kid. But be aware that more efficient algorithms exist (and that real bignum libraries use them).

A number could be represented in base 1000000000 by an array of unsigned values, each being a "digit" in that base. So you need to manage the arrays (probably heap-allocated, using malloc) and their length.

Basile Starynkevitch
  • Some good advice here. I don't understand the downvote; time to reverse it! – Bathsheba May 30 '17 at 10:49
  • I think the algorithms for bignums are in Knuth's book. Do you know a better reference for implementing a bignum library such as the one in Python or Scheme? – alinsoar May 30 '17 at 14:20
  • I did read a Springer textbook on BigNums a few years ago. It did mention Knuth (but was not authored by him). Forgot the details – Basile Starynkevitch May 30 '17 at 14:22
  • But to implement a bignum library inside some language implementation, I would recommend glueing GMPlib – Basile Starynkevitch May 30 '17 at 14:22
  • @alinsoar GMPlib actually also has a pretty nice documentation of its [algorithms](https://gmplib.org/manual/Algorithms.html) (also in the corresponding chapter of the pdf documentation). I don't know if implementing a BN library in an interpreted language is a good idea though. As Basile suggested, I would also try to use GMPlib (many languages have C-bindings, I guess). – chtz May 30 '17 at 15:14
  • @chtz: GMPlib is rumored not to handle out-of-memory conditions nicely (it aborts in that case). For some interpreters that behavior could be wrong. – Basile Starynkevitch May 30 '17 at 15:17
  • 1
    @chtz: "I don't know if implementing a BN library in an interpreted language is a good idea though." – This is totally off-topic here, but well, once in a while, we can have some off-topic fun. Modern high-performance dynamic optimizers are *extremely* non-intuitive. For example, there's a Ruby gem for manipulating PSDs (Photoshop files). It is implemented as a highly-optimized C extension (for Ruby implementations that support them) and a "fallback" implementation in pure, idiomatic Ruby (for implementations that don't support C extensions such as e.g. Topaz which is an AOT-compiler to … – Jörg W Mittag May 30 '17 at 20:24
  • 1
    … ECMAScript). And this pure-Ruby fallback implementation does pretty much everything you could conceivably imagine to resist being optimized. E.g. dynamically calling methods from method names stored in runtime strings and such stuff. Looping over an array of method names and reflective calling them. So, you might think that this cannot possibly be fast. But, here's the kicker: Precisely *because* it is written in Ruby, the TruffleRuby optimizing interpreter can, through multiple rounds of inlining, specialization, and optimization compile the entire test program down to a constant! It is … – Jörg W Mittag May 30 '17 at 20:27
  • 1
    … significantly faster than using the C extension, since crossing the Ruby-C-barrier is costly. Whereas TruffleRuby happily inlines back and forth between the test program (written in Ruby), the PSD library (written in Ruby) and the Ruby core datastructures (written in Ruby), YARV has nothing to work with: YARV only knows how to run Ruby fast, but the Ruby portion of the code is trivial. YARV has no insight into the Ruby core datastructures (written in C) or the PSD library (written in C). GCC cannot optimize the Ruby code because it doesn't know about it. And every call from Ruby into the … – Jörg W Mittag May 30 '17 at 20:29
  • 3
    … PSD library is a costly crossing of the VM boundary. In the end, eliminating those cross-language calls buys you back everything and more than you lost from writing the compute-intensive code in Ruby. So, I don't believe that "implementing a BN library in an interpreted language" is *that* outrageous. The ability to inline the compute-intensive code into the larger application is not to be underestimated. – Jörg W Mittag May 30 '17 at 20:32
  • (Off-topic rant over.) :-D – Jörg W Mittag May 30 '17 at 20:32
  • The book "BigNum Math" by Tom St. Denis has a nice discussion of the algorithms, together with C implementations. – John Coleman May 30 '17 at 20:35
  • 1
    _Don't try to code your own bignum library._ I don't agree with this statement, same as I don't agree wit _Don't lay out your own crypto._. Please, try, but *don't expect it to be safe, performant and production ready* on first attempt. If no one tries, then we won't have any progress. And writing such code is _IMHO_ best way to learn them. – smbear Jun 01 '17 at 07:49

You could use a double, especially if your platform uses IEEE754.

Such a double gives you 53 bits of precision, which means every integer up to 2^53 is represented exactly. That's good enough here: even the n = 15 sum, 1401602636313, is far below 2^53.

If your platform doesn't use IEEE754 then consult the documentation on the floating point scheme adopted. It might be adequate.

Bathsheba
  • 7
    Note that unlike a BigInt solution, this will lose precision beyond a certain point. – BallpointBen May 30 '17 at 19:15
  • 4
    Indeed. After the 53rd power of 2 for IEEE754. But so will the built in integral types. So you need to pick the correct type for your job. – Bathsheba May 30 '17 at 19:34

A simple approach when you're just over the limit of the largest integer type is to do the computation twice: once modulo 10^n for a suitable n, and once in floating point with everything divided by 10^r. The modular result gives you the last n digits of the answer exactly, while the floating-point result gives you the leading digits. The last few digits of the floating-point result are inaccurate due to roundoff errors, so you should choose r a bit smaller than n so the two parts overlap. In this case taking n = 9 and r = 5 will work well.

Count Iblis