
So I have written a program in C++ which was supposed to print factorials of numbers starting from 1, endlessly. It printed the first several numbers as expected, with correct factorial values, but after that it ended up producing negative values and the program terminated because of the loop condition. I am not sure how a correctly working factorial program ends up producing negative values.

#include <iostream>
using namespace std;
int main(){
    int a = 1;
    int c = 1;
    while(a>0){
        cout<<(a*c)<<endl;
       
        a = a*c;
        c = c+1;
    }
    return 0;
}

I expected endless factorial results, from 1 onward forever.

Result I got:

1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
1932053504
1278945280
2004310016
2004189184 // number decreased instead of increasing
-288522240 // now it is negative
  • It's called *integer overflow*, there's a limit to how big an `int` can be and things get unpredictable after that. – john Jun 23 '23 at 06:10
  • A computer has finite memory, a large number needs storage, so your expectation of increasing numbers 'forever' is never going to happen, in any computer language. At some point you will run out of memory. – john Jun 23 '23 at 06:13
  • You could *delay*, but not get rid of, the problem with `unsigned long long`. You won't get negative values any more, but at some point the next value will be smaller than the previous one – which then again indicates the overflow having occurred (see the sketch after these comments). Side note: for signed integers, overflow is *undefined behaviour*; for unsigned it is not. – Aconcagua Jun 23 '23 at 06:24
  • Side note: About [`using namespace std`](https://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice)... – Aconcagua Jun 23 '23 at 06:39
  • Have a look at [std::numeric_limits](https://en.cppreference.com/w/cpp/types/numeric_limits), experiment a bit with various types, and see what maximum and minimum values you can store in each type. – Pepijn Kramer Jun 23 '23 at 06:48
  • @Aconcagua About INT_MAX: that, in my opinion, is still the "C" way of doing things. See my comment about the numeric_limits header. – Pepijn Kramer Jun 23 '23 at 06:49
  • @geeky01adarsh Better a `std::vector`, to be able to profit from contiguous memory (quicker access), or maybe calculate as `std::string` directly, not requiring conversion on printing any more. Best: use some already completed big-integer library, there are plenty out there, no need to re-invent the wheel… – Aconcagua Jun 23 '23 at 06:58
  • @Aconcagua: using `printf` is error-prone: you have to use the correct `'%'` specifier depending on the type (which has to be known, as it comes from the library and not directly from the user)... – Jarod42 Jun 23 '23 at 07:08
  • @Jarod42 Agreed. Sometimes it's just simpler to apply appropriate formatting to some text, which can be a pain with C++ streams. Again agreed: not here, though... – Aconcagua Jun 23 '23 at 07:11
  • Thanks, @Aconcagua, I was unaware of big-integer libraries in C++. However, I would like to correct you over the use of `std::vector`: internally, a vector creates a new array each time you try to increase the size, and other operations are also costlier there in comparison to normal arrays. Find out more here: https://geeky01adarsh.hashnode.dev/why-cpp-vectors-fails – geeky01adarsh Jun 23 '23 at 07:11
  • @Aconcagua I make it a habit to write the exact same code in my small snippets that I would also use in production code ;) As for printf, C++20 has std::format and C++23 std::print. Times change and I think it is important to keep up with the latest developments ;) – Pepijn Kramer Jun 23 '23 at 07:13
  • @geeky01adarsh You can also use std::uint64_t, see [integer types](https://en.cppreference.com/w/cpp/types/integer). – Pepijn Kramer Jun 23 '23 at 07:13
  • Try `std::cout << INT_MAX << std::endl;` – it will print the maximum number representable by an `int` on your system, typically 2,147,483,647. A more C++-like version – even robust against changing the type of the variable(!): `std::cout << std::numeric_limits<decltype(a)>::max() << std::endl;` – Aconcagua Jun 23 '23 at 07:14
  • Sounds like you need a bignum library, like [gmp](https://gmplib.org/). – Jesper Juhl Jun 23 '23 at 07:16
  • @geeky01adarsh: I won't trust your link... An alternative using plain arrays won't solve the dynamic-size problem, and `std::deque` is built on the criticized `std::vector`... – Jarod42 Jun 23 '23 at 07:20
  • @geeky01adarsh And do you indeed think a `std::list` is the better choice? You have memory allocations for *every* element you insert. Its main advantage is that iterators (or pointers) into it are not invalidated by modifications, even somewhere in the middle (unless erasing the element exactly pointed at, of course); this can be valuable in some scenarios. `std::deque` indeed is a good compromise, but it does not (cannot) guarantee contiguous memory. This can be especially relevant if you need to deal with some C libraries. Just iterating over the elements is faster, too, with `std::vector` ... – Aconcagua Jun 23 '23 at 07:28
  • ... as memory is contiguous. In the end it is as always: Know your requirements and choose the container/data structure most appropriate to solve them, and quite often this can be `std::vector` as well. – Aconcagua Jun 23 '23 at 07:29
  • @Jarod42 I am not suggesting an array for this particular question; an array is recommended when you know the size is not going to change. Using `std::vector` may give poor results where performance is necessary, like in competitive programming; I can provide you with questions on LeetCode to verify this claim. – geeky01adarsh Jun 23 '23 at 07:35
  • @Aconcagua got your point, and thanks for the detailed explanation. What I was trying to conclude in that article was that `std::vector` is slower than a built-in array when you have a *fixed size*. I am not saying that using it in this particular scenario is better. – geeky01adarsh Jun 23 '23 at 07:42
  • @geeky01adarsh Well, that's what I said already: Know your requirements! Fixed size is fine -> use `std::array`, otherwise – well, not *automatically* `std::vector`, but not `std::deque` either – once more: it depends on the requirements, consider the C-library example again (requirement for contiguous memory), and `std::deque` *can* result in poorer performance as well (on simply iterating over them without modifying them). – Aconcagua Jun 23 '23 at 07:59
  • @geeky01adarsh Stumbling over the comments after two days again: *'creates a new array each time you try to increase the size'* – that's actually not correct. You can give `std::vector` in advance (before the first allocation occurs) an estimate of how many elements *might* be required (see [`reserve`](https://en.cppreference.com/w/cpp/container/vector/reserve)), and on actually running out of memory it reserves a chunk (much) larger than required to store one new element, typically by doubling the available storage. So if correctly `reserve`ing, re-allocations usually get rather rare... – Aconcagua Jun 26 '23 at 06:25
  • You could now argue about potentially wasted memory; typically, though (unless you maintain a huge number of independent vectors), this is not really an issue on modern hardware – apart from on MCUs, but there you'd usually avoid dynamic memory allocation entirely anyway, so then *any* standard container apart from `std::array` is out… – Aconcagua Jun 26 '23 at 06:27
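
A minimal sketch of the overflow-detection idea from the comments (not code from the thread; it assumes `unsigned long long`). Unsigned overflow wraps around with well-defined behaviour, so it can be detected after the fact; the divide-back check below catches the very first wrap, while merely comparing against the previous value can fire a few steps late for multiplication.

#include <iostream>
#include <limits>

int main() {
    std::cout << "limit: "
              << std::numeric_limits<unsigned long long>::max() << '\n';

    unsigned long long fact = 1;
    for (unsigned long long n = 2; ; ++n) {
        unsigned long long next = fact * n; // wraps modulo 2^64 on overflow (well defined)
        if (next / n != fact) {             // divide-back check: the product wrapped
            std::cout << n << "! no longer fits into unsigned long long\n";
            break;
        }
        fact = next;
        std::cout << n << "! = " << fact << '\n';
    }
}

With 64 bits this reports the wrap at 21!: 20! = 2432902008176640000 still fits, 21! does not.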

2 Answers


`int` is a signed type. Your factorials get big enough to overflow it (exceed the largest value an `int` can store). When they do, you start to see answers that don't seem to make sense.

This can be readily seen with a simple demo program:

$ cat test.cpp
#include <iostream>
#include <iomanip>
#include <climits>

int factorial(int n) {
    int result = 1;
    for (; n > 1; n--) result *= n;
    return result;
}

int main() {
    for (int i = 1; i < 20; i++) {
        std::cout << std::setw(10) << std::right
                  << factorial(i) << std::endl
                  << INT_MAX << " <- max " << std::endl;
    }
}
$ g++ test.cpp
$ ./a.out
         1
2147483647 <- max
         2
2147483647 <- max
         6
2147483647 <- max
        24
2147483647 <- max
       120
2147483647 <- max
       720
2147483647 <- max
      5040
2147483647 <- max
     40320
2147483647 <- max
    362880
2147483647 <- max
   3628800
2147483647 <- max
  39916800
2147483647 <- max
 479001600
2147483647 <- max
1932053504
2147483647 <- max
1278945280
2147483647 <- max
2004310016
2147483647 <- max
2004189184
2147483647 <- max
-288522240
2147483647 <- max
-898433024
2147483647 <- max
 109641728
2147483647 <- max

If you want to see larger factorial results, you need to use a larger integer type, ideally an unsigned one.

If we do this, we buy ourselves some additional space to generate larger factorials.

$ cat test.cpp
#include <iostream>
#include <iomanip>
#include <climits>

unsigned long long int factorial(int n) {
    unsigned long long int result = 1;
    for (; n > 1; n--) result *= n;
    return result;
}

int main() {
    for (int i = 1; i < 20; i++) {
        std::cout << std::setw(19) << std::right
                  << factorial(i) << std::endl
                  << ULLONG_MAX << " <- max " << std::endl;
    }
}
$ g++ test.cpp
$ ./a.out
                  1
18446744073709551615 <- max 
                  2
18446744073709551615 <- max 
                  6
18446744073709551615 <- max 
                 24
18446744073709551615 <- max 
                120
18446744073709551615 <- max 
                720
18446744073709551615 <- max 
               5040
18446744073709551615 <- max 
              40320
18446744073709551615 <- max 
             362880
18446744073709551615 <- max 
            3628800
18446744073709551615 <- max 
           39916800
18446744073709551615 <- max 
          479001600
18446744073709551615 <- max 
         6227020800
18446744073709551615 <- max 
        87178291200
18446744073709551615 <- max 
      1307674368000
18446744073709551615 <- max 
     20922789888000
18446744073709551615 <- max 
    355687428096000
18446744073709551615 <- max 
   6402373705728000
18446744073709551615 <- max 
 121645100408832000
18446744073709551615 <- max 
Chris
  • Just as a side note: `std::numeric_limits<decltype(factorial(i))>::max()` would have always given the right limit, no matter what the function actually returns ;) – Aconcagua Jun 23 '23 at 06:45
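
A sketch of that suggestion (hypothetical snippet; `decltype` keeps the printed limit in sync with whatever `factorial` returns):

#include <iostream>
#include <limits>

unsigned long long factorial(int n) {
    unsigned long long result = 1;
    for (; n > 1; n--) result *= n;
    return result;
}

int main() {
    // If factorial()'s return type is ever changed, the limit follows automatically:
    std::cout << std::numeric_limits<decltype(factorial(1))>::max() << '\n';
}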

Standard C++ integer types have a defined value range, in order to stay within the number of bits given by the architecture (32-bit, 64-bit, etc.). You can check them the old way, by looking up the min and max values for each type (they can depend on the architecture), or use something more modern like the `std::numeric_limits` class template.
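
For example, a minimal sketch (the printed values are typical for platforms with 32-bit `int` and 64-bit `unsigned long long`):

#include <iostream>
#include <limits>

int main() {
    // The modern alternative to macros such as INT_MAX from <climits>:
    std::cout << std::numeric_limits<int>::min() << '\n';                // -2147483648
    std::cout << std::numeric_limits<int>::max() << '\n';                // 2147483647
    std::cout << std::numeric_limits<unsigned long long>::max() << '\n'; // 18446744073709551615
}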

If you calculate a number bigger (or smaller) than that range, you get an integer overflow, because of how number operations are implemented in the processor. You can get undefined behavior, or funny side effects. One type of typical behavior (thanks @Aconcagua) can be a crash of your application.

If you need to handle numbers bigger than this, you should rely on a dedicated library, like boost::multiprecision, which has classes that represent numbers of arbitrary size. Take care: obviously, since the numbers are bigger, they are handled in a special way defined by the library's logic, and arithmetic on them is usually slower than on built-in types. Whether that is an issue depends on you and on the performance requirements of your application.
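
For illustration, a minimal sketch using Boost's `cpp_int` (assuming Boost.Multiprecision is available; the bound of 30 is only there to keep the demo finite, the type itself is limited only by memory):

#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

int main() {
    boost::multiprecision::cpp_int factorial = 1; // arbitrary precision: grows instead of wrapping
    for (unsigned n = 2; n <= 30; ++n) {
        factorial *= n;
        std::cout << n << "! = " << factorial << '\n';
    }
}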

Jepessen
  • I wouldn't describe a crash as '*normal*' behaviour, if at all as *typical*, though I'd estimate that there are at least as many cases of programmes just running mad without crashing, so maybe *'one type of typical behaviour'* (or is that last one already splitting hairs?). Is a crash a behaviour at all? – Aconcagua Jun 23 '23 at 06:50