Here's what you're missing: when a value of a signed integer primitive type (short, int, or long) is incremented past the largest value that type can represent, the carry spills into the sign bit, the leftmost bit, which is only supposed to indicate the sign of the number. Java uses two's complement, so a 1 in the sign bit means a negative value. This phenomenon is called integer overflow.
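You can watch the wraparound happen at full 64-bit width (both lines below are standard Java behavior, nothing hypothetical):

System.out.println(Long.MAX_VALUE);      // 9223372036854775807
System.out.println(Long.MAX_VALUE + 1);  // -9223372036854775808, i.e. Long.MIN_VALUE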
Consider a fictional 3-bit signed primitive data type (for comparison, a Java long is 64 bits). It can represent numbers between -4 and 3.
3, the biggest positive value a 3-bit signed number can represent, looks like this: 011
Add 1 to 011 and you get: 100 (the carry spills out of the value bits and into the sign bit)
In two's complement, 100 is decimal -4
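If you want to poke at the 3-bit example yourself, here's a minimal sketch that simulates it in Java; asThreeBit is a made-up helper (there is no real 3-bit type), and it works by sign-extending the low 3 bits of a long:

// hypothetical helper: interpret the low 3 bits of x as a 3-bit two's-complement number
static long asThreeBit(long x) {
    return (x << 61) >> 61;  // push bit 2 up to the sign bit, then arithmetic-shift back down
}

System.out.println(asThreeBit(0b011));      // 3, the biggest 3-bit value
System.out.println(asThreeBit(0b011 + 1));  // -4: the +1 carried into the sign bit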
However, when you get to the capacity of a long, there are far too many bits to count by hand, so here's a quick way to find the largest term of an increasing sequence (in this case, factorial) that still fits in a long:
// naive recursive factorial; wraps around silently once the true value exceeds Long.MAX_VALUE
static long factorial(long n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}
long n = 1;
while (factorial(n) > 0) {
    System.out.println("factorial of " + n++ + " can fit in a long!");
}
This looks like it ought to be an infinite loop, but it isn't: eventually factorial(n) overflows and returns a negative value (here at n = 21, since 21! exceeds Long.MAX_VALUE), and the loop terminates.
This will give you the following output:
factorial of 1 can fit in a long!
factorial of 2 can fit in a long!
factorial of 3 can fit in a long!
factorial of 4 can fit in a long!
factorial of 5 can fit in a long!
factorial of 6 can fit in a long!
factorial of 7 can fit in a long!
factorial of 8 can fit in a long!
factorial of 9 can fit in a long!
factorial of 10 can fit in a long!
factorial of 11 can fit in a long!
factorial of 12 can fit in a long!
factorial of 13 can fit in a long!
factorial of 14 can fit in a long!
factorial of 15 can fit in a long!
factorial of 16 can fit in a long!
factorial of 17 can fit in a long!
factorial of 18 can fit in a long!
factorial of 19 can fit in a long!
factorial of 20 can fit in a long!
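A word of caution: stopping when the result goes negative works out here, but an overflowed product isn't guaranteed to come out negative in general. Since Java 8 you can detect overflow exactly with Math.multiplyExact, which throws ArithmeticException as soon as a product no longer fits in a long; here's a sketch of the same search using it (checkedFactorial is my name for the helper, not a library method):

// overflow-checked factorial: Math.multiplyExact throws ArithmeticException on overflow
static long checkedFactorial(long n) {
    long result = 1;
    for (long i = 2; i <= n; i++) {
        result = Math.multiplyExact(result, i);
    }
    return result;
}

long n = 1;
try {
    while (true) {
        checkedFactorial(n);
        System.out.println("factorial of " + n++ + " can fit in a long!");
    }
} catch (ArithmeticException e) {
    System.out.println("factorial of " + n + " does not fit in a long");  // n is 21 here
}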