
I have read that the signed int range is [−32767, +32767], but I can write, for example:

#include <stdio.h>

int main(void)
{
    int a = 70000;
    int b = 71000;
    int c = a + b;

    printf("%i", c);
    return 0;
}

And the output is 141000 (correct). Shouldn't the compiler or debugger tell me "this operation is out of range" or something similar?

I suppose this has to do with my ignorance of the basics of C programming, but none of the books I'm currently reading say anything about this "issue".

EDIT: 2147483647 seems to be the upper limit, thank you. If a sum exceeds that number, the result is negative, which is expected, BUT if it is a subtraction, for example 2147483649 − 2147483647 = 2, the result is still good. I mean, why is the value 2147483649 correctly held for that subtraction purpose (or at least it seems to be)?

ikeeki

7 Answers


The range [−32767, +32767] is the required minimum range. An implementation is allowed to provide a larger range.

Bo Persson
  • I think it's `[-32768, 32767]`. – erip Jan 15 '16 at 13:24
  • @erip - No, that's a common range for 16-bit two's complement systems. But the absolute minimum is slightly smaller to also allow sign-magnitude and one's complement. – Bo Persson Jan 15 '16 at 13:26
  • Ah, interesting! TIL. Thanks for keeping me honest. +1 – erip Jan 15 '16 at 13:26
  • @erip *I think it's `[-32768, 32767]`* No. See **[5.2.4.2.1 Sizes of integer types `<limits.h>`](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf)** – Andrew Henle Jan 15 '16 at 13:27

All integer type sizes are compiler-dependent. int used to be the "native word" of the underlying hardware, which on 16-bit systems meant that int was 16 bits (which leads to the -32k to +32k range). When 32-bit systems came along, int naturally followed and became 32 bits, which can store values of roughly -2 billion to +2 billion.

However, this "native word" convention did not carry over when 64-bit systems came around; I know of no 64-bit system or compiler on which int is 64 bits.

See e.g. this reference of integer types for more information.

Some programmer dude

In C++, int is at least 16 bits wide, but typically 32 bits on modern hardware. You can print INT_MIN and INT_MAX and check for yourself.

Note that signed integer overflow is undefined behavior; you are not guaranteed to get a warning, except perhaps at high compiler warning levels or in a debug build.

TemplateRex

You have misunderstood. The standard guarantees that an int can hold at least [-32767, +32767], but it is permitted to hold more. (In particular, nearly every compiler you are likely to use allows the range [-2147483648, 2147483647].)

There is another problem. If you make the values you assign to a and b bigger, you still probably won't get any warning or error. Signed integer overflow causes "undefined behaviour", and literally anything is allowed to happen.

  • "If a sum exceed that number" ... anything can happen. See this bug report where GCC converted a loop from 0 to 64 to an infinite loop which overwrote all of memory because there was signed integer overflow within the loop. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=33498 – Martin Bonner supports Monica Jan 15 '16 at 13:50
  • You are getting (un)lucky. 2147483649 overflows, and results in an `int` with value -2147483647. Subtracting 2147483647 from that *should* result in -4294967294, but *that* underflows and you end up with +2. *HOWEVER* beware that arithmetic overflow is undefined behaviour, and the compiler may assume it doesn't happen - and all sorts of things go wrong when it does. – Martin Bonner supports Monica Jan 15 '16 at 15:11

If an int is four bytes, the unsigned maximum is 4294967295, the signed maximum 2147483647, and the signed minimum -2147483648:

unsigned int ui = ~0u;      /* all bits set: UINT_MAX */
int max = ui >> 1;          /* shift out the sign bit: INT_MAX */
int min = ~max;             /* two's complement: INT_MIN */
int size = sizeof(max);     /* typically 4 */
kometen

While the standard only guarantees that int is at least 16 bits wide, it is usually implemented as a 32-bit type.

adjan

The size of an int (and the maximum value it can hold) depends on the compiler and the computer you are using. There is no guarantee that it will be 2 bytes or 4 bytes, but there is a guaranteed minimum size for each C++ type.

You can see a list of minimum sizes for C++ types on this page: http://www.cplusplus.com/doc/tutorial/variables/