
I've recently started learning C. I have read that the integer range with gcc is -2147483648 to +2147483647.

Using GCC on CentOS.

I understood that the 2^32 values are split between negative numbers, positive numbers, and zero.

  • Is this a restriction imposed by the compiler? Are there any advantages to this? Or is my compiler 32-bit? (As far as I know, my computer is 64-bit.)

  • If one needs to go above this limit, what should one do?

Thanks in Advance

aravind ramesh
  • If you need bigger values, use `long` instead of `int`. – McLovin Feb 20 '14 at 02:37
  • @user long is 32 bit where I am – David Heffernan Feb 20 '14 at 02:44
  • Get used to using and knowing the sizes of these: `std::uint8_t` `std::uint16_t`, `std::uint32_t`, `std::uint64_t` and their signed counterparts: `std::int8_t`, `std::int16_t`, `std::int32_t`, `std::int64_t`.. Finally get used to knowing and using `std::size_t`. – Brandon Feb 20 '14 at 02:45
  • @CantChooseUsernames What you've posted there looks like C++... – Dennis Meng Feb 20 '14 at 05:09
  • @DennisMeng Sorry. Forgot this is the C-section. Still, these types should exist in C as well. Just remove the `std::` and use `#include <stdint.h>`: http://ideone.com/j6WwbP – Brandon Feb 20 '14 at 05:15
  • @CantChooseUsernames May as well add that the `#include <stdint.h>` is important (for OP's benefit; you already know what I'm talking about) – Dennis Meng Feb 20 '14 at 05:16

2 Answers


Sizing of integers is platform-specific. The C standard does not say exactly how many bits an int must have; all it says is that an int must be at least 16 bits long. This is not a restriction on the compiler - rather, it is one of the many compiler properties that define your platform.

The bit-ness of your platform does not have a direct impact on the size of int. You can often use the same compiler to generate 32-bit and 64-bit code (with gcc, use the -m32 or -m64 flags), but the sizes of the built-in types will not necessarily change.

If you need a bigger range for your integers, you can use long long, which is guaranteed to have at least 64 bits. In addition, gcc supports the 128-bit integer type __int128. If you need even more range, you can use an arbitrary-precision math package.

Sergey Kalinichenko

It has to do with binary numbers and how many bits an integer data type has. Typically an integer is composed of 4 bytes = 32 bits, so it can store 2^32 distinct values: [0, 2^32 - 1] for unsigned integers, or [-2147483648, 2147483647] for signed integers.

As far as I know, the behavior when a signed integer goes out of range is undefined. This is known as integer overflow. (Unsigned integers, by contrast, wrap around modulo 2^N.)

And here is a must read: http://en.wikipedia.org/wiki/Binary_number

rendon