4

How is the size of int decided?

Is it true that the size of int depends on the processor? For a 32-bit machine it will be 32 bits, and for a 16-bit machine it's 16.

On my machine it shows as 32 bits, although the machine has a 64-bit processor and 64-bit Ubuntu installed.

Alois Mahdal
Rohit

6 Answers

9

It depends on the implementation. The only thing the C standard guarantees is that

sizeof(char) == 1

and

sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

and also some minimum representable ranges for the types, which imply that char is at least 8 bits long, int is at least 16 bits, etc.

So it must be decided by the implementation (compiler, OS, ...) and be documented.
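
For example, you can simply print what your particular implementation chose (a minimal sketch; the exact numbers are implementation-defined and will differ between compilers and ABIs):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* All of these are implementation-defined; the standard only
           guarantees the minimums (CHAR_BIT >= 8, int at least 16 bits, ...). */
        printf("CHAR_BIT          = %d\n", CHAR_BIT);
        printf("sizeof(char)      = %zu\n", sizeof(char));
        printf("sizeof(short)     = %zu\n", sizeof(short));
        printf("sizeof(int)       = %zu\n", sizeof(int));
        printf("sizeof(long)      = %zu\n", sizeof(long));
        printf("sizeof(long long) = %zu\n", sizeof(long long));
        return 0;
    }

On a typical 64-bit Linux/GCC setup this prints 8, 1, 2, 4, 8, 8, but nothing beyond the minimums above is required.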

  • 1
    I came across a compiler with a `short long`. `int` was 16 bits, and `short long` was 24. – detly Oct 05 '12 at 08:36
  • 2
    @detly The creators of that compiler certainly had a sense of humour. –  Oct 05 '12 at 08:47
  • Yeah, the size of `double` was given as "24 or 32". This was for a microprocessor. – detly Oct 05 '12 at 09:45
  • @detly you mean microcontroller. –  Oct 05 '12 at 09:48
  • 1
    These guarantees are incomplete. Also must be that `CHAR_BIT` is at least 8, `sizeof(int) * CHAR_BIT` must at least be 16, etc. [See here for more](http://stackoverflow.com/a/271132/87234). – GManNickG Feb 07 '13 at 06:20
  • @GManNickG Correct, fixed. –  Feb 07 '13 at 06:23
  • 1
    Strictly speaking, the standard imposes requirements on the *ranges* of the integer types, not their sizes. In the presence of padding bits, it's possible to have `sizeof (long) < sizeof (int)`, if `int` has more padding bits than `long`. (Such an implementation is unlikely.) – Keith Thompson Mar 21 '16 at 19:21
  • Also, with the current standards, `long long` is guaranteed to be at least 64 bits. – cmaster - reinstate monica Apr 25 '18 at 22:19
3

It depends on the compiler.

For example, try an old Turbo C compiler and it would give a size of 16 bits for an int, because the word size (the size the processor could handle with the least effort) was 16 bits when the compiler was written.

Mr Fooz
loxxy
  • To be (extremely) pedantic; it would give `2` for `sizeof int`, and there are `CHAR_BIT` bits in a byte. `sizeof` returns the number of bytes, and there need not be 8 bits in a byte. – Ed S. Oct 05 '12 at 04:58
2

Making int as wide as possible is not the best choice. (The choice is made by the ABI designers.)

A 64bit architecture like x86-64 can efficiently operate on int64_t, so it's natural for long to be 64 bits. (Microsoft kept long as 32bit in their x86-64 ABI, for various portability reasons that make sense given the existing codebases and APIs. This is basically irrelevant because portable code that actually cares about type sizes should be using int32_t and int64_t instead of making assumptions about int and long.)

Having int be int32_t actually makes for better, more efficient code in many cases. An array of int uses only 4B per element, so it has only half the cache footprint of an array of int64_t. Also, specific to x86-64, 32bit operand-size is the default, so 64bit instructions need an extra code byte for a REX prefix. So code density is better with 32bit (or 8bit) integers than with 16 or 64bit. (See the wiki for links to docs / guides / learning resources.)

If a program requires 64bit integer types for correct operation, it won't use int. (Storing a pointer in an int instead of an intptr_t is a bug, and we shouldn't make the ABI worse to accommodate broken code like that.) A programmer writing int probably expected a 32bit type, since most platforms work that way. (The standard of course only guarantees 16bits).

Since there's no expectation that int will be 64bit in general (e.g. on 32bit platforms), and making it 64bit will make some programs slower (and almost no programs faster), int is 32bit in most 64bit ABIs.

Also, there needs to be a name for a 32bit integer type, for int32_t to be a typedef for.
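
For code that genuinely depends on a particular width, the fixed-width typedefs make that explicit. A small sketch (assuming <stdint.h> is available, as it is on any mainstream hosted implementation):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a = 42;             /* exactly 32 bits, whatever int happens to be */
        int64_t b = 42;             /* exactly 64 bits, whatever long happens to be */
        intptr_t p = (intptr_t)&a;  /* wide enough to round-trip a pointer */

        printf("sizeof(int32_t)  = %zu\n", sizeof a);
        printf("sizeof(int64_t)  = %zu\n", sizeof b);
        printf("sizeof(intptr_t) = %zu\n", sizeof p);
        return 0;
    }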

Peter Cordes
0

It depends on the compiler. If you use Turbo C, the integer size is 2 bytes; if you use the GNU GCC compiler (on a typical 32-bit or 64-bit platform), the integer size is 4 bytes. It depends only on the implementation of the C compiler.

Manikandan Rajendran
0

The size of an integer basically depends upon the architecture of your system. Generally, if you have a 16-bit machine, your compiler will typically use a 2-byte int. If your system is 32-bit, the compiler will typically use 4 bytes for an integer.

In more detail:

  • The concept of the data bus comes into the picture here: 16-bit, 32-bit, etc. refer to the width of the data bus in your system.
  • The data bus width matters when choosing the size of an integer because the purpose of the data bus is to deliver data to the processor. The maximum it can deliver in a single fetch is important, and compilers have traditionally preferred that size for int.
  • Based on this data bus width, compilers have historically made the size of int match the width of the data bus.
8086 -> 16-bit -> DOS -> Turbo C -> size of int: 2 bytes
80386 -> 32-bit -> Windows/Linux -> GCC -> size of int: 4 bytes
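
A quick way to check what your own 64-bit system actually chose (a sketch; this shows the typical LP64 result for GCC on 64-bit Linux, whereas 64-bit Windows uses LLP64 and keeps long at 4 bytes):

    #include <stdio.h>

    int main(void)
    {
        /* On an LP64 ABI (e.g. GCC on 64-bit Linux): int is 4 bytes,
           while long and pointers are 8 bytes. */
        printf("sizeof(int)    = %zu\n", sizeof(int));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }
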
pradipta
  • Thanks for the info. What about 64-bit Linux? On 64-bit systems the data bus is 64 bits wide, yet there the compiler also shows 4 bytes. – Rohit Oct 05 '12 at 08:37
  • The idea is that on a 64-bit machine the compiler can support data up to 8 bytes wide; that doesn't mean it can't use 4 bytes. Simply speaking, lower systems are compatible with higher systems, so the system is 64-bit but the compiler still uses a 32-bit int, which is why it shows 4 bytes. Check the latest compiler version. – pradipta Oct 05 '12 at 08:44
-1

Yes, the size of int depends on the compiler. For a 16-bit int, the range is -32768 to 32767. With a 32-bit or 64-bit compiler the range increases accordingly.
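
The actual range your compiler uses is available from <limits.h> (a small sketch; beyond the guaranteed minimums the values are implementation-defined):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* INT_MIN/INT_MAX reflect whatever width the implementation chose;
           the standard only requires at least -32767..32767 for int. */
        printf("int:  %d .. %d\n", INT_MIN, INT_MAX);
        printf("long: %ld .. %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }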

Peter Cordes