In C and C++, the size of the built-in integer types is implementation-dependent. But is there any predefined intended meaning for them, such that int should represent the machine word size, etc.?
-
You should check this answer: http://stackoverflow.com/a/589684/1225541 – alestanis Oct 22 '12 at 15:49
-
I'm not following your question. Are you asking if the size of an int is in any way tied to the hardware it's running on? – Mike Oct 22 '12 at 15:49
-
AFAIK there is no explicit intent expressed anywhere, but I could be wrong. However, the language is meant to be implementable and performant, so the definitions are at least not explicitly tailored against the reality of existing architectures. – pmr Oct 22 '12 at 15:52
-
@alestanis It is not really the same question. I was asking about semantic meaning. – user877329 Oct 22 '12 at 16:00
-
@user877329 I didn't link to the question but to the answer. It states that the meaningful constant is the number of bits in a byte, which depends on your system. – alestanis Oct 22 '12 at 16:03
-
possible duplicate of [64bit Visual Studio](http://stackoverflow.com/questions/2047451/64bit-visual-studio) – Musa Oct 23 '12 at 07:58
3 Answers
Historically, `int` was supposed to mean the most "natural" type for an integer on the machine hardware; obviously, "natural" is somewhat subjective, but in the past it was usually pretty obvious, and there weren't that many integral types available anyway, so making `int` the same size as a `long` or a `short` was the normal course of things.

For various reasons, most 64-bit platforms make `int` 32 bits. One could easily argue that this isn't the most "natural" length, but there was a desire that 32-bit integers be the default, and `int` is clearly the default integral type. Whether it is the most natural size for the architecture becomes secondary to whether it is the size wanted as a default.
With regards to word size: historically, this was the most natural, but in many ways it's not clear what is meant by "word size" on a modern machine: the largest size you can do arithmetic on? The size of bus transfers to and from memory? Traditionally, "word size" has been used to mean either the width of the internal registers (when the machine had them) or the size of a basic bus transfer. (The 8088 was usually referred to as an 8-bit machine, although it had 16-bit registers.) I wouldn't put too much meaning in it today.

There's some wording on that, but it's not very rigid:
Objects declared as characters (char) shall be large enough to store any member of the implementation’s basic character set.
There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. (...) Plain ints have the natural size suggested by the architecture of the execution environment, the other signed integer types are provided to meet special needs.
No strict recommendations about float sizes either:
There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. (...) The value representation of floating-point types is implementation-defined.
-
This does not appear to be text quoted from the C standard. Can you give a citation? Would it not be preferable to cite the standard? – Eric Postpischil Oct 22 '12 at 15:56
-
It might be nice if plain ints **did** have natural size, but there are varying conventions even for that. An int in MSVC under Windows on x86_64 is 32 bits, while an int in gcc under Linux on x86_64 is 64 bits. See: http://en.wikipedia.org/wiki/LLP64#64-bit_data_models – dajames Oct 22 '12 at 15:58
-
@dajames are you sure? IIRC int is 32 bits on MinGW GCC on Windows, strange if Linux GCC is different – Kos Oct 22 '12 at 16:01
-
@kos So int is recommended for a for loop and char for characters. But for things like array length, there is nothing other than size_t (#include ...) – user877329 Oct 22 '12 at 16:01
-
@kos I believe so yes ... Note that I'm talking about 64-bit platforms here (I haven't tried 64-bit mingw, and it may be different because IIRC it uses MSVC's runtimes). Follow the link to Wikipedia that I posted earlier for some discussion. – dajames Oct 22 '12 at 16:05
-
The standard says that the results of `sizeof` will be `size_t`, to ensure that they will fit, but anytime you're doing any arithmetic on an integral value (including calculating indexes into an array), the type normally used is `int`. – James Kanze Oct 22 '12 at 16:07
-
@dajames MinGW64 has sizeof(long)==sizeof(int), sizeof(size_t)==sizeof(void*) – user877329 Oct 22 '12 at 16:07
-
@dajames That's not what the article you cite says (nor does it correspond to my experience). Most 64 bit Unixes use I32LP64, with 32 bit `int` and 64 bit `long`. This is the de facto standard for 64 bit machines. – James Kanze Oct 22 '12 at 16:10
-
@James Kanze (&@kos) Sorry ... My mistake. I'm talking about the wrong thing. It is sizeof(long) that is 8 on gcc/linux but 4 on msvc/Win64. I imagine minGW follows the Windows convention for compatibility with system DLLs. See http://stackoverflow.com/questions/7607502/sizeoflong-in-64-bit-c – dajames Oct 22 '12 at 16:11
-
@dajames It's possible. I think gcc/g++ actually has options to control this, so it would only be the defaults which depend on the system. (On the other hand, if you use something other than the defaults, you can't link against the system, or against any library which links against the system. Which makes them pretty useless, as options go.) – James Kanze Oct 22 '12 at 16:16
C, unlike Java, was designed as a platform enabler rather than a stand-alone platform. Cross-platform compatibility took a much lower priority than working with data-type sizes that were optimal for the given platform. The exact sizes of the integer types are therefore not fixed by the C standard (which guarantees only minimum ranges) and are platform-specific.

-
Java defines integer sizes at the bit level, which is a lower level of abstraction than saying "ints have the natural size suggested by the architecture of the execution environment". Isn't that strange? – user877329 Oct 22 '12 at 16:14
-
@user877329 What's strange about it? Java made a conscious decision to not allow implementations on certain more or less exotic hardware, in return for (originally) a guarantee that the same code would give the same results on any machine. (This guarantee has since been broken, since it caused the VM to run excessively slowly on Intel architectures.) – James Kanze Oct 22 '12 at 16:19
-
@JamesKanze Yes, but when programming in a high-level language I prefer to choose a type by purpose, not by size. It is better to say that a smaller pointer range implies that the program (under a JVM or not) simply cannot handle larger amounts of data. But sometimes, when reading binary data, it is necessary to choose by size. – user877329 Oct 22 '12 at 16:25
-
@user877329 Sometimes, when dealing with raw data, it may be necessary to use `size_t`. Otherwise, when working at a higher level, the type is `int`, except in cases where you've actually gone to the effort of implementing subrange types that work. – James Kanze Oct 22 '12 at 16:35