4

On a given system, does the C data type "int" always have the same number of bits as the OS (e.g., 32 bits on a 32-bit OS, 64 bits on a 64-bit OS)?

Thanks!

  • 4
    (a) how many bits does an OS “have”? (b) no; in the ABIs used by most mainstream 64-bit OSes, `int` is 32 bits. – Stephen Canon Feb 14 '14 at 17:29
  • 1
    One obvious counter-example - a 32-bit version of gcc running on 64-bit Linux... So, the answer is definitely "no". – twalberg Feb 14 '14 at 17:37
  • The OS used when a C program compiles does not need to be the same OS the program runs under (cross-compile). The c program itself does not need to have the same int size as either OS. The C program may not even have an OS when it runs. – chux - Reinstate Monica Feb 14 '14 at 17:56

4 Answers

6

No, not necessarily. Just for an obvious example, if you use a 32-bit compiler on a 64-bit OS, you'll typically have 32-bit ints.

The requirements in the C standard are fairly minimal. Beyond the minimum size requirements, there's (§6.2.5/5):

A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

If you need to be certain that your type is at least 64 bits, you can use long long.
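
For illustration, a minimal sketch (what it prints depends on your implementation; the values are only typical for an LP64 Linux system) that reads the actual range of int from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* INT_MIN/INT_MAX come from <limits.h>, as the quoted passage says */
    printf("int is %zu bits, range [%d, %d]\n",
           sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
    printf("long long is %zu bits (at least 64 by the standard)\n",
           sizeof(long long) * CHAR_BIT);
    return 0;
}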

Types like int32_t have been mentioned in other answers. Although often used for other purposes, this type is really intended for a situation where you need a type that's exactly 32 bits wide, regardless of how that may impact performance.

That means you generally want to avoid these types. If you just need to ensure that you can hold at least a 64-bit integer without overflow, either long long or int_fast64_t is a better choice (and likewise for things like 32-bit types). Right now, this is unlikely to make a big difference for a 64-bit type.

For a 32-bit type (for example) it might well make a difference though. int32_t must be exactly 32 bits wide, but in a 64-bit process on a 64-bit OS running on a 64-bit processor, it's quite likely that a 64-bit type will be faster than a 32-bit type. I've roughly tripled the speed of some legacy (but not terribly old) code that used int32_t where it wasn't really suitable. At the time (on a 32-bit compiler and OS) it didn't cause a problem, but on a 64-bit system, it imposed a fair amount of extra work, because what they really wanted was int_fast32_t--the fastest type available that supported at least 32 bits.
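
To make the difference concrete, here is a minimal sketch (assuming a typical 64-bit toolchain; the "fast" type may or may not be wider than 32 bits on your implementation) contrasting the exact-width and fast types from <stdint.h>:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t      exact32 = 0;  /* always exactly 32 bits, if it exists at all */
    int_fast32_t fast32  = 0;  /* at least 32 bits, whatever width is fastest */

    printf("int32_t      : %zu bits\n", sizeof exact32 * CHAR_BIT);
    printf("int_fast32_t : %zu bits\n", sizeof fast32 * CHAR_BIT);
    return 0;
}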

Likewise, it seems nearly inevitable that at some point in the future, we'll start to use processors "larger" than 64 bits. When we do, we'll probably hit the same situation with 64-bit types that we have right now with 32-bit types: there'll be quite a few places that people have used int64_t when they probably wanted int_fast64_t, and their code will run substantially slower than it really needs to because they've required their (for example) 128-bit processor to mask the operand(s) down to 64 bits instead of working with the size native to the processor.

Jerry Coffin
  • 476,176
  • 80
  • 629
  • 1,111
  • Does the number of bits of "int" depend entirely on the compiler? –  Feb 14 '14 at 17:30
  • @T...: Yes, at least normally (I suppose there *could* be some oddball implementation where that's not the case, but it certainly is usually). – Jerry Coffin Feb 14 '14 at 17:31
  • Thanks! In my Ubuntu, ` /usr/include/limits.h` doesn't seem to be compiler-specific. So does that mean the OS and the compiler may have different number of bits for "int"? –  Feb 14 '14 at 17:35
  • 1
    @T...: Appearances can be deceiving--that header is compiler specific. If you install it with the normal package management tools, you'll almost certainly end up installing a compiler that matches the OS though. – Jerry Coffin Feb 14 '14 at 17:43
  • Actually the C standard promises (elsewhere) that `int` is at least 16 bits wide. – Fred Foo Feb 14 '14 at 17:57
  • Another question: Is the ratio between the number of bits for "double" and that for "int" always fixed? –  Feb 14 '14 at 17:58
  • @larsmans: Have I said (or even implied) otherwise? If I have, I'd like to fix that (but I don't see such a thing). – Jerry Coffin Feb 14 '14 at 17:58
  • @T...: Do you mean for a given compiler, or in general? In general it's not. For a particular compiler it probably is, at least as long as you use the same flags. – Jerry Coffin Feb 14 '14 at 18:00
  • I mean for the same compiler and OS and computer. What do you mean by "flag"? –  Feb 14 '14 at 18:06
  • @T...: I'm not sure. It seems like one of the compilers for the DEC Alpha may have allowed you to adjust the size of `double`, but I don't remember for sure (it definitely did let you adjust the size of `long double` as either 80 or 128 bits). – Jerry Coffin Feb 14 '14 at 18:11
  • @JerryCoffin "the requirement" seems to imply this is the only thing required of `int`. – Fred Foo Feb 15 '14 at 12:51
2

No. It will depend on the OS and the architecture (i.e., what type of processor is running the OS). Common primitive types like int, unsigned, etc. are system dependent, so depending on the system you are using, int could be 16 bits, 32 bits, or some other size.

If you want to guarantee the size of a value, you need to use the fixed-width types: look into <stdint.h> and the types it provides, like int32_t, uint8_t, etc.
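
A minimal sketch, assuming an implementation that actually provides the optional exact-width types (virtually every mainstream desktop and server compiler does):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t i = -42;   /* exactly 32 bits, two's complement, no padding bits */
    uint8_t b = 0xFF;  /* exactly 8 bits, unsigned */

    printf("i = %" PRId32 ", b = %" PRIu8 "\n", i, b);
    printf("sizeof(int32_t) = %zu, sizeof(uint8_t) = %zu\n",
           sizeof(int32_t), sizeof(uint8_t));
    return 0;
}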

References


  1. Integer types, Accessed on 2014-02-14, <http://www.cplusplus.com/reference/cstdint/>
Cloud
  • 18,753
  • 15
  • 79
  • 153
  • Note that even using these types does not *guarantee* the size of a variable, because those types are optional according to the standard. If an implementation had 9 bit `char`s, for instance, it would be required to provide `int_fast8_t` and `int_least8_t` and their unsigned counterparts, but it would not be required to (and almost certainly would not) provide `int8_t` or `uint8_t`. – Crowman Feb 15 '14 at 15:11
  • @PaulGriffiths No contest on that note. If the implementation does supply the fixed-width types that, by definition, guarantee the size to be exact, then it has to work, or it is broken. On the note of the `fast`/`least` types, you are also correct: they don't guarantee an exact size, just an upper/lower limit. – Cloud Feb 17 '14 at 17:13
1

The standard doesn't require it to. In practice it often does match, but this thread mentions that it's more commonly false on embedded systems.

Community
  • 1
  • 1
kbshimmyo
  • 578
  • 4
  • 13
0

On a 64-bit Debian, this small program

#include <stdio.h>

int main(void) {
    /* %zu is the correct conversion specifier for size_t, the type sizeof yields */
    #define printSize(aType) printf("sizeof " #aType " = %zu\n", sizeof(aType))
    printSize(void*);
    printSize(char);
    printSize(short);
    printSize(int);
    printSize(long);
    printSize(long long);
}

prints the following:

sizeof void* = 8
sizeof char = 1
sizeof short = 2
sizeof int = 4
sizeof long = 8
sizeof long long = 8

So it's not just that int is not required to be 64 bits on a 64-bit system; there are also real-world, general-purpose systems on which the size of int differs from the pointer size. Consequently, you should not rely on the size of these types.


There are exceptions to this rule, where you can rely on certain guarantees about the size of a basic type. These are explicitly stated in the C standard. The two most important ones are:

  1. char is required to be exactly one byte, which must be at least 8 bits (you can't count on it being either signed or unsigned, though).

  2. long long is required to be at least 64 bits.

And of course, long is required to be at least as big as int etc.
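
A minimal sketch of how those guarantees can be checked at compile time, assuming a C11 compiler (for _Static_assert); the limits come straight from <limits.h>:

#include <limits.h>

/* None of these assertions can fire on a conforming implementation;
   they merely document the guarantees listed above. */
_Static_assert(CHAR_BIT >= 8, "a byte is at least 8 bits");
_Static_assert(sizeof(char) == 1, "char is exactly one byte by definition");
_Static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");
_Static_assert(LONG_MAX >= INT_MAX, "long can represent every int value");

int main(void) { return 0; }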

cmaster - reinstate monica
  • 38,891
  • 9
  • 62
  • 106
  • Confused by your last point - the C standard defines minimum sizes for *all* the basic numeric types, not just `char` and `long long`, and while `char` has to be at least 8 bits, it can be larger. – Crowman Feb 15 '14 at 14:42
  • @PaulGriffiths Yeah, I was a bit too narrow in that last part. Changed it to be a bit more general. However, the `char` type is required to be precisely one byte by the C11 standard. This used to be different, in C99 you were indeed not allowed to make any assumption on the size of a `char`. – cmaster - reinstate monica Feb 15 '14 at 14:52
  • I'm not sure where you came across that idea, but it's not correct. `char` can be 8 bits or larger in C11 just like it could in the previous standards, it hasn't changed. C11 does not mandate that `char` be 8 bits, see C11 5.2.4.2.1.1, particularly the part that says "shall be equal or greater in magnitude to those shown". The C standard has always required `char` to be exactly one byte by definition, but it has never required one byte to be exactly 8 bits, and it still does not. – Crowman Feb 15 '14 at 14:57