150

I came across the data type int32_t in a C program recently. I know that it stores 32 bits, but don't int and int32 do the same?

Also, I want to use char in a program. Can I use int8_t instead? What is the difference?

To summarize: what is the difference between int32, int, int32_t, int8 and int8_t in C?

Andriy M
linuxfreak

3 Answers

170

Between int32 and int32_t (and likewise between int8 and int8_t), the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32 -- the latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).

On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.
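
A minimal sketch of the size difference, assuming a C99-or-later hosted implementation that provides the optional exact-width types (the variable names and values are just for illustration):

 #include <stdio.h>
 #include <stdint.h>
 #include <inttypes.h>

 int main(void)
 {
     int     plain = 42;   /* at least 16 bits; actual size varies by platform */
     int32_t fixed = 42;   /* exactly 32 bits, if the implementation provides it */
     int8_t  tiny  = 42;   /* exactly 8 bits, if the implementation provides it */

     printf("sizeof(int)     = %zu\n", sizeof plain);
     printf("sizeof(int32_t) = %zu\n", sizeof fixed);
     printf("sizeof(int8_t)  = %zu\n", sizeof tiny);

     /* <inttypes.h> supplies portable printf format macros for the
        fixed-width types. */
     printf("fixed = %" PRId32 ", tiny = %" PRId8 "\n", fixed, tiny);
     return 0;
 }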

Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need it to be either signed or unsigned, you need to specify that explicitly.
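
To make the char point concrete, here is a minimal sketch (nothing beyond the standard library is assumed; CHAR_MIN and CHAR_BIT come from <limits.h>):

 #include <stdio.h>
 #include <stdint.h>
 #include <limits.h>

 int main(void)
 {
     char          c  = 'A';   /* fine for characters; signedness unspecified */
     signed char   sc = -1;    /* explicitly signed small integer */
     unsigned char uc = 200;   /* explicitly unsigned small integer */
     int8_t        i8 = -1;    /* exactly 8 bits and signed, if it exists */

     /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed. */
     printf("plain char is %s\n", CHAR_MIN == 0 ? "unsigned" : "signed");
     printf("a byte here is %d bits\n", CHAR_BIT);
     printf("%c %d %d %d\n", c, sc, uc, (int)i8);
     return 0;
 }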

Matt Quigley
Jerry Coffin
  • What about bool and bool_t? Are they the same? – linuxfreak Jan 25 '13 at 05:40
  • 1
    @linuxfreak: Not sure about `bool_t` -- never heard of that one before. The C standard defines `_Bool` as a built-in type. `bool` is defined only if you `#include ` (as a macro that expands to `_Bool`). – Jerry Coffin Jan 25 '13 at 05:41
  • 6
    You said "for a 64-bit implementation, (int) should probably be 64 bits". In practice, int is 32-bits on all common 64-bit platforms including Windows, Mac OS X, Linux, and various flavors of UNIX. One exception is Cray / UNICOS but they are out of fashion these days. – Sam Watkins Nov 18 '14 at 05:53
  • 6
    @SamWatkins: Yes, that's why I carefully said "should be", not "is". The standard says it's "the natural size suggested by the architecture", which (IMO) means on a 64-bit processor, it really *should* be 64 bits (though, for better or worse, you're quite right that it usually isn't). From a more practical viewpoint, it *is* awfully handy to have a 32-bit type among the types in C89, and if int is 64 bits, long has to be at least 64 bits too, so there'd often be no 32-bit type. – Jerry Coffin Nov 18 '14 at 06:43
  • You write "If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte" <--- Are you taking an ancient definition of byte, where byte doesn't have to be 8 bits? If you're using a non-ancient definition of byte i.e. byte=8 bits, then what do you mean when you say a char can vary in how many bits it is but is always one byte? – barlop Oct 19 '15 at 20:28
  • @barlop: I mean that the C and C++ standards define `byte` as the amount of storage occupied by one `char` (or `signed char` or `unsigned char`). For example, `sizeof(char) == 1` is guaranteed to be true. – Jerry Coffin Oct 19 '15 at 20:42
  • @JerryCoffin ok so re char I suppose when you said its size can vary in number of bits.. you meant it can be more than 8 bits – barlop Oct 19 '15 at 21:12
  • 2
    @barlop: Yes. (Both C and C++ mandate a minimum range of 255 values for char, so it requires at least 8 bits, but can be more). – Jerry Coffin Oct 19 '15 at 21:33
  • @JerryCoffin do you have a quote re minimum range of 255 values which you use to determine that it requires at least 8 bits? – barlop Oct 20 '15 at 00:48
  • @barlop: §5.2.4.2.1 of the C standard ("The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. [...] UCHAR_MAX 255"). – Jerry Coffin Oct 20 '15 at 01:05
  • "Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte" - what? – ErlVolton Oct 20 '17 at 22:59
  • @ErlVolton: One byte can be anywhere from 8 bits on up. A `char` must be exactly the same size as a byte. – Jerry Coffin Oct 20 '17 at 23:21
  • 3
    I was always under the impression that one byte was exactly 8 bits, not anywhere from 8 bits on up – ErlVolton Oct 21 '17 at 01:41
  • @ErlVolton: Your impression was mistaken. – Jerry Coffin Oct 21 '17 at 05:14
  • @JerryCoffin You're right, after reading the wiki article on "Byte" I see a byte is defined as 8 bits or up: "The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte." Can you explain how common it is for an implementation of C or C++ to define a byte as more than 8 bits, and which implementations do so? Thanks a ton – ErlVolton Oct 21 '17 at 05:37
  • @ErlVolton They do exist. A while ago I used a compiler for a TMS320F2812 and its byte size was defined as 16 bits. Caused lots of confusion. – Andrew Goedhart May 11 '20 at 09:08
29

The _t data types are typedefs in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
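
A minimal sketch of that availability point, assuming <stdint.h> itself is present: the exact-width typedefs are optional, but when a given type is provided its limit macro is also defined, so you can test for it at compile time.

 #include <stdio.h>
 #include <stdint.h>

 int main(void)
 {
 #ifdef INT32_MAX
     int32_t x = 123456;       /* the exact-width type is available */
     printf("int32_t is available, x = %ld\n", (long)x);
 #else
     long x = 123456;          /* fall back to a type guaranteed to exist */
     printf("no int32_t here, x = %ld\n", x);
 #endif
     return 0;
 }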

Superman
9

Always keep in mind that the size is implementation-dependent if not explicitly specified, so if you declare

 int i = 10;

on some systems the compiler may give you a 16-bit integer, and on others a 32-bit integer (or a 64-bit integer on newer systems).

In embedded environments this can lead to surprising results (especially when handling memory-mapped I/O, or even in a simple array situation), so it is highly recommended to use fixed-size variables; a short memory-mapped I/O sketch follows the legacy typedefs below. In legacy systems you may come across

 typedef short INT16;
 typedef int INT32;
 typedef long INT64; 
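
As a hypothetical illustration of the memory-mapped I/O concern mentioned above: the address 0x40021000, the register name, and the bit position below are made up, but the pattern of using a fixed-width type for a hardware register is the point.

 #include <stdint.h>

 /* Hypothetical 32-bit control register; a real address would come from
    the device's datasheet. */
 #define HYPOTHETICAL_CTRL_REG (*(volatile uint32_t *)0x40021000u)

 void enable_peripheral(void)
 {
     /* The register is 32 bits wide, so the access must be too. With a
        plain int (16 bits on some embedded targets) the write could touch
        only part of the register. */
     HYPOTHETICAL_CTRL_REG |= (uint32_t)1u << 3;   /* set a made-up enable bit */
 }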

Starting with C99, the standard added the stdint.h header file, which provides essentially the same kind of typedefs.

On a Windows-based system, you may see entries in the stdint.h header file such as

 typedef signed char       int8_t;
 typedef signed short      int16_t;
 typedef signed int        int32_t;
 typedef unsigned char     uint8_t;

There is quite a bit more to it, such as the minimum-width or exact-width integer types; it is not a bad idea to explore stdint.h for a better understanding.
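
For example, a minimal sketch of the minimum-width ("least") and fastest minimum-width ("fast") families, which are always present even when the exact-width types are optional:

 #include <stdio.h>
 #include <stdint.h>
 #include <inttypes.h>

 int main(void)
 {
     int_least8_t   least8  = 100;      /* smallest type with at least 8 bits */
     int_fast16_t   fast16  = 1000;     /* "fastest" type with at least 16 bits */
     uint_least32_t least32 = 70000u;   /* at least 32 bits, unsigned */

     printf("sizeof(int_least8_t)   = %zu\n", sizeof least8);
     printf("sizeof(int_fast16_t)   = %zu\n", sizeof fast16);
     printf("sizeof(uint_least32_t) = %zu\n", sizeof least32);
     printf("%" PRIdLEAST8 " %" PRIdFAST16 " %" PRIuLEAST32 "\n",
            least8, fast16, least32);
     return 0;
 }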

Naumann