I'm new to C++. I was learning about types, their memory use, and how their sizes differ across architectures. Is there any downside to using fixed-width types such as int32_t?
-
On your PC, no. On some platform that does not even provide `int32_t` (it is optional)... – Marc Glisse Jan 30 '17 at 03:39
-
It's all about *semantics*. What are you saying to the people reading your code? If you use plain `int` then you say that it's a generic integer, nothing special about it. If you use `size_t` then you say it's a size of some kind. If you use `int32_t` then you say it's a specific kind of integer data that has to follow certain restrictions (signedness and size). – Some programmer dude Jan 30 '17 at 03:47
3 Answers
The only real downside might be if you want your code to be portable to a system that doesn't have a 32-bit integer type. In practice those are pretty rare, but they are out there.
C++ has access to the C99 (and newer) integer types via `<cstdint>`, which will give you access to the `int_leastN_t` and `int_fastN_t` types. Those might be the most portable way to get specific bit widths into your code, should you really happen to care about that.

-
"they are out there". Can you name such a system? For which there is a C++11 compiler available, but there is no `(u)int32_t`? – geza Jul 30 '18 at 07:21
-
I don't know, but would expect, that you can get a C++11 compiler for a Motorola 56k DSP; it has 24- and 48-, but no 32-bit native types. Related are these two questions: https://stackoverflow.com/questions/45119928 and https://stackoverflow.com/questions/6971886. – Carl Norum Jul 31 '18 at 03:08
The original intent of the `int` type was for it to represent the natural word size of the architecture you were running on; you could assume that any operations on it were the fastest possible for an integer type.
These days the picture is more complicated. Cache effects or vector instruction optimization might favor using an integer type that is smaller than the natural size.
Obviously if your algorithm requires an `int` of at least a certain size, you're better off being explicit about it.

-
`int` was around and required to be 16-bit while lots of 8-bit platforms were still in use. – Carl Norum Jan 30 '17 at 03:43
-
@CarlNorum but even those architectures generally had instructions for working with 16-bit `int` - everybody knew 8 bits was useless for computation. – Mark Ransom Jan 30 '17 at 03:46
-
@MarkRansom Yay an 8bit or 1 byte integer; I can finally count up to 255 in decimal or 0xFF in hex! I think my 30 year old outdated calculator does a better job counting than an 8 bit integer. – Francis Cugler Jan 30 '17 at 08:29
E.g.:

- To save space, use `int_least32_t`
- To save time, use `int_fast32_t`

But in actuality, I personally use `long` (at least 32-bit) and `int` (at least 16-bit) from time to time simply because they are easier to type.

(Besides, `int32_t` is optional, not guaranteed to exist.)
