
I already know that stdint.h is used when you need specific variable sizes for portability between platforms. I don't really have such an issue for now, but what are the pros and cons of using it besides the fact already stated above?

Searching for this on Stack Overflow and other sites, I found two links that deal with the topic:

These two links are great, especially if one wants to know more about the main reason for this header: portability. But what I like most about it is that I think uint8_t is cleaner than unsigned char (for storing an RGB channel value, for example), int32_t looks more meaningful than plain int, etc.

So, my question is: exactly what are the pros and cons of using stdint.h besides portability? Should I use it just in some specific parts of my code, or everywhere? If everywhere, how can I use functions like atoi(), strtok(), etc. with it?

Thanks!

TejasKhajanchee

4 Answers


Pros

Using well-defined types makes the code far easier and safer to port, as you won't get any surprises when for example one machine interprets int as 16-bit and another as 32-bit. With stdint.h, what you type is what you get.

Using plain int etc. also makes it hard to detect dangerous type promotions.
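
One well-known case (a minimal sketch, not the example this answer originally contained): when a signed and an unsigned operand of the same rank meet in a comparison, the signed one is silently converted to unsigned.

```c
#include <stdio.h>

int main(void)
{
    unsigned int u = 1;
    int i = -1;

    /* The usual arithmetic conversions turn i into a huge unsigned value
       (UINT_MAX) before the comparison, so this branch is taken. */
    if (i > u)
        printf("i > u  (surprise!)\n");
    else
        printf("i <= u\n");
    return 0;
}
```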

Another advantage is that by using int8_t instead of char, you know that you always get a signed 8-bit variable. Whether plain char is signed or unsigned is implementation-defined and varies between compilers. Therefore, the default char is plain dangerous to use in code that should be portable.
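
A minimal sketch of the difference; what CHAR_MIN prints depends entirely on the implementation, while the int8_t limits never vary:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 where plain char is unsigned, and negative (typically
       -128) where it is signed -- the compiler gets to choose. */
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);

    /* int8_t removes the ambiguity: always signed, always 8 bits. */
    printf("INT8_MIN = %d, INT8_MAX = %d\n", INT8_MIN, INT8_MAX);
    return 0;
}
```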

If you want to give the compiler a hint that a variable should be optimized for speed, you can use the uint_fastx_t types, which tell the compiler to use the fastest possible integer type that is at least as large as 'x'. Most of the time this doesn't matter; the compiler is smart enough to optimize type sizes no matter what you have typed in. Between sequence points, the compiler may implicitly use a different type than the one specified, as long as it doesn't affect the result.
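
A small sketch of what the fast types express; the size printed is of course platform-dependent:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint_fast16_t counter = 0;   /* "at least 16 bits, whichever width is fastest" */

    /* Often 4 or 8 on a 32-/64-bit desktop, 2 on a 16-bit microcontroller. */
    printf("sizeof(uint_fast16_t) = %zu\n", sizeof counter);
    printf("counter = %ju\n", (uintmax_t)counter);
    return 0;
}
```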

Cons

None.


Reference: MISRA-C:2004 rule 6.3: "typedefs that indicate size and signedness shall be used in place of the basic types".

EDIT: Removed incorrect example.

Lundin
  • An obvious con, as wallyk stated, is the performance impact. Forcing counting into uint32_t integers on a 12-bit machine is tedious. – hroptatyr Mar 23 '12 at 10:40
  • `uint32_t` cannot exist on a 12-bit machine. – R.. GitHub STOP HELPING ICE Mar 23 '12 at 12:02
  • @hroptatyr If you read my answer again, you will find that I addressed that very issue. If it needs to be `uint32_t`, declare it as that. If it doesn't need that high a resolution, but only 16 bits, then declare it as `uint_fast16_t`, which will be compiled as 32 bits on a CPU where 32-bit alignment is faster. – Lundin Mar 23 '12 at 12:24
  • According to 6.3.1.3 (1), "When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.", your example should print "x > y" on a 16-bit system too. The integer conversion rank of `long` is greater than that of `(unsigned) int`, and `~x` is `UINT_MAX`, which with 16-bit `unsigned int`s is 65535, a value representable as a `long`. So the comparison is `65535L > 0L`. The point is valid, but you picked a wrong example. – Daniel Fischer Apr 27 '12 at 22:03
  • Hmm, we have an `unsigned int` and a `long`. No integer promotions. `~x` is still an `unsigned int`. For the comparison, we have the usual arithmetic conversions (6.3.1.8). The signed type has greater conversion rank and can represent all values of the unsigned type, so the operand with unsigned type is converted to the signed type. Per the quoted passage, it must keep its value. If you test it with `unsigned short` on a 32-bit system, you get an integer promotion for `~`, meaning that `~x` becomes the `int` -1 (assuming two's complement etc.), which is converted (value-preserving) to `long`... – Daniel Fischer Apr 27 '12 at 23:32
  • ... resulting in `-1L > 0L`. But the situation is the same as 16-bit `int` and 32-bit `long` on a 64-bit Linux with 32-bit `int` and 64-bit `long`. And, lo and behold, it prints `x > y`, as expected. – Daniel Fischer Apr 27 '12 at 23:34
  • Sorry to be a nuisance, but I'd really like you either to change the example, so I can upvote, or to point out a flaw in my reasoning, so I can upvote. – Daniel Fischer May 20 '12 at 01:12
  • @DanielFischer Taking a second look at it, it does indeed seem to be incorrect, per the usual arithmetic conversions rule: `"Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type."` The example with short given in my hasty comment obviously gives an incorrect result only because of the integer promotions, so it was a bad example as well; comment deleted. – Lundin May 21 '12 at 14:11
  • I have removed the example from my answer but can't be bothered to come up with a new one. Anyway, all these issues just clearly demonstrate how very dangerous and problematic the various implicit conversions are, so they only strengthen my argument for using stdint.h. – Lundin May 21 '12 at 14:11
  • Pity that you can't be bothered to think up a new example. I'm too lazy for that, too. Nevertheless, now I can upvote. – Daniel Fischer May 21 '12 at 14:29
  • @R.. Why not? If the compiler authors want to implement the type, they can. – Feb 29 '16 at 05:29
  • @rrrzx: [u]intNN_t are required to have no padding bits. This is impossible unless NN is a multiple of CHAR_BIT. – R.. GitHub STOP HELPING ICE Feb 29 '16 at 14:20

The only reason to use uint8_t rather than unsigned char (aside from aesthetic preference) is if you want to document that your program requires char to be exactly 8 bits. uint8_t exists if and only if CHAR_BIT==8, per the requirements of the C standard.
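
A minimal sketch (the variable names are just illustrative): merely using the type documents, and enforces at compile time, the 8-bit-byte requirement, because the code will not compile where uint8_t does not exist.

```c
#include <stdint.h>

/* Compiles only where CHAR_BIT == 8, since uint8_t must have exactly
   8 value bits and no padding. */
uint8_t red, green, blue;      /* e.g. the RGB channels from the question */

/* unsigned char is the same width here, but states no such requirement. */
unsigned char also_a_byte;
```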

The rest of the intX_t and uintX_t types are useful in the following situations:

  • reading/writing disk/network (but then you also have to use endian conversion functions)
  • when you want unsigned wraparound behavior at an exact cutoff (but this can be done more portably with the & operator; see the sketch after this list).
  • when you're controlling the exact layout of a struct because you need to ensure no padding exists (e.g. for memcmp or hashing purposes).
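
A minimal sketch of the second point above, with hypothetical function names: both helpers reduce the sum of two 16-bit values modulo 2^16, one via the exact-width type and one via masking.

```c
#include <stdint.h>

/* Wraparound at exactly 16 bits via the exact-width type. */
uint16_t wrap_fixed(uint16_t a, uint16_t b)
{
    return (uint16_t)(a + b);      /* conversion back to uint16_t wraps mod 2^16 */
}

/* The same result via masking, which needs no exact-width type at all. */
unsigned wrap_masked(unsigned a, unsigned b)
{
    return (a + b) & 0xFFFFu;
}
```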

On the other hand, the uint_least8_t, etc. types are useful anywhere that you want to avoid using wastefully large or slow types but need to ensure that you can store values of a certain magnitude. For example, while long long is at least 64 bits, it might be 128-bit on some machines, and using it when what you need is just a type that can store 64 bit numbers would be very wasteful on such machines. int_least64_t solves the problem.
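
A minimal sketch; the size printed depends on what the implementation picks, but the value range is guaranteed:

```c
#include <inttypes.h>   /* PRIdLEAST64; also provides the <stdint.h> types */
#include <stdio.h>

int main(void)
{
    /* Guaranteed to hold 64-bit values, but the implementation may pick
       the smallest suitable type rather than a possibly wider long long. */
    int_least64_t big = -9000000000000000000LL;

    printf("value = %" PRIdLEAST64 ", size = %zu bytes\n", big, sizeof big);
    return 0;
}
```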

I would avoid using the [u]int_fastX_t types entirely since they've sometimes changed on a given machine (breaking the ABI) and since the definitions are usually wrong. For instance, on x86_64, the 64-bit integer type is considered the "fast" one for 16-, 32-, and 64-bit values, but while addition, subtraction, and multiplication are exactly the same speed whether you use 32-bit or 64-bit values, division is almost surely slower with larger-than-necessary types, and even if they were the same speed, you're using twice the memory for no benefit.

Finally, note that the arguments some answers have made about the inefficiency of using int32_t for a counter when it's not the native integer size are technically mostly correct, but it's irrelevant to correct code. Unless you're counting some small number of things where the maximum count is under your control, or some external (not in your program's memory) thing where the count might be astronomical, the correct type for a count is almost always size_t. This is why all the standard C functions use size_t for counts. Don't consider using anything else unless you have a very good reason.
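
A minimal sketch of that convention, with a hypothetical helper: both the length parameter and the returned count use size_t, just like the standard library does.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical helper: count the zero elements in an array. */
static size_t count_zeros(const int *a, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; ++i)
        if (a[i] == 0)
            ++count;
    return count;
}

int main(void)
{
    int data[] = { 3, 0, 7, 0, 0 };
    printf("%zu zeros\n", count_zeros(data, sizeof data / sizeof data[0]));
    return 0;
}
```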

R.. GitHub STOP HELPING ICE
  • The issue with int_fastx_t sounds like a compiler bug rather than something the programmer should need to concern themselves with. And I agree that counter variables (for-loop iterators etc.) should be size_t in most cases. – Lundin Mar 23 '12 at 12:31
  • "Reading/writing disk/network" is the PC answer. You also definitely need stdint types in any form of embedded system, where you are dealing with hardware registers, direct memory access, memory handling routines, data protocol mappings, interrupt vector setup, bootloaders... and so on. – Lundin Mar 23 '12 at 12:35
  • @Lundin: The fact that it's a widespread "bug", along with the lack of a specification for the definitions of these types in the psABI, is a **very good** reason for the programmer to be concerned. It means that if the bug is ever fixed, any external interfaces using these types will break ABI compatibility. Worse yet, the "version" of your interfaces is determined not by your library version but by your compiler version. – R.. GitHub STOP HELPING ICE Mar 24 '12 at 00:10

Cons

The primary reason the C language does not specify the size of int, long, etc. is computational efficiency. Each architecture has a natural, most efficient size, and the designers specifically empowered and intended the compiler implementor to use the natural native data size for speed and code-size efficiency.

In years past, communication with other machines was not a primary concern (most programs were local to the machine), so the predictability of each data type's size was of little importance.

Insisting that a particular architecture use a particular size int to count with is a really bad idea, even though it would seem to make other things easier.

In a way, thanks to XML and its brethren, data type size is once again not much of a concern. Shipping machine-specific binary structures from machine to machine is again the exception rather than the rule.

wallyk
  • @Lundin: Types like `int_fast16_t` would be a lot more useful if the Standard allowed compilers to give such types extra precision in cases where that would be more efficient, without having to give it always. Code that cares about memory efficiency (common on embedded systems) could greatly benefit if the range of a variable could depend upon whether it was stored in a register, even under a rule requiring the range of any particular variable to be uniform everywhere. Under such a rule, `int_fast16_t` could on many machines be as compact as an `int16_t` while... – supercat May 14 '15 at 19:00
  • ...still gaining all the speed advantages that could be realized if it had been promoted to `int`. – supercat May 14 '15 at 19:04

I use stdint types for one reason only: when the data I hold in memory will go to disk, the network, or a descriptor in binary form. You only have to fight the little-endian/big-endian issue, but that's relatively easy to overcome.
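
A minimal sketch of that use, with a hypothetical helper name: the exact-width type pins down the on-the-wire layout, and writing byte by byte settles the endianness question.

```c
#include <stdint.h>

/* Serialize a 32-bit value as 4 bytes in big-endian (network) order,
   regardless of the host's own byte order. */
static void put_be32(uint8_t out[4], uint32_t v)
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)(v);
}
```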

The obvious reason not to use stdint is when the code is size-independent; in maths terms, everything that works over the rational integers. It would produce ugly code duplicates if you provided a uint*_t version of, say, qsort() for every expansion of *.

I use my own types in that case, derived from size_t when I'm lazy, or from the largest supported unsigned integer type on the platform when I'm not.
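
For instance (a minimal sketch, with a hypothetical function name), a single width-independent routine written over the widest unsigned type covers every narrower width without duplication:

```c
#include <stdint.h>

/* One gcd for all unsigned widths, instead of a uint8_t/uint16_t/uint32_t/
   uint64_t version of the same algorithm. */
static uintmax_t gcd(uintmax_t a, uintmax_t b)
{
    while (b != 0) {
        uintmax_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}
```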

Edit, because I ran into this issue earlier:
I think it's noteworthy that at least uint8_t, uint32_t and uint64_t are broken in Solaris 2.5.1. So for maximum portability I still suggest avoiding stdint.h (at least for the next few years).

hroptatyr