
When should one use the datatypes from stdint.h? Is it right to always use them as a convention? What was the purpose of the design of nonspecific-size types like int and short?

Guy

4 Answers


When should one use the datatypes from stdint.h?

  1. When the programming task specifies the integer width, especially to accommodate some file or communication protocol format (see the sketch just after this list).
  2. When a high degree of portability between platforms is required over performance.
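For example, a minimal sketch of case 1 (the `wire_header` layout, field names, and `pack_header` helper here are hypothetical, not from any real protocol):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical wire header: every field has an exact width, so the
   layout is the same on every platform, unlike int/short. */
struct wire_header {
    uint8_t  version;
    uint8_t  flags;
    uint16_t length;
    uint32_t sequence;
};

/* Serialize field by field: this avoids struct padding and pins the
   byte order (big-endian here) independent of the host CPU. */
static void pack_header(const struct wire_header *h, uint8_t out[8])
{
    out[0] = h->version;
    out[1] = h->flags;
    out[2] = (uint8_t)(h->length >> 8);
    out[3] = (uint8_t)(h->length & 0xFFu);
    out[4] = (uint8_t)(h->sequence >> 24);
    out[5] = (uint8_t)(h->sequence >> 16);
    out[6] = (uint8_t)(h->sequence >> 8);
    out[7] = (uint8_t)(h->sequence & 0xFFu);
}

int main(void)
{
    struct wire_header h = { 1, 0, 512, 42 };
    uint8_t buf[8];
    pack_header(&h, buf);
    for (int i = 0; i < 8; i++)
        printf("%02X ", buf[i]);   /* 01 00 02 00 00 00 00 2A */
    putchar('\n');
    return 0;
}
```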

Is it right to always use them as a convention?

Things are leaning that way. The fixed width types are a more recent addition to C. Original C had char, short, int, long, and that was progressive as it tried, without being too specific, to accommodate the various integer sizes available across a wide variety of processors and environments. As C is 40ish years old, it speaks to the success of that strategy. Much C code has been written and successfully copes with the soft integer size specification. With increasing needs for consistency, char, short, int, long and long long are not enough (or at least not so easy), and so int8_t, int16_t, int32_t, int64_t were born. New languages tend to require very specific fixed-size integer types and 2's complement. As they succeed, that Darwinian pressure will push on C. My crystal ball says we will see a slow migration to increasing use of fixed width types in C.

What was the purpose of the design of nonspecific size types like int and short?

It was a good first step to accommodate the wide variety of integer widths (8, 9, 12, 18, 36, etc.) and encodings (2's complement, 1's complement, sign/magnitude). So much coding today uses power-of-2 size integers with 2's complement that one may not realize many other arrangements existed beforehand. See this answer also.
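To make the "soft" specification concrete, here is a quick sketch that prints what a given implementation chose (the printed sizes vary by platform; the standard guarantees only minimum ranges: short/int at least 16 bits, long at least 32, long long at least 64):

```c
#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
    /* The classic types guarantee only minimum ranges;
       the actual widths are the implementation's choice. */
    printf("char     : %d bits\n", CHAR_BIT);
    printf("short    : %zu bytes\n", sizeof(short));
    printf("int      : %zu bytes\n", sizeof(int));
    printf("long     : %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    /* int32_t, by contrast, is exactly 32 bits wherever it exists. */
    printf("int32_t  : %zu bytes\n", sizeof(int32_t));
    return 0;
}
```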

chux - Reinstate Monica
  • afaik, `int` was designed as the datatype which can be handled most efficiently by the CPU (e.g. a CPU register). `long` was just longer and `short` shorter (but accessible as e.g. half of a register (`al`, `ah` on x86)). – ensc Nov 19 '13 at 18:26
  • @ensc Agree about `int` as the "most efficiently by the CPU", or more to the point the "native" CPU integer size. But this is not always so, for some CPUs have a "native" 8-bit integer (old CPUs and today's small embedded ones). But C requires at least 16 bits for an `int` to be compliant. – chux - Reinstate Monica Nov 19 '13 at 18:56
  • #2 is wrong. Plain "int" need not be used for any reason now (except compatibility with existing APIs using int), and you can get both portability and performance with stdint.h types, by using int_fast16_t (et al). – Jetski S-type Sep 17 '19 at 04:34

My work demands that I use them and I actually love using them.

I find it useful when I have to implement a protocol and use them inside a structure which can be a message that needs to be sent out or a holder of certain information.

If I have to use a sequence number that needs to be incremented, I wouldn't use int, because sequence numbers aren't supposed to be negative. I use uint32_t instead. I then know the sequence number space and can plan/code accordingly.

The code we write will be running on 32 as well as 64 bit machines, so using "int" on machines of different bit widths results in subtle bugs which can be a pain to identify. Using uint16_t will allocate exactly 16 bits on a 32 or 64 bit architecture.
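A minimal sketch of that sequence-number idea (`next_seq` is just an illustrative helper, not from any particular codebase):

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* uint32_t is unsigned and exactly 32 bits on every platform that
   provides it, so the wraparound (UINT32_MAX + 1 == 0) is well
   defined and identical on 32- and 64-bit builds. */
static uint32_t next_seq(uint32_t *seq)
{
    return (*seq)++;  /* unsigned overflow wraps by definition in C */
}

int main(void)
{
    uint32_t seq = UINT32_MAX;                 /* about to wrap */
    printf("%" PRIu32 "\n", next_seq(&seq));   /* 4294967295 */
    printf("%" PRIu32 "\n", next_seq(&seq));   /* 0 */
    return 0;
}
```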

Nikhil
  • The differing semantics between `uint32_t` on 32-bit systems, `uint32_t` on 64-bit systems, and `int` can wreak havoc on code which doesn't use lots of explicit casts. Interactions among signed types tend to be more "sane". – supercat Apr 15 '17 at 19:01

No, I would say it's never a good idea to use those for general-purpose programming.

If you really care about the number of bits, then go ahead and use them, but for most general use you don't care, so use the general types. The general types might be faster, and they are certainly easier to read and write.

unwind
  • "The general types might be faster": Why? Aren't `uint32_t` etc just `typedef`s? – anishsane Nov 19 '13 at 16:54
  • Why are they easier to read and write? Do you mean they are faster to type, or do they have some other meaning than the types defined in stdint.h? – Guy Nov 19 '13 at 16:55
  • @anishsane: yes, but if you work on a 16 bit platform, always using 32 bit integers even where 16 (the "native" `int`) would suffice is going to slow the program down. – Matteo Italia Nov 19 '13 at 16:56
  • @MatteoItalia That's why you have `uint_least8_t` or `int_fast16_t`... If you want portability, never use int or long; use instead `uint_leastXX_t`, `int_fastXX_t`, or the classic `uint32_t`. – benjarobin Nov 19 '13 at 16:58
  • @benjarobin: I know, I was just explaining that them being "just typedefs" doesn't immediately relate to their "fastness". – Matteo Italia Nov 19 '13 at 16:59
  • It should also be noted that `uint32_t`, `int32_t`, etc., are **optional** for an implementation to supply, per C99 7.20.1.1-3. Only the `fast` and `least` types are required, 8 of each (8, 16, 32, 64, each signed and unsigned, are the only ones mandated). – WhozCraig Nov 19 '13 at 17:00
  • To take the argument to its extreme, we always care about such things when using C. When we don't care we use Python. – Maxim Egorushkin Nov 19 '13 at 17:21
  • @WhozCraig: Can you please explain why uint32_t is optional? From the man page it seems to me to be required. – Guy Nov 19 '13 at 17:31
  • @Guy it may be required for the implementation you're using, but the standard is fairly blunt. From 7.20.1.1-3: *"These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names."* You'll likely get various takes on what that *means*, to be sure. – WhozCraig Nov 19 '13 at 18:13
  • @MatteoItalia: "Even if 16 bits suffice": then you should have used uint16_t. – anishsane Nov 20 '13 at 05:29
  • @anishsane: no, there are platforms/situations where using a smaller int than the native one results in slower code; you want uint_fast16_t, but plain unsigned int is already the native size *and* has the minimum required range. The point is, because of how they are defined, the primitive types are already akin to the various int_fastXX_t, with XX being 16 for short and int, 32 for long and 64 for long long, and they are less awkward to type. – Matteo Italia Nov 20 '13 at 11:47
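The `fast`/`least` types discussed in the comments above can be illustrated with a small sketch (the printed width depends entirely on the implementation; on a typical x86-64 Linux build `int_fast16_t` is often wider than 16 bits, while on a 16-bit MCU it would be exactly 16):

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>
#include <limits.h>

int main(void)
{
    /* int_fast16_t: the fastest type with at least 16 bits.
       The width adapts per platform; the range guarantee does not. */
    int_fast16_t sum = 0;
    for (int_fast16_t i = 1; i <= 100; i++)
        sum += i;                       /* 5050 fits even in 16 bits */
    printf("sum = %" PRIdFAST16 ", width = %zu bits\n",
           sum, sizeof(sum) * CHAR_BIT);
    return 0;
}
```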

Fixed width datatypes should be used only when really required (e.g. when implementing transfer protocols, accessing hardware, or requiring a certain range of values (you should use the ..._least_... variant there)). Otherwise your program won't adapt to changed environments (e.g. using uint32_t for file sizes might have been OK 10 years ago, but off_t adapts to recent needs). As others have pointed out, there might be a performance impact, as int might be faster than uint32_t on 16 bit platforms.

int itself is very problematic due to its signedness; it is better to use e.g. size_t when a variable holds the result of strlen() or sizeof().
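A tiny sketch of that last point (nothing here beyond standard library calls):

```c
#include <stddef.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    const char *s = "hello";

    /* size_t is unsigned and wide enough for any object size, so it
       matches what strlen() and sizeof actually return; a plain int
       could be too narrow and invites signed/unsigned comparison
       warnings. */
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        putchar(s[i]);
    putchar('\n');
    return 0;
}
```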

ensc