104

If you want to use Qt, you have to embrace quint8, quint16 and so forth.

If you want to use GLib, you have to welcome guint8, guint16 and so forth.

On Linux there are u32, s16 and so forth.

uC/OS defines SINT32, UINT16 and so forth.

And if you have to use some combination of those things, you had better be prepared for trouble, because on your machine u32 will be typedef'd over long and quint32 will be typedef'd over int, and the compiler will complain.

Why does everybody do this, if there is <stdint.h>? Is this some kind of tradition for libraries?
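
For illustration, here is a minimal sketch of the kind of clash described above (the typedef names and underlying types are hypothetical stand-ins for the kernel-style and Qt-style headers): on a typical 32-bit target both aliases are 32 bits wide, yet they remain distinct types, so mixing the libraries draws a compiler diagnostic.

```c
/* Hypothetical stand-ins for two libraries' 32-bit typedefs. */
typedef unsigned long u32;     /* kernel-style: aliased over long */
typedef unsigned int  quint32; /* Qt-style: aliased over int      */

static void store_u32(u32 *dst, u32 value) { *dst = value; }

int main(void)
{
    quint32 x = 0;
    /* Same width on a typical 32-bit ABI, but still distinct types:
       "incompatible pointer types" warning in C, a hard error in C++. */
    store_u32(&x, 42u);
    return 0;
}
```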

lost_in_the_source
Amomum
  • For me the bigger question isn't "why don't they just use `stdint.h`?", but rather, would be, why would anyone e.g. use `UINT16` or `quint16` instead of just plain `unsigned short` in the first place? On *which compiler exactly* would doing so fail them? – user541686 Jul 24 '16 at 22:08
  • 8
    @Mehrdad in microcontroller programming you can have all sorts of things. On AVR Megas for example (and consequently on the famous Arduino) int is 16 bits. That may be a nasty surprise. In my opinion, 'unsigned short' requires more typing effort. And it always made me sad to use 'unsigned char' for a byte octet. Unsigned character, really? – Amomum Jul 25 '16 at 00:42
  • Yes I'm aware of that, but I was talking about `short`, not `int`. Do you know of any platform where `short` wouldn't work but `s8` and `s16` both would? I know that's quite possible in theory, but I'm pretty sure most of the libraries in which I see typedefs like this would never actually be targeting such platforms. – user541686 Jul 25 '16 at 02:04
  • 2
    @Mehrdad The point is that you can't really be sure. That's exactly why `stdint.h` was invented. – glglgl Jul 25 '16 at 09:07
  • @glglgl: But I think generally you *can* be sure. If you think about it, projects like Qt have platform-specific code for everything from Unicode conversion to multithreading. So it's not like you can compile them on any arbitrary system that supports the C abstract machine. Now, if your project is only targeting systems X, Y, and Z, and you know that neither X nor Y nor Z is *ever* going to have anything other than a 16-bit short why in the world would you ever use `int16`? The only thing that does is hard-code the (in all likelihood, somewhat arbitrary) number "16" into your code... why? – user541686 Jul 25 '16 at 09:29
  • @Mehrdad If I am targeting multiple systems, I'll have to think about what length `short`, `int` etc. can have - why think about that if I have the right data types already at hand? And as soon as you get to `int` or `long`, you have to think even harder. – glglgl Jul 25 '16 at 09:54
  • 1
    @glglgl: Here's another way to look at the problem: aren't you asking precisely the wrong question? If you're targeting multiple systems, why arbitrarily hard-code the number of bits into the code in the first place? i.e., why not just say `sizeof(int) * CHAR_BIT` (for example) and use that? If your `int` is too small to represent your range (e.g. an array index), then you almost certainly shouldn't be using `int` anyway, but something like `size_t`. Why would `int32` make any more sense? The only time fixed width makes sense is for communication between systems (e.g. file/network format)... – user541686 Jul 25 '16 at 10:06
  • 1
    @Mehrdad No. Sometimes I have values (such as from an ADC or whatever) that I need to store. I know they are 16 bits wide. So the best thing to use is `uint16_t` (or maybe its `fast` or `least` variant). My point being: these types are convenient to use and have their reason to exist. – glglgl Jul 25 '16 at 10:35
  • @glglgl: "Have their reason" definitely, I never disputed that. What I was disputing was whether they are being overused, not whether they have use cases. – user541686 Jul 25 '16 at 10:37
  • @Mehrdad "overused" is subjective. Where does that begin? – glglgl Jul 25 '16 at 10:38
  • @glglgl: I already suggested where it would begin: my premise was, if you're not communicating with another system, you shouldn't be using fixed-size integers to begin with. (Your ADC example is obviously "another system" here.) Now, quite a lot of source code uses fixed-size integers despite not doing any communication, so if you agree with these statements (both of which are objective, but perhaps not easily measurable) then isn't that it? – user541686 Jul 25 '16 at 10:44
  • What is "usC/OS"? Can you reveal some information about it? – Peter Mortensen Jul 25 '16 at 12:52
  • Related (not duplicate): *[Best Practices: Should I create a typedef for byte in C or C++?](http://stackoverflow.com/questions/1409305)* – Peter Mortensen Jul 25 '16 at 12:58
  • I often work on uCs where an int is 16b. To be able to explicitly specify uint64_t is much nicer for code portability than to remember how many longs that is on the given architecture. – stanri Jul 25 '16 at 13:34
  • Because it's much easier to make code, IDEs, languages, etc. more complicated than it is to simplify them. – Bradley Thomas Jul 25 '16 at 15:07
  • @PeterMortensen uC/OS is a real-time operating system for embedded systems; it's quite popular (but in a very specific domain). And you just spotted a misprint in the question, thank you :) – Amomum Jul 26 '16 at 16:37
  • 1
    @Mehrdad: I would suggest that – assuming it seems worth your effort to produce quality code – you should define your own functional typedefs meaning _the way to interact with my API / the rest of my code_, and define these on technical grounds in terms of “technical” typedefs like `size_t` and/or `uint64_t`. – PJTraill Aug 01 '16 at 12:23

4 Answers

81

stdint.h didn't exist back when these libraries were being developed. So each library made its own typedefs.
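
A sketch of what such a pre-C99 header typically looked like (the names and platform tests below are illustrative, not the actual Qt or GLib code): each port inspects `<limits.h>` once and picks suitable underlying types.

```c
/* portable_types.h -- illustrative sketch of a pre-C99 typedef header. */
#ifndef PORTABLE_TYPES_H
#define PORTABLE_TYPES_H

#include <limits.h>

typedef unsigned char  pt_u8;   /* assumes 8-bit char on supported targets   */
typedef unsigned short pt_u16;  /* assumes 16-bit short on supported targets */

#if UINT_MAX == 0xFFFFFFFFUL
typedef unsigned int   pt_u32;  /* int is 32 bits (most desktop ABIs)        */
#elif ULONG_MAX == 0xFFFFFFFFUL
typedef unsigned long  pt_u32;  /* 16-bit-int targets fall back to long      */
#else
#error "No 32-bit unsigned type available for this target"
#endif

#endif /* PORTABLE_TYPES_H */
```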

Peter Mortensen
lost_in_the_source
  • 1
    Well why didn't everybody just take types from some other library or header file? Why Glib (that was developed on Linux) didn't use Linux typedefs? – Amomum Jul 24 '16 at 13:11
  • 4
    And okay, I suppose the lack of stdint.h is a good reason, but why, even today, are these typedefs over int, long and so forth and not over the stdint types? It would make them interchangeable at least – Amomum Jul 24 '16 at 13:14
  • 1
    @Amomum Removing `quint8` (for example) would break existing code. For all that we know, `quint8` may well be a typedef for the standard 8-bit unsigned integer type `uint8_t`. – Kusalananda Jul 24 '16 at 13:17
  • 25
    @Amomum "Why Glib (that was developed on Linux) didn't use Linux typedefs?" While the "home base" of glib certainly is Linux it is as certainly by design a portable library. Defining its own types ensures portability: One only has to adapt a tiny header which matches the library types to the proper respective target platform types. – Peter - Reinstate Monica Jul 24 '16 at 13:27
  • @Kusalananda The point is, in reality it's often _not_, and the question is _why not_ have it be a typedef for `uint8_t` (in practice the problem is usually for the 32-bit and 64-bit types, since depending on the system these could be `long` or `int` and `long long`) – Random832 Jul 24 '16 at 18:34
  • 3
    @Amomum *Why Glib (that was developed on Linux) ...* No, it wasn't. Glib was created *way* before the Linux kernel. – andy256 Jul 25 '16 at 05:01
  • 5
    @andy256 "GLib" is not short for "glibc". It's a library that branched off from gtk. It's not older than Linux. –  Jul 25 '16 at 12:05
  • 2
    @andy256: glib is 1998, linux is 1991. IOW, GLib was created way _after_ Linux. – MSalters Jul 25 '16 at 13:18
  • @Wumpus Yes, I agree. My recollection from those days fuzzed out for a moment. As Peter A. Schneider [says](http://stackoverflow.com/questions/38552314/why-does-everybody-typedef-over-standard-c-types/38552322?noredirect=1#comment64495756_38552322) it was built to be portable (which is how I confused it, since all early GNU projects had to be portable to some extent). When I first saw it in the mid 90's Glib was using completely normal and commonplace portability techniques. – andy256 Jul 26 '16 at 01:19
40

For the older libraries, this is needed because the header in question (stdint.h) didn't exist.

There's still a problem, however: those types (uint64_t and others) are an optional feature in the standard. So a conforming implementation might not ship with them, which can still force libraries to define their own typedefs even today.
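
One way a library can cope with that optionality is to test for the exact-width type at compile time and fall back to the mandatory least-width variant; a sketch (the `lib_u64` name is made up):

```c
#include <stdint.h>

/* <stdint.h> defines UINT64_MAX only when uint64_t itself is provided,
   so the macro doubles as a feature test for the optional exact-width type. */
#ifdef UINT64_MAX
typedef uint64_t       lib_u64;  /* exactly 64 bits, no padding          */
#else
typedef uint_least64_t lib_u64;  /* mandatory fallback: at least 64 bits */
#endif
```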

Ven
  • 14
    The `uintN_t` types are optional, but the `uint_leastN_t` and `uint_fastN_t` types are not. – Kusalananda Jul 24 '16 at 13:29
  • @Kusalananda seems to correspond to my answer – Ven Jul 24 '16 at 13:29
  • I wasn't sure what you meant by "and others". – Kusalananda Jul 24 '16 at 13:30
  • @Kusalananda oh! I meant the size. – Ven Jul 24 '16 at 13:31
  • 7
    @Kusalananda: Sadly those are of limited usefulness. – Lightness Races in Orbit Jul 24 '16 at 14:42
  • 17
    Of course the reason that they are optional is that you are not guaranteed that there *are* integer types with exactly that number of bits. C still supports architectures with rather odd integer sizes. – celtschk Jul 24 '16 at 16:04
  • 5
    @LightnessRacesinOrbit: How are they of limited usefulness? I must admit that apart from hardware interfaces, I don't see why you would need an exact number of bits, rather than just a minimum, to ensure all your values fit in. – celtschk Jul 24 '16 at 16:07
  • 2
    @celtschk any time you're talking over a network, you need exact length values, as you can't make assumptions about types on the remote end. – Leliel Jul 25 '16 at 00:26
  • 1
    @celtschk And reading/writing file formats, IPC with 3rd party applications, etc. – pilkch Jul 25 '16 at 00:49
  • 24
    @Amomum The typedefs *are* required if the implementation has types meeting the requirements: "However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names." (quote from N1570, 7.20.1.1 "Exact-width integer types") So if the standard library doesn't have them, a third-party library couldn't either, it seems. – Eric M Schmidt Jul 25 '16 at 00:54
  • 1
  • @Leliel: You also cannot make assumptions about the representation the other side uses. So the only thing you can reliably transmit are byte sequences (translating between platform bytes and network bytes is the task of the network driver code). To translate numbers into byte sequences, you don't need to have exact integer sizes. – celtschk Jul 25 '16 at 05:08
  • 3
    If an implementation does not have `uint64_t`, surely it is because it does not have any unsigned 64-bit type, so making your own typedefs will not work either. – Tor Klingberg Jul 25 '16 at 13:19
  • 1
    They are "optional" in that if there is no such type, you do not have to supply them. However, if there is no 32 bit integer type, quint32 isn't going to exist either. – Yakk - Adam Nevraumont Jul 25 '16 at 13:50
  • @celtschk "apart from hardware interfaces" -- well, of course; this (and serialization, etc.) is *exactly* why the explicit-width types are useful. I would go further and argue that this is *also* where *low level languages like C itself* are most useful. – Kyle Strand Jul 25 '16 at 15:54
  • @EricMSchmidt: Any implementation which could define such types could also decline to define them if for each integer type it defines xx_MIN==-xx_MAX. Even if all computations would behave as though xx_MIN==-xx_MAX-1, any computation which would yield the value (-xx_MAX-1) would invoke Undefined Behavior, and so the implementation could legitimately do anything it likes, including treating the result numerically like -xx_MAX-1. – supercat Jul 25 '16 at 15:56
  • @KyleStrand: Hardware interfacing is where low-level languages like 1990s C are most useful. The hyper-modern non-low-level dialects that seem more fashionable nowadays are far less useful since operations which should have meaning on some platforms but not others are regarded by hyper-modern compilers as not having meaning on *any* platform. – supercat Jul 25 '16 at 15:58
  • @supercat I don't doubt that. Still, that leaves the list of mainstream languages that are useful for hardware interfacing vanishingly small. – Kyle Strand Jul 25 '16 at 16:32
  • @KyleStrand: I find it very irksome that people who want to write FORTRAN believe that the C Spec should be interpreted in a way that turns C into FORTRAN and makes it useless for writing C, when there's a better language for writing FORTRAN, and C has a long history of being used to write C. – supercat Jul 25 '16 at 16:40
  • @supercat Your comment looks like it would be hilarious if I had any idea what you're talking about :/ I have negligible experience with either language, let alone with their respective communities. – Kyle Strand Jul 25 '16 at 16:43
  • 2
    @KyleStrand: FORTRAN is famous for its ability to have compilers take loops and rearrange them for optimal efficiency, but such ability requires that the compiler know all the ways that data might be accessed. C is famous for its ability to let programmers freely access storage in many different ways. It is increasingly fashionable for compilers to perform the kinds of optimizations that used to be reserved to FORTRAN compilers by presumptively declaring that since the Standard doesn't mandate that compilers recognize all the ways in which C implementations have historically allowed... – supercat Jul 25 '16 at 16:53
  • 1
    ...programs to access storage, they can therefore generate code that will break if programs try to access storage in ways which the Standard doesn't mandate that compilers must recognize. Pushers of what I'd call "hyper-modern C" think that since it's useful to be able to declare arrays of two different kinds of 32-bit integers and have compilers presume that accesses made using one type won't affect arrays of the other type, the logical way for a compiler to facilitate that is by treating a 32-bit "int" and a 32-bit "long" as types that aren't alias-compatible. – supercat Jul 25 '16 at 16:57
  • @supercat Ah, that makes sense. Thanks for the explanation. – Kyle Strand Jul 25 '16 at 17:05
13

stdint.h has been standardised since 1999. It is more likely that many applications define (effectively alias) types to maintain partial independence from the underlying machine architecture.

They give developers confidence that the types used in their application match their project-specific assumptions about behavior, which may not match either the language standard or the compiler implementation.

The practice is mirrored in the object-oriented Façade design pattern, and is much abused by developers who invariably write wrapper classes for all imported libraries.

When compilers were much less standardized and machine architectures could vary from 16-bit or 18-bit words up to 36-bit word-length mainframes, this was much more of a consideration. The practice is much less relevant now in a world converging on 32-bit ARM embedded systems. It remains a concern for low-end microcontrollers with odd memory maps.
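
A sketch of that façade-style header as it commonly appears today, with project-wide names defined once over the standard types (all names here are illustrative):

```c
/* project_types.h -- illustrative facade over <stdint.h>.
   The rest of the code base uses only these names, so a port
   only ever has to revisit this one header. */
#ifndef PROJECT_TYPES_H
#define PROJECT_TYPES_H

#include <stdint.h>

typedef uint8_t  U8;
typedef uint16_t U16;
typedef uint32_t U32;
typedef int16_t  S16;
typedef int32_t  S32;

#endif /* PROJECT_TYPES_H */
```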

Pekka
  • 1
    Granted, `stdint.h` has been standardized since 1999, but how long has it been available in practice? People drag their feet implementing and adopting new standards, and during that long transitional period, old methods still are a must. – Siyuan Ren Jul 25 '16 at 03:07
  • 1
    One nasty gotcha with `stdint.h` is that even on platforms where e.g. `long` and `int32_t` have the same size and representation, there's no requirement that casting an `int32_t*` to `long*` will yield a pointer that can reliably access an `int32_t`. I can't believe the authors of the Standard thought it obvious that layout-compatible types should be alias-compatible, but since they didn't bother saying so, the authors of gcc and IIRC clang think the language would be improved by ignoring aliasing even in cases where it's obvious. – supercat Jul 25 '16 at 15:49
  • 2
    @supercat -- that's probably worth submitting as an erratum to the C committee...because that is *gratuitously dumb*, to put it mildly – LThode Jul 25 '16 at 16:27
  • @LThode: For the C committee to acknowledge that as a mistake would require that it officially declare the behavior of clang and gcc as obtuse. Do you think that's going to happen? The best that could be hoped for (and IMHO the logical way to proceed) would be to define ways for programs to specify "aliasing modes". If a program says specifies that it can accept very strict aliasing rules, then a compiler could use that to allow optimizations beyond what's presently possible. If a program specifies that it requires rules that are somewhat more programmer-friendly than the present ones... – supercat Jul 25 '16 at 16:47
  • ...but would still allow many useful optimizations, then a compiler could generate code that was much more efficient than would be possible with `-fno-strict-alias`, but which would still actually work. Even if there weren't an existing code base, no single set of rules could strike the optimal balance of optimization and semantics for all applications, because different applications have different needs. Add in the existing codebase and the need for different modes should be clear. – supercat Jul 25 '16 at 16:49
3

So you have the power to typedef char to int.

One "coding horror" mentioned that one company's header had a point where a programmer wanted a boolean value, and a char was the logical native type for the job, and so wrote `typedef char bool`. Then later on someone decided an integer was the more logical choice and wrote `typedef int bool`. The result, ages before Unicode, was virtually `typedef char int`.

Quite a lot of forward-thinking, forward compatibility, I think.
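
The hazard is easy to see when the two conflicting definitions are put side by side. A sketch (renamed here so both can live in one file; in the story both were simply called bool):

```c
#include <stdio.h>

/* Sketch of the hazard: two headers in the same code base disagree
   on what "bool" is. */
typedef char bool_v1;  /* the original choice: one byte */
typedef int  bool_v2;  /* the later "improvement"       */

int main(void)
{
    /* Code built against different header revisions now disagrees about
       the size (and struct layout) of every boolean it passes around. */
    printf("sizeof(bool_v1) = %zu, sizeof(bool_v2) = %zu\n",
           sizeof(bool_v1), sizeof(bool_v2));
    return 0;
}
```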

Christos Hayward