
Can someone explain the difference between the types `uint8_t` and `__u8`?

I know that `uint8_t` is defined in `stdint.h` and is available on every Unix system.

/* Unsigned.  */
typedef unsigned char       uint8_t;
typedef unsigned short int  uint16_t;
...

And if I use them, it's recognizable what I intend to do.

Now I have stumbled over the `__u8` and `__u16` types. They seem to me to be the same.

Some of those types are defined in `linux/types.h`:

#ifdef __CHECKER__
#define __bitwise__ __attribute__((bitwise))
#else
#define __bitwise__
#endif
#ifdef __CHECK_ENDIAN__
#define __bitwise __bitwise__
#else
#define __bitwise
#endif

typedef __u16 __bitwise __le16;
typedef __u16 __bitwise __be16;
typedef __u32 __bitwise __le32;
...

I didn't find `__u8` there, but I can still use it and it behaves like `uint8_t`.

Is there some difference in performance or memory consumption?

Thanks for the help :)

baam
  • Consider anything with two adjacent underscores in it off-limits for you. – Kerrek SB Apr 26 '13 at 09:04
  • See [this question](http://stackoverflow.com/q/228783/440558) about leading underscores in identifiers. – Some programmer dude Apr 26 '13 at 09:06
  • `uint8_t` is available on systems where there is a native type with exactly eight bits. If there is no such type, then `uint8_t` is not defined. This has nothing to do with unix, linux, OS X, or whatever. It's about the hardware that the program is running on. – Pete Becker Apr 26 '13 at 13:32

1 Answer


`uintN_t` are typedefs specified* by the C99 (in `<stdint.h>`) and C++11 (in `<cstdint>`) standards. All modern compilers provide them, and appropriate definitions are easy to supply for the few ancient ones, so for portability always use these.

`__uN` are Linux-specific typedefs predating those standards. They are not portable. The double underscore signifies a non-standard definition.

* For 8, 16, 32 and 64, they shall be defined if the compiler has a type of that size; additional ones may be defined.

Jan Hudec
  • The `[u]int[8,16,32,64]_t` are not required by the C99/C11 nor C++11 standard. They are optional `typedef`s if the exact-width types are available on a system. – rubenvb Apr 26 '13 at 09:11
  • @0x90: And where are they specified? – Jan Hudec Apr 26 '13 at 09:15
  • @rubenvb: a clarification - while the exact-width integer types are optional in general, the typedefs corresponding to sizes 8, 16, 32, and 64 are required by C99 and C11 if the platform has those integer types (in two's complement representation for the signed variants). – Michael Burr Apr 26 '13 at 09:25
  • Thanks for the answer. I need the portability therefore i will use the `uintX_t` typedefs! – baam Apr 26 '13 at 11:17
  • The Linux kernel uses u8, u16, u32, (and signed equivalents s8, s16, and s32) internally. The double-underscored `__u8` & friends are primarily there for when kernel structures get exported to userspace, which should be avoided most of the time. – Joshua Clayton Jan 06 '14 at 21:06
  • Even more clarification: the `uint_leastn_t`/`int_leastn_t` and `uint_fastn_t`/`int_fastn_t` types are not optional but mandatory from 8 to 64 bits. No exceptions, not even for freestanding implementations. And the only reason why `uintn_t`/`intn_t` are optional on some systems is to not block exotic mumbo jumbo architectures that don't use 2's complement and/or don't have 8 bit bytes. Systems like x86 Linux _must_ support the exact-width types from 8 to 64 bit as mentioned, it's not optional there. – Lundin Feb 24 '21 at 10:31