
I read that the order of bit fields within a struct is platform-specific. If I use different compiler-specific packing options, will that guarantee the data is stored in the same order as it is written? For example:

struct Message
{
  unsigned int version : 3;
  unsigned int type : 1;
  unsigned int id : 5;
  unsigned int data : 6;
} __attribute__ ((__packed__));

On an Intel processor with the GCC compiler, the fields were laid out in memory as shown: Message.version occupied the first 3 bits in the buffer, and Message.type followed. If I find equivalent struct-packing options for the various compilers, will my code be cross-platform?

YePhIcK
dewald
  • Since a buffer is a set of bytes, not bits, "the first 3 bits in the buffer" isn't a precise concept. Would you consider the 3 lowest-order bits of the first byte to be the first 3 bits, or the 3 highest-order bits? – caf Sep 29 '09 at 03:07
  • When transiting on the network, "the first 3 bits in the buffer" turns out to be _very_ well defined. – Joshua Dec 12 '11 at 19:02
  • @Joshua IIRC, Ethernet transmits the least-significant bit of each byte *first* (which is why the broadcast bit is where it is). – tc. Jan 08 '14 at 01:31
  • When you say "portable" and "cross-platform", which do you mean? That the executable will access the fields in the correct order regardless of target OS, or that the code will compile regardless of toolchain? – That Realty Programmer Guy Aug 08 '19 at 21:16

7 Answers


No, it will not be fully portable. Packing options for structs are compiler extensions, and are themselves not fully portable. In addition to that, C99 §6.7.2.1, paragraph 10 says: "The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined."

Even a single compiler might lay the bit field out differently depending on the endianness of the target platform, for example.
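You can watch your own implementation make that choice. A minimal sketch (reusing the question's struct without the packing attribute; the test value is arbitrary) that dumps the raw bytes: on little-endian x86 GCC the first byte typically prints 0x05 (version in the low-order bits), while a big-endian GCC target typically prints 0xa0 (version in the high-order bits).

#include <stdio.h>
#include <string.h>

struct Message
{
  unsigned int version : 3;
  unsigned int type : 1;
  unsigned int id : 5;
  unsigned int data : 6;
};

int main(void)
{
    struct Message m;
    unsigned char bytes[sizeof m];
    size_t i;

    memset(&m, 0, sizeof m);      /* zero everything, including any padding */
    m.version = 5;                /* binary 101 - easy to spot in the dump */

    memcpy(bytes, &m, sizeof m);  /* memcpy sidesteps aliasing questions */
    for (i = 0; i < sizeof m; i++)
        printf("byte %zu: 0x%02x\n", i, bytes[i]);
    return 0;
}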

Stephen Canon
  • Yeah, GCC, for instance, specifically notes that bitfields are arranged as per the ABI, not the implementation. So staying on a single compiler is not sufficient to guarantee ordering; the architecture has to be checked too. A bit of a nightmare for portability, really. – underscore_d Jan 05 '16 at 13:34
  • Why didn't the C standard guarantee an order for bit fields? – Aaron Campbell Mar 14 '16 at 23:01
  • It's difficult to consistently and portably define the "order" of bits within bytes, much less the order of bits that may cross byte boundaries. Any definition that you settle on will fail to match a considerable amount of existing practice. – Stephen Canon Mar 14 '16 at 23:20
  • Implementation-defined allows for platform-specific optimization. On some platforms, padding between the bit fields can improve access: imagine four 7-bit fields in a 32-bit int; aligning them at every 8th bit is a significant improvement for platforms that have byte reads. – peterchen Sep 09 '16 at 13:48
  • Does `packed` enforce ordering: https://stackoverflow.com/questions/1756811/does-gccs-attribute-packed-retain-the-original-ordering how to enforce bit ordering: https://stackoverflow.com/questions/6728218/gcc-compiler-bit-order – Ciro Santilli OurBigBook.com Jul 30 '17 at 11:10
  • As the question is about C++ as well: For example the C++17 standard does state in section [12.2.4 class.bit](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdf#subsection.12.2.4) §1: "Bit-fields are assigned right-to-left on some machines, left-to-right on others." – user5534993 Jul 02 '20 at 07:42

Bit fields vary widely from compiler to compiler, sorry.

With GCC, big endian machines lay out the bits big end first and little endian machines lay out the bits little end first.

K&R says "Adjacent [bit-]field members of structures are packed into implementation-dependent storage units in an implementation-dependent direction. When a field following another field will not fit ... it may be split between units or the unit may be padded. An unnamed field of width 0 forces this padding..."

Therefore, if you need a machine-independent binary layout, you must do it yourself.

This last statement also applies to non-bitfields due to padding -- however, all compilers seem to have some way of forcing byte packing of a structure, as I see you already discovered for GCC.
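For example, a sketch of doing it yourself for the question's Message fields. The wire layout here (MSB-first in a 16-bit big-endian word) is an arbitrary choice of mine, not anything from the answer; the point is that shifts and masks define the layout, so it is identical on every platform.

#include <stdint.h>

/* Fixed wire layout, chosen by us: bits 15..13 = version, bit 12 = type,
   bits 11..7 = id, bits 6..1 = data, bit 0 unused. */
static void pack_message(uint8_t out[2], unsigned version, unsigned type,
                         unsigned id, unsigned data)
{
    uint16_t w = (uint16_t)(((version & 0x07u) << 13) |
                            ((type    & 0x01u) << 12) |
                            ((id      & 0x1Fu) <<  7) |
                            ((data    & 0x3Fu) <<  1));
    out[0] = (uint8_t)(w >> 8);   /* most significant byte first on the wire */
    out[1] = (uint8_t)(w & 0xFFu);
}

static void unpack_message(const uint8_t in[2], unsigned *version,
                           unsigned *type, unsigned *id, unsigned *data)
{
    uint16_t w = (uint16_t)((in[0] << 8) | in[1]);
    *version = (w >> 13) & 0x07u;
    *type    = (w >> 12) & 0x01u;
    *id      = (w >>  7) & 0x1Fu;
    *data    = (w >>  1) & 0x3Fu;
}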

Joshua

Bit fields should be avoided because they aren't very portable between compilers, even for the same platform. From the C99 standard, 6.7.2.1/10 - "Structure and union specifiers" (there's similar wording in the C90 standard):

An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.

You cannot guarantee whether a bit field will 'span' an int boundary or not, and you can't specify whether a bit field starts at the low end or the high end of the int (this is independent of whether the processor is big-endian or little-endian).

Prefer bitmasks. Use inline functions (or even macros) to set, clear and test the bits, as in the sketch below.
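A minimal sketch of that bitmask approach (the flag names and mask values are invented for illustration):

#include <stdint.h>

/* Invented flag layout, for illustration only. */
#define FLAG_READY  0x01u
#define FLAG_ERROR  0x02u
#define FLAG_DONE   0x04u

static inline void set_flag(uint8_t *flags, uint8_t mask)   { *flags |= mask; }
static inline void clear_flag(uint8_t *flags, uint8_t mask) { *flags = (uint8_t)(*flags & ~mask); }
static inline int  test_flag(uint8_t flags, uint8_t mask)   { return (flags & mask) != 0; }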

Michael Burr
  • The order of bitfields can be determined at compile time. – Greg A. Woods Jul 26 '13 at 18:37
  • Also, bitfields are highly preferred when dealing with bit flags that have no external representation outside the program (i.e. on disk or in registers or in memory accessed by other programs, etc.). – Greg A. Woods Jul 26 '13 at 18:38
  • @GregA.Woods: If this really is the case, please provide an answer describing how. I could not find anything but your comment when googling for it... – mozzbozz Dec 11 '14 at 13:57
  • @GregA.Woods: Sorry, I should have written which comment I referred to. I meant: you say that "the order of bitfields can be determined at compile time", but I cannot find anything about how to do it. – mozzbozz Jan 05 '15 at 11:53
  • @mozzbozz Have a look at http://www.planix.com/~woods/projects/wsg2000.c and search for definitions and use of `_BIT_FIELDS_LTOH` and `_BIT_FIELDS_HTOL` – Greg A. Woods Jan 12 '15 at 22:22
  • @GregA.Woods that file provides a very good example of why you shouldn't try to use bitfields portably - it's horrendous! – kbro Jun 29 '21 at 12:05
  • The standard could have made things trivial by requiring such defines by default, but of course committees... Meanwhile, have a look at any sufficiently "modern" collection of system headers and I'm sure you'll see far worse. – Greg A. Woods Jun 29 '21 at 22:18

Endianness refers to byte order, not bit order. Nowadays it is 99% certain that the bit order within a byte is fixed. However, when using bit fields, endianness should be taken into account. See the example below.

#include <stdio.h>

typedef struct tagT {
    int a : 4;
    int b : 4;
    int c : 8;
    int d : 16;
} T;

int main(void)
{
    /* Reinterpreting a char buffer as T is formally undefined behaviour
       (alignment/aliasing), but it serves to illustrate the layout. */
    char data[] = {0x12, 0x34, 0x56, 0x78};
    T *t = (T *)data;

    printf("a =0x%x\n", t->a);
    printf("b =0x%x\n", t->b);
    printf("c =0x%x\n", t->c);
    printf("d =0x%x\n", t->d);

    return 0;
}

// big endian: mips24k-linux-gcc (GCC) 4.2.3
a =0x1
b =0x2
c =0x34
d =0x5678
 1   2   3   4   5   6   7   8
\_/ \_/ \_____/ \_____________/
 a   b     c           d

// little endian: gcc (Ubuntu 4.3.2-1ubuntu11) 4.3.2
a =0x2
b =0x1
c =0x34
d =0x7856
 7   8   5   6   3   4   1   2
\_____________/ \_____/ \_/ \_/
       d           c     b   a
BTum
pierrotlefou
  • The output of a and b indicates that endianness is still talking about bit orders AND byte orders. – Windows programmer Sep 29 '09 at 05:03
  • Wonderful example of bit-ordering and byte-ordering problems. – Jonathan Nov 04 '16 at 11:16
  • Did you actually compile and run the code? The values for "a" and "b" don't seem logical to me: you are basically saying that the compiler will swap the nibbles within a byte because of endianness. In the case of "d", endianness should not affect the byte order within char arrays (assuming char is 1 byte long); if the compiler did that, we wouldn't be able to iterate through an array using pointers. If, on the other hand, you had used an array of two 16-bit integers, e.g. uint16 data[]={0x1234,0x5678};, then d would definitely be 0x7856 on little-endian systems. – Krauss Sep 12 '17 at 10:25
  • If the standard says "implementation-defined" then all bets are off. – kbro Jun 29 '21 at 12:06

Most of the time, probably, but don't bet the farm on it, because if you're wrong, you'll lose big.

If you really, really need identical binary information, you'll need to create bit fields with bitmasks - e.g. use an unsigned short (16 bits) for Message, and then make things like versionMask = 0xE000 to represent the three topmost bits.
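A sketch of that idea (versionMask as the answer describes; the shift count of 13 follows from the mask covering bits 15..13, and the helper names are made up):

#include <stdint.h>

#define VERSION_MASK   0xE000u  /* three topmost bits of the 16-bit message */
#define VERSION_SHIFT  13

/* Extract the version field from a 16-bit message word. */
static inline unsigned get_version(uint16_t msg)
{
    return (msg & VERSION_MASK) >> VERSION_SHIFT;
}

/* Return a copy of 'msg' with the version field replaced by 'v'. */
static inline uint16_t set_version(uint16_t msg, unsigned v)
{
    return (uint16_t)((msg & ~VERSION_MASK) | ((v << VERSION_SHIFT) & VERSION_MASK));
}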

There's a similar problem with alignment within structs. For instance, Sparc, PowerPC, and 680x0 CPUs are all big-endian, and the common default for Sparc and PowerPC compilers is to align struct members on 4-byte boundaries. However, one compiler I used for 680x0 only aligned on 2-byte boundaries - and there was no option to change the alignment!

So for some structs, the sizes on Sparc and PowerPC are identical, but smaller on 680x0, and some of the members are in different memory offsets within the struct.

This was a problem with one project I worked on, because a server process running on Sparc would query a client and find out it was big-endian, and assume it could just squirt binary structs out on the network and the client could cope. And that worked fine on PowerPC clients, and crashed big-time on 680x0 clients. I didn't write the code, and it took quite a while to find the problem. But it was easy to fix once I did.

Bob Murphy

Thanks @BenVoigt for your very useful comment (on the last answer below) starting:

No, they were created to save memory.

The Linux source does use a bit field to match an external structure: /usr/include/linux/ip.h has this code for the first byte of an IP datagram:

struct iphdr {
#if defined(__LITTLE_ENDIAN_BITFIELD)
        __u8    ihl:4,
                version:4;
#elif defined (__BIG_ENDIAN_BITFIELD)
        __u8    version:4,
                ihl:4;
#else
#error  "Please fix <asm/byteorder.h>"
#endif
        /* ... remaining fields of the header elided ... */
};

However, in light of your comment, I'm giving up on trying to get this to work for the multi-byte bit field frag_off.
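For what it's worth, the conventional way around that problem is to leave frag_off alone as a 16-bit value and use the masks that the same header already supplies (IP_DF, IP_MF and IP_OFFSET are real macros from <linux/ip.h>; the helper function itself is just a sketch):

#include <arpa/inet.h>   /* ntohs */
#include <linux/ip.h>    /* struct iphdr, IP_DF, IP_MF, IP_OFFSET */
#include <stdio.h>

/* Decode the flags and fragment offset without any bit fields. */
static void show_frag(const struct iphdr *ip)
{
    unsigned fo = ntohs(ip->frag_off);   /* field is big-endian on the wire */

    printf("DF=%d MF=%d offset=%u bytes\n",
           !!(fo & IP_DF),               /* Don't Fragment flag */
           !!(fo & IP_MF),               /* More Fragments flag */
           (fo & IP_OFFSET) * 8u);       /* offset is stored in 8-byte units */
}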

Duncan Roe

Of course the best answer is to use a class which reads/writes bit fields as a stream. Using the C bit field structure is simply not guaranteed to be portable. Not to mention that it is considered unprofessional/lazy/stupid to use it in real-world coding.
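A minimal sketch of that stream idea, in C rather than as a class (everything here, names included, is invented for illustration; the point is that the writer, not the compiler, fixes the bit order):

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *buf;     /* output buffer, must be zero-initialized */
    size_t   bitpos;  /* next free bit, counted from the start of buf */
} BitWriter;

/* Append the low 'nbits' of 'value' to the stream, most significant bit
   first, regardless of the host's byte order or bit-field ABI. */
static void put_bits(BitWriter *w, unsigned value, unsigned nbits)
{
    while (nbits--) {
        unsigned bit = (value >> nbits) & 1u;
        w->buf[w->bitpos / 8] |= (uint8_t)(bit << (7 - w->bitpos % 8));
        w->bitpos++;
    }
}

/* Usage for the question's Message: the resulting two bytes are the
   same on every platform.
       uint8_t buf[2] = {0, 0};
       BitWriter w = { buf, 0 };
       put_bits(&w, version, 3);
       put_bits(&w, type, 1);
       put_bits(&w, id, 5);
       put_bits(&w, data, 6);
*/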

99999999
  • I think it is wrong to state that it is stupid to use bit fields, since they provide a very clean way to represent hardware registers, which they were created to model, in C. – trondd Aug 11 '11 at 06:26
  • @trondd: No, they were created to save memory. Bitfields aren't intended to map to outside data structures, such as memory-mapped hardware registers, network protocols, or file formats. If they were intended to map to outside data structures, the packing order would have been standardized. – Ben Voigt Jan 22 '13 at 16:16
  • Using bits saves memory. Using bit fields increases readability. Using less memory is faster. Using bits allows for more complex atomic operations. In our applications in the real world, there is a need for performance and complex atomic operations. This answer wouldn't work for us. – johnnycrash Jul 27 '15 at 17:07
  • @BenVoigt probably true, but if a programmer is willing to confirm that the ordering of their compiler/ABI matches what they need, and sacrifice quick portability accordingly, then they certainly _can_ fulfil that role. As for 9*, which authoritative mass of "real world coders" consider all use of bitfields to be "unprofessional/lazy/stupid", and where did they state this? – underscore_d Jan 05 '16 at 13:44
  • Using less memory is not always faster; it is often more efficient to use more memory and reduce post-read operations, and the processor/processor mode can make that even more true. – Dave Newton Sep 27 '18 at 14:17