Here's an example of how most people think bitfields should work - picture your typical IEEE 754 single-precision float:
+-+--------+-----------------------+
| |        |                       |
+-+--------+-----------------------+
 ^    ^                ^
 |    |                |
 |    |                +---------- significand
 |    +--------------------------- exponent
 +-------------------------------- sign bit
You could create a struct
type with bitfields to represent this format:
struct ieee_754_s {
    unsigned sign_bit    : 1;
    unsigned exponent    : 8;
    unsigned significand : 23;
};
This could be a useful abstraction for learning and understanding how floating point formats work, and what operations need to be performed on each field in order to do floating point arithmetic, etc.
However, there also tends to be a naive expectation that this type would map directly onto a float
and you could do all kinds of clever bit manipulations on float
objects.
Except it doesn't. At least, it's not guaranteed to. On a little-endian system like x86, the fields don't map correctly. To make it work on my MacBook, I had to define the type as
struct ieee_754_le {
    unsigned sig_3 : 8; // lowest-order bits of the significand
    unsigned sig_2 : 8;
    unsigned sig_1 : 7; // highest-order bits of the significand
    unsigned exp   : 8;
    unsigned sign  : 1;
};
and then I could get it to map correctly. But even that isn't guaranteed. Bit fields do not have to be laid out contiguously, and the order in which they're allocated within a storage unit is implementation-defined. The layout above just happens to fit in a single 32-bit storage unit, and the compiler was nice enough to allocate the bit fields from the least significant bit upward, as I'd hoped. It could just as well have allocated them in the opposite order, starting with sig_3 at the most significant bit.
Bit fields are a way to deal with objects whose sizes are not an integral number of storage units without having to do a bunch of bit masking. They are not a good way to represent and interact with low-level binary formats.