So the place where bitfields are beneficial is where you have a lot of bool flags that you can pack into a single word. The code for testing one particular flag is comparable, but the code for testing a particular subpattern of flags can be much shorter, though you may need to write the test yourself:
// using bool
bool a, b, c, d;
if (a && !b && c && !d) ...
// hoping the compiler knows what we are doing
enum { A = 0x01, B = 0x02, C = 0x04, D = 0x08 };
if ((flags&A) && !(flags&B) && (flags&C) && !(flags&D)) ...
// Optimise it ourselves:
enum { A = 0x01, B = 0x02, C = 0x04, D = 0x08 };
if ((flags & (A|B|C|D)) == (A|C)) ...
In the first case, the compiler must load each of the locations, though it can early-out. In the latter two cases it can, at minimum, load the value once and do multiple operations in registers. A good optimiser could turn the second pattern into the third, which is only a couple of instructions (a mask and a compare); it might also realise it can load all four bools as a single word to reduce bandwidth, and they would at least all be in the same cache line, which is almost as good as being in a register these days.
In any case, the third form wins over the first for brevity, as well as saving a little storage.
Note that your two examples do not test the same thing.
A & 5
tests that either the 4-bit or the 1-bit is set, but ignores the 2-bit and all higher bits completely.
A == 5
does test that the 1-bit and 4-bit are both set, but it also checks that ALL OTHER bits are clear.