As a follow-up to "How to use bitmask": when I want to test whether a certain flag is set in a given bitmask, I usually do so with a plain bitwise AND:
if(bitmask & flag) { …
Yet I frequently see something like this:
if((bitmask & flag) == flag) { …
I have observed this in both strongly and weakly typed languages, which rules out its being some kind of guard against type-casting disasters. The only scenario I can come up with in which the two tests are not equivalent is when flag actually has more than one bit set and all of those bits are required to be set in bitmask for the condition to pass. Is that all, or am I missing something here?
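To illustrate what I mean, here is a minimal C sketch of that multi-bit case (the flag names are made up for the example):

#include <stdio.h>

/* Hypothetical flags, for illustration only. */
#define FLAG_READ  0x1u
#define FLAG_WRITE 0x2u
#define FLAG_RW    (FLAG_READ | FLAG_WRITE)  /* two bits set */

int main(void)
{
    unsigned bitmask = FLAG_READ;  /* only one of the two bits is set */

    if (bitmask & FLAG_RW)
        printf("plain AND: passes (any bit of FLAG_RW suffices)\n");

    if ((bitmask & FLAG_RW) == FLAG_RW)
        printf("comparison: passes\n");
    else
        printf("comparison: fails (requires both bits of FLAG_RW)\n");

    return 0;
}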
Bonus: Do compilers have any means of recognizing a flag that will have at most one bit set at runtime and optimizing the (possibly bogus) comparison away?