I have seen some code in open source libraries that checks whether a particular flag is set in a variable with the test if (!!(flag & FLAG1)). My question is: why not simply write if (flag & FLAG1) instead? Is the first version more optimized?
It can be used this way:

int a = !!(flag & FLAG1);

If flag & FLAG1 evaluates to 0, then a will be assigned 0. If flag & FLAG1 evaluates to any nonzero value, then a will be assigned 1.
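A minimal sketch of this normalization (the FLAG1 value here is made up for illustration):

#include <stdio.h>

#define FLAG1 0x04   /* hypothetical flag bit */

int main(void)
{
    int set   = !!(0x07 & FLAG1);  /* 0x07 & 0x04 == 0x04 (nonzero), so !! yields 1 */
    int clear = !!(0x03 & FLAG1);  /* 0x03 & 0x04 == 0x00, so !! yields 0 */
    printf("%d %d\n", set, clear); /* prints "1 0" */
    return 0;
}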
Without a bit more context it's hard to know the author's reason for doing this, but the most common reason is that it converts the value to exactly 0 or 1. This is particularly interesting if you're using __builtin_expect(), in which case yes, it could result in better-optimized code.
It's also used occasionally to make the code a bit more self-documenting… if you see it, you know that you're meant to be treating it as true/false. Usually, when used like this, it's part of a macro.
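A sketch of that macro pattern, modeled on the well-known likely/unlikely macros from the Linux kernel (__builtin_expect is a GCC/Clang builtin; the FLAG1 value and handle function are hypothetical):

#include <stdio.h>

/* __builtin_expect compares its first argument against the expected
   constant, so the value must be normalized to exactly 0 or 1 first,
   which is what the !! does. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

#define FLAG1 0x04   /* hypothetical flag bit */

void handle(int flag)
{
    if (likely(flag & FLAG1)) {
        puts("flag set");    /* hint: the compiler treats this as the hot path */
    } else {
        puts("flag clear");
    }
}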
It's also worth noting that the result is an int, regardless of the original type. This isn't usually consequential, but sometimes it's important.
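One case where this genuinely matters is when the flag bit doesn't fit in the destination type. A sketch, with a made-up 64-bit flag:

#include <stdint.h>
#include <stdio.h>

#define FLAG_HI (UINT64_C(1) << 40)   /* hypothetical flag above bit 31 */

int main(void)
{
    uint64_t flags = FLAG_HI;

    int direct     = (int)(flags & FLAG_HI); /* on typical implementations the high
                                                bits are discarded, giving 0 */
    int normalized = !!(flags & FLAG_HI);    /* normalized to 1 before the conversion */

    printf("%d %d\n", direct, normalized);   /* prints "0 1" */
    return 0;
}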
!! converts a value to the opposite boolean value, then converts it back. The inner ! converts its operand, whatever its value, to the opposite truth value (0 or 1), and the outer ! converts that back, leaving the original truth value normalized to 0 or 1.

For example:

bool b = !!(flag & FLAG1);

converts the result of flag & FLAG1 to a boolean value.
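Spelled out step by step (FLAG1 is again a hypothetical bit, chosen just for illustration):

#include <stdbool.h>
#include <stdio.h>

#define FLAG1 0x04   /* hypothetical flag bit */

int main(void)
{
    int flag  = 0x05;           /* FLAG1 is set */
    int raw   = flag & FLAG1;   /* 0x04: true, but not 1 */
    int once  = !raw;           /* 0: the opposite truth value */
    int twice = !once;          /* 1: back to the original truth, normalized */
    bool b = !!(flag & FLAG1);  /* the same thing in one expression */
    printf("%d %d %d %d\n", raw, once, twice, b);  /* prints "4 0 1 1" */
    return 0;
}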
Unless you want to obfuscate your source code, don't use this kind of statement. Instead, you can use an explicit cast to express clearly what you want to do:
bool b = (bool)(flag & FLAG1);