The traditional way
int count = 32;
for (unsigned int i = 1u << 31; i != 0; i >>= 1, count--)
    if ((number & i) != 0) return count;
return 0; // number == 0: no bits set
You can get fancier with optimization.
EDIT 2: Here's the fastest code I could think of without using the Bit Scan Reverse opcode. You could use a bigger (256-entry) LUT and remove the last if statement; a sketch of that variant follows the code below. In my testing this was faster than the repeated OR-shift-then-LUT method described in another answer.
int Log2_LUT[16] = {0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4};

// Returns the number of bits needed to store `number` (0 for 0).
int Log2(unsigned int number) {
    int count = 0;
    if ((number & 0xFFFF0000) != 0) {
        number >>= 16;
        count += 16;
    }
    if ((number & 0x0000FF00) != 0) {
        number >>= 8;
        count += 8;
    }
    if ((number & 0x000000F0) != 0) {
        number >>= 4;
        count += 4;
    }
    return count + Log2_LUT[number];
}
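For reference, here is a sketch of the 256-entry variant mentioned above (the table, names, and init loop are my own illustration, not benchmarked code): widening the table to a full byte lets you drop the final 4-bit test.

int Log2_LUT256[256]; // Log2_LUT256[b] = bits needed to store byte b

void InitLut256() {   // fill once at startup: 0, 1, 2, 2, 3, 3, 3, 3, 4, ...
    Log2_LUT256[0] = 0;
    for (int i = 1; i < 256; i++)
        Log2_LUT256[i] = Log2_LUT256[i >> 1] + 1;
}

int Log2_256(unsigned int number) {
    int count = 0;
    if ((number & 0xFFFF0000) != 0) { number >>= 16; count += 16; }
    if ((number & 0x0000FF00) != 0) { number >>= 8;  count += 8;  }
    return count + Log2_LUT256[number]; // number now fits in 0..255
}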
Or, if you're on an x86 or x86-64 architecture, you can use the BSR (Bit Scan Reverse) opcode.
You can find the C++ intrinsic for it at http://msdn.microsoft.com/en-us/library/fbxyd7zd%28v=vs.80%29.aspx
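For example, a minimal sketch using MSVC's _BitScanReverse from <intrin.h> (the wrapper name and the +1 convention matching the LUT version above are mine; on GCC/Clang the equivalent building block is __builtin_clz):

#include <intrin.h>

// Returns the number of bits needed to store `number`, via the BSR opcode.
int Log2_BSR(unsigned long number) {
    unsigned long index;
    if (_BitScanReverse(&index, number)) // index = 0-based position of the highest set bit
        return (int)index + 1;           // +1 to match the LUT version's convention
    return 0;                            // BSR leaves index undefined when number == 0
}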
Also you question is similar to this one What is the fastest way to calculate the number of bits needed to store a number
EDIT: Why the log2 answers are not optimal...
While the log2 answers are mathematically correct, transcendental floating-point operations (sine, cosine, tangent, logarithm) are among the slowest operations on modern processors. This is compounded by having to convert the integer to a float and then floor/ceiling the result as well.
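For comparison, here is a sketch of the floating-point approach those answers take (the function name is mine; std::log2 requires C++11, and older code typically uses log(x) / log(2.0) instead):

#include <cmath>

// Illustrative only: the floating-point route pays for an int-to-double
// conversion, a transcendental call, and a floor on the way back.
int Log2_Float(unsigned int number) {
    if (number == 0) return 0;                               // log2(0) is undefined
    return (int)std::floor(std::log2((double)number)) + 1;   // bits needed
}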