I came across this algorithm posted here: How does this algorithm to count the number of set bits in a 32-bit integer work?
I ran the algorithm submitted by "yeer" in the link above, since the algorithms posted there all look more or less the same.
I also wrote the traditional (presumably slower) bit-by-bit loop to check how much faster the clever version is.
Yeer's code:
unsigned int number = 0xFFFFFFFF;
number = (number & 0x55555555) + ((number >> 1) & 0x55555555);   /* bit pairs -> 2-bit counts */
number = (number & 0x33333333) + ((number >> 2) & 0x33333333);   /* 2-bit -> 4-bit counts */
number = (number & 0x0F0F0F0F) + ((number >> 4) & 0x0F0F0F0F);   /* 4-bit -> byte counts */
number = (number & 0x00FF00FF) + ((number >> 8) & 0x00FF00FF);   /* bytes -> 16-bit counts */
number = (number & 0x0000FFFF) + ((number >> 16) & 0x0000FFFF);  /* two halves -> total */
printf("%u", number);  /* %u, since number is unsigned */
Traditional way:
unsigned int number = 0xFFFFFFFF;
unsigned char i = 0;
unsigned char count = 0;
for (i = 0; i < 32; i++)
{
    if ((number >> i) & 1)  /* test bit i */
        count++;
}
printf("%d", count);
The second version actually beats "yeer's" method in my tests.
For input value 0xFF (variable declared as unsigned char): Traditional = 0.047 s, Other = 0.063 s
For input value 0xFFFFFFFF (variable declared as unsigned int): Traditional = 0.141 s, Other = 0.141 s
So what's so special about the other algorithm?
I used the Code::Blocks IDE to build and run both programs.