Assume we have a bitmask, say 64 bits. A simple way to find the lowest 0 bit would be to do a bitwise AND with the test mask 0x0001, and shift that mask to the left as long as the result is nonzero (i.e. the tested bit is 1), stopping as soon as the AND yields 0.
This takes time linear in the number of bits, O(n). I wonder if you can achieve a faster execution than this naive iterative approach, or whether linear time is the theoretical optimum.