Can someone explain in detail what is going on in `k = n & andmask`? How can `n`, which is a number such as 2, be an operand of the same `&` operator as `andmask`, e.g. 10000000, since 2 is a one-digit value and 10000000 is a multi-digit value?
The number of digits in a number is a characteristic of a particular representation of that number. In the context of the code presented, you actually appear to be using two different representations yourself:
- "2" seems to be expressed (i) in base 10 and (ii) without leading zeroes.
- "10000000", on the other hand, I take to be expressed (i) in base 2 and (ii) without leading zeroes.
In this combination of representations, your claim about the numbers of digits is true, but not particularly interesting. Suppose we consider comparable representations. For example, what if we express both numbers in base 256? Both numbers have single-digit representations in that base.
Both numbers also have arbitrary-length multi-digit representations in base 256, formed by prepending any number of leading zeroes to the single-digit representations. And of course, the same is true in any base. Representations with leading zeroes are uncommon in human communication, but they are routine in computers because computers work most naturally with fixed-width numeric representations.
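For illustration, here is a small C program (my own sketch, not part of the code under discussion) showing that both values occupy a single byte, i.e. a single base-256 digit, and that a leading zero appears naturally once a fixed-width representation is printed:

```c
#include <stdio.h>

int main(void) {
    unsigned char a = 2;    /* base-10 "2" */
    unsigned char b = 0x80; /* base-2 "10000000", i.e. 128 */

    /* Each value fits in one byte: one base-256 digit. Printing in
       fixed-width hexadecimal makes the leading zero of 2 explicit. */
    printf("a = 0x%02X\n", a); /* a = 0x02 */
    printf("b = 0x%02X\n", b); /* b = 0x80 */
    return 0;
}
```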
What matters for bitwise AND (`&`) are fixed-width base-2 representations of the operands, where the width is that of one of C's built-in arithmetic types. According to the rules of C, the operands of any arithmetic operator are converted, if necessary, to a common type. The converted operands therefore have the same number of binary digits (i.e. bits) as each other, some of which are often leading zeroes. As I infer you understand, the `&` operator combines corresponding bits from those base-2 representations to determine the bits of the result.
That is, the bits combined are
```
(leading zeroes)10000000 & (leading zeroes)00000010
```
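As a concrete sketch of that step, assuming `andmask` holds the 8-bit pattern 10000000 (0x80) and `n` is 2 (the variable names come from the question; everything else here is assumed):

```c
#include <stdio.h>

int main(void) {
    unsigned char n = 2;          /* 00000010 in a fixed 8-bit width */
    unsigned char andmask = 0x80; /* 10000000 */

    /* Both operands are converted to a common type (int, by the
       integer promotions), gaining identical runs of leading zeroes;
       corresponding bits are then ANDed:
       ...10000000 & ...00000010 == ...00000000 */
    unsigned char k = n & andmask;

    printf("k = %d\n", k); /* prints 0: no bit is 1 in both operands */
    return 0;
}
```

With `n` equal to 2 the result is 0, because this mask tests the high bit, which is not set in 00000010.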
Also, why is `char` used for `n` and not `int`?
It is `unsigned char`, not `char`, and it is used for both `n` and `andmask`. That is a developer choice. `n` could be made an `int` instead, and the `showbits()` function would produce the same output for all inputs representable in the original data type (`unsigned char`).
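To make that concrete, here is a hedged sketch; the original `showbits()` is not shown in the question, so this is an assumed equivalent that takes `int` instead of `unsigned char`:

```c
#include <stdio.h>

/* Assumed reconstruction of showbits(): prints the eight low-order
   bits of n, most significant first. With n declared as int rather
   than unsigned char, the output is identical for every input that
   is representable as unsigned char (0 through 255 on typical
   platforms). */
void showbits(int n) {
    for (int i = 7; i >= 0; i--) {
        int andmask = 1 << i; /* 10000000, 01000000, ..., 00000001 */
        int k = n & andmask;
        putchar(k ? '1' : '0');
    }
    putchar('\n');
}

int main(void) {
    showbits(2);   /* prints 00000010 */
    showbits(128); /* prints 10000000 */
    return 0;
}
```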