Why do we use powers of 2 as units of measure for data volume in Computer Science? For example, 1 byte is 2^3 bits. Is this established by convention, or is it due to other reasons?
And, once we have the byte, why do we consider 1 megabyte = 2^20 bytes instead of 10^6 bytes?
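Just to make the gap between the two definitions concrete, here is a small sketch (Python, purely illustrative, not part of the question itself):

```python
# Compare the "binary" megabyte (2**20 bytes, often written MiB)
# with the "decimal" megabyte (10**6 bytes, the SI prefix).
binary_mb = 2 ** 20    # 1,048,576 bytes
decimal_mb = 10 ** 6   # 1,000,000 bytes

print(binary_mb - decimal_mb)                       # 48576 bytes of difference
print((binary_mb - decimal_mb) / decimal_mb * 100)  # ~4.86% discrepancy
```

So the two conventions differ by almost 5%, which is why the choice matters in practice.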
Some people say that this is a matter of convenience, since computers use a binary representation. But this does not answer the question. Yes, computers use bits. However, we could build processor registers with arbitrary capacities (for example, 20 bits, which is not a power of 2).
Is this just a convention, or is there another reason underlying it?