
Why do we use powers of 2 as units of measure for data volume in computer science? For example, 1 byte is 2^3 bits. Is this established by convention, or is it due to other reasons?

And, once we have the byte, why do we consider 1 megabyte to be 2^20 bytes instead of 10^6 bytes?
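For reference, here is a minimal sketch (my own illustration) of how much the two interpretations differ:

```c
#include <stdio.h>

int main(void) {
    /* The binary "megabyte" (often written MiB): 2^20 bytes. */
    unsigned long binary_mb  = 1UL << 20;   /* 1,048,576 */
    /* The decimal megabyte: 10^6 bytes. */
    unsigned long decimal_mb = 1000000UL;

    printf("2^20 bytes = %lu\n", binary_mb);
    printf("10^6 bytes = %lu\n", decimal_mb);
    printf("difference = %lu bytes (~%.1f%%)\n",
           binary_mb - decimal_mb,
           100.0 * (binary_mb - decimal_mb) / decimal_mb);
    return 0;
}
```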

Some people say that this is a matter of convenience, since computers use binary representation. But this does not answer the question. Yes, computers use bits. However, we could build processor registers with arbitrary capacities (for example, 20 bits, which is not a power of 2).

Is this just a convention, or is there another reason underlying it?

Zaratruta
  • Possible duplicate of [Why is number of bits always(?) a power of two?](http://stackoverflow.com/questions/1606827/why-is-number-of-bits-always-a-power-of-two) – ardhitama Feb 28 '16 at 03:15
  • Possible duplicate of [Why are all datatypes a power of 2?](https://stackoverflow.com/questions/5191833/why-are-all-datatypes-a-power-of-2) – phuclv Aug 29 '17 at 08:38

1 Answer


Consider how electronic memory is used.

To address a particular chunk (e.g. a byte) of memory, digital address lines are set to a pattern of 1s and 0s indicating the desired address. Suppose one manufacturer uses the binary encoding we know, and another uses BCD, where each group of four lines represents a decimal digit 0-9. To address 2^20 bytes, the binary system needs 20 address lines. To address 10^6 bytes, the BCD system needs 6 decimal digits, i.e. 6 × 4 = 24 address lines.

Which system would most designers choose: the one whose memory chip needs 20 address lines, or the one that needs 24?
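To make that count concrete, here is a small sketch (my own illustration, not taken from any particular hardware) that computes how many address lines each scheme needs:

```c
#include <stdio.h>

/* Address lines needed with plain binary addressing:
   the smallest n such that 2^n >= bytes. */
static unsigned lines_binary(unsigned long bytes) {
    unsigned n = 0;
    while ((1UL << n) < bytes) n++;
    return n;
}

/* Address lines needed with BCD addressing:
   4 lines per decimal digit of the highest address (bytes - 1). */
static unsigned lines_bcd(unsigned long bytes) {
    unsigned digits = 0;
    for (unsigned long v = bytes - 1; v > 0; v /= 10) digits++;
    return 4 * digits;
}

int main(void) {
    printf("2^20 bytes: %u binary address lines\n", lines_binary(1UL << 20)); /* 20 */
    printf("10^6 bytes: %u BCD address lines\n",    lines_bcd(1000000UL));    /* 24 */
    return 0;
}
```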

Of the many approaches that have been tried, the market chose density over decimalness and other alternatives. Binary is very natural for many computer algorithms and architectures; decimal is useful for humans.

chux - Reinstate Monica