
I was reading up on the difference between 32-bit and 64-bit systems, and came across this blog in the process: https://www.zdnet.com/article/clearing-up-the-3264-bit-memory-limit-confusion/

Now I'm confused because in this blog they provide a note as follows:

Note: Wondering how we arrive at that 4GB limit? Here's the math for 32-bit systems:

2^32 = 4,294,967,296 bytes
4,294,967,296 / (1,024 x 1,024) = 4,096 MB = 4GB

It's different for 64-bit:

2^64 = 18,446,744,073,709,551,616
18,446,744,073,709,551,616 / (1,024 x 1,024) = 16EB (exabytes)

They state that whether a program is 32-bit or 64-bit changes the memory limit it can use.

What I don't understand is how the bits turn into bytes. If you work out 2 bits to the power of 32, surely the result is 4,294,967,296 bits and not bytes? And if that were so, then the memory limit on a 32-bit system would be 4 gigabits and not 4 gigabytes?

Can someone explain how this works out? Maybe I'm missing something?

AutoBaker
  • looking for a duplicate now; I think I saw one earlier today. Yup, found it in my browser history :) – Peter Cordes Jul 06 '18 at 10:16
  • can you provide a link to the duplicate? I did search through several questions on the site and couldn't find anything similar. – AutoBaker Jul 06 '18 at 10:41
  • It's at the top of the page, because I closed the question as a duplicate. The answer I posted is also a complete answer, so I decided not to make it just a comment even though I was planning to close it. – Peter Cordes Jul 06 '18 at 11:49

1 Answer


Each separately-addressable memory location is a byte. Memory is not bit-addressable; it can only be accessed in byte-sized chunks or larger. That's why setting a single bit in a bitmap requires a read-modify-write of the containing byte or word. So 2^32 distinct addresses cover 2^32 bytes (4 GB), not 2^32 bits.
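
To make both points concrete, here is a minimal sketch in C (the set_bit helper and the bitmap array are names made up for this illustration, not anything from the blog):

    #include <stdint.h>
    #include <stdio.h>

    /* A bitmap packed into bytes: bit n lives inside byte n / 8.
       The smallest unit the CPU can load or store is that whole byte,
       so flipping one bit means: read the byte, modify it, write it back. */
    static void set_bit(uint8_t *bitmap, size_t n)
    {
        uint8_t byte = bitmap[n / 8];        /* read the containing byte    */
        byte |= (uint8_t)(1u << (n % 8));    /* modify: set the target bit  */
        bitmap[n / 8] = byte;                /* write the whole byte back   */
    }

    int main(void)
    {
        uint8_t bitmap[4] = {0};   /* 4 bytes hold 32 individually usable bits */

        set_bit(bitmap, 10);       /* touches byte 1, bit 2 */
        printf("byte 1 is now 0x%02x\n", (unsigned)bitmap[1]);   /* prints 0x04 */

        /* Addresses count bytes, not bits: 32-bit addresses give 2^32 distinct
           byte addresses, i.e. 4,294,967,296 bytes = 4 GiB. */
        printf("2^32 bytes = %llu\n", (unsigned long long)1 << 32);
        return 0;
    }

The exponent 32 counts how many distinct addresses exist, and each address names one byte, which is why the result is read in bytes rather than bits.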

Peter Cordes