
So I just read that virtual addresses are divided into (1) a page number and (2) an offset.

I also read that the page number lets you find the right page, and the offset gets you the right byte within it in physical memory. So for example, with a 4KB page size, we have 12 bits reserved for the offset, since 2^12 = 4096 bytes = 4KB.
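The split described above can be sketched in a few lines. This is just an illustration of the bit arithmetic, assuming 4KB pages (the address widths and names are made up for the example):

```python
# Illustrative sketch: splitting a virtual address into a page number
# and an offset, assuming 4KB (2**12-byte) pages.
PAGE_SIZE = 4096
OFFSET_BITS = 12  # 2**12 == 4096

def split(vaddr):
    page_number = vaddr >> OFFSET_BITS     # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)       # low 12 bits select the byte
    return page_number, offset

print([hex(x) for x in split(0x12345)])  # ['0x12', '0x345']
```

The offset bits never go through translation at all; only the page number is looked up in the page table.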

I get the concepts. But I don't get the reasoning behind using pages of this size. Instead of a 4KB or 8KB page, why couldn't we use a 1-byte page?

I guess that would make every read and write happen byte by byte, which you could say would slow things down.

But aren't we already doing the same thing by first finding the page and then finding the correct byte with the offset?

What is the motivation behind coming up with pages bigger than 1 byte? I get the reason behind the use of virtual memory: to avoid swapping. But why couldn't we do this with smaller, more direct, one-byte pages?

Joseph
  • From ["Datacenter Computers modern challenges in CPU design. Dick Sites. Google Inc. February 2015"](http://www.pdl.cmu.edu/SDI/2015/slides/DatacenterComputers.pdf): slide 27 "L1 cache size = associativity * page size – Need bigger than 4KB pages; Translation buffer at 256 x 4KB covers only 1MB of memory – Need bigger than 4KB pages; With 256GB of RAM @4KB: 64M pages – Need bigger than 4KB pages", slide 29 "Modern challenges in CPU design • Lots of memory • More prefetching in software • Bigger page size(s)" – osgx May 07 '17 at 16:12
  • That was very helpful. Thanks! – Joseph May 08 '17 at 03:14
  • More details on L1 size = associativity * page size: It allows the cache to have VIPT speed but without homonymn/synonym aliasing, so it's also PIPT. Page 11 in https://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=onur-447-spring13-lecture24-advancedcaching-afterlecture.pdf. See also https://stackoverflow.com/questions/39436982/virtually-addressed-cache – Peter Cordes Oct 01 '17 at 17:07

1 Answer


This is the same question as cluster sizes on disks.

Larger pages => Lower overhead (smaller page tables)

Smaller pages => Greater overhead

Larger pages => More wasted memory and more disk reading/writing on paging

Smaller pages => Less wasted memory and less disk reading/writing on paging
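The overhead side of that tradeoff is easy to put numbers on. A hedged back-of-envelope sketch (illustrative figures only, assuming a flat table with one entry per page) of how many page-table entries it takes to map a 4 GiB address space at different page sizes:

```python
# Back-of-envelope: page-table entries needed to map 4 GiB,
# assuming one entry per page (flat table, illustrative only).
ADDRESS_SPACE = 4 * 1024**3  # 4 GiB

for page_size in (1, 512, 4096, 2 * 1024**2):
    entries = ADDRESS_SPACE // page_size
    print(f"{page_size:>9} B pages -> {entries:>13,} entries")
```

Going from 4KB pages to 1-byte pages multiplies the number of entries by 4096; going to 2MiB pages divides it by 512.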

In ye olde days page sizes tended to be much smaller than they are today (512 bytes being common). As memory has grown, the wasted-memory problem has diminished while the overhead problem (due to more pages) has grown. Thus we have larger page sizes.

A one-byte page gets you nothing. You have to write to disk in full disk blocks (typically 512 bytes or larger), so paging single bytes would be tediously slow.

Now add in page protection and the page tables. With one-byte pages, there would be more page table overhead than usable memory.
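To make that last point concrete, here is a hedged illustration. Assuming a (typical, but assumed here) 4-byte page-table entry per page, a one-byte page needs four bytes of bookkeeping per byte of memory it maps:

```python
# Illustration: page-table overhead as a fraction of mapped memory,
# assuming 4-byte page-table entries (an assumed, typical size).
PTE_SIZE = 4  # bytes per page-table entry

for page_size in (1, 4096):
    overhead = PTE_SIZE / page_size
    print(f"{page_size} B pages: table is {overhead:.4%} of mapped memory")
```

With 1-byte pages the table is 400% of the memory it maps; with 4KB pages it drops below 0.1%.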

user3344003