
I was reading a book that talks about virtual memory:

> Intel Core i7 supports a 48-bit (256 TB) virtual address space and a 52-bit (4 PB) physical address space.

Below are my questions:

Q1: Since we mostly use 64-bit machines, how come the virtual address is only 48 bits? Shouldn't virtual memory be 64 bits as well?
(Editor's note: this part is an exact duplicate of Why do x86-64 systems have only a 48 bit virtual address space?)

(Editor's note: this part is an exact duplicate of Why in x86-64 the virtual address are 4 bits shorter than physical (48 bits vs. 52 long)?)
Q2: How come the address space of physical memory (52 bits) is greater than virtual memory's (48 bits)? Shouldn't virtual memory's address space be greater than physical memory's?

(Editor's note: this part is a duplicate of several questions, including Is a process' page table mapped to Kernel address space? and Where is page table located?)
Q3: My understanding is that all page tables are stored in kernel memory, which is invisible to the user. Is my understanding correct?

Peter Cordes
  • I edited the tags of your question because it is not related to Linux but to the x86 CPU architecture. – Martin Rosenau Sep 20 '20 at 05:22
  • Please [edit](https://stackoverflow.com/posts/63975447/edit) your question to *explain* what kind of Linux software you have in mind. – Basile Starynkevitch Sep 20 '20 at 05:39
  • @MartinRosenau: do you know (in 2020) any computer with more than 4 terabytes of RAM which does not run Linux or at least [FreeBSD](http://freebsd.org/) ?? If you do, please contact me by email to `basile@starynkevitch.net` – Basile Starynkevitch Sep 20 '20 at 05:40
  • None of Intel’s CPUs support 52 address bits. In fact they all support fewer physical address bits than virtual address bits, just as you suggest. You can find out the number of physical address bits using CPUID with EAX = 80000008H. – prl Sep 20 '20 at 06:26 (Editor's note: see the CPUID sketch after these comments.)
  • By the time Intel processors support 52 physical address bits, they will surely also support [5-level paging](https://en.wikipedia.org/wiki/Intel_5-level_paging) with 57-bit virtual addresses. – prl Sep 20 '20 at 06:39
  • [PAE](https://en.wikipedia.org/wiki/Physical_Address_Extension) in x86 also gives a bigger physical address space (36-bit) than virtual address space (32-bit) – phuclv Sep 21 '20 at 16:09
  • Multiply posted at https://unix.stackexchange.com/q/610309/5132. – JdeBP Oct 12 '20 at 07:43
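
(Editor's note: prl's comment about CPUID leaf 80000008H can be checked directly. The sketch below assumes x86-64 with GCC or Clang, which provide the `__get_cpuid` helper in `<cpuid.h>`; EAX[7:0] reports physical address bits and EAX[15:8] reports linear/virtual address bits.)

```c
/* Query the CPU's supported physical and virtual address widths
 * via CPUID leaf 0x80000008 (x86-64, GCC/Clang only). */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
        return 1;
    }
    unsigned phys_bits = eax & 0xFF;        /* EAX[7:0]  = physical address bits */
    unsigned virt_bits = (eax >> 8) & 0xFF; /* EAX[15:8] = linear (virtual) address bits */
    printf("physical address bits: %u\n", phys_bits);
    printf("virtual address bits:  %u\n", virt_bits);
    return 0;
}
```

On a typical desktop this prints something like 39 or 46 physical bits and 48 virtual bits, matching prl's point that shipping CPUs implement fewer physical than virtual address bits.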

2 Answers


It is economics!

  1. The cost of building a machine large enough to accommodate sufficient RAM to support 64 bits of virtual addressing is prohibitive. (Probably, even for the NSA!) Therefore we can conclude that the demand for chipsets that will actually support this is minimal.

  2. Each bit of physical address space corresponds to a pin on the CPU chip, silicon to support it, a wire on the PC board ... and a pin on each memory DIMM. These all add to the cost of manufacture, directly or indirectly.

  3. It does not make business sense to ask customers to pay a premium for functionality that 99.999+% of them don't need and cannot possibly use. You do that and your competitors will be able to beat you on price / performance metrics.


> Q1: Since we mostly use 64-bit machines, how come the virtual address is only 48 bits? Shouldn't virtual memory be 64 bits as well?

Since you cannot afford enough RAM to effectively use 64 bits of virtual space, this is moot.
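
(Editor's note: a concrete consequence of the 48-bit limit is that x86-64 requires "canonical" addresses: bits 63:48 must be copies of bit 47, so the usable space is two 47-bit halves. A minimal check, assuming the 48-bit 4-level scheme rather than the newer 57-bit 5-level paging:)

```c
/* Sketch: test whether a 64-bit value is a canonical x86-64 address
 * under 48-bit virtual addressing (bits 63:48 must replicate bit 47). */
#include <stdbool.h>
#include <stdint.h>

static bool is_canonical_48(uint64_t va) {
    /* Shift the low 48 bits to the top, then arithmetic-shift back,
     * which sign-extends bit 47; canonical addresses are unchanged. */
    int64_t extended = (int64_t)(va << 16) >> 16;
    return (uint64_t)extended == va;
}
```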

> Q2: How come the address space of physical memory (52 bits) is greater than virtual memory's (48 bits)? Shouldn't virtual memory's address space be greater than physical memory's?

Not sure about this one. You would need to talk to the designers about that. But it is moot for the same reasons as above.
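
(Editor's note: the duplicate linked in the comments gives the designers' actual reason: the 48-bit figure falls out of the page-table format. Each 4 KiB table holds 512 entries, so each of the four levels translates 9 bits, plus a 12-bit page offset: 4 × 9 + 12 = 48. A sketch of how an address splits, using a hypothetical example value:)

```c
/* Sketch: decompose a 48-bit x86-64 virtual address under 4-level
 * paging into four 9-bit table indices and a 12-bit page offset. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t va = 0x00007f1234567abcULL; /* hypothetical user-space address */
    unsigned pml4 = (va >> 39) & 0x1FF;  /* level 4 index (9 bits) */
    unsigned pdpt = (va >> 30) & 0x1FF;  /* level 3 index (9 bits) */
    unsigned pd   = (va >> 21) & 0x1FF;  /* level 2 index (9 bits) */
    unsigned pt   = (va >> 12) & 0x1FF;  /* level 1 index (9 bits) */
    unsigned off  = va & 0xFFF;          /* offset within the 4 KiB page */
    printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%03x\n",
           pml4, pdpt, pd, pt, off);
    return 0;
}
```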

> Q3: My understanding is that all page tables are stored in kernel memory, which is invisible to the user. Is my understanding correct?

Yes, that is correct (in a well-designed multi-user OS), though I don't see how that relates to the rest of the question.
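
(Editor's note: on Linux specifically, the page tables themselves stay in kernel memory, but the kernel exposes a read-only view of the resulting translations through /proc/self/pagemap, one 64-bit entry per virtual page. A minimal sketch; note that the physical frame number reads as zero without CAP_SYS_ADMIN on modern kernels:)

```c
/* Sketch (Linux): look up the pagemap entry for one of our own pages.
 * Bit 63 = page present; bits 0..54 = physical page frame number. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int dummy = 42; /* any user-space object whose page we look up */
    uint64_t va = (uintptr_t)&dummy;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t entry;
    off_t pos = (off_t)(va / 4096) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), pos) != (ssize_t)sizeof(entry)) {
        perror("pread");
        close(fd);
        return 1;
    }
    close(fd);

    printf("present=%d pfn=0x%llx\n", (int)(entry >> 63),
           (unsigned long long)(entry & ((1ULL << 55) - 1)));
    return 0;
}
```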

Stephen C
  • About Q1) The size of the RAM is related to the physical address space, not the virtual address space. The i80386 already had a 48-bit (45 bits usable) virtual address space in 1985, although it could only control 32 bits of physical addresses. – Martin Rosenau Sep 20 '20 at 05:28
  • That is true. But that's not the point I am making. If you try to use significantly more virtual address space than you have RAM to accommodate, you end up thrashing. And at that point you may as well use a conventional database. OK there are niche applications which are simplified by scattering data across a huge virtual address space, but beware of thrashing! – Stephen C Sep 20 '20 at 05:34
  • Bits of physical address space also cost space in cache tags, and in TLBs. So there's a cost inside every core, and in every cache, which scales with core count, not just per package pins. And it's a cost in power and thus performance. Re: other questions, see the linked duplicates of the question; there are exact duplicates for all 3 of the OP's questions so this shows no research effort. [Why in x86-64 the virtual address are 4 bits shorter than physical (48 bits vs. 52 long)?](https://stackoverflow.com/q/46509152) has an interesting answer related to page-table format / levels. – Peter Cordes Sep 20 '20 at 13:30
  • *and a pin on each memory DIMM* - Any single DIMM only needs enough address lines to address that one DIMM, plus a Chip Select pin that can be wired separately for each DIMM on the same channel. A system with multiple memory channels (on each socket...) can have more physical RAM without forcing each DIMM to have more pins. e.g. 2 DIMMs per channel x 6 channels per socket x 2 or 4 sockets per system makes 1.5TiB of memory possible on a dual-socket Cascade Lake Xeon system with 24x 64GiB DIMMs: https://www.thomas-krenn.com/en/wiki/Optimize_memory_performance_of_Intel_Xeon_Scalable_systems – Peter Cordes Sep 20 '20 at 15:03
  • Max supported capacity for DDR4 means the DDR4 standard must specify that many physical pins to etch on the board, but smaller DIMMs only have to wire up the ones actually used. Lower-capacity DIMMs can leave the upper address lines unused. e.g. a 2GB DIMM can use A0-A13 for row addresses, plus 4 bank-group pins https://www.systemverilog.io/ddr4-basics. (Remember row and column are sent separately in DDR SDRAM https://www.akkadia.org/drepper/cpumemory.pdf, but apparently column addresses are always 10-bit, not half the total address width, for a fixed DRAM "page" size). – Peter Cordes Sep 20 '20 at 15:08

Virtual addresses are used mainly for process isolation these days. With virtual memory management we can also use hard-disk storage as swap space, but it is painfully slow and not the main point.

https://en.wikipedia.org/wiki/Virtual_address_space

If we lived in a world where the hard disk were faster than RAM, we could have more virtual address bits than physical address bits.

But if we can't back a 64-bit address space with actual RAM, more virtual address bits are meaningless.

I don't know exactly, but the 4 bits saved (52 − 48 = 4) may be used for address translation or TLB access. So more virtual address bits are practically impossible.

9dan