
I am currently studying computer architecture and I am having difficulty with the following true/false statements. I would greatly appreciate it if anyone could clarify them and help me determine whether each is true or false:

(a) A cache using a write-through policy writes back to the main memory simultaneously as it writes to the cache.

(b) Two important properties of a virtual memory are to give the illusion of a very large memory and to give memory protection between two concurrently running programs.

(c) A unit called the memory management unit (MMU) is a common solution to perform fast translations from virtual page numbers to physical page numbers.

(d) Modern high-performance processors do not use multi-level caches because they are too expensive and give little performance benefits.

Here’s what I think:

(a) True. A cache using a write-through policy writes to the main memory and the cache simultaneously, meaning that any updates to the data are immediately reflected in both the cache and the main memory.
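To make (a) concrete for myself, here is a tiny toy model I put together (my own illustration, not real hardware; real designs use write buffers, so the two writes are not literally simultaneous) showing how a write-through cache forwards every store to main memory:

```python
# Toy model of a write-through cache: every store updates both the
# cache and the backing memory. (Illustrative only -- real hardware
# overlaps these writes with buffers.)

class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory   # backing store: dict of address -> value
        self.lines = {}        # cached copies: address -> value

    def write(self, addr, value):
        self.lines[addr] = value    # update the cache line
        self.memory[addr] = value   # ...and propagate to main memory right away

    def read(self, addr):
        if addr in self.lines:      # hit: serve from the cache
            return self.lines[addr]
        value = self.memory[addr]   # miss: fetch from memory and fill
        self.lines[addr] = value
        return value

mem = {0x10: 0}
cache = WriteThroughCache(mem)
cache.write(0x10, 42)
print(mem[0x10])   # 42 -- main memory already holds the new value
```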

(b) True. Virtual memory provides the illusion of a larger memory space than is physically available and also provides memory protection between two concurrently running programs, ensuring that one program cannot access or modify the memory space of another program.
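For (b), here is a simplified sketch of the protection part (the page size and mappings are made up): each process has its own page table, so the same virtual address lands in different physical frames, and touching an unmapped page faults instead of reaching another program's memory.

```python
# Toy per-process page tables: each process translates virtual page
# numbers independently, so identical virtual addresses map to different
# physical frames, and unmapped pages fault. (Simplified illustration.)

PAGE_SIZE = 4096

def translate(page_table, vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError(f"page fault / protection violation at {hex(vaddr)}")
    return page_table[vpn] * PAGE_SIZE + offset

page_table_a = {0: 7}   # process A: virtual page 0 -> physical frame 7
page_table_b = {0: 3}   # process B: virtual page 0 -> physical frame 3

print(hex(translate(page_table_a, 0x0040)))  # 0x7040 -- A's copy
print(hex(translate(page_table_b, 0x0040)))  # 0x3040 -- B's copy
# translate(page_table_a, 0x5000) would raise: A has no mapping for that page
```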

(c) True. The memory management unit (MMU) is responsible for translating virtual addresses generated by a program into physical addresses that can be used to access memory. This translation is done quickly and efficiently, providing fast access to memory.
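My mental model for (c) is that the fast path comes from a small cache of recent translations (a TLB) inside the MMU, with a slower page-table walk only on a miss. Here is a rough toy sketch of that idea (not how any specific CPU implements it):

```python
# Toy TLB in front of a page-table walk: recently used translations are
# answered from a small cache; only misses pay for the (slow) walk.
# (Illustration only; real TLBs are set-associative hardware structures.)

PAGE_SIZE = 4096

class SimpleMMU:
    def __init__(self, page_table):
        self.page_table = page_table  # vpn -> physical frame number
        self.tlb = {}                 # small cache of recent vpn -> pfn

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.tlb:                    # TLB hit: fast path
            pfn = self.tlb[vpn]
        else:                                  # TLB miss: walk the page table
            pfn = self.page_table[vpn]
            self.tlb[vpn] = pfn                # cache it for next time
        return pfn * PAGE_SIZE + offset

mmu = SimpleMMU({0: 5, 1: 9})
print(hex(mmu.translate(0x1004)))  # 0x9004 (miss, then cached)
print(hex(mmu.translate(0x1008)))  # 0x9008 (TLB hit this time)
```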

(d) False. Modern high-performance processors often use multi-level caches, as they provide significant performance benefits by allowing frequently used data to be stored closer to the processor. Although they can be more expensive, the benefits in terms of performance make them well worth the investment.
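For (d), the way I picture it is that a small, very fast L1 backed by a larger, slower L2 catches most accesses cheaply; below is a trivial sketch of just the lookup order (invented structure, no eviction policy, not any real CPU):

```python
# Toy two-level cache lookup: check the small/fast L1 first, then the
# larger/slower L2, then main memory, filling the levels on the way back.

class TwoLevelCache:
    def __init__(self, memory):
        self.memory = memory
        self.l1 = {}   # small and fast
        self.l2 = {}   # larger but slower

    def read(self, addr):
        if addr in self.l1:
            return self.l1[addr]          # L1 hit: fastest case
        if addr in self.l2:
            value = self.l2[addr]         # L2 hit: slower, but avoids memory
        else:
            value = self.memory[addr]     # miss in both: go to main memory
            self.l2[addr] = value
        self.l1[addr] = value             # fill L1 so the next access is fast
        return value

mem = {0x100: "data"}
caches = TwoLevelCache(mem)
caches.read(0x100)   # misses both levels, fetches from memory
caches.read(0x100)   # now an L1 hit
```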

Thank you in advance for your help!

Bryan C
    I agree that (a) is largely true, but phrasing like "simultaneously" and "immediately" are problematic, since they go to timing, and we can expect main memory to take longer to write a value than it takes the cache to accept the same write. – Erik Eidt Feb 12 '23 at 16:33
  • (d) is hilariously false, you're correct. Modern L2 and L3 caches are huge compared to L1, which needs to be super fast and multi-ported. If it's small enough, it can also be VIPT for speed while effectively being PIPT for lack of aliasing. See [Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?](https://stackoverflow.com/q/4666728). And then there's the fact that L1 and L2 are per-core private (or sometimes L2 is shared between a small group of low-power cores), while L3 is shared chip-wide (Intel) or for a larger group of cores (AMD Zen). – Peter Cordes Feb 12 '23 at 20:43
  • See also [Which cache mapping technique is used in intel core i7 processor?](https://stackoverflow.com/q/49092541) and [How can cache be that fast?](https://electronics.stackexchange.com/q/329789). For general background about modern CPU design tradeoffs, [Modern Microprocessors A 90-Minute Guide!](https://www.lighterra.com/papers/modernmicroprocessors/) is very well written. – Peter Cordes Feb 12 '23 at 20:44
  • Re: write-through caches. One notable design which *attempted* high performance with a write-through L1d cache was AMD Bulldozer-family, from about 2010 until they were finally able to replace it with the much better Zen. It had a small 4K write-combining buffer to insulate L2 from the write bandwidth of frequently-overwritten data. See [When use write-through cache policy for pages](https://stackoverflow.com/q/61129142) / [Why do L1 and L2 Cache waste space saving the same data?](https://stackoverflow.com/q/49785750) / https://www.realworldtech.com/bulldozer/8/ – Peter Cordes Feb 12 '23 at 21:39
  • (c) sounds like it's just describing a TLB, not the rest of the MMU that checks page permissions. Of course, there isn't a *separate* "MMU" in modern CPUs; it's part of load and store execution units in each core, along with special registers for its state (like a top-level page table pointer) and there's (one or more) hardware page-walk unit(s). [VIPT Cache: Connection between TLB & Cache?](https://stackoverflow.com/q/46480015) – Peter Cordes Feb 12 '23 at 21:42
