
I was reading the MDS attack paper RIDL: Rogue In-Flight Data Load. They set pages as write-back, write-through, write-combined or uncacheable and, with different experiments, determine that the Line Fill Buffer is the cause of the micro-architectural leaks.


On a tangent: I was aware that memory can be uncacheable, but I assumed that cacheable data was always cached in a write-back cache, i.e. I assumed that the L1, L2 and LLC were always write-back caches.

I read up on the differences between write-back and write-through caches in my Computer Architecture book. It says:

Write-through caches are simpler to implement and can use a write buffer that works independently of the cache to update memory. Furthermore, read misses are less expensive because they do not trigger a memory write. On the other hand, write-back caches result in fewer transfers, which allows more bandwidth to memory for I/O devices that perform DMA. Further, reducing the number of transfers becomes increasingly important as we move down the hierarchy and the transfer times increase. In general, caches further down the hierarchy are more likely to use write-back than write-through.

So a write-through cache is simpler to implement. I can see how that can be an advantage. But if the caching policy is settable via the page table attributes, then there can't be an implementation advantage: every cache needs to be able to work in either write-back or write-through mode.

Questions

  1. Can every cache (L1, L2, LLC) work in either write-back or write-through mode? So if the page attribute is set to write-through, then they all will be write-through?
  2. Write combining is useful for GPU memory; Uncacheable is good when accessing hardware registers. When should a page be set to write-through? What are the advantages to that?
  3. Are there any write-through caches (if it really is a property of the hardware and not just something that is controlled by the page table attributes), or is the trend that all caches are created as write-back to reduce traffic?
Daniel Näslund

1 Answer


Can every cache (L1, L2, LLC) work in either write-back or write-through mode?

In most x86 microarchitectures, yes, all the data / unified caches are (capable of) write-back and used in that mode for all normal DRAM. Which cache mapping technique is used in intel core i7 processor? has some details and links. Unless otherwise specified, the default assumption by anyone talking about x86 is that DRAM pages will be WB.

AMD Bulldozer made the unconventional choice to use a write-through L1d with a small 4k write-combining buffer between it and L2 (https://www.realworldtech.com/bulldozer/8/). This has many disadvantages and is, I think, widely regarded (in hindsight) as one of several weaknesses or even design mistakes of the Bulldozer family (which AMD fixed for Zen). Note also that Bulldozer was an experiment in CMT instead of SMT (two weak integer cores sharing an FPU/SIMD unit, each with a separate L1d cache, and sharing an L2 cache); https://www.realworldtech.com/bulldozer/3/ shows the system architecture.

But of course Bulldozer L2 and L3 caches were still WB, the architects weren't insane. WB caching is essential to reduce bandwidth demands for shared LLC and memory. And even the write-through L1d needed a write-combining buffer to allow L2 cache to be larger and slower, thus serving its purpose of sometimes hitting when L1d misses. See also Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

Write-through caching can simplify a design (especially of a single-core system), but generally CPUs moved beyond that decades ago. (Write-back vs Write-Through caching?). IIRC, some non-CPU workloads sometimes benefit from write-through caching, especially without write-allocate so writes don't pollute cache. x86 has NT stores to avoid that problem.
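
Purely as an illustration (not something from the question or the RIDL paper): a minimal sketch of what an NT-store fill loop can look like in C with SSE2 intrinsics. The function name and the alignment/size requirements are my own assumptions; `_mm_stream_si128` is the intrinsic form of `movntdq`, a cache-bypassing store.

```c
#include <emmintrin.h>  /* SSE2 intrinsics: _mm_stream_si128, _mm_set1_epi32 */
#include <stddef.h>
#include <stdint.h>

/* Fill a buffer with non-temporal (cache-bypassing) stores, so a large
 * one-off write doesn't evict useful data from the caches.
 * Assumes dst is 16-byte aligned and len is a multiple of 16 bytes. */
static void fill_nt(void *dst, int32_t value, size_t len)
{
    __m128i v = _mm_set1_epi32(value);
    char *p = (char *)dst;
    for (size_t i = 0; i < len; i += 16)
        _mm_stream_si128((__m128i *)(p + i), v);  /* compiles to movntdq */
    _mm_sfence();  /* order the NT stores before any later flag/pointer store */
}
```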

So if the page attribute is set to write-through, then they all will be write-through?

Yes, every store has to go all the way to DRAM in a page that's marked WT.

The caches are optimized for WB because that's what everyone uses, but hopefully do support passing on the line to outer caches without evicting from L1d. (So WT doesn't necessarily turn stores into something like movntps cache-bypassing / evicting stores. But check on that; apparently on some CPUs, like Pentium Pro family at least, a WT store hit in L1 updates the line, but a WT hit in L2 evicts the line instead of bringing it in to L1d.)

When should a page be set to write-through? What are the advantages to that?

Basically never; (almost?) all CPU workloads do best with WB memory.

OSes don't even bother to make it easy (or possible?) for user-space to allocate WC or WT DRAM pages. (Although that certainly doesn't prove they're never useful.) e.g. on CPU cache inhibition, I found a link about a Linux patch that never made it into the mainline kernel that added the possibility of mapping a page WT.

WB, WC, and UC are common for normal DRAM, device memory (especially GPU), and MMIO respectively.
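
As a hedged example of the WC case: on Linux, the PCI sysfs interface can expose a `resourceN_wc` file for prefetchable BARs (availability depends on the architecture and kernel config), which lets user space mmap device memory as write-combining. The device path below is a placeholder, not a real device.

```c
/* Sketch only: map one page of a prefetchable PCI BAR write-combining
 * from user space via sysfs, assuming the kernel exposes resource0_wc
 * for this device. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder BDF; substitute your own GPU/NIC device. */
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource0_wc";
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;  /* one page of the BAR */
    volatile uint32_t *bar = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    bar[0] = 0xdeadbeef;  /* stores to this mapping go through WC buffers */

    munmap((void *)bar, len);
    close(fd);
    return 0;
}
```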

I have seen at least one paper that benchmarked WT vs. WB vs. UC vs. WC for some workload (googled but didn't find it, sorry). And people testing obscure x86 stuff will sometimes include it for completeness. e.g. The Microarchitecture Behind Meltdown is a good article in general (and related to what you're reading up on).

One of the few advantages of WT is that stores end up in L3 promptly where loads from other cores can hit. This may possibly be worth the extra cost for every store to that page, especially if you're careful to manually combine your writes into one large 32-byte AVX store. (Or 64-byte AVX512 full-line write.) And of course only use that page for shared data.
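
For concreteness, a sketch of what "manually combining your writes" could look like with AVX intrinsics. The helper name and alignment assumptions are mine, and whether the hardware actually emits a single combined write-through transaction for an aligned 32-byte store is itself an assumption, not something documented here.

```c
#include <immintrin.h>  /* AVX intrinsics: _mm256_loadu_ps, _mm256_store_ps */

/* Hypothetical helper: publish 32 bytes of shared data with one aligned
 * 32-byte store instead of eight separate 4-byte stores, each of which
 * would go through to memory on a WT page.
 * Assumes 'slot' is 32-byte aligned (e.g. from aligned_alloc(32, 32)). */
static void publish8(float *slot, const float vals[8])
{
    __m256 v = _mm256_loadu_ps(vals);  /* gather the eight floats into one register */
    _mm256_store_ps(slot, v);          /* single 32-byte store to the WT page */
}
```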

I haven't seen anyone ever recommend doing this, though, and it's not something I've tried. Probably because the extra DRAM bandwidth for writing through L3 as well isn't worth the benefit for most use-cases. But probably also because you might have to write a kernel module to get a page mapped that way.
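
To make that concrete, a toy sketch of the kernel side using `ioremap_wt` (the function mentioned in the comments below). The physical address is a placeholder; `ioremap` of ordinary system RAM isn't allowed on x86, so something like this is only plausible for a reserved region or device memory, not an arbitrary DRAM page.

```c
/* Hedged sketch, not a working driver: map a physical range write-through
 * with ioremap_wt() from a kernel module. */
#include <linux/module.h>
#include <linux/io.h>

#define WT_PHYS_BASE 0xfd000000UL   /* placeholder physical address */
#define WT_SIZE      0x1000UL       /* one page */

static void __iomem *wt_va;

static int __init wt_demo_init(void)
{
    wt_va = ioremap_wt(WT_PHYS_BASE, WT_SIZE);  /* requests the WT memory type */
    if (!wt_va)
        return -ENOMEM;
    writel(0x12345678, wt_va);  /* store reaches memory promptly, line can stay cached */
    return 0;
}

static void __exit wt_demo_exit(void)
{
    iounmap(wt_va);
}

module_init(wt_demo_init);
module_exit(wt_demo_exit);
MODULE_LICENSE("GPL");
```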

And it might not even work quite this way, if CPUs evict from outer caches on an L2 or L3 hit for a WT store, like @Lewis comments that PPro is documented to do.

So maybe I'm wrong about the purpose of WT, and it's intended (or at least usable) for device-memory use-cases, like maybe parts of video RAM that GPU won't modify.

Peter Cordes
  • The [ioremap_wt](https://elixir.bootlin.com/linux/latest/ident/ioremap_wt) function used for mapping pages as write-through is only used by some old fbdev drivers, and there's a good chance those are copy-pastes. That supports your claim that WT should almost never be used. An amazing answer. I know you're not supposed to use the comments for thank-yous, but thank you anyway! – Daniel Näslund Apr 10 '20 at 05:32
  • Are you sure? From the Pentium Pro developer's manual: "A WT line is cacheable but is not fetched into the cache on a write miss. A write to a WT line goes out on the bus. For the Pentium Pro processor, a WT hit to the L1 cache updates the L1 cache. A WT hit to L2 cache invalidates the L2 cache." I didn't know that; it's a slightly weaker version of WP. – Lewis Kelsey Apr 25 '21 at 10:24
  • @LewisKelsey: No, I'm not sure, I was making some big assumptions about how WT worked. – Peter Cordes Apr 25 '21 at 10:26
  • That's weird. Apparently I got it right [here](https://stackoverflow.com/a/61750646/7194773) but I don't recall knowing that claim about WT. So I think WP / UC invalidates L1/L2/L3 if there is a line in cache (if there is a read in there for WP, or if the attribute previously wasn't UC but was changed to it) and WT only invalidates L2/L3. I can't think of a point in WT because a DMA read from RAM, I'm pretty sure, snoops the cache anyway. So WT basically ensures that anything not in L1 will be in RAM; perhaps it makes DMA reads from RAM faster as the L2 cache doesn't need to be written back to L3. – Lewis Kelsey Apr 25 '21 at 11:05
  • @LewisKelsey: Yes, x86 DMA is guaranteed to be cache-coherent. I'm not sure if there's any point to WT; in my edit to add caveats, I suggested possibly device memory (like parts of video RAM) that the device only reads: you want CPU writes to be visible to devices, but reads to hit in cache. OTOH, I'm not sure Intel CPUs support any kind of cacheable device memory at all. Dr. Bandwidth commented that he'd experimented and CPUs lock up if you even try to map PCIe MMIO regions as WB, I think, or maybe even WT. Or maybe it was memory regions, not just MMIO; can't find the comment now. – Peter Cordes Apr 25 '21 at 11:14
  • I found the original source for what I said about WT on that answer: https://patents.google.com/patent/US6223258B1/en. The Windows physical page (PFN) database only supports the UC, WC and WB memory types anyway. I'm yet to think of a legitimate use case for WT. I think WP could be used for a DMA buffer that the CPU only ever writes to, using that logic; WT seems like an obstacle in the way of achieving that. I think you can also only use WP when a write makes a physical change to a device, or when a value read will not change, or when a read will not change physical device state. – Lewis Kelsey Apr 25 '21 at 11:19
  • The use case for WT is when you need to make sure that the write occurs ASAP (and doesn't just sit in the cache for ages until being evicted). For this case, the alternatives are UC (significantly worse if reads can come from cache), "WB then CLFLUSH" (also significantly worse), and WC (also significantly worse). Of course this use case is/was very rare (but I can imagine that changing soon - consider power failures for non-volatile RAM). – Brendan Apr 25 '21 at 12:07
  • @Brendan: efficient support for `clwb` (write-back *without* eviction) in Ice Lake is probably good enough for a lot of cases where you'd otherwise consider using WT. (It runs on SKX, but only doing the same thing as `clflushopt`, not leaving the line hot in cache.) Still worse for code size, especially if you do it after *every* write. But currently yeah there might be something to gain from WT on Cascade Lake with persistent memory. – Peter Cordes Apr 25 '21 at 12:14
  • @Brendan WT is the same as WP (it doesn't pollute L2/L3) except it updates rather than evicts L1 while performing the uncached write. So it's basically WP but with better read performance if it's just been read. Same DMA performance as WP. Faster to write back than WB + clflush. Better DMA performance than WB without clflush. @Peter it shouldn't lock up because that's what CAR is, WB device memory, but it could be related to what I said about the SAD on INVD. It might not work on multi-socket because it's supposed to send MMIO to the target node's IIO and cacheable stuff to the target home agent. – Lewis Kelsey Apr 25 '21 at 15:53