170

My understanding is that the main difference between the two methods is that with "write-through", data is written to main memory (through the cache) immediately, while with "write-back", data is written at a "later time".

We still need to wait for the memory write at that "later time", so what is the benefit of "write-through"?

triple fault
  • 13,410
  • 8
  • 32
  • 45
  • @EricWang I think you mean `write back` has better performance? – wlnirvana Aug 07 '16 at 22:32
  • @wlnirvana Yes, you are right, it's my clerical error. I would remove it, and put in a new comment here to avoid future misleading. – Eric Aug 08 '16 at 04:31
  • 9
    Simply put, `write back` has better performance, because writing to main memory is much slower than writing to the CPU cache, and the data might be short-lived (i.e., it might change again soon, so there is no need to write the old version to memory). It's more complex, but more sophisticated; most caches in modern CPUs use this policy. – Eric Aug 08 '16 at 04:32
  • I see that an explanatory answer has been given. I advise you to look at the Write-Allocate and Write-No-Allocate policies after covering the write-back algorithm. – Caglayan DOKME Apr 07 '19 at 09:06
  • The answer to your question is that with write-through caching, when writing within the same block, only one write to main memory is needed. See my answer for details. – qwr Sep 28 '19 at 20:29

5 Answers

168

The benefit of write-through to main memory is that it simplifies the design of the computer system. With write-through, the main memory always has an up-to-date copy of the line. So when a read is done, main memory can always reply with the requested data.

If write-back is used, sometimes the up-to-date data is in a processor cache, and sometimes it is in main memory. If the data is in a processor cache, then that processor must stop main memory from replying to the read request, because the main memory might have a stale copy of the data. This is more complicated than write-through.

Also, write-through can simplify the cache coherency protocol because it doesn't need the Modify state. The Modify state records that the cache must write back the cache line before it invalidates or evicts the line. In write-through a cache line can always be invalidated without writing back since memory already has an up-to-date copy of the line.

One more thing: on a write-back architecture, software that writes to memory-mapped I/O registers must take extra steps to make sure that writes are immediately sent out of the cache. Otherwise, writes are not visible outside the core until the line is read by another processor or the line is evicted.
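As a rough illustration of the "who can answer a read" point above, here is a minimal Python sketch. The classes and the dict-as-memory model are hypothetical, not any real hardware interface; they only show that memory is always current under write-through, and can be stale under write-back until a dirty line is written back.

```python
class WriteThroughCache:
    """Every store goes to both the cache and main memory,
    so memory is always up to date and can answer any read."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}              # addr -> value

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value    # write goes through immediately

    def read(self, addr):
        # A miss can always be served by memory: it is never stale.
        return self.lines.get(addr, self.memory[addr])


class WriteBackCache:
    """Stores only touch the cache; memory may hold a stale copy
    until the dirty line is evicted and written back."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}
        self.dirty = set()

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)         # memory is now stale for this address

    def evict(self, addr):
        if addr in self.dirty:       # memory must be updated before
            self.memory[addr] = self.lines[addr]   # the line is dropped
            self.dirty.discard(addr)
        self.lines.pop(addr, None)


wt = WriteThroughCache({0x10: 0})
wb = WriteBackCache({0x10: 0})
wt.write(0x10, 7)
wb.write(0x10, 7)
print(wt.memory[0x10])   # 7 -> memory already up to date
print(wb.memory[0x10])   # 0 -> stale until the dirty line is written back
wb.evict(0x10)
print(wb.memory[0x10])   # 7
```

This is exactly the extra complexity the answer describes: under write-back, something (here the `evict` step) must intervene before memory can safely be read.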

Craig S. Anderson
  • 6,966
  • 4
  • 33
  • 46
  • 12
    For memory-mapped I/O, those addresses are typically mapped as uncached. Write through can also be used to increase reliability (e.g., if L1 only has parity protection and L2 has ECC). Write through is also more popular for smaller caches that use no-write-allocate (i.e., a write miss does not allocate the block to the cache, potentially reducing demand for L1 capacity and L2 read/L1 fill bandwidth) since much of the hardware requirement for write through is already present for such write around. –  Nov 27 '14 at 10:42
  • 1
    is it possible to check whether the caches in my core are write-back or write-through? – ArtificiallyIntelligence Nov 03 '16 at 22:04
  • 4
    It may be misleading to say that write-back is more complex because the processor must stop main memory from replying to the read request. It's more that the cache keeps track of which data is dirty (not aligned with main memory) and which is not by using "dirty bit(s)", thus making it possible to not check main memory at all. – steviesh Nov 10 '16 at 19:21
  • @Shaowu The "lshw" command shows cache capabilities like "asynchronous internal write-back". – mug896 Feb 10 '17 at 07:50
  • I still don't understand what the real steps used in write-back are, but I know it's complicated... Could you provide more resources/details about it? – Kindred Jan 08 '19 at 11:00
  • @ptr_NE - The steps used for write-back are complicated. See [https://en.wikipedia.org/wiki/MESI_protocol] for details on one instance of a write-back protocol. – Craig S. Anderson Jan 31 '19 at 03:16
  • @Craig: MESI is used between non-hierarchical caches, e.g. when two cores each have private caches. You don't need that complexity when you have a single hierarchy, like a single core with an L1 that only talks to its L2 cache, never directly to memory. (You only need to pass through forced flushes, or if you want to implement coherent DMA, then you need the outer cache to know if the inner cache might have a line cached.) But at the most basic level, you don't need MESI for a single core. If L1 has a dirty copy of a line, it doesn't matter if L2 also has an older dirty copy; it will never be read by L1. – Peter Cordes Sep 28 '19 at 23:10
  • An issue about the write-through order of operations: it seems that for complete consistency, one should write to the backing storage first, and then to the cache. This is because if the backing storage fails the write, the cache won't be written, and you won't get an inconsistency between the cache and the backing storage. Would this be right? – CMCDragonkai Jul 18 '21 at 08:15
60

Hope this article can help you: Differences between disk Cache Write-through and Write-back.

Write-through: Write is done synchronously both to the cache and to the backing store.

Write-back (or Write-behind): Writing is done only to the cache. A modified cache block is written back to the store, just before it is replaced.

Write-through: When data is updated, it is written to both the cache and the back-end storage. This mode is simple to operate but slow in data writing, because data has to be written to both the cache and the storage.

Write-back: When data is updated, it is written only to the cache. The modified data is written to the back-end storage only when data is removed from the cache. This mode has fast data write speed but data will be lost if a power failure occurs before the updated data is written to the storage.
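The power-failure point from the article can be sketched in a few lines of Python. This is a purely hypothetical model in which a "crash" simply discards the volatile cache before any eviction happens; the function name and dict model are illustrative assumptions.

```python
def crash_survivors(policy):
    """Write one value under the given policy, then simulate a power
    failure, and return what is left in the backing store."""
    backing_store = {}
    cache = {}
    cache["a"] = 1                   # application updates key "a"
    if policy == "write-through":
        backing_store["a"] = 1       # written synchronously to the store
    # write-back: the store is updated only on a later eviction/flush,
    # which never happens before the crash below
    cache.clear()                    # power failure: cache contents lost
    return backing_store

print(crash_survivors("write-through"))  # {'a': 1} - data survived
print(crash_survivors("write-back"))     # {}       - updated data lost
```

This is why write-back disk caches are typically paired with battery or capacitor backup: the only up-to-date copy may live in volatile memory.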

Shengmin Zhao
  • 601
  • 6
  • 5
  • 2
    I don't follow the explanation from the very last sentence. In a power failure, the DRAM will also lose the data regardless of write-through or write-back, so that should not be a write-back specific issue. – gustafbstrom Nov 30 '20 at 14:27
  • 2
    @gustafbstrom Not all memory is DRAM. – Tripp Kinetics Mar 17 '21 at 16:38
  • 11
    @gustafbstrom I think this explanation is from the perspective of ram/disk rather than cache/ram. However, the concept is the same. – onlycparra Apr 23 '21 at 19:32
  • 3
    @gustafbstrom Write-back is the one that is more dangerous if you lose power and have no battery backup on the cache. The thing is write-through could also lose data. I think the safest way is if you turn off all write caching and use read caching only. – Shengmin Zhao Sep 29 '21 at 09:52
  • `only when data is removed from the cache` isn't necessarily true - the write-back could happen at some later time, maybe based on LRU priority. – Tom Hale Jun 02 '22 at 05:22
  • Thanks for the info. From the article it looks like this insight is for CPU cache, does it also hold true for cache installed in other components (e.g. hard drive)? – torez233 Sep 06 '22 at 21:25
13

Let's look at this with the help of an example. Suppose we have a direct mapped cache and the write back policy is used. So we have a valid bit, a dirty bit, a tag and a data field in a cache line. Suppose we have an operation : write A ( where A is mapped to the first line of the cache).

What happens is that the data (A) from the processor gets written to the first line of the cache. The valid bit and tag bits are set. The dirty bit is set to 1.

The dirty bit simply indicates whether the cache line has been written since it was last brought into the cache!

Now suppose another operation is performed: read E (where E is also mapped to the first cache line).

Since we have a direct-mapped cache, the first line can simply be replaced by block E, which will be brought from memory. But the block last written into the line (block A) has not yet been written to memory (as indicated by the dirty bit), so the cache controller will first issue a write-back to transfer block A to memory, and then replace the line with block E by issuing a read operation to memory. The dirty bit is now set to 0.

So the write-back policy does not guarantee that the block will be the same in memory and in its associated cache line. However, whenever the line is about to be replaced, a write-back is performed first.

A write-through policy is just the opposite. According to it, the memory will always have an up-to-date copy of the data. That is, if the cache block is written, the memory will also be written accordingly. (No use of dirty bits.)
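The walkthrough above can be sketched in Python. The block names, the single-line direct-mapped cache, and the dict standing in for memory are illustrative assumptions, not real hardware state.

```python
class Line:
    """One direct-mapped cache line: valid bit, dirty bit, tag, data."""
    def __init__(self):
        self.valid = False
        self.dirty = False
        self.tag = None
        self.data = None

memory = {"A": "old A", "E": "E from memory"}
line = Line()                      # blocks A and E both map to this line
writebacks = []                    # record of write-backs issued

def write(block, data):
    # write A: data from the processor goes into the cache line
    line.valid, line.tag, line.data = True, block, data
    line.dirty = True              # line now differs from memory

def read(block):
    if line.valid and line.tag == block:
        return line.data           # hit
    if line.dirty:                 # miss on a dirty line:
        memory[line.tag] = line.data   # write block A back to memory first
        writebacks.append(line.tag)
    line.valid, line.tag, line.dirty = True, block, False
    line.data = memory[block]      # then fetch block E from memory
    return line.data

write("A", "new A")
print(memory["A"])        # 'old A' - memory is stale, dirty bit is set
read("E")                 # replaces the line, forcing a write-back of A
print(memory["A"])        # 'new A' - written back before the replacement
print(line.dirty)         # False
```

The `read("E")` call is the "read miss causes a write" case: the dirty line must be flushed before block E can take its place.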

Rajat
  • 139
  • 1
  • 3
10

Write-back and write-through describe policies when a write hit occurs, that is when the cache has the requested information. In these examples, we assume a single processor is writing to main memory with a cache.

Write-through: The information is written to the cache and memory, and the write finishes when both have finished. This has the advantage of being simpler to implement, and the main memory is always consistent (in sync) with the cache (for the uniprocessor case - if some other device modifies main memory, then this policy is not enough), and a read miss never results in writes to main memory. The obvious disadvantage is that every write hit has to do two writes, one of which accesses slower main memory.

Write-back: The information is written to a block in the cache. The modified cache block is only written to memory when it is replaced (in effect, a lazy write). A special bit for each cache block, the dirty bit, marks whether or not the cache block has been modified while in the cache. If the dirty bit is not set, the cache block is "clean" and a write miss does not have to write the block to memory.

The advantage is that writes can occur at the speed of the cache, and if writing within the same block only one write to main memory is needed (when the previous block is being replaced). The disadvantages are that this protocol is harder to implement, main memory can be not consistent (not in sync) with the cache, and reads that result in replacement may cause writes of dirty blocks to main memory.
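The "only one write to main memory" point can be made concrete with a toy count of memory writes for n store hits to the same block. This is an assumed simplification (no write misses, no timing), only meant to show the trade-off described above.

```python
def memory_writes(n_stores, policy):
    """Main-memory writes caused by n store hits to one cache block."""
    if policy == "write-through":
        return n_stores      # every write hit also writes main memory
    # write-back: the block is written once, when it is finally replaced
    return 1 if n_stores else 0

print(memory_writes(100, "write-through"))  # 100
print(memory_writes(100, "write-back"))     # 1
```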

The policies for a write miss are detailed in my first link.

These protocols don't take care of the cases with multiple processors and multiple caches, as is common in modern processors. For this, more complicated cache coherence mechanisms are required. Write-through caches have simpler protocols since a write to the cache is immediately reflected in memory.


qwr
  • 9,525
  • 5
  • 58
  • 102
2

Write-back is the more complex of the two and requires a complicated cache coherence protocol (MOESI), but it is worth it, as it makes the system fast and efficient.

The only benefit of write-through is that it makes the implementation extremely simple, and no complicated cache coherency protocol is required.

  • 2
    WT still needs a coherency protocol. A store from one core still needs to invalidate copies in other caches so they don't keep reading stale data indefinitely. Atomic RMW needs some special support. All of this is easier with WT, I think, but the required coherency is still somewhat complicated. – Peter Cordes Jun 12 '18 at 04:24
  • Or maybe you were talking about a single-core system with a cache hierarchy of L1 / L2 (and maybe more). In that case, you don't really have to use MESI/MOESI for inner caches that fetch through outer caches, unless you want to support cache-coherent DMA which can access the outer-most cache directly. But then you still need coherency for a DMA write to invalidate the inner cache. – Peter Cordes Jun 11 '19 at 23:09
  • 1
    The cache coherency protocol is only needed if there needs to be support for multiple caches/processors or something affects memory like DMA. Write-through has its advantages even for single processor systems, namely write speed. – qwr Sep 28 '19 at 20:36
  • For DMA the OS can explicitly flush the cache after I/O. Being software it is less efficient. – qwr Sep 28 '19 at 20:57