Write-back and write-through describe policies for handling a write hit, that is, when the data being written is already present in the cache. In these examples, we assume a single processor writing to main memory through a cache.
Write-through: The information is written to both the cache and main memory, and the write completes only when both have finished. The advantages are that it is simpler to implement, main memory is always consistent (in sync) with the cache (in the uniprocessor case - if some other device modifies main memory, this policy alone is not enough), and a read miss never results in writes to main memory. The obvious disadvantage is that every write hit performs two writes, one of which goes to the slower main memory.
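As a rough illustration, here is a minimal sketch of a write-through write hit on a toy direct-mapped, one-word-per-line cache. All names (`cache_line`, `wt_write`, `mem`) and the cache geometry are made up for the example, not taken from any real hardware or library.

```c
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS   1024
#define CACHE_LINES 16

static uint32_t mem[MEM_WORDS];          /* simulated main memory */

typedef struct {
    int      valid;
    uint32_t tag;
    uint32_t data;                       /* one word per line, for simplicity */
} cache_line;

static cache_line cache[CACHE_LINES];

/* Write-through, write hit: update the cache line AND main memory.
 * The write is only considered complete after both updates finish. */
static void wt_write(uint32_t addr, uint32_t value)
{
    uint32_t index = addr % CACHE_LINES;
    uint32_t tag   = addr / CACHE_LINES;
    cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {   /* write hit */
        line->data = value;                  /* fast write to the cache ...   */
        mem[addr]  = value;                  /* ... plus a slow write to memory */
    } else {
        /* Write miss: allocate vs. no-allocate is a separate policy decision;
         * here we simply write around the cache. */
        mem[addr] = value;
    }
}

int main(void)
{
    /* Warm the cache with a pretend read, then hit it with a write. */
    cache[5] = (cache_line){ .valid = 1, .tag = 0, .data = mem[5] };
    wt_write(5, 42);
    printf("cache=%u mem=%u (always in sync)\n", cache[5].data, mem[5]);
    return 0;
}
```

Note how every write hit touches both levels, which is exactly the "two writes" cost described above.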
Write-back: The information is written only to a block in the cache. The modified cache block is written to memory just before it is replaced (in effect, a lazy write). A special bit for each cache block, the dirty bit, marks whether the block has been modified while in the cache. If the dirty bit is not set, the block is "clean", and a miss that replaces it does not have to write it back to memory.
The advantages are that writes occur at the speed of the cache, and repeated writes to the same block require only one write to main memory (when that block is eventually replaced). The disadvantages are that this policy is harder to implement, main memory can be inconsistent (out of sync) with the cache, and reads that cause a replacement may force dirty blocks to be written back to main memory.
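Here is a companion sketch of the write-back policy with a dirty bit, under the same toy assumptions as above (direct-mapped, one word per line); the function names `wb_write` and `wb_evict` are again illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS   1024
#define CACHE_LINES 16

static uint32_t mem[MEM_WORDS];

typedef struct {
    int      valid;
    int      dirty;                      /* set when the line is modified in the cache */
    uint32_t tag;
    uint32_t data;
} cache_line;

static cache_line cache[CACHE_LINES];

/* Write-back, write hit: update only the cache and mark the line dirty.
 * Main memory is left stale until the line is evicted. */
static void wb_write(uint32_t addr, uint32_t value)
{
    uint32_t index = addr % CACHE_LINES;
    uint32_t tag   = addr / CACHE_LINES;
    cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {   /* write hit */
        line->data  = value;                 /* fast: cache only */
        line->dirty = 1;                     /* remember memory is now stale */
    }
    /* Write-miss handling (allocate or not) omitted; see the write-miss link. */
}

/* Eviction: the lazy write happens here, and only if the line is dirty.
 * A clean victim can simply be overwritten. */
static void wb_evict(uint32_t index, uint32_t new_addr)
{
    cache_line *line = &cache[index];

    if (line->valid && line->dirty) {
        uint32_t old_addr = line->tag * CACHE_LINES + index;
        mem[old_addr] = line->data;          /* one memory write covers many cache writes */
    }
    *line = (cache_line){ .valid = 1, .dirty = 0,
                          .tag = new_addr / CACHE_LINES,
                          .data = mem[new_addr] };
}

int main(void)
{
    cache[5] = (cache_line){ .valid = 1, .tag = 0, .data = mem[5] };
    wb_write(5, 1); wb_write(5, 2); wb_write(5, 3);  /* three hits, zero memory writes */
    printf("before eviction: cache=%u mem=%u\n", cache[5].data, mem[5]);
    wb_evict(5, 5 + CACHE_LINES);                    /* conflicting address forces write-back */
    printf("after eviction:  mem=%u\n", mem[5]);
    return 0;
}
```

The three consecutive writes are absorbed by the cache and cost a single memory write at eviction time, which is the amortization advantage described above; between the writes and the eviction, `mem[5]` is visibly out of sync with the cache.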
The policies for a write miss are detailed in my first link.
These policies alone don't handle systems with multiple processors and multiple caches, which are common in modern hardware. For that, more elaborate cache coherence mechanisms are required. Write-through caches lead to simpler coherence protocols, since a write to the cache is immediately reflected in memory.
Good resources: