Some landmark studies a number of years ago showed that silent corruption in large datasets was far more widespread than previously anticipated (and I suppose today you'd say it's more widespread than commonly realized).
Assume that the application and OS wrote a sector and had time to flush everything, with no crash, abnormal shutdown, or software bug that would cause wrong data to be saved.
Later, the sector is read back and the HDD reports no read error, yet it contains the wrong data.
Since the HDD's data encoding includes error correction codes, I would assume that any mysterious change to a bit would generally be noticed by that checking. Even if the check is not strong enough to catch everything, so some errors slip through, there should still be vastly more detected errors warning you that something is wrong with the drive. But that doesn't seem to happen: apparently data is found to be wrong with no symptoms at all.
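Just to make the "some errors slip through" idea concrete, here is a toy sketch (nothing like a drive's real ECC, which is far stronger): a weak per-block check can fail to notice certain error patterns, so the data changes but the check still passes. The `weak_checksum` function is entirely hypothetical.

```python
# Toy illustration (NOT real drive ECC): a weak per-sector check can miss
# certain error patterns, so data can change with no error reported.

def weak_checksum(data: bytes) -> int:
    """Hypothetical stand-in for a check code: XOR of all bytes."""
    c = 0
    for b in data:
        c ^= b
    return c

sector = bytearray(b"A" * 512)
stored_check = weak_checksum(sector)

# Flip the same bit position in two different bytes: the two flips cancel
# out in the XOR, so the checksum still matches the stored value.
sector[10] ^= 0x01
sector[20] ^= 0x01

assert weak_checksum(sector) == stored_check   # corruption goes undetected
print("data changed, but the weak check still passes")
```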
How can that happen?
My experience on a desktop PC is that files that were once good are sometimes later found to be bad, but perhaps that is due to unnoticed problems during writing, either while relocating sectors or in the file system's tracking of the data. The point is, errors may be introduced at write time: the data is corrupted inside the HDD (or RAID hardware), so the wrong data is written with error correction codes that match it. If that is the (only) cause, then a single verify should be enough to show that it was written correctly.
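By "a single verify" I mean something like the sketch below: write, flush, read back, and compare hashes. The path and data are made up, and in practice the read-back could be served from the OS page cache rather than the platter, so a real verify would need to bypass or drop caches first.

```python
# Sketch of a single write-time verify: write, flush, read back, compare hashes.
# Caveat: a plain read-back may come from the OS page cache, not the platter.
import hashlib
import os

def write_and_verify(path: str, data: bytes) -> bool:
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # ask the OS to push the data to the device
    with open(path, "rb") as f:
        readback = f.read()
    return hashlib.sha256(readback).digest() == hashlib.sha256(data).digest()

# Hypothetical usage:
# ok = write_and_verify("/tmp/sector_test.bin", os.urandom(512))
# print("write verified" if ok else "mismatch right after writing")
```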
Or does data go bad after it has been seen to be OK on the disk? That is, verify once and all is fine; verify later and an error is found, even though that sector has not been written in the interim. I think this is what is meant, since write-time errors would be easy to deal with by verifying right after flushing.
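That "verify once, verify again later" scenario is the kind of thing a periodic scrub would catch: record checksums when everything reads fine, then re-check the same untouched files later. A rough sketch, with a made-up manifest name and file list:

```python
# Sketch of periodic re-verification: record SHA-256 checksums once, then
# re-check the same files later to spot data that changed while untouched.
import hashlib
import json

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(paths, manifest="checksums.json"):
    with open(manifest, "w") as f:
        json.dump({p: sha256_of(p) for p in paths}, f, indent=2)

def recheck(manifest="checksums.json"):
    with open(manifest) as f:
        expected = json.load(f)
    for path, digest in expected.items():
        if sha256_of(path) != digest:
            print(f"silent change detected: {path}")

# record(["photo1.jpg", "archive.tar"])   # first pass: everything reads fine
# recheck()                               # months later: flags files that rotted
```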
So how can that happen without tripping the error correction codes that go with the data?