
I wonder what reliability guarantees NTFS provides for the data stored on it. For example, suppose I open a file, append to the end, then close it, and the power goes out at a random point during this operation. Could I find the file completely corrupted afterwards?

I'm asking because I just had a system lock-up and found that two of the files being appended to were completely zeroed out. That is, they were the right size, but consisted entirely of zero bytes. I thought this wasn't supposed to happen on NTFS, even when things fail.
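
For concreteness, the operation amounts to roughly this (a simplified sketch in plain Win32 calls, not my actual code):

    #include <windows.h>

    int main(void)
    {
        /* Open for appending; note there is no FlushFileBuffers call
           anywhere, so the data may still be in the cache at close time. */
        HANDLE h = CreateFileW(L"log.txt", FILE_APPEND_DATA, FILE_SHARE_READ,
                               NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        const char record[] = "another record\r\n";
        DWORD written;
        WriteFile(h, record, sizeof record - 1, &written, NULL);

        CloseHandle(h);  /* power could go out at any point in between */
        return 0;
    }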

Roman Starkov
  • I'm not aware that it gives any. But in any case, this is OT here. – Neil May 25 '11 at 18:40
  • @Neil: No, it isn’t, at least not according to the FAQ linked to at the top. What criteria are you going by? – Timwi May 25 '11 at 20:32
  • @Neil: Actually now I’m not sure whether you meant *off-topic* or *on-topic*. Maybe you should write out what you mean. – Timwi May 25 '11 at 20:35
  • I recently had a BSOD caused by an audio driver, and afterwards the Windows chkdsk utility found many corrupted files, which were not being written to at the moment of the BSOD. So I guess NTFS is not ACID at all if it allows such major corruption of the file table. If it were truly transactional, it should discard the unstable file-table state and revert to the state from before the BSOD. – JustAMartin Apr 06 '13 at 18:33

2 Answers


NTFS is a transactional file system, so it guarantees integrity, but only for the metadata (the MFT), not for the file content.
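
If you need the file content itself to be on disk before you rely on it, you have to ask for that explicitly, e.g. by flushing before closing the handle. A minimal sketch (error handling trimmed; the function name is just for illustration):

    #include <windows.h>

    /* Append and force the data to stable storage before closing.
       FlushFileBuffers is what makes the write durable; without it the
       data may still sit in the cache when the power fails. */
    BOOL AppendDurably(LPCWSTR path, const void *buf, DWORD len)
    {
        HANDLE h = CreateFileW(path, FILE_APPEND_DATA, FILE_SHARE_READ,
                               NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;

        DWORD written;
        BOOL ok = WriteFile(h, buf, len, &written, NULL)
               && FlushFileBuffers(h);  /* blocks until the device reports the data written */

        CloseHandle(h);
        return ok;
    }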

Helge Klein
  • I assume that this doesn't apply to Transactional NTFS (capitalized), introduced in Vista, which seems to make file content part of transactions? – Roman Starkov May 26 '11 at 10:31
  • @romkyns: Correct, Transactional NTFS wraps operations on the data in transactions that either complete or are rolled back, which guarantees that you always have a consistent state (see the sketch below). – Helge Klein May 31 '11 at 14:19
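
To illustrate the Transactional NTFS point from the comments above, here is a minimal sketch (assuming Vista or later; link against ktmw32.lib; error handling trimmed; names are mine). If the machine dies before CommitTransaction returns, NTFS rolls the file back to its pre-transaction contents:

    #define _WIN32_WINNT 0x0600  /* Vista+, required for the transacted APIs */
    #include <windows.h>
    #include <ktmw32.h>          /* CreateTransaction, CommitTransaction */

    #pragma comment(lib, "ktmw32.lib")

    BOOL AppendTransacted(LPCWSTR path, const void *buf, DWORD len)
    {
        HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
        if (tx == INVALID_HANDLE_VALUE)
            return FALSE;

        HANDLE h = CreateFileTransactedW(path, FILE_APPEND_DATA, 0, NULL,
                                         OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                         NULL, tx, NULL, NULL);
        BOOL ok = FALSE;
        if (h != INVALID_HANDLE_VALUE) {
            DWORD written;
            ok = WriteFile(h, buf, len, &written, NULL);
            CloseHandle(h);
        }

        /* The append becomes visible only if the commit succeeds;
           otherwise the file keeps its previous contents. */
        if (ok)
            ok = CommitTransaction(tx);
        CloseHandle(tx);
        return ok;
    }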

The short answer is that NTFS does metadata journaling, which ensures that the metadata stays valid.

Other modifications (to the body of a file) are not journaled, so they're not guaranteed.

There are file systems that journal all writes (AIX had one, if memory serves), but with them you tend to trade disk utilization for write speed. In other words, you need a lot of "free" space to get decent performance: they basically do all writes to free space, then link the new data into the right spots in the file. Afterwards they go through and clean out the garbage (i.e., free up the parts that have since been overwritten, and usually coalesce the pieces of a file together as well). This can get slow if it has to happen very often, though.
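
On NTFS, the usual way to approximate that guarantee without data journaling is to write the new content to a temporary file, flush it, and rename it over the original; the rename is a metadata operation, so it's journaled and effectively atomic. A rough sketch (names are mine, just for illustration):

    #include <windows.h>

    /* Whole-file replace: write a temp file, flush it, then atomically
       rename it over the target. A reader sees either the old contents
       or the new ones, never a half-written mixture. */
    BOOL ReplaceAtomically(LPCWSTR target, LPCWSTR temp,
                           const void *buf, DWORD len)
    {
        HANDLE h = CreateFileW(temp, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;

        DWORD written;
        BOOL ok = WriteFile(h, buf, len, &written, NULL)
               && FlushFileBuffers(h);  /* data must be on disk before the rename */
        CloseHandle(h);
        if (!ok)
            return FALSE;

        /* The rename itself is journaled metadata, hence crash-safe. */
        return MoveFileExW(temp, target,
                           MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH);
    }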

Jerry Coffin