In my C++ code, I am constantly writing different values into a file. My question is whether there are any circumstances under which write or << could fail, given that the file was opened successfully. Do I need to check every single call of write or << to make sure it was carried out correctly?
3 Answers
There are too many failure reasons to list them all. Possible ones would be:
- the partition is full
- the user exceeds his disk quota
- the partition has been brutally unmounted
- the partition has been damaged (filesystem bug)
- the disk failed physically
- ...
Do I need to check every single call of write or << to make sure it was carried out correctly?
If you want your program to be resilient to failures then, definitely, yes. If you don't, it simply means the data you are writing may or may not be written, which amounts to saying you don't care about it.
Note: rather than checking the stream state after every operation (which soon becomes extremely tedious), you can set std::ostream::exceptions to your liking so that the stream throws an exception when it fails (which shouldn't be a problem, since such disk failures are quite exceptional by definition).
@ipluto: See my edit to avoid checking every single call "by hand". I believe exceptions are **the** Right Tool in this case. – syam May 17 '13 at 03:33
There are any number of reasons why a write could fail. Off the top of my head here are a few:
- The disk is full
- The disk fails
- The file is on an NFS mount and the network goes down
- The stream you're writing to (remember that an ostream isn't always a file) happens to be a pipe that closes when the downstream reader crashes
- The stream you're writing to is a TCP socket and the peer goes away
And so on.
EDIT: I know you've said that you're writing to a file; I just wanted to draw attention to the fact that your code should only care that it's writing to an ostream, which could represent any kind of stream.
The others covered situations that might result in output failure.
But:
Do I need to check every single call of write or << to make sure it was carried out correctly?
To this, I would answer "no". You could just as well check

- if the file was opened successfully, and
- if the stream is still `good()` after you wrote your data.
This depends, of course, on the type of data written, and the possibility / relative complexity of recovering from partial writes vs. re-running the application.
If you need closer control on when exactly a write failed (e.g. in order to do a graceful recovery), the ostream exceptions syam linked to are the way to go. Polling stream state after each operation would bloat the code.
+1, it can indeed make sense to delay the check if the data is not too critical (I guess being used to handle critical data has blinded me to this possibility of loose checking). – syam May 17 '13 at 03:45
@syam: See -- in the app I am working with, partial writes don't make sense, and a meaningful recovery is not possible. It's all or nothing in my case. I didn't even know about the possibility to make `ostream` belch an exception immediately, that was a nice win for me right there if I ever need it. ;-) – DevSolar May 17 '13 at 03:54
My latest projects are quite the opposite, I need to store incoming data in (almost) real-time and minimize the loss risks as much as I can (hence my eagerness to check every single write). Anyway, glad my answer was useful to you. – syam May 17 '13 at 04:06
@syam: ...so you're careful not only to set `ostream::exceptions` but disable buffering as well, I presume? I am quite at home with C's `setvbuf()` / `_IONBF` but couldn't quite figure out how to achieve unbuffered I/O with C++ streams. Would you mind tossing me that tidbit too, while we're at it? ;-) – DevSolar May 17 '13 at 06:05
I'm afraid you'll be disappointed. Basically... In C++ I just `flush` the stream to ensure it goes to the OS immediately, the bulk of the synchronization is done by the OS itself (dedicated ext4 partition mounted `data=journal,sync`). There's a NVRAM buffer between the live data and the disk writes so I can afford the `sync` mount's poor performance. All I really get is nice exceptions thanks to the `sync` mount which immediately reports *any* error (contrary to async commits that can fail later without being caught at the C++ level). No C++ magic here, sorry, only OS tweaking. ;) – syam May 17 '13 at 07:14
[Posted new question.](http://stackoverflow.com/questions/16605233/how-to-disable-buffering-on-a-stream) – DevSolar May 17 '13 at 09:02