It's simple to generate a write error in a test suite by writing to /dev/full. Is there a good technique to generate a read error? I'm currently using LD_PRELOAD to override read, but that seems too complicated and non-portable (not that /dev/full is portable...).
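For reference, a minimal sketch of the baseline technique mentioned above, assuming Linux (where `/dev/full` exists); it uses `write(2)` directly rather than stdio so buffering doesn't delay the error:

```c
/* Writing to /dev/full fails with ENOSPC on Linux. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/full", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, "x", 1) < 0)
        printf("write failed: %s\n", strerror(errno)); /* expect ENOSPC */
    close(fd);
    return 0;
}
```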

- Just a thought, but what about reading from a file with perms set to 000? – David R Tribble Jun 27 '12 at 16:20
- @Loadmaster That will cause an `open` error rather than a `read` error. – William Pursell Jun 27 '12 at 16:21
- @jpe I don't much care what the error is; I want to open the file successfully but get a read error. I do not want to simply invoke the function which calls read with an invalid pointer, but would like to run the program in full under conditions which trigger a read error. A simple unit test with an invalid pointer does not test the program under actual conditions. – William Pursell Jun 28 '12 at 14:13
- Same question on [Server Fault](http://serverfault.com/questions/498900/intentionally-cause-an-i-o-error-in-linux) and on [Unix and Linux](http://unix.stackexchange.com/questions/77492/special-file-that-causes-i-o-error). – Gilles 'SO- stop being evil' May 29 '13 at 21:30
4 Answers
Besides reading from a directory (as mentioned in a previous answer), you can try to read /proc/self/mem to get an error (this should get you an EIO on Linux). For an explanation, please see: https://unix.stackexchange.com/a/6302
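A minimal sketch of this trick, assuming Linux and that nothing is mapped at address 0 (the offset a freshly opened descriptor reads from):

```c
/* open() on /proc/self/mem succeeds, but read() at offset 0 fails
   with EIO because that address is not mapped in the process. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/proc/self/mem", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    char buf[16];
    if (read(fd, buf, sizeof buf) < 0)
        printf("read failed: %s\n", strerror(errno)); /* expect EIO */
    close(fd);
    return 0;
}
```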
- Nice trick, but beware that `/proc/self/mem` is only good if you read from the beginning of the file. If the program seeks first, it might accidentally end up in a mapped zone. – Gilles 'SO- stop being evil' May 29 '13 at 21:31
- This seems not to generate an error when redirecting /proc/self/mem to stdin in the bash shell, but opening /proc/self/mem with fopen() does generate one. It is not portable for two reasons: on systems that don't have a /proc file system it will fail, and on systems where NULL is mapped (AmigaOS, for example) it will instead read successfully (until unmapped memory is hit). Still, this is probably the most convenient solution. (Upvoted) – Christian Hujer Apr 25 '22 at 02:31
An approach that works on all major unices would be to implement a small FUSE filesystem. EIO is the default error code when your userspace filesystem driver does something wrong, so it's easy to achieve. Both the Perl and Python bindings come with examples to get started; you can quickly write a filesystem that mostly mirrors existing files but injects an EIO in carefully chosen places.
There's an existing such filesystem: petardfs (article); I don't know how well it works out of the box.
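A minimal sketch of such a filesystem using the libfuse 2.x C API (the file name `/broken` and all details here are illustrative assumptions, not petardfs): it exposes a single regular file whose `open` succeeds but whose every `read` returns EIO.

```c
/* eiofs.c - build: gcc eiofs.c $(pkg-config fuse --cflags --libs) -o eiofs
 * mount: ./eiofs /some/mountpoint ; then `cat /some/mountpoint/broken`
 * should report an input/output error. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static int eio_getattr(const char *path, struct stat *st) {
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/broken") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = 4096;      /* pretend there is data to read */
    } else {
        return -ENOENT;
    }
    return 0;
}

static int eio_open(const char *path, struct fuse_file_info *fi) {
    return strcmp(path, "/broken") == 0 ? 0 : -ENOENT; /* open succeeds */
}

static int eio_read(const char *path, char *buf, size_t size,
                    off_t off, struct fuse_file_info *fi) {
    return -EIO;                 /* ...but every read fails */
}

static struct fuse_operations eio_ops = {
    .getattr = eio_getattr,
    .open    = eio_open,
    .read    = eio_read,
};

int main(int argc, char *argv[]) {
    return fuse_main(argc, argv, &eio_ops, NULL);
}
```

A real harness would mirror the files under test and fail only chosen reads, but the injection mechanism is the same: return a negative errno from the handler.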

- 104,111
- 38
- 209
- 254
According to the (OS X) read(2) manpage, read(2) will generate an error if "[a]n attempt is made to read a directory." You could therefore open(2) a directory (make sure the open flags don't request write access, or the open itself will fail) and then try to read from it. That looks like the only error listed there which could happen in "normal" circumstances (i.e., without doing something like deliberately breaking a FILE* struct).
I'm presuming you're talking about read(2) errors in C or something like it, but even in a higher-level language you might be able to open a directory and try to read from it (though I just tried it with Python, and it's too smart to let you open the directory...).
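A quick sketch of this approach; note the errno varies by platform (Linux reports EISDIR, and a read-write open of a directory is refused at open time, which is why the flags must be read-only):

```c
/* Opening a directory read-only succeeds; reading from it fails. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open(".", O_RDONLY);   /* read-only open of a directory */
    if (fd < 0) { perror("open"); return 1; }
    char buf[16];
    if (read(fd, buf, sizeof buf) < 0)
        printf("read failed: %s\n", strerror(errno)); /* EISDIR on Linux */
    close(fd);
    return 0;
}
```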

You could as well pass an invalid pointer as the buffer to read, which would make it fail with errno set to EFAULT. Something like:
read(fd, (char *)0, count);
Thanks, Suzuki

- That won't generate a read error (EINVAL), but rather "buffer is outside your accessible address space" (EFAULT) during read. According to man 2 read it should be possible to get EINVAL with a slight modification. But the point of the question seems to be how to get the error without modifying the code, by producing a wrapper that emulates all the interesting fault situations. – jpe Jun 28 '12 at 06:14
- Indeed, the code should not be modified and the process must successfully fopen a file and perform multiple I/O operations (both read and write), but then some subsequent read operation should be made to fail. This is doable by overriding read and implementing a simple counter so that the Nth read operation fails (see the sketch after this comment), but it would be nice to do something from a shell script similar to: `kill -STOP $pid; chmod 000 file; kill -CONT $pid`. A method for getting a write error in similar conditions would also be nice, since writing to /dev/full fails on the first write. – William Pursell Jun 28 '12 at 14:08
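A minimal sketch of that counter-based override as an LD_PRELOAD shim; `FAIL_READ_AT` is a made-up environment knob naming which read should fail, and the counter is not thread-safe:

```c
/* eio_read.c - build: gcc -shared -fPIC -o eio_read.so eio_read.c -ldl
 * run:   FAIL_READ_AT=3 LD_PRELOAD=./eio_read.so ./program_under_test
 * The 3rd call to read() then fails with EIO; all others pass through. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

static ssize_t (*real_read)(int, void *, size_t);
static long n_calls;

ssize_t read(int fd, void *buf, size_t count) {
    if (!real_read)
        real_read = dlsym(RTLD_NEXT, "read");  /* next read in link order */
    const char *at = getenv("FAIL_READ_AT");
    if (at && ++n_calls == atol(at)) {
        errno = EIO;                           /* inject the read error */
        return -1;
    }
    return real_read(fd, buf, count);
}
```

One caveat: some stdio implementations reach the kernel through internal symbols rather than the public read(), so interposition like this is only reliable for programs that call read() directly.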