30

According to the fclose(3) man page:

RETURN VALUE

Upon successful completion 0 is returned. Otherwise, EOF is returned and the global variable errno is set to indicate the error. In either case any further access (including another call to fclose()) to the stream results in undefined behavior.

ERRORS

EBADF The file descriptor underlying fp is not valid.

The fclose() function may also fail and set errno for any of the errors specified for the routines close(2), write(2) or fflush(3).

Of course fclose(NULL) should fail, but I expected it to return with errno set normally instead of dying directly with a segmentation fault. Is there any reason for this behavior?

Thanks in advance.

UPDATE: Here is my code (I'm experimenting with strerror() in particular).

#include <stdio.h>
#include <errno.h>

FILE *not_exist = NULL;

not_exist = fopen("nonexist", "r");
if (not_exist == NULL) {
    printError(errno);  /* printError() is my own wrapper around strerror() */
}

/* not_exist is still NULL here, so this call is what crashes */
if (fclose(not_exist) == EOF) {
    printError(errno);
}
ib.
Vdragon
  • Possible duplicate of [fclose() causing segmentation fault](http://stackoverflow.com/questions/1443164/fclose-causing-segmentation-fault) – Jim Fell Nov 13 '15 at 22:22

5 Answers

40

fclose requires as its argument a FILE pointer obtained from fopen, one of the standard streams stdin, stdout, or stderr, or some other implementation-defined source. A null pointer is not one of these, so the behavior is undefined, just as it would be for fclose((FILE *)0xdeadbeef). NULL is not special in C; aside from the fact that it's guaranteed to compare not-equal to any valid pointer, it's just like any other invalid pointer, and using it invokes undefined behavior except when the interface you're passing it to documents, as part of its contract, that NULL has some special meaning to it.

Further, returning an error would be valid (since the behavior is undefined anyway) but harmful for an implementation, because it hides the bug. The preferable result of invoking undefined behavior is always a crash, because it highlights the error and enables you to fix it. Most users of fclose do not check for an error return value, and I'd wager that most people foolish enough to pass NULL to fclose are not going to be smart enough to check its return value. An argument could be made that people should check the return value of fclose in general, since the final flush could fail, but this is not necessary for files that are opened only for reading, or if fflush was called manually before fclose (which is a smarter idiom anyway, because it's easier to handle the error while you still have the file open).
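
For illustration, here is a minimal sketch of that fflush-before-fclose idiom (the save_data helper and its error convention are made up, not from the answer):

#include <stdio.h>

/* Sketch: flush manually while the stream is still open, so a write
   error can be reported (or retried) before the FILE is torn down. */
int save_data(FILE *f)
{
    if (fputs("some data\n", f) == EOF)
        return -1;          /* write error; f is still open */

    if (fflush(f) == EOF)
        return -1;          /* flush failed; f is still open, errno is set */

    /* The buffered data has already been written out above, so the
       flush error, if any, was handled while the stream was usable. */
    if (fclose(f) == EOF)
        return -1;

    return 0;
}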

R.. GitHub STOP HELPING ICE
  • I don't think that returning an error hides undefined behavior: it's undefined, so there is no implication that it is likely to crash. If any result would hide the undefined behaviour, then every other result would hide it too, including a crash, which would hide the fact that you can't rely on getting a crash. – Kaiserludi Mar 10 '14 at 14:13
  • @Kaiserludi: See https://sourceware.org/glibc/wiki/Style_and_Conventions#Invalid_pointers – R.. GitHub STOP HELPING ICE Mar 10 '14 at 16:46
  • Your link does not answer the question of whether returning an error hides undefined behavior. It only discusses the reasons which led the GNU C developers to the decision not to return an error for invalid NULL values. – Kaiserludi Mar 10 '14 at 17:01
  • 1
    It "hides" the error in the sense that the program keeps running with no indication that something is wrong. This hiding is *active* rather than *by omission* because the code had to actively test the pointer against NULL in order to avoid crashing. This is worse behavior than not performing the check at all, because it makes the program harder to debug. If it crashed right away (the default behavior) you would see the bug right away and fix it. With the check for null (and failure to crash or abort with an assertion failure) the erroneous state is allowed to persist/propagate. – R.. GitHub STOP HELPING ICE Mar 10 '14 at 18:17
  • 18
    Interesting that this reasoning was not followed for `malloc`/`free` where freeing a null pointer is perfectly fine. Inconsistent... – Thomas Jun 15 '14 at 02:51
  • 3
    @Thomas: The requirement for `free` to accept null pointers is not an implementation choice; it's mandated by the language. Some people claim this requirement is related to the fact that `malloc(0)` is permitted to return a null pointer on success, but I haven't seen any official confirmation of that claim. – R.. GitHub STOP HELPING ICE Jun 15 '14 at 03:42
  • Do you have a reference for the preconditions on `fclose`? I can't find any; `man fclose` on my Linux agrees with you, but I can't find such requirements in the C standard. – Kerrek SB Nov 29 '14 at 15:33
  • 1
    @KerrekSB: See C11 7.1.4 Use of library functions, paragraph 1. – R.. GitHub STOP HELPING ICE Nov 29 '14 at 17:09
  • 4
    I suppose the question is why didn't the C standard make `fclose(0);` a no-op, just like `free(0);`. Surely, this wouldn't break any existing code or cause anything undesirable. – P.P Aug 18 '15 at 15:57
  • @BlueMoon: It would cause something undesirable: lack of crashing (on implementations that trap all null pointer dereference, or intentionally trap/crash/assert to catch this error) in broken applications, which leads to the brokenness going undetected. – R.. GitHub STOP HELPING ICE Oct 06 '15 at 02:25
  • 5
    The statement '... A null pointer is not one of these ...' is not correct in its plain reading: a null pointer is exactly what `fopen` returns when a file cannot be opened. On a side note, it would be appropriate for `fclose` to accept a null pointer; alas, the Standard doesn't require it. – SergeyA May 16 '18 at 20:43
  • @SergeyA: A null pointer is not a pointer "obtained by `fopen`". It's a value `fopen` returns to indicate error. It's clear that `fclose` is not specified to accept a null pointer, and thus the behavior is undefined; my answer already explains this and why, while doing nothing is of course permissible, such behavior is undesirable. – R.. GitHub STOP HELPING ICE May 16 '18 at 21:27
  • @R.. how exactly is 'returned from fopen' different from 'obtained by fopen' (which can only mean 'obtained by [calling] fopen', otherwise the sentence makes no sense)? We can argue about why 'such behavior is desirable' until the cows come home, but that's beside the point. – SergeyA May 16 '18 at 21:28
  • 1
    @SergeyA: See *7.1.4 Use of library functions*, ¶1: "If an argument to a function has an invalid value (such as a value outside the domain of the function, or a pointer outside the address space of the program, or a null pointer, or a pointer to non-modifiable storage when the corresponding parameter is not const-qualified) or a type (after promotion) not expected by a function with variable number of arguments, the behavior is undefined." – R.. GitHub STOP HELPING ICE May 16 '18 at 23:43
  • @R.. so what? This is not what I am arguing about. – SergeyA May 17 '18 at 13:14
14

fclose(NULL) should succeed. free(NULL) succeeds, because that makes it easier to write cleanup code.

Regrettably, that's not how it was defined. Therefore you can't use fclose(NULL) in portable programs. (E.g. see http://pubs.opengroup.org/onlinepubs/9699919799/).
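
As a sketch of the difference (the cleanup function is hypothetical, not from the answer): free can be called unconditionally, while the fclose call needs an explicit guard to stay portable.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cleanup path: buf and f may or may not have been
   acquired by the time this runs. */
static void cleanup(char *buf, FILE *f)
{
    free(buf);          /* fine even when buf == NULL: required by the standard */

    if (f != NULL)      /* guard needed: fclose(NULL) is undefined */
        fclose(f);
}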

As others have mentioned, you don't generally want an error return when you pass NULL where it doesn't belong. You want a warning message, at least in debug/test builds. Dereferencing NULL gives you an immediate warning, plus the opportunity to collect a backtrace that identifies the programming error :). While you're programming, a segfault is about the best error you can get. C has many subtler errors that take much longer to debug...

It is possible to abuse error returns to increase robustness against programming errors. However, if you're worried a software crash would lose data, note that exactly the same can happen e.g. if your hardware loses power. That's why we have autosave (since Unix text editors with two-letter names like ex and vi). It'd still be preferable for your software to crash visibly, rather than continuing with an inconsistent state.

sourcejedi
7

The errors that the man page is talking about are runtime errors, not programming errors. You can't just pass NULL into any API expecting a pointer and expect the API to do something reasonable. Passing a NULL pointer to a function documented to require a valid pointer to data is a bug.

Related question: In either C or C++, should I check pointer parameters against NULL/nullptr?

To quote R.'s comment on one of the answers to that question:

... you seem to be confusing errors arising from exceptional conditions in the operating environment (fs full, out of memory, network down, etc.) with programming errors. In the former case, of course a robust program needs to be able to handle them gracefully. In the latter, a robust program cannot experience them in the first place.
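
A minimal sketch of that distinction (the write_record wrapper is hypothetical): a null argument is a programming error, caught by an assertion, while a failed write is an exceptional condition reported to the caller.

#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Hypothetical wrapper illustrating the two kinds of failure. */
int write_record(FILE *f, const char *rec)
{
    assert(f != NULL);         /* programming error: fail loudly in debug builds */

    if (fputs(rec, f) == EOF)  /* exceptional condition: disk full, I/O error, ... */
        return errno;          /* report it so a robust caller can handle it */

    return 0;
}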

Billy ONeal
5

This fclose() issue seems to be a legacy of FreeBSD, and was accepted uncritically by both the Microsoft and Linux camps.

HP, SGI, Solaris, and CYGWIN, on the other hand, all handle fclose(NULL) reasonably. For example, man fclose for CYGWIN, which uses newlib rather than the OP's glibc, states:

fclose returns 0 if successful (including when FP is NULL or not an open file)

See https://stackoverflow.com/a/8442421/318716 for a related discussion.

Joseph Quinsey
  • 2
    If "reasonable" is "hiding completely broken code that exhibits undefined behavior" then I guess that is reasonable. – Billy ONeal Jun 04 '13 at 16:42
1

I think the man page is talking about the underlying file descriptor (the one obtained internally via the open system call when you call fopen) being invalid, not the FILE pointer which you pass to fclose.
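
For example, on a POSIX system you can provoke that EBADF case by invalidating the underlying descriptor before calling fclose (a sketch; the file name is arbitrary and the exact behavior is implementation-specific):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("/etc/hostname", "r");  /* any readable file */
    if (f == NULL)
        return 1;

    close(fileno(f));             /* invalidate the descriptor behind f */

    if (fclose(f) == EOF)         /* fails with EBADF instead of crashing */
        printf("fclose: %s\n", strerror(errno));

    return 0;
}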

szx