41

This question was inspired by this answer.

I've always been of the philosophy that the callee is never responsible when the caller does something stupid, like passing of invalid parameters. I have arrived at this conclusion for several reasons, but perhaps the most important one comes from this article:

Everything not defined is undefined.

If a function doesn't say in its docs that it's valid to pass nullptr, then you damn well better not be passing nullptr to that function. I don't think it's the responsibility of the callee to deal with such things.

However, I know there are going to be some who disagree with me. I'm curious whether or not I should be checking for these things, and why.

Billy ONeal
  • this question seems related to the recent glibc memcpy debacle (https://bugzilla.redhat.com/show_bug.cgi?id=638477). – lijie Dec 08 '10 at 16:54
  • @R.. - I *sympathize* with the view but don't agree with it in all cases. What if you are programming a new Win32 API? You just **have** to do better than 'sorry, you are SOL' on obvious invalid input like NULL raw pointer. – Steve Townsend Dec 08 '10 at 17:12
  • Why do you have to do "better"? How is what you propose "better"? – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:16
  • @lijie: Wow what a stupid thread. Why is nobody proposing the correct solution of patching the relocation table of the buggy proprietary software so that it simply calls `memmove` like it should? – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:19
  • Geez, I didn't expect this to become a religious war :P – Billy ONeal Dec 08 '10 at 17:34
  • Part of why it's become a religious war is that some of us have gotten stuck writing performance-critical functions that spend 75% of their time checking all their arguments against NULL and the remaining 25% performing a single trivial operation, due to legacy API requirements and/or broken callers that require NULL to be interpreted specially. As such, I am vehemently against any practice that perpetuates the myth among users of libraries that "passing NULL is okay" except when there's a good reason for the function to accept NULL and it's documented as such. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:48
  • "the callee is never responsible when the caller does something stupid" That is completely true, but there's a big difference between what your code is actually responsible for and what your code actually gets blamed for. – James McNellis Dec 08 '10 at 17:50
  • Note that sometimes the situation I described is not immediately apparent when the API is first designed, because the function originally has a more complicated task which only becomes trivial years down the line when you can remove support for legacy junk. Imagine an EBCDIC-to-ASCII conversion function in mainframe software being ported to modern machines. You dummy-out the conversion, and now all the function does is waste time comparing pointers to NULL, but you can't just `#define` it out to a no-op because callers expect the NULL checks... – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:51
  • Patient says to doctor "It hurts when I lift my arm up". Doctor replies "Don't do it then". – dalle Dec 08 '10 at 18:09
  • @dalle Well, the patient was dumb enough not to know that, so do you expect the caller to know that what they're doing is "dumb"? – Mateen Ulhaq Dec 09 '10 at 00:23
  • @R: Someone did actually suggest exactly that (patching the relocation table of the buggy flash plugin). – caf Dec 09 '10 at 01:35
  • @dalle: I don't understand your point. – Billy ONeal Dec 09 '10 at 06:40
  • API user says to API developer "It crashes when I pass invalid arguments". Developer replies "Don't do it then". -- It is often better to treat the cause instead of the symptoms. – dalle Dec 09 '10 at 09:36
  • @dalle In your case, the crashing is a symptom, and the bad parameters being passed is the cause. Exactly the opposite of the doctor case. – Andrei Krotkov Jan 01 '11 at 01:36

20 Answers

47

If you're going to check for NULL pointer arguments where you have not entered into a contract to accept and interpret them, do it with an assert, not a conditional error return. This way the bugs in the caller will be immediately detected and can be fixed, and it makes it easy to disable the overhead in production builds. I question the value of the assert except as documentation however; a segfault from dereferencing the NULL pointer is just as effective for debugging.

If you return an error code to a caller which has already proven itself buggy, the most likely result is that the caller will ignore the error, and bad things will happen much later down the line when the original cause of the error has become difficult or impossible to track down. Why is it reasonable to assume the caller will ignore the error you return? Because the caller already ignored the error return of malloc or fopen or some other library-specific allocation function which returned NULL to indicate an error!
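The assert-based approach this answer describes might be sketched as follows (the function and message strings are invented for illustration, not from any real library):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical library function whose documented contract requires
// non-null arguments. The asserts make a contract violation fail loudly
// at the point of misuse during development; compiling with -DNDEBUG
// removes them entirely, so production builds pay no cost.
void copy_name(char* dst, const char* src, std::size_t n) {
    assert(dst != nullptr && "contract violation: dst must not be null");
    assert(src != nullptr && "contract violation: src must not be null");
    std::strncpy(dst, src, n);
    dst[n - 1] = '\0';   // ensure termination even if src is too long
}
```

In a debug build, calling `copy_name(nullptr, ...)` aborts immediately with the assertion text, pointing straight at the buggy caller instead of silently returning an ignorable error code.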

R.. GitHub STOP HELPING ICE
  • The flaw of putting an `assert()` but no conditional `return` is: if you don't test all of the possible ways of executing your code (at all those precise places where pointers could dangerously be nullptr), then you potentially have an application crash. Having a conditional return allows the application to continue working without the crash. – Stephane Rolland Dec 08 '10 at 17:13
  • In a really perverse sense of "working". This is roughly equivalent to setting up a signal handler for `SIGSEGV` and `longjmp`'ing out... The caller has already proven it has major bugs and there's no guarantee its data hasn't already gone to hell. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:14
  • No, absolutely not. Having a pointer with a NULL value is not in itself always a bug. And if you have to dereference a pointer, I think it should be tested. Logically. And you could return an error code, and thus have decent behaviour. – Stephane Rolland Dec 08 '10 at 17:39
  • Having a NULL pointer is not a bug. Passing it to a function that is documented to require a pointer to data is a bug. I already explained why returning an error is likely to be ignored. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:45
  • This answer almost got the checkmark (it *did* get +1 ;) ). I checkmarked the other answer however merely because it took a view contrary to the one I asserted in my original question. – Billy ONeal Dec 09 '10 at 05:05
  • @StephaneRolland I think the most failsafe way is to do both, i.e. `if (!input) { assert(false); return; }`. This way program intention is documented with an `assert()`, and we gracefully handle the case where `NDEBUG` is defined and the `assert()` macro has no effect. – j b Aug 06 '13 at 16:25
  • @JamieBullock: That's not a good idea. The misuse of `assert` hides the problem at debug time (`assert` is unable to print the actual assertion, `input != NULL`, which would have immediately shown what the error was) and the `return;` hides the bug if it manages to make it into a production build without assertions enabled. You **want to crash** here, either via `assert` or null pointer dereference. – R.. GitHub STOP HELPING ICE Jun 29 '14 at 03:16
29

In C++, if you don't want to accept NULL pointers, then don't take the chance: accept a reference instead.
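A minimal sketch of the idea (the function names are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// A reference parameter cannot be null: the caller must supply an
// object, so the function body needs no null check at all.
std::size_t name_length(const std::string& name) {
    return name.size();
}

// With a pointer parameter, the same function has to decide what a
// null argument means -- exactly the question this thread debates.
std::size_t name_length_ptr(const std::string* name) {
    return name ? name->size() : 0;   // one possible (documented) choice
}
```

The reference version pushes the null question back onto the caller at compile time, which is the point of this answer.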

Karl Knechtel
  • Where possible I do this. However occasionally I need to expose a C API, which leaves me dealing with pointers :) – Billy ONeal Dec 08 '10 at 17:02
  • Agree. The semantics of a pointer in C++ is that it can be NULL. If it couldn't be NULL, you'd use a reference. You didn't, therefore NULL is allowed. – Bill Dec 08 '10 at 17:16
  • I disagree. Passing references in C++ hides the fact that the callee might modify data, and should be considered harmful when they're used for writing. Passing a pointer never implies that NULL is valid. Moreover, a reference **can** in practice be null, e.g. `function_that_takes_reference(*ptr);` where `ptr` is `NULL`. (Don't quote useless formal specs; the reality is that the machine code in the function **will see a NULL pointer** and **will dereference it**.) – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:27
  • @R.., "the reality is that the machine code in the function will see a NULL pointer and will dereference it." - No; the dereference happens before the function is called, and is undefined behaviour. Pointing out that the archetypical example of undefined behaviour is, in fact, undefined behaviour is about as far from "quoting a useless formal spec" as I can imagine it's possible to get. As for "hiding the fact that the callee might modify data", I've hashed this argument out over many pages of discussion elsewhere on the internet and still find that position dubious at best. – Karl Knechtel Dec 08 '10 at 17:33
  • Either way, the crash will happen *inside your library function* and not in the caller, from a debugging standpoint. I don't care that the C++ spec tries to pretend references are more than syntactic sugar for pointers; in reality they're the same thing. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:35
  • @R.: Well, if there's a null reference, then you already have undefined behavior, because to assign the reference you had to dereference a null pointer. – Billy ONeal Dec 08 '10 at 17:36
  • @R.., that's simply not true; there are many cases where the compiler can optimize a reference away completely but wouldn't be able to optimize away "equivalent" code using a pointer for indirection. As for where the crash happens, that's not relevant to the question of **who wrote the broken code**. You can't reasonably defend undefined behaviour with "but the crash doesn't happen until someone else's code runs", in the same way that you can't reasonably defend syntax errors with "but the error isn't apparent until the compiler tries to compile it". – Karl Knechtel Dec 08 '10 at 17:46
  • Or put another way: "I don't care that the C++ spec tries to pretend that memory past the end of an array doesn't exist; if I write past the bounds of an array and corrupt the heap, the crash will happen *inside your delete statement* ". – Karl Knechtel Dec 08 '10 at 17:49
  • The only code that cannot be made to invoke undefined behavior by a broken caller is code where every single bit-combination of arguments has a valid interpretation. Anything using pointers (by whatever name you call them) can be made to invoke UB if the caller wishes. Formally, with C++ references, the caller invoked the UB by "dereferencing" bad pointers when passing them, but in terms of real world effects it's the same. Regardless of whether you use references or pointers, if an invalid address reaches the function, it's the caller's fault. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 18:01
  • "The only code that cannot be made to invoke undefined behavior by a broken caller"... you still don't seem to be getting it. In the "null reference" example, **the caller is invoking the undefined behaviour, not the called function**. That the undefined behaviour is **detected** only when the called function is (a) irrelevant, and (b) not even always what happens - by the nature of undefined behaviour. "... if an invalid address reaches the function, it's the caller's fault." And that's exactly the point! The called function **shall not be blamed** for this. – Karl Knechtel Dec 08 '10 at 18:04
  • @Karl Knechtel: So the solution, if you do not want to handle a null pointer in your function, is simply to define *"invoking this function with a null pointer is undefined behaviour"*, and instantly you have transferred the responsibility to the caller, regardless of where the crash happens. – caf Dec 09 '10 at 01:39
  • @caf the thing is that when the Standard says something is undefined behaviour, and "something" is universal to the language, people tend to take that a little more seriously than when you dictate that a particular application shall be undefined behaviour because that's convenient for you. :) – Karl Knechtel Dec 09 '10 at 02:28
  • @Karl Knechtel: The problem is that you really *have* to allow your library authors the luxury of defining "undefined behaviour", since otherwise you could pass, for example, an already-freed pointer to the function, and then claim that any subsequent crash is the called function's fault. – caf Dec 09 '10 at 02:35
  • This answer implies that, by moving the dereference of NULL from inside the callee to outside, that somehow some "chance" has been averted. Nothing of the sort is true, of course ... there is UB either way, and in practice it's exactly the same -- the same code is generated, and the machine instruction that dereferences NULL occurs inside callee. @R.. got it right, as did caf -- it is sufficient to define the function's contract as not allowing NULL pointers. A programmer who doesn't take that "seriously" is incompetent. – Jim Balter Feb 25 '13 at 07:12
  • When you take `const std::string&` as a param and someone passes nullptr, we're back in undefined behavior. So I agree that the problem isn't (always) solved, but it's not due to an incompetent programmer. It's the language and/or library that is lacking. – Jon Sep 16 '20 at 06:20
  • The `nullptr` literal was, afaict, not even an existing concept at the time I wrote this answer. – Karl Knechtel Sep 16 '20 at 08:06
15

While in general I don't see the value in detecting NULL (why NULL and not some other invalid address?), for a public API I'd probably still do it, simply because many C and C++ programmers expect such behavior.

Nemanja Trifunovic
  • +1 for one of the few answers here that gives a reason rather than just "Well it's what I do..." – Billy ONeal Dec 08 '10 at 17:10
  • @Billy and @R.: Yeah, I hate to be in the pro-checking camp :( – Nemanja Trifunovic Dec 08 '10 at 17:31
  • Checkmarked this answer because 1. the author answered contrary to what was a pretty loaded question against doing the null checks, and 2. actually qualified their response with reasoning instead of just "Well this is my opinion". – Billy ONeal Dec 09 '10 at 05:04
9

The Defense in Depth principle says yes. If this is an external API, then it's totally essential. Otherwise, at least add an assert to assist in debugging misuse of your API.

You can document the contract until you are blue in the face, but you cannot in callee code prevent ill-advised or malicious misuse of your function. The decision you have to make is what's the likely cost of misuse.

Steve Townsend
  • You can't in general check for invalid parameters. Checking for NULL only guards against an extremely small subset of mistakes. My assertion is that because the library in general cannot prevent misuse of the function, and that the function is going to crash on such misuses anyway, that it's better to cause a crash where the mistake is, rather than turn that into an error code which can be ignored. +1 for the assert suggestion though. – Billy ONeal Dec 08 '10 at 17:07
  • I agree with the comment on 'can't handle general case', but I think it's still worth checking for `NULL` if you allow raw pointers. Use of invalid pointer is UB anyway so if you want to 'cause a crash' on invalid input, that's not possible in the general case either. Best thing you can do is stuff like `IsBadReadPtr` (on Windows), and that way lies madness - the last resort of a dev who cannot find their bug. – Steve Townsend Dec 08 '10 at 17:09
  • @Steve: [IsBadXxxPtr should really be called CrashProgramRandomly](http://blogs.msdn.com/b/oldnewthing/archive/2006/09/27/773741.aspx). That's actually the other reason I asked this question :) – Billy ONeal Dec 08 '10 at 17:12
  • +1 for use of `assert()`; IMO it is not worth the runtime hit in release code to protect against a NULL pointer. And checking for NULL doesn't always protect you anyway; for instance, even if the API adds a host of NULL pointer checks, I can still cause undefined behavior by passing a pointer to a string literal to a function that expects to be able to modify the memory. – Praetorian Dec 08 '10 at 17:16
  • @Praetorian: Or someone might happen to pass `&ptr_that_happens_to_be_null[1]`.... :-) – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:53
7

In my view, it's not a question of responsibility. It's a question of robustness.

Unless I have full control on the caller and I must optimize for even the minute speed improvement, I always check for NULL.

Android Eve
  • Define "robustness", please :) – Billy ONeal Dec 08 '10 at 17:08
  • Robustness, in this context, means minimizing the chances for "core-dump" or "access violation" and, instead, handling the caller error more gracefully. – Android Eve Dec 08 '10 at 17:52
  • What if minimizing the chance of core dump means increasing the chance of overwriting your files on disk with corrupt nonsense? – R.. GitHub STOP HELPING ICE Dec 08 '10 at 18:03
  • @R.. You raise a good point. Different cases need to be addressed differently. It really depends on the library being written and its foreseen uses. In the same way one could ask "what if *not* checking for NULL means increasing the chance of overwriting your files on disk with corrupt nonsense?" There is only so much you, as a callee, can do for the caller. Also consider the value of debug & development time: As a programmer, I prefer receiving a documented error return code from a library function, rather than hunting down a core dump.. – Android Eve Dec 08 '10 at 19:45
  • If you consistently check return values for errors, you would almost surely have never passed a NULL pointer to the library function in the first place. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 20:11
4

I lean heavily on the side of 'don't trust your user's input to not blow up your system' and in defensive programming in general. Since I have made APIs in a past life, I have seen users of the libraries pass in null pointers and then application crashes result.

If it is truly an internal library and I'm the only person (or one of a select few) with the ability to use it, then I might ease up on null pointer checks as long as everyone agrees to abide by general contracts. I can't trust the user base at large to adhere to that.

wkl
  • My assertion is that "application crashes" are the *desired* behavior here. – Billy ONeal Dec 08 '10 at 17:02
  • That is a wicked *desire* ;-) – Stephane Rolland Dec 08 '10 at 17:16
  • @Stephane: Yes, it is. But a crash is better than continuing to execute in some inconsistent state, possibly corrupting other data. Unless you're controlling nuclear reactors, it's better to fail spectacularly than to fail silently if you must fail at all. – Billy ONeal Dec 08 '10 at 17:18
  • @Billy - if it is, then I would like to handle it a little more gracefully than terminating the application, like with some logging, because our users just weren't that good at debugging and relied mostly on our dev team to solve bugs like that (and unfortunately we didn't ship PDBs to aid them in debugging). I do see your point, especially in the general case where you aren't passed `NULL`, but are instead just given a pointer to junk. Can't really defend against that. – wkl Dec 08 '10 at 17:20
  • Crashing on error sounds better than not discovering that you have critical flaws, but does it crash in a way that hints at the cause of the error? If the stack trace is full of implementation details of the library function, it may be harder to trace back to the caller's mistake. – Chris Stratton Dec 08 '10 at 17:21
  • @Chris: Even if your stack trace doesn't contain the library functions, it certainly contains your last call into the library. – Billy ONeal Dec 08 '10 at 17:23
  • A debugger is not necessary to test this. Static analysis can tell the programmer that they forgot to check for an error return when the pointer was assigned. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:24
  • @Billy hey, we are **not** dealing with invalid pointers. We are dealing with NULL pointers. I think the problem is **totally** different. – Stephane Rolland Dec 08 '10 at 17:29
  • NULL pointers **are invalid pointers**. Sadly lots of C and C++ newbies don't understand this. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:30
  • @BillyONeal - generally yes, but the error triggering the stack trace is not always linked to an error in the call. It could be due to a time bomb set up by a previous call and triggered by an apparently valid subsequent one. It could be distantly related to the parameters of the current call by non-obvious implementation logic. And in some unfortunate cases, the stack trace output may be truncated before it gets back up to a function which the person trying to solve the problem has the source of. – Chris Stratton Dec 08 '10 at 17:31
  • @R Of course nullptr is an invalid pointer; I understand your distinction. However I make a **precise** distinction between a pointer with a NULL value and a pointer with an invalid value (for example a random integer, different from 0/NULL of course). I cleanly initialize all my pointers to NULL **all of the time**, and I am sure it's clean behaviour to initialize one's variables all of the time: thus it is clean and normal behaviour to have pointers with a NULL value somewhere **ALL OF THE TIME**. – Stephane Rolland Dec 09 '10 at 10:00
4

The answer is going to be different for C and C++.

C++ has references. The only difference between passing a pointer and passing a reference is that the pointer can be null. So, if the writer of the called function expects a pointer argument and forgets to do something sane when it's null, he's silly, crazy or writing C-with-classes.

Either way, this is not a matter of who wears the responsibility hat. In order to write good software, the two programmers must co-operate, and it is the responsibility of all programmers to (1) avoid special cases that would require this kind of decision, and (2) when that fails, write code that blows up in a non-ambiguous and documented way in order to help with debugging.

So, sure, you can point and laugh at the caller because he messed up and "everything not defined is undefined" and had to spend one hour debugging a simple null pointer bug, but your team wasted some precious time on that.

Victor Nicollet
4

My philosophy is: Your users should be allowed to make mistakes, your programming team should not.

What this means is that the only place you should check for invalid parameters including NULL, is in the top-level user interface. Everywhere the user can provide input to your code, you should check for errors, and handle them as gracefully as possible.

Everywhere else, you should use ASSERTS to ensure the programmers are using the functions correctly.

If you are writing an API, then only the top-level functions should catch and handle bad input. It is pointless to keep checking for a NULL pointer three or four levels deep into your call stack.
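One way to sketch that split between a validating top-level entry point and assert-only internals (all names here are hypothetical, invented for illustration):

```cpp
#include <cassert>
#include <stdexcept>

// Internal layer: programmer misuse is caught by an assert, not by
// runtime checks repeated at every level of the call stack.
static int digit_value_impl(const char* s) {
    assert(s != nullptr && "validated once at the API boundary");
    return *s - '0';
}

// Top-level API: the one place user-supplied input is validated,
// failing gracefully and reportably rather than via an assert.
int digit_value(const char* s) {
    if (s == nullptr || *s < '0' || *s > '9')
        throw std::invalid_argument("digit_value: expected a single decimal digit");
    return digit_value_impl(s);
}
```

The internal helper trusts its input because the boundary already checked it; only the public function pays the validation cost.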

AShelly
3

I am pro defensive programming.

Unless profiling shows that these nullptr checks happen in a bottleneck of your application (in such cases it is conceivable one should not do those pointer-value tests at those points)...

But all in all, comparing a pointer with 0 is a really cheap operation. I think it is a shame to leave in potential crash bugs instead of consuming so little CPU.

so: Test your pointers against NULL!
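The conditional-return style advocated here would look something like this (the function and error code are illustrative, not from any real API):

```cpp
#include <cassert>
#include <cstddef>

// Pro-checking style: test the pointers and report an error code
// instead of crashing. The cost per call is one compare-and-branch.
int sum_array(const int* values, std::size_t n, long* out) {
    if (values == nullptr || out == nullptr)
        return -1;                  // caller error, reported gracefully
    long total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += values[i];
    *out = total;
    return 0;                       // success
}
```

Note this is exactly the pattern the accepted-style answers above argue against: the caller is free to ignore the -1 and carry on.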

Stephane Rolland
  • I don't care about execution speed. On the other hand, I do care about having to constantly write oodles and oodles of boilerplate code and documentation. – Billy ONeal Dec 08 '10 at 17:33
  • But as you said, you write functions that take references as input, not pointers. So you don't mind a little boilerplate code, do you? :-) – Stephane Rolland Dec 08 '10 at 17:35
  • @Bill, I was just kidding; if you use many references, you don't use that many pointers, so there's not so much pointer checking... nothing really serious :-) – Stephane Rolland Dec 08 '10 at 20:32
1

He who performs invalid operations on invalid or nonexistent data only deserves to have his system state become invalid.

I consider it complete nonsense that functions which expect input should check for NULL. Or for whatever other value, for that matter. The sole job of a function is to do a task based on its input or scope-state, nothing else. If you have no valid input, or no input at all, then don't even call the function. Besides, a NULL check doesn't detect the millions and millions of other possible invalid values. You know beforehand whether you would be passing NULL, so why would you still pass it, waste valuable cycles on yet another function call with parameter passing and an in-function comparison of some pointer, and then check the function's output again for success or not? Sure, I might have done so when I was 6 years old back in 1982, but those days have long since gone.

There is of course the argument to be made for public APIs. Like some DLL offering idiot-proof checking. You know, those arguments: "If the user supplies NULL you don't want your application to crash." What a non-argument. It is the user who passes bogus data in the first place; it's an explicit choice and nothing else than that. If one feels that is quality, well... I prefer solid logic and performance over such things. Besides, a programmer is supposed to know what he's doing. If he's operating on invalid data for the particular scope, then he has no business calling himself a programmer. I see no reason to degrade the performance, increase the power consumption, and increase the binary size (which in turn affects instruction caching and branch prediction) of my products in order to support such users.

1

I think that you should strive to write code that is robust for every conceivable situation. Passing a NULL pointer to a function is very common; therefore, your code should check for it and deal with it, usually by returning an error value. Library functions should NOT crash an application.

Michael K
  • -1 for straw man. If a program passes an invalid pointer to a library, and the program crashes as a result, this is not the library function crashing the application. "Library functions should NOT crash an application" is an extremely important principle, but it does not apply here. It would apply if the library had called `malloc` and failed to check the result. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:03
  • But the library function didn't crash the application; the caller did. And there's no general way for the library to determine whether a given memory buffer is valid. – Billy ONeal Dec 08 '10 at 17:04
  • @Billy: On many systems such as Linux there is. Call `write(devnullfd, ptr, sizeof(*ptr));` and check for `EFAULT` before each access. I vote that we all add 50000-cycle `write` overhead to every library function to make sure no idiots call our functions with invalid pointers!!! – R.. GitHub STOP HELPING ICE Dec 08 '10 at 17:13
  • @R. That's fundamentally doing the same thing as [IsValidXxxPtr, which we have already stipulated is a problem](http://blogs.msdn.com/b/oldnewthing/archive/2006/09/27/773741.aspx). – Billy ONeal Dec 08 '10 at 17:20
  • @R. I thought the first part was serious and the second sarcastic. Nvm then :P – Billy ONeal Dec 08 '10 at 17:27
1

For C++, if your function doesn't accept null pointers, then use a reference argument. In general.

There are some exceptions. For example, many people, including myself, think it's better to use a pointer argument when the actual argument will most naturally be a pointer, especially when the function stores away a copy of the pointer. Even when the function doesn't support a null pointer argument.

How much to defend against invalid argument depends, including that it depends on subjective opinion and gut-feeling.

Cheers & hth.,

Cheers and hth. - Alf
1

One thing you have to consider is what happens if some caller DOES misuse your API. In the case of passing NULL pointers, the result is an obvious crash, so it's OK not to check. Any misuse will be readily apparent to the calling code's developer.

The infamous glibc debacle is another thing entirely. The misuse resulted in actually useful behavior for the caller, and the API stayed that way for decades. Then they changed it.

In this case, the API developers should have checked values with an assert or some similar mechanism. But you can't go back in time to correct an error. The wailing and gnashing of teeth were inevitable. Read all about it here.

JCCyC
  • I'm sorry, I'm not familiar with the glibc thing. (That was written by some commenter) – Billy ONeal Dec 08 '10 at 17:19
  • It provides a nice contrast between the particular case in your question and another similar case with subtly different circumstances. See http://lwn.net/Articles/414467/ -- it's an interesting read, I recommend it. – JCCyC Dec 08 '10 at 18:23
1

If you don't want a NULL then don't make the parameter a pointer.
By using a reference you guarantee that the object will not be NULL.

Martin York
0

I don't think it's the responsibility of the callee to deal with such things

If it doesn't take this responsibility, it might create bad results, like dereferencing NULL pointers. The problem is that it always implicitly takes this responsibility. That's why I prefer graceful handling.

Andrey
0

In my opinion, it's the callee's responsibility to enforce its contract.

If the callee shouldn't accept NULL, then it should assert that.

Otherwise, the callee should be well behaved when it's handed a NULL. That is, either it should functionally be a no-op, return an error code, or allocate its own memory, depending on the contract that you specified for it. It should do whatever seems to be the most sensible from the caller's perspective.

As the user of the API, I want to be able to continue using it without having the program crash; I want to be able to recover at the least or shut down gracefully at worst.
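The well-behaved handling this answer describes is the `free(NULL)` style of contract, where NULL is explicitly documented to mean a no-op. A sketch (the `Buffer` type is invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

struct Buffer {
    char* data;
    std::size_t len;
};

// Contract: releasing a null buffer is a documented no-op, just like
// free(NULL), so callers may release unconditionally without crashing.
void buffer_release(Buffer* b) {
    if (b == nullptr) return;   // defined, documented behavior
    delete[] b->data;
    b->data = nullptr;
    b->len = 0;
}
```

Here the NULL check is not defensive clutter; it is part of the function's published contract, which is the distinction the surrounding comments argue over.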

greyfade
  • But in the general case the callee can't enforce its contract. If you get a bad pointer, you get a bad pointer. – Billy ONeal Dec 08 '10 at 18:54
  • There is absolutely no way for a callee to enforce the contract like: `s` shall point to an array of at least `n` objects of type `T`. Checking for the 1% of testable violations of the contract and ignoring the 99% of violations which are untestable is nonsense. Should I also write code to test that the pointer doesn't compare equal to the address of any of my function's local variables? After all, those are testably-invalid pointers too. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 20:14
  • And greyfade, in your last paragraph, you seem to be confusing errors arising from exceptional conditions in the operating environment (fs full, out of memory, network down, etc.) with programming errors. In the former case, of course a robust program needs to be able to handle them gracefully. In the latter, a robust program **cannot** experience them in the first place. – R.. GitHub STOP HELPING ICE Dec 08 '10 at 20:17
0

One side effect of that approach is that when your library crashes in response to being passed an invalid argument, you will tend to get the blame.

There is no better example of this than the Windows operating system. Initially, Microsoft's approach was to eliminate many tests for bogus arguments. The result was an operating system that was more efficient.

However, the reality is that invalid arguments get passed all the time, whether by programmers who aren't up to snuff or by code that blindly passes along values returned by other functions that weren't expected to be NULL. Now, Windows performs more validation and is less efficient as a result.

If you want to allow your routines to crash, then don't test for invalid parameters.
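A sketch of the validate-and-report style this answer describes. The function name is illustrative, not a real Win32 API; the error codes are standard `<cerrno>` values standing in for whatever scheme your API uses.

```cpp
#include <cerrno>
#include <cstddef>
#include <cstring>

// Defensive validation: instead of crashing on bad input, check the
// violations we *can* detect and report an error code to the caller.
int CopyUserBuffer(char* dest, const char* src, std::size_t destSize) {
    if (dest == nullptr || src == nullptr)
        return EINVAL;                 // detectable bad pointers
    std::size_t len = std::strlen(src);
    if (len + 1 > destSize)
        return ERANGE;                 // detectable overflow
    std::memcpy(dest, src, len + 1);
    return 0;                          // success
}
```

The cost is exactly the trade-off the answer names: a few extra branches on every call, in exchange for turning some crashes into recoverable errors. (And as the comments above point out, this only catches the testable violations; a non-null pointer to garbage sails straight through.)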

Jonathan Wood
  • 65,341
  • 71
  • 269
  • 466
0

Yes, you should check for null pointers. You don't want to crash an application because the developer messed something up.

Neal P
  • 599
  • 1
  • 8
  • 16
0

Development-time overhead and runtime performance trade off against the robustness of the API you are designing.

If the API you are publishing has to run inside the process of the calling routine, you SHOULD NOT check for NULL or invalid arguments. In this scenario, if you crash, the client program crashes and the developer using your API should mend his ways.

However, if you are providing a runtime/framework that will run the client program inside it (e.g., you are writing a virtual machine, a middleware that can host code, or an operating system), you should definitely check the correctness of the arguments passed. You don't want your program to be blamed for the mistakes of a plugin.
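A sketch of that host/plugin boundary (all names are hypothetical): the host validates what the plugin hands it before dispatching, so a buggy plugin produces a diagnostic instead of taking the host down with it.

```cpp
#include <cstdio>

// A plugin entry point: takes some input, returns a result code.
typedef int (*PluginFn)(const char* input);

// The host validates arguments at the trust boundary before calling
// into plugin code, so blame for bad input lands on the plugin.
int RunPlugin(PluginFn fn, const char* input) {
    if (fn == nullptr || input == nullptr) {
        std::fprintf(stderr, "host: plugin passed invalid arguments\n");
        return -1;  // reject and keep the host alive
    }
    return fn(input);
}
```

Inside the host's own internals, where both caller and callee are your code, the earlier advice still applies: assert the contract rather than paying for checks on every call.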

Jaywalker
  • 3,079
  • 3
  • 28
  • 44
-1

There is a distinction between what I would call legal and moral responsibility in this case. As an analogy, suppose you see a man with poor eyesight walking towards a cliff edge, blithely unaware of its existence. As far as your legal responsibility goes, it would in general not be possible to successfully prosecute you if you fail to warn him and he carries on walking, falls off the cliff and dies. On the other hand, you had an opportunity to warn him -- you were in a position to save his life, and you deliberately chose not to do so. The average person tends to regard such behaviour with contempt, judging that you had a moral responsibility to do the right thing.

How does this apply to the question at hand? Simple -- the callee is not "legally" responsible for the actions of the caller, stupid or otherwise, such as passing in invalid input. On the other hand, when things go belly up and it is observed that a simple check within your function could have saved the caller from his own stupidity, you will end up sharing some of the moral responsibility for what has happened.

There is of course a trade-off here, dependent on how much the check actually costs you. Returning to the analogy, suppose that you found out that the same stranger was inching slowly towards a cliff on the other side of the world, and that by spending your life savings to fly there and warn him, you could save him. Very few people would judge you harshly if, in this particular situation, you neglected to do so (let's assume that the telephone has not been invented, for the purposes of this analogy). In coding terms, however, if the check is as simple as checking for NULL, you are remiss if you fail to do so, even if the "real" blame in the situation lies with the caller.
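In coding terms, the cheap check might look like this (a sketch with a hypothetical function; throwing is one way to make the failure loud and well-defined rather than silent):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// The null check costs almost nothing, and a thrown exception gives
// the caller an immediate, well-defined failure instead of undefined
// behaviour they have to hunt down later.
std::size_t text_length(const char* text) {
    if (text == nullptr)
        throw std::invalid_argument("text_length: text must not be null");
    return std::string(text).size();
}
```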

Stuart Golodetz
  • 20,238
  • 4
  • 51
  • 80
  • But does the null check really save them from stupidity? If they crash, then they know there's a bug in their program. If you silently fail, then they'll probably just leave the bad code there. – Billy ONeal Dec 08 '10 at 18:58
  • @Billy: Yes, it does -- because if I check for `NULL`, then I can throw an exception, preventing undefined behaviour in my own code as a result of the caller's input. The caller will be able to immediately detect what happened, either because they catch the exception, or because the program crashes as a result of the uncaught exception. In either case, the behaviour is well-defined. If I just assume that their input is valid, my own code may silently 'work', and they'll be none the wiser. – Stuart Golodetz Dec 08 '10 at 19:09
  • The point being that I'm absolutely *not* advocating silently failing, I'm advocating checking whether I'm failing, and kicking up a loud, vulgar rumpus if that's the case. On the other hand, I would advocate "doing the right thing in the face of invalid input" where that actually is practicable. It's sometimes feasible to output warnings (to tell the caller they need to fix things) but make a default assumption in the meantime. – Stuart Golodetz Dec 08 '10 at 19:11
  • A final point is that if I consistently throw an exception when something goes wrong, and the caller knows this, they can arrange to catch it (if they weren't doing so before) and find out what the problem is. If I just crash by doing something undefined inside my function, their only option is to hunt me down. Not appealing, thanks all the same. – Stuart Golodetz Dec 08 '10 at 19:14