
Searching for something on SO, I stumbled across this question, and one of the comments to the most voted answer (the fifth comment on that answer) suggests that delete p; p = NULL; is an anti-pattern. I must confess that I use it quite often, pairing it sometimes/most of the time with the check if (NULL != p). The Man himself seems to suggest it (please see the destroy() function example), so I'm really confused as to why it might be so dreaded as to be considered an anti-pattern. I use it for the following reasons:

  • when I'm releasing a resource I also want to invalidate it for further use, and NULL is the right tool to say a pointer is invalid
  • I don't want to leave behind dangling pointers
  • I want to avoid double/multiple free bugs - deleting a NULL pointer is a no-op, but deleting a dangling pointer is "shooting yourself in the foot"

Please note that I'm not asking the question in the context of the this pointer, and let's assume we don't live in a perfect C++ world and that legacy code does exist and has to be maintained, so please do not suggest any kind of smart pointer :).
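For concreteness, here is a minimal sketch of the idiom in question; the helper name release and the Resource type are illustrative, not taken from the destroy() example:

```
#include <cstddef>  // for NULL

struct Resource { };

// Hypothetical helper in the spirit of a destroy()-style function:
// delete the object and invalidate the caller's pointer.
void release(Resource*& p)
{
    if (NULL != p) {   // optional guard: delete on NULL is already a no-op
        delete p;
        p = NULL;      // mark the pointer as invalid for further use
    }
}
```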

celavek
  • Oh yes. Any decent debug allocator will tell you when you free memory twice. You defeat it by setting the pointer to null. You want your leg to get blown off. So either do nothing or set it to a value that always generates a processor fault. – Hans Passant Aug 18 '11 at 21:46
  • I don't think it's really a "pattern" in the usual sense of the word because it's not a way of approaching a design; it's pretty local. But it could be a bad sign if it's the only thing stopping you from deleting something twice. – Owen Aug 18 '11 at 21:47
  • Just found this other question (http://stackoverflow.com/q/1931126/59775) on the same topic. – celavek Aug 18 '11 at 22:22
  • He's not recommending it in general, he's saying *if* you're going to do it on a regular basis, make it a function: "*If* you consider zeroing out pointers important..." – Bill Aug 18 '11 at 23:07
  • @Bill I did not say recommend, but suggest :); and in the context from the link he indeed suggests it. – celavek Aug 18 '11 at 23:22
  • And yet another one http://stackoverflow.com/q/3060006/59775 ... it seems I did not do my search in the right way. – celavek Aug 18 '11 at 23:31

7 Answers


Yes, I would not recommend doing it.

The reason is that the extra setting to null will only help in very limited contexts. If you are in a destructor, the pointer itself will not exist right after the destructor executes, which means that whether it is null or not does not matter.

If the pointer was copied, setting p to null will not affect the rest of the pointers, and you can then hit the same problems, with the extra confusion that you will be expecting deleted pointers to be null, so it won't make sense that your pointer is non-null and yet the object is not there.

Additionally, it might hide other errors. For example, if your application tries to delete the same pointer twice, setting the pointer to null converts the second delete into a no-op; the application will not crash, but the error in the logic is still there. (Consider a later edit that accesses the pointer right before the delete that is no longer failing: how can that fail?)
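A small hypothetical illustration of the copied-pointer problem described above (Widget is a made-up type):

```
#include <cstddef>

struct Widget { int value; };

int main()
{
    Widget* p = new Widget();
    Widget* q = p;    // a second pointer to the same object

    delete p;
    p = NULL;         // only p is nulled; q still holds the old address

    if (q != NULL) {  // passes, even though the object is gone
        // q->value;  // undefined behavior: q is dangling, not NULL
    }
    return 0;
}
```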

David Rodríguez - dribeas
  • Regarding your second point, that's why you should always have well-defined ownership semantics in your code to avoid that. Forcing the program to crash is not a good replacement for knowing who owns the pointer. Regarding your third point, when you're programming defensively, then you may have two places that may delete the pointer if it still exists, and it's perfectly valid. – Chris Walton Aug 18 '11 at 22:11
  • @arke: I agree, I meant it the other way around: *setting the pointer to null to avoid the program from crashing is not a replacement for well-defined ownership semantics*. – David Rodríguez - dribeas Aug 18 '11 at 22:24
  • I perfectly agree with the first and third points you make, but for the second, if I delete the memory through one of the pointers and then try to access it through the copy, that's completely wrong regardless of whether I set the original pointer to NULL. – celavek Aug 18 '11 at 22:35
  • Is the scenario in your first point safe in a multithreading context? – celavek Aug 18 '11 at 23:32
  • @celavek: When the destructor is entered, no other thread should try to use the object any longer. An object's lifetime starts when its constructor ends and ends when its destructor starts; accessing an object outside its lifetime is *undefined behavior*. So yes, David's advice is sound in the first point: you have already invoked undefined behavior if you are using the object from another thread at this point, and setting the pointer to NULL achieves nothing more. – Matthieu M. Aug 19 '11 at 06:20
  • While the context for the first point is somewhat limited, I don't quite agree with the dismissive sentiment. Setting such pointers to null can help catch errors if the destructor calls other member functions that could mistakenly try to access that member. (Of course, all of this could be avoided by using smart pointers...) – jamesdlin May 07 '16 at 06:17

I recommend doing so.

  • Obviously, it's valid in a context where NULL is a valid value of the pointer. This, of course, means if it's used in other places, it must be checked.
  • Even if the pointer may, technically, not be NULL, it does help in real-world scenarios when customers send you a crash dump. If it's NULL and it's not supposed to be (and it wasn't trapped in testing with the assert() that you should do), then it's easy to see that this is the problem - you'll crash at something like mov eax, [edx+4], you'll see that edx is 0, and you know what the problem is. If, on the other hand, you don't NULL the pointer, but it is deleted, then all sorts of things may happen. It may work, it may crash immediately, it may crash later, it may show weird things - at this point, anything that happens is soft-non-deterministic.
  • Defensive programming is King. That includes setting a pointer to NULL extraneously, even if you think you don't have to, and doing that extra NULL check in a few places even though you technically know it isn't supposed to happen.
  • Even if you have the logic error of a pointer going through delete twice, it's better to not crash and handle it safely than to crash. That may mean you do that extra check, and you'll probably want to log that, or maybe even end the program gracefully, but it's not just a crash. Customers don't like that.

Personally, I use an inline function that takes the pointer as a reference and sets it to NULL.
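The answer doesn't show that function; a plausible sketch (the name safeDelete is illustrative, not from the answer) might look like this:

```
#include <cstddef>

template <typename T>
inline void safeDelete(T*& p)
{
    delete p;    // delete on NULL is a no-op, so no guard is needed
    p = NULL;    // the reference parameter lets us null the caller's pointer
}
```

Called as safeDelete(widget);, it releases the object and invalidates the caller's pointer in one step.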

Chris Walton
  • On item 2: If you are to follow that path, then you should recommend not setting it to null, but rather to `reinterpret_cast<T*>(1)`, which is an invalid pointer on most architectures (alternatively any other odd number). Setting it to 0 means that you will not be able to tell whether the pointer was never allocated or was already released. On the crashing/hiding-bugs side of things, I believe in extensive testing, and if you do perform testing, chances are that the crash will happen before the customer gets the software in the first place. – David Rodríguez - dribeas Aug 19 '11 at 12:16

No, it's not an anti-pattern.

Setting a pointer to NULL is a perfectly good way of indicating that the pointer is no longer pointing at anything valid. In fact, that's exactly what the NULL value is intended to mean.

If there's an anti-pattern to be avoided here, the anti-pattern would be not having a simple and well-defined set of rules/conventions for how your program manages its memory. It doesn't matter so much what those rules are, as long as they work to avoid leaks and crashes, and as long as you can and do follow them consistently.

Jeremy Friesner
  • You are definitely missing the whole anti-pattern idea. If the program manages memory correctly, why on Earth would you burn a CPU cycle on setting the pointer to null? Just leave it the way it is. If your compiler has some way to generate a diagnostic build, why on Earth would you make its job *harder*? – Hans Passant Aug 18 '11 at 23:04
  • If your program has some other simple way of knowing whether the pointer is valid or not (e.g. the parent object only allocates the sub-object in the constructor, and only deletes it in the destructor) then sure, there's no reason to NULL out the pointer. But if the sub-object may or may not be allocated at any particular point (e.g. it is a demand-constructed sub-object), then you either need a separate pointer_is_currently_valid boolean, or you can just set the pointer to NULL when it's not pointing to a valid object. – Jeremy Friesner Aug 19 '11 at 04:17
  • That's not what is being discussed here. – Hans Passant Aug 19 '11 at 10:18

It depends on how the pointer is used. If all the code that can see the pointer should "know" that it's no longer valid — especially in a destructor where the pointer is about to go out of scope anyway — there's no need to set it to null.

On the other hand, the pointer may represent an object that sometimes exists, sometimes doesn't, and you have code like if (p) { p->doStuff(); } to act upon the object when it exists. In that case, obviously, you should set it to null after deleting the object.

The important distinction in the latter case is that the lifetime of the pointer variable is much longer than the lifetime of the objects it (sometimes) points to, and its null-ness carries some significant meaning that has to be communicated to other parts of the program.
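A hedged sketch of both cases (all names here are illustrative): the destructor, where nulling buys nothing, and the demand-constructed member, where the pointer's null-ness is the "does it exist" test:

```
#include <cstddef>

class Cache { public: int lookup() const { return 42; } };

class Session
{
    Cache* cache_;   // sometimes exists, sometimes doesn't
public:
    Session() : cache_(NULL) {}
    ~Session() { delete cache_; }   // no point nulling: the member is about to vanish

    void enableCache()  { if (cache_ == NULL) cache_ = new Cache(); }
    void disableCache() { delete cache_; cache_ = NULL; }   // null-ness carries meaning

    int read() const
    {
        if (cache_) return cache_->lookup();   // act on the object only when it exists
        return -1;
    }
};
```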

Wyzard

In my opinion the anti-pattern is not delete p; p = NULL;, it's assert(this != NULL).

I use the anti-pattern, as you call it, for two reasons: first, to increase the likelihood that bad code will crash spectacularly instead of hiding the problem, and second, to make the core problem more obvious when debugging. I wouldn't litter my code with asserts just on the off chance that they might catch something.

Mark Ransom
  • The assert(this != NULL) is also somewhat duplicate code. You are going to crash immediately if this == NULL anyway, so why call it out? – Edward Feb 14 '13 at 03:47
  • @Edward, you won't *always* crash immediately - you'll need to access a member variable or call a virtual function. Still it's generally quite obvious why you crashed once you do. – Mark Ransom Feb 14 '13 at 03:51

Although I think @Mark Ransom is sort of on the right track, I would suggest there's an even more fundamental problem than just assert(this != NULL).

I think it's much more telling that you're using raw pointers and new (directly) often enough to care. While this isn't (necessarily) a code smell/anti-pattern/whatever in itself, it often points toward code that's unnecessarily similar to C and isn't taking advantage of things like the containers in the standard library.

Even if the standard containers don't fit your needs well, you can still normally wrap your pointer up into a small enough, simple enough package that techniques like this are irrelevant -- you've restricted access to the pointer in question to such a small amount of code that you can glance through it and ensure that it's only ever used when it's valid. As Hoare said, there are two ways of doing things: one is to keep the code so simple there are obviously no deficiencies; the other is to make the code so complex there are no obvious deficiencies. This technique only appears relevant (to me) when you're already dealing with the latter case.
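As an illustration of wrapping the pointer in a small, simple package, here is a hedged pre-C++11 sketch (the Holder name is made up); because all access to the raw pointer is confined to this one class, its correctness can be checked at a glance:

```
template <typename T>
class Holder
{
    T* p_;
    Holder(const Holder&);             // non-copyable: exactly one owner
    Holder& operator=(const Holder&);
public:
    explicit Holder(T* p = 0) : p_(p) {}
    ~Holder() { delete p_; }           // release happens in exactly one place

    void reset(T* p = 0) { delete p_; p_ = p; }
    T* get() const { return p_; }
    T* operator->() const { return p_; }
};
```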

Ultimately, I see the desire to do this at all as basically admitting defeat -- rather than even attempting to assure that the code is correct, you've done the equivalent of setting up a bug-zapper next to a swamp. It reduces the appearance of bugs in a small area by a small percentage, but leaves so many more free to breed that if there's any effect on the total population, it's far too small to measure.

Jerry Coffin
  • "I think it's much more telling that you're using raw pointers and new (directly) often enough to care" - the fact is I do not have a choice; I work in the contex of a very large code base which does not employ the use of smart pointers or any other techniques of properly managing dynamically allocated memory; trying to do it right would require extra development time which I do not have. I think you are perfectly right in the argument you're making here and for "new code" I always try to employ proper techniques and drive myself to learn how to do it wright. – celavek Aug 19 '11 at 10:57
  • And sometimes, yes, I have to admit defeat, hoping that I will come out on top in the next battle (yay, it sounds like something taken from a Coelho book). – celavek Aug 19 '11 at 11:00

"The reason is that the extra setting to null will only help in very limited contexts. If you are in a destructor, the pointer itself will not exist right after the destructor execution, which means that whether it is null or not does not matter."

I have to correct this statement, since it is false in C++.

Other member functions of an object may be called while the object is being destroyed (because somehow the destruction process requires it). It is generally viewed as ugly, but there is not always a good workaround.

Therefore, clearing your pointers may be the only good solution to avoid problems (i.e., the other functions being called can then test whether the pointer is still valid before using it).
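A hypothetical example of that scenario (names are illustrative): a function invoked during destruction uses the nulled pointer to detect that the member is already gone:

```
#include <cstddef>

class Connection { public: void close() {} };

class Server
{
    Connection* conn_;

    void shutdownHook()           // runs while the object is being destroyed
    {
        if (conn_ != NULL)        // safe only because the destructor nulled it
            conn_->close();
    }
public:
    Server() : conn_(new Connection()) {}

    ~Server()
    {
        delete conn_;
        conn_ = NULL;             // lets functions called during destruction
        shutdownHook();           // see that the connection is already gone
    }
};
```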

Still, a good idea in C++ is to use smart objects (probably what you are talking about): more or less, a class that holds a reference to an object and makes sure said object is released when the destructor is hit, instead of adding many objects to be destroyed all at once in one object (although the result is the same, it is cleaner).

Alexis Wilke