
When I wrap "raw" resources in a C++ class, in the destructor I usually just release the allocated resource(s), without additional steps like zeroing out pointers. For example:

class File
{
public:
  ...

  ~File()
  {
    if (m_file != NULL)
      fclose(m_file);
  }

private:
  FILE * m_file;
};

I wonder if this code style contains a potential bug: i.e. is it possible that a destructor is called more than once? In this case, the right thing to do in the destructor would be to clear pointers to avoid double/multiple destructions:

~File()
{
  if (m_file != NULL)
  {
    fclose(m_file);
    m_file = NULL; // avoid double destruction
  }
}

A similar example could be made for heap-allocated memory: if m_ptr is a pointer to memory allocated with new[], is the following destructor code OK?

// In destructor:
delete [] m_ptr; 

or should the pointer be cleared, too, to avoid double destruction?

// In destructor:
delete [] m_ptr;
m_ptr = NULL; // avoid double destruction
  • The part of the question about setting the members to `NULL`, has been [asked](http://stackoverflow.com/questions/3060006/is-it-worth-setting-pointers-to-null-in-a-destructor) [before](http://stackoverflow.com/questions/4236736/do-i-need-to-nullify-a-member-variable-in-the-destructor). – João Portela Jan 31 '12 at 17:46

3 Answers


No. It is useful if you have a Close() function or the like:

void Close()
{
    if (m_file != NULL)
    {
        fclose(m_file);
        m_file = NULL;
    }
}
~File()
{
    Close();
}

This way, the Close() function is idempotent (you can call it as many times as you want), and you avoid one extra test in the destructor.

But since a destructor runs at most once on any given object (running it twice is already undefined behavior), assigning NULL to member pointers there is pointless.

Unless, of course, it is for debugging purposes, particularly if you suspect a double delete.

rodrigo
  • Sure for the Close() method (I use the same pattern in case there is a Close() method). –  Jan 31 '12 at 17:51
  • +1 Note an important detail: if in your application it does not make sense to `close()` before the object is destroyed (i.e. the lifetime of the object and the resource are the same), then `close()` should not be implemented (it allows for different lifetimes) and the handle should not be reset in the destructor. – David Rodríguez - dribeas Jan 31 '12 at 18:16

If a destructor is called more than once, you already have undefined behavior. This will also not affect clients that may have a pointer to the resource themselves, so this is not preventing a double delete. A unique_ptr or scoped_ptr seem to be better solutions to me.

pmr
  • 58,701
  • 10
  • 113
  • 156
  • I'm assuming the resource pointer is local to the class (no resource sharing). –  Jan 31 '12 at 17:45

In a buggy application (for example, improper use of std::unique_ptr<> can result in two std::unique_ptr<> holding the same raw pointer), you can end up with a double delete, as the second one goes out of scope.

We care about these bad cases - otherwise, what's the point of discussing setting a pointer to nullptr in the destructor? It's going away anyway!

Hence, in this example, at least, it would be better to let the program seg-fault inside a debugger during a unit-test, so you can trace the real cause of the problem.

So, in general, I don't find setting pointers to nullptr to be particularly useful for memory management.

You could do it, but a more robust alternative is to do unit tests and to judiciously use a memory checker like valgrind.

After all, with some memory errors, your program can seemingly run ok many times, until it crashes unexpectedly - much safer to do quality assurance with a memory checker, especially as your program gets larger, and memory errors become less obvious.

kfmfe04
    I always NULL out pointers so that if I mess up and reference an object after it's destroyed, I am more likely to get a segfault than silent failure. Interesting that we do opposite things for exactly the same reasons. – Mooing Duck Jan 31 '12 at 18:20
  • And there are a lot of cases where valgrind is not an option. – rodrigo Jan 31 '12 at 21:21