0

The second delete in the following code crashes the program because the memory has already been deleted:

int* i = new int;
delete i;
delete i;

Trying to catch it with an exception handler doesn't help either:

int* i = new int;
delete i;
try {
  delete i;
} catch (const std::exception& e) { // the program just crashes; it never enters this catch block
  std::cout << "delete failed" << std::endl;
}

How to perform a safe delete (check first if the region pointed to by the pointer has already been deleted)?

Or, if that's not possible, how can I print out the line number where the crash occurs (without a debugging tool)?

gerrytan
    There's no way to perform a safe delete. – 101010 May 27 '14 at 07:15
  • The best you can do is set i to NULL after deleting. Alternatively, just don't double delete! (Use RAII, and you shouldn't need to delete at all). – Oliver Charlesworth May 27 '14 at 07:16
  • There's no portable way. It depends on the architecture, OS, allocation library... In some cases it might not even crash on the second delete, but much later. In some cases it might raise a SIGABRT... in others a SIGSEGV. – jsantander May 27 '14 at 07:16
  • @OliCharlesworth: `nullptr` in C++11 – Jarod42 May 27 '14 at 07:16
  • Or define your own overloaded delete operator (e.g. for debug builds) if you can accept the overhead (i.e. detect and throw, don't silently ignore)... – Adriano Repetti May 27 '14 at 07:17
  • Guys, I'll go with setting the pointer to NULL; that works around this particular problem. However, there's no standard, portable way to know whether a pointer is valid. – 101010 May 27 '14 at 07:19
  • Setting the pointer to NULL (or otherwise adding in some logic to avoid a double-delete) solves one particular instance of the problem. But if you also want to ensure that memory problems don't creep in again as more code gets added, the way to do that is to use smart pointers so that you never need to explicitly call delete in the first place. If you never have to call delete, there's no chance of messing it up. (Using smart pointers not only prevents double-delete errors, but also memory leaks and premature deletes--all of which can be very tricky to track down and debug otherwise) – Jeremy Friesner May 27 '14 at 07:25

5 Answers

4

delete does not try to detect whether the pointer is valid; it simply deallocates the memory the pointer points to. You can set i to nullptr after each deletion, and check if (i == nullptr) before deleting again (although the check is unnecessary: deleting a nullptr is a no-op, so it effectively does nothing).
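
For example, a minimal sketch of that pattern (the variable name is just for illustration):

int* i = new int(42);

delete i;
i = nullptr;  // mark the pointer as no longer owning anything

delete i;     // deleting a null pointer is a no-op, so this second delete is harmless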

If you are just playing around, this kind of code may help you learn the language better. But in production code you should be careful about this pattern and eliminate it; a double delete is also a good indicator that your code may have other resource-management bugs.

Rakib
1

The modern C++ solution is to never use new or delete directly. Just let C++ handle the lifetime automatically:

std::unique_ptr<int> i = std::make_unique<int>();

or

std::shared_ptr<int> i = std::make_shared<int>();

There is no need to delete it; the memory is released automatically when the pointer goes out of scope. If you do not have make_unique (it was only added in C++14), you can write your own.
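
If it helps, here is a common sketch of the single-object form for pre-C++14 compilers (essentially the implementation that was proposed for the standard):

#include <memory>
#include <utility>

template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args) {
  // perfect-forward the constructor arguments and wrap the result immediately
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}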

nwp
0

You could set the pointer that you delete to 0 or NULL (prior to C++11), or to nullptr (with C++11 and later compilers). However, this only works around the problem, as in the example below:

void deletePointer(int* & iptr) {
  delete iptr;
  iptr = nullptr;
}
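
With that helper, calling it twice is harmless, because the second call ends up deleting a null pointer, which is a no-op (the name p below is just for illustration):

int* p = new int(5);
deletePointer(p);  // deletes the int and sets p to nullptr
deletePointer(p);  // p is now nullptr, so this delete does nothing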

There is no portable, standard way to test whether a pointer is "valid" for deletion. The best you can do, if your compiler supports C++11, is to use smart pointers; then you don't have to worry about invalid deletions at all.

Wagner Patriota
101010
0
{
  std::unique_ptr<int> i(new int);
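  // the int is released automatically when i goes out of scope; there is no delete to get wrong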
}
Chris Drew
  • That's the fix. It's not how to find the bug. – Lightness Races in Orbit May 27 '14 at 10:03
  • @LightnessRacesinOrbit The question is "How to perform a safe delete", no? – Chris Drew May 27 '14 at 10:09
  • No. It's "How to perform a safe delete (check first if the region pointed to by the pointer has already been deleted)?" You have to read _all_ of it! He's asking whether he can make `delete` immune to previous `delete`s. The answer is "no". Quite right, he can and should avoid the entire thing by not writing _any_ `delete`s, but that's not the answer to the stated question. – Lightness Races in Orbit May 27 '14 at 12:18
  • @LightnessRacesinOrbit I read the part in brackets as being OP's assumption on how a safe delete would work. And I am saying that assumption is wrong. Anyway, you could say that the equivalent to `delete` with RAII is closing scope and it is safe to close scope twice. I can add an extra pair of braces if you like! – Chris Drew May 27 '14 at 12:41
0

Let's start by saying that a double delete is undefined behaviour... and that means behaviour which is not defined :)

Allow me also to point out that deleting a null pointer is a no-op, so there is no need to check whether the pointer is null before deleting it.

Without more context, the answer is that there is no portable solution.

Depending on the architecture, the OS, and even the memory allocation library you are using, a double delete will produce different effects and give you different options for detecting it.

There's no guarantee that the program will crash on the second delete. It might simply corrupt the memory allocator's internal structures in a way that only causes a crash some time later.

If your objective is detecting the problem, your best chance is to set up your program to capture crashing signals (e.g. SIGABRT, SIGSEGV, SIGBUS...) and print a stack trace in the signal handler, before allowing the program to terminate or write the core file. As I said above, this may or may not be the place of the memory corruption... but it will be the place where the memory allocator/program cannot go on any more.

That's the least invasive option.
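
For example, a minimal Linux/glibc-specific sketch of such a handler (it assumes backtrace from <execinfo.h>; build with -g and -rdynamic to get readable symbols, and treat it as a debugging aid rather than production code):

#include <csignal>
#include <execinfo.h>
#include <unistd.h>

extern "C" void crashHandler(int sig) {
  void* frames[64];
  int count = backtrace(frames, 64);                    // capture the current call stack
  backtrace_symbols_fd(frames, count, STDERR_FILENO);   // print it to stderr without calling malloc
  _exit(128 + sig);                                     // terminate; do not return into the corrupted program
}

int main() {
  std::signal(SIGABRT, crashHandler);
  std::signal(SIGSEGV, crashHandler);

  int* i = new int;
  delete i;
  delete i;  // the double delete typically raises SIGABRT or SIGSEGV, here or later
}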

Using customized memory allocators, or memory allocators with debugging options (e.g. libumem on Solaris), can help you detect the problem earlier or pinpoint it more accurately. The catch is that there is usually some performance penalty, larger or smaller.
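
As a very rough illustration of that idea (and of the overloaded delete operator suggested in the comments), here is a toy replacement of the global operator new/delete that tracks live allocations in a fixed-size table and aborts on a double delete. It is only a sketch: it is not thread-safe, does not replace the array or aligned forms, and tracks a limited number of allocations.

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

namespace {
  constexpr std::size_t kMaxTracked = 4096;
  void* g_live[kMaxTracked];    // pointers currently owned by the program
  std::size_t g_liveCount = 0;
}

void* operator new(std::size_t size) {
  void* p = std::malloc(size);
  if (!p) throw std::bad_alloc();
  if (g_liveCount < kMaxTracked) g_live[g_liveCount++] = p;  // record the allocation
  return p;
}

void operator delete(void* p) noexcept {
  if (!p) return;                              // deleting nullptr is a no-op
  for (std::size_t i = 0; i < g_liveCount; ++i) {
    if (g_live[i] == p) {
      g_live[i] = g_live[--g_liveCount];       // forget the pointer and release the memory
      std::free(p);
      return;
    }
  }
  std::fprintf(stderr, "double or invalid delete of %p\n", p);
  std::abort();                                // fail fast instead of corrupting the heap
}

int main() {
  int* i = new int;
  delete i;
  delete i;  // caught by the replaced operator delete: prints a message and aborts
}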

If your objective is to prevent the problem... then you have to resort to best practices. For example:

  1. Use RAII or smart pointers in general, or at least use them when you cannot safely establish memory ownership throughout your program.
  2. At the very least, always remember to set a pointer to null after you have deleted it. That doesn't guarantee anything, because you can always have concurrent deletes... but it helps reduce the scenarios where you could have a crash.
jsantander
  • I dislike the set to null advice. It just hides the error. I am more in favor of letting it crash and fixing the double delete. Or your second advice. – nwp May 27 '14 at 07:38
  • @nwp reversed the order to imply a preference for smart pointers. Still, I don't dislike advising people to set deleted pointers to 0 so much... if it helps me avoid a crash at peak hour that leaves many users without service, I'd certainly do it. Sometimes your code base or your environment does not allow you to bring in the *revolution* and start changing everything. – jsantander May 27 '14 at 07:46