6

Possible Duplicate:
How could pairing new[] with delete possibly lead to memory leak only?

I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].

So I was surprised to discover that the following code compiles and runs OK in both Debug and Release mode under VS2008.

class CBlah
{
public:
    CBlah() : m_i(0) {}

private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for(;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}

It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.

BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?

Any thoughts? Is this kind of misuse common?

ben
  • This is a logical error, which cannot be detected by compilers. Good IDEs might be able to spot these, though. – jwueller Nov 02 '10 at 13:27
  • This question http://stackoverflow.com/questions/1913343/how-could-pairing-new-with-delete-possibly-lead-to-memory-leak-only explains all the details thoroughly. And yes, this is a typical error. – sharptooth Nov 02 '10 at 13:30
  • 2
    @Hans Passant: No, the array is not leaked, this is UB. I linked to a question that explains all the inner workings in details. – sharptooth Nov 02 '10 at 13:31
  • Use a `std::vector`, don't delete things manually. – GManNickG Nov 02 '10 at 15:18
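Following up on that last comment, a minimal sketch of the container alternative (the class is just the one from the question, trimmed down; this is an illustration, not the only way to do it):

#include <vector>

class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};

int main()
{
    // The vector owns its 1000 elements and destroys them automatically
    // when it goes out of scope - there is no new[]/delete[] pair to get wrong.
    std::vector<CBlah> v(1000);
    return 0;
}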

9 Answers

9

It will certainly compile, because the pointer itself carries no compile-time information about whether it points to a single object or to an array. For example:

#include <iostream>

int main()
{
    int* p;
    int x;
    std::cin >> x;
    if (x == 0)
        p = new int;     // scalar new
    else
        p = new int[10]; // array new

    delete p; // correct or not? :)
    return 0;
}

Now, about running OK. This is called undefined behavior in C++: there is no guarantee what will happen - everything can run OK, you can get a segfault, you can get just plain wrong behavior, or your computer may decide to call 911. UB <=> no guarantee.

Armen Tsirunyan
  • 1
    Yes. In practice either it works or it corrupts the heap. And heap corruption is *very bad*. – sharptooth Nov 02 '10 at 13:34
  • 1
    It will compile is also not guaranteed. Basically it is UB. There are no rules. – Chubsdad Nov 02 '10 at 13:39
  • @Chubsdad: The compilation in this case is guaranteed (at least practically), isn't it? Because the compiler can never know whether a pointer points to an array or not. In general, true, UB can involve anything - even compilation failure. – Armen Tsirunyan Nov 02 '10 at 13:40
  • 2
    And the reason heap corruption is *very bad* is that, when it does crash or otherwise break, it will do so at some point *distant* from where the bug is... and if you don't pick up on this fast, you'll be spending hours or days trying to debug what may be perfectly good code, not realizing that the real bug may be in an entirely different part of the program. – Mike DeSimone Nov 02 '10 at 13:40
  • Thanks everyone... But I still don't see why the memory manager can't detect this at runtime, at least in Debug. It prefixes every allocated block with a 32-byte header (as far as I can see). This includes the number of items in the array (so delete[] knows how many times to call the destructor)... Anyways... it's UB, I agree. – ben Nov 02 '10 at 14:09
  • @ben: how do you expect the compiler to know how to treat the first 32 bits? These bits could designate the array size OR they could be the actual int to delete. The only way I can think of for the compiler to know is to store a table of addresses returned by new. Some just don't do that and I don't blame them. Anyway, explicit deletes should be rare because we should use RAII whenever possible. – Armen Tsirunyan Nov 02 '10 at 18:43
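To make the RAII suggestion concrete, here is a minimal sketch (this assumes a C++11 or later compiler, so it would not apply to VS2008 itself): the array form of std::unique_ptr remembers to use delete[] for you.

#include <memory>

struct CBlah { int m_i; };

int main()
{
    // The array specialization of unique_ptr calls delete[] in its
    // destructor, so the correct deallocation form is chosen automatically.
    std::unique_ptr<CBlah[]> p(new CBlah[1000]);
    return 0;
}   // delete[] runs here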
3

It's undefined behavior and everything is fair in love, war and undefined behavior...:)

Chubsdad
1

According to MSDN, it translates delete to delete[] when trying to delete an array (see there, for instance), though you should get a warning after compiling.

Raveline
1

The reason the Debug Memory Manager does not pick up on this error is probably because it is not implemented at the level of new/delete, but at the level of the memory manager that gets invoked by new/delete to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
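A rough sketch of that point (this replaces the global allocation functions purely for logging, using the C++11 noexcept spelling; it is not how the VS debug heap is actually implemented, and error handling is omitted): by the time the memory manager is involved, both forms are just requests for a block of bytes, and deallocation hands back nothing but a raw pointer.

#include <cstdio>
#include <cstdlib>
#include <new>

// Replacement global allocation functions that only log what the
// underlying allocator is actually asked for.
void* operator new(std::size_t n)
{
    std::printf("operator new     : %lu bytes\n", (unsigned long)n);
    return std::malloc(n);
}
void* operator new[](std::size_t n)
{
    std::printf("operator new[]   : %lu bytes\n", (unsigned long)n);
    return std::malloc(n);
}
void operator delete(void* p) noexcept
{
    std::printf("operator delete  : %p\n", p);
    std::free(p);
}
void operator delete[](void* p) noexcept
{
    std::printf("operator delete[]: %p\n", p);
    std::free(p);
}

int main()
{
    int* a = new int[10]; // the allocator just sees a request for ~40 bytes
    delete[] a;           // and the deallocator just sees a raw pointer
    return 0;
}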

Bart van Ingen Schenau
  • Actually in Visual C++ `new[]` always calls `operator new[]()` and `delete` always calls `operator delete()`, but they are just implemented the same way - via `malloc()`. No one cares, and for a reason - catching all possible errors in a C++ program is not the debug CRT's job. – sharptooth Nov 02 '10 at 13:38
0

You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...

Cedric H.
0

I don't know what makes you think it "works ok". It compiles and completes without crashing, but that does not necessarily mean there was no leak or heap corruption. Also, if you got away with it this time, that doesn't make it a safe thing to do.

Sometimes even a buffer overwrite is something you will "get away with", because the bytes you wrote to were not used (maybe they are padding for alignment). Still, you should not go around doing it.

Incidentally, new T[1] is a form of new[] and still requires delete[], even though in this instance there is only one element.
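A minimal illustration of that point (std::string is used here only because it has a non-trivial destructor, which makes the pairing matter):

#include <string>

int main()
{
    std::string* p = new std::string[1]; // array new, even though there is one element
    delete[] p;                          // so it must be paired with delete[]
    // delete p;                         // would be the same mismatch as in the question
    return 0;
}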

CashCow
0

Interesting point. Once I did a code review and tried to convince the programmers to fix a new[]/delete mismatch. I argued with "Item 5" from Effective C++ by Scott Meyers, but they countered with "What do you want? It works fine!" and showed that there was no memory leak. However, it only worked because the type was a POD type. It looks like MS tries to fix the mismatch, as pointed out by Raveline.

What would happen if you added a destructor?

#include <iostream>
class CBlah
{
  static int instCnt;
public:
    CBlah() : m_i(0) {++instCnt;}
    ~CBlah()
    {
      std::cout <<"d-tor on "<< instCnt <<" instance."<<std::endl;
      --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt=0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                    // no []
    }
    return 0;
}

Whatever silly "intelligence" fix is added to VS, the code is not portable.

Valentin H
0

Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.

Mark Ransom
-1

This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.

The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.
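A small sketch of why the "same heap" detail matters (the class-level allocation functions here stand in for a runtime whose scalar and array heaps differ; this is an illustration, not any particular runtime's implementation):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Hypothetical type whose scalar and array forms allocate from different
// places, standing in for a runtime that keeps two separate heaps.
struct CBlah
{
    void* operator new(std::size_t n)   { std::printf("scalar alloc\n"); return std::malloc(n); }
    void* operator new[](std::size_t n) { std::printf("array alloc\n");  return std::malloc(n); }
    void  operator delete(void* p)      { std::printf("scalar free\n");  std::free(p); }
    void  operator delete[](void* p)    { std::printf("array free\n");   std::free(p); }
};

int main()
{
    CBlah* p = new CBlah[10]; // allocated by the array form
    delete[] p;               // matched: released by the array form
    // delete p;              // mismatched: the block would be handed to the
    //                        // scalar form (and destructors would be skipped
    //                        // if CBlah had any)
    return 0;
}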

aschepler
  • Actually this won't bother you much: http://stackoverflow.com/questions/2245196/c-urban-myths/2245208#2245208 – sharptooth Nov 02 '10 at 13:32
  • It is UB - anything can happen. – BЈовић Nov 02 '10 at 13:49
  • The OP says that some specific code "runs ok" [or seems to]. Even if it's the case that there's some heap corruption that didn't happen to cause a crash yet, it's still true that any destructor for the deleted type would only be called once by the `delete p` version. – aschepler Nov 02 '10 at 13:54