The main difference is that the version using exception handling at least might work, whereas the one using the if statement can't possibly work.
Your first snippet:
int *ptr = new int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
...seems to assume that new will return a null pointer in case of failure. While that was true at one time, it hasn't been for a long time. With any reasonably current compiler, new has only two possibilities: succeed or throw. Therefore, your second version aligns with how C++ is supposed to work.
If you really want to use this style, you can rewrite the code so new returns a null pointer in case of failure:

#include <new>   // for std::nothrow

int *ptr = new (std::nothrow) int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
In most cases, you shouldn't be using new directly anyway -- you should really use std::vector<int> p(1000); and be done with it.
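If you do want to handle failure in that case, the same exception machinery applies (a minimal sketch; the element type and size are just examples):

#include <new>      // std::bad_alloc
#include <vector>

try {
    std::vector<int> p(1000);
    // ... use p like an array; its memory is freed automatically ...
}
catch (std::bad_alloc const &) {
    // Handle error cases here...
}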
With that out of the way, I feel obliged to add that for an awful lot of code, it probably makes the most sense to do neither and simply assume that the memory allocation will succeed.
At one time (MS-DOS) it was fairly common for memory allocation to actually fail if you tried to allocate more memory than was available -- but that was a long time ago. Nowadays, things aren't so simple (as a rule). Current systems use virtual memory, which makes the situation much more complicated.
On Linux, what'll typically happen is that even if the memory isn't really available, Linux will do what's called an "overcommit". You'll still get a non-null pointer as if the allocation had succeeded -- but when you try to use the memory, bad things will happen. Specifically, Linux has what's called an "OOM Killer" that basically assumes that running out of memory is a sign of a bug, so if it happens, it tries to find the buggy program(s) and kills it/them. For most practical purposes, this means your program will probably be killed, and other (semi-arbitrarily chosen) ones may be as well.
Windows stays a little closer to the model C++ expects, so if (for example) your code were running on an unattended server, the allocation might actually fail. Long before it fails, however, it'll drag the rest of the machine to its knees, madly swapping in a doomed attempt at making the allocation succeed. If a user is actually operating the machine at the time, they'll typically either kill your program, or else kill some others to free up enough memory for your allocation to go through fairly quickly.
In none of these cases, though, is it particularly realistic to program against the assumption that an allocation can fail. For most practical purposes, one of two things happens: either the allocation succeeds, or the program dies.
That leads back to the previous advice: in a typical case, you should generally just use std::vector and assume your allocation will succeed. If you need to provide availability beyond that, you just about need to do it some other way (such as restarting the process if it dies, preferably in a way that uses less memory).