11

I am a student with limited knowledge of C++, which I am trying to expand. This is more of a philosophical question; I am not trying to implement anything.

Since

#include <new> 
//...
T * t = new (std::nothrow) T();
if(t)
{
    //...
}
//...

will suppress the exception, and since dealing with exceptions is heavier than a simple if(t), why isn't the normal new T() considered worse practice, given that we would have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)?

What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead insignificant in that case?

Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or can it only fail gracefully? There is no way to find free memory on the heap when all of it is reserved, is there?

In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, even though there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?

Thanks for your time. I hope the question is in line with the rules.

  • If new fails in your code above, what do you plan to do in the if statement? There is no way to fix the error at that point. – Martin York Dec 31 '10 at 19:36
  • @Martin, Nothing really. I was just curious about this case and if there is any advantage on using `nothrow`. Actually answers made many things clear. –  Dec 31 '10 at 19:44
  • 1
    You picked an unfortunate example in memory allocation. Applications running on modern desktop OSes generally don't throw an exception or return an error message when they run out of memory. Instead, the whole system simply freezes, while the OS fights a losing battle of "simulating" the requested memory using slower storage. But the question of exceptions vs return codes is good if applied to file I/O, network access, string parsing, or any number of other tasks. – Ben Voigt Jan 01 '11 at 03:04

6 Answers

11

Since dealing with exceptions is heavier compared to a simple if(t), why isn't the normal new T() considered worse practice, given that we have to use try/catch to check whether a simple allocation succeeded (and if we don't, just watch the program die)? What are the benefits (if any) of the normal new allocation compared to using a nothrow new? Is the exception's overhead insignificant in that case?

The penalty for using exceptions is indeed very heavy, but (in a decently tuned implementation) the penalty is only paid when an exception is actually thrown - so the mainline case stays very fast, and there is unlikely to be any measurable performance difference between the two in your example.

The advantage of exceptions is that your code is simpler: if allocating several objects, you don't have to write "allocate A; if (A) { allocate B; if (B) ... }". The cleanup and termination - in both the exception and mainline cases - is best handled automatically by RAII (whereas if you're checking manually, you will also have to free manually, which makes it all too easy to leak memory).
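
To illustrate, here is a minimal sketch (the types A and B and the function names are made up for the example) contrasting the two styles:

#include <memory>
#include <new>

struct A {};
struct B {};

// nothrow style: each allocation needs its own check, and earlier
// allocations must be freed by hand on every failure path.
bool make_both_nothrow(A*& a, B*& b)
{
    a = new (std::nothrow) A();
    if (!a) return false;
    b = new (std::nothrow) B();
    if (!b) { delete a; a = nullptr; return false; }  // manual cleanup
    return true;
}

// exception style: if the second allocation throws std::bad_alloc, the
// unique_ptr destructor frees the first object automatically (RAII).
void make_both_throwing(std::unique_ptr<A>& a, std::unique_ptr<B>& b)
{
    a.reset(new A());
    b.reset(new B());  // nothing leaks if this throws
}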

Also, assume that an allocation fails (e.g. no memory is left in the system). Is there anything the program can do in that situation, or just fail gracefully? There is no way to find free memory on the heap when all is reserved, is there?

There are many things that it can do, and the best thing to do will depend on the program being written. Failing and exiting (gracefully or otherwise) is certainly one option. Another is to reserve sufficient memory in advance, so that the program can carry on with its functions (perhaps with reduced functionality or performance). It may be able to free up some of its own memory (e.g. if it maintains caches that can be rebuilt when needed). Or (in the case of a server process), the server may refuse to process the current request (or refuse to accept new connections), but stay running so that clients don't drop their connections, and things can start working again once memory returns. Or in the case of an interactive/GUI application, it might display an error to the user and carry on (allowing them to fix the memory problem and try again - or at least save their work!).
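
As a hedged sketch of the cache idea (the global cache and the single-retry policy are assumptions made up for this example):

#include <cstddef>
#include <new>
#include <vector>

std::vector<char> g_cache;  // rebuildable cache (illustrative)

// Try to allocate; on failure, evict the cache and retry once.
// The caller owns the returned buffer and must delete[] it.
char* allocate_with_fallback(std::size_t n)
{
    try {
        return new char[n];
    } catch (const std::bad_alloc&) {
        g_cache.clear();          // drop data that can be rebuilt later...
        g_cache.shrink_to_fit();  // ...and actually return it to the heap
        return new char[n];       // retry; this may still throw
    }
}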

In case an allocation fails and a std::bad_alloc is thrown, how can we assume that since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception?

You can't assume it from the language alone, but in practice the standard library will ensure - usually by setting aside a small amount of memory in advance - that there is enough memory for an exception to be raised in the event that memory is exhausted.

psmears
  • 26,070
  • 4
  • 40
  • 48
  • 2
    The cost of exceptions is more than normal code flow, but "heavy" is a loaded word. I would bet the cost of throwing an exception up ten function calls is the same as passing an error code out through ten layers of function calls to where it can be handled. Yet the code is much more intuitive and clean (when exceptions are being used). – Martin York Dec 31 '10 at 19:21
  • @Martin York: You're right, "heavy" is indeed a loaded word :) But it's difficult to be more specific when dealing with generic questions like this - the exact penalty will depend on the implementation, platform, and on the number of those ten function calls that catch and re-throw the exception. You might bet that the cost would be the same, and you may be right; if I were in situation where I cared enough about the difference I would measure it :-) – psmears Dec 31 '10 at 19:41
  • 1
    @Martin: Exceptions are hideously more expensive. I would be surprised if checking ten return values was even noticeable compared to an exception. It's checking those ten return values during 100,000 successful operations that is worse than the exception. Therefore, for validation of user-provided data, return values are to be preferred, since failure is relatively frequent. Network operations, again fail relatively frequently, so go with return values. Allocation, never fails***, so go with the exception. [***Footnote: most systems will page to death before exhausting address space] – Ben Voigt Jan 01 '11 at 02:47
  • @Ben Voigt: "Most systems will page to death before exhausting address space" - you're certainly right when talking about most desktop systems, and probably most servers too. But some servers are configured with no swap and no overcommit - because for their workloads it is better to stop accepting new connections (but keep running, fast) than to either become dead slow, or kill a random process to regain memory. And of course on embedded systems there may not even be a mass storage device to use as swap at all :-) – psmears Jan 01 '11 at 12:06
  • @psmears: I agree completely about the disk-less embedded systems, that's why I said "most systems will page to death". On the server side, you might be interested to know that paging can occur even without a swapfile (all memory-mapped sections, including code, are still eligible for paging). I'm sure there's some OS that allows you to turn paging off completely, of course, but I don't think the mainstream server OSes have such an option. – Ben Voigt Jan 01 '11 at 16:04
  • 1
    @Ben Voigt: Yes, you're right, the picture is more complicated :) I'm not sure if you can 100% disable swapping (in the sense of removing read-only pages of file-mapped executables from RAM), but with a combination of application-level (eg `mlockall()`) and system-level (eg `/proc/sys/vm/swappiness` on Linux) tuning it's possible to achieve the aim of keeping an application responsive even in low-memory conditions, at the expense of hitting a brick wall once memory's gone. But I agree this is the exception (pardon the expression!) rather than the rule. – psmears Jan 01 '11 at 18:08
  • 1
    (And of course in the Linux case, by default malloc() and its ilk never fail - instead the memory is allocated lazily, when it is accessed, and if at *that* time it's found there's not enough, then the kernel picks a process to kill to free some up...) – psmears Jan 01 '11 at 18:10
  • @Ben Voigt: "Exceptions are hideously more expensive" - that's an urban legend. It is not true for modern compilers. Try it and see. With real code, the extra control flow for tidying up after an error is just as expensive as the exception code. – Martin York Jan 01 '11 at 19:29
  • @Martin: Are we comparing the complexity of logic inside the handler (which I agree might well be as expensive as the exception management logic itself), or the cost of performing early returns? `longjmp` is quite possibly cheaper than returning through each level of the call stack, but exceptions aren't just longjmp -- the stack has to be walked, running destructors, checking whether the exception is compatible with catch blocks, etc. Best case is when all the code between the throw site and the catch site is inlined, then exceptions are cheap (but return codes equally so). – Ben Voigt Jan 01 '11 at 19:44
  • @Ben Voigt: I am not sure if we are on the same track (we may be, but it is hard to tell in 550-byte snippets). What I am trying to say is that equivalent code (one using exceptions, the other using return codes) will have equal cost. **BUT** to make them equivalent, the code with returns needs to add manually the extra infrastructure that exceptions provide for free (checking return values, not executing the rest of the function, etc.). Now you can make the return code faster by removing the infrastructure (fine; but that is not an equivalent test). – Martin York Jan 02 '11 at 04:47
  • @Martin: If exceptions were syntactic sugar for return values (maybe in a special "error code" register) that would be true. But the implementation of exceptions (on commonly used compilers) is actually very different from return codes. The practical result is that code written with exceptions has no penalty in the fast path, which is a little better than return codes, since the instructions to check whether a procedure succeeded aren't needed. But when an exception is thrown, the compiler calls a helper routine which walks the stack and looks each instruction pointer... – Ben Voigt Jan 02 '11 at 04:58
  • looks up each instruction pointer (current IP of throw site, plus each return address) in a metadata table which lists what variables are in scope with destructors that need to be called, what catch blocks are present, what exception types can be caught by each, and so on. The net result is that throwing an exception is much much slower than bubbling a failure return code up the call stack until a function can handle it. In some toolchains, a software interrupt is executed to start this process, which is immediately several orders of magnitude more expensive than a conditional return. – Ben Voigt Jan 02 '11 at 05:00
  • @Ben Voigt: @Steve Jessop: I started with this [performance-of-c0x-exceptions](http://stackoverflow.com/questions/1018800/performance-of-c0x-exceptions/1019020#1019020), then tried to make a fair(er) comparison by adding the appropriate infrastructure that would be needed in real code: [exceptions-vs-returns](http://snipplr.com/view/46393/exceptions-vs-returns/). Now, I am the first to admit that one-off benchmarks like this are not going to prove anything (we need real code). But it does suggest that there is no big difference. – Martin York Jan 02 '11 at 05:03
  • @Martin: What I don't like about that "benchmark" is that it's exactly the easy case I described a couple comments ago -- the compiler can inline the whole try block and convert the throw statement into a jump directly to the catch block. And same for the return code, all the redundant failure tests can be optimized away. A better test would be if there was an indirect call on the stack (since the exception vs return code debate usually takes place in the context of some library that supports callbacks, or in the context of heavily-OO code with tons of virtual functions). – Ben Voigt Jan 02 '11 at 05:10
  • @Ben Voigt: I agree the exception code does work (no doubt about it). What I am claiming is that this work also needs to be done when you do a return-code version of error checking. The problem is that the extra work needs to be done manually (and most tests leave this work out, which is, I suppose, like real life, where people don't check error codes). But to have a fair test you must compare apples to apples: the work that is done by the exception must also be done by the return code; you cannot ignore this cost when making a comparison (which you are doing). – Martin York Jan 02 '11 at 05:17
  • @Ben Voigt: I agree benchmarks are really hard to get meaningful results from (which is why I deliberately stayed away from the phrases "this shows" or "this proves" and went with merely "this suggests"). Please feel free to add the appropriate virtual calls or calls through function pointers that can't be optimized away. – Martin York Jan 02 '11 at 05:19
  • @Martin: Definitely those destructors have to be called in either version. It's the walking the stack and loading and processing the metadata that isn't necessary for the return code version. – Ben Voigt Jan 02 '11 at 15:13
  • @Ben Voigt: And the exception code does not have any control flow statements (that are required in the return version to make them equivalent). – Martin York Jan 02 '11 at 19:33
  • @Martin: Not in the fast path, but in the exception-processing case it most certainly does. They're in the compiler-provided stack-walking code instead of in the user's function, but they're there and they're more complex than the return value checking flow control. – Ben Voigt Jan 02 '11 at 19:35
  • @Ben Voigt: Yes, that's exactly why there is no huge difference between returns and exceptions. Both versions have control flow. With exceptions it is auto-generated; with returns it must be written manually, but it is the same thing. Yes, the exception machinery looks complex because it is generic (it has to handle all types of function) while the return path is specific to the function (so it looks less complex). But in reality, over multiple call levels, the cost is the same. – Martin York Jan 02 '11 at 23:34
  • @Martin York, @Ben Voigt: This discussion has gone on a bit :) I guess the point is that there are two methods of implementing exceptions - one that more or less exactly mirrors the return-code equivalent (in both the generated code, and the performance), and another that makes a tradeoff of having exactly zero overhead in the non-exception case (i.e. not even the `if` statements of the return-code approach), for a much more complex exception path (which instead of just executing the cleanup code, has to figure out what to do by examining and interpreting the call stack). – psmears Jan 03 '11 at 08:28
6

Nothrow was added to C++ primarily to support embedded-systems developers who want to write exception-free code. It is also useful if you actually want to handle memory errors locally, as a better solution than malloc() followed by a placement new. And finally, it is essential for those who wished to continue using (what were then current) C++ programming styles based on checking for NULL. [I proposed this solution myself; it's one of the few things I proposed that didn't get downvoted :]

FYI: throwing an exception on out-of-memory is very design-sensitive and hard to implement, because if you were, for example, to throw a string, you might double-fault, since the string performs heap allocation. Indeed, if you're out of memory because your heap crashed into the stack, you mightn't even be able to create a temporary! This particular case explains why the standard exceptions are fairly restricted. It is also why, if you're catching such an exception fairly locally, you should catch by reference rather than by value (to avoid a possible copy causing a double fault).
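
A minimal sketch of the catch-by-reference point (the allocation size is arbitrary; it's just something likely to fail):

#include <cstdio>
#include <new>

void attempt()
{
    try {
        int* p = new int[1000000000];     // may throw std::bad_alloc
        delete[] p;
    } catch (const std::bad_alloc& e) {   // by reference: no copy is made
        // Catching by value would copy the exception object; for richer
        // exception types that copy could itself need memory.
        std::fputs(e.what(), stderr);     // bad_alloc::what() needs no allocation
    }
}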

Because of all this, nothrow provides a safer solution for critical applications.

Yttrill
  • 4,725
  • 1
  • 20
  • 29
4

I think the rationale for using the regular new instead of the nothrow new is connected to the reason why exceptions are usually preferred to explicitly checking the return value of each function. Not every function that needs to allocate memory necessarily knows what to do if no memory can be found. For example, a deeply-nested function that allocates memory as a subroutine of some algorithm probably has no idea what the proper course of action is if memory can't be found. Using a version of new that throws an exception allows the code that calls the subroutine, not the subroutine itself, to take a more appropriate course of action. This could be as simple as doing nothing and watching the program die (which is perfectly fine if you're writing a small toy program), or signalling some higher-level program construct to start throwing away memory.

Regarding the latter half of your question, there actually could be things you could do if your program ran out of memory that would make more memory available. For example, you might have a part of your program that caches old data, and you could tell the cache to evict everything as soon as resources become tight. You could potentially page some less-critical data out to disk, which probably has more space than your memory. There are a whole bunch of tricks like this, and by using exceptions it's possible to put all the emergency logic at the top of the program, and then just have every part of the program that does an allocation not catch the bad_alloc, and instead let it propagate up to the top.
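
A sketch of that structure (the recovery action is a placeholder; what belongs there is application-specific):

#include <iostream>
#include <new>

void deeply_nested_work()
{
    int* data = new int[1 << 20];  // no local OOM handling here
    // ... use the buffer ...
    delete[] data;
}

int main()
{
    try {
        deeply_nested_work();      // bad_alloc propagates up from anywhere below
    } catch (const std::bad_alloc&) {
        std::cerr << "out of memory\n";
        // emergency logic goes here: evict caches, save the user's work, etc.
        return 1;
    }
    return 0;
}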

Finally, it usually is possible to throw an exception even if memory is scarce. Many C++ implementations reserve some space in the stack (or some other non-heap memory segment) for exceptions, so even if the heap runs out of space it can be possible to find memory for exceptions.

Hope this helps!

templatetypedef
  • 362,284
  • 104
  • 897
  • 1,065
3

Avoiding exceptions because they're "too expensive" is premature optimisation. There is practically no overhead from a try/catch if no exception is thrown.

Is there anything the program can do in that situation

Not usually. If there's no memory in the system, you probably can't even write anything to a log, or print to stdout, or anything. If you're out of memory, you're pretty much screwed.

Falmarri
  • 47,727
  • 41
  • 151
  • 191
  • 1
    The 'premature optimization' argument is a previous-century slogan that kills any reasonable discussion before it even has a chance. For instance, in time-critical environments where stability is key, you really don't want a bunch of unknown exception handling destroying the flow of your software. – StarShine Oct 14 '16 at 14:28
  • @StarShine: That's a decent argument. But exceptions being "too expensive" in the general case is not something that you should worry about. – Falmarri Oct 14 '16 at 23:31
  • I was once taught to agree with your statement, but what to think of 1) the 'general case' increasingly doesn't warrant the use of C++ and 2) the semantic meaning of what an 'exception' is tends to vary according to your mileage/programming language. I mean, the principle is nice, and it can save development time if everyone understands the same thing. In practice.. – StarShine Oct 17 '16 at 07:29
2

Running out of memory is expected to be a rare event, so the overhead of throwing an exception when it happens isn't a problem. Implementations can "pre-allocate" any memory that's needed for throwing a std::bad_alloc, to ensure that it's available even when the program has otherwise run out of memory.

The reason for throwing an exception by default, instead of returning null, is that it avoids the need for null checks after every allocation. Many programmers wouldn't bother doing that, and if the program were to continue with a null pointer after a failed allocation, it would probably just crash later with something like a segmentation fault, which doesn't indicate the real cause of the problem. The use of an exception means that if the OOM condition isn't handled, the program will immediately terminate with an error that actually indicates what went wrong, which makes debugging much easier.
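
As an illustrative contrast (Widget is a made-up type for the example): with an unchecked nothrow allocation the failure surfaces wherever the null pointer happens to be dereferenced, while the throwing form fails at the allocation itself:

#include <new>

struct Widget { int value; };

int risky()
{
    Widget* w = new (std::nothrow) Widget();
    int v = w->value;          // undefined behaviour here if the allocation failed
    delete w;
    return v;
}

int noisy()
{
    Widget* w = new Widget();  // throws std::bad_alloc on failure
    int v = w->value;          // never executed with a null pointer
    delete w;
    return v;
}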

It's also easier to write handling code for out-of-memory situations if they throw exceptions: instead of having to individually check the result of every allocation, you can put a catch block somewhere high in the call stack to catch OOM conditions from many places throughout the program.

Wyzard
  • 33,849
  • 3
  • 67
  • 87
-1

In Symbian C++ it works the other way around: if you want an exception thrown on OOM, you have to write

T* t = new(ELeave) T();

And you're right that the logic of throwing an exception on OOM is strange: a scenario that is manageable suddenly becomes a program termination.

James
  • 9,064
  • 3
  • 31
  • 49
  • 2
    That only tells us that Symbian C++ is not actually standard C++. Arguing for error codes instead of exceptions is very old and has repeatedly been shown to be wrong. A concise summary can be found here: http://www.boost.org/community/exception_safety.html – Gene Bushuyev Dec 31 '10 at 19:35
  • Wrong? Lol, that's like arguing that stick-shift car transmissions are wrong – James Dec 31 '10 at 23:47