
Consider the following simple code that makes use of new (I am aware there is no delete[], but it does not pertain to this question):

int main()
{
    int* mem = new int[100];

    return 0;
}

Is the compiler allowed to optimize out the new call?

In my research, g++ (5.2.0) and Visual Studio 2015 do not optimize out the new call, while clang (3.0+) does. All tests have been made with full optimizations enabled (-O3 for g++ and clang, Release mode for Visual Studio).

Doesn't new make a system call under the hood, making it impossible (and illegal) for the compiler to optimize it out?

EDIT: I have now excluded undefined behaviour from the program:

#include <new>  

int main()
{
    int* mem = new (std::nothrow) int[100];
    return 0;
}

clang 3.0 does not optimize that out anymore, but later versions do.

EDIT2:

#include <new>  

int main()
{
    int* mem = new (std::nothrow) int[1000];

    if (mem != 0)
      return 1;

    return 0;
}

clang always returns 1.

Banex

5 Answers


The history seems to be that clang follows the rules laid out in N3664: Clarifying Memory Allocation, which allows the compiler to optimize around memory allocations, but as Nick Lewycky points out:

Shafik pointed out that seems to violate causality but N3664 started life as N3433, and I'm pretty sure we wrote the optimization first and wrote the paper afterwards anyway.

So clang implemented the optimization, which later became a proposal that was adopted as part of C++14.

The base question is whether this was a valid optimization prior to N3664. That is a tough question: we would have to appeal to the as-if rule, covered in the draft C++ standard section 1.9 Program execution, which says (emphasis mine):

The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.5

where note 5 says:

This provision is sometimes called the “as-if” rule, because an implementation is free to disregard any requirement of this International Standard as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced.

Since new could throw an exception, which would have observable behavior (it would alter the return value of the program), that would seem to argue against this being allowed by the as-if rule.

Although, it could be argued that it is an implementation detail when to throw an exception, and therefore clang could decide that even in this scenario it would not cause an exception, so eliding the new call would not violate the as-if rule.

It also seems valid under the as-if rule to optimize away the call to the non-throwing version as well.

But a replacement global operator new could be defined in a different translation unit, which could make this affect observable behavior, so the compiler would have to have some way of proving this was not the case; otherwise it would not be able to perform this optimization without violating the as-if rule. Previous versions of clang did indeed optimize in this case, as this godbolt example shows (provided by Casey here), taking this code:

#include <cstddef>

extern void* operator new(std::size_t n);

template<typename T>
T* create() { return new T(); }

int main() {
    auto result = 0;
    for (auto i = 0; i < 1000000; ++i) {
        result += (create<int>() != nullptr);
    }

    return result;
}

and optimizing it to this:

main:                                   # @main
    movl    $1000000, %eax          # imm = 0xF4240
    ret

This indeed seems way too aggressive but later versions do not seem to do this.

Shafik Yaghmour

This is allowed by N3664.

An implementation is allowed to omit a call to a replaceable global allocation function (18.6.1.1, 18.6.1.2). When it does so, the storage is instead provided by the implementation or provided by extending the allocation of another new-expression.

This proposal is part of the C++14 standard, so in C++14 the compiler is allowed to optimize out a new expression (even if it might throw).

If you take a look at the Clang implementation status it clearly states that they do implement N3664.

If you observe this behavior while compiling in C++11 or C++03, you should file a bug.

Notice that before C++14, dynamic memory allocations were part of the observable state of the program (although I cannot find a reference for that at the moment), so a conforming implementation was not allowed to apply the as-if rule in this case.
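To illustrate what the N3664 wording permits (the function names below are mine, and a compiler is allowed but not required to do this), a C++14 compiler may provide the storage for the new-expression in the first function itself, compiling it down to the same code as the second:

```cpp
// Under N3664, the call to the replaceable allocation function in
// sum_heap may be omitted; the storage is then "provided by the
// implementation" (e.g. on the stack), making it equivalent to sum_stack.
int sum_heap() {
    int* p = new int[4]{1, 2, 3, 4};
    int s = p[0] + p[1] + p[2] + p[3];
    delete[] p;
    return s;
}

int sum_stack() {
    int a[4] = {1, 2, 3, 4};
    return a[0] + a[1] + a[2] + a[3];
}
```

Both functions return the same value; only the (now unobservable) allocation differs, which is exactly why the elision is permitted.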

sbabbi
  • @Banex IMHO yes. The implementation is basically allowed to replace dynamic storage with automatic storage. Since allocating automatic storage cannot fail, `mem != nullptr` is always true. You should mention which standard version you are using. – sbabbi Aug 07 '15 at 11:05
  • I see. You are right, Clang 3.4+ is conformant. However, Clang 3.3, which according to their status page does not implement N3664, also optimizes such code. So at least that version has a bug. – Banex Aug 07 '15 at 11:08
  • @Banex That proposal came from the folks from clang. I believe what happened is that they first implemented that (non-trivial) optimization pass and later figured out that it was not standard-compliant... and filed a proposal to fix that. – sbabbi Aug 07 '15 at 11:14
  • That N3664 proposal is called "Clarifying Memory Allocation". The intent is not to change the standard but to make it explicit that certain optimizations are allowed. For example, it changes "A new-expression obtains storage for the object by calling an allocation function (3.7.4.1)" to "A new-expression may obtain storage for the object by calling an allocation function (3.7.4.1)". I would argue that "may obtain" was already possible under the "as-if" clause; N3664 just made it explicit. Thus I consider 3.3 conformant. – Anonymous Coward Aug 07 '15 at 11:29

Bear in mind that the C++ standard specifies what a correct program should do, not how it should do it. It cannot specify the latter at all, since new architectures can and do arise after the standard is written, and the standard has to be of use to them.

new does not have to be a system call under the hood. There are computers usable without operating systems and without a concept of system call.
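As a sketch of that point (the wrapper and its name below are mine, not the mandated implementation), the default operator new is typically a thin layer over malloc, and malloc itself only occasionally makes a system call:

```cpp
#include <cstdlib>
#include <new>

// Hypothetical stand-in for a library's default operator new: a thin
// wrapper over malloc. malloc usually serves requests from memory it
// already holds and only makes a system call (where one even exists)
// when it runs out, so a new-expression is an ordinary function call
// that the optimizer is free to reason about, not a syscall.
void* default_like_new(std::size_t n)
{
    if (void* p = std::malloc(n ? n : 1)) // malloc(0) may return null
        return p;
    throw std::bad_alloc{};
}
```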

Hence, as long as the end behaviour does not change, the compiler can optimize anything and everything away, including that new.

There is one caveat: a replacement global operator new could have been defined in a different translation unit. In that case the side effects of new could be such that they can't be optimized away. But if the compiler can guarantee that the new operator has no side effects, as would be the case if the posted code is the whole code, then the optimization is valid.

That new can throw std::bad_alloc is not an obstacle: when the new is optimized out, the compiler can guarantee that no exception will be thrown and no side effect will happen.
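A sketch of that caveat (in practice this replacement would live in another translation unit, and the printf side effect is my own addition): once a replacement allocation function with an observable side effect may be in play, eliding the new-expression would change observable behavior, so the as-if rule no longer permits it.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Replacement global allocation function for arrays. The printf is an
// observable side effect, so a compiler that cannot rule out this
// replacement being active may no longer drop a `new int[100]`.
void* operator new[](std::size_t n)
{
    std::printf("allocating %zu bytes\n", n);
    if (void* p = std::malloc(n ? n : 1))
        return p;
    throw std::bad_alloc{};
}

// Paired replacement so deallocation matches the malloc above.
void operator delete[](void* p) noexcept
{
    std::free(p);
}
```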

Anonymous Coward
  • *Bear in mind the C++ standard tells what a correct program should do, not how it should do it.* is kind of glossing over some details and they matter to this question. See the possible duplicate I linked above. – Shafik Yaghmour Aug 07 '15 at 09:37
  • I've checked it and it reinforces my position. The compiler is just required to generate code that executes "as-if". The only part which is significant is that of "A replacement global operator new could have been defined in a different translation unit" – Anonymous Coward Aug 07 '15 at 10:49
  • @JoseAntonioDuraOlmos The problem here is "is the heap part of the observable status?" If the answer is "Yes", the "as-if" rule does not apply. – sbabbi Aug 07 '15 at 10:53
  • The unallocated heap is not part of the observable status. Among other things because it is acceptable to have a heap with a size which varies in time. Optimizing away the allocation only has effects on the unallocated heap (it will be bigger than if the allocation had not been optimized). It has no effects on the spaces already allocated, and those are the ones which are observable. – Anonymous Coward Aug 07 '15 at 10:56
  • I'd venture the program has no observable effects (no `volatile` accesses or calls into opaque functions) at all. The heap itself is not observable. – Simon Richter Aug 07 '15 at 13:12

It is perfectly allowable (but not required) for a compiler to optimize out the allocations in your original example, and even more so in the EDIT1 example, per §1.9 of the standard, the provision usually referred to as the as-if rule:

Conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below:
[3 pages of conditions]

A more human-readable representation is available at cppreference.com.

The relevant points are:

  • You have no volatiles, so 1) and 2) do not apply.
  • You do not output/write any data or prompt the user, so 3) and 4) do not apply. But even if you did, they would clearly be satisfied in EDIT1 (arguably also in the original example, although from a purely theoretical point of view, it is illegal since the program flow and output -- theoretically -- differs, but see two paragraphs below).

An exception, even an uncaught one, is well-defined (not undefined!) behavior. However, strictly speaking, in case that new throws (not going to happen, see also next paragraph), the observable behavior would be different, both by the program's exit code and by any output that might follow later in the program.

Now, in the particular case of a single small allocation, you can give the compiler the "benefit of the doubt" that it can guarantee that the allocation will not fail.
Even on a system under very heavy memory pressure, it is not possible to even start a process when you have less than the minimum allocation granularity available, and the heap will have been set up prior to calling main, too. So, if this allocation were to fail, the program would never start or would already have met an ungraceful end before main is even called.
Insofar, assuming the compiler knows this, even though the allocation could in theory throw, it is legal to optimize even the original example, since the compiler can practically guarantee that it will not happen.

<slightly undecided>
On the other hand, it is not allowable (and as you can observe, a compiler bug) to optimize out the allocation in your EDIT2 example. The value is consumed to produce an externally observable effect (the return code).
Note that if you replace new (std::nothrow) int[1000] with new (std::nothrow) int[1024*1024*1024*1024ll] (that's a 4TiB allocation!), which is -- on present day computers -- guaranteed to fail, it still optimizes out the call. In other words, it returns 1 although you wrote code that must output 0.

@Yakk brought up a good argument against this: As long as the memory is never touched, a pointer can be returned, and no actual RAM is needed. Insofar, it would even be legitimate to optimize out the allocation in EDIT2. I am unsure who is right and who is wrong here.

Doing a 4TiB allocation is pretty much guaranteed to fail on a machine that doesn't have at least something like a two-digit gigabyte amount of RAM simply because the OS needs to create page tables. Now of course, the C++ standard does not care about page tables or about what the OS is doing to provide memory, that is true.

But on the other hand, the assumption "this will work if memory is not touched" relies on exactly such a detail and on something that the OS provides. The assumption that RAM that is not touched is not actually needed is only true because the OS provides virtual memory. And that implies that the OS needs to create page tables (I can pretend that I don't know about it, but that doesn't change the fact that I rely on it anyway).

Therefore, I think it is not 100% correct to first assume one and then say "but we don't care about the other".

So, yes, the compiler can assume that a 4TiB allocation is in general perfectly possible as long as memory is not touched, and it can assume that it is generally possible to succeed. It might even assume that it's likely to succeed (even when it's not). But I think that in any case, you are never allowed to assume that something must work when there is a possibility of a failure. And not only is there a possibility of failure, in that example, failure is even the more likely possibility.
</slightly undecided>

Damon
  • You want `1024LL`, not `1024`. –  Aug 07 '15 at 10:23
  • I think this answer needs a citation of why `new` should be required to throw on a 4 TiB allocation. –  Aug 07 '15 at 10:25
  • @Hurkyl: Ty, fixed that (though I think for literals it doesn't matter much, compiler didn't complain and result is the same anyway). – Damon Aug 07 '15 at 10:25
  • 1024 is a literal that denotes something of type `int`, though. `1099511627776` would give the right thing. –  Aug 07 '15 at 10:27
  • I disagree: the compiler is free to return 1. With the memory unused, the memory not allocated behaves exactly as-if it was allocated as far as the standard is concerned. `new` can return a pointer with a non-null value that points to nothing, and if the compiler can prove that no defined access to what is pointed to occurs, it passes the demands of the standard. If `delete` could be called, things get trickier, but only marginally (similar arguments could skip that call too) – Yakk - Adam Nevraumont Aug 07 '15 at 10:28
  • @Yakk: That's an interesting point of view, will have to think about that (one thing that comes to mind is: How much RAM is used to set up 4TiB worth of pages descriptors...). – Damon Aug 07 '15 at 10:32
  • @Damon C++ standard does not describe page descriptors: their state is an implementation detail, and thus irrelevant under as-if. – Yakk - Adam Nevraumont Aug 07 '15 at 10:34
  • @Yakk: Yes and no. Is it legal to assume that something that _cannot possibly_ work on _some_ legitimate target hardware (say, a computer with 4GiB of memory) won't happen? Assuming you only need one 64-bit pointer per page, and you definitely need more, you'll need 8 GiB for page descriptors. So, it's legal for the compiler to assume that it _can work_ (it will work on e.g. a 16GiB machine). But I don't think it is legal to infer from that that it's _guaranteed_ to never fail and therefore not to generate the handler code at all. – Damon Aug 07 '15 at 10:38
  • I mean, only because I put my fingers into my ears and keep singing "I don't know about how the system manages memory" does not mean that I can legitimately assume it will magically work, only because I choose not to look at it. At best, I can hope for the best (but plan for the worst). – Damon Aug 07 '15 at 10:40
  • Yes, it is legal, you keep on talking about irrelevant implementation details: as-if does not care how it would otherwise be implemented. No, it is not required that the compiler make that optimization: the compiler is free to always throw on every call to `new`, not doing so is a quality of implementation issue. Trying to allocate 4 attobytes can be done "honestly" and throwing, be turned into a `throw` without trying, or if provably never used turned into a noop. Same for allocating 1 byte (except honest branch more likely to work) – Yakk - Adam Nevraumont Aug 07 '15 at 10:45
  • 4 TB is only 2 million large pages right? And 4000 really huge pages. – MSalters Aug 07 '15 at 14:53
  • @MSalters: That is a doubly flawed logic, though. First of all, large pages do not exist on all systems, and do not exist in an easy to handle way or even _transparently_ on all systems. Second, where they exist, huge pages cannot be moved to swap. Which means you would need to have 4TiB of _physical memory_. If you have 4TiB of physical memory on a Linux machine, then yes... the logic holds. But only because you are able to construct a possible configuration where it would work, the compiler cannot simply assume that as a given for every system, in my opinion. – Damon Aug 08 '15 at 13:52
  • @Damon: Name one system that has 4TB of memory but not huge pages? Linux can break up huge pages into normal pages if they need to be swapped. And with Transparent Huge Pages it's not that complex to handle. (That said, if you have such a 4TB system, you can bother to spend a few minutes to configure them). – MSalters Aug 08 '15 at 19:13
  • @Damon: If I write `int foo(unsigned long long n) { unsigned long long a,b; a=0; for (b=0; b – supercat Nov 01 '15 at 02:56

The worst that can happen in your snippet is that new throws std::bad_alloc, which is unhandled. What happens then is implementation-defined.

With the best case being a no-op and the worst case not being defined, the compiler is allowed to factor them into non-existence. Now, if you actually try to catch the possible exception:

int main() try {
    int* mem = new int[100];
    return 0;
} catch (...) {
    return 1;
}

... then the call to operator new is kept.

Quentin
  • It is kept in that compiler. But, would it be standard conformant to optimize it away for that particular code in your answer? I think so. – Anonymous Coward Aug 07 '15 at 09:17
  • @JoseAntonioDuraOlmos if you change the `100` to some huge value, you will expect the allocation to fail, and optimizing the `new` away would mean changing the observable behaviour of the program. The compiler can't just always fail either, because that same program could be run on a machine with 3 Exabytes of memory in the future and be expected to succeed. – Quentin Aug 07 '15 at 09:21
  • @Jarod42 this one is curious, both success and failure lead to a no-op but it isn't optimized away. But it's much harder to find out why a compiler keeps code than why it throws it away. Edit: well, OP sorted it out: later versions do remove it. – Quentin Aug 07 '15 at 09:24
  • @JoseAntonioDuraOlmos and now that I tried it with Clang 3.6... it actually always returns zero. That's a bug. – Quentin Aug 07 '15 at 09:26
  • Please provide an argument why it is a bug. How does it not pass the as-if test: what paragraph of the standard does returning `0` violate? Note that when to throw is *unspecified*. – Yakk - Adam Nevraumont Aug 07 '15 at 10:33
  • @Yakk Isn't an allocation failure guaranteed to throw ? – Quentin Aug 07 '15 at 10:54
  • @Quentin When allocations fail is implementation-defined. As a successful allocation has no side effects beyond returning `0`, a program that returns `0` behaves as-if the allocation succeeds, and as such is a conforming program *with a successful allocation* (even if it is measured in attobytes). Allocation failure is merely a quality of implementation issue. (Note that a program that fails every allocation is conforming.) – Yakk - Adam Nevraumont Aug 07 '15 at 10:57