With GCC there is a very useful compiler switch, -fdump-tree-optimized, which shows the code after the optimizations have been performed. You can discover that it all depends on Big_obj. For example, given:
struct Big_obj
{
    int elem;
    int vect[1000];
};

struct Data { Big_obj x, y; };
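Since the question's foo() is not quoted here, a hypothetical reconstruction (a guess on my part, consistent with the dump below) would be something like:

int foo(const Data& a, int pos)
{
    // Copy one of the two Big_obj members, then use only its elem field.
    Big_obj x = (pos == 1) ? a.x : a.y;
    return x.elem * x.elem;
}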
Compiling with g++ -Wall -O3 -fdump-tree-optimized will produce a .165t.optimized file containing:
int foo(const Data&, int) (const struct Data & a, int pos)
{
  int x$elem;
  const struct Big_obj * iftmp.4;
  int _8;

  <bb 2>:
  if (pos_2(D) == 1)
    goto <bb 3>;
  else
    goto <bb 4>;

  <bb 3>:
  iftmp.4_4 = &a_3(D)->x;
  goto <bb 5>;

  <bb 4>:
  iftmp.4_5 = &a_3(D)->y;

  <bb 5>:
  # iftmp.4_1 = PHI <iftmp.4_4(3), iftmp.4_5(4)>
  x$elem_7 = MEM[(const struct Big_obj &)iftmp.4_1];
  _8 = x$elem_7 * x$elem_7;
  return _8;
}
This is exactly the optimized code you posted: the copy of the whole Big_obj has been elided, and only elem is read through a pointer to a.x or a.y. But if you change Big_obj (the type of vect is changed from an array to a std::vector):
#include <vector>

struct Big_obj
{
    int elem;
    std::vector<int> vect;
};
the optimization won't be performed anymore (i.e. foo() will also allocate and deallocate memory for x.vect).
In this example the reason is the as-if rule: optimizations may only apply code transformations that do not change the observable behavior of the program. operator new could have a custom implementation that counts how many times it gets called, and this is hard for the compiler to rule out. But even without a custom operator new, there are other issues (see Does allocating memory and then releasing constitute a side effect in a C++ program?).
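Here is a minimal sketch of why the allocation is observable (the counter, the replaced operator new and the main() driver are illustrative, not from the question): counting calls to the global operator new turns every allocation into part of the program's observable output, so eliding the copy of x.vect would change what the program prints.

#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

static std::size_t new_calls = 0;  // number of times operator new has run

// Replaceable global allocation function: counting its calls makes every
// allocation visible in the program's output.
void* operator new(std::size_t size)
{
    ++new_calls;
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    std::free(p);
}

struct Big_obj
{
    int elem;
    std::vector<int> vect;
};

struct Data { Big_obj x, y; };

int foo(const Data& a, int pos)
{
    Big_obj x = (pos == 1) ? a.x : a.y;  // copying x.vect calls operator new
    return x.elem * x.elem;
}

int main()
{
    Data d{};
    d.x.vect.resize(1000);
    d.y.vect.resize(1000);

    const std::size_t before = new_calls;
    foo(d, 1);
    std::printf("allocations inside foo(): %zu\n", new_calls - before);
}

With the plain array member there is nothing for such a counter to observe inside foo(), which is why GCC is free to drop the copy entirely.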
With other compilers you have to study the generated assembly instead (e.g. compile with the clang++ -S switch), but the same reasoning holds.