In some cases, such as the one described, the C++ Standard allows compilers to process constructs in whatever fashion their customers would find most useful, without requiring that the behavior be predictable. In other words, such constructs invoke "Undefined Behavior". That doesn't imply, however, that such constructs are meant to be "forbidden", since the C++ Standard explicitly waives jurisdiction over what well-formed programs are "allowed" to do. While I'm unaware of any published Rationale document for the C++ Standard, the fact that it describes Undefined Behavior much as C89 does would suggest the intended meaning is similar: "Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose. It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially undefined behavior".
There are many situations where the most efficient way to process something would involve writing the parts of a structure that downstream code is going to care about, while omitting those that downstream code isn't going to care about. Requiring that programs initialize all members of a structure, including those that nothing is ever going to care about, would needlessly impede efficiency.
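For instance, here's a minimal sketch (the struct and function names are purely illustrative, not from the question) of code that writes only the parts downstream code will read, leaving the rest of the buffer untouched:

struct message { int len; unsigned char buf[512]; };

message make_message(const unsigned char *src, int n)
{
    message m;             // buf[n..511] deliberately left unwritten
    m.len = n;
    for (int i=0; i<n; i++)
        m.buf[i] = src[i];
    return m;              // copying the struct also copies the unwritten bytes
}

Forcing every element of buf to be written before the return would add stores that no consumer of the value would ever notice.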
Further, there are some situations where it may be most efficient to have uninitialized data behave in non-deterministic fashion. For example, given:
struct q { unsigned char dat[256]; } x,y;

void test(unsigned char *arr, int n)
{
    q temp;
    for (int i=0; i<n; i++)
        temp.dat[arr[i]] = i;
    x=temp;
    y=temp;
}
if downstream code won't care about the values of any elements of x.dat or y.dat whose indices weren't listed in arr, the code might be optimized to:
void test(unsigned char *arr, int n)
{
    for (int i=0; i<n; i++)
    {
        int index = arr[i];
        x.dat[index] = i;
        y.dat[index] = i;
    }
}
This improvement in efficiency wouldn't be possible if programmers were required to explicitly write every element of temp.dat, including those that downstream code wouldn't care about, before copying it.
On the other hand, there are some applications where it's important to avoid the possibility of data leakage. In such applications, it may be useful to have a version of the code that's instrumented to trap any attempt to copy uninitialized storage, without regard for whether downstream code would ever look at it, or to have an implementation guarantee that any storage whose contents could be leaked gets zeroed or otherwise overwritten with non-confidential data.
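As a rough sketch of the second option, assuming the same struct q as above, value-initializing the temporary guarantees that nothing confidential can escape through the unwritten elements:

void test(unsigned char *arr, int n)
{
    q temp = {};    // value-initialization zeroes dat[], so unwritten elements can't leak old data
    for (int i=0; i<n; i++)
        temp.dat[arr[i]] = i;
    x=temp;
    y=temp;
}

The cost, of course, is that the transformation shown earlier is no longer available, since all 256 elements of x.dat and y.dat must now end up holding defined values.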
From what I can tell, the C++ Standard makes no attempt to say that any of these behaviors is sufficiently more useful than the others as to justify mandating it. Ironically, that lack of specification is presumably intended to facilitate optimization, but if programmers can't rely upon even weak behavioral guarantees, they end up having to initialize everything defensively, and the optimizations the latitude was supposed to enable are negated.