Let's say that we have 2 code samples, A and B:
A.

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;   // n is read as 2483
    // n never changes beyond this point
    int a[n];   // array sized at runtime (a VLA) -- this is the "forbidden" part
    for (int i = 0; i < n; i++)
        cin >> a[i];
}
B.

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;       // n is read as 2483
    // n never changes beyond this point
    int a[10000];   // fixed compile-time size, chosen large enough for any expected n
    for (int i = 0; i < n; i++)
        cin >> a[i];
}
Why is A "forbidden" by ISO C++, even though it tries to save memory by allocating only what is currently needed, while B is the preferred form despite wasting memory (e.g. allocating space for 10000 ints when only, say, 2483 are required)?
Can someone please explain this to me, including the technical details?