I wrote a C program that generates another C program as its output.
The goal is to test performance aspects of monolithic programs.
The first test, configured with 10 000 iterations, compiled and ran fine. The second test, with 100 000 iterations, has been compiling for 3030 minutes so far on Ubuntu 12.04 x86_64, on an i7-3770 with 16 GB RAM (and 16 GB of swap).
I know parsing complexity ranges from O(n^2) to O(n^3), but this is taking far too long: in the worst case, 10x the input should mean roughly 1000x the compile time of the first test.
GCC is consuming 35.2% of memory and still rising.
My questions are:
Does GCC have a limit on the number of variables per module, or on the size of a module?
Is this a bug?
The generator program is:
#include <stdio.h>

#define MAX_INTERACTION 100000

int main(int argc, char **argv)
{
    FILE *fp = fopen("source.c", "w");
    if (fp == NULL)
    {
        perror("fopen");
        return 1;
    }

    fprintf(fp, "#include <stdio.h>\n\n\n");
    fprintf(fp, "int main(int argc, char **argv)\n");
    fprintf(fp, "{\n");

    // local variables and exchange variables
    for (int i = 0; i < MAX_INTERACTION; ++i)
    {
        // passed variable, return label, local variable
        fprintf(fp, "    int pv%d, rl%d, loc%d;\n", i, i, i);
    }
    fprintf(fp, "    int pvd = 0;\n\n\n");

    // code blocks
    for (int i = 0; i < MAX_INTERACTION; ++i)
    {
        fprintf(fp, "block%d:\n", i);
        fprintf(fp, "    loc%d = pv%d + 1;\n", i, i);
        fprintf(fp, "    goto rl%d;\n", i);
    }

    // call blocks
    for (int i = 0; i < MAX_INTERACTION; ++i)
    {
        fprintf(fp, "    pvd = pv%d;\n", i);
        fprintf(fp, "    goto block%d;\n", i);
        fprintf(fp, "rl%d:\n", i);
    }

    fprintf(fp, "    printf(\"Concluido\\n\");\n");
    fprintf(fp, "}\n");

    fclose(fp);
    return 0;
}