So, the title. It is generally known that it is possible to write a C++ program which (in theory) would require infinite time to compile. But is it possible to write such a program in plain C? Or is there a way to slow compilation down to at least several minutes with a small program?
-
Are you allowed to use preprocessor macros? If so, you could easily use them to expand the code exponentially. – JS1 Mar 29 '15 at 20:54
-
Have a look at this question about recursive macros in C/C++: http://stackoverflow.com/questions/12447557/can-we-have-recursive-macros – Salah Saleh Mar 29 '15 at 21:00
-
@JS1 There are no limitations on C features. I'd want it to work with gcc. Could you please provide an example? – rktavi Mar 29 '15 at 21:01
-
While recursive macros could get infinite *preprocessing* time, the C compiler itself will never see the generated code unless the preprocessor finally turns it loose, so I'm not sure if that will fit the bill of your requirements. – WhozCraig Mar 29 '15 at 21:04
-
Why do you ask? What is your concrete point? Perhaps edit your question to improve it. What is your metric of "small program" (do you include the preprocessing or not)? – Basile Starynkevitch Mar 30 '15 at 05:08
-
@BasileStarynkevitch I was wondering about the possibility of a DoS attack on a server during a programming contest. It came to mind while I was writing a simplified version of such a server as an assignment. – rktavi Mar 30 '15 at 11:22
-
But that is easy to counter: just set a limit (using [setrlimit(2)](http://man7.org/linux/man-pages/man2/setrlimit.2.html)...) on the compilation server, for example with a wrapper like the sketch below. – Basile Starynkevitch Mar 30 '15 at 11:28
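A minimal sketch of that counter-measure, assuming a POSIX system where the judging server forks and execs the compiler; the file names ("submission.c") and the particular limits chosen here are hypothetical:

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap CPU time: SIGXCPU after 60 s, SIGKILL at the 65 s hard limit. */
    struct rlimit cpu = { .rlim_cur = 60, .rlim_max = 65 };
    if (setrlimit(RLIMIT_CPU, &cpu) != 0) { perror("setrlimit cpu"); return 1; }

    /* Cap the address space at 512 MiB so a memory-hungry compile fails fast. */
    struct rlimit mem = { .rlim_cur = 512UL << 20, .rlim_max = 512UL << 20 };
    if (setrlimit(RLIMIT_AS, &mem) != 0) { perror("setrlimit mem"); return 1; }

    /* Resource limits are inherited across exec, so the compiler runs under them. */
    execlp("gcc", "gcc", "submission.c", "-o", "submission", (char *)NULL);
    perror("execlp"); /* only reached if exec fails */
    return 1;
}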
3 Answers
Here is the example you asked for, with macros that expand exponentially.
#define FOO i++ // Substitute any statement here for i++
#define FOO1 FOO ; FOO
#define FOO2 FOO1 ; FOO1
#define FOO3 FOO2 ; FOO2
#define FOO4 FOO3 ; FOO3
#define FOO5 FOO4 ; FOO4
#define FOO6 FOO5 ; FOO5
// Keep going however much you want
// ...
#define FOO40 FOO39 ; FOO39
volatile int i;
int main(void)
{
FOO40; // This expands to 2^40 statements.
}
I used FOO18 for a timing test to see what would happen. I tested both the preprocessor time and the compilation time separately:
(Preprocessor phase)
time gcc -E foo.c -o foo.i
1.7 seconds
(Compilation phase)
time gcc foo.i -o foo
21 seconds
Out of curiosity I kept trying bigger and bigger values. Unfortunately, at some point the compiler ran out of memory (the preprocessor was fine). I got this error:
cc1: out of memory allocating 16842751 bytes after a total of 403505152 bytes
At FOO16 with -O2, I was able to get a compilation time of 2:23 without running out of memory. So if you want to get an even longer compilation time, first find out how many statements you can put into a single function without running out of memory (FOO16 for me). Then make several functions, like this:
int main(void)
{
FOO16;
}
void bar1(void)
{
FOO16;
}
void bar2(void)
{
FOO16;
}
// etc...
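For instance, instead of typing the functions out by hand, a small generator program could emit as many of them as you like (a sketch; the file name genfuncs.c and the counts are made up):

/* genfuncs.c: emits a C file containing main plus nfuncs extra
   functions, each expanding FOO16 (2^16 statements). Compile time
   should grow linearly with nfuncs, while peak memory stays bounded
   by the size of a single function. */
#include <stdio.h>

int main(void)
{
    const int levels = 16;  /* FOO16 = 2^16 statements per function */
    const int nfuncs = 10;  /* add more functions for more time */

    printf("#define FOO i++\n");
    printf("#define FOO1 FOO ; FOO\n");
    for (int k = 2; k <= levels; k++)
        printf("#define FOO%d FOO%d ; FOO%d\n", k, k - 1, k - 1);

    printf("volatile int i;\n");
    printf("int main(void) { FOO%d; return 0; }\n", levels);
    for (int f = 1; f <= nfuncs; f++)
        printf("void bar%d(void) { FOO%d; }\n", f, levels);
    return 0;
}

Something like gcc genfuncs.c -o genfuncs && ./genfuncs > big.c && time gcc -O2 big.c -o big should then show the linear growth.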

-
Strictly speaking, it is not "infinite compilation", but "a huge amount of time needed for compilation in the preprocessing phase". – Basile Starynkevitch Mar 29 '15 at 21:27
-
@BasileStarynkevitch See my edit. I timed both phases separately, and most of the time was spent compiling, not preprocessing. – JS1 Mar 29 '15 at 21:33
-
@JS1 Thanks, that looks great. As a follow-up question, is there any way to make it consume only time, but not memory? – rktavi Mar 29 '15 at 21:50
-
@vaeea See my recent edit. The amount of memory seems to be based on the largest function you create. So you can pick a `FOO` size that fits within memory, and then simply create multiple functions all using that same `FOO` macro. The amount of memory should stay roughly the same, but the amount of time will increase linearly with the number of functions. – JS1 Mar 29 '15 at 21:52
Experimentally, if you ask a recent GCC or Clang/LLVM to compile, with optimizations, some C code containing a single (or very few) C function(s) with many thousands of statements, the compilation time grows enormously.
In my experience, compilation with gcc -O2 of a single function with many thousands of C statements requires time proportional to the square of the number of statements (more exactly, the number of Gimple statements after preprocessing and gimplification).
The intuition explaining this is that the algorithms for register allocation, instruction scheduling, and middle-end optimizations are often worse than O(n); naive non-optimizing C compilers like tinycc, nwcc, 8cc, etc. don't have such time behavior (but generate code far worse than gcc -O2 does).
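You can reproduce that measurement with a trivial generator (a sketch; the file name genbig.c is made up) that emits a single function of n statements, then time gcc -O2 for growing n:

/* genbig.c: writes one function containing n statements to stdout;
   time `gcc -O2 -c` on the output for several values of n to observe
   the superlinear growth. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? atol(argv[1]) : 10000;
    puts("volatile long acc;");
    puts("void huge(void)");
    puts("{");
    for (long i = 0; i < n; i++)
        printf("    acc = acc * %ld + %ld;\n", i % 7 + 1, i);
    puts("}");
    return 0;
}

For example: ./genbig 10000 > big.c && time gcc -O2 -c big.c, then double n and compare the timings.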
BTW, you could play (on Linux) with my manydl.c program, which generates more or less random C code, compiles it, and dlopens it, to show that you can dlopen many hundreds of thousands of shared objects. Read its source code; it will illustrate my point.
More seriously, I ran into the same issue in the past with old versions of MELT, which generates C++ code suitable for extending GCC. I had to split some huge sequential initialization code into several routines.
You might compile such a huge function with gcc -ftime-report -O2 to understand more precisely which optimization passes the compiler is spending its time in.
Finally, if you want a small source file, you can cheat by asking it to #include "some-huge-file.c", but I believe that does not count.

-
Thank you, that's really interesting. I didn't mention it in my question, but my intent was to use as few additional tools as possible and to write minimal code. – rktavi Mar 29 '15 at 21:53
Include recursively
#include __FILE__
Most compilers will bail out early (GCC, for instance, refuses to nest includes beyond a fixed depth limit of a few hundred levels), but in theory this would cause infinite compilation.
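As a variation, GCC and Clang predefine __INCLUDE_LEVEL__, the current include nesting depth, so you can cap the self-inclusion at a chosen depth and still get a file that compiles (a sketch relying on that non-standard macro):

/* self.c: includes itself; the guard stops the recursion at depth
   100, before the compiler's own nesting limit is hit, and only the
   outermost level (depth 0) defines main. */
#if __INCLUDE_LEVEL__ < 100
#include __FILE__
#endif

#if __INCLUDE_LEVEL__ == 0
int main(void) { return 0; }
#endif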
Include (or compile directly, or link against) a device instead of a file
#include "/dev/tty"
On systems that support it, this causes the compiler to wait for input. Similarly, you could use a named pipe: create one with mkfifo pipe.h and #include "pipe.h", and the compilation blocks until another process writes to the pipe and closes it.
Find a bug in the compiler
It is possible the compiler has a logic error that will cause it to loop forever.
