The following simple program is behaving unpredictably. Sometimes it prints "0.00000", sometimes it prints more "0"s than I can count, and sometimes it uses up all the memory on the system before the system either kills some process or the program fails with bad_alloc.
#include <stdio.h>

int main() {
    fprintf(stdout, "%.*f", 0.0);
}
I'm aware that this is incorrect usage of fprintf: there should be another argument specifying the precision for the formatting. It's just surprising that the behavior is so unpredictable. Sometimes it seems to use a default precision, while sometimes it fails very badly. Could this not be made to always fail or always use some default behavior?
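For reference, this is what I understand the corrected call should look like, with the precision passed as an int before the value (the precision of 5 here is just an assumption on my part, chosen to match the "0.00000" output I sometimes see):

#include <stdio.h>

int main() {
    /* "%.*f" consumes two arguments: an int giving the precision,
       followed by the double to print. With precision 5 this prints
       "0.00000", which matches the output observed above. */
    fprintf(stdout, "%.*f", 5, 0.0);
    return 0;
}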
I came across similar usage in some code at work and spent a lot of time figuring out what was happening. It only seemed to happen with debug builds, but it would not happen while debugging with gdb. Another curiosity is that running the program through valgrind would consistently trigger the case where many "0"s are printed, which otherwise happens quite seldom, but the memory exhaustion issue never occurred there either.
I am running Red Hat Enterprise Linux 7, and the program was compiled with gcc 4.8.5.