I have a simple C file I/O program which demonstrates reading a text file line by line and outputting its contents to the console:
/**
 * simple C program demonstrating how
 * to read an entire text file
 */
#include <stdio.h>
#include <stdlib.h>

#define FILENAME "ohai.txt"

int main(void)
{
    // open a file for reading
    FILE* fp = fopen(FILENAME, "r");

    // check for successful open
    if(fp == NULL)
    {
        printf("couldn't open %s\n", FILENAME);
        return 1;
    }

    // size of each line
    char output[256];

    // read from the file
    while(fgets(output, sizeof(output), fp) != NULL)
        printf("%s", output);

    // report the error if we didn't reach the end of file
    if(!feof(fp))
    {
        printf("Couldn't read entire file\n");
        fclose(fp);
        return 1;
    }

    // close the file
    fclose(fp);
    return 0;
}
It looks like I've allocated an array with space for 256 characters per line (256 bytes, since a char is one byte). Even when I fill ohai.txt with more than 1000 characters of text on the first line, the program doesn't segfault, which I assumed it would, since I expected the long line to overflow the space set aside for it by the output[] array.
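To see what's actually ending up in the array, here's a minimal variant I could run that prints how many characters each fgets() call stores (same ohai.txt, same 256-byte buffer; the only change is printing strlen(output) instead of the text itself):

#include <stdio.h>
#include <string.h>

#define FILENAME "ohai.txt"

int main(void)
{
    FILE* fp = fopen(FILENAME, "r");
    if(fp == NULL)
    {
        printf("couldn't open %s\n", FILENAME);
        return 1;
    }

    char output[256];

    // print the length of what each fgets() call actually stored,
    // rather than the text itself
    while(fgets(output, sizeof(output), fp) != NULL)
        printf("fgets stored %zu characters\n", strlen(output));

    fclose(fp);
    return 0;
}

If my assumption about overflowing the array is right, I'd expect at least one of those counts to be well over 256 for the long line.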
My hypothesis is that the operating system keeps giving the program extra memory as long as it has memory available to give. This would mean the program would only crash once a single line of text in ohai.txt was long enough to cause a stack overflow.
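For contrast, this is the kind of unchecked read I assumed was happening somewhere under the hood (a deliberately unsafe sketch: a bare %s in fscanf has no field-width limit, so a single word longer than 255 characters would write past the end of output[], which is undefined behavior):

#include <stdio.h>

#define FILENAME "ohai.txt"

int main(void)
{
    FILE* fp = fopen(FILENAME, "r");
    if(fp == NULL)
    {
        printf("couldn't open %s\n", FILENAME);
        return 1;
    }

    char output[256];

    // unbounded read: %s with no width keeps writing until whitespace,
    // so a long enough word overruns output[] -- undefined behavior
    if(fscanf(fp, "%s", output) == 1)
        printf("%s\n", output);

    fclose(fp);
    return 0;
}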
Could someone with more experience with C and memory management support or refute my hypothesis as to why this program doesn't crash, even when the number of characters in one line of the text file is much larger than 256?