Here's a small program for a college assignment:
#include <unistd.h>
#ifndef BUFFERSIZE
#define BUFFERSIZE 1
#endif
int main(void)
{
    char buffer[BUFFERSIZE];
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);
    while (i > 0)
    {
        write(1, buffer, i);
        i = read(0, buffer, BUFFERSIZE);
    }
    return 0;
}
There is an alternative version using the stdio.h fread and fwrite functions instead.
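A minimal sketch of that fread/fwrite variant (the exact details may differ slightly from the code I actually compiled):

#include <stdio.h>

#ifndef BUFFERSIZE
#define BUFFERSIZE 1
#endif

int main(void)
{
    char buffer[BUFFERSIZE];
    size_t n;

    /* Copy stdin to stdout in chunks of at most BUFFERSIZE bytes */
    n = fread(buffer, 1, BUFFERSIZE, stdin);
    while (n > 0)
    {
        fwrite(buffer, 1, n, stdout);
        n = fread(buffer, 1, BUFFERSIZE, stdin);
    }
    return 0;
}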
Well, I compiled both versions of the program with 25 different values of BUFFERSIZE: 1, 2, 4, ..., 2^i with i = 0..30.
This is an example of how I compile one of them: gcc -DBUFFERSIZE=8388608 prog_sys.c -o bin/psys.8M
The question: on my machine (Ubuntu Precise 64-bit, more details at the end) the program works fine for most buffer sizes: ./psys.1M < data
(data is a small file with 3 lines of ASCII text.)
The problem is: when the buffer size is 8MB or greater, both versions (the one using system calls and the one using the C library functions) crash with a segmentation fault.
I tested many things. The first version of the code was like this:

(...)
int main(void)
{
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);
(...)
This crashes when I call the read function. But with these versions:
int main(void)
{
    char buffer[BUFFERSIZE]; // SEGMENTATION FAULT HERE
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);

and

int main(void)
{
    int j = BUFFERSIZE; // SEGMENTATION FAULT HERE
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);
Both of them crash (SEGFAULT) on the first line of main. However, if I move the buffer out of main to global scope (so it is allocated in static storage rather than on the stack), it works fine:
char buffer[BUFFERSIZE]; // NOW GLOBAL AND WORKING FINE

int main(void)
{
    int j = BUFFERSIZE;
    int i;

    i = read(0, buffer, BUFFERSIZE);
I am using Ubuntu Precise 12.04 64-bit on a first-generation Intel i5 M 480.
# uname -a
Linux hostname 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
I don't know what limits the OS places on the stack. Is there some way to allocate big data on the stack, even if it is not good practice?
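For reference, here is a minimal sketch of how the process's stack limit could be inspected with getrlimit (I have not actually tried this yet, so treat it as an assumption on my part rather than output from my machine):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_STACK holds the soft and hard limits on the stack size, in bytes */
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
    {
        perror("getrlimit");
        return 1;
    }
    printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    printf("stack hard limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
    return 0;
}

If the soft limit turns out to be around 8MB, that would line up with the buffer size at which the crashes start.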