I'm working on a small program that deals with rather large (4-5 MB), matrix-shaped ASCII files (numeric values stored in N rows by M columns):
1 2 3
4 5 6
7 8 9
etc.
I've noticed that the old-style C file input method:
#include <stdio.h>

FILE *csFile = fopen("file.dat", "r");
double Point;
while (fscanf(csFile, "%lf", &Point) == 1) {  /* == 1 also stops on a parse failure, not just EOF */
}
fclose(csFile);
is much faster than the most basic C++ implementation (230 ms versus ~1500 ms for a 3 MB file holding about 230k numeric values):
#include <fstream>

std::ifstream myfile("file.dat");
double Point;
while (myfile >> Point) {
}
myfile.close();  // optional: the ifstream destructor closes the file
For simplicity's sake, I've omitted the data-manipulation code inside the loops, but even these "bare" examples show an almost sevenfold speed advantage for the C-style I/O. Why is there such a huge performance difference, and is there a faster way to read this kind of file using the C++ streams? (A sketch of the timing harness I'm using is below.)
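For reference, a minimal self-contained version of the benchmark looks roughly like this. The std::chrono timing, the back-to-back passes over the same file, and the main() scaffolding are assumptions of this sketch rather than my exact test code:

#include <cstdio>
#include <fstream>
#include <chrono>
#include <iostream>

int main() {
    double Point;

    // C-style pass
    auto t0 = std::chrono::steady_clock::now();
    FILE *csFile = fopen("file.dat", "r");
    if (!csFile) return 1;
    while (fscanf(csFile, "%lf", &Point) == 1) {
        // data manipulation omitted
    }
    fclose(csFile);
    auto t1 = std::chrono::steady_clock::now();

    // C++ stream pass over the same file
    std::ifstream myfile("file.dat");
    while (myfile >> Point) {
        // data manipulation omitted
    }
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](std::chrono::steady_clock::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::cout << "fscanf:   " << ms(t1 - t0) << " ms\n"
              << "ifstream: " << ms(t2 - t1) << " ms\n";
}

If anything, this ordering favors the stream version, since the second pass reads the file from a warm OS cache, yet the C loop still wins by the margin described above.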