
I was developing programs for reading data from files in C++ and came up with two different methods for doing it, but I am not sure which one is more efficient, and I don't even know how to analyse the efficiency of a program. I have given you the two snippets below. Please tell me how to analyse their efficiency, or which one is more efficient.

ifstream inFile( "Data" );
if( !inFile ){
    cerr << "Error :Unable to open the file";
    return -1;
}

string word;
while( inFile >> word )
    cout << word << endl ;

and the other method is:

FILE* input;
input = fopen("/home/jayanarayanan/Project/Data", "rb"); //file input
if ( input == NULL ) { perror (" Error opening File "); }
else{   
    char buffer[27];

    char *in;
    in = fgets (buffer, 28, input);

    output << in;
}

3 Answers


Nobody here will be able to give you a realistic answer. Efficiency depends, among other things, on:

  • Your compiler.
  • The version of your compiler.
  • The settings with which you invoke your compiler.
  • The operating system on which the program will run.
  • The kind of data you receive.
  • How often you receive the data.
  • The rest of your program.

Measure efficiency on the target system with realistic input data. Only then will you see what's better, and if there's a relevant difference at all.


Edit: a good start (and probably better than any guessing) would be to measure the sheer time your program takes to finish.
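
For example, a minimal sketch (reusing the ifstream loop and the file name Data from the question) that measures the wall-clock time of the whole read-and-print loop with std::chrono:

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    auto start = std::chrono::steady_clock::now();

    std::ifstream inFile("Data");
    std::string word;
    while (inFile >> word)
        std::cout << word << '\n';

    auto end = std::chrono::steady_clock::now();

    // Report the elapsed wall-clock time on stderr so it doesn't mix with the data.
    std::cerr << "Elapsed: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms\n";
}

Note that this measures the printing as well as the reading; redirecting the output to a file (or /dev/null) keeps terminal speed from dominating the result.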

Christian Hackl

Formatted input >> in iostreams is known to be pretty slow.

Anyway, it seems to me that this isn't a fair comparison: ifstream >> std::string reads a single whitespace-separated word, which is not what fgets does.

You should probably use std::getline (for std::string) or istream::getline (for char[]) to get similar (though not identical) behaviour.
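
For example, a rough sketch of the line-based variant using std::getline (with the same Data file name as in the question):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream inFile("Data");
    std::string line;

    // std::getline reads up to and discards the '\n';
    // fgets keeps the '\n' and also stops at a fixed buffer size.
    while (std::getline(inFile, line))
        std::cout << line << '\n';
}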


manlio

Performance should be measured, not asked about on the internet. Measure the overall time of your application. If it's "too slow" (whatever that means), then use a profiler, or add code to measure different parts of your program, figure out where it is spending most of its time, and fix that.

I can tell you what MY machine does, but if I measure on, say, an ARM development board with a single-core processor running at 1000 MHz and a network filesystem, like the one I use at work, it will not give the same results as my home desktop machine. The bottlenecks may well be in different places, and the efficiency of the compiler for different constructs is likely different too (x86 vs. ARM).

As it currently stands, I'd say your first snippet is "safe" and the second one "unsafe", which is a much more critical factor than "which is more efficient":

char buffer[27];

char *in;
in = fgets (buffer, 28, input);

This can write one byte past the end of buffer (up to 27 characters plus the terminating '\0' is 28 bytes). If you are unlucky, e.g. some other data sits immediately after buffer and is set to something non-zero between the fgets call and the output statement, the string loses its terminator and you may get (a lot of, potentially) rubbish printed, or even a crash.
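
A sketch of a safer version (printing with std::cout, since output is never defined in the snippet) simply passes the real buffer size, using sizeof so the size and the array can never get out of sync:

char buffer[27];

// sizeof buffer is 27, so fgets stores at most 26 characters plus the
// terminating '\0' and never writes past the end of the array.
char *in = fgets(buffer, sizeof buffer, input);
if (in != NULL)
    std::cout << in;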

Your two input functions are also potentially doing different things, depending on exactly what your input is.

Mats Petersson