1

So, our class was given this code to compile and run, and see how long it takes to run for different-sized inputs N:

#include <iostream>
#include <fstream>
#include <iterator>
#include <vector>
#include <algorithm>
#include <iostream>
#include <stdio.h>

using namespace std;

int main(int argc, char *argv[]) {
  int N;
  sscanf(argv[1], "%d", &N);
  vector<double> data(N);
  for(unsigned int i=0; i<N; i++) {
    data[i] = rand()/(RAND_MAX+1.0);
  }
  sort(data.begin(), data.end());
  copy(data.begin(), data.end(), ostream_iterator<double>(cout,"\n"));
}

We have never been taught C++ and are not expected to know anything about how this code works. They even give us the commands for compiling and running the code. However, they failed to mention how exactly we can measure how long the program takes. I have tried with this approach:

#include <iostream>
#include <fstream>
#include <iterator>
#include <vector>
#include <algorithm>
#include <iostream>
#include <stdio.h>
#include <time.h>

using namespace std;

int main(int argc, char *argv[]) {

  double start_time = time(NULL);

  int N;
  sscanf(argv[1], "%d", &N);
  vector<double> data(N);
  for(unsigned int i=0; i<N; i++) {
    data[i] = rand()/(RAND_MAX+1.0);
  }
  sort(data.begin(), data.end());
  copy(data.begin(), data.end(), ostream_iterator<double>(cout,"\n"));

  double end_time = time(NULL);
  printf("%lf seconds\n", end_time - start_time);

}

Literally just including a time library, then getting the current time before and after the program runs, and printing the difference at the end.
All of which I copied straight from this site actually because, again, none of us know (or apparently need to know) how to code anything in C++ until next year.

However, the output is always

0.000000 seconds

even for inputs of sizes in the millions or billions, where I can see that it takes a few seconds or minutes to process.
What am I doing wrong in this code?

I've read some sources saying to use the Chrono library to measure time but I was getting far more complicated errors when I tried that. This at least compiles and runs, but is just wrong every time.

πάντα ῥεῖ
Bill

3 Answers

4

You were probably expected to use common tools of the environment, rather than modifying the code.

For example, in Linux, the time tool:

g++ theCode.cpp -o theProgram
time ./theProgram 10
time ./theProgram 100
time ./theProgram 1000
time ./theProgram 10000

time(NULL) doesn't return a double; it returns a time_t. You've converted both timestamps to double and possibly caused yourself precision problems, since UNIX timestamps are quite high and the number of seconds' difference you're expecting is relatively small.

You should get rid of the doubles and stick with the time_t type that time(NULL) gives you. Don't forget to update your printf format string from %lf to something appropriate for the new type.

Also, it's better spelt time(nullptr) now, or you could use the modern C++ features in <chrono>.
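
For illustration, a minimal sketch of that time_t approach might look like the following (keeping time(NULL) from the question so it compiles without any extra flags); the placeholder comment stands in for the fill/sort/print code from the question, and difftime (standard C) returns the elapsed seconds as a double, so you don't have to guess which integer type time_t really is:

#include <stdio.h>
#include <time.h>

int main() {
  time_t start_time = time(NULL);   // whole-second resolution

  // ... the original fill/sort/print code goes here ...

  time_t end_time = time(NULL);
  // difftime() computes end - start in seconds as a double
  printf("%.0f seconds\n", difftime(end_time, start_time));
  return 0;
}

Because time() only ticks once per second, this will still print 0 for small inputs; you need runs that take at least a few seconds before the number means much.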

Lightness Races in Orbit
  • Addendum: The magic that allows `printf` and its large family to accept an undefined number of parameters of undefined types makes it impossible for the compiler to verify that the parameters provided make sense. The function can and will take any given parameters no matter how hilariously wrong. Some compilers have added extensions to inspect the format string and warn about mismatches, so check the build output even if the program compiled. It could save you much debugging. – user4581301 Feb 21 '19 at 18:56
  • Using `time ./theProgram` just returns the error `The system cannot accept the time entered`. I also tried changing the `double` to `time_t` and it now displays exactly `4294967296 seconds` every time, regardless of input size. – Bill Feb 22 '19 at 02:48
  • @Bill Are you on Linux? – Lightness Races in Orbit Feb 22 '19 at 13:30
1

This is likely because the elapsed time was less than a second. The resolution of time() is one second.

Here's how to switch to the chrono library to measure in nanoseconds:

#include <iostream>
#include <fstream>
#include <iterator>
#include <vector>
#include <algorithm>
#include <iostream>
#include <stdio.h>
#include <time.h>
#include <chrono>

using namespace std;
using namespace std::chrono;

int main(int argc, char *argv[]) {

  high_resolution_clock::time_point start_time = high_resolution_clock::now();

  int N;
  sscanf(argv[1], "%d", &N);
  vector<double> data(N);
  for(unsigned int i=0; i<N; i++) {
    data[i] = rand()/(RAND_MAX+1.0);
  }
  sort(data.begin(), data.end());
  copy(data.begin(), data.end(), ostream_iterator<double>(cout,"\n"));

  high_resolution_clock::time_point end_time = high_resolution_clock::now();
  printf("%lf nanoseconds\n", duration_cast< nanoseconds >( end_time - start_time ).count() );

}
flu
  • Any time I try to use `#include <chrono>` I get the error: `This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options` I looked a little into compiler options, it definitely seems way beyond my understanding, though I did try typing `-std=c++11` in a few places, all of which just gave more various errors. – Bill Feb 22 '19 at 02:47
  • @Bill, the following should work `g++ -std=c++11 -o app source.cpp`. – flu Feb 25 '19 at 16:22
0

The time function returns a second count (an integral count, not a floating point number). It seems that the algorithm does not run long enough and you end up with an execution time of less than a second. The easiest way to make the execution time measurable is to introduce an outer loop like so:

for (int n=0; n<NumLoops; n++)
{
    // your logic
}

with NumLoops adjusted to a large number (start low-ish and increase) until your execution time is maybe 30 seconds or longer. Then divide the total time measured by NumLoops.
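
A rough sketch of how that could fit around the timing calls (the NumLoops value and the placeholder comment are just stand-ins to adapt to the code from the question):

#include <stdio.h>
#include <time.h>

int main() {
  const int NumLoops = 1000;        // increase until the total run takes ~30 seconds or more

  time_t start_time = time(NULL);
  for (int n = 0; n < NumLoops; n++) {
    // ... the fill/sort/print logic from the question goes here ...
  }
  time_t end_time = time(NULL);

  // average time per iteration = total elapsed seconds / number of repetitions
  printf("%f seconds per run\n", difftime(end_time, start_time) / NumLoops);
  return 0;
}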

Note that this is not a high-precision approach. It does not measure actual thread execution time (that's another subject) and assumes that the execution time of the inner logic far outweighs the overhead introduced by the outer loop (which seems to be the case here).

J.R.