
I have a program which reads two input files. The first file contains random words, which are inserted into a BST and an AVL tree. The program then looks for the words listed in the second file, reports whether they exist in the trees, and writes the gathered information to an output file. While doing this, the program prints the time spent finding each item. However, the program does not seem to be measuring that time at all.

BST* b = new BST();
AVLTree* t = new AVLTree();

string s;

ifstream in;
in.open(argv[1]);

while(!in.eof())
{
    in >> s;
    b->insert(s);
    t->insert(s);
}

ifstream q;    
q.open(argv[2]);

ofstream out;
out.open(argv[3]);

int bstItem = 0;
int avlItem = 0;
float diff1 = 0;
float diff2 = 0;

clock_t t1, t1e, t2, t2e;

while(!q.eof())
{
    q >> s;

    t1 = clock();
    bstItem = b->findItem(s);   // time the BST lookup
    t1e = clock();

    diff1 = (float)(t1e - t1)/CLOCKS_PER_SEC;

    t2 = clock();
    avlItem = t->findItem(s);   // time the AVL lookup
    t2e = clock();

    diff2 = (float)(t2e - t2)/CLOCKS_PER_SEC;

    if(avlItem == 0 && bstItem == 0)
        cout << "Query " << s << " not found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;

    else
        cout << "Query " << s << " found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;

    out << bstItem << " " << avlItem << " " << s  << "\n"; 
}

The clock() values I get just before entering the while loop and just after finishing it are exactly the same, so the program behaves as if the loop takes no time at all and prints 0. I know this is not the case, since the program takes around 10 seconds to finish, as it should. The output file also contains the correct results, so faulty findItem() functions can be ruled out as well.
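
For reference, here is a sketch of that check (the names loopStart and loopEnd are just for illustration):

clock_t loopStart = clock();
while(!q.eof())
{
    // ... the query loop shown above ...
}
clock_t loopEnd = clock();

// Prints 0: both clock() calls return the same value.
cout << "Whole loop: " << (loopEnd - loopStart) << " clock ticks" << endl;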

I did a little research on Stack Overflow and saw that many people experience the same problem. However, none of the answers I read solved it.

harbinger
  • The standard clock has a fairly coarse resolution. If no individual round takes over, say, 10 ms, you won't be able to measure it. Also note that `eof()` as a loop condition is always wrong (see the sketch after these comments). – Kerrek SB Apr 21 '13 at 15:16
  • As I mentioned above, I don't believe it is related to the clock's resolution, since it takes around 10 seconds for the program to finish, so it is not possible that the execution time is unmeasurably small. Also, can you be more specific about why you said eof() is always wrong? – harbinger Apr 21 '13 at 15:19
  • What's your platform, and what is the value of CLOCKS_PER_SEC on it? – evilruff Apr 21 '13 at 15:27
  • @harbinger You say your program takes 10 seconds, but your program is a loop that makes repeated calls to `findItem`, and it's those individual calls that you're timing. Unless you're really trying to say that a single `findItem` call can take 10 seconds, then clock resolution is most likely your problem. – TheUndeadFish Apr 21 '13 at 15:38
  • Just for grins, try putting calls to `clock()` just *outside* your final loop, and see if what it shows doesn't show a reasonable fraction of the overall run-time. – Jerry Coffin Apr 21 '13 at 15:58
  • @TheUndeadFish I mean that the whole program takes about 10 seconds to finish when I run it. Other than that, each particular findItem call should take something around 20 ms. – harbinger Apr 21 '13 at 17:56
  • @evilruff x86 Windows using GNU C++ compilers. My CLOCKS_PER_SEC value is 1000. – harbinger Apr 21 '13 at 17:57
  • @JerryCoffin I tried doing that as well. The result was as if the while loop had never been run: I got the same clock() value as just before the loop was entered. – harbinger Apr 21 '13 at 17:59
  • As has been mentioned, you are measuring the wrong thing: a findItem call indeed takes less than one tick, but your while loop as a whole does more. – evilruff Apr 21 '13 at 17:59
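
For context, the usual alternative to looping on eof() is to test the read itself, so the loop body only runs after a successful extraction; a minimal sketch using the same variables as the question:

// in >> s returns the stream, which converts to false once a read fails,
// so the last word is not processed twice the way it can be with while(!in.eof()).
while (in >> s)
{
    b->insert(s);
    t->insert(s);
}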

2 Answers


I solved my problem by using a higher-resolution clock, even though clock resolution turned out not to be my problem: I used clock_gettime() from time.h. As far as I know, clocks with higher resolution than clock() are platform dependent, and the particular method I used is only available on Linux. I still haven't figured out why I wasn't able to obtain sensible results from clock(), but I suspect platform dependence again.
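
Roughly, the timing code now looks like this (a sketch using the variables from the question; which clock ID to use is up to you, but CLOCK_MONOTONIC is a common choice for measuring intervals):

#include <time.h>

struct timespec ts_start, ts_end;

clock_gettime(CLOCK_MONOTONIC, &ts_start);
bstItem = b->findItem(s);   // the call being measured
clock_gettime(CLOCK_MONOTONIC, &ts_end);

// Convert to seconds; tv_nsec carries nanosecond precision.
diff1 = (ts_end.tv_sec - ts_start.tv_sec) + (ts_end.tv_nsec - ts_start.tv_nsec) / 1e9;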

An important note: using clock_gettime() requires you to link against the POSIX real-time extensions library when compiling the code. So you should do:

g++ a.cpp b.cpp c.cpp -lrt -o myProg

where -lrt is the flag that links in the POSIX real-time extensions library.

harbinger
  • Here is a link to a Windows port of clock_gettime(): http://stackoverflow.com/a/5404467/1911235 – Meep Jan 13 '14 at 15:59

If (t1e - t1) is less than CLOCKS_PER_SEC, your result will always be 0, because integer division truncates: with CLOCKS_PER_SEC equal to 1000, for example, an elapsed time of 5 ticks gives 5 / 1000 == 0. Cast CLOCKS_PER_SEC to float:

diff1 = (t1e - t1)/((float)CLOCKS_PER_SEC);
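
To see the truncation concretely, here is a standalone sketch (the value 1000 for CLOCKS_PER_SEC matches what the asker reported; other platforms differ):

#include <cstdio>
#include <ctime>

int main()
{
    clock_t elapsed = 5;   // e.g. 5 ticks, i.e. 5 ms when CLOCKS_PER_SEC is 1000

    printf("%f\n", (float)(elapsed / CLOCKS_PER_SEC)); // 0.000000: integer division truncates first
    printf("%f\n", elapsed / (float)CLOCKS_PER_SEC);   // 0.005000 with CLOCKS_PER_SEC == 1000

    return 0;
}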

krsteeve
  • The problem does not occur in the division. The clock() values I obtain just before executing the command and after it are always the same. – harbinger Apr 23 '13 at 21:30