I have looked into gprof, but I don't quite understand how to achieve the following:
I have written a clustering procedure. In each iteration, 4 functions are called repeatedly, and there are about 100,000 iterations in total. I want to find out how much time is spent in each of these functions.
These functions may call other sub-functions and may use data structures like hash maps, maps, etc. But I don't care about those sub-functions; I just want to know the total time spent in the parent functions over all the iterations. This will help me optimize my program better.
The problem with gprof is that it analyzes every function, so even the functions of the STL data structures are taken into account.
Currently I am using clock_gettime. For each function, I output the time taken in each iteration, then post-process this output file. This requires writing a lot of profiling code, which makes my program look very complex, and I want to avoid that. How is this done in industry?
Is there an easier way to do this? If you know of any cleaner approaches, please let me know.