
I am trying to make my code run faster, and I am using the `time` functionality in Linux. This is the output I am getting, and I am wondering which number I should be looking at to determine how fast it is actually running.

0.019u 0.001s 0:02.50 0.4%      0+0k 0+0io 2pf+0w

Also, I am new to this, so I'd appreciate it if someone could explain what each of the numbers means.

In my program, I need to read in a large number of lines and parse them, and I am storing the results in a vector of structs that I will access later. I am wondering whether it would make my code any faster to store a vector of pointers to structs instead of a vector of structs.

I'd appreciate any input. Thank you.

michcs
  • I suppose guys from http://unix.stackexchange.com/ can answer you better. – Andrei Drynov Mar 20 '12 at 02:41
  • And a short time (fractions of a second) is probably not enough to be precise and reliable. Try e.g. changing the input or parameters of your program so that it runs for several seconds. Repeat the benchmark measurement several times. Profile with `gprof` and `oprofile`. – Basile Starynkevitch Mar 20 '12 at 06:22

1 Answer


http://en.wikipedia.org/wiki/Time_(Unix)

"User" time is the amount of time your program spends executing its own code, e.g. looping and processing data.

"System" is the time spent in system operations performed on your program's behalf, such as reading files from the file system, starting processes, etc.: things your program may not specifically ask for but that the system executes to run your tool.

"Real" is the wall-clock time from start to finish, which can also include periods when the program is not really doing any work and is just waiting on something.

It's really a matter of what your program does that determines which of these numbers is important to you. If all your program does is internally crunch numbers, then only user time will matter. If it's doing tons of processing, calling out to other programs, reading files, or spawning processes, then you probably just want the overall real time it takes to complete.

Using `time` to gauge your performance is obviously a rough estimate. It can't tell you whether particular functions have gained any speed. For that you would need to look into how to profile your code, or add your own timing around functions so you know exactly how long a specific block of code takes to run.

jdi
  • Thanks for the link. That was exactly where I went to before I posted this question. I am not understanding the technical terms used in the explanation. Please bear with me. – michcs Mar 20 '12 at 02:50
  • @michcs: Included some more details for you. – jdi Mar 20 '12 at 02:55
  • Thanks jdi, that helps a lot. I will experiment with gprof and see if I can make some sense out of that. – michcs Mar 20 '12 at 03:01
  • You might want to ignore gprof and use OProfile - the reasons why are covered enough on stackoverflow: [alternatives-to-gprof](http://stackoverflow.com/questions/1777556/alternatives-to-gprof) – Klaas van Gend Mar 20 '12 at 12:45