
I have a method that returns the current time as a string. This method is called millions of times per second, so I have already optimized it in several ways (statically allocated buffers for the time string, etc.).

For this application it is perfectly fine to approximate the time. For example, I use a resolution of 10 milliseconds; within that window the same time string is returned.

However, when profiling the code, the `clock()` call consumes the vast majority of the time.

What other, faster options do I have to approximate the time difference with millisecond resolution?

benjist
    Do you really need the time _as a string_ millions of times a second? It sounds like there's some very strange design choices here. – colonel of truth Apr 28 '18 at 23:39
  • Why are you converting the time *to* a string several million times per second? – user253751 Apr 28 '18 at 23:51
  • The question is valid, though that's really how it is. The code is part of a SQL engine's datetime method. It's not me who chooses the code, but customers commonly use a datetime comparison in the user-assignable SQL and I need to optimize it for this reason. Also the string usage is already optimized, using static buffers. The time measurement takes the vast amount of CPU now. – benjist Apr 29 '18 at 00:00
  • If you are using a resolution of 10 milliseconds, then you're only updating the string 100 times a second, regardless of how many times the method is called. So it's not clear why the `clock()` call is consuming so much time. – user3386109 Apr 29 '18 at 00:16
  • The method is called like 1 million times per second, and so is also clock() called 1 million times in order to check the time difference. In other words: The string is optimized to only be updated 100 times per second, and is also optimized to use static allocated memory. The method returns 1 million times, but only 100 times a different string. It really is the 1M call to clock() remaining which takes lots of time. everything else is optimized. – benjist Apr 29 '18 at 00:22
  • Perhaps this will help: https://stackoverflow.com/questions/6749621 – user3386109 Apr 29 '18 at 03:48
  • Daring to share the code? From what did you deduce that's the call to `clock()` being the bottleneck? – alk Apr 29 '18 at 10:11
  • From the profiler (Xcode Instruments). – benjist Apr 29 '18 at 13:03

1 Answer


To answer my own question: The solution was to limit calls to clock(), or any time function for that matter. The overall execution time for the whole test case is now 22x faster.

I think I can give some general advice after profiling this quite extensively: if you can live with a lower time resolution and you really need to optimize your code for speed, restructure the problem to use a single global timer and avoid a costly time lookup on every call.

I now have a simple thread that sleeps for the desired resolution and increments an atomic int tick variable on each loop. In the function I needed to optimize, I then just compare two ints (the last tick and the current tick). If they differ, it's time for an update.

benjist