
I have a time stamper that gives me times in nanos, and I want to get millis from them, so I am dividing nanos / 1000000. Is there a fast way to go from nanos to millis without paying the DIV cost, which can be expensive?
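For concreteness, this is what the current conversion looks like (Java assumed, since the time stamper reports nanos; the class name is just illustrative):

```java
public class NanosToMillis {
    public static void main(String[] args) {
        long nanos = System.nanoTime();   // time stamper output, in nanoseconds
        long millis = nanos / 1_000_000L; // the division I want to avoid
        System.out.println(millis);
    }
}
```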


Any kind of shift? Maybe I can divide by a power of 2 close to 1,000,000 using a shift? That might be a reasonable trick here, since I am giving up nanosecond precision anyway by going to millis.


My benchmarks show that it can take anywhere from 0 nanos to 1 milli.

The 99th percentile has an average time of 72 nanos, with a max time of 1 micro.

That looks bad.

user207421
JohnPristine
  • Is this really a performance bottleneck? – Mysticial Mar 19 '14 at 01:29
  • It seems overwhelmingly unlikely that you'll be able to improve significantly on this... or that any improvement you generate will actually be worthwhile. – Louis Wasserman Mar 19 '14 at 01:30
  • Modern processors handle division much, much more efficiently than you'll ever be able to do on your own. The only case where you could do better is in dividing by a power of 2, in which case a right shift *might* be faster. Even then, I wouldn't count on it. – ajb Mar 19 '14 at 01:33
  • To echo Mysticial, you have given a completely inadequate description of your problem. How many times do you do this? How fast? Since you're down at the division-of-integers level, on what processor? Are you going to be doing a lot of them out of an array, or is it part of a tight loop on individual values? The VAST majority of the time, people looking for a fast way to do one operation are looking at the wrong operation. If you would like help optimizing an overall algorithm, tell us about it. But this question is nearly meaningless without more context. – arcy Mar 19 '14 at 01:34
  • Premature optimization is the root of all evil. – rcs Mar 19 '14 at 01:34
  • If you're curious about "how to divide without using '/'", look here: http://stackoverflow.com/questions/21074682/dividing-a-number-without-using-division-operator-in-c. If you seriously think it might "optimize" your program's performance, you're probably mistaken ;) – FoggyDay Mar 19 '14 at 01:38
  • My main point is that yes, under some circumstances, it is possible to do better than `/ 1000000`. But it sounds like you're using this for timer outputs, which is definitely not something I would consider "performance critical". – Mysticial Mar 19 '14 at 01:39
  • Converting the nanos to a string and appending '0' to it? I'm not sure this is a good idea. – YaleCheung Mar 19 '14 at 01:44
  • Apart from all the comments above, if you are in desperate need to optimize a division for whatever reason, the only feasible way, if you ask me, is simply _not_ to do it. So just try to use nanos instead of millis. – Engin Kayraklioglu Mar 19 '14 at 01:47
  • I would stream the perf data in raw form to memory and do the conversion outside the time-critical loop. – Chris Mar 19 '14 at 02:21

1 Answer


2^20 is 1,048,576, so shifting right by 20 bits is approximately division by one million. This introduces roughly a 5% error (the result comes out about 4.6% too small, since 10^6 / 2^20 ≈ 0.9537), but that may be an acceptable trade-off for you. Otherwise, straight division is probably going to be fastest.
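A minimal sketch of the trade-off, assuming Java (the class and method names are just illustrative):

```java
public class FastMillis {
    /** Exact conversion: one integer division per call. */
    static long exactMillis(long nanos) {
        return nanos / 1_000_000L;
    }

    /** Approximate conversion: divides by 2^20 = 1,048,576 via a right shift.
     *  The result is ~4.6% smaller than the true value (10^6 / 2^20 ≈ 0.9537). */
    static long approxMillis(long nanos) {
        return nanos >> 20;
    }

    public static void main(String[] args) {
        long fiveSeconds = 5_000_000_000L; // 5 s expressed in nanos
        System.out.println(exactMillis(fiveSeconds));  // 5000
        System.out.println(approxMillis(fiveSeconds)); // 4768
    }
}
```

Whether the shift is actually faster than the division on a modern JIT and CPU is worth benchmarking rather than assuming; measure both before committing to the lossy version.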

Michael Anderson
  • That's what I thought. I will make some tests and give you some feedback. Thanks Michael! – JohnPristine Mar 19 '14 at 01:49
  • You beat me by about 2^38 nanoseconds, will delete my answer. – user949300 Mar 19 '14 at 02:00
  • If you do this, *please* don't refer to the value as "milliseconds" in your code. This will drive everyone else nuts. Maybe "pseudo-milliseconds" or maybe some other term that you invent, like "fweegloseconds" (1 fweeglosecond = 0.001048576 second)... – ajb Mar 19 '14 at 02:05