
I ran some simple tests on the abs() and fabs() functions, and I don't understand what the advantages of using fabs() are, if it:

1) is slower

2) works only on floats

3) will throw an exception if used on a different type

In [1]: %timeit abs(5)
10000000 loops, best of 3: 86.5 ns per loop

In [3]: %timeit fabs(5)
10000000 loops, best of 3: 115 ns per loop

In [4]: %timeit abs(-5)
10000000 loops, best of 3: 88.3 ns per loop

In [5]: %timeit fabs(-5)
10000000 loops, best of 3: 114 ns per loop

In [6]: %timeit abs(5.0)
10000000 loops, best of 3: 92.5 ns per loop

In [7]: %timeit fabs(5.0)
10000000 loops, best of 3: 93.2 ns per loop

it's even slower on floats!

From where I am standing, the only advantage of using fabs() is that it makes your code more readable: by using it, you are clearly stating your intention of working with floating-point values.
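For completeness (this assumes the same from math import fabs used for the timings above), the return types also differ: abs() gives back the same type it was given, while fabs() always returns a float.

>>> abs(-5), fabs(-5)
(5, 5.0)
>>> abs(-5.0), fabs(-5.0)
(5.0, 5.0)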

Is there any other use of fabs()?

HamsterWithPitchfork
  • premature optimization, eh? – devnull Feb 24 '14 at 17:00
  • The maximum difference in runtime there is ~29 ns. You'd have to call that function over 20 billion times before you could regain the ten minutes you spent thinking about it. – DSM Feb 24 '14 at 17:09
  • @devnull when you find two functions that appear to do the same thing, it's only natural to figure out what the differences are. One of those differences would be timing. I'm sure this question has nothing to do with premature optimization. – Mark Ransom Feb 24 '14 at 17:09
  • If you pass an `int` to `fabs` it is *obviously* slower, since it has to convert it into a `float`. In other words the *only* fair comparison you showed is on `-5.0` where there is a `0.7 ns` difference, which means *nothing*. There's a high chance that the difference is just due to some random factor and not consistent when trying that many times. – Bakuriu Feb 24 '14 at 17:11
  • @DSM it's common in image processing for example to do operations millions of times and expect a result near instantaneously. Don't assume any optimization is premature. – Mark Ransom Feb 24 '14 at 17:11
  • @MarkRansom: Python iteration is notoriously slow, which is why those of us who use Python for heavy numerics have traditionally used a mix of numpy and cython. Since even `for i in xrange(10**9): pass` takes about 20s for me, thinking about image processing as a regime where this optimization would matter is, frankly, silly. This is independent from the issue of "why is it like this?" which is often interesting, and why I upvoted both the question and the answer. – DSM Feb 24 '14 at 17:20

2 Answers


From an email response from Tim Peters:

Why does math have an fabs function? Both it and the abs builtin function wind up calling fabs() for floats. abs() is faster to boot.

Nothing deep -- the math module supplies everything in C89's standard libm (+ a few extensions), fabs() is a std C89 libm function.

There isn't a clear (to me) reason why one would be faster than the other; sounds accidental; math.fabs() could certainly be made faster (as currently implemented (via math_1), it endures a pile of general-purpose "try to guess whether libm should have set errno" boilerplate that's wasted (there are no domain or range errors possible for fabs())).

It seems there is no compelling reason to use fabs; just use abs for virtually all purposes.
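To illustrate why, here is a minimal sketch of the behavioural difference (it assumes only the standard math and decimal modules; the particular values are arbitrary examples): abs() dispatches to the argument's own __abs__ method and keeps its type, while math.fabs() converts the argument to a float first.

import math
from decimal import Decimal

# abs() keeps the type of its argument; math.fabs() always returns a float.
assert abs(-5) == 5 and isinstance(abs(-5), int)
assert math.fabs(-5) == 5.0 and isinstance(math.fabs(-5), float)

# With Decimal, abs() stays exact; fabs() silently coerces to float.
assert abs(Decimal('-5.5')) == Decimal('5.5')
assert isinstance(math.fabs(Decimal('-5.5')), float)

# abs() also handles complex magnitudes; fabs() cannot convert complex to float.
assert abs(3 + 4j) == 5.0
try:
    math.fabs(3 + 4j)
except TypeError:
    pass

The Decimal case is the same point Bakuriu makes in the comments below.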

Inbar Rose
  • Not a Python-specific answer, but if you have to convert a float to int or vice-versa, that's fairly costly. In this case, depending on the sophistication of the language support, it might be done only once (negligible impact) during compile. Or, the support might even pick the "right" library code for you. Then, I would expect fabs() to be faster, as all it has to do is unconditionally clear the high bit (assuming IEEE 754 floats; see the sketch after these comments), while abs() needs to check for a negative number and then negate it. Again, with constant arguments, it may be negligible (done just once). – Phil Perry Feb 24 '14 at 17:21
  • If I understand you correctly, fabs was included because it is a standard function in the C89 library. I can totally understand the advantage of standardizing functions and modules; however, wouldn't it make more sense to have only a fabs() function which does the job that is currently assigned to both abs and fabs? – HamsterWithPitchfork Feb 24 '14 at 17:23
  • @metacore No. It would have made sense to have *only* `abs` which works on *any* numerical object. For example `abs(Decimal('-5.0'))` returns `Decimal(5.0)`, while `fabs(Decimal('-5.0'))` returns `5.0` [note the forced conversion to `float` which results in a loss of precision, in the general case]. – Bakuriu Feb 24 '14 at 17:30
  • @PhilPerry -- In python, the conversion will be done repeatedly. *almost everything* is done in python at runtime -- there are very few "compile-time" (parse-time) optimizations. – mgilson Feb 24 '14 at 17:33
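Following up on Phil Perry's comment about clearing the high bit, here is a minimal sketch of that trick in pure Python (it assumes IEEE 754 doubles and uses the struct module; fabs_via_bits is just an illustrative name, not something the standard library provides):

import math
import struct

def fabs_via_bits(x: float) -> float:
    # Reinterpret the 8-byte double as an unsigned 64-bit integer,
    # clear the most significant bit (the sign bit), and reinterpret back.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return struct.unpack('<d', struct.pack('<Q', bits & 0x7FFFFFFFFFFFFFFF))[0]

for value in (-5.0, 5.0, -0.0, float('-inf')):
    assert fabs_via_bits(value) == math.fabs(value)

A C-level fabs typically does roughly this (or uses a hardware instruction for it), which is far cheaper than the Python call overhead being measured in the question.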

I personally had an issue with my gcc compiler in C++: when using abs, it always returned an integer and not a double, even when the result should have been a double. It was a really big issue for me at the time, because it did not occur to me that abs could be the problem (it is not obvious or easy to think that way). But I accidentally tried fabs and the issue was solved; now my program runs perfectly.

geek