0

In the math headers we see

extern float fabsf(float);
extern double fabs(double);
extern long double fabsl(long double);

...

extern float fmodf(float, float);
extern double fmod(double, double);
extern long double fmodl(long double, long double);

Why is there one function for each type? Isn't this a lot of duplicate code? If I were to, say, write a lerp function or a clamp function, would I need to write one for each type?

It seems like we will end up with duplicate code where only one thing changes: the type.

extern float clampf(float value, float min, float max)
{
    if(value > max)
        return max;
    if(value < min)
        return min;
    return value;
}

extern double clamp(double value, double min, double max)
{
    if(value > max)
        return max;
    if(value < min)
        return min;
    return value;
}

Question 1: What is the historical reason for this structure?

Question 2: Should I follow the same pattern? Or should I only implement the double version, since it is the most common one?

Question 3: Or should I just use macros to overcome the type issue altogether?

hfossli
  • Your question title does not match your description. You already know "when to use what?" – 0xF1 Dec 10 '13 at 10:42
  • This question appears to be off-topic because it is about history. –  Dec 10 '13 at 10:50
  • @0xF1 I've updated the question title. – hfossli Dec 10 '13 at 12:30
  • @H2CO3 Well, if it is kept this way for historical reasons, then that *is* an interesting answer. I'd also like to know whether I should follow the same pattern or not. – hfossli Dec 10 '13 at 12:32

2 Answers

5

Historically (circa C89 and before), the math library contained only the double-precision versions of these functions, which is why those versions have no suffix. If you needed to compute the sine of a float, you either wrote your own implementation, or (more likely!) you simply wrote:

float x;
float y = sin(x);

However, this introduces some overhead on modern architectures. Specifically, on the most common architectures today, it is necessary for the compiler to emit code that looks something like this:

convert x to double
call sin
convert result to float

These conversions are pretty fast (about the same as an addition, usually), but they still have some cost. On top of the cost of conversion, sin needs to deliver a result that has ~53 bits of precision, more than half of which are completely wasted if the result is just going to be converted back to single precision. Between these two factors, it is possible for a dedicated single-precision sin routine to be about twice as fast; that’s a significant win for some very frequently-used library functions!
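
In C terms, that is roughly what a wrapper around the double-precision routine would look like (a sketch only; my_sinf is a made-up name, and a real single-precision sinf avoids the conversions and the wasted precision entirely):

#include <math.h>

/* Hypothetical wrapper: what "float y = sin(x);" effectively costs
 * when only the double-precision sin is available. */
float my_sinf(float x)
{
    double wide   = (double)x;  /* convert float -> double            */
    double result = sin(wide);  /* compute to ~53 bits of precision   */
    return (float)result;       /* convert back, discarding ~29 bits  */
}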

If we look at functions like fabs (and assume that the compiler does not simply inline and lower them), the situation is much, much worse. fabs, on a typical modern architecture, is a simple bitwise-and operation. So the two conversions bracketing the call (if all you have is double) are significantly more expensive than the operation itself, and can easily cause a 5x slowdown. That’s why multiple versions of these functions were added to support each FP type.
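
For illustration, here is roughly what a single-precision absolute value amounts to on an IEEE 754 machine (a sketch, not how any particular C library is written; my_fabsf is a made-up name):

#include <stdint.h>
#include <string.h>

/* Clearing the sign bit of the IEEE 754 single-precision
 * representation is all fabsf has to do. */
float my_fabsf(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);  /* view the float's bit pattern */
    bits &= 0x7FFFFFFFu;             /* clear the sign bit (bit 31)  */
    memcpy(&x, &bits, sizeof bits);
    return x;
}

Wrapping that single AND between two float/double conversions is clearly a poor trade.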

If you don’t want to keep track of all of them, you can #include <tgmath.h>, which will infer the correct function to use based on the type of the argument (meaning

sin((float)x)

will generate a call to sinf(x), whereas

sin((long double)x)

will call sinl(x)).
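
For example, assuming a C99 compiler, a small self-contained program using <tgmath.h> might look like this:

#include <stdio.h>
#include <tgmath.h>  /* type-generic macros: sin() dispatches on the argument type */

int main(void)
{
    float       f  = 0.5f;
    double      d  = 0.5;
    long double ld = 0.5L;

    printf("%f\n",  sin(f));   /* behaves as a call to sinf(f)  */
    printf("%f\n",  sin(d));   /* behaves as a call to sin(d)   */
    printf("%Lf\n", sin(ld));  /* behaves as a call to sinl(ld) */
    return 0;
}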

In your own code, you usually know a priori what the type of your arguments is, and only need to support one or maybe two types. clamp and lerp in particular are graphics operations, and almost universally are used only in single-precision variants.
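
For instance, a single-precision lerp in the same spirit as the clampf from the question is usually all that is needed (a sketch; lerpf is just an illustrative name):

float lerpf(float a, float b, float t)
{
    /* Linear interpolation between a and b; t is expected in [0, 1]. */
    return a + (b - a) * t;
}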

Incidentally, the fact that you’re using clamp and lerp is a pretty good indication that you might want to look at writing your code in OpenCL instead of C/Obj-C; the OpenCL math library implements these operations (and many other similar operations) for you, and provides implementations that work with a wide range of basic types, including vectors.

Stephen Canon
  • +1 I was just about to write a similar answer but it wouldn't have been even half as informative. – dreamlax Dec 10 '13 at 12:53
  • Not to mention that a correctly rounded `sinf` can be expected to return a more accurate result than `(float)sin((double)x)` once every 2^(52-23) calls. Hey, that is a nice little experiment to run. Now to find a correctly rounded `sinf`. Hopefully `sinl` will do the trick. – Pascal Cuoq Dec 12 '13 at 20:57
1

float and double are different data types, just as int and long int are. You can call the functions that operate on double with float arguments, and implicit conversion will make that work as expected in most circumstances; but if you call the functions that operate on float with double arguments, you will almost inevitably lose precision.

There are other, longer explanations available, e.g. What's the difference between a single precision and double precision floating point operation?
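
For example, passing a double through a float-only function silently discards the extra bits (a small sketch of the effect):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 2.0;

    /* sqrtf converts its argument to float, so precision is lost. */
    printf("sqrtf: %.17f\n", sqrtf(x));  /* approx. 1.41421354 (float precision) */
    printf("sqrt:  %.17f\n", sqrt(x));   /* approx. 1.4142135623730951 (double)  */
    return 0;
}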

Ivan Voras