
Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway? And when a fractional value is needed, why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?

For example, I often see code similar to the following:

float foo = 3.0f;
double bar = 5.0;
// And, unfortunately, even
double baz = 7.0f;

and

void quux(float foo) {
     ...
}

...

quux(7.0f);

But as far as I can tell those are equivalent to

float foo = 3;
// or
// float foo = 3.0;
double bar = 5;
double baz = 7;
quux(7);

I can understand the method call if you are in a language with overloading (C++, Java), where it can actually make a functional difference if the function is overloaded (or will be in the future), but I'm more concerned with C (and, to a lesser extent, Objective-C), which doesn't have overloading.

So is there any reason to bother with the extra decimal and/or f? Especially in the initialization case, where the declared type is right there?

Kevin
  • I appreciate when my fellow developers are explicit about their types, because it tells me they were *thinking about type* when they wrote their code. Otherwise, it looks like they may have been lazy and not thinking: mixing up `ints`, `floats` and `doubles` on a whim. In short, it gives me, as a code reviewer, confidence that they put in due diligence. – abelenky May 27 '14 at 20:29
  • I would do that when the meaning of the variable is a float. Even if I currently use it with an integer value, the declaration of the variable reminds me what the variable is supposed to stand for. You are right, the extra .0 is not needed, it's just an extra reminder for me. – LaszloLadanyi May 27 '14 at 20:30
  • @abelenky When you're code reviewing and see an `f`, just be sure to double check that they actually want a float, not a double. This was partially motivated by finding a float literal stored into a double variable. It happened not to matter with this particular value, but just because they add the `f` doesn't necessarily mean they're paying attention to types. – Kevin May 27 '14 at 20:34
  • possible duplicate of [What is the significance of 0.0f when initializing (in C)?](http://stackoverflow.com/questions/5199338/what-is-the-significance-of-0-0f-when-initializing-in-c) – Jongware May 27 '14 at 21:14
  • @Jongware The accepted answer to that question is wrong for any non-archaic compiler. – Tavian Barnes May 27 '14 at 21:17
  • @Jongware I'm quite aware of the implicit typecasting, in fact I mention it in the question. Any compiler made since Nixon was president will take care of that at compile time, and that question doesn't answer my question of why one would use the non-typecasting literals. – Kevin May 27 '14 at 21:25
  • @Kevin Does my answer answer your question? – this May 27 '14 at 21:26
  • I think it's a combination of factors.. some who aren't sure about C's promotion rules so they throw suffixes on every literal just to be safe; some Java coders where this is actually a compilation error; some who follow arbitrary style guides – M.M May 27 '14 at 21:30
  • @TavianBarnes I agree - in any version of Standard C the effect is identical ; and the justification of selecting floating-point schemes at runtime seems bogus as that would apply to all versions. So it is basically saying "do it because of a bug in a 40-year-old compiler". Perhaps some downvoting is in order – M.M May 27 '14 at 21:40

4 Answers


Many people learned the hard way that

double x = 1 / 3;

doesn't work as expected. So they (myself included) program defensively by using floating-point literals instead of relying on the implicit conversion.
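A minimal sketch of the pitfall and the defensive fix (standard C, nothing compiler-specific):

#include <stdio.h>

int main(void)
{
    double wrong = 1 / 3;    /* integer division happens first: 1 / 3 == 0, then 0 becomes 0.0 */
    double right = 1.0 / 3;  /* the double literal forces floating-point division */
    printf("%f %f\n", wrong, right);  /* prints 0.000000 0.333333 */
    return 0;
}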

Tavian Barnes
  • I imagine this is the root cause, but I still think it's an overreaction. – Kevin May 27 '14 at 21:35
  • @Kevin It depends on your philosophy I guess. Do you think always using braces on `if` statements is an overreaction? – Tavian Barnes May 27 '14 at 21:50
  • @TavianBarnes: Whether braces are mandatory should depend, IMHO, upon whether opening braces are placed on a line by themselves or at the end of the control statement to which they're attached. If it's visually obvious which control statements have opening braces and which don't, it will be obvious which ones need closing braces. If it's not easy to see opening braces, then having some statements use braces and some not will make it hard to tell which statements are not properly matched. – supercat Jun 23 '14 at 21:59
  • @supercat That's certainly a valid coding standard. But a policy that requires braces, to avoid `if (x) y; z;` bugs, is a perfectly reasonable standard too. (Well, exception is usually made for `if (x) { y; } else if (z) { q; }` because `else { if (z) { q; } }` is ugly.) – Tavian Barnes Jun 23 '14 at 22:06
  • Your comment didn't show line breaks, so it's hard to tell what you're assuming about them. If open-braces go on lines by themselves, then it's obvious when an open-brace lacks a close-brace. If statements are never indented except when an open-brace is on a previous line, then any out-dent without a close-brace will be conspicuous. The brace-on-its-own line allows a single-statement `if` to take two lines, but a two-statement `if` will need five; the brace-on-previous line style means a two-statement `if` needs four lines (vs 5), but a single-statement `if` will need three (vs 2). – supercat Jun 24 '14 at 17:11

C doesn't have overloading, but it has something called variadic functions. This is where the .0 matters.

#include <stdarg.h>

void Test( int n , ... )
{
    va_list list ;
    va_start( list , n ) ;
    double d = va_arg( list , double ) ;  /* reads the next argument as a double */
    ...
    va_end( list ) ;
}

Calling the function without making sure the second argument is a double causes undefined behaviour, since the va_arg macro will interpret the argument's memory as a double when in reality it holds an int.

Test( 1 , 3 ) ; has to be Test( 1 , 3.0 ) ;


But you might say: I will never write variadic functions, so why bother?

printf (and family) are variadic functions.

The following call should generate a warning:

printf("%lf" , 3 ) ;   //will cause undefined behavior

But depending on the warning level and the compiler, or if you forget to include the correct header, you may get no warning at all.

The problem is also present if the types are switched:

printf("%d" , 3.0 ) ;    //undefined behaviour
this
  • This explains none of the examples presented by the OP. In fact, calling `printf` only requires the format string and the argument types to match, so if you are using a correct format string, there is no reason to use `0.0f` or `5.0`. – user4815162342 May 27 '14 at 21:07
  • @user4815162342 No, if you omit `.0` and specify a floating point type, you get UB. + My answer explains *So is there any reason to bother with the extra decimal and/or f?* and the title, so your point is moot. – this May 27 '14 at 21:10
  • Why specify a floating-point type, then? Besides, none of the supposed examples provided by the OP even involve variadic functions. – user4815162342 May 28 '14 at 06:10
  • @user4815162342 I guess you didn't read the question. If you did, you would know that the premise is: why specify .0 when the final type is intended to be floating point, and vice versa. – this May 28 '14 at 06:39
  • Exactly, the final type is intended to be floating-point, which is not the case when invoking `printf`, which must be told what the given type is in the first place. The question and the given examples make it quite clear that the final type is known at compile-time and properly declared. – user4815162342 May 28 '14 at 06:49
  • @user4815162342 *Why bother using a float / double literal when not needed?* The question makes it quite clear that the final type is not the same as the type presented to the compiler; a quote from the OP: *But as far as I can tell those are equivalent to float foo = 3;* – this May 28 '14 at 06:51
  • Not the same as the final type, but *known to the compiler*, which is not the case with `printf`. You are answering a different question than the one that was asked. – user4815162342 May 28 '14 at 07:24
  • @user4815162342 I have answered the question the OP asked. You are simply seeing things that aren't there. – this May 28 '14 at 08:56
  • That is *ad hominem*, and also ignores clear examples from the question. – user4815162342 May 28 '14 at 10:38
  • @user4815162342 Again; you have to read the entire question. btw, that was not *ad hominem*. – this May 28 '14 at 10:39

Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway?

First off, "implicit cast" is an oxymoron (casts are explicit by definition). The expression you're looking for is "implicit [type] conversion".

As to why: because it's more explicit (no pun intended). It's better for the eye and the brain if you have some visual indication about the type of the literal.

why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?

For example, because double and float have different precision. Since floating-point is weird and often unintuitive, it is possible that the conversion from double to float (which is lossy) will result in a value that is different from what you actually want if you don't specify the float type manually.

  • Do you have an example where the implicit conversion from a double to float would be different from specifying the same literal with an `f`? I have a hard time believing that it could differ. – Kevin May 27 '14 at 21:40
  • @Kevin See #3 [here](http://stackoverflow.com/questions/7662109/should-we-generally-use-float-literals-for-floats-instead-of-the-simpler-double#comment9311634_7662804). – The Paramagnetic Croissant May 27 '14 at 21:48
  • Croissant and @Kevin - #3 [there](http://stackoverflow.com/questions/7662109/should-we-generally-use-float-literals-for-floats-instead-of-the-simpler-double#comment9311634_7662804) is *not* the requested example-- that's just an example where double->float->double results in a value different from the original double, which is easy. What you're describing here is much less common, if it happens at all-- it's where `(float)` is different from `f`. Whether this can happen might be dependent on the compiler's implementation of parsing literals; I'm not sure. – Don Hatch Jul 27 '17 at 02:07
  • @Kevin Here's your example: (float)1.0000000596046448 is not the same as 1.0000000596046448f. Roughly speaking, the former goes decimal->double->float resulting in rounding down twice, producing 1; whereas the latter goes directly decimal->float, which rounds up, to the next float higher than 1. – Don Hatch Jul 27 '17 at 03:03
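
Don Hatch's example above can be checked directly. The following is a sketch; the exact rounding assumes IEEE-754 single/double precision with round-to-nearest-even, which nearly all current platforms use:

#include <stdio.h>

int main(void)
{
    float via_double = (float)1.0000000596046448;  /* decimal -> double -> float: lands on 1.0f */
    float direct = 1.0000000596046448f;            /* decimal -> float: the float just above 1 */
    printf("%.9f\n%.9f\n", via_double, direct);    /* 1.000000000 vs 1.000000119 */
    return 0;
}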

In most cases, it's simply a matter of saying what you mean.

For example, you can certainly write:

#include <math.h>
...
const double sqrt_2 = sqrt(2);

and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.

But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.

And you're able to use an integer literal here only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal, and mixing integer and floating-point literals for the same kind of argument is needlessly inconsistent.

My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)
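
For what it's worth, it's easy to confirm that the two spellings pass exactly the same argument; a quick check, assuming only a conforming C compiler:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* the int literal 2 is converted to exactly 2.0 at compile time,
       so both calls receive the same double and return the same result */
    printf("%d\n", sqrt(2) == sqrt(2.0));  /* prints 1 */
    return 0;
}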

Keith Thompson