I had some questions about putting `f` next to literal values. I know it makes the literal a `float`, but do I really need it? Is `2.0f * 2.0f` any faster, or compiled any differently, than `2.0 * 2.0`? Is a statement like `float a = 2.0;` compiled differently than `float a = 2.0f;`?

- You mean "literal values", not "variable names". `2.0` is a `double` literal value, `2.0f` is a `float` literal value, and `2` is an `int` literal value. None of these are variables. – RBerteig Sep 12 '10 at 22:30
- Sorry, didn't mean to sound like I was ranting there... I probably should have just edited the body silently. – RBerteig Sep 12 '10 at 22:44
3 Answers
Sometimes you need it to explicitly have type `float`, as in the following case:
#include <algorithm> // for std::max

float f = ...;
float r = std::max(f, 42.0);  // won't compile: template deduction sees two types (float, double)
float r = std::max(f, 42.0f); // works: both arguments have the same type
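If you prefer to keep the `double` literal, one alternative (not part of the original answer, just a sketch) is to name the template argument explicitly, so no deduction is needed:

float r = std::max<float>(f, 42.0); // 42.0 (a double) is implicitly converted to float at the call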

It's rarely about speed (at least directly); it's more that otherwise the compiler will warn about converting `double` to `float`.
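For example, a minimal sketch (the exact warning, and whether it is on by default, depends on the compiler; GCC/Clang typically need -Wconversion):

float a = 2.0;  // double literal implicitly converted to float; may trigger a conversion/truncation warning
float b = 2.0f; // float literal, no conversion, no warning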

- OK, but in the `a * 2.0` example, is it really using double multiplication? – Justin Meiners Sep 12 '10 at 22:25
- In theory, I believe that should result in multiplying two `double`s, then converting the result to a `float`. In reality, with it being 2.0, chances are that the compiler can/will figure out that it can use `float` throughout. OTOH, there are mandatory limits on optimizing FP math (especially in C99), so making it explicit could help. – Jerry Coffin Sep 12 '10 at 22:29
- Sometimes it is also about accuracy. Assuming your variable names do not reflect their types, `2.0f` reminds anyone else reading the code that they are dealing with a `float` and not a `double`. That becomes handy knowledge when debugging things like loops with lots of comparisons in them. – Carl Sep 12 '10 at 22:40
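To make the promotion discussed in these comments concrete, a minimal sketch (variable names are just for illustration):

float a = 3.0f;
float x = a * 2.0;  // a is promoted to double, the multiplication is done in double, then the result is converted back to float
float y = a * 2.0f; // the multiplication stays entirely in float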
AFAIK, on "normal" PCs (x86 with x87-like mathematical coprocessor) the difference in speed is irrelevant, since the calculations are internally done anyway in 80-bit precision.
Floats may gain importance when you have large arrays of floating-point numbers to manage (scientific calculations or stuff like that), so having a smaller data type may be convenient, both to use less memory and to be faster to read them from RAM/disk.
It may also be useful to use floats instead of doubles on machines that lack a floating point unit (e.g. most microcontrollers), where all the floating-point arithmetic is performed in software by code inserted by the compiler; in this case, there may be a gain in speed operating on floats (and in such environments often also every bit of memory matters).
On PCs, IMO you can just use double in "normal" contexts, just try to avoid mixing datatypes (double, floats, ints, ...) in the same expression to avoid unnecessary costly conversions. Anyhow, with literals the compiler should be smart enough to perform the conversion at compile time.
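As a rough illustration of the memory point above (a minimal sketch; the element count is arbitrary):

#include <cstdio>
#include <vector>

int main() {
    std::vector<float>  f(1000000);  // single precision: typically 4 bytes per element
    std::vector<double> d(1000000);  // double precision: typically 8 bytes per element
    std::printf("float buffer:  %zu bytes\n", f.size() * sizeof(float));
    std::printf("double buffer: %zu bytes\n", d.size() * sizeof(double));
}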

- From what I've read, the iPhone processor is an ARM1176 with the optional FPU provided (see here: http://www.arm.com/products/processors/technologies/vector-floating-point.php); I'm no expert on it, but I don't think speed changes a lot between floats and doubles in this case. My only concern is that it has "16 double precision or 32 single precision registers", so using only floats you may gain something from the additional registers, but it may also depend on how the environment is set up. – Matteo Italia Sep 12 '10 at 22:37
- Well, I know you can, but on the iPhone double is definitely slower than float. – Justin Meiners Sep 12 '10 at 22:39
- The only way to find out is profiling. Anyhow, if you want to play it safe and you don't need the additional precision of doubles, just use floats. – Matteo Italia Sep 12 '10 at 22:41
- Turns out that someone else already had a similar question: have a look at http://stackoverflow.com/questions/1622729/double-vs-float-on-the-iphone – Matteo Italia Sep 12 '10 at 22:47