107

What does the f after the numbers indicate? Is this from C or Objective-C? Is there any difference in not adding this to a constant number?

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

Can you explain why I wouldn't just write:

CGRect frame = CGRectMake(0, 0, 320, 50);
Alexander Abakumov
typeoneerror

10 Answers

94
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

uses float constants. (In C and Objective-C, the constant 0.0 is a double; putting an f on the end - 0.0f - makes the constant a (32-bit) float.)

CGRect frame = CGRectMake(0, 0, 320, 50);

uses int constants, which will be implicitly converted to float.

In this case, there's no (practical) difference between the two.

Frank Shearar
  • Theoretically, the compiler may not be smart enough to convert them to float at compile time, and would slow the execution down with four int->float conversions (which are among the slowest casts). Although in this case it's almost unimportant, it's always better to specify f correctly if needed: in an expression, a constant without the right suffix may force the whole expression to be converted to double, and if it's in a tight loop the performance hit may be noticeable. – Matteo Italia Mar 06 '10 at 11:18
61

When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:

#import <Cocoa/Cocoa.h>

void test() {
  CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
  NSLog(@"%f", r.size.width);
}

Then compile it to assembler with the -S option.

gcc -S test.m

Save the assembler output in the test.s file, remove .0f from the constants, and repeat the compile command. Then diff the new test.s against the previous one. That should show whether there are any real differences. Too many people have a mental picture of what they think the compiler does, but at the end of the day one should know how to verify any such theories.

epatel
  • The output turned out to be identical for me, even without any `-O`. I'm on i686-apple-darwin10-gcc-4.2.1 (GCC) – kizzx2 Apr 13 '11 at 14:57
  • I tried the above example with LLVM version 7.0.0 (clang-700.0.65) x86_64-apple-darwin15.0.0 and the .out files were identical as well. – Nick Aug 21 '15 at 00:00
44

Sometimes there is a difference.

float f = 0.3; /* OK, throw away bits to convert 0.3 from double to float */
assert ( f == 0.3 ); /* not OK, f is converted from float to double
   and the value of 0.3 depends on how many bits you use to represent it. */
assert ( f == 0.3f ); /* OK, comparing two floats, although == is finicky. */
Potatoswatter
28

It tells the compiler that this is a floating point number (I assume you are talking about C/C++ here). If there is no f after the number, it is treated as a double or an integer (depending on whether there is a decimal point).

3.0f -> float
3.0 -> double
3 -> integer
NickLH
  • is this convention part of the C++ standard or is it found in the compiler? – jxramos Sep 15 '15 at 19:52
  • As far as I can tell it is part of the standard (someone correct me if I'm wrong). The quickest reference I could find is http://open-std.org/jtc1/sc22/open/n2356/lex.html#lex.fcon, but there are probably more up to date references if you care to look for them. – NickLH Sep 16 '15 at 20:10
6

The f that you are talking about tells the compiler that it's working with a float. When you omit the f, the literal is treated as a double.

Both are floating point types, but a float uses fewer bits (and is thus smaller and less precise) than a double.

Kev
Yuri
5

A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float loses precision - a lot of precision, roughly half of the significant digits (a double carries about 15-16, a float only about 7). The "f" suffix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it".

The odds of producing a bug aren't that small, by the way. Many a program has keeled over on an ill-conceived floating point comparison or on the assumption that 0.1 is exactly representable.

Hans Passant
3

It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).

You can use ints to specify the values here and in this case it will make no difference, but using the correct type is a good habit to get into - consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.

Paul R
2

It's from C. It marks the literal as a float constant. You can omit both the "f" and the ".0" and use ints in your example, because ints are implicitly converted to float.

Wildcat
1

It is almost certainly from C and reflects the desire to use a 'float' rather than a 'double' type. It is similar to suffixes such as L on numbers to indicate they are long integers. You can just use integers and the compiler will auto convert as appropriate (for this specific scenario).

tyranid
0

It usually tells the compiler that the value is a float, i.e. a floating point number. This means it can store integer values, decimal values and values in exponent notation, e.g. 1, 0.4 or 1.2e+22.

Polynomial