
I tried doing the following:

```java
float a = 10.5; // compile error (required float, provided double)
```

Meaning the default type of a decimal literal is always double, which is 64 bits wide, whereas float is only 32 bits wide. So technically I can't put something big into a smaller 'cup' without an explicit conversion.

Then I made two corrections, both of which work. I am curious what the difference is (if there is any) between these two approaches:

```java
float a = 10.5f;
float a = (float) 10.5;
```
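
For reference, a minimal compilable sketch of all three cases (the class name `LiteralDemo` is arbitrary):

```java
public class LiteralDemo {
    public static void main(String[] args) {
        // float a = 10.5;       // does not compile: 10.5 is a double literal
        float b = 10.5f;         // a float literal, thanks to the f suffix
        float c = (float) 10.5;  // a double literal narrowed to float by an explicit cast
        System.out.println(b == c); // true: both variables hold the same 32-bit value
    }
}
```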
Stefan
  • Here, `float a = 10.5f` explicitly declares the variable `a` as a 32-bit float. In your second example, you're casting `10.5` (which is, by default, a double) to a float, which in general can lose precision. – kali Oct 15 '20 at 21:22
  • Have you seen [this](https://stackoverflow.com/questions/33163772/what-is-the-difference-between-casting-to-float-and-adding-f-as-a-suffix-whe) answer? It might help you understand the same question. – Vivek Jain Oct 15 '20 at 21:26
  • @kali Makes sense. One follow-up: why don't byte and short have this option? The only thing I can do there is cast, like `byte a = (byte)10000`. Something like `byte a = 10000b` is impossible (see the sketch after these comments). – Stefan Oct 15 '20 at 21:27
  • @VivekJain It helped :) – Stefan Oct 15 '20 at 21:32
  • I don't know why, but it doesn't matter a whole lot since you're not losing precision – user Oct 15 '20 at 21:43
  • @Markus there isn't such a thing as a byte literal (the `b` at the end of a number) – kali Oct 15 '20 at 21:52
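
A minimal sketch of the narrowing byte cast discussed in the comments (the class name is arbitrary; the printed values follow from Java's narrowing rules):

```java
public class ByteCastDemo {
    public static void main(String[] args) {
        byte a = (byte) 10000;   // narrowing cast keeps only the low 8 bits of the int
        System.out.println(a);   // prints 16, because 10000 is 0x2710 and its low byte is 0x10

        byte b = 100;            // an in-range constant is narrowed implicitly, so no suffix is needed
        System.out.println(b);   // prints 100
    }
}
```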

1 Answer


It is the difference between casting a double to a float and writing a float literal directly. In this instance it is better to use the literal with the f suffix, as there is little point in casting a literal. Additionally, casting a double to a float can lose precision (10.5 itself happens to be exactly representable in both types, so nothing is lost in this particular case).
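
A minimal sketch of both points, using the value 1.1 only because it is not exactly representable in binary:

```java
public class SuffixVsCast {
    public static void main(String[] args) {
        // For a literal, the f suffix and the cast produce the same 32-bit value.
        float withSuffix = 10.5f;
        float withCast   = (float) 10.5;
        System.out.println(withSuffix == withCast); // true

        // Casting a double to a float can lose precision.
        double d = 1.1;
        float  f = (float) d;            // narrowed to the nearest float
        System.out.println((double) f);  // about 1.100000023841858, no longer equal to d
    }
}
```

The literal case loses nothing because 10.5 is exactly representable in both types; the 1.1 case shows the rounding that a double-to-float cast can introduce.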