
I am working in Unity3D with C# and I get a weird result. Can anyone tell me why my code equals 0?

float A = 1 / 90;
Wiktor Zychla
anonymous-dev
  • What is the precision of `float`? – ScottJShea Feb 21 '15 at 22:26
  • I can't seem to find any documentation on that, but I guess I'm looking in the wrong place. It's not in the Unity script reference... – anonymous-dev Feb 21 '15 at 22:34
  • 4
    I changed the tag because this has nothing to do with unity, that's plain c#. 1 and 90 are int. So the result is an int (zero) than converted to float in the assigment. use proper suffix if you need float – Heisenbug Feb 21 '15 at 22:51
  • As in most C-derived languages, dividing an integer by an integer yields an integer result (truncated, not rounded -- so 4/5 is 0 too). If at least one of the operands is a floating-point constant (or cast to a floating-point type), then the result will be a floating point value. – Cameron Feb 21 '15 at 22:58
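The truncation the comments describe can be sketched in a minimal standalone program (class name is illustrative):

```csharp
using System;

class IntDivisionDemo
{
    static void Main()
    {
        // Both operands are int literals, so / truncates toward zero.
        Console.WriteLine(4 / 5);        // prints 0
        Console.WriteLine(1 / 90);       // prints 0

        // Casting (or suffixing) one operand makes the division floating-point.
        Console.WriteLine((float)4 / 5); // prints 0.8
        Console.WriteLine(1 / 90f);
    }
}
```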

1 Answer


The literals 1 and 90 are both interpreted as int, so integer division is performed. Only afterwards is the result converted to a float.

In general, C# reads any sequence of digits without a decimal point as an int. An int is converted to a float where necessary, but no conversion is needed before the assignment itself, so all calculations on the right-hand side are done with ints.
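This is observable at runtime: the literal's form decides the static type of the whole expression. A minimal sketch (variable and class names are illustrative):

```csharp
using System;

class LiteralTypes
{
    static void Main()
    {
        var q = 1 / 90;   // int / int -> int; q holds 0
        var f = 1f / 90;  // the f suffix makes this float division; f is a float

        Console.WriteLine(q.GetType()); // System.Int32
        Console.WriteLine(f.GetType()); // System.Single
        Console.WriteLine(q);           // 0
    }
}
```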

In other words, what you've written is:

float A = (float) (((int) 1) / ((int) 90));

(This is, more or less, the explicit form of what the compiler sees.)

Now a division of two ints is truncating: it keeps only the integral part of the quotient. The integral part of 0.0111... is 0, hence the zero.

If, however, you change one of the literals (or both) to a floating-point literal (1f, 1.0f, 90f, ...), this will work. Thus use one of these:

float A = 1/90.0f;
float A = 1.0f/90;
float A = 1.0f/90.0f;

In that case, floating-point division is performed, which takes both the integral and the fractional part into account.
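Applied to the original assignment, the fix can be sketched side by side (class name is illustrative):

```csharp
using System;

class FixedDivision
{
    static void Main()
    {
        float A = 1 / 90;  // integer division happens first, so A == 0
        float B = 1f / 90; // floating-point division, so B is about 0.0111

        Console.WriteLine(A); // prints 0
        Console.WriteLine(B);
    }
}
```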

Willem Van Onsem