Possible Duplicate:
Is JavaScript’s Math broken?

I'm running a few very basic operations in JavaScript to try to convert a float to a currency string. For example:

var t = [15.90, 15.95, 15.95];  // array literal instead of Array()
var x = t[0];
var output = String(x);         // build the display string
if (x % 1 == 0)                 // whole number: append ".0"
    output += ".0";
if (x % 0.1 == 0)               // exactly one decimal place: pad with a trailing "0"
    output += "0";

The second test (x % .1) should yield 0 for 15.90, but in some cases it doesn't, and when I log the value of x to the console I get 15.89999999999 or 15.900000000001. Why?

Thanks for your help.
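
A minimal snippet reproducing the behavior (the printed digits below are typical for IEEE-754 doubles, which JavaScript uses for all numbers; treat them as illustrative):

var x = 15.9;
console.log(x.toPrecision(20)); // ≈ "15.900000000000000355" — 15.9 has no exact binary representation
console.log(x % 0.1);           // a tiny nonzero remainder (≈ 0.0999999999999995), not 0
console.log(0.1 + 0.2);         // 0.30000000000000004 — the classic example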

Liftoff
  • Don't represent currency in a floating point data type. Ever. They have rounding errors. – TheZ Oct 31 '12 at 23:38
  • @TheZ: It's fine to use a floating point data type. It's not fine to use a *binary* floating point data type. (Using something like .NET's `System.Decimal` type is reasonable.) – Jon Skeet Oct 31 '12 at 23:39
  • @David A fixed or arbitrary precision type of some sort. – millimoose Oct 31 '12 at 23:39
  • @millimoose: Using a fixed precision *binary* type would have the same problem of not being able to represent all *decimal* values accurately, which is typically what you want for currency values. – Jon Skeet Oct 31 '12 at 23:40
  • @JonSkeet I was thinking fixed precision as "using integers to represent a fraction of a dollar", but you're right that it's an ambiguous term. – millimoose Oct 31 '12 at 23:41 *(a sketch of this idea appears after these comments)*
  • @nathanhayfield Note that *all* numbers in JavaScript are floating point numbers. There are no integers in JS. – NullUserException Oct 31 '12 at 23:42
  • Also relevant: http://stackoverflow.com/questions/2876536/precise-financial-calculation-in-javascript-what-are-the-gotchas – Felix Kling Oct 31 '12 at 23:43
  • @NullUserException Are they also usually represented that way, or can implementations use native ints? (Not much help if you do any math with them though I suppose.) – millimoose Oct 31 '12 at 23:44
  • [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Dave Newton Oct 31 '12 at 23:45
  • @millimoose There are some things internally in JS (eg: length of an array) where the standard calls for unsigned integers, in which case an implementation can choose to optimize and store it internally as an int. But the only real numeric type which you will have access to in JS is an IEEE-754 double. I don't think it makes sense for an implementation to use ints to store these numbers. – NullUserException Oct 31 '12 at 23:48
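
As millimoose suggests, a common workaround is to keep prices as whole cents, since integers up to 2^53 are exact in doubles. A minimal sketch of that idea (the function name here is made up for illustration, and it assumes non-negative amounts):

var prices = [1590, 1595, 1595]; // 15.90, 15.95, 15.95 stored as cents

function formatCents(cents) {
    var dollars = Math.floor(cents / 100);
    var rest = cents % 100;
    return dollars + "." + (rest < 10 ? "0" + rest : rest); // pad cents to two digits
}

formatCents(prices[0]); // "15.90"
formatCents(prices[1]); // "15.95"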

1 Answer

You could just use `toFixed()` to print out the number of decimals you want:

var priceString = price.toFixed(2);

That will always give you a string formatted to two decimal places.
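
For example (note that `toFixed()` returns a string, not a number):

(15.9).toFixed(2);               // "15.90"
(15.95).toFixed(2);              // "15.95"
(15.899999999999999).toFixed(2); // "15.90" — rounding hides the tiny binary error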

I know that lots of people here will tell you not to use floating point for currency-based computations. However, I find that the precision of floating point numbers is more than sufficient for reasonable dollar amounts. You won't lose pennies unless you multiply large amounts by small percentages, e.g. 1000000 * .0000001 = .099999999
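
A quick illustration of that trade-off, assuming errors only matter at display time (the exact digits can vary, but the pattern is typical):

var sum = 0;
for (var i = 0; i < 10; i++)
    sum += 0.1;                // add ten dimes
console.log(sum);              // 0.9999999999999999, not 1
console.log(sum.toFixed(2));   // "1.00" — rounding at display time absorbs the error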

slashingweapon
  • This will just round the number. You still lose precision. – millimoose Oct 31 '12 at 23:42
  • Precision wasn't really the question. The asker wanted to "try and convert a float to a currency", which in practice means "print out to two decimal places". – slashingweapon Oct 31 '12 at 23:50
  • `toFixed()` is way slower than using `Math.round()` http://jsperf.com/math-round-vs-tofixed-2-decimal-places – Ryan Jan 16 '14 at 17:55
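
If that performance difference matters, a rough Math.round-based equivalent might look like the sketch below; note that it rounds to cents but, unlike `toFixed()`, returns a number and does not pad trailing zeros:

var price = 15.899999999999999;
var cents = Math.round(price * 100); // 1590
var rounded = cents / 100;           // 15.9 (a number)
console.log(rounded);                // 15.9 — no trailing zero, unlike "15.90"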