Answer to the revised question:
Within JavaScript there is absolutely no way to distinguish between 2, 2.0, and 2.000. Therefore, unless some additional decimal place is supplied, you will never be able to detect from var a = 2.00 that the 2 was ever anything other than an integer (per your method) after it's been assigned.
Case in point, despite the [misleading] built-in methods:
typeof parseInt('2.00', 10) == typeof parseFloat('2.00')
'number' == 'number'
/* true */
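A quick way to see this for yourself (Number.isInteger is an ES2015+ check, added here for illustration and not part of the original answer):

// the trailing zeros in the literal are never stored; all of these compare equal
console.log(2 === 2.0);              // true
console.log(2 === 2.00);             // true
console.log(Number.isInteger(2.00)); // true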
Original Answer:
JavaScript doesn't have hard-typed scalar types, just a single Number. For that reason, and because you really only have one significant figure (the trailing zeros carry no value), JavaScript takes your 2.00 and treats it as an "integer" [used loosely], so no decimal places are present. To JavaScript: 2 = 2.0 = 2.00 = 2.00000.
Case in point: if I gave you the number 12.000000000000 and asked you to remember it and give it to someone a week from now, would you spend the time remembering how many zeros there were, or focus on the fact that I handed you the number 12? (Twelve takes a lot less effort to remember than twelve with that many decimal places.)
As far as int vs. float/double/real, you're really only describing the type of number from your perspective and not JavaScript's. Think of calling a number in JavaScript an int as giving it a label, not a definition. To outline:
Value: To JavaScript: To Us:
------ -------------- ------
1 Number integer
1.00 Number decimal
1.23 Number decimal
No matter what we may classify it as, JavaScript still only sees it as a Number.
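You can confirm that from any console; typeof reports "number" for every row of the table above:

console.log(typeof 1);    // "number"
console.log(typeof 1.00); // "number"
console.log(typeof 1.23); // "number"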
If you need to keep the decimal places, Number.prototype.toFixed(n) is going to be your best bet.
For example:
// only 1 sig-fig
var a = 2.00;
console.log(''+a); // 2
console.log(a.toFixed(2)); // 2.00
// 3 sig-figs
var b = 2.01;
console.log(''+b); // 2.01
console.log(b.toFixed(2)); // 2.01
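One caveat worth noting (not covered above): toFixed(n) returns a string, not a Number, so the padded zeros only survive as text:

var fixed = a.toFixed(2);
console.log(fixed);        // "2.00"
console.log(typeof fixed); // "string"
console.log(typeof a);     // "number"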
BTW, prefixing the variable with ''+ is the same as calling .toString() on it; it's just shorthand for the cast. The same outcome would result if I had used a.toString() or b.toString().
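For example, all three spellings below print the same thing (String(a) is an extra equivalent form, not from the original answer):

console.log('' + a);       // "2"
console.log(a.toString()); // "2"
console.log(String(a));    // "2"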