Why is JavaScript so much faster in this computation?
I've been running some tests with four simple factorial algorithms: recursion, tail recursion, a while loop, and a for loop. I've run the tests in R, Python, and JavaScript.
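For reference, the four forms look roughly like this in JavaScript (illustrative sketches only; the exact implementations are in the gists linked below):

```javascript
// Plain recursion: multiply on the way back up the call stack.
function factorialRecursive(n) {
  if (n <= 1) return 1;
  return n * factorialRecursive(n - 1);
}

// Tail recursion: carry the partial product in an accumulator so the
// recursive call is the last operation the function performs.
function factorialTailRecursive(n, acc) {
  acc = acc || 1;
  if (n <= 1) return acc;
  return factorialTailRecursive(n - 1, n * acc);
}

// While loop: multiply downward from n.
function factorialIterW(n) {
  var result = 1;
  while (n > 1) {
    result *= n;
    n -= 1;
  }
  return result;
}

// For loop: multiply upward from 2 to n.
function factorialIterF(n) {
  var result = 1;
  for (var i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}
```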
I measured the time it took for each algorithm to compute 150 factorial, 5000 times. For R I used `system.time(replicate())`. For Python I used `time.clock()`, the `resource` module, and the `timeit` module. For JavaScript I used `console.time()`, `Date().getMilliseconds()`, and `Date().getTime()`, running the script with node from the terminal.
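On the JavaScript side, the harness was along these lines (a minimal sketch of an assumed setup; the exact code is in the gist linked below):

```javascript
// Hypothetical timing harness; all of these methods measure wall-clock time.
function benchmark(label, fn) {
  // console.time()/console.timeEnd() print "<label>: <n>ms" directly.
  console.time(label);
  for (var i = 0; i < 5000; i++) {
    fn(150);
  }
  console.timeEnd(label);

  // new Date().getTime() returns a millisecond timestamp; elapsed time is
  // the difference between the timestamps before and after the loop.
  var start = new Date().getTime();
  for (var j = 0; j < 5000; j++) {
    fn(150);
  }
  console.log('Using Date().getTime(): ' + (new Date().getTime() - start) + 'ms');
}

benchmark('factorialRecursive()', factorialRecursive);
```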
This was never intended as a comparison of running times across languages, but rather to see which form (recursive, tail recursive, for loop, or while loop) is fastest in each of the languages I'm learning. Still, the performance of the JavaScript versions caught my attention.
You can see the four factorial algorithms and the measurement implementations here:
R factorial algorithms and performance.
Python factorial algorithms and performance.
JavaScript factorial algorithms and performance.
In the following examples, f stands for the for loop and w stands for the while loop.
The results for R are:
Running time of different factorial algorithm implementations, in seconds.
Compute 150 factorial 5000 times:
factorialRecursive():     user 0.044, system 0.001, elapsed 0.045
factorialTailRecursive(): user 3.409, system 0.012, elapsed 3.429
factorialIterW():         user 2.481, system 0.006, elapsed 2.488
factorialIterF():         user 0.868, system 0.002, elapsed 0.874
The results for Python are:
Running time of different factorial algorithm implementations, in seconds.
Uses the timeit module, the resource module, and a custom performance function.
Compute 150 factorial 5000 times:
factorial_recursive():      timeit 0.891448974609, custom 0.87, resource user 0.870953, resource system 0.001843
factorial_tail_recursive(): timeit 1.02211785316, custom 1.02, resource user 1.018795, resource system 0.00131
factorial_iter_w():         timeit 0.686491012573, custom 0.68, resource user 0.687408, resource system 0.001749
factorial_iter_f():         timeit 0.563406944275, custom 0.57, resource user 0.569383, resource system 0.001423
The results for JavaScript are:
Running time of different factorial algorithm implementations.
Uses console.time(), Date().getTime(), and Date().getMilliseconds().
Compute 150 factorial 5000 times:
factorialRecursive():     console.time(): 30ms, Date().getTime(): 19ms, Date().getMilliseconds(): 19ms
factorialTailRecursive(): console.time(): 44ms, Date().getTime(): 44ms, Date().getMilliseconds(): 43ms
factorialIterW():         console.time(): 4ms, Date().getTime(): 3ms, Date().getMilliseconds(): 3ms
factorialIterF():         console.time(): 4ms, Date().getTime(): 4ms, Date().getMilliseconds(): 3ms
If I understand correctly, there is no way to measure CPU time from JavaScript code itself; the methods used above all measure wall-clock time.
Still, the JavaScript wall-clock measurements are far lower than the Python or R timings. For example, the wall-clock running time of the for-loop factorial is 0.874 s in R, 0.57 s in Python, and 0.004 s in JavaScript, roughly two orders of magnitude faster.
Why is JavaScript so much faster in this computation?