The following Python 3.x integer-multiplication benchmark takes between 1.66 and 1.77 seconds per run:
import time

start_time = time.time()
num = 0
for x in range(0, 10000000):
    # num += 2 * (x * x)
    num += 2 * x * x
print("--- %s seconds ---" % (time.time() - start_time))
If I replace 2 * x * x with 2 * (x * x), it takes between 2.04 and 2.25 seconds. How come?
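To make it easier to see what actually differs, here is a small sketch using the standard dis module to compare the bytecode of the two expressions (exact opcode names vary across CPython versions, but both forms compile to two multiplications, grouped differently: (2 * x) * x versus 2 * (x * x)):

import dis

# Compare the bytecode of the two expressions; they differ only in
# the order in which the two multiplications are performed.
dis.dis(compile("2 * x * x", "<string>", "eval"))
dis.dis(compile("2 * (x * x)", "<string>", "eval"))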
On the other hand, the opposite holds in Java: 2 * (x * x) is the faster form there. Java test link: Why is 2 * (i * i) faster than 2 * i * i in Java?
I ran each version of the program 10 times; here are the results in seconds per run (a timeit-based rerun is sketched below the table):
2 * x * x | 2 * (x * x)
---------------------------------------
1.7717654705047607 | 2.0789272785186768
1.735931396484375 | 2.1166207790374756
1.7093875408172607 | 2.024367570877075
1.7004504203796387 | 2.047525405883789
1.6676218509674072 | 2.254328966140747
1.699510097503662 | 2.0949244499206543
1.6889283657073975 | 2.0841963291168213
1.7243537902832031 | 2.1290600299835205
1.712965488433838 | 2.1942825317382812
1.7622807025909424 | 2.1200053691864014
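For completeness, a sketch of the same measurement using the standard timeit module, which disables garbage collection during the timed runs; taking min() over several repeats is the usual way to reduce noise from other processes:

import timeit

stmt_flat = "num = 0\nfor x in range(10000000):\n    num += 2 * x * x"
stmt_paren = "num = 0\nfor x in range(10000000):\n    num += 2 * (x * x)"

# repeat=5, number=1: time five independent runs of one full loop each
print("2 * x * x   best:", min(timeit.repeat(stmt_flat, repeat=5, number=1)))
print("2 * (x * x) best:", min(timeit.repeat(stmt_paren, repeat=5, number=1)))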