I am currently comparing the same loop calculation in Python 3 and in C. For Python, I have:
# Python3
import time

t1 = time.process_time()
a = 100234555
b = 22333335
c = 341500
for i in range(1, 10000000001):
    a = a - (b % 2)
    b = b - (c % 2)
print("Sum is", a+b)
t2 = time.process_time()
print(t2-t1, "Seconds")
Then in C, I do the same thing:
#include <stdio.h>

int main() {
    long long a = 100234555;
    long long b = 22333335;
    long long c = 341500;
    for (long long i = 1; i <= 10000000000; i++) {
        a = a - (b % 2);
        b = b - (c % 2);
    }
    printf("Sum is %lld\n", a + b);
    return 0;
}
I timed both programs. The Python version takes around 3500 seconds, while the C version (including compilation and execution) takes only around 0.3 seconds.
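For reference, here is a minimal sketch of how the C loop could be timed from inside the program with clock(), which measures CPU time much like time.process_time() does on the Python side. The file name loop_timed.c and the compile command below are assumptions for illustration, not part of my actual setup:

// loop_timed.c (assumed name): time only the loop with clock(),
// analogous to time.process_time() in the Python version.
#include <stdio.h>
#include <time.h>

int main() {
    long long a = 100234555;
    long long b = 22333335;
    long long c = 341500;

    clock_t t1 = clock();                 // CPU time before the loop
    for (long long i = 1; i <= 10000000000LL; i++) {
        a = a - (b % 2);
        b = b - (c % 2);
    }
    clock_t t2 = clock();                 // CPU time after the loop

    printf("Sum is %lld\n", a + b);
    printf("%f Seconds\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

Compiling this with, for example, gcc loop_timed.c -o loop_timed (an assumed command line) and running ./loop_timed would report a loop-only time that is directly comparable to the Python measurement.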
I am wondering why there is such a big difference in timing. Both runs were done on a server with 100 GB of RAM and more than enough processing power.