I am developing a Python application that determines divisibility by repeatedly truncating the final digits of the dividend and transforming it into a smaller number. I am running Python 3.7.10 on Windows 10 with a P5 processor; the code was developed in PyCharm. I want to benchmark division by 16 performed as a 4-bit right shift against divmod().
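For reference, the shift-and-mask pair used in the code below is just the binary form of divmod(n, 16); a minimal sanity check (my own snippet, not part of the benchmark):

m = 3 ** 100 - 1
# A 4-bit right shift gives the quotient and the low-nibble mask gives the remainder.
assert (m >> 4, m & 0xF) == divmod(m, 16)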
import math
import datetime
from sys import getsizeof
def PrintTime(startTime, message):
    # Report elapsed wall-clock time in seconds since startTime.
    seconds_in_day = 24 * 3600
    later_time = datetime.datetime.now()
    difference = later_time - startTime
    time_diff = 10 ** 6 * (difference.days * seconds_in_day + difference.seconds) + difference.microseconds
    print(message + str(time_diff / 10 ** 6))
myVariable = 3 ** 100000 - 1
hLength = math.ceil(math.log(myVariable)/math.log(16))
dLength = math.ceil(math.log(myVariable)/math.log(10))
print('Length in hex' + ' ' + str(hLength) + '. Decimal length ' + str(dLength) + ' Size of myVariable ' + str(getsizeof(myVariable)))
n = myVariable
first_time = datetime.datetime.now()
for i1 in range(0, hLength - 1):
    a = n >> 4          # quotient n // 16 via a 4-bit right shift
    b = n & 0xF         # remainder n % 16: the truncated final hex digit
    n = a - 1761 * b    # transform the truncated value into a smaller number
PrintTime(first_time, 'Hex truncation time: ')
n = myVariable
first_time = datetime.datetime.now()
for i2 in range(0, dLength - 1):
    n, r = divmod(n, 10)    # strip one decimal digit per iteration
PrintTime(first_time, 'divmod time - base 10: ')
n = myVariable
first_time = datetime.datetime.now()
for i3 in range(0, hLength - 1):
    n, r = divmod(n, 16)    # strip one hex digit per iteration
PrintTime(first_time, 'divmod time - base 16: ')
This is the output from the application.
Length in hex 39625. Decimal length 47713 Size of myVariable 21160
Hex truncation time: 0.614306
divmod time - base 10: 0.526677
divmod time - base 16: 1.42582
Why does divmod(n, 16) take nearly three times as long to execute as divmod(n, 10)? Am I correct in saying that Python transforms the division by 16 into a base-10 division and then converts the result back to base 16?
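To isolate the divisor from the loop bounds, one check is to time an identical number of divmod steps for both divisors against the same operand; a rough sketch (strip_digits is just a throwaway name for this test):

import timeit

n0 = 3 ** 100000 - 1

def strip_digits(base, steps=30000):
    # Run a fixed number of divmod steps so both divisors do the same loop work.
    n = n0
    for _ in range(steps):
        n, _ = divmod(n, base)
    return n

print('base 10:', timeit.timeit(lambda: strip_digits(10), number=1))
print('base 16:', timeit.timeit(lambda: strip_digits(16), number=1))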