I have a Python script that writes the items of a dict to a file. Using 2.7 on Windows 10, the duration of the writes is reported as a number of milliseconds that always ends in either .0000nnn or .9999nnnn. I am using Python's time library just like this:
import time

logline = ""

def writer(mydict):
    global logline
    records = 0
    lstart_time = time.time()
    for item in mydict:
        some_open_file.write(item)
        records += 1  # count records so the per-record average below works
    dur = time.time() - lstart_time  # duration in seconds
    logline += ("\t{0:25}transformed {2} records in {1} ms; avg: {3:2.3f} mis per record.\n".format(
        mydict['name'], dur * 1000, records, (dur / records) * 1000000))

for documents in bigdict:
    logline += documents['title']
    for x in documents['records']:
        writer(x)
print(logline)
It doesn't happen on OS X with 2.7, though (see the OS X output).
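To check whether it's the granularity of time.time() itself rather than the writes, a quick probe like this should show the tick size directly (tick_size is just something I sketched for this question, not part of the script above; I'd expect it to report around a microsecond on the Mac and somewhere in the millisecond range on Windows, but that's my guess):

import time

def tick_size(timer):
    # Wait for the clock to tick once so we start on a tick boundary...
    t0 = timer()
    while timer() == t0:
        pass
    t1 = timer()
    # ...then spin until the next tick and report the gap.
    while timer() == t1:
        pass
    return timer() - t1

print(repr(tick_size(time.time)))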
How does the Windows implementation of Python handle these time floats differently than OS X? Both machines are 64-bit, both run Python 2.7, both have Intel Core processors, and both have SSDs (although that shouldn't matter). So why does Windows/Python handle the floats differently, and how can I get Windows to show the more precise numbers I get on the MacBook?
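My best guess at a workaround, which I haven't verified actually explains the .0000/.9999 pattern, is to time with timeit.default_timer instead of time.time(); on 2.7 the docs define it as time.clock on Windows and time.time everywhere else, so each platform gets its higher-resolution clock. The mydict and some_open_file stand-ins below are made up just so the snippet runs on its own:

import tempfile
from timeit import default_timer  # time.clock on Windows, time.time elsewhere on Python 2.7

# Hypothetical stand-ins for my real record dict and file handle.
mydict = {'a': 'one\n', 'b': 'two\n', 'c': 'three\n'}
some_open_file = tempfile.TemporaryFile(mode='w')

lstart_time = default_timer()
for item in mydict:
    some_open_file.write(mydict[item])
dur = default_timer() - lstart_time
print("{0:.9f} s for {1} records".format(dur, len(mydict)))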
Some observations: the Mac also seems to have greater variance in write times. While Windows consistently takes 5.7-6 μs per record written on average, the Mac ranged from 8-24 μs. I'm kinda surprised the 3-4-year-old desktop was faster with very similar overall specs, but I'm guessing that's because the desktop probably has a much larger L3 cache even though its chip is older and only slightly faster. That is a question for another forum.
I used 'mis' in the format string because μs was throwing all sorts of encoding errors. Instant praise for anyone who corrects that logline assignment to work with μs in any console.
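Since someone will ask what I tried on the encoding front: my untested guess is to keep logline as a unicode string, spell the micro sign as \u00b5 so the source file stays ASCII, and only encode at print time, replacing anything the console can't represent. The log_avg helper and its numbers are made up just to exercise the format:

import sys

logline = u""  # keep the log as unicode end to end

def log_avg(name, dur, records):
    global logline
    logline += u"\t{0:25}transformed {2} records in {1} ms; avg: {3:2.3f} \u00b5s per record.\n".format(
        name, dur * 1000, records, (dur / records) * 1000000)

log_avg("example_page", 0.0123, 2000)  # made-up numbers

# Encode only at the edge: fall back to UTF-8 when stdout has no declared
# encoding (e.g. redirected output) and replace unmappable characters.
enc = getattr(sys.stdout, "encoding", None) or "utf-8"
print(logline.encode(enc, "replace"))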