I want to measure the time required to load packages on different CPUs in Python (run from PyCharm). For that, I wrote this little script:
    import time
    from helper.aux_function import timeit, get_processor_name

    modules = ["tensorflow", "torch"]
    results = {"cpu": get_processor_name()}

    for m in modules:
        @timeit
        def import_module():
            return __import__(m)

        (mod, it) = import_module()
        results[f"{m}_{mod.__version__}"] = "{:.3f}".format(it)

    print(results)
where `helper/aux_function.py` contains:
    import platform
    import re
    import subprocess
    import time

    def timeit(func):
        """Decorator for measuring a function's running time."""
        def measure_time(*args, **kw):
            start_time = time.time()
            result = func(*args, **kw)
            et = time.time() - start_time
            print("Processing time of %s(): %.2f seconds."
                  % (func.__qualname__.split(".")[-1], et))
            return result, et
        return measure_time

    def get_processor_name():
        if platform.system() == "Linux":
            command = "cat /proc/cpuinfo"
            all_info = subprocess.check_output(command, shell=True).strip()
            for line in all_info.decode().split("\n"):
                if "model name" in line:
                    return re.sub(".*model name.*:", "", line, 1).strip()
        return ""
Running the script for the first time after switching to the right interpreter in PyCharm gives me:

    Processing time of import_module(): 9.91 seconds.
    Processing time of import_module(): 2.77 seconds.
However, running it a second time yields:

    Processing time of import_module(): 1.08 seconds.
    Processing time of import_module(): 0.32 seconds.
It seems obvious that some form of caching (presumably the OS file cache and/or the compiled `.pyc` bytecode cache) makes the second run faster. Is there a way to get comparable results in both cases, e.g. by disabling module caching or something similar?
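For context, I assume the main effect is the Linux page cache keeping the packages' files in memory after the first run. If that assumption holds, dropping that cache between runs (requires root) should restore the cold-start timings; this is a sketch, not something I have verified on my machine:

```shell
# Flush dirty pages to disk first
sync
# Drop page cache, dentries and inodes (3 = all of them); needs root
echo 3 | sudo tee /proc/sys/vm/drop_caches
```

Python additionally caches compiled bytecode in `__pycache__`; deleting those directories (or running with `python -B` so no new `.pyc` files are written) would address that part, though for large packages I would expect the file cache to dominate.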