I'm working on a time-critical application in Python. To identify the locations and situations where most of the time is consumed, I wrote a tracking decorator, similar to this one:
import time
from typing import Any, Callable

tracker_repository = {}

def track(name: str = "default", active: bool = None) -> Callable:
    if name not in tracker_repository:
        tracker_repository[name] = []

    def track_f(f) -> Callable:
        if not active:
            return f

        def exec_f(*args, **kwargs) -> Any:
            start_time = time.time()
            result = f(*args, **kwargs)
            end_time = time.time()
            tracker_repository[name].append({
                "f": f,
                "args": args,
                "kwargs": kwargs,
                "start_time": start_time,
                "end_time": end_time
            })
            return result

        return exec_f

    return track_f
And I use it like this:
@track()
def some_function(input):
    return "output"
Unfortunately, it turns out that using this tracking decorator on every function of the code base slows down execution considerably: the program is about three times slower than before. This holds true even when the decorator is "deactivated". If the "active" flag is False, the original function is returned, so exec_f is never even called.
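For reference, this is roughly how the overhead can be reproduced (a minimal sketch; the noop functions are placeholders, not part of my real code base):

import timeit

def noop():
    pass

@track(active=False)   # "deactivated": track_f returns the original function unchanged
def noop_deactivated():
    pass

@track(active=True)    # active: every call goes through exec_f and appends a record
def noop_tracked():
    pass

# Compare the per-call cost of the three variants. Labels are spelled out
# explicitly because the decorator does not preserve __name__ (no functools.wraps).
for label, fn in [("plain", noop), ("deactivated", noop_deactivated), ("tracked", noop_tracked)]:
    elapsed = timeit.timeit(fn, number=100_000)
    print(f"{label}: {elapsed:.4f} s for 100,000 calls")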
Do you have an idea what the issue could be here, and how the tracking code can be improved so that it has a smaller impact on the timing? Or are there better alternatives for measuring the execution time of parts of a program in real-life applications?