The problem with your timing approach is that you're measuring the execution time of each individual loop iteration rather than the overall time taken for the loop to run a given number of iterations. Timing such a tiny unit of work introduces inaccuracies: the overhead of starting and stopping the timer can dominate the measurement, and other tasks or processes on your system can interfere.
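For illustration, here's a sketch of the problematic pattern (estimate_distance is a hypothetical stand-in for your calculation):

import time

def estimate_distance():
    # hypothetical stand-in for the actual distance calculation
    return sum(x * x for x in range(100)) ** 0.5

for _ in range(5):
    start = time.time()
    estimate_distance()
    end = time.time()
    # Timer overhead and OS scheduling noise are comparable in size to the
    # work being timed, so each per-iteration reading is unreliable.
    print(f"one iteration took {end - start:.9f} s")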
To get a more accurate estimate of your code's execution time, consider the following approaches:
Time a large number of iterations
Instead of timing a single loop iteration (one distance calculation), time the code over many iterations and divide the total by the number of iterations to get an average time per iteration.
import time

num_iterations = 10000  # or any large number

start = time.time()
for _ in range(num_iterations):
    # Your distance estimation code here
    pass
end = time.time()

total_time = end - start
average_time_per_iteration = total_time / num_iterations
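On Python 3, time.perf_counter() is designed for interval measurements and has higher resolution than time.time(), so you may prefer it here. A minimal variant of the same idea:

import time

num_iterations = 10000
start = time.perf_counter()
for _ in range(num_iterations):
    # Your distance estimation code here
    pass
end = time.perf_counter()
average_time_per_iteration = (end - start) / num_iterations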
Use timeit
timeit is a dedicated tool for benchmarking small pieces of code. Its results are often more accurate than manual timing with time.time, for the following reasons (according to this answer: time.time vs. timeit.timeit):
- It repeats the test many times to eliminate the influence of other tasks on your machine, such as disk flushing and OS scheduling.
- It disables the garbage collector to prevent that process from skewing the results by scheduling a collection run at an inopportune moment.
- It picks the most accurate timer for your OS: time.time or time.clock in Python 2, and time.perf_counter() in Python 3. See timeit.default_timer.
Here's an example of how to use timeit:
import timeit

def estimate_distance():
    # Your distance estimation code here
    pass

num_iterations = 10000  # or an even bigger number

time_taken = timeit.timeit(estimate_distance, number=num_iterations)
average_time_per_iteration = time_taken / num_iterations
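If you want to go one step further, timeit.repeat runs the whole benchmark several times; taking the minimum of the runs is a common way to reduce the influence of background system activity. A sketch, reusing estimate_distance from above:

import timeit

def estimate_distance():
    # Your distance estimation code here
    pass

num_iterations = 10000
# repeat=5 runs the full benchmark five times; the minimum run is the
# least contaminated by other processes competing for the CPU.
times = timeit.repeat(estimate_distance, number=num_iterations, repeat=5)
average_time_per_iteration = min(times) / num_iterations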