I recently made an initial comparison of the running time of Dijkstra's algorithm using two data structures: Java's built-in PriorityQueue (which, if I'm not mistaken, is backed by a binary heap) and a Fibonacci heap. I used System.currentTimeMillis() to measure the running times. The results I ended up with are quite interesting. This is the output for one of my test cases:
Running Dijkstra's with 8 nodes and 27 links
- Execution time with binary heap: 1 milliseconds
- Execution time with Fibonacci heap: 4 milliseconds
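
For reference, here is a minimal sketch of the binary-heap variant and the timing approach (this is not my actual benchmark code, and the toy graph in main is made up). Since PriorityQueue has no decrease-key operation, this version re-inserts a node on every successful relaxation and skips stale entries when they are polled:

    import java.util.*;

    public class DijkstraTiming {
        // Dijkstra's with java.util.PriorityQueue. Entries are {node, distance};
        // stale entries are skipped when polled ("lazy deletion").
        static int[] dijkstra(List<int[]>[] adj, int source) {
            int[] dist = new int[adj.length];
            Arrays.fill(dist, Integer.MAX_VALUE);
            dist[source] = 0;
            PriorityQueue<int[]> pq =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
            pq.add(new int[]{source, 0});
            while (!pq.isEmpty()) {
                int[] cur = pq.poll();
                int u = cur[0];
                if (cur[1] > dist[u]) continue;        // stale entry; skip
                for (int[] edge : adj[u]) {            // edge = {neighbor, weight}
                    int v = edge[0], w = edge[1];
                    if (dist[u] + w < dist[v]) {
                        dist[v] = dist[u] + w;
                        pq.add(new int[]{v, dist[v]}); // re-insert instead of decrease-key
                    }
                }
            }
            return dist;
        }

        @SuppressWarnings("unchecked")
        public static void main(String[] args) {
            List<int[]>[] adj = new List[4];           // tiny made-up graph
            for (int i = 0; i < adj.length; i++) adj[i] = new ArrayList<>();
            adj[0].add(new int[]{1, 7});
            adj[0].add(new int[]{2, 3});
            adj[2].add(new int[]{1, 2});
            adj[1].add(new int[]{3, 1});
            long start = System.currentTimeMillis();
            int[] dist = dijkstra(adj, 0);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(Arrays.toString(dist) + " in " + elapsed + " milliseconds");
        }
    }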
Admittedly, I'm short on data sets at the moment, with the above graph being my largest (I plan to make more soon). But does this make any sense? I had always understood Fibonacci heaps to be faster than other priority queues here, since their O(1) amortized decrease-key beats the binary heap's O(log n). I'm not really sure where this 3-millisecond difference is coming from. (I'm running it on an Intel Core Ivy Bridge i7-3630M processor, if that helps.)
Note: I stumbled upon this thread, which might explain the issue, though I'm still unclear as to why the Fibonacci heap version is taking longer. According to that thread, it could be that my graph is not dense enough, so the number of decrease-key operations is too small for the Fibonacci heap's performance to really shine. Would this be the only plausible conclusion, or is there something else I'm missing? One way I could test it is by counting decrease-key operations, as in the sketch below.
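
To check the density hypothesis, I could instrument the relaxation step and count how many operations would be decreaseKey calls in the Fibonacci heap version (the random-graph generator and seed below are placeholders, not my actual test data):

    import java.util.*;

    public class DecreaseKeyCount {
        public static void main(String[] args) {
            int n = 8, m = 27;                         // same shape as my test case
            List<int[]>[] adj = buildRandomGraph(n, m, new Random(42));
            int[] dist = new int[n];
            Arrays.fill(dist, Integer.MAX_VALUE);
            dist[0] = 0;
            PriorityQueue<int[]> pq =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
            pq.add(new int[]{0, 0});
            int decreaseKeys = 0;
            while (!pq.isEmpty()) {
                int[] cur = pq.poll();
                if (cur[1] > dist[cur[0]]) continue;   // stale entry; skip
                for (int[] e : adj[cur[0]]) {
                    if (dist[cur[0]] + e[1] < dist[e[0]]) {
                        // In an insert-on-discovery Dijkstra, only relaxing a node
                        // that already has a finite tentative distance would be a
                        // decreaseKey on a Fibonacci heap; count those.
                        if (dist[e[0]] != Integer.MAX_VALUE) decreaseKeys++;
                        dist[e[0]] = dist[cur[0]] + e[1];
                        pq.add(new int[]{e[0], dist[e[0]]});
                    }
                }
            }
            System.out.println("Would-be decrease-key operations: " + decreaseKeys);
        }

        @SuppressWarnings("unchecked")
        static List<int[]>[] buildRandomGraph(int n, int m, Random rng) {
            List<int[]>[] adj = new List[n];
            for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
            for (int i = 0; i < m; i++) {
                int u = rng.nextInt(n), v = rng.nextInt(n);
                if (u != v) adj[u].add(new int[]{v, 1 + rng.nextInt(10)});
            }
            return adj;
        }
    }

My understanding is that if this count stays small relative to the number of poll operations, the Fibonacci heap's higher constant factor per operation would dominate, which might account for the gap.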