
I'm having difficulty understanding the time complexity of Dijkstra's algorithm. Below is the pseudocode I was analyzing for the array version. Here V is the number of vertices and E the number of edges. Q as an array has initialization time O(V); the while loop runs O(V) times, and finding the minimum plus the inner for loop cost O(V + E) per iteration, so the result would be O(V²). Am I correct?

    function Dijkstra(Graph, source):
        dist[source] ← 0                            // Initialization

        create vertex set Q

        for each vertex v in Graph:
            if v ≠ source
                dist[v] ← INFINITY                  // Unknown distance from source to v
                prev[v] ← UNDEFINED                 // Predecessor of v

            Q.add_with_priority(v, dist[v])

        while Q is not empty:                       // The main loop
            u ← Q.extract_min()                     // Remove and return best vertex
            for each neighbor v of u:               // only v that is still in Q
                alt ← dist[u] + length(u, v)
                if alt < dist[v]
                    dist[v] ← alt
                    prev[v] ← u
                    Q.decrease_priority(v, alt)

        return dist[], prev[]

Below is the pseudocode I was analyzing for the priority queue version. Now, Q as a priority queue has initialization time O(V); the while loop runs O(V) times, and extracting the minimum plus the inner for loop cost O(1 + E log V) per iteration, so the result would be O(VE log V). Am I correct? Considering that the worst case is E = (V - 1), couldn't the result then be O(V² log V)? Why is the value I find on the web O(E log V)? Why is the priority queue faster than the array?

    function Dijkstra(Graph, source):

        create vertex set Q

        for each vertex v in Graph:             // Initialization
            dist[v] ← INFINITY                  // Unknown distance from source to v
            prev[v] ← UNDEFINED                 // Previous node in optimal path from source
            add v to Q                          // All nodes initially in Q (unvisited nodes)

        dist[source] ← 0                        // Distance from source to source

        while Q is not empty:
            u ← vertex in Q with min dist[u]    // Node with the least distance will be selected first
            remove u from Q

            for each neighbor v of u:           // where v is still in Q
                alt ← dist[u] + length(u, v)
                if alt < dist[v]:               // A shorter path to v has been found
                    dist[v] ← alt
                    prev[v] ← u

        return dist[], prev[]

1 Answer


Q as an array has initialization time O(V); the while loop runs O(V) times, and finding the minimum plus the inner for loop cost O(V + E) per iteration, so the result would be O(V²). Am I correct?

If that were correct, the time complexity would be O(V² + VE), and O(VE) would dominate (since V <= E in a 'useful' graph), giving you O(VE) for the whole algorithm. But your analysis is NOT correct, which is why the resulting time complexity is O(V²).

You are correct that the min() operation inside the WHILE loop takes O(V), and thus O(V²) over the whole algorithm, but the FOR loop costs O(E) for the WHOLE duration of the algorithm, not per iteration. This is because you remove each vertex from Q exactly once, and you inspect all outgoing edges of each removed vertex exactly once. In other words, you inspect every edge once, thus O(E) in total, and O(V² + E) = O(V²) overall.
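To make the counting concrete, here is a minimal Python sketch of the array version (the adjacency-dict representation with (neighbor, weight) lists is my own assumption, not something taken from your pseudocode). The comments mark where the O(V) scan per iteration and the O(E) total edge work come from:

    import math

    def dijkstra_array(graph, source):
        """Array-based Dijkstra. `graph` maps each vertex to a list of
        (neighbor, weight) pairs; all weights must be non-negative."""
        dist = {v: math.inf for v in graph}
        prev = {v: None for v in graph}
        dist[source] = 0
        unvisited = set(graph)                      # Q as a plain collection

        while unvisited:                            # runs V times
            # O(V) linear scan for the minimum -> O(V^2) over all iterations
            u = min(unvisited, key=lambda v: dist[v])
            unvisited.remove(u)

            # Each vertex is removed exactly once, so across the whole run
            # this inner loop touches every edge once -> O(E) in TOTAL,
            # not O(E) per pass of the while loop.
            for v, w in graph[u]:
                alt = dist[u] + w
                if v in unvisited and alt < dist[v]:
                    dist[v] = alt
                    prev[v] = u

        return dist, prev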

Now, Q as a priority queue has initialization time O(V); the while loop runs O(V) times, and extracting the minimum plus the inner for loop cost O(1 + E log V) per iteration, so the result would be O(VE log V). Am I correct?

No. The time complexity of the min() operation and the FOR loop depends on which data structure is used as the priority queue. Assuming you use a min-heap as the priority queue, extract_min() takes O(log V), giving O(V log V) over all iterations, and the FOR loop performs at most one decrease_priority per edge at O(log V) each, so it takes a TOTAL of O(E log V), which dominates and becomes the total time complexity of the algorithm.
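For comparison, here is a minimal Python sketch using the standard heapq module as the min-heap (again, the graph representation is my assumption). heapq has no decrease_priority, so this sketch pushes a fresh entry and skips stale ones when popped; at most one entry is pushed per edge, which is what keeps the total heap work at O(E log V):

    import heapq
    import math

    def dijkstra_heap(graph, source):
        """Min-heap Dijkstra. `graph` maps each vertex to a list of
        (neighbor, weight) pairs; all weights must be non-negative."""
        dist = {v: math.inf for v in graph}
        prev = {v: None for v in graph}
        dist[source] = 0
        heap = [(0, source)]                        # (distance, vertex) entries

        while heap:
            d, u = heapq.heappop(heap)              # O(log V) per pop
            if d > dist[u]:                         # stale entry, already improved
                continue
            for v, w in graph[u]:
                alt = d + w
                if alt < dist[v]:
                    dist[v] = alt
                    prev[v] = u
                    # no decrease-key in heapq, so push a fresh entry instead;
                    # at most one push per edge -> O(E log V) in total
                    heapq.heappush(heap, (alt, v))

        return dist, prev

For example, dijkstra_heap({'a': [('b', 1)], 'b': []}, 'a') returns ({'a': 0, 'b': 1}, {'a': None, 'b': 'a'}).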

Here's a link to another answer that explains how to analyze the time complexity of Dijkstra's algorithm depending on which data structure you use to implement the priority queue:

Complexity Of Dijkstra's algorithm
