
I'm currently studying the depth-first search algorithm, and although I've confirmed that it runs in O(N+M), the linear behavior doesn't show up well when I measure the runtime: on graphs with 2,000 to 16,000 nodes and a constant 50,000 edges, the runtime stays almost constant (close to 0.5 seconds), as if the number of nodes barely contributes. Is there a way to get a more significant change in the runtime without adding many more nodes?

I'm using a Python implementation and measuring the runtime with the shell command `time`.
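For reference, here is a minimal sketch of timing only the DFS traversal from inside Python with `time.perf_counter()`, so interpreter startup is excluded from the measurement. The graph builder and function names are illustrative, not the original code:

    import random
    import time

    def build_random_graph(n, m):
        """Build an undirected graph with n nodes and roughly m random edges (illustrative)."""
        adj = {v: [] for v in range(n)}
        for _ in range(m):
            u, w = random.randrange(n), random.randrange(n)
            adj[u].append(w)
            adj[w].append(u)
        return adj

    def dfs(adj, start=0):
        """Iterative DFS; returns the set of nodes reachable from start."""
        visited = set()
        stack = [start]
        while stack:
            v = stack.pop()
            if v in visited:
                continue
            visited.add(v)
            stack.extend(w for w in adj[v] if w not in visited)
        return visited

    for n in (2000, 4000, 8000, 16000):
        adj = build_random_graph(n, 50000)
        t0 = time.perf_counter()
        dfs(adj)
        print(f"n={n}: {time.perf_counter() - t0:.4f} s")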

SrKurtz
  • (1) How did you measure it? Did you allow proper warm up before measuring? (2) In your case `m > n`, this behavior is kinda expected. – amit Oct 09 '16 at 19:40

1 Answer


The problem may be that Python has relatively high overhead (starting the interpreter, then reading and compiling the program), which can dominate the measurement when you time the whole process. Check out this question: How can you profile a Python script?

More specifically,

python -m cProfile myscript.py

should show you the total time (`tottime`) spent in the function that actually performs the DFS, separately from interpreter startup and other overhead.
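If the traversal lives in a single function, the profiler can also be driven from inside the script so that only the DFS call is measured. A sketch, assuming a `dfs(adj)` function like the one in the question (names are assumptions, not your actual code):

    import cProfile
    import pstats

    profiler = cProfile.Profile()
    profiler.enable()
    dfs(adj)              # the call under study (name assumed)
    profiler.disable()

    # Print the ten entries with the largest tottime
    pstats.Stats(profiler).sort_stats("tottime").print_stats(10)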

Kirill Bulygin