I'm currently studying the Depth-First Search algorithm, and although I've confirmed that it runs in O(N+M), this doesn't show up well when I measure the runtime: on graphs with 2000 to 16000 nodes and a constant 50000 edges, the runtime stays almost constant (close to 0.5 seconds), as if the number of nodes contributes very little to it. Is there a way to get a more significant change in the runtime without adding too many more nodes?
I'm using a Python implementation and measuring the runtime with the "time" command.
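For reference, here is a minimal sketch of the kind of setup I mean (not my exact code): the random-graph generator, the node counts, and timing the traversal with `time.perf_counter()` inside the script instead of the shell's `time` are just illustrative.

```python
import random
import time

def build_graph(num_nodes, num_edges):
    """Adjacency list of a random undirected graph with nodes 0..num_nodes-1."""
    adj = [[] for _ in range(num_nodes)]
    for _ in range(num_edges):
        u = random.randrange(num_nodes)
        v = random.randrange(num_nodes)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def dfs(adj, start=0):
    """Iterative DFS; returns the number of nodes reachable from start."""
    visited = [False] * len(adj)
    visited[start] = True
    stack = [start]
    count = 0
    while stack:
        node = stack.pop()
        count += 1
        for neighbor in adj[node]:
            if not visited[neighbor]:
                visited[neighbor] = True
                stack.append(neighbor)
    return count

# Time only the traversal for each node count, keeping the edge count fixed.
for n in (2000, 4000, 8000, 16000):
    adj = build_graph(n, 50000)
    start_time = time.perf_counter()
    dfs(adj)
    elapsed = time.perf_counter() - start_time
    print(f"{n} nodes, 50000 edges: {elapsed:.4f} s")
```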