Also, can this notion be used in a practical fashion to improve an algorithm? If so, how?
It's not so much used for improving an algorithm as for evaluating the performance of algorithms and deciding which algorithm to use. For any given problem, you really want to avoid algorithms that are O(N!) or O(N^x), since they slow down dramatically as the size of N (your input) increases. What you want is O(N) or O(log(N)), or even better, O(1).
O(1) is constant time, which means the algorithm takes the same amount of time to execute for a million inputs as it does for one. O(N) is of course linear, which means the time it takes to execute the algorithm increases in proportion to the size of its input.
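To make that concrete, here is a minimal sketch (the container and values are purely illustrative) of an O(1) lookup, an O(N) linear search and an O(log N) binary search over the same sorted data:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// O(1): indexing into a vector takes the same time regardless of its size.
int constant_time(const std::vector<int>& v, std::size_t i) {
    return v[i];
}

// O(N): in the worst case we look at every element once.
bool linear_search(const std::vector<int>& v, int target) {
    for (int x : v) {
        if (x == target) return true;
    }
    return false;
}

// O(log N): each comparison halves the remaining range
// (requires the vector to be sorted).
bool log_search(const std::vector<int>& v, int target) {
    return std::binary_search(v.begin(), v.end(), target);
}

int main() {
    std::vector<int> v = {1, 3, 5, 8, 13, 21, 34};
    std::cout << constant_time(v, 2) << "\n";      // 5
    std::cout << linear_search(v, 13) << "\n";     // 1
    std::cout << log_search(v, 4) << "\n";         // 0
}
```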
There are even some problems where every algorithm developed to solve them ends up being O(N!). Basically, no known fast algorithm solves the problem exactly (this class of problems is known as NP-complete). Once you realize you're dealing with such a problem, you can relax your requirements a bit and solve it imperfectly by "cheating". These cheats don't necessarily find the optimal solution but instead settle for good enough. My favorite cheats are genetic/evolutionary algorithms and rainbow tables.
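As a toy illustration of that kind of cheat, here is a hedged sketch of a tiny hill-climbing loop on a made-up knapsack instance (the items, capacity and iteration count are invented for the example): it never proves optimality, it just keeps the best candidate it has stumbled onto so far.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// Toy knapsack instance -- weights, values and capacity are made up.
struct Item { int weight; int value; };

// Score a selection: total value if it fits, zero if it is overweight.
int total_value(const std::vector<Item>& items, const std::vector<bool>& pick,
                int capacity) {
    int w = 0, v = 0;
    for (std::size_t i = 0; i < items.size(); ++i) {
        if (pick[i]) { w += items[i].weight; v += items[i].value; }
    }
    return (w <= capacity) ? v : 0;
}

int main() {
    std::srand(42);
    std::vector<Item> items = {{3, 4}, {4, 5}, {7, 10}, {8, 11}, {9, 13}};
    const int capacity = 17;

    // Start empty and repeatedly flip one random item in or out,
    // keeping the change only if it does not make the solution worse.
    std::vector<bool> best(items.size(), false);
    int best_value = total_value(items, best, capacity);
    for (int step = 0; step < 10000; ++step) {
        std::vector<bool> candidate = best;
        std::size_t i = std::rand() % items.size();
        candidate[i] = !candidate[i];
        int v = total_value(items, candidate, capacity);
        if (v >= best_value) { best = candidate; best_value = v; }
    }
    // Good enough, but not guaranteed optimal.
    std::cout << "best value found: " << best_value << "\n";
}
```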
Another example of how understanding algorithmic complexity changes how you think about programming is micro-optimizations. Or rather, not doing them. You often see newbies asking questions like "is ++x faster than x++?". Seasoned programmers mostly don't care and will usually reply that the first rule of optimization is: don't.
The more useful answer is that changing x++ to ++x does not in any way alter your algorithmic complexity. The complexity of your algorithm has a much greater impact on the speed of your code than any form of micro-optimization. For example, it is much more productive to look at your code and reduce the number of deeply nested for loops than it is to worry about how your compiler turns your code into assembly.
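Here is a hedged sketch of that idea (the data and the duplicate-finding task are invented for illustration): the nested-loop version is O(N^2), while remembering what we have already seen in a hash set makes it roughly O(N).

```cpp
#include <unordered_set>
#include <vector>

// O(N^2): for every element we rescan all earlier elements.
bool has_duplicate_slow(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        for (std::size_t j = 0; j < i; ++j) {
            if (v[i] == v[j]) return true;
        }
    }
    return false;
}

// Roughly O(N): one pass, remembering what we have already seen.
bool has_duplicate_fast(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v) {
        if (!seen.insert(x).second) return true;  // insert failed -> duplicate
    }
    return false;
}
```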
Yet another example is how, in games programming, speeding up code counter-intuitively involves adding even more code instead of reducing it. The added code is in the form of filters (basically if..else statements) that decide which bits of data need further processing and which can be discarded. From a micro-optimizer's point of view, adding code means more instructions for the CPU to execute. But in reality those filters reduce the problem space by discarding data and therefore run faster overall.
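As a hedged illustration (the entity structure and the "expensive" test are invented for the example), a cheap distance filter in front of an expensive per-pair collision test costs a few extra instructions but lets most pairs skip the slow path entirely:

```cpp
#include <vector>

// Invented entity type for the example: a position and a bounding radius.
struct Entity { float x, y, radius; };

// Stand-in for some expensive narrow-phase test (pixel-perfect overlap,
// physics resolution, etc.); the real work would go here.
bool expensive_collision_test(const Entity& a, const Entity& b) {
    (void)a; (void)b;
    return true;
}

void check_collisions(const std::vector<Entity>& entities) {
    for (std::size_t i = 0; i < entities.size(); ++i) {
        for (std::size_t j = i + 1; j < entities.size(); ++j) {
            const Entity& a = entities[i];
            const Entity& b = entities[j];
            // Filter: discard pairs that obviously cannot collide before
            // they ever reach the expensive test.
            float dx = a.x - b.x, dy = a.y - b.y;
            float reach = a.radius + b.radius;
            if (dx * dx + dy * dy > reach * reach) continue;
            expensive_collision_test(a, b);  // only a small fraction gets here
        }
    }
}
```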