
I am relatively new to C++, but I am very interested in the algorithmic aspect of programming.

Is there a general framework for deciding whether an algorithm is efficient, i.e. the quickest possible?

I am trying to write pseudocode on paper before implementing, but there are probably many different ways to solve any given problem.

Would be very keen to learn best practice for constructing / analysing algorithms.

Thanks, and Happy New Year!

OliKlima
    http://en.wikipedia.org/wiki/Big_O_notation is a start. (points to http://en.wikipedia.org/wiki/Analysis_of_algorithms as well) – njzk2 Dec 31 '13 at 23:28
  • "How can I tell if an algorithm is efficient?" - the only true way is benchmarking one of its actual implementations. –  Dec 31 '13 at 23:29
  • Thanks. I bought the Algorithms book by Cormen et al as well now - hopefully that'll help too. – OliKlima Dec 31 '13 at 23:33
  • @H2CO3, can you elaborate a little? Do you mean run an algo that I devise and try to optimise somehow? Thanks! – OliKlima Dec 31 '13 at 23:36
  • @H2CO3 Good point on benchmarking; it may also measure something totally different if the algorithm-to-code transformation was done wrong (like using binary search to optimize lookup in a sorted linked list :) ) – Alexei Levenkov Dec 31 '13 at 23:37
  • 1
    H2CO3 suggests to implement algorithm in target language (C++ in your case) and measure (time, memory usage, IO requests, whatever else *you* care about for your definition of "efficient" at that particular project) - don't forget to vary inputs (i.e. sorting array of 1, 10, 10^6 elements) as different algorithms scale differently. – Alexei Levenkov Dec 31 '13 at 23:43
  • When you talk about efficient vs optimal, are you talking about taking computing resource / memory into account? So that what may be mathematically efficient is not optimal in a computer sense? – OliKlima Jan 01 '14 at 00:04
  • *"Is there a general framework for deciding if an algorithm is efficient? i.e. the quickest possible?"* - no, this is an unsolvable problem similar to the [halting problem](http://en.wikipedia.org/wiki/Halting_problem), so there can never be such a framework. On the other hand, it is possible (and commonly done) to benchmark an algorithm to see how fast it performs for a given environment (by environment I mean compiler/OS/hardware etc.). Armed with that data, it will still be up to you to determine if it is the "fastest possible". – JBentley Jan 01 '14 at 03:39

1 Answer


Yes, you can start with the Wikipedia article explaining Big O notation, which in a nutshell is a way of describing the "efficiency" (an upper bound on the complexity) of different types of algorithms. Or you can look at an earlier answer where this is explained in plain English.

josephtikva1
  • Thanks, I studied pure maths at uni luckily so Big-O, little-o makes sense. – OliKlima Jan 01 '14 at 00:03
  • Be careful when using Big-O or Theta, it will tell you bounds on how an algorithm scales but not how fast it will actually run. In practice, an `O(log n)` algorithm may be slower than an `O(n)` algorithm for the range of `n` that represents valid inputs. An example where this happens is Fibonacci calculations where the input is constrained so that the results fit in a 32 bit integer. – pjs Jan 01 '14 at 00:31