90

As Knuth said,

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

This is something which often comes up in Stack Overflow answers to questions like "which is the most efficient loop mechanism", "SQL optimisation techniques?", and so on. The standard answer to these optimisation-tips questions is to profile your code first and see whether it's actually a problem; if it isn't, your new technique is unneeded.

My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?

Here's a related article by Randall Hyde called The Fallacy of Premature Optimization.

TylerH
nickf
  • 46
    It's kind of ironic that many people who yell "Premature optimization is the root of all evil" themselves have prematurely optimized the quote: (cont) – some Dec 22 '08 at 04:13
  • 25
    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%" (Donald Knuth) – some Dec 22 '08 at 04:14
  • 2
    I believe it was CA Hoare who said this. Even Knuth says so. – jamesh Dec 22 '08 at 15:20
  • 1
    yes, Tony Hoare first said the "premature optimization is the root of all evil part", but Knuth quoted/paraphrased him adding the rest, i believe – nickf Dec 22 '08 at 16:46
  • No, Hoare never said this; it was Knuth all the time except when he attributed Hoare by mistake. Please look it up. (I don't like to link to my own blog, but this is important... http://shreevatsa.wordpress.com/2008/05/16/ ) – ShreevatsaR Dec 26 '08 at 07:50
  • 2
    While I agree the quote in question is most often abused and taken out of context, it is, by definition, always correct because of the "premature" (however, it is most often used incorrectly as a justification for sloppy design and code). By definition, if the optimization happened at the most opportune point in development, be it during design or any other point, it was not "premature". – Lawrence Dol Jan 04 '11 at 23:49
  • Related: [Why is it so bad to optimize too early?](https://gamedev.stackexchange.com/q/141399/18565) at GameDev.SE – kenorb May 23 '17 at 13:59
  • 1
    Donald Knuth also wrote "I am sorry to say that many people nowadays are condemning program efficiency" a paragraph before the "premature optimization is the root of all evil part", and "The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers". – Pedro Amaral Couto Nov 19 '19 at 10:34

20 Answers

114

Don Knuth started the literate programming movement because he believed that the most important function of computer code is to communicate the programmer's intent to a human reader. Any coding practice that makes your code harder to understand in the name of performance is a premature optimization.

Certain idioms that were introduced in the name of optimization have become so popular that everyone understands them and they have become expected, not premature. Examples include

  • Using pointer arithmetic instead of array notation in C, including the use of such idioms as

    for (p = q; p < lim; p++)
    
  • Rebinding global variables to local variables in Lua, as in

    local table, io, string, math
        = table, io, string, math
    

Beyond such idioms, take shortcuts at your peril.

All optimization is premature unless

  • A program is too slow (many people forget this part).

  • You have a measurement (profile or similar) showing that the optimization could improve things.

(It's also permissible to optimize for memory.)

Direct answer to question:

  • If your "different" technique makes the program harder to understand, then it's a premature optimization.

EDIT: In response to comments, using quicksort instead of a simpler algorithm like insertion sort is another example of an idiom that everyone understands and expects. (Although if you write your own sort routine instead of using the library sort routine, one hopes you have a very good reason.)

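For instance, a minimal Java sketch of that point (the class name here is hypothetical): reach for the library sort, and only hand-roll one when measurement gives you that very good reason.

    import java.util.Arrays;

    // Hypothetical sketch: prefer the library sort routine over a hand-rolled one.
    public final class SortSketch {
        public static void main(String[] args) {
            int[] scores = {42, 7, 19, 3, 88};
            Arrays.sort(scores);                          // well understood and expected, not "premature"
            System.out.println(Arrays.toString(scores));  // [3, 7, 19, 42, 88]
        }
    }
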
yfeldblum
Norman Ramsey
  • what about development time considerations? – Manu Dec 22 '08 at 08:05
  • pointer arithmetic and array notation are equivalent in C...arrays are just syntactic sugar for pointer arithmetic – frankodwyer Dec 22 '08 at 08:27
  • 13
    By your definitions, if a quicksort implementation is harder to read and understand than a bubblesort, it is a premature optimization. You can't optimize for memory? Try looking up some examples for large sparse matrices. IMHO, most optimization should occur at the design stage, i.e. very early on. – SmacL Dec 22 '08 at 08:47
  • 1
    @frankodwyer: but incrementing the pointer is possibly faster than incrementing a counter and using the array notation, and it would be premature optimization. – Joachim Sauer Dec 22 '08 at 09:09
  • 5
    @Norman: Although quicksort is *now* ubiquitous, it wasn't when first invented, and was therefore QED a premature optimization that the author had no business messing with, right? – Lawrence Dol Feb 18 '09 at 20:39
  • 5
    @Software Monkey: Absolutely. All CS research is a waste of the taxpayers' money and should be halted immediately. – Norman Ramsey Feb 20 '09 at 04:42
  • 12
    Any sort algorithm, including the one you invented, is clear and concise if written as a separate function called sortQuickly(...) with proper comments. – ilya n. Jun 22 '09 at 12:35
  • The key rule should be.. If it is working and working fast enough.. Forget optimisation.. "Unhealthy perfectionism is a big problem" – Sreekanth Karumanaghat Sep 13 '17 at 12:20
  • @Manu The user's time is way more important than the programmer's time. – jv110 Feb 26 '18 at 00:57
  • Norman Ramsey wrote "Any coding practice that makes your code harder to understand in the name of performance is a premature optimization." No, 'premature' is all about the timing. If there is zero data showing the proposed optimization is materially problematic, then it's premature. – Erik Ostermueller Mar 22 '20 at 16:53
42

IMHO, 90% of your optimization should occur at the design stage, based on perceived current and, more importantly, future requirements. If you have to take out a profiler because your application doesn't scale to the required load, you have left it too late; IMO you will waste a lot of time and effort while failing to correct the problem.

Typically the only optimizations that are worthwhile are those that gain you an order of magnitude performance improvement in terms of speed, or a multiplier in terms of storage or bandwidth. These types of optimizations typically relate to algorithm selection and storage strategy, and are extremely difficult to reverse into existing code. They may go as deep as influencing the decision on the language in which you implement your system.

So my advice: optimize early, based on your requirements rather than your code, and look to the possible extended life of your app.

Carl Manaster
SmacL
  • 6
    I do not agree on your "left it too late" conclusion. Basically profiling is needed when an assumption does not hold, and the profiler is needed to tell you WHAT assumption broke. For instance I found that "delete the character at position 0" for StringBuffers in Java worked fine for the junit tests, but was VERY slow for large strings. I did not suspect that code until the profiler pinpointed it as the culprit! – Thorbjørn Ravn Andersen May 24 '09 at 15:13
  • 7
    I do agree with the "when you need the profiler, it's already too late", based on my experience - the majority of my performance problems are not singular bottlenecks, but spread out over multiple contributors. But then, I have a strong background in low level code & cost, and would have instinctively shunned anything that relies on (significantly repeated) removal of the first string character. +1 for "optimize during design". – peterchen Nov 23 '09 at 23:05
  • @peterchen just out of curiosity, what would you have done for "removal of the first string character." – Ghos3t Jun 01 '17 at 06:08
  • 1
    @user258365: Brute force would be to use a string representation that does not need to make a copy for sub strings. That's "almost trivial" for immutable reference counted strings. Alternatively, algorithmic changes, such as replacing (pseudocode) `while (s[0]==' ') s = s.substring(1)` `for(i=0; i – peterchen Jun 01 '17 at 09:32
  • @ThorbjørnRavnAndersen, I worked as a consultant to help a team to finish a project, yet it wasn't possible because severe performance issues were not planned (besides the spaghetti code). It was supposed to show a chronological chart with all patients' history. A single request was made for the entire data, like Google Maps fetching the entire world. Developing bad code, expecting for profiling later made the project fail. – Pedro Amaral Couto Nov 19 '19 at 19:33
  • @PedroAmaralCouto Naturally I have a basic assumption that the code is of a certain standard (even ten years ago). If the architecture is unsound, you have bigger problems and profiling won't save you. – Thorbjørn Ravn Andersen Nov 19 '19 at 20:33
  • @ThorbjørnRavnAndersen, nevertheless it's an example of being too late to use a profiler, as you agree. The profiler might be useful for debugging and improvements, but it doesn't replace good sense. – Pedro Amaral Couto Nov 19 '19 at 22:43
  • @ThorbjørnRavnAndersen, the point here is that if you treat performance as a requirement much like a functional requirement and deal with it at design stage you may never have to use the profiler. You really shouldn't need to have to write any code to know that your performance requirements will be met. As per your opening comment, I try to treat the profiler much like a debugger on the basis that most of my code will be performant by design and the profiler is used for the exceptions. Doesn't always pan out that way, but that's the plan ;) – SmacL Nov 20 '19 at 07:28
33

If you haven't profiled, it's premature.

JaredPar
  • 3
    I agree with the idea behind it, but also: unless the implementation is bound completely by CPU cycles, getting a measurement that's both reproducible and can be generalized is hard - and the more stable it is, the less realistic it is, too. – peterchen Nov 23 '09 at 22:48
  • 1
    The problem I have with the above answer is that it implies that you can't optimise an algorithm before you code it up. My way of working tends to be to design the algorithm to meet functional requirements, look at whether it is likely to fail performance requirements (e.g. high order of complexity and likely to hit large data sets), and optimise the algorithm before starting to code it. Optimisation is simply refinement to reach an optimal solution; it is often most efficiently done at design stage. – SmacL May 25 '17 at 07:00
  • 2
    I don't agree. Knuth was talking about small efficiencies. Optimization often happens at design stage. It involves the selection of proper data structures and algorithms, which often have a big performance impact and can't necessarily be exchanged later. – haslersn Jun 08 '17 at 22:02
  • @haslersn : "Knuth was talking about small efficiencies" Donald Knuth: "The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses (...) In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal (...)" – Pedro Amaral Couto Nov 19 '19 at 10:45
28

My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?

Um... So you have two techniques ready at hand, identical in cost (same effort to use, read, modify) and one is more efficient. No, using the more efficient one would not, in that case, be premature.

Interrupting your code-writing to look for alternatives to common programming constructs / library routines on the off-chance that there's a more efficient version hanging around somewhere even though for all you know the relative speed of what you're writing will never actually matter... That's premature.

Shog9
  • 3
    Agreed, if you know one algorithm is more efficient for your use case, by all means use the more efficient one. If you don't know the most efficient algorithm, use what you have and profile later to see if it is a problem. – grepsedawk Dec 22 '08 at 04:04
11

Here's the problem I see with the whole concept of avoiding premature optimization.

There's a disconnect between saying it and doing it.

I've done lots of performance tuning, squeezing large factors out of otherwise well-designed code, seemingly done without premature optimization. Here's an example.

In almost every case, the reason for the suboptimal performance is what I call galloping generality, which is the use of abstract multi-layer classes and thorough object-oriented design, where simple concepts would be less elegant but entirely sufficient.

And in the teaching material where these abstract design concepts are taught, such as notification-driven architecture, and information-hiding where simply setting a boolean property of an object can have an unbounded ripple effect of activities, what is the reason given? Efficiency.

So, was that premature optimization or not?

Mike Dunlavey
  • I like this answer, as it illustrates one of the major problems with abstraction and generalisation. As you generalise a class hierarchy to support a broader range of use cases, it is all too easy to seriously impair the performance for the most typical use cases. It is also easy to latch onto a class that provides a given piece of functionality without checking whether the functionality is provided at an acceptable level of performance for the scale of intended usage. – SmacL Feb 24 '10 at 08:10
  • 1
    "where simple concepts would be less *elegant* but entirely sufficient" Complex code is rarely more elegant than simple code when simple code meets the requirements. (Although, I would argue that you must ensure your simple code actually explodes with a clear indication of the unsupported state/input if someone tries to execute it on a more complex case.) – jpmc26 May 23 '17 at 00:46
9

First, get the code working. Second, verify that the code is correct. Third, make it fast.

Any code change that is done before stage #3 is definitely premature. I am not entirely sure how to classify design choices made before that (like using well-suited data structures), but I prefer to veer towards abstractions that are easy to program with rather than those that perform well, until I am at a stage where I can start profiling and have a correct (though frequently slow) reference implementation to compare results against.

Vatine
8

From a database perspective, not to consider optimal design at the design stage is foolhardy at best. Databases do not refactor easily. Once they are poorly designed (and this is what a design that doesn't consider optimization is, no matter how you might try to hide behind the nonsense of premature optimization), it is almost never possible to recover, because the database is too basic to the operation of the whole system. It is far less costly to design correctly, considering the optimal code for the situation you expect, than to wait until there are a million users and people are screaming because you used cursors throughout the application.

Other optimizations, such as using sargable code and selecting what look to be the best possible indexes, only make sense to do at design time. There is a reason why quick and dirty is called that: it can't ever work well, so don't use quickness as a substitute for good code. Also, frankly, when you understand performance tuning in databases, you can write code that is more likely to perform well in the same time or less than it takes to write code which doesn't perform well. Not taking the time to learn what good-performing database design looks like is developer laziness, not best practice.

HLGEM
7

What you seem to be talking about is optimization like using a hash-based lookup container vs an indexed one like an array when a lot of key lookups will be done. This is not premature optimization, but something you should decide in the design phase.

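A minimal Java sketch of that design-phase choice (class and method names are hypothetical): when many lookups by key are expected, a hash-based container replaces a linear scan with an expected constant-time lookup.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: many key lookups favour a hash-based container
    // over scanning an indexed collection such as an array.
    public final class LookupSketch {

        // Linear scan: O(n) per lookup; fine for a handful of records.
        static String findByIdLinear(String[][] records, String id) {
            for (String[] record : records) {
                if (record[0].equals(id)) {
                    return record[1];
                }
            }
            return null;
        }

        public static void main(String[] args) {
            String[][] records = {{"u1", "Ada"}, {"u2", "Grace"}, {"u3", "Edsger"}};

            // Hash-based: build the map once, then each lookup is expected O(1).
            Map<String, String> byId = new HashMap<>();
            for (String[] record : records) {
                byId.put(record[0], record[1]);
            }

            System.out.println(findByIdLinear(records, "u2"));  // Grace
            System.out.println(byId.get("u2"));                 // Grace
        }
    }
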
The kind of optimization the Knuth rule is about is minimizing the length of the most common codepaths: optimizing the code that is run most often, for example by rewriting it in assembly or simplifying it, making it less general. But doing this has no use until you are certain which parts of the code need this kind of optimization, and optimizing will (could?) make the code harder to understand or maintain; hence "premature optimization is the root of all evil".

Knuth also says it is always better, instead of optimizing, to change the algorithms your program uses, the approach it takes to a problem. For example, whereas a little tweaking might give you a 10% increase in speed, fundamentally changing the way your program works might make it 10x faster.

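To make that concrete, here is a hypothetical Java sketch: a micro-tweak to the nested-loop version might buy a few percent, but changing the approach (sort first, then compare neighbours) changes the complexity class from O(n²) to O(n log n).

    import java.util.Arrays;

    // Hypothetical sketch: algorithm change vs. micro-tweak, for duplicate detection.
    public final class DuplicateCheckSketch {

        // O(n^2): compare every pair of elements.
        static boolean hasDuplicateQuadratic(int[] values) {
            for (int i = 0; i < values.length; i++) {
                for (int j = i + 1; j < values.length; j++) {
                    if (values[i] == values[j]) {
                        return true;
                    }
                }
            }
            return false;
        }

        // O(n log n): sort a copy, then any duplicates must be adjacent.
        static boolean hasDuplicateSorted(int[] values) {
            int[] copy = values.clone();
            Arrays.sort(copy);
            for (int i = 1; i < copy.length; i++) {
                if (copy[i] == copy[i - 1]) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            int[] data = {4, 9, 1, 9, 7};
            System.out.println(hasDuplicateQuadratic(data));  // true
            System.out.println(hasDuplicateSorted(data));     // true
        }
    }
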
In reaction to a lot of the other comments posted on this question: algorithm selection != optimization

FredV
6

The point of the maxim is that, typically, optimization is convoluted and complex. And typically, you the architect/designer/programmer/maintainer need clear and concise code in order to understand what is going on.

If a particular optimization is clear and concise, feel free to experiment with it (but do go back and check whether that optimization is effective). The point is to keep the code clear and concise throughout the development process, until the benefits of performance outweigh the induced costs of writing and maintaining the optimizations.

yfeldblum
  • 2
    Actually, a fair bit of "optimization" boils down to choosing the proper algorithm for the job; it's a high-level activity with high-level results - a far cry from the "small efficiencies" in the Knuth quote. – Shog9 Dec 22 '08 at 04:09
5

Optimization can happen at different levels of granularity, from very high-level to very low-level:

  1. Start with a good architecture, loose coupling, modularity, etc.

  2. Choose the right data structures and algorithms for the problem.

  3. Optimize for memory, trying to fit more code/data in the cache. The memory subsystem is 10 to 100 times slower than the CPU, and if your data gets paged to disk, it's 1000 to 10,000 times slower. Being cautious about memory consumption is more likely to provide major gains than optimizing individual instructions.

  4. Within each function, make appropriate use of flow-control statements. (Move immutable expressions outside of the loop body. Put the most common value first in a switch/case, etc.)

  5. Within each statement, use the most efficient expressions yielding the correct result. (Multiply vs. shift, etc.; see the sketch after this list.)

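A minimal Java sketch of points 4 and 5 (the names are hypothetical): hoist a loop-invariant expression out of the loop body and express a power-of-two multiply as a shift. Modern compilers and JITs often perform both transformations automatically, so measure before assuming a win.

    // Hypothetical sketch: loop-invariant hoisting and multiply-vs-shift.
    public final class FlowControlSketch {

        // Before: the invariant bound is recomputed on every iteration.
        static long sumScaledNaive(int[] values, int scale) {
            long total = 0;
            for (int i = 0; i < values.length; i++) {
                int bound = scale * scale;       // invariant, yet evaluated each pass
                if (values[i] < bound) {
                    total += values[i] * 8;      // multiply by a power of two
                }
            }
            return total;
        }

        // After: the invariant is hoisted and the multiply becomes a shift.
        static long sumScaledHoisted(int[] values, int scale) {
            long total = 0;
            final int bound = scale * scale;     // computed once, outside the loop
            for (int value : values) {
                if (value < bound) {
                    total += (long) value << 3;  // value * 8, expressed as a shift
                }
            }
            return total;
        }

        public static void main(String[] args) {
            int[] data = {1, 5, 9, 14, 3};
            System.out.println(sumScaledNaive(data, 4));    // 256
            System.out.println(sumScaledHoisted(data, 4));  // 256
        }
    }
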
Nit-picking about whether to use a divide expression or a shift expression isn't necessarily premature optimization. It's only premature if you do so without first optimizing the architecture, data structures, algorithms, memory footprint, and flow-control.

And of course, any optimization is premature if you don't define a goal performance threshold.

In most cases, either:

A) You can reach the goal performance threshold by performing high-level optimizations, so it's not necessary to fiddle with the expressions.

or

B) Even after performing all possible optimizations, you won't meet your goal performance threshold, and the low-level optimizations don't make enough difference in performance to justify the loss of readability.

In my experience, most optimization problems can be solved at either the architecture/design or data-structure/algorithm level. Optimizing for memory footprint is often (though not always) called for. But it's rarely necessary to optimize the flow control & expression logic. And in those cases where it actually is necessary, it's rarely sufficient.

benjismith
4

I try to only optimise when a performance issue is confirmed.

My definition of premature optimisation is 'effort wasted on code that is not known to be a performance problem.' There is most definitely a time and place for optimisation. However, the trick is to spend the extra cost only where it counts to the performance of the application and where the additional cost outweighs the performance hit.

When writing code (or a DB query) I strive to write 'efficient' code (i.e. code that performs its intended function quickly and completely, with the simplest reasonable logic). Note that 'efficient' code is not necessarily the same as 'optimised' code. Optimisations often introduce additional complexity into code, which increases both the development and maintenance cost of that code.

My advice: Try to only pay the cost of optimisation when you can quantify the benefit.

Chris Nava
4

When programming, a number of parameters are vital. Among these are:

  • Readability
  • Maintainability
  • Complexity
  • Robustness
  • Correctness
  • Performance
  • Development time

Optimisation (going for performance) often comes at the expense of other parameters, and must be balanced against the "loss" in these areas.

When you have the option of choosing well-known algorithms that perform well, the cost of "optimising" up-front is often acceptable.

Ola Eldøy
  • 1
    You are missing the single most important QA parameter in your list above: meeting requirements. If a piece of software does not meet the requirements of the intended audience, all of the other parameters are meaningless. If performance is not acceptable, requirements have not been met. – SmacL Dec 22 '08 at 11:43
  • 3
    That could be said to be covered by correctness. Besides, 'performance' in the sense of 'as fast as possible' is very rarely among requirements, and even if it were, Ola's point of it being a tradeoff with the other needs remains true. – frankodwyer Dec 22 '08 at 14:05
3

Norman's answer is excellent. Somehow, you routinely do some "premature optimizations" which are, actually, best practices, because doing otherwise is known to be totally inefficient.

For example, to add to Norman's list:

  • Using StringBuilder concatenation in Java (or C#, etc.) instead of String + String in a loop (see the sketch after this list);
  • Avoiding loops in C like for (i = 0; i < strlen(str); i++) (because strlen here is a function call that walks the string again on every iteration);
  • In most JavaScript implementations, it seems faster to write for (i = 0, l = str.length; i < l; i++), and it is still readable, so OK.

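A minimal Java sketch of the first bullet (class and method names are hypothetical): repeated String + String in a loop copies the whole accumulated string each time, while StringBuilder appends into one growable buffer.

    // Hypothetical sketch: String concatenation in a loop vs. StringBuilder.
    public final class ConcatSketch {

        // Roughly O(n^2) characters copied overall: each += allocates a new String.
        static String joinWithPlus(String[] words) {
            String result = "";
            for (String word : words) {
                result += word + " ";
            }
            return result;
        }

        // Roughly O(n): one growable buffer, one final copy in toString().
        static String joinWithBuilder(String[] words) {
            StringBuilder sb = new StringBuilder();
            for (String word : words) {
                sb.append(word).append(' ');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            String[] words = {"premature", "optimization", "quote"};
            System.out.println(joinWithPlus(words).equals(joinWithBuilder(words)));  // true
        }
    }
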
And so on. But such micro-optimizations should never come at the cost of readability of code.

PhiLho
3

The need to use a profiler should be left for extreme cases. The engineers of the project should be aware of where performance bottlenecks are.

I think "premature optimisation" is incredibly subjective.

If I am writing some code and I know that I should be using a Hashtable then I will do that. I won't implement it in some flawed way and then wait for the bug report to arrive a month or a year later when somebody is having a problem with it.

Redesign is more costly than optimising a design in obvious ways from the start.

Obviously some small things will be missed the first time around but these are rarely key design decisions.

Therefore: NOT optimising a design is IMO a code smell in and of itself.

nbevans
  • Thing is that bottlenecks often come up in sections of code you never thought would be a problem. Profiling dispenses with pretense and shows the actual cost centers of the program. It's best to do obvious things right from the start, but for everything else there's profiling. – Chris Smith Jan 02 '09 at 21:02
3

It's worth noting that Knuth's original quote came from a paper he wrote promoting the use of goto in carefully selected and measured areas as a way to eliminate hotspots. His quote was a caveat he added to justify his rationale for using goto in order to speed up those critical loops.

[...] again, this is a noticeable saving in the overall running speed, if, say, the average value of n is about 20, and if the search routine is performed about a million or so times in the program. Such loop optimizations [using gotos] are not difficult to learn and, as I have said, they are appropriate in just a small part of a program, yet they often yield substantial savings. [...]

And continues:

The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a oneshot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies [i.e., goto statements in this context].

Keep in mind how he used "optimized" in quotes (the software probably isn't actually efficient). Also note how he isn't just criticizing these "pennywise-and-pound-foolish" programmers, but also the people who react by suggesting you should always ignore small inefficiencies. Finally, to the frequently-quoted part:

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

... and then some more about the importance of profiling tools:

It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.

People have misused his quote all over the place, often suggesting that micro-optimizations are premature, when his entire paper was advocating micro-optimizations! One of the groups of people he was criticizing, those who echo this "conventional wisdom", as he put it, of always ignoring efficiencies in the small, are often the very ones misusing his quote, which was originally directed, in part, against exactly the types who discourage all forms of micro-optimization.

Yet it was a quote in favor of appropriately applied micro-optimizations, when used by an experienced hand holding a profiler. Today's analogical equivalent might be something like, "People shouldn't be taking blind stabs at optimizing their software, but custom memory allocators can make a huge difference when applied in key areas to improve locality of reference," or, "Handwritten SIMD code using an SoA rep is really hard to maintain and you shouldn't be using it all over the place, but it can consume memory much faster when applied appropriately by an experienced and guided hand."

Any time you're trying to promote carefully-applied micro-optimizations as Knuth promoted above, it's good to throw in a disclaimer to discourage novices from getting too excited and blindly taking stabs at optimization, like rewriting their entire software to use goto. That's in part what he was doing. His quote was effectively a part of a big disclaimer, just like someone doing a motorcycle jump over a flaming fire pit might add a disclaimer that amateurs shouldn't try this at home while simultaneously criticizing those who try without proper knowledge and equipment and get hurt.

What he deemed "premature optimizations" were optimizations applied by people who effectively didn't know what they were doing: didn't know if the optimization was really needed, didn't measure with proper tools, maybe didn't understand the nature of their compiler or computer architecture, and most of all, were "pennywise-and-pound-foolish", meaning they overlooked the big opportunities to optimize (save millions of dollars) by trying to pinch pennies, and all while creating code they can no longer effectively debug and maintain.

If you don't fit in the "pennywise-and-pound-foolish" category, then you aren't prematurely optimizing by Knuth's standards, even if you're using a goto in order to speed up a critical loop (something which is unlikely to help much against today's optimizers, but if it did, and in a genuinely critical area, then you wouldn't be prematurely optimizing). If you're actually applying whatever you're doing in areas that are genuinely needed and they genuinely benefit from it, then you're doing just great in the eyes of Knuth.

1

Premature optimization to me means trying to improve the efficiency of your code before you have a working system, and before you have actually profiled it and know where the bottleneck is. Even after that, readability and maintainability should come before optimization in many cases.

PolyThinker
1

I don't think that recognized best practices are premature optimizations. It's more about burning time on the what-ifs that are potential performance problems depending on the usage scenarios. A good example: if you burn a week trying to optimize reflection over an object before you have proof that it is a bottleneck, you are prematurely optimizing.

Daniel Auger
1

Unless you find that you need more performance out of your application, due to either a user or business need, there's little reason to worry about optimizing. Even then, don't do anything until you've profiled your code. Then attack the parts which take the most time.

Kamil Kisiel
1

As I posted on a similar question, the rules of optimisation are:

1) Don't optimise

2) (for experts only) Optimise later

When is optimisation premature? Usually.

The exception is perhaps in your design, or in well encapsulated code that is heavily used. In the past I've worked on some time critical code (an RSA implementation) where looking at the assembler that the compiler produced and removing a single unnecessary instruction in an inner loop gave a 30% speedup. But, the speedup from using more sophisticated algorithms was orders of magnitude more than that.

Another question to ask yourself when optimising is "am I doing the equivalent of optimising for a 300 baud modem here?". In other words, will Moore's law make your optimisation irrelevant before too long? Many problems of scaling can be solved just by throwing more hardware at the problem.

Last but not least, it's premature to optimise before the program is going too slowly. If it's a web application you're talking about, you can run it under load to see where the bottlenecks are - but the likelihood is that you will have the same scaling problems as most other sites, and the same solutions will apply.

edit: Incidentally, regarding the linked article, I would question many of the assumptions made. Firstly, it's not true that Moore's law stopped working in the 90s. Secondly, it's not obvious that users' time is more valuable than programmers' time. Most users are (to say the least) not frantically using every CPU cycle available anyhow; they are probably waiting for the network to do something. Plus there is an opportunity cost when programmers' time is diverted from implementing something else to shaving a few milliseconds off something that the program does while the user is on the phone. Anything longer than that isn't usually optimisation, it's bug fixing.

frankodwyer
-1

The way I see it, optimizing something without knowing how much performance you can gain in different scenarios IS premature optimization. The goal of code should really be to make it as easy as possible for a human to read.