363

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread on my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

Jon Skeet

407 Answers

872

Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)

wvdschel
rustyshelf
769

The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jump on too many bandwagons, trying to force methods, patterns, frameworks, etc. onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions, etc. Just that people shouldn't blindly jump on something without asking WHY this "thing" is so great, whether it IS applicable to what they're doing, and WHAT benefits and drawbacks it brings.

David Basarab
Steven Robbins
710

Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
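A minimal sketch of the contrast (a hypothetical invoice example; the names and the fee are mine): the first method carries exactly the kind of comment being criticised, the second replaces it with a name that says *what* and keeps a comment only for *why*.

```java
public class CommentStyle {
    static final double PROCESSING_FEE = 1.0;

    // The style being criticised: the comment merely restates the code.
    static double closeInvoiceCommented(double invoiceTotal) {
        // this next line adds one to invoiceTotal
        invoiceTotal = invoiceTotal + PROCESSING_FEE;
        return invoiceTotal;
    }

    // The alternative: an intention-revealing name, and a comment
    // reserved for the one thing the code cannot say.
    static double applyProcessingFee(double invoiceTotal) {
        // The fee amount is fixed by the (hypothetical) billing contract.
        return invoiceTotal + PROCESSING_FEE;
    }

    public static void main(String[] args) {
        System.out.println(applyProcessingFee(100.0));
    }
}
```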

Ed Guiness
710

"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear people criticized for googling the answers to their problems, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)

Gene Roberts
693

XML is highly overrated

I think too many jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

My 5 cents

678

Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have the same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)

Dmitri Nesteruk
612

I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.

Michael Myers
Learning
539

If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it - every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.

glenatron
535

Performance does matter.

David Basarab
Daniel Paull
488

Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through in a debugger, and you can compare printed outputs against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
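As a sketch of both halves of the advice (the sumOfSquares function and its names are my invention): a quick println trace sits next to the debug-level logging statement it could later become.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class PrintDebug {
    private static final Logger LOG = Logger.getLogger(PrintDebug.class.getName());

    static int sumOfSquares(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i * i;
            // Quick-and-dirty trace while hunting a bug; its output is
            // easy to diff against another run of the app.
            System.out.println("i=" + i + ", total=" + total);
        }
        // The longer-lived alternative: a debug-level log statement that
        // can be switched off in production instead of being deleted.
        LOG.log(Level.FINE, "sumOfSquares({0}) = {1}", new Object[]{n, total});
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(3));
    }
}
```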

David Koelle
467

Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

Mike Hofer
465

1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist just to integrate the work of the other APIs, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity == bad; it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.

pavpanchekha
Daishiman
439

Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, except perhaps a bit different if you're using threads (though that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but against making a getter/setter (or Property) for every one of them, and then claiming that doing that is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say that (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
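A hypothetical Account example of the distinction (the classes and method names are mine, not from the answer): the accessor pair leaks the representation exactly as a public field would, while behaviour-level methods keep it free to change.

```java
// Accessor pair: callers read and write the field directly, so its type
// and meaning are effectively public despite the "private" keyword.
class AccountWithAccessors {
    private double balance;
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}

// Behaviour instead of data: the representation (integer cents rather
// than a double) can change on a whim without touching any caller.
class Account {
    private long cents;

    public void deposit(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        cents += amountInCents;
    }

    public boolean canWithdraw(long amountInCents) {
        return amountInCents > 0 && amountInCents <= cents;
    }
}
```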

Pablo Fernandez
383

UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.

John Topley
Ludwig Wensauer
380

Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?
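As a hypothetical illustration (the table and column names are mine): the same query free-formatted, then formatted so the JOIN condition stands out.

```sql
-- Sloppy free-format, JOIN condition buried in the WHERE clause:
select o.id,c.name from orders o,customers c where o.customer_id=c.id and o.total>100;

-- The same query, laid out so the JOIN condition is explicit and visible:
SELECT o.id,
       c.name
FROM   orders o
       JOIN customers c ON c.id = o.customer_id
WHERE  o.total > 100;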

Timwi
354

Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

Craig P. Motlin
341

If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
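For reference, sketches of both answers, here in Java rather than the C# the author mentions (the stopping criterion is my choice: for an alternating series the error is bounded by the magnitude of the next term, so stopping once a term drops below 1e-5 gives five-decimal-place accuracy).

```java
public class InterviewAnswers {
    // Estimate Pi from 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    static double estimatePi() {
        double estimate = 0.0;
        double term;
        double sign = 1.0;
        long denominator = 1;
        do {
            term = sign * 4.0 / denominator;
            estimate += term;
            sign = -sign;
            denominator += 2;
        } while (Math.abs(term) >= 1e-5); // remaining error < 1e-5
        return estimate;
    }

    // The "simpler question": Pi times the radius squared.
    static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }

    public static void main(String[] args) {
        System.out.printf("pi is approximately %.5f%n", estimatePi());
        System.out.printf("area of a circle of radius 1: %.5f%n", circleArea(1.0));
    }
}
```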


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into that here (it's a whole new question), other than to say that it largely misses the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.

Greg Beech
331

The use of Hungarian notation should be punished with death.

That should be controversial enough ;)

David Basarab
Marc
287

Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.

Michael Borgwardt
274

Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.

Jas Panesar
266

PHP sucks ;-)

The proof is in the pudding.

262

Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.

Chad Okere
256

Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.
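A hypothetical sketch of the rule "create a method wherever you can name one" (the report example is mine): the three nameable steps of one long method each become a small method.

```java
import java.util.List;

public class Report {
    // A single long method would format the header, the rows, and the
    // footer inline; each step has an obvious name, so each becomes
    // a method of its own.
    static String build(List<Integer> values) {
        return header() + body(values) + footer(values.size());
    }

    private static String header() {
        return "REPORT\n";
    }

    private static String body(List<Integer> values) {
        StringBuilder rows = new StringBuilder();
        for (int value : values) {
            rows.append(value).append('\n');
        }
        return rows.toString();
    }

    private static String footer(int rowCount) {
        return rowCount + " rows\n";
    }

    public static void main(String[] args) {
        System.out.print(build(List.of(1, 2, 3)));
    }
}
```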

David Basarab
Matt Secoske
235

It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.

John Farrell
196

Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.

Jon B
186

Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

Greg Beech
184

I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and it might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.

David Basarab
Steven Robbins
180

Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)

Brian Rasmussen
163

Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people that no longer code full time, read magazines and articles, and then tell you how to design software. Only people that actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect, your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.

rustyshelf
152

There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Greg Beech
148

Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...

petr k.
115

A degree in computer science does not - and is not supposed to - teach you to be a programmer.

Programming is a trade; computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages, e.g. assembler, C, Lisp, Ruby, Smalltalk.

Starkii
  • The first one is not really controversial, at least not in the CS field. – wds Jan 03 '09 at 19:06
  • I disagree. I know many people studying computer science who think they are getting a degree in programming. Every time I hear whining about why CS programs don't teach everyone Java I offer up a pained sigh. – Starkii Jan 05 '09 at 01:44
  • Java doesn't really teach you how to be a real programmer, since there's so much you can't learn with it. It's like building a car with Legos. – Lance Roberts Jan 06 '09 at 01:58
  • I may agree with the first point, but saying that knowing only Java could make a programmer... that's a crime, punishable with death!!! – hasen Jan 07 '09 at 02:12
  • Can you move your second answer to another post so it can be rated separately? – Greg Domjan Jan 07 '09 at 06:57
  • I agree with "does not", but not with "is not supposed to". Where else in academia are you supposed to learn to program? There is no analog in software to the Engineering disciplines (mechanical, electrical, civil, etc.). – MusiGenesis Jan 13 '09 at 17:12
  • @MusiGenesis: My local community college has an "Associate in Applied Science Degree" in "Computer Programming" (Washtenaw Community College). That is where I would go to be a programmer. It is important not to confuse Computer Science with Computer Programming. They are _NOT_ the same thing. – Starkii Jan 16 '09 at 04:07
  • @MusiGenesis: I've actually just completed my degree in Engineering (Software). I'm certainly not a computer scientist, and I don't want to be. – ajlane Mar 09 '09 at 12:43
  • A CS degree is indeed not a programming degree. But then again, a programming degree doesn't make you a good programmer either. Both can introduce you to the basics and some special subfields, but it's up to you to use that as one of many sources of information as you develop your skills. Now, you may be able to solve any problem your work poses to you using a single language, like Java. But is it the best way? Learning several different languages and paradigms can help expand your perception of how problems can be solved using program code, and allow you to create better solutions. – Lucas Lindström May 05 '09 at 21:02
  • I disagree that CS does not teach you to be a programmer. It DOES and SHOULD do that - incidentally by teaching multiple languages, not one only - but that's not ALL it should do. CS degrees should also teach you about as many different areas of CS as possible, e.g. basic programming, functional languages, databases, cryptography, AI, language engineering (i.e. compilers/parsing), architecture, and math-leaning areas like computer graphics and various algorithms. – DisgruntledGoat May 10 '09 at 00:04
  • Programming is easier in some fields than in others. Web development and most of the work you do in Information Systems is not hard. If you have a bit of a knack for programming, you can do this stuff very well without a CS or engineering degree. If you want to be a game programmer, write device drivers, work with embedded systems, or other things of the like, you'll need to know certain things from the degree. – ravibhagw Jun 08 '10 at 19:46
  • I disagree. A CS degree teaches you how to solve problems, often using C/C++ (low-level languages); it teaches you algorithm design, the theory behind operating systems, general algorithms used everywhere - all of these apply if you want to code. In other words, you get the basics - a foundation upon which you can build by learning more languages. Knowing Java doesn't make you a programmer; in fact, that's the most ridiculous thing I have heard in a while. – sarsnake Nov 30 '10 at 21:12
101

SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.

strager
javamonkey79
  • I found it: Single Entry Single Exit!! – tuinstoel Jan 02 '09 at 22:47
  • I guess that, in other words, it is "a function should have only one return statement" - never agreed with that one. – Rene Saarsoo Jan 03 '09 at 20:04
  • Moreover, an exception is just another exit point. When functions are short and error-safe (-> finally, RAII), there is no need to follow SESE. – Luc Hermitte Jan 07 '09 at 14:01
  • Agreed. I cringe at the 100+ LOC methods I've seen that carry a return value from the first line all the way to the bottom just to adhere to SESE. There is something to be said for exiting when you find the answer. – Rontologist Jan 09 '09 at 19:14
  • Totally agree on that one. I was about to add it onto this post; you beat me to it ;) – dbones Feb 03 '09 at 19:38
  • Wait, people actually do this? Why can't you just search for "return"? – nosatalian May 31 '09 at 01:54
  • SESE is law in unmanaged code, but in managed code it isn't; some post somewhere here on SO explains it better. – Jader Dias Jul 09 '09 at 00:23
  • I'd like to see that post, but admittedly, my opinion comes from a strict managed-code domain. – javamonkey79 Jul 09 '09 at 01:29
  • This might be useful when your debugger only has a maximum of two breakpoints. Very common in embedded hardware environments. – cmcginty Jul 22 '09 at 23:37
  • I think SESE is a great example of a solution in search of a problem. – Kevin Laity Oct 22 '09 at 00:24
  • SESE dates back to the 1960s and structured programming. It made a lot of sense then. Single entry is pretty much guaranteed today; clinging to single exit just betrays low IQ. – just somebody Dec 15 '09 at 03:35
  • It only makes sense if it's SESRP: Single Entry, Single Return Point. This was important in languages like BASIC where you could GOTO here, there, and everywhere. Better practice was to always return where you came from, using GOSUB instead of GOTO. With modern programming languages this isn't so much of an issue... which seems to be how the sensible "return where you came from" morphed into the awful "exit from only one point of the method". – Ryan Lundy Dec 15 '09 at 21:29
  • I was running PMD on a project and came here to post this after an annoying set of 'OnlyOneReturn' *violations* popped up. – Tom Neyland Nov 30 '10 at 22:22
100

C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try to cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer a "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand-coded assembler was only slightly faster than the compiled C code. (This was back in the early 1990s, just to put these events in their proper context.)

-R

Nick Heiner
  • 119,074
  • 188
  • 476
  • 699
Huntrods
  • 2,561
  • 3
  • 22
  • 29
  • I would upvote if it wasn't for the "it's a horrible first language", I think it sucks but it's a good first language, particularly because it does suck, then one can appreciate the need for better languages! – hasen Jan 03 '09 at 03:08
  • It's very difficult to create usable classes in C++, but once you create them, life is very easy. Way easier than using plain C. What I do is the following: I implement the functionality in C, then wrap it using C++ classes. – isekaijin Jan 03 '09 at 03:52
  • The way I see it, a lot of misgivings about C++ stem from the fact that C++ is generally taught wrong. One typically needs to unlearn a lot of C before one can grok C++ well. Learning C++ after C never seems a good idea to me. – Debajit Jan 03 '09 at 10:10
  • 1
    And I think that C++ is superior to C in every way, except that it unfortunately was designed to be “backwards” compatible to C. – Konrad Rudolph Jan 03 '09 at 11:57
  • 8
    I think C++ is a good example of "design by committee" done *RIGHT*. It's a mess in many ways, and for many purposes, it's a lousy languages. But if you bother to really learn it, there's a remarkably expressive and elegant language hidden within. It's just a shame that few people discover it. – jalf Jan 04 '09 at 01:01
  • 1
    I've got another bone to pick with you: “You can teach C++ in two ways” – this is wrong. Apparently you have only ever used C++ in two ways, without unlocking its true potential. This also explains your microcontroller related experience: C is *no* faster than (well-written) C++. – Konrad Rudolph Jan 04 '09 at 21:17
  • 1
    +1: Of all the languages I've ever played with, C++ is the only one which has made me sick every time I've approached it. I've had a book on C++ for years, I pick it up every once in a while and tell myself "it really can't be that bad" and read until my eyes bleed, I've made it to page 47. – Robert Gamble Jan 05 '09 at 04:18
  • 1
    There is a third approach to learning C++: Accelerated C++ takes it. It builds from the very beginning (variables, functions) but using real C++ elements (STL). I recommend it for anyone who wants another view into C++. – David Rodríguez - dribeas Jan 05 '09 at 11:34
  • @dribeas: I appreciate the recommendation, it looks like a good book. I doubt I'll ever be able to "appreciate" what C++ has to offer but if I ever recover from my previous experiences I will take you up on your recommendation. – Robert Gamble Jan 07 '09 at 06:39
  • 5
    Okay, if C++ code was ten times slower than C code, what sort of Mickey Mouse compilers were you using? Or what idiotic code conventions were you required to use? Were you asked to do exception specifications, for example (almost always a bad idea)? – David Thornley Jan 09 '09 at 14:43
  • Just throwing this out there, but the Programming Language benchmark game has quite a few examples of C++ being faster then C. – James McMahon Jan 13 '09 at 19:43
  • 1
    "When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code." - who says you *have* to use classes, rtti and whatnot? – Johannes Schaub - litb Jan 21 '09 at 05:03
  • 3
    you don't *have* to use those features. if you only use the C subset, then C++ is equally fast as C. then, you can selectively pick those C++ features *you* like. some vector sugar here, some other stuff there. isn't that nice? – Johannes Schaub - litb Jan 21 '09 at 05:05
  • 1
    and i agree it's anything but a nice first language. it's not wise to teach it first IMHO. and it's good that it's compatible with C. nuff said :) – Johannes Schaub - litb Jan 21 '09 at 05:06
  • I agree that it's got a whole raft of problems. but worst ever? Ever seen intercal? BFUNGE? assembly language? – Brian Postow May 05 '09 at 21:38
  • Regarding your anecdote about C++ being an order of magnitude slower, keep in mind that C++ compilers of the '80s are not the same as C++ compilers of today. – davidtbernal May 08 '09 at 01:33
  • 1
    I agree that it's the worst language ever. Except for all the others. – Kaz Dragon Jun 17 '09 at 10:37
  • I don't agree that it's the worst language; I do agree that it's a bad language; I also agree that it's a bad first language. C++ is powerful and has a lot of features that are very useful. This makes C++ a good choice - sometimes. C++ also has a lot of hidden evil (lots of undefined behavior that looks perfectly fine..) which makes it a bad language and definitely a bad first language. –  Jul 20 '09 at 08:45
  • 1
    @david-basarab - C++ compilers are now much better! I use c++ not only for MIDI but for audio DSP algorithms - utilizing C++ templates makes it very powerful to make tunable compile time parameters such as buffer size and layout which allows for automatic SSE/altivec optimizations. The benefit of C++ now is not the language which is always a template-puzzle nowadays, but because the compilers available are better at optimizing real time functions than Haskell, Ada, Scheme and Scala are – jdkoftinoff Aug 17 '09 at 01:33
  • 4
    -1. C++ is still the most powerful multi-paradigm widely available language there is. It's the most adaptable of them all, therefore it can solve many different problems, which in some applications is _very_ useful. It might not be best at each specific thing, but overall, it's seldom a really bad choice. – Macke Aug 21 '09 at 18:14
  • 1
    C++ is like Democracy, "Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time." -Sir Winston Churchill – gradbot Oct 13 '09 at 17:11
  • C++ is massive, and massively popular. Like all languages, it has applications for which is it well suited, and applications for which it is poorly suited. – ravibhagw Jun 08 '10 at 19:48
  • +1 For second language. I learned Java first and a bit of C one year later. I'm glad I learned the low-level C stuff because it makes me a better high-level programmer, but I'm also glad I didn't have to start with C. – Bart van Heukelom Jul 09 '10 at 10:21
  • What about Objective-C? And I totally agree with you, Huntrods. –  Dec 20 '10 at 14:36
94

You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret

Ryan Lundy
  • 204,559
  • 37
  • 180
  • 211
  • 1
    I know how to type (was an army teleprinterist) but I insist it makes no difference whatsoever. – Nemanja Trifunovic Jan 02 '09 at 21:17
  • 4
    Nemanja->"no difference whatsoever"?! I just got 70wpm on an online test. I could see how someone could scrape by at 20-30wpm, but if they are using two fingers, plugging away at 5wpm (yes, I've worked with people like that), it's holding them back. – KeyserSoze Jan 02 '09 at 22:03
  • 7
    No difference whatsoever. I don't even know what my current wpm level is, because I completely lost interest in it. Surely, it is useful to type quickly when you are writing documentation or answering e-mails, but for coding? Nah. Thinking takes time, typing is insignificant. – Nemanja Trifunovic Jan 02 '09 at 22:12
  • 2
    Well, if your typing is so bad that you are thinking about typing, that's time you could have spent thinking about the problem you are working on. And if your typing speed is a bottleneck in recording ideas, you may have to throttle your thinking until your output buffer is flushed. – KeyserSoze Jan 03 '09 at 01:01
  • 2
    @Nemanja Trifunovic - I hear what you are saying but, respectfully, I think you are dead wrong. Being able to type makes a huge difference. – jwpfox Jan 03 '09 at 13:43
  • 1
    @keysersoze: I have never worked on a project when typing speed made any difference. Even when I write code from scratch and not fighting some crazy frameworks, a good editor makes typing skill almost worthless. With vim I usually just type a couple of letters before pressing Ctrl+P. – Nemanja Trifunovic Jan 04 '09 at 03:18
  • 1
    @duncan: No hard feelings, but you are dead wrong - it makes no difference :) – Nemanja Trifunovic Jan 04 '09 at 03:19
  • 1
    Even though I never learned to touch type my typing is very quick, and optimized towards writing code - not english. I always felt touch typists must be at a little bit of disadvantage, considering the heavy use of symbols in coding which touch typing is not optimized for. – Kendall Helmstetter Gelner Jan 05 '09 at 04:48
  • I know how to type. After twenty years of typing my index and middle fingers know where all the keys are, so I don't have to look down at keyboard all that often. But I had this argument in a different context long back: a colleague argued that camel case is [contd...] – Miserable Variable Jan 07 '09 at 08:21
  • [...contd] better than underscores because it is easier to type. My argument is that you are not supposed to write code at the speed of typing. – Miserable Variable Jan 07 '09 at 08:22
  • I don't mind looking at the keyboard once in a while to relieve eye strain. You HAVE to change your focus at times. If you are a good typist, chances are you either have glasses or contacts. – Andrei Taranchenko Jan 10 '09 at 23:27
  • While I can't touch type and confirm this I do suspect that it helps. I have encountered many situations where slow typing speed gets in the way. Sadly learning is mind-numbingly dull. Yes, I know there are all kinds of fun games to help you, but it's still dull for me. Still trying though... – Manos Dilaverakis Jan 12 '09 at 11:36
  • 2
    +1. I repeatedly see people make tons of mistakes because they are watching their keyboard instead of watching the code on their screen. Most common are syntax and code-formatting issues, but also real bugs that aren't caught by the compiler. – flodin Feb 28 '09 at 10:52
  • 1
    You must be using some ridiculously verbose language like Java. Thinking is the bottleneck when programming, not typing. – nosatalian May 31 '09 at 01:50
  • 1
    I agree here. Though thinking is important, watching the screen is key. – Chet Jun 23 '09 at 23:27
  • I agree that thought is the limiting factor behind programming, but who codes from the hip so much that they design the software as they type it? While I'm coding/typing, I have largely already designed the software... as a result, my thinking easily keeps up with my 80wpm+ typing speed. – Steven Evers Jul 21 '09 at 18:33
  • 1
    I can't think faster than I type. I am hunt and peck, using six fingers and the thumbs. The problem is not that I wouldn't benefit from ten fingers, but that trying to train it slows me down too much. – peterchen Sep 24 '09 at 08:00
  • The strange thing is that hunters and peckers are just a hair's breadth away from full blown ten finger typing. After using a keyboard for years you know exactly where the keys are - you just don't know where your hands are without looking. And that's only a little bit of technique. BTW: using a Kinesis contoured keyboard helps a LOT. And using an english keyboard instead of a localized one. – Dr. Hans-Peter Störr Dec 11 '09 at 20:02
  • 2
    @hstoerr: When I first took a typing course, in sixth grade, I cheated and looked at my fingers. I was the fastest one in the class, the star pupil. Only I didn't really know how to type. Luckily, in seventh grade, I took typing again and this time did it right. It's the only useful thing I learned in junior high. (Well, that and "Always carry your books in a backpack so they can't get knocked out of your hands and scattered down the hall.") – Ryan Lundy Dec 15 '09 at 21:17
  • The way I look at it, if you don't know how to type, how much programming experience could you really have? So yeah, I think a good programmer is one who knows how to type. – Nicole Feb 22 '10 at 21:51
  • I disagree. I never took any typing lessons, but spending most of my life behind a computer has made me remember where all the keys are so I can quickly type without looking at the keyboard. Maybe my hands aren't placed in the optimal position as you would learn in a typing lesson, or I don't use a DVORAK keyboard, but my typing is fine. And I sure don't want to type faster than I can think. – Dennis Jul 28 '10 at 15:02
  • I generally type with 4 fingers or so and I've tested my typing speed - 90 wpm. – Jake Petroules Aug 17 '10 at 10:44
  • Since when does wpm matter when programming? Programming requires thought, not just mindless typing. – pondpad Sep 13 '10 at 15:22
  • Typing is mindless by definition. If you're not typing, but hunt-and-pecking, you're using up brain cells to type that you could otherwise be using to think about your program. – Ryan Lundy Sep 14 '10 at 03:22
  • -1 for dead wrong: you don't need to type at all to be a programmer. Then, +2 for what it really means: you must know how to type to be a ***good*** programmer. When I interview people I'd pass immediately if they can't touch type. – Geoffrey Zheng Sep 19 '10 at 18:35
89

A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.

Maltrap
  • 2,620
  • 1
  • 33
  • 32
  • "degree in Computer Science or other IT area DOES make you more well rounded" ... "realize that it all doesn't matter at the end, as long as you can work well together" <- sounds a tiny bit inconsistent and self-contradictory. – dreftymac Jan 04 '09 at 04:48
  • IT referring to the fact that the other guy has a degree. It's strange, once you have a qualification, you might stop comparing yourself to others. – Maltrap Jan 04 '09 at 10:41
  • 4
    Agree - qualifications are indicators of commitment. They can be more, but even if that's all they are, they have value. It is only those without pieces of paper who decry them. Those with them know the limits of their value but know their value too. – jwpfox Jan 04 '09 at 11:35
  • From past experience I'd generally rather work with someone that at least has an EE degree, than someone who came into the field after college. – Kendall Helmstetter Gelner Jan 05 '09 at 05:41
  • i would even say a good university degree. i met a programmer at my work who finished some small IT school i've never heard of and didn't know how many different numbers can be written on 8 bits! – agnieszka Jan 05 '09 at 12:42
  • 1
    A degree in ANY area (except maybe post-modern literary criticism) makes you a more well-rounded programmer, especially if it's in mathematics or science or engineering. Comp Sci and IT degrees tend to have incredibly narrow scope and focus. – MusiGenesis Jan 13 '09 at 17:18
  • 3
    In the spirit of healthy discussion I'll just say that I vehemently disagree (and I've got one). Past deliverables shows commitment, not that you lived somewhere for 4 years and read some books. – Steven Evers Jan 23 '09 at 22:20
  • 6
    I don't believe in degrees as measurements of value or skill, but studying at a university gives you the opportunity to learn the foundations of many different fields that can be useful to you in a work situation. I'm doubtful if being able to graduate is an acceptable proof that you've learned anything, but I know that you CAN learn a lot of useful skills, if you're ambitious enough. – Lucas Lindström May 05 '09 at 21:11
  • "What you have he can pick up in an instant" - Not necessarily. The ability to write good code is something that tends to come with experience, though some people pick it up quickly and some never seem to get there. The guy with the CS degree will certainly be able to pick up the languages and APIs you use in an instant, but there's no guarantee he'll ever be a good programmer. And he certainly won't become one overnight if he's not one now. – Mark Baker Aug 17 '09 at 12:41
  • I learned far more from my college library than from the classes themselves. – gradbot Oct 13 '09 at 17:13
  • Disagree - Self-learning can be better than university learning. As for university, they make you think the way they want (you get better marks for thinking their way). A self-learner will think far better (for a given value of better) than a person taught to learn one way. I'm fascinated that you agree with me, btw: "You realize that it all doesn't matter at the end, as long as you can work well together." – Random May 20 '10 at 15:35
  • As someone about to complete a degree in Information Technology (with a specialization in Applications Development, no less), let me assure you that it is a small step above useless for someone interested in software development. You're more than likely to learn UML and object-orientedness which is supposedly good, but beyond that you're on your own. – ravibhagw Jun 08 '10 at 19:53
89

Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating a DRM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.

Jonathan C Dickinson
  • 7,181
  • 4
  • 35
  • 46
  • 16
    You are mistaking "lazy" for "clever". A clever programmer will actually have to work less, which may make him/her look "lazy". – Captain Sensible Jan 26 '09 at 10:16
  • @Diego, tnx, changed it to make it more appropriate. – Jonathan C Dickinson Jan 26 '09 at 14:46
  • @Diego: I disagree! The term "lazy" as applied to programmers is something I've heard and used many times before. (I think I first read it in an article by Larry Wall) It is a badge of honor! – Shalom Craimer Jan 29 '09 at 07:47
  • Laziness is the fulcrum of all human advancements. If we were not lazy we would still be hunting boars with spears for supper. – Adrian Zanescu Jan 30 '09 at 13:25
  • I like to say, "I'm not lazy; I'm efficient." – Tracy Probst Mar 02 '09 at 18:28
  • 3
    I agree with what you're trying to say, but I disagree with your definition of lazy. A lazy programmer does not look ahead; they will copy-paste a block of code between 4 different functions if it's the easiest thing to do at the time. – DisgruntledGoat May 10 '09 at 00:38
  • 7
    lazy/clever programmer... Programmers have to be clever to be reasonable programmers, so that's a given. A lazy programmer picks the shortest/easiest path to the solution of a problem. And this is not about copy/pasting the same code snippet 400 times, but rather finding a way to avoid copying the same code 400 times. That way the code can be easily changed in one place! The lazy programmer likes to only change the code in one place ;) The lazy programmer also knows that the code is likely to be changed several times. And the lazy programmer just hates finding the 400 snippets twice. – Zuu Jun 15 '09 at 11:33
  • 1
    Though I agree with your explanation, "lazy" isn't really the best word to describe this. Lazy - resistant to work or exertion; I know a lazy programmer who is too lazy to create a bat file to automate a simple task that I see him type out all the time. If he would just spend a little time to make a few bat files it would increase his productivity. It turns out he is a good developer; however, he could be even better. – gradbot Oct 13 '09 at 17:23
  • For the most part, I agree. However in HTML coding this is not the case. Lazy HTML coders use tables for layouts, and lazy back-end coders cut and paste instead of using includes. Having just slogged through someone else's code, I am very much aware of this phenomenon. *shudder* – Elizabeth Buckwalter Oct 22 '09 at 18:33
  • It's hard to tell whether programmers are the hardest-working lazy people on the planet, or the laziest hard-working people on the planet. – ravibhagw Jun 08 '10 at 19:54
  • -1 . I'm VERY lazy + I never wrote tools to automate things because I never saw any value in them. Developing tools is a one time huge additional amount of work that no true lazy person will be able to commit to. – Blub Jul 14 '10 at 17:12
  • +1 for Seventh Element/Zuu. Lazy programmers = much code. Smart programmers = less + better code. – Exa Aug 23 '10 at 09:26
87

Don't use inheritance unless you can explain why you need it.

theschmitzer
  • 12,190
  • 11
  • 41
  • 49
  • Inheritance is the second strongest relationship in C++ and the strongest relationship in most other languages. It strongly couples your code with that of your ascendant. If you can just use it through interfaces go for it. Prefer composition over inheritance always. – David Rodríguez - dribeas Jan 05 '09 at 16:35
  • Most uses of inheritance are as a form of reuse, overriding whatever is needed to change. Such users generally don't know/care if they violate LSP, and could achieve what they need with composition. – theschmitzer Jan 09 '09 at 15:47
  • 2
    I tend to think that delegation is cleaner in most cases where people use inheritance (esp. lib development) because: - abstraction is better - coupling is looser - maintenance is easier Delegation defines a contract between the delegating and the delegate that is easier to enforce among versions. – fbonnet Jan 15 '09 at 08:50
  • He's not saying don't use inheritance at all, just don't use it if you can't explain why you need it. If you're wanting to code an OO application and think throwing a little inheritance in here and there is just gonna make it OO, then you're dumb and should be fired from the ability to program. – Wes P Jan 29 '09 at 20:37
  • Like many other programming constructs, the purpose of inheritance is to avoid duplicated code. – Ryan Lundy May 16 '09 at 02:12
  • Or as Sutter and Alexandrescu said in C++ Coding Standards: Inherit an interface, not the implementation. – blwy10 Oct 15 '09 at 10:15
  • 8
    You should expand that to: "Don't ever code *anything* that you can't explain." Everything you do in code should have a reason. – Oorang Dec 11 '09 at 02:10
86

The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.

Alex B
  • 24,678
  • 14
  • 64
  • 87
Max
  • 1,044
  • 10
  • 19
  • 4
    I agree. Not necessarily that we need more gotos, but that sometimes programmers go to ridiculous lengths to avoid them: such as creating bizarre constructs like: do { ... break; ... } while (false); to simulate a goto while pretending not to use one. – Ferruccio Jan 02 '09 at 13:20
  • Especially when you're taught what GOTOs are for an entire semester and how to use them, then the next semester a new lecturer comes along chanting the death of the GOTO statement in a folly of unexplained and illogical rage. – Kieran Senior Jan 02 '09 at 13:22
  • I agree as well, one of my old lecturers would go mental if you ever thought about using them. But coding to avoid them may end up being worse than using them. – Mark Davidson Jan 02 '09 at 13:24
  • I've used GOTOs in switch statements to have logic jump all over the place, and had no problem with it (apart from the fact that I got FxCop to actually complain about the complexity of the method in question). – Dmitri Nesteruk Jan 02 '09 at 13:49
  • 4
    I have seen only 1 example of a good usage for the last 5 years, so make it 99.999 percent. – Paco Jan 02 '09 at 13:51
  • 10
    I've never had to use a goto for anything. Anytime when I actually thought goto might be a good idea, it was instead an indicator that things weren't flowing properly. – Gene Roberts Jan 02 '09 at 15:06
  • No no no no no. So much production code is so wildly obfuscated and unclear already. You would be giving more tools to the monkeys. – Steve B. Jan 02 '09 at 17:32
  • I don't think I can come up with a single good use of GoTo in a .NET application... can you give an example of a good use of it? – BenAlabaster Jan 02 '09 at 23:14
  • 1
    Goto is very useful in native code. It lets you move all of your error handling to the end of your function and helps ensure that all necessary cleanup happens (freeing memory/resources, etc). The pattern which I like to see is to have exactly two labels in each function: Error and Cleanup. – Jesse Weigert Jan 03 '09 at 03:39
  • The explanation I've heard is that GOTOs make the stack non-deterministic. If you got to a line with a GOTO, there's no way of telling how you got there. Makes debugging much harder. – dj_segfault Jan 03 '09 at 04:05
  • As the years have gone by the need for GOTOs goes down and down as languages add constructs that remove the need for some uses. I'm down to about 1 GOTO per year now but there are times it's the right answer. – Loren Pechtel Jan 03 '09 at 05:07
  • Nice to see that this did indeed generate a great bit of controversy! – Max Jan 03 '09 at 09:24
  • I find goto's are not very readable. I despise them in SQL, so why would I use them anywhere else? – Jeremy Jan 03 '09 at 21:18
  • @Jeremy, Can you do goto in SQL? SQL is a declarative language. Which db vendor has SQL that knows a goto? – tuinstoel Jan 04 '09 at 22:09
  • @tuinstoel, MSSQL has supported it since at least 6.5. I use it a lot to begin, commit/rollback transactions in stored procedures. – Jeremy Jan 05 '09 at 02:58
  • @Jeremy, Don't you mean T-SQL instead of SQL? – tuinstoel Jan 05 '09 at 10:48
  • 2
    To my knowledge, in assembly/machine language all branching is a form of goto. What does your high level language get compiled into? Nothing wrong with the occasional "low level style" shortcut if it is done properly. – Andy Webb Jan 05 '09 at 19:23
  • Continue = goto for loops; Break = goto for blocks; switch = goto madness; Goto is obviously not a problem if used with some sense then. If you are using an OO language and you use Goto for Error and Cleanup then you scare me. RAII and counterparts should be considered your friends. – Greg Domjan Jan 06 '09 at 02:34
  • 27
    +1 for controversy :). Oh, I know what GOTO's are, I started with BASIC like many of you. We need more GOTO's like we need DOS 8.3 filenames, plain ASCII encoding, FAT 16 filesystems, and 5 1/4 inch floppies. – postfuturist Jan 07 '09 at 08:26
  • Just found this: http://stackoverflow.com/questions/84556/whats-your-favorite-programmer-cartoon#301419 – Cameron MacFarland Jan 08 '09 at 04:31
  • A good example of goto: http://stackoverflow.com/questions/416464/is-it-possible-to-exit-a-for-before-time-in-c-if-an-ending-condition-is-reache#416555 – FryGuy Jan 09 '09 at 23:13
  • I used goto quite a bit in C programming - generally as a finally block. I have a file handle I need to close, memory I need to free etc, so at the point where I would return early, I just set a return code and goto the cleanup: label. – Hamish Downer Jan 10 '09 at 18:30
  • Gotos are also commonly used to code up state machines. You can use an enumeration, a switch statement, and a loop to achieve the same effect. However, all that really does is mask the true structure of your control flow (and slow things down a bit). – T.E.D. Jan 14 '09 at 18:05
  • Goto can be OK. My rule of thumb. If a good programmer, who doesn't often use Goto, is prepared to defend it - then it's OK. And it probably is a once a year thing if that. Dmitri, sounds like FxCop is right and you're wrong. – MarkJ Jan 27 '09 at 11:35
  • 10
    This thread considered harmful. Edsger Dijkstra is rolling in his grave. :) – Darcy Casselman Mar 23 '09 at 14:07
  • Agreed. I am struggling to translate numerical code from Fortran into F# because it lacks an efficient goto construct. – J D May 05 '09 at 12:18
  • The problem with GOTO's are that they are like giving a little alcohol to a recovering alcoholic. Incredibly dangerous for programmers coming over from BASIC who are unstructured happy. – Austin May 14 '09 at 16:22
  • 2
    People who think gotos are evil have never programmed in C, or if they have, they did it poorly. Gotos are the *best* way to do error handling in plain C, and repeating Dijkstra's quote dogmatically only demonstrates ignorance. Please read this before complaining about gotos: http://eli.thegreenplace.net/2009/04/27/using-goto-for-error-handling-in-c/ – catphive Jun 15 '09 at 01:41
  • To add on to catphive's point about using goto's in C, here's a discussion about gotos by the Linux kernel developers when one man jumps the gun on a goto and proceeds to recommend avoiding it at all costs: http://kerneltrap.org/node/553/2131 – Coding With Style Jul 04 '09 at 05:22
  • Actually, the discussion of the use of goto in Linux made me change my mind if goto is indeed harmful in development. I've learned not just to trust what you've taught :). – OnesimusUnbound Sep 10 '09 at 14:59
  • I needed gotos in C because it has no equivalent for Java's "continue loopname;" – luiscubal Oct 15 '09 at 18:47
  • I once got sent home from college for telling someone to use a GOTO :P – ingh.am Jan 05 '10 at 17:25
  • Events are the modern GOTO statement. You arrive from anywhere, anytime, with extra baggage of data that GOTOs never had. – Tom A Jul 08 '10 at 04:37
  • I've always learned not to use GOTOs because they create spaghetti code and are for the lazy (that if you do use them, something is wrong with your flow). However, JUMP statements, which are essentially GOTOs, are very useful in assembly. – Dennis Jul 28 '10 at 15:05
  • "They have a purpose and would greatly simplify production code in many places. That said, they aren't really necessary in 99% of the code you'll ever write." +2 if I could, sir, that could not have been written better. – Jake Petroules Aug 17 '10 at 10:47
  • Sorry but I'm very very glad to have not seen a GOTO statement since porting a QuickBasic program to C#. Give me a break statement anyday. – wonea Sep 23 '10 at 08:40
80

I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable, assign it an int, and then assign it a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.
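To make the first bullet concrete, here is a minimal Python sketch (function and field names are invented for illustration) contrasting a function that re-uses a variable across types with one that keeps each name to a single type:

```python
# Hypothetical example: both functions compute the same result, but the first
# re-uses its parameter name for a different type mid-function.

def total_weight_sloppy(items):
    # "items" starts as a list of dicts, then becomes an int -- legal in
    # Python, but the kind of type-mixing the answer calls bad style.
    items = sum(i["weight"] for i in items)
    return items

def total_weight(items):
    # "items" stays a list throughout; the result gets its own name.
    total = sum(item["weight"] for item in items)
    return total
```

Both return the same value; the difference is purely in how easy the code is to follow, which is the answer's point about static-typing discipline carrying over.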

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.

Juliet
  • 80,494
  • 45
  • 196
  • 228
  • 7
    100% right. If only the Python developers would finally acknowledge this and change their otherwise exceptional language accordingly. Thanks for posting this. – Konrad Rudolph Jan 09 '09 at 19:50
  • But there is already one statically-typed Python-like language. It's called C# ;-) – zuber Feb 04 '09 at 23:08
  • C# is python-like? Maybe you meant Boo ;) – Juliet Feb 05 '09 at 03:14
  • 3
    If anyone says dynamic typing is more terse, just point them to Haskell =). I agree with all but your 3rd bullet point. Dynamic code often accepts parameters that can be one of two types. For example, Prototype functions accept either HTMLElements, or strings which you can use $() to look up to get HTMLElements. A good static typing system will allow you to do this =). – Claudiu May 06 '09 at 07:16
  • 3
    #2 is only true if you follow #1, which in my opinion is unnecessary. If it's clear what the code does, then it is correct. I have code I use a lot that reads in data from a tab-delimited file, and parses that into an array of floats. Why do I need a different variable for each step of the process? The data (as the variable is called) is still the data in each step. – davidtbernal May 08 '09 at 01:38
76

Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.

Dan Diplo
  • 25,076
  • 4
  • 67
  • 89
72

Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many to avoid ever examining whether their default style could be improved on - and to claim, furthermore, that it doesn't even matter.

A few years ago I was studying speed-reading techniques, and some of the things I learned - how the eye takes in information in "fixations", how it can most optimally scan pages, and the role of subconsciously picking up context - got me thinking about how this applied to code, and especially to writing code with it in mind.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure, it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.
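For readers who want a rough idea of what "columnar" grouping with rectangular islands might look like, here is a hypothetical Python illustration (this is my sketch of the general idea, not the author's exact style, and all names are invented):

```python
# Related assignments aligned into small rectangular blocks,
# separated by blank lines so each block is one "fixation".

width   = 80
height  = 24
margin  = 2

title    = "Report"
subtitle = "Q3 results"

def area(w, h):
    # One short, visually distinct block per concept.
    return w * h
```

The alignment itself does nothing at runtime; the claim is only about how quickly the eye can scan grouped, aligned declarations.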

philsquared
  • 22,403
  • 12
  • 69
  • 98
  • Generally when things are aligned in a columnar way it creates a maintenance burden for a developer. I.e. aligning the data type and identifier in a method declaration... Line1(int id,) line 2(char id,) ... making sure the data type, variable name, and even commas all are in a column is a MESS – Cervo Jan 02 '09 at 18:55
  • it usually just takes a couple of extra keypresses, if that. I didn't go into too many specifics, but I usually only break it into two columns for alignment purposes (usually type - id). I have some other rules to ease the burden where parentheses are concerned. The biggest obstacle I have [cont...] – philsquared Jan 02 '09 at 22:34
  • [...cont] is fighting against auto-formatting editors. In fact, unless it's easy to disable I usually give up in those circumstances and "go with the flow". But with especially verbose languages like C++ I still prefer it. – philsquared Jan 02 '09 at 22:36
  • Interesting. I would like to see some examples. Do you have a blog? – Jay Bazuzi Jan 02 '09 at 22:37
  • Well, I have: http://www.levelofindirection.com (yes, it forwards to blogspot - the pun *was* intended), and also http://organic-programming.blogspot.com . However, you'll notice neither have been updated for quite a while - due in large part to http://www.vconqr.com ;-) [cont...] – philsquared Jan 03 '09 at 16:59
  • [...cont] - and I don't mention the layout stuff on either. I'll consider myself prodded - again! – philsquared Jan 03 '09 at 16:59
  • Code formatting matters so much, it doesn't matter at all. By that I mean that editors should always reformat code when you load it, and SCM systems should reformat to a canonical style on checkin. Then everyone sees the code the way that works best for them. – Kendall Helmstetter Gelner Jan 05 '09 at 04:53
  • @Kendall: Sounds nice. It's hard, though, because you have to be able to specify the exact formatting of every possible bit of code, including code that isn't legal in the language! – Jay Bazuzi Jan 05 '09 at 16:25
  • This is a pretty much standard opinion. Or, at least, it should be. If this is controversial, then there is a problem. – isekaijin Aug 06 '09 at 16:43
  • [1TBS](http://en.wikipedia.org/wiki/One_True_Brace#Variant:_1TBS) and [elastic tabs](http://nickgravgaard.com/elastictabstops/), or death. ps: @Kendall - but yes, sounds nice :) – zanlok Dec 02 '10 at 01:19
71

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?
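As an illustration of the bug class this answer describes, here is a Python sketch (Python has no mandatory declarations, but `__slots__` gives an opt-in equivalent; the class and names are invented):

```python
# Without declarations, a typo in an attribute name silently creates a NEW
# attribute instead of failing early. __slots__ restricts instances to the
# declared names, so the typo fails immediately instead of at some distant
# point at runtime.

class Counter:
    __slots__ = ("count",)   # the only attribute instances may have

    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

c = Counter()
try:
    c.cuont = 5              # typo: would silently succeed without __slots__
except AttributeError:
    print("typo caught at the assignment, not hours later")
```

This is exactly the trade the answer argues for: a few keystrokes of declaration in exchange for the compiler (or, here, the runtime) catching the typo at its source.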

David Basarab
  • 72,212
  • 42
  • 129
  • 156
John Rose
  • 1,943
  • 2
  • 18
  • 29
  • What's your view on whether the *type* of the variable should be explicit or not? (Thinking of "var" in C#.) – Jon Skeet Jan 02 '09 at 14:46
  • Good one. If you have to work with legacy Fortran code, you wouldn't believe the headaches caused by this issue. – Mike Dunlavey Jan 02 '09 at 14:58
  • 2
    I actually wanted to write this same opinion, as well. IMHO, this is the major drawback of both Python and Ruby, for no good reason at all. Perl at least offers `use strict`. – Konrad Rudolph Jan 02 '09 at 15:36
  • 2
    Explicit declaration is good, to avoid typos. Assigning types to variables is frequently premature optimization. – David Thornley Jan 02 '09 at 16:08
  • 5
    Yup. *ONE* bug hunt involving an l (between k and m) becoming a 1 (between 0 and 2) wasted a lifetime of declaring variables. – Loren Pechtel Jan 03 '09 at 05:13
  • 1
    Anything else is not a real language. Now THAT'S controversial. – Andrei Taranchenko Jan 10 '09 at 23:31
  • 1
    I remember learning Visual Basic 6 in high school. If OPTION EXPLICIT was not the first line in each source file, we would fail. – rlbond Mar 21 '09 at 05:00
68

Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.
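The same divergence exists outside of C-style #ifdef builds. As an assumed analogue, Python's `-O` flag strips `assert` statements, so a check that exists during development can vanish in an "optimized" run (function names here are invented):

```python
# A check written as an assert disappears under "python -O", so debug and
# release behave differently -- the divergence this answer warns against.

def withdraw(balance, amount):
    assert amount >= 0, "negative withdrawal"   # removed by python -O
    return balance - amount

# Safer: the check is ordinary code and exists in every build.
def withdraw_checked(balance, amount):
    if amount < 0:
        raise ValueError("negative withdrawal")
    return balance - amount
```

With the explicit `raise`, the code you test is the code you ship, which is the answer's whole argument.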

Cameron MacFarland
  • 70,676
  • 20
  • 104
  • 133
  • I released something week before last that I'd only tested in debug mode. Unfortunately, while it worked just fine in debug, with no complaints, it failed in release mode. – David Thornley Jan 02 '09 at 22:30
  • The only thing I differ between Debug/Release builds is the default logging level. Anything else always comes back to bite you. – devstuff Jan 02 '09 at 22:45
  • ummm - what about asserts? Do you either not use them, or do you leave them in the release build? – Daniel Paull Jan 03 '09 at 00:53
  • Again, I don't tend to use them. If you're asserting something in debug shouldn't you have it fail in release too? Use an exception if it's critical, or don't use an assert (or not care if the assert doesn't make it to release). – Cameron MacFarland Jan 03 '09 at 06:39
  • @Cameron MacFarland - a good point; code with assertions in Debug mode either ends up not handling the failure condition in Release mode, or with a second failure-handling path which only works in Release mode. –  Jan 03 '09 at 12:10
  • It would be like writing to different applications. you're debug version would be nicely debugged, and your release version wouldn't. Tragic! – Jeremy Jan 03 '09 at 21:16
  • @Daniel Paull, if there is something fishy it is often better to stop the processing than having corrupt data. – tuinstoel Jan 04 '09 at 21:34
  • Agreed: Exceptions > Asserts. – postfuturist Jan 07 '09 at 08:30
  • Agree: there are some very nasty bugs in there that could be real detrimental to your rep! – Captain Sensible Jan 26 '09 at 08:09
  • Hmmm. So, release code almost never gets tested, right? No offence Cameron, but remind me never to use any of your software – MarkJ Jan 27 '09 at 11:33
  • 1
    @MarkJ: That's what I'm saying, you should be testing the code that goes out the door, and not have a difference between "Release" that is not tested, and "Debug" that is tested, but never released. – Cameron MacFarland Jan 27 '09 at 13:50
  • Asserts & exceptions have different purposes. Exceptions are for user errors -- things that "shouldn't happen". Asserts are for pre-conditions -- things that "CANNOT happen". Asserts bring the app to a crashing halt saying "You've got a big problem -- fix this now!!!" – James Curran Feb 18 '09 at 15:28
  • @James: Exceptions also bring the app crashing down. Also what happens when a user sees an assert error? Are they supposed to fix it? – Cameron MacFarland Feb 19 '09 at 06:17
  • All development and testing should be done on the release build, but a debug build should exist to assist in debugging. (Hello #ifdef!) – rpetrich Apr 19 '09 at 17:34
  • 4
    You just need to switch. Our QA uses debugging builds during development but switches to release towards the end. There are certain levels of sanity checking that you would like to be performed as much as possible before shipping, but cannot afford to ship due to performance reasons. – nosatalian May 31 '09 at 01:52
64

Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.

Alex B
  • 24,678
  • 14
  • 64
  • 87
Kevin Davis
  • 2,698
  • 1
  • 22
  • 27
  • +1. This a matter of ownership, we tend to care better for things we own than the things we don't. Want proof? Take a look at your company vehicles. – AnthonyWJones Jan 03 '09 at 13:55
  • It also comes with the onus that people reporting bugs can report in sufficient detail so that it can be reproduced and tested to be proven fixed. It sucks to be so maligned when you reproduce a defect according to description, fix it, and find that the tester still has issues you didn't. – Greg Domjan Jan 07 '09 at 07:11
  • 1
    I think testing and developing are different skills, they should be done by those who are good at them. Isolating testers from developers and making it hard for testers to get ther bugs fixed: no excuse. – Benjamin Confino Feb 27 '09 at 19:34
  • 1
    Sounds like bad developers to me. I'd file this under not all lazy developers are good developers. – gradbot Oct 13 '09 at 17:25
  • 2
    +1 for controversy: I'm only going to test the things I think to test for, and if I design the particular method... I've already thought of everything that can go wrong (from my point of view). A good tester will see another point of view -> like your users. – Steven Evers Oct 14 '09 at 19:21
63

Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb

bmb
  • 6,058
  • 2
  • 37
  • 58
  • 13
    Google does pagination, Google is very popular. – tuinstoel Jan 03 '09 at 22:31
  • Good point. I would argue that google is narrowing down what users need based on real criteria -- the criteria is "ten best results." I'm not saying that showing less than the full results is always bad, if you give the user what they want. – bmb Jan 03 '09 at 22:47
  • 2
    maybe you should give a concrete example of a thing that's paginated but shouldn't be. For example, how would you "narrow down" answers to this question? – hasen Jan 04 '09 at 19:58
  • @bmb: Where does this put this thread? @tuinstoel: I claim that nobody ever (i.e. about 0.1% of all page views, probably much more for image search) uses more than the first page of results. Pagination done right. – Konrad Rudolph Jan 04 '09 at 20:37
  • @Konrad Rudolph, Once or twice each year I search on my own name, I use all the page results (I'm not famous). That is probably the only time I use all the pages. – tuinstoel Jan 04 '09 at 20:45
  • Sometimes it's easier for the user to read if all the controls are visible at the same time (no scroll bars). But in any case, you have to ask: Should I use paging or scrollbars? Either way it's still a click to the user. – Travis Apr 28 '09 at 20:54
  • 5
    @tuinstoel google does a lot of things but is not cooking fish. The fact that Google does pagination has no bearing on its popularity. Pagination is an antiquated model from books time. It will disappear soon in favor of ajax like refreshes, used by Google Reader for example. – Elzo Valugi Jun 23 '09 at 09:01
  • 1
    I really, really hate the default 10 results from Google. I turn it up to 100 on every browser I use. I'd probably turn it to 1000 if there were an option (and it still was speedy) – nos Jul 14 '09 at 19:55
  • You'll have much more trouble coming up with those query-based requirements than just implementing a simple pagination system. Sure, if you can suggest an alternative, go right ahead and reduce the number of items to return but not every problem will be as amenable. – Kelly S. French Jul 16 '09 at 15:14
  • In the end pagination isn't really interesting. What's more important is the question: do you count all the search results and show the exact count or do you just provide an estimation? Google shows only an estimation; showing only an approximation has great performance benefits. Ajax like refreshes don't change this. – tuinstoel Aug 05 '09 at 18:19
  • "Who are you helping by giving back 20 at a time? The server? Is that more important than your user?" If only 1% of users actually need this feature, then the server and thus the other 99% of users. – Brian Ortiz Oct 08 '09 at 22:38
  • Ortzinator, I would agree with you if I thought the number was really 99%. But since my (controversial) contention is that pagination is "never" what the user wants, then I think helping the server helps no one. However, users who don't want all the results don't have to get them. Then everyone is happy. – bmb Oct 09 '09 at 21:18
  • 1
    I came across this answer while paging through and searching every answer to this question to see if anyone had already posted about anonymous functions. Just sayin' – Larry Lustig Oct 14 '09 at 18:15
  • So what about resultsets that have thousands or millions of results? What if it's only hundreds but each one shows a bunch of detail? Returning over 100K violates web best practices and such result sets could result in *huge* server loads. – tsilb Oct 17 '09 at 06:38
  • tsilb, then "allow the user to narrow down what they need based on real criteria". The point here is not that subsets are always bad, it's that pagination is not a method of subsetting that helps anyone. And huge server loads? Boo hoo. Did you build your app to make your server happy? Or your users? – bmb Oct 17 '09 at 15:04
  • slashdot uses an approach where if you try to scroll below the last entry an extra set is added to the page. I love it! – Thorbjørn Ravn Andersen Oct 23 '09 at 17:54
  • Thorbjørn Ravn Andersen, that helps a little, but it would still be tedious if you want to use your browser's "find" function. – bmb Oct 23 '09 at 22:01
62

Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn't be doing everything in the page load method they often push back ... so, for the children, please quit building the "do everything" method we see all too often.
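As a sketch of what this looks like in practice (the function and field names here are invented, and "page load" is simulated with plain functions), compare the monolith with the refactored version:

```python
# The "do everything" handler: parsing, validation, and rendering all live in
# one method, so any of three unrelated changes forces edits here.
def load_page_do_everything(raw):
    user = {"name": raw.strip().title()}
    if not user["name"]:
        raise ValueError("empty name")
    return f"<h1>{user['name']}</h1>"

# Single Responsibility: each function has exactly one reason to change.
def parse_user(raw):
    return {"name": raw.strip().title()}

def validate_user(user):
    if not user["name"]:
        raise ValueError("empty name")
    return user

def render_user(user):
    return f"<h1>{user['name']}</h1>"

def load_page(raw):
    return render_user(validate_user(parse_user(raw)))
```

The behaviour is identical; the difference is that a change to validation rules, input format, or markup now touches exactly one small function instead of the one giant method.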

Toran Billups
  • 27,111
  • 40
  • 155
  • 268
60

Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant that you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that in order to get any work done, if a file was locked, devs would just change their local file to writable and merge (or overwrite) the source-control copy with their version when they had the chance.

Cameron MacFarland
  • 70,676
  • 20
  • 104
  • 133
  • I've always local-mirrored the code. Then I would do the merging with Windiff and an emacs-macro, then lock it only long enough to check in the changes. I hated it when people would lock a file, then go on vacation. – Mike Dunlavey Jan 02 '09 at 16:40
  • I used to think that it was impossible to work in a team without file locks in your SCM. But after working with Subversion in four companies (and rolling it out myself in two of them), I find merging (auto when possible, manual when not) much better 99% of the time. – dj_segfault Jan 03 '09 at 04:10
  • 6
    Not controversial. Nobody used SourceSafe by choice. – MusiGenesis Jan 13 '09 at 17:13
  • 3
    @MusiGenesis: Yes they do. They exist. – Cameron MacFarland Jan 14 '09 at 09:33
  • 3
    My company is still using SourceSafe. The main reasons are a) General inertia and b) The devs are scared of the idea of working without exclusive locks. – T.E.D. Jan 14 '09 at 18:11
  • 2
    My personal feeling is that the ability to merge code files should be a skill all programmers need, like all programmers need to know how to compile their code. It's part of what we do as a byproduct of using source control. – Cameron MacFarland Jan 15 '09 at 00:52
  • @MusiGenesis: I've headed a move away from SourceSafe in two different companies over the last 5 years, and in both cases the reason for using SourceSafe was ignorance of the alternatives. – Shalom Craimer Jan 26 '09 at 06:57
  • SourceSafe doesn't even work on anything based on IIS7. So soon enough it's going to be pretty much redundant. – Ed James Mar 27 '09 at 15:37
  • 1
    Just to be pedantic...while exclusive locks were the default until recently, SourceSafe has actually supported edit-merge-commit mode since 1998. – Richard Berg Jun 12 '09 at 06:11
  • @Ed - SourceSafe can work with IIS7 if you have WebDAV installed. The WebDAV plugin didn't ship with Vista but it's available as a free plugin, and also comes with Win2008. That said, I hope as much as anyone that it finally fizzles out. There are far better tools on the market (free & otherwise). – Richard Berg Jun 12 '09 at 06:13
  • @Richard: Yes but nobody who uses Source Unsafe uses it in Merge mode because they're afraid to, etc. – Cameron MacFarland Jun 12 '09 at 11:18
  • MKS baby! Finally just killing it off now. – TJR Sep 29 '09 at 03:04
  • I would never want to put my precious source in something notorious for corrupting files. Had to use it once due to a lack of alternatives, got burnt. – Oorang Dec 11 '09 at 02:13
  • @MusiGenesis we do at my work place, but I don't particularly enjoy it. I'm much happier with SVN. – ravibhagw Jun 08 '10 at 19:55
60

Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.) But, if you never see code checked in by your architect... be wary!

Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
kstewart
  • 452
  • 4
  • 8
  • 1
    Architects that *do* code are worse than those that don't. i.e. their productivity is negative. – finnw Jan 17 '09 at 16:46
60

Objects Should Never Be In An Invalid State

Unfortunately, so many ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
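The same idea rendered as a runnable Python sketch (the class and its invariant are invented for illustration): the constructor establishes the invariant, and no setter exists that could break it.

```python
# The constructor is the only way to create an Account, and it refuses to
# produce an instance that violates the invariant (a positive integer id).
# No setter exists, so no later call can put the object in an invalid state.

class Account:
    def __init__(self, account_id):
        if not isinstance(account_id, int) or account_id <= 0:
            raise ValueError("account_id must be a positive integer")
        self._id = account_id      # set once, always valid

    @property
    def id(self):                  # read-only: there is no id setter
        return self._id

acct = Account(12345)   # valid from its first moment, like the answer's example
```

Calling `Account()` with no id is simply a TypeError at the call site, rather than a latent "forgot to call setId" bug discovered much later.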

benjismith
  • 16,559
  • 9
  • 57
  • 80
  • 3
    TOTALLY agree! And I get very frustrated when I see concepts like this become so popular. +1 – John MacIntyre Jan 22 '09 at 14:33
  • Invalid States lead to exceptions in my experience. – Cameron MacFarland Jan 22 '09 at 22:25
  • @Cameron, are you saying that you should be able to initialize with a default constructor and then set each property, with each setter checking for an invalid state and throwing an exception? If so, how can you possibly handle a situation where 2 properties need to be in sync to be valid? – John MacIntyre Jan 23 '09 at 15:24
  • 1
    That's why I hate ORM frameworks, despite the fact I need them all the time. – isekaijin Feb 01 '09 at 06:09
  • I feel your pain Eduardo. I can't stand ORM frameworks, but sometimes they're the least-worst way to solve a particular problem. But yeah, I hate them too. – benjismith Feb 02 '09 at 16:46
  • I dunno. If was uncontroversial, then all of the major frameworks for Java (notably, Spring and Hibernate) wouldn't require me to break the rule in order to use their code. – benjismith Feb 04 '09 at 15:33
  • @John: If two properties should be in sync, they are obviously related and should be edited together in a method: SetBothProperties( a, b ) – Lennaert Mar 30 '09 at 14:32
  • Sadly, serialization requires the existence of zero-arg constructors. – tuinstoel Aug 16 '09 at 04:57
  • RAII - Resource Acquisition Is Initialization. FTW – George Godik Dec 03 '09 at 00:10
  • Sometimes it's sufficient to have protected zero arg constructors. That might help a little. – Dr. Hans-Peter Störr Dec 11 '09 at 20:21
  • This sort of reminds me of structs in Windows API programming. I could never figure out which fields I needed to set in order to have a valid instance of a struct like STARTUPINFO for example. Very frustrating. – dacris Jul 19 '10 at 08:58
  • I had never heard anyone state this explicitly before. It is brilliantly simple -- I like it. – riwalk Oct 07 '10 at 18:21
58

Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. To write a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.
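The brittleness claim can be illustrated with a small Python sketch (function and parameter names are invented): the test below encodes the exact call signature, so a refactor that changes the interface breaks the test even when the behaviour it checks is unchanged.

```python
# A test coupled tightly to the current interface of the code under test.

def price(amount, tax_rate):
    return amount * (1 + tax_rate)

def test_price():
    # Encodes the exact two-positional-argument signature. Refactoring to,
    # say, price(amount, *, tax_rate, currency) forces a rewrite here too,
    # even though the arithmetic being verified never changed.
    assert abs(price(100, 0.1) - 110) < 1e-9

test_price()
```

Whether that maintenance cost is worth paying is exactly what this answer and its comment thread are arguing about.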

Cameron MacFarland
  • 70,676
  • 20
  • 104
  • 133
  • Yes. And code can only be tested if it has room to fail. Simple structures without inconsistent states have nothing to unit test. – Mike Dunlavey Jan 02 '09 at 14:27
  • 3
    Yeah, unit tests up front don't really make sense. If I wrote it down, I thought about the possibility. If I thought about the possibility, unless I'm a complete moron it'll at least work the first time around where the test would apply. Testing needs to catch what I DIDN'T think about! – Gene Roberts Jan 02 '09 at 15:00
  • 1
    Phoenix - you have a point about only catching what you didn't think about but I disagree with your overall point. The value of the tests is that they form a spec. Later, when I make a "small change" - the tests tell me I'm still Ok. – Mark Brittingham Jan 02 '09 at 15:13
  • I worked at a company that wanted 95% test coverage, even for classes which contained nothing but fields to assign and no business logic whatsoever. The code produced at that company was horrible. My current company does not write any unit tests, relying instead on intense QA, and the code is top-notch. – Juliet Jan 02 '09 at 17:54
  • I write unit tests when I think I need them, but more importantly I write random test drivers, because my code might work fine in 100% of predictable cases. It's the unpredictable cases I'm worried about. – Mike Dunlavey Jan 02 '09 at 18:15
  • In my current project, I've introduced up-front unit tests, and code quality has improved drastically. People had to be convinced at first, but soon noticed the positive effects themselves. So my experience says you're wrong. And PhoenixRedeemer, you ARE a complete moron... just like everyone else. – Michael Borgwardt Jan 03 '09 at 17:54
  • @Brazzy: Why weren't your devs writing better code to start with? Notice my opinion says you don't "need" to write tests up front. I'm not saying you shouldn't, just that you should think about why you're writing that way. – Cameron MacFarland Jan 04 '09 at 00:33
  • @brazzy: Hey, complete morons rule! :) I've seen code that is improved by unit tests, because it needed them. I've seen code that didn't need many unit tests, because it had few invalid states. My code tends to need randomly generated tests, due to the problem space. – Mike Dunlavey Jan 05 '09 at 14:11
  • 5
    Unit tests are also about managing change. It's not the code that you are writing right now that needs the tests, but the code after the next iteration of change that will need it. How can you re-factor code if you have no way to prove that what it did before the change is still what it does after? – Greg Domjan Jan 06 '09 at 02:39
  • @Greg: While it is true to say how can you refactor if you can't prove you didn't break stuff, but then I do write tests designed to show changes after a refactor. My opinion of tests is mainly confined to their use up front. Tests are very useful when refactoring. – Cameron MacFarland Jan 06 '09 at 13:37
  • 5
    Everyone writes the unit test that checks open() fails if the file doesn't exist. No one writes the unit test for what happens if the username is 100 characters on a tablet PC with a right-to-left language and a Turkish keyboard. – Martin Beckett Jan 09 '09 at 17:42
  • 1
    I think this misses the point of test driven development, which hurts the argument. It isn't about testing edge cases, it is about driving design. – Yishai Apr 29 '09 at 18:29
  • 1
    You don't need to catch every edge case. If you are testing the best case and a few common errors, when an edge case pops up you can write a test for it, fix it, AND ensure that you don't introduce new bugs. Apart from that, writing tests first forces you to think about what you are trying to achieve, and how. It helps you write small maintainable methods. I don't see how any programmer with a desire to write good software could be against this. – nitecoder Jul 11 '09 at 00:42
  • Although I agree that "unit tests only catch the issues I've thought about", there are many times where I'm *positive* the code I just wrote satisfies a particular condition, yet the test reveals something I totally overlooked. Furthermore, the act of writing tests first forces you to think about all the edge cases in a manner that you might not have to as great a degree. – Ether Nov 07 '09 at 23:32
  • 1
    For me, an eye-opener about testing was this: you need to try out your code anyway - so why not do it in form of a test? Extensive testing is controversial, of course, but a little can get you a long way. – Dr. Hans-Peter Störr Dec 11 '09 at 20:16
58

All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow an assignment to a string variable at all? Much better to use a builder class (e.g. a StringBuilder) all along.

I realize that most languages today just aren't built to acquiesce in my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they would be changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
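
The accumulator cases above can be sketched concretely. Below is a minimal Python illustration (Python chosen for brevity; `values` and `words` are made-up data) of replacing a mutable accumulator with a built-in higher-order construct, and of the string-building case:

```python
# Sketch: accumulating without a mutable accumulator variable.
values = [1, 2, 3, 4]

# Instead of: total = 0; for v in values: total += v * v
total = sum(v * v for v in values)          # no reassignment anywhere

# Instead of repeated string concatenation (the StringBuilder case):
words = ["read", "only", "by", "default"]
sentence = " ".join(words)                  # built in one step

print(total)     # 30
print(sentence)  # read only by default
```

Every name here is bound exactly once, which is precisely the read-only-by-default discipline argued for above.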

Konrad Rudolph
  • 530,221
  • 131
  • 937
  • 1,214
  • Most functional languages are just like this; for example F# explicitly requires you to declare something as "mutable" if you want to be able to change it. – Greg Beech Jan 02 '09 at 15:22
  • 1
    Functional languages are just superior that way. Of the non-functional languages, Nemerle seems to be the only one offering this feature. – Konrad Rudolph Jan 02 '09 at 15:39
  • I like the bit in SICP where the authors dismiss 'looping constructs such as do, repeat, until, for, and while' as a language defect. – fizzer Jan 02 '09 at 15:54
  • 5
    Disagree but made me think. Interesting. – Steve B. Jan 02 '09 at 17:31
  • 1
    I personally like this. "Everything is immutable" makes multithreaded code a lot easier to write: locks are no longer needed since you never have to worry about another thread changing your object under your feet, so a whole class of errors related to race-conditions and deadlocking cease to exist. – Juliet Jan 02 '09 at 18:04
  • There's no such thing as a free lunch. Immutability, despite its many benefits, will have a cost. Generally I like the idea, in the same way I like the idea of functional programming. Can I get my head around that? No. Am I particularly thick? Maybe, but I don't think so. – AnthonyWJones Jan 02 '09 at 20:59
  • 2
    @AnthonyWJones: what costs does immutable-by-default have? – Juliet Jan 02 '09 at 21:25
  • This makes me wonder what my code would be like and how I would need to change my understanding of programming paradigms. Could I deal with immutable variables? I can't begin to grasp the extent of the repercussions of doing this in C#, but I can't imagine anything good coming of it. – BenAlabaster Jan 02 '09 at 23:12
  • The thing I don't like about immutability is the amount of copying required. – TraumaPony Jan 03 '09 at 01:17
  • I though this was too much when I read it in Effective Java: Favor immutability. Then, when applied it make totally sense. Apps are MUCH easier to create and maintain using immutability. The only extra thing needed is a macro template to "code" the copy methods just as TraumaPony pointed out. – OscarRyz Jan 03 '09 at 03:53
  • Language constructs can't take care of all accumulator cases. Sometimes what you are adding up isn't a simple list. It also could make hairy logic in some cases as you can't have a default value. – Loren Pechtel Jan 03 '09 at 05:11
  • @TraumaPony: The nice thing about immutability is that in (almost?) all cases copying can be replaced by simple aliasing. This *does* require some changes in data structures, though. – Konrad Rudolph Jan 03 '09 at 10:38
  • Another case that can't be immutable: Any sort of iterative calculation or calculation within a loop. More generally, the data you are working on. How well would Microsoft Immutable Word sell?? – Loren Pechtel Jan 03 '09 at 20:02
  • 1
    @Princess: immutable-by-default has a comprehension cost. It's much more difficult to think about (not reason about, think about) immutable-by-default objects/variables/what-have-you. – Jeff Hubbard Jan 03 '09 at 21:18
  • I agree that variables should be readonly whenever possible. It lets the compiler optimize and it lets the developer know the value never changes after a certain point. – Jeremy Jan 03 '09 at 21:20
  • @Loren: about your “other case”: how is that different from a special accumulator? It is actually just that, and well covered by many frameworks, such as LINQ. Notice that any kind of user interaction rarely benefits from immutability so Immutable Word is probably not a good idea. – Konrad Rudolph Jan 04 '09 at 21:23
  • 5
    @Jeff: I think this is *at least* debatable. Programming in general has a comprehension cost, any style of programming does. But I doubt that immutable-by-default incurs *any* additional comprehension cost at all, especially since it's much closer to the mathematical use of variables in equations. – Konrad Rudolph Jan 04 '09 at 21:25
  • @Loren Pectel, I think that databases should be immutable too. – tuinstoel Jan 04 '09 at 21:26
  • There's an obvious cost in complexifying and slowing down the code, to a huge degree. This idea must have been thought of by those who don't have to do too much math programming. – Lance Roberts Jan 06 '09 at 01:55
  • @Lance, The opposite is true. Immutability actually helps the compiler a great deal in producing *more efficient* code because it can apply many more automated optimizations. This style of coding works perfectly with “math programming” (I guess you mean arithmetically dense code). – Konrad Rudolph Jan 06 '09 at 07:57
  • 1
    I want an immutable apple. When I take a bite of the apple I get your apple with the bite taken out of it, and can give my apple to the next person who wants a whole apple. It's all so simple! – Greg Domjan Jan 07 '09 at 04:16
  • @Greg, Things always change, we developers are the orchestrators and conductors of this change, because we change and shape the future with our ideas and our code. That's the reason we want immutability! – tuinstoel Jan 07 '09 at 07:29
  • 10
    Yes, and we'll only access read-only databases, stored on read-only media. Maybe once our programs have no mutable state, and therefore accomplish nothing we can move on to truly pure functional programming where nothing happens and the compiler with the best optimization outputs nothing. – postfuturist Jan 07 '09 at 08:46
  • Might be a little hard to animate anything if the variables describing the object to animate were immutable. – Kamil Szot Sep 06 '09 at 20:22
  • 1
    @Kamil: no, not at all. In fact, `Point` objects in .NET *are* immutable, and animate just fine. You just need to create a new object for each animation position – which *sounds* inefficient but really isn’t necessarily. – Konrad Rudolph Sep 07 '09 at 06:55
  • Interestingly, in Java even loop variables can be final: for (final item : list) { ... } Took me a while to discover that. – Dr. Hans-Peter Störr Dec 11 '09 at 20:18
  • He's not saying that all variables should be final, he's saying all variables should be final *by default*. That's reasonable. – Craig P. Motlin Oct 22 '10 at 02:49
52

Realizing sometimes good enough is good enough, is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not 'some crap that happens to work'. But then again, when you are under a time crunch, 'some crap that happens to work' may be considered 'good enough'.

John MacIntyre
  • 12,910
  • 13
  • 67
  • 106
48

If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..

David Basarab
  • 72,212
  • 42
  • 129
  • 156
Gareth
  • 133,157
  • 36
  • 148
  • 157
46

"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man

David Basarab
  • 72,212
  • 42
  • 129
  • 156
GeoffreyF67
  • 11,061
  • 11
  • 46
  • 56
  • 2
    I think what you're trying to say is Swing sucks (as in JAVA UIs). Java back ends don't suck at all...unless that's the controversial bit ;) – rustyshelf Jan 03 '09 at 04:47
  • You don't have to be a Java partisan to appreciate an application like JEdit. Java has some serious crushing deficiencies, but so does every other language. Those of Java are just easier to recognize. – dreftymac Jan 03 '09 at 05:25
  • I a C# fanboy, but I admire quite a few Java apps as being very well done. – Neil N Feb 13 '09 at 20:14
  • 9
    I think what you are trying to say is that the barrier for Java coding is so low that there are many sucky Java "programmers" out there writing complete crap. – Lawrence Dol Feb 19 '09 at 00:44
  • 1
    I agree that most Java desktop apps I've seen suck. But I wouldn't say the same of server apps. – Sergio Acosta Mar 11 '09 at 08:51
  • 3
    You're going to blame a programming language for 'horrible user interfaces'? Surely that is a fault of the UI designer. And while I'm sure Java has its share of poorly coded software that runs slowly and consumes too much memory, it is not at all hard to write Java programs that run efficiently and use memory only as needed. Having worked on a Java based web crawler capable of crawling 100s of millions of URIs I can attest to this. – Kris May 30 '09 at 22:41
  • Java as a programming language is lacking a lot of features that you really want to make your life simpler. Java as a development platform rocks, as it got a great set of libraries and a nice community. – Knubo Nov 11 '10 at 19:46
  • Java does suck. Get to know .NET, Visual Studio, then you will never again want to code for Java. – Smur Dec 23 '10 at 14:10
45

Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.
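
The "one virtual method calls another" hazard described above can be sketched in Python (hypothetical Greeter classes; Python has no sealed, so this only illustrates the call-chain dependency, not the fix):

```python
class GreeterV1:
    """Original base class: greet() happens to call the overridable message()."""
    def greet(self):
        return self.message()

    def message(self):
        return "hello"


class GreeterV2:
    """'Harmless' refactor of the base class: the internal call is inlined."""
    def greet(self):
        return "hello"

    def message(self):
        return "hello"


class Shouter(GreeterV1):
    # Override relies on the undocumented fact that greet() calls message().
    def message(self):
        return super().message().upper()


class ShouterV2(GreeterV2):
    def message(self):
        return super().message().upper()


print(Shouter().greet())    # HELLO - the override takes effect
print(ShouterV2().greet())  # hello - identical subclass code, silently broken
```

Once subclasses exist, the detail that greet() calls message() is effectively part of the public contract, which is exactly the loss of flexibility described above.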

Jon Skeet
  • 1,421,763
  • 867
  • 9,128
  • 9,194
  • I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction. I'd call that returning to their C++ roots. – duffymo Jan 02 '09 at 13:39
  • C# isn't really rooted in C++ though - it's rooted in Java, pretty strongly. IMO, of course :) – Jon Skeet Jan 02 '09 at 13:45
  • That's not controversial - that's common sense :) – Brian Rasmussen Jan 02 '09 at 13:49
  • I realize that the link between C# and Java is certainly stronger than C++, but if we were drawing an inheritance diagram they'd both claim C++ as parent (arguably grandparent for C++). – duffymo Jan 02 '09 at 13:55
  • 2
    +1 from me. I very rarely have to remove a sealed modifier (and I make everything sealed by default, unless it is immediately clear that it cannot be sealed). –  Jan 02 '09 at 14:18
  • My understanding is that you are saying we should be extra careful when designing object hierarchies, but I don't understand how sealing classes by default would help to achieve this. – Leonardo Herrera Jan 02 '09 at 14:49
  • "I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction" how is that logical?? i miss the connection. making methods nonvirtual by default in c++ is a Good Thing (imho) +1 – Johannes Schaub - litb Jan 02 '09 at 16:47
  • I think the counter-argument could be generalized to: a class that derives from a base class without overriding anything further up in the hierarchy can be done relatively safely, etc. – Chris Smith Jan 02 '09 at 17:12
  • 3
    i think this is an anti-pattern. Classes without inheritance are just modules. Please don't pretend to know what all future programmers will need to do with your code. – Steven A. Lowe Jan 02 '09 at 18:39
  • Inheritance and immutability don't go well together. If I want to know for sure that an object is immutable, I must know that it is not derived from, since a derived type can break that contract. – Jay Bazuzi Jan 02 '09 at 22:46
  • 2
    Given your reasoning, it's difficult to disagree. However - if I wished to use your class for a purpose which you didn't intend, but through some clever overriding/application of your base methods/properties it will suit my purpose, isn't that *my* prerogative rather than yours? – BenAlabaster Jan 02 '09 at 22:57
  • +1 from me too. Its about avoiding implicit assumptions - which always come back to bite you. An explicit statement is always more accurate. – devstuff Jan 02 '09 at 23:04
  • @balabaster: If you do that and then I want to make a change, it's very likely to break your code. As a code supplier, I don't want to put customers in the position of having fragile code. (Not that I'm actually a code supplier etc. This is in theory.) – Jon Skeet Jan 02 '09 at 23:08
  • I agree that inheritance should be guided, but sealing all your classes by default doesn't guide you it's a road block, removing inheritance entirely – Jeremy Jan 03 '09 at 21:09
  • 1
    Even so, I should understand the risks in deriving from a non-frozen class. Any changes you make in an unsealed class carry the same penalty, so all you're doing by making everything default-sealed is making it harder to use your code in my own way. – Jeff Hubbard Jan 03 '09 at 21:10
  • Agreed in principle, although I hated the sealed-by-default behaviour of methods when I was using early C# (at Microsoft, actually) because sometimes I would want to intercept calls to some library class's method, but couldn't just subclass it because they didn't make the methods virtual. – Joe Jan 04 '09 at 04:25
  • If a inheriting class changes behavior of the method it is wrong. Period. It does not fulfill the substitutability principle. There is no need to make a class sealed, just shoot the offender. – David Rodríguez - dribeas Jan 04 '09 at 12:47
  • One problem with having everything sealed is that it kills proper unit testing. Because methods in the .NET framework are sealed, it's almost impossible to test classes that use .NET framework classes like DirectoryEntry (which uses external resources), without writing a wrapper first – Erlend Jan 05 '09 at 09:28
  • I agree, and I would expand the scope to say that all programming language constructs should default to the "safest" or "no additional work required" state (not the opposite). Also, there should always be an optional keyword for the default whenever there is a keyword to specify a non-default. – Rob Williams Jan 05 '09 at 20:41
  • You can not mock sealed classes, except if they implement a certain interface which is used by all users of that class instead of the sealed class. (Bye Bye folks, I will descent into hell soon, as I dared to down vote Jon Skeet...) – EricSchaefer Jan 07 '09 at 14:50
  • 1
    I vastly prefer mocking of interfaces instead of classes anyway, so it's never been an issue for me. – Jon Skeet Jan 07 '09 at 14:54
  • Why not get rid of defaults all together force the developer to make a decision if it's sealed or not, same should go for public vs private. – JoshBerke Jan 13 '09 at 16:15
  • @Josh: Yes, that's definitely an interesting idea. There are some options where I don't want to have to be explicit - e.g. "nonvolatile" would be silly. How about "writable" as the opposite of "readonly" for static and instance variables though? Hmm... – Jon Skeet Jan 13 '09 at 16:26
  • I've found this to be *very* controversial in the circles I frequent. While I favor interfaces & aggregation over inheritance, I've seen some very creative and powerful techniques employed that rely on the availability of inheritance in the framework level libraries being used. My observation is that many people are reluctant to change code that they didn't write to begin with (particularly in IT organizations where it sometimes is actively discouraged). – LBushkin Oct 13 '09 at 22:37
  • Strongest argument I've seen for classes NOT to be sealed by default is that it would adversely impact the ecology of software libraries (commercial and internal). Too few people take the time to consider how their classes can be inherited - it's hard to get this right. Most will stick with the language default. Software changes relatively slowly (even when you have the code) and there will be a lag in getting inheritability changed. Finally, will people really spend more time designing for inheritance? Or just blindly add "overrideable" when the find a case where they decide they need it? – LBushkin Oct 13 '09 at 22:43
  • @LBushkin: The fact that people don't take time to consider things properly (and that it's hard to get it right) is exactly why the default ought to be the *safe* option. Give people the shotgun *unloaded*, and make them load it themselves if they want to. – Jon Skeet Oct 14 '09 at 05:59
43

Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.

Christian C. Salvadó
  • 807,428
  • 183
  • 922
  • 838
42

A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.

Tom Moseley
  • 591
  • 7
  • 15
  • 1
    Real clever programmers are those that find the good answer while making it maintainable. Either that or those who hide their names from comments so users won't backfire asking for changes. – David Rodríguez - dribeas Jan 05 '09 at 13:55
  • 3
    Real genius is seeing how really complex things can be solved in a really simple way. People who write needlessly complex code are just assholes who want to feel superior to the world around them. – Captain Sensible Jan 26 '09 at 09:56
  • +1 Good programmers know their own limitations - if it's so clever you can only just understand it when you're writing it, well, it's probably wrong now, and you'll never understand it in 6 months time when it needs changing. – MarkJ Jan 27 '09 at 11:51
  • 17
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --unknown – Robert J. Walker May 05 '09 at 18:53
  • 3
    Robert, great quote: BTW it's from Brian Kernighan not "unknown" – MarkJ Jun 01 '09 at 18:28
40

If you have any idea how to program, you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.

eswald
  • 8,368
  • 4
  • 28
  • 28
AnthonyWJones
  • 187,081
  • 35
  • 232
  • 306
  • Excellent point. I re-learn this point the hard way every time I try to teach my parents (in their early 70s) how to use something on the computer or their cell phones. – MusiGenesis Jan 13 '09 at 17:20
  • 4
    I disagree. I don't think they are mutually exclusive. To take the opposite, people who have never used a computer before are the best interface designers. – James McMahon Jan 13 '09 at 19:47
  • I disagree, but only in the sense that most interface design decisions seem to be made by management. – Dave Jan 13 '09 at 23:55
  • 1
    I'd say they're definitely not mutually exclusive. I would more likely say that management should never decide where to put the button. I've had some of the most complicated interfaces ever created that way. – Sam Erwin Apr 02 '09 at 18:43
  • I wish I could upvote this twice. Yes, it's not universally true, but programmers tend to have the completely wrong mindset to design UI. We are too forgiving of interface flaws when it gives power and flexibility that end users don't need. – Robert J. Walker May 05 '09 at 18:55
  • That's one of my favorite books. Should be a must read - particularly for programmers who think they are web designers... – CMPalmer Jun 11 '09 at 14:49
  • This is like saying "If you know anything about how a car works, you should not be allowed to design the interior." There is an entire discipline around UI design and if you are doing things just based on your mental model of some imaginary elderly user, then you are not doing it correctly. No one can account for everyone's mental model. Applying extensive research, best practices, statistical analysis, and user testing are the ways to get to your desired result. Programmers can learn this discipline too. – Ben Reierson Jun 17 '09 at 20:15
  • @Ben: no, you can't account for "everyone's" mental model, but it's a sure thing that the developer's mental model is entirely different from everyone else's. That's why an interaction design professional will invent a person that best represents the typical user. If a system has users of very different personas (e.g., in addition to Doris we may invent Jeff the IT admin guy) then good interaction design will use Jeff as the target audience for the tasks he is likely to engage in. – AnthonyWJones Jun 17 '09 at 21:25
  • 9
    Interaction Design by users is what gave MySpace its reputation for vomit-inducing pages. – Kelly S. French Jul 16 '09 at 15:18
40

Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}
True Soft
  • 8,675
  • 6
  • 54
  • 83
Jon Clegg
  • 3,870
  • 4
  • 25
  • 22
  • 2
    I wouldn't apply this as a **rule**, but I definitely don't hesitate to take this route when it can reduce complexity and improve readability. +1 Why do you need peter so badly, though? – P Daddy Jan 09 '09 at 23:19
  • 1
    Not a fan of 'cavern code' are we? :) I have to agree however. I've actually worked on 'cavern code' that had more than an ENTIRE PAGE of just closing braces.... And that was on a 1920x1600 monitor (or whatever the exact res is). – LarryF Jan 14 '09 at 00:36
  • You should check out "Spartan programming" - this seems like a similar style. – Keith Mar 09 '09 at 10:45
  • It is not indentation you are arguing against, it's deeply nested conditional and loop blocks. I fully concur in that regard. I've found that enforcing a code style with a maximum line length tends to discourage this behavior somewhat. – Kris May 30 '09 at 23:48
  • Don't forget braces for "if"! use foreach! use (condition ? valueIfTrue : valueIfFalse) If you don't understand, search engine, learn! – moala Aug 12 '09 at 01:45
  • 2
    I don't like the continue here. – Loren Pechtel Oct 18 '09 at 04:30
  • This is a dupe of the higher-ranked answer http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/407507#407507 – Ether Nov 08 '09 at 20:12
38

Before January 1st 1970, true and false were the other way around...

annakata
  • 74,572
  • 17
  • 113
  • 180
  • Oh man, this is the funniest thing I've seen on SO in a long time. – MusiGenesis Jan 13 '09 at 17:24
  • I understand how *nix systems record time, and how true and false are represented. But, could someone explain this joke to me, I don't get it? Thanks. – Matt Blaine Feb 23 '10 at 02:55
  • it's like particles and anti-particles: for an arbitrary system (like a computer) it doesn't actually matter what label you ascribe to each value, the two things are defined by each other. Kaons spoil the metaphor a bit, but it's just a joke so you'll have to learn to let it go. – annakata Apr 18 '10 at 19:30
38

I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.
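
As a concrete illustration (a made-up `count_matches` helper), here are two Python functions whose tokens are identical and whose only difference is the indentation of the final return:

```python
def count_matches(items, needle):
    count = 0
    for item in items:
        if item == needle:
            count += 1
    return count                 # after the loop: counts every match

def count_matches_buggy(items, needle):
    count = 0
    for item in items:
        if item == needle:
            count += 1
        return count             # inside the loop: exits after the first item

print(count_matches(["a", "b", "a"], "a"))        # 2
print(count_matches_buggy(["a", "b", "a"], "a"))  # 1
```

Both versions parse without complaint, so nothing flags the bug except its wrong answer.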

Paul Wicks
  • 62,960
  • 55
  • 119
  • 146
  • 1
    A well configured editor can help you here. Most editors can show invisibles and vim for one can highlight those invisible mistakes in red to make them really obvious. – mcrute Jan 10 '09 at 16:31
  • 3
    I think that the bad idea becomes more obvious when you think about the ridiculous limitation of `lambda` in Python. – Svante Jan 12 '09 at 16:49
  • The number of times I've had a python script fail because I put a blank line in my code in a for loop, and the blank line didn't have enough spaces... Makes me want to not space my code with blank lines. – Cameron MacFarland Jan 13 '09 at 23:13
  • I don't agree with you, but +1 because it _is_ controversial – hasen Jan 24 '09 at 05:23
  • It was also true of the original Unix make command. Actions had to be one tab space in; if you used spaces instead, an action looked like a syntax error. Ugh! – Jim Ferrans May 19 '09 at 02:26
  • History repeats itself. We didn't learn from Fortran output formatting or from make files so why be surprised that someone thought it was a good idea for python? It won't be the last time. – Kelly S. French Jul 16 '09 at 15:28
  • 5
    @mcrute: if you have to build a special-purpose tool just to work with the language, that sounds like a problem to me. – Paul Nathan Jul 29 '09 at 16:58
  • "About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students." So how is this a problem? – fengb Dec 16 '09 at 00:33
  • @Paul Nathan: if you have to build a special-purpose tool to write well-indented code with a braces language, that sounds like a problem to me. – Beni Cherniavsky-Paskin Dec 31 '09 at 15:56
37

You don't have to program everything

I'm getting tired of the idea that everything needs to be stuffed into a program, as if that were always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and needs less maintenance.

Aaron Digulla
  • 321,842
  • 108
  • 597
  • 820
Mafti
  • 675
  • 2
  • 11
  • 28
37

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers could remove a whole class of errors related to NullReferenceExceptions if they simply eliminated null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#), instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.

Juliet
  • 80,494
  • 45
  • 196
  • 228
  • 2
    An interesting link to confirm your point of view: http://sadekdrobi.com/2008/12/22/null-references-the-billion-dollar-mistake/ – Nemanja Trifunovic Jan 02 '09 at 17:47
  • Nemanja: Fascinating find, too bad I can't upvote comments :) – Juliet Jan 02 '09 at 18:20
  • 19
    I would rather have "non-nullable reference types" (with compiler checking) than completely remove null. – Jon Skeet Jan 02 '09 at 18:26
  • 1
    I have to agree with Jon; "null" is frequently a valid state and indicates something completely different from zero or empty. Eliminating it would be a mistake IMO; but for those cases where it's not appropriate, a non-nullable object type would be nice. – Mike Hofer Jan 02 '09 at 19:33
  • Correction: a non-nullable reference. – Mike Hofer Jan 02 '09 at 19:33
  • I disagree, but then I use Objective-C where nil is quite a handy concept. –  Jan 02 '09 at 19:41
  • 4
    This is like prohibiting zero to prevent divide-by-zero errors. Nulls happen in real-world situations and forbidding them would force everyone to hand roll their own ad hoc implementations. – Dour High Arch Jan 02 '09 at 21:24
  • I really like Scala's approach to this: there is no null, and if you want the same effect you have to wrap it in an Option[T] object (either Some[T] or None) which forces you to notice and check it. No more accidental nulls. – Marcus Downing Jan 09 '09 at 03:15
  • 1
    I don't necessarily agree that they should be removed, but I do think the Null Object Pattern should be preferred over checking for null every four lines in your code. – moffdub Jan 09 '09 at 22:20
  • Princess, if you like Nemanja's link you can edit your answer and include it – MarkJ Jan 27 '09 at 11:47
  • Agree with Jon. It should be possible to have the language enforce that a given variable can never be assigned null. – Thorbjørn Ravn Andersen Oct 23 '09 at 17:59
  • The problem is your strongly typed language, not null. In a language where null is a valid value and calling any method on null returns null is great. – drawnonward Apr 18 '10 at 00:18
35

Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing a reference to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.

Steve
  • 11,763
  • 15
  • 70
  • 103
  • 9
    +1 because I disagree so strongly. Singletons (the design pattern) make testing such a nightmare they should never be used. Note that singletons (an object only instantiated once) are fine, but they should be passed in through dependency injection. – Craig P. Motlin Jan 02 '09 at 18:35
  • 2
    A logger is certainly not a perfect candidate for a singleton. You may want to have two loggers. I've been in that exact situation before. It may be a good candidate for being *global*, but certainly not for being forced into "one instance only". Very few things require that constraint. – jalf Jan 04 '09 at 00:59
  • 1
    The way I figure it, I've used some singletons in one project, and I might well do so again before I retire. Not the most widely useable patterns, but valuable for some things. – David Thornley Jan 09 '09 at 14:49
  • 1
    I really recommend reading http://misko.hevery.com/2008/08/25/root-cause-of-singletons/ to you. – balu Feb 02 '09 at 20:33
  • I would like to add that in C++, the singleton pattern is extremely important due to the static initialization fiasco. – rlbond Mar 21 '09 at 05:05
  • Logging is the only common use of the singleton pattern, all others uses are mostly bad. – Emmanuel Caradec Aug 24 '09 at 00:19
  • I have never found a case of singleton that could not be substituted with a static, besides in languages that do not have a proper static initialization time, bringing on the static fiasco. – kurast Oct 22 '09 at 19:35
35

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.

Tordek
  • 10,628
  • 3
  • 36
  • 67
34

A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who, not wanting to read carefully prepared reports and documentation, says "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.

skiphoppy
  • 97,646
  • 72
  • 174
  • 218
  • I don't agree that a picture is not worth a thousand words. I do agree with the sentiment in the answer. Perhaps it would be better to ask "Would you use a 1000 words when only a few (or even one) would do?". Using an image instead of well choosen text is may effectively be just that. – AnthonyWJones Jan 03 '09 at 13:46
  • Some words are worth thousands pictures. (What about sounds, music, odours, etc?) – moala Aug 12 '09 at 01:49
  • 2
    Yes but a 32,000 byte bitmap IS one thousand words. At least until you move to a 64-bit CPU. – Kelly S. French Nov 06 '09 at 17:10
33

You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.

Ferruccio
  • 98,941
  • 38
  • 226
  • 299
33

There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.

Brian Willis
  • 22,768
  • 9
  • 46
  • 50
  • Those who can't do, teach. By that logic, the people who can't program are the ones teaching us how to program. I've experienced it myself where the professors I've had have admitted to being unable to do the problems and exercises they assign. Protip: Take the classes with the teachers contracted by the university, not tenure (or tenure-pathed) professors. – ravibhagw Jun 08 '10 at 20:00
33

Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad, removing code is good. And if you have to write code, write small code..."

Gal Goldman
  • 8,641
  • 11
  • 45
  • 45
32

It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.

Hugo

Rocketmagnet
  • 5,656
  • 8
  • 36
  • 47
  • 5
    That sounds very much like my way of thinking: optimise the architecture/design, not the implementation. – Jon Skeet Jan 17 '09 at 14:15
31

If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.

BBonifield
  • 4,983
  • 19
  • 36
  • I agree clear communication is important. But grammar is secondary. Some people have poor grammar but can communicate clearly (I'm thinking of some non-native English speakers) and some people have perfect grammar but can hardly communicate at all. – John D. Cook Jan 11 '09 at 03:40
  • Ironically, there are many developers that think this is beneath them. Comments and documentation that looks like it's written by a retard should somehow convey that they are truly great hackers. – Captain Sensible Jan 26 '09 at 10:08
  • This isn't just about grammar and spelling either. It is possible to write something that has correct grammar and spelling yet is nearly impossible for others to understand (just as you can write a program that compiles and runs yet is impossible to understand the code). Being able to express yourself clearly in writing is very important. Having taught a comp-sci course that involves writing design documents for the last six years I've found it distressing how few of my students seem to possess this ability. And it seems to be getting worse each year. – Kris May 30 '09 at 23:57
  • 1
    @John D Cook Poor grammar is most often detrimental for communication. These rules weren't invented for no reason (goes to check if there are no grammar mistaeks in those comment). – quant_dev Jul 09 '09 at 00:08
  • 1
    "If a developer cannot write **a** clear, concise and grammatical comment **s**..." Deliberate irony? –  Jun 25 '10 at 13:18
  • Never ceases to amaze me how often I get to paste this: http://english.stackexchange.com/ – Barrie Reader Dec 09 '10 at 14:37
31

C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.

jalf
  • 243,077
  • 51
  • 345
  • 550
  • 1
    true dat. My main beef is that every 3rd party library has its own string class. I waste too much time converting between CString to std::string to WxString to char*. Can't everyone just use std::string or const char*. – Doug T. Jan 14 '09 at 14:43
  • 2
    Not true "C++ has plenty of strengths that no other language can match. It's a good language." EVERY language has strengths that no longer language can match (even LOLCODE, hey it's a lot of fun). – Jonathan C Dickinson Jan 29 '09 at 09:36
  • 2
    Perhaps. But C++'s strengths are a bit more commonly useful. Let me know when your language of choice supports compile-time metaprogramming or RAII. – jalf Jan 29 '09 at 12:01
31

Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things into a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?

Walt D
  • 4,491
  • 6
  • 33
  • 43
  • Stored proc code like T-SQL or PL/SQL is not stored in files. – tuinstoel Jan 27 '09 at 19:29
  • The main problem is that a picture is worth 1000 vague words while you can be very specific in text. But I agree that we really need a "birds eye development" mode where you can hack together a rough outline and let the IDE fill 99% of the gaps with defaults. – Aaron Digulla Mar 02 '09 at 14:22
  • I believe smalltalk does this. Yet strangely it's still not a widely used language. – Jeremy Wall Jun 14 '09 at 02:41
  • +1 for really cool idea. -1 not terribly controversial. I like the idea of perhaps seeing method declarations in a 3d space with calls to other methods shown using lines / color / something. Perhaps it would be a mess, perhaps it would make an overall code overview easier to grasp? Dunno how much of this smalltalk does as suggested above. – Ray Jul 17 '09 at 12:55
  • YES! I really wish programmers get over the cult of linear plaintext soon. Modern IDEs do take steps in the right direction, but it's not enough - they are still just annotating and working on the plaintext, bending it already almost to the breaking point. Instead of hackarounds, we should be shifting the paradigm into expressing application design in forms that are much more suitable for it! – Ilari Kajaste Oct 13 '09 at 13:18
  • YES! I miss Visual Age for Java every day I get into Eclipse. VAJ had no source files, just some kind of binary repository :S – JuanZe Oct 13 '09 at 21:26
  • Boo! I currently work on a project that is doing all of its development in one of your magic IDEs. The problem with abstracting details is that they have to be defined somewhere. The more obscure (according to your IDE designer), the harder the detail is to find. And if the detail happens to be causing a bug or compiler error, you get to hunt through the IDE for where that detail is. It may be possible to have one of those magic IDEs some day, but not without the Mozart of the HCI field and not without enormous vendor lock-in. – thebretness Feb 26 '10 at 00:39
  • The stuff you're talking about is called Model Driven Development. I don't know what the right alternative is, but I suspect it is a blending of scripts, OO, and low-level code in one program, using the right language for each job. – thebretness Feb 26 '10 at 00:40
30

One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not fanatical mindfulness of "where" the data is at in any given moment.

Jas Panesar
  • 6,597
  • 3
  • 36
  • 47
  • Hey, I like that a lot! Thanks for sharing. – Jas Panesar Jan 12 '09 at 13:08
  • It's an interesting idea but I think it depends what kind of program you're writing. Five worlds man. http://www.joelonsoftware.com/articles/FiveWorlds.html – MarkJ Jan 27 '09 at 11:52
  • I couldn't agree more (granted I'm a DBA so all we deal with is data). – mrdenny Feb 04 '09 at 01:43
  • The system also seems to lose its way if the data is out. – Jas Panesar Feb 04 '09 at 15:55
  • I'd take the relations of the data into account, too, so "The model is the system". I mean the second letter of a name is relatively useless without the rest and the first name needs the family name and the employee the department, etc. – Aaron Digulla Feb 27 '09 at 09:28
30

Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.

dwf
  • 3,503
  • 1
  • 20
  • 25
  • I agree with your general feeling. Try and see them as temporal observations. See http://blog.plover.com/prog/design-patterns.html for example. – JB. Jan 07 '09 at 13:24
28

Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.

Community
  • 1
  • 1
SmacL
  • 22,555
  • 12
  • 95
  • 149
  • 3
    There are two types of optimisation, by architecture and by code. Architectural optimisation is clearly needed before you write code. However the term 'premature optimization' really applies to efforts to write code optimally instead of simply. This is evil. – AnthonyWJones Jan 02 '09 at 18:48
  • 1
    I am often called in to straighten out big messes that were architected ostensibly with the objective of "performance". – Mike Dunlavey Jan 02 '09 at 19:06
  • 1
    @Mike: There has to be some understanding of volumes and response requirements before the app is developed. Such things have to be considered in the inital archecture. Of course specific performance choices need to be justified. – AnthonyWJones Jan 02 '09 at 20:52
  • 1
    @Mike, as I mentioned, it's all to do with context. I work in the geospatial domain, where the default complexity of many problems is O(n^3). In this arena, optimization is a must, and it has to happen at design time. Analysing underperforming code with a profiler is rarely helpful. – SmacL Jan 03 '09 at 14:04
26

I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. Eg, the importance placed on code reviews when you haven't even reviewed your design! It's madness.

Jim Ferrans
  • 30,582
  • 12
  • 56
  • 83
Daniel Paull
  • 6,797
  • 3
  • 32
  • 41
  • 2
    I don't know, I've seen an upfront design be a very good guide to development. I've never seen it work out such that the upfront design is followed exactly. It seems in my experience that when the rubber hits the road, designs have to be reworked. – Doug T. Jan 02 '09 at 14:11
  • fine with that, so you iterate... amend the design now that you have discovered something new and get on with the job. Your code is, once again, an expression of your design. It's developers that think that a design follows code that irk me. – Daniel Paull Jan 02 '09 at 14:19
  • I wish I was allowed to design before I code. In this job it's "I have an idea" from somoene followed by a directive to get something in a demo ASAP. – David Jan 02 '09 at 14:46
  • Much of my design is noted in header files and/or a few diagrams on a white board. I'm not saying anything about the how formal your design should be, or how to do it, but for the love of God, get your thoughts sorted before coding! – Daniel Paull Jan 02 '09 at 14:50
  • 1
    I've been irritated by the opposite, too much value placed in the design. The mantra "reuse the design not the code" forgets the time spent on implementing, testing, debugging and generally hardening the codebase. You cannot just throw that amount of work out. – JeffV Jan 02 '09 at 17:23
  • @Daniel: I think I agree with you. At the same time, it's important to be ready and able to revise the design and the code late and often. That takes skill that, I'm afraid, is not taught. – Mike Dunlavey Jan 02 '09 at 18:38
  • @Mike - I'm not saying that we all return to a waterfall model. Quite the opposite - as a developer you should expect things to change, so design your system to cater for change (eg, minimise coupling) and expect unexpected iterations that affect your design. You are right - this is not taught. – Daniel Paull Jan 03 '09 at 00:51
  • So if you have to iterate anyway, the choice to design first or code first is essentially the same thing. – Kendall Helmstetter Gelner Jan 05 '09 at 04:40
  • @Kendall: you are kidding, right? Perhaps you are thinking of a proof by induction for your statement, but I'd hope that the number of iterations to write a bit of code that is closed against change is small. In that case, I believe that design first is far more efficient. – Daniel Paull Jan 05 '09 at 05:05
  • 2
    I believe in iterative design. If you invest too much time upfront in design, you won't have the time to do the necessary rewrite (which always happens). – quant_dev Jul 09 '09 at 00:09
25

Emacs is better

Reverend Gonzo
  • 39,701
  • 6
  • 59
  • 77
25

1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer science - they do not reflect reality, and computer programs should reflect reality as much as possible.

Jack Straw
  • 293
  • 5
  • 9
  • 4
    Actually, 0-based arrays are based in the reality of pointer addressing, which stems from how memory is laid out. – Paul Nathan Aug 03 '09 at 23:12
  • 29
    Can you tell me which is the first minute of the hour, please? I always forget... – Jon Skeet Aug 03 '09 at 23:35
  • @Paul: Agreed! And it's completely abstract and has nothing to do with counting. @Jon: The first minute is one, when we get to one we have counted off the first minute. Just like your first birthday celebrates your first year of life. There is no 0th anything. – Jack Straw Aug 03 '09 at 23:58
  • +1 @Jack, this is the perfect sort of controversial programming opinion. As much as my inner programmer hates to admit it, you've actually got a point. It even enticed Jon Skeet to enter the controversy. – Ash Aug 06 '09 at 14:26
  • 19
    I completely disagree with this opinion, so I'm upvoting it. – Theran Aug 24 '09 at 03:54
  • It's offset vs. index, fencepost vs. fence-segment. Posts work well for open-end ranges and segments work well for closed-end ranges. – Samuel Danielson Sep 08 '09 at 22:38
  • Jon skeet sleeps with a pillow under his gun – Egg Sep 21 '09 at 15:40
  • 1
    0-based arrays are (at least for me) very natural, and indeed, natural numbers begin with 0. +1 to this, is veeeeery controversial. – Lucas Gabriel Sánchez Oct 14 '09 at 12:22
  • Who says you have to use element 0 if it's not appropriate for the domain? Are you *that* hard up for memory that you can't waste one element? – Jeanne Pindar Oct 16 '09 at 10:44
  • @Jeanne: If you're not using the 0th element, effectively that's one-based :). – Jack Straw Oct 16 '09 at 16:44
  • I interpreted your post as saying compilers should default to using one-based arrays. – Jeanne Pindar Oct 17 '09 at 21:25
  • +1 I often have trouble in real life situations because I'm so used to start counting at 0 o.o. – helpermethod May 11 '10 at 13:03
25

Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.

Thorbjørn Ravn Andersen
  • 73,784
  • 33
  • 194
  • 347
JD Conley
  • 2,916
  • 1
  • 19
  • 17
  • If they're refactoring where appropriate, I probably wouldn't call them cowboys... – Jon Skeet Jan 12 '09 at 23:03
  • 1
    To me a cowboy is someone who just jumps into a problem and recklessly writes code, rather than thinking about, estimating, and designing something first. They do it without any regard to a process or accountability other than "it better get done, as fast as possible". – JD Conley Jan 13 '09 at 01:08
  • 14
    You! You're the idiot who came up with the legacy system that 5 years later I'm hired to deal with. I've spent most of my life working on 5+ year old code that because cowboys worked on it has ossified into an inflexible mess that is too brittle to be modified or added to. – Cameron MacFarland Jan 13 '09 at 22:59
  • 4
    Cameron: I think you need a new profession. Sounds like your job sucks. :) – JD Conley Jan 17 '09 at 02:03
  • 2
    No my current job doesn't suck, but that's because I'm not working on a creaking legacy system. I suppose it's unfair to only blame the cowboys for those systems, as they started ok, and then 5+ years of patches got applied. Now I ask how old the code is in interviews. – Cameron MacFarland Jan 22 '09 at 22:47
  • 1
    I'd like the cowboys to think a little, but not so much they need to write a supporting design document first or anything like that. I agree that often designers get stuck in the "what about this scenario" syndrome. – Cameron MacFarland Jan 22 '09 at 22:54
  • @Cameron: Yes, it's unfair to blame only the cowboys. Blame their managers. – Daniel Daranas Oct 29 '09 at 17:20
  • Wel call them "Ninja programmers" because there's nothing they can't do. (Just like ninjas) – Faruz Nov 01 '09 at 12:19
25

The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."

Austin Salonen
  • 49,173
  • 15
  • 109
  • 139
24

The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc. only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.

MaD70
  • 4,078
  • 1
  • 25
  • 20
rustyshelf
  • 44,963
  • 37
  • 98
  • 104
  • 4
    It sounds like you're seeing process being used to compensate for poor programmers, not to enhance great developers. This is why the Agile Manifesto says "Individuals and interactions over processes and tools". Instead of adding process for poor programmers, add it when # of programmers grows. – Jay Bazuzi Jan 03 '09 at 17:40
  • 1
    @jay not quite. I think that process even put around the best developers causes a decrease in code quality. I would liken it to meeting a famous painter, and then telling him the rules he needs to abide by to make a good painting. It might make sense to you, but it's ridiculous. – rustyshelf Jan 04 '09 at 10:59
  • I suspect great painters have their own processes. – Alex Baranosky Jan 04 '09 at 20:22
  • Process takes away energy that makes code better - that applies to coders good and bad. Some process is useful but process breeds process and you always end up with too much. – Kendall Helmstetter Gelner Jan 05 '09 at 05:09
  • I couldn't agree with you more! The arguments I've gotten into with other programmers over their strict adherence to processes could fill a book the size of War And Peace. That includes both "good" and bad processes, though. –  Jan 05 '09 at 11:14
  • I've seen the opposite effect. I worked at a company which used an Agile methodology, and the code quality was nightmarishly bad, beyond awful. I now work at a company with a very rigid process, lots of red tape around undocumented changes, and the resulting code is top notch. – Juliet Jan 09 '09 at 02:37
  • One size does not fit all. Small project, small team in one location, experienced developers, domain expert on site, software not absolutely critical? (some software, if you have a bug someone might die.) Then yes, just run wild. If not, you need more process. – MarkJ Jan 27 '09 at 11:56
  • 3
    If your processes make things harder, you're doing it wrong. It should be like an aircraft takeoff checklist, which helps you remember to do stuff in the right order. Automate things: you're a software developer dammit. Make the easy thing the right thing. – Tim Williscroft Feb 02 '09 at 02:00
23

Opinion: Frameworks and third-party components should be used only as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.

kemiller2002
  • 113,795
  • 27
  • 197
  • 251
  • I disagree; how many StringUtils classes do you have in your project? I once found a project that had 5 of them. Most of that stuff could be replaced by a third-party lib. – IAdapter Jan 03 '09 at 03:44
  • Frameworks, yes. Useless overhead, many times. Third party components, no! Portions of the task already completed, tested and debugged by thousands of other people! – skiphoppy Jan 03 '09 at 11:04
  • @skiphoppy -- I can't help it. I really am a roll your own type of guy at heart. I will fully admit that I might be jaded as places I've worked at the past tried to buy the absolute cheapest things possible. It bit us in the end. – kemiller2002 Jan 03 '09 at 17:14
  • 1
    Joel in defence of not-invented here syndrome: http://www.joelonsoftware.com/articles/fog0000000007.html – MarkJ Jan 27 '09 at 11:58
23

Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or what files they're in, for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.

chazomaticus
  • 15,476
  • 4
  • 30
  • 31
  • When I use Doxygen, I use /internal tags very often. This makes it easy to generate two sets of documentation exactly as you describe. (Of course, I also continue to use regular comments throughout code where required.) – Zooba Jan 14 '09 at 02:38
  • 2
    I don't just like JavaDoc. I love it. – Captain Sensible Jan 26 '09 at 10:42
23

To produce great software, you need domain specialists as much as good developers.

Daniel Daranas
  • 22,454
  • 9
  • 63
  • 116
  • 8
    This is as controversial as a cup of coffee. – Andrew not the Saint Oct 14 '09 at 03:19
  • @Andrew from NZSG Like many of the sentences posed here, it has been "controversial" during my past work experience, because more often than not software projects have been developed without keeping that in mind. If something happens most of the time and I disagree with it, I qualify my own opinion as somewhat "controversial", even though it is obvious that I am right. – Daniel Daranas Oct 14 '09 at 07:45
  • @Andrew: I once phoned a company about a Java developer job ad, long time ago. They asked me, "Do you know Java?" Yes. "Could you write a book-keeping application?" Err, by myself? No. With a financial advisor next to me, yes. "I see. Thank you for your interest." WTF? – Amadan Aug 10 '10 at 23:17
22

My most controversial programming opinion is that finding performance problems is not about measuring, it is about capturing.

If you're hunting for elephants in a room (as opposed to mice) do you need to know how big they are? NO! All you have to do is look. Their very bigness is what makes them easy to find! It isn't necessary to measure them first.

The idea of measurement has been common wisdom at least since the paper on gprof (Susan L. Graham, et al 1982)*, when all along, right under our noses, has been a very simple and direct way to find code worth optimizing.

As a small example, here's how it works. Suppose you take 5 random-time samples of the call stack, and you happen to see a particular instruction on 3 out of 5 samples. What does that tell you?

.............   .............   .............   .............   .............
.............   .............   .............   .............   .............
Foo: call Bar   .............   .............   Foo: call Bar   .............
.............   Foo: call Bar   .............   .............   .............
.............   .............   .............   Foo: call Bar   .............
.............   .............   .............   .............   .............
                .............                                   .............

It tells you the program is spending 60% of its time doing work requested by that instruction. Removing it removes that 60%:

...\...../...   ...\...../...   .............   ...\...../...   .............
....\.../....   ....\.../....   .............   ....\.../....   .............
Foo: \a/l Bar   .....\./.....   .............   Foo: \a/l Bar   .............
......X......   Foo: cXll Bar   .............   ......X......   .............
...../.\.....   ...../.\.....   .............   Foo: /a\l Bar   .............
..../...\....   ..../...\....   .............   ..../...\....   .............
   /     \      .../.....\...                      /     \      .............

Roughly.

If you can remove the instruction (or invoke it a lot less), that's a 2.5x speedup, approximately. (Notice - recursion is irrelevant - if the elephant's pregnant, it's not any smaller.) Then you can repeat the process, until you truly approach an optimum.

  • This did not require accuracy of measurement, function timing, call counting, graphs, hundreds of samples, any of that typical profiling stuff.

Some people use this whenever they have a performance problem, and don't understand what the big deal is.

Most people have never heard of it, and when they do hear of it, think it is just an inferior mode of sampling. But it is very different, because it pinpoints problems by giving cost of call sites (as well as terminal instructions), as a percent of wall-clock time. Most profilers (not all), whether they use sampling or instrumentation, do not do that. Instead they give a variety of summary measurements that are, at best, clues to the possible location of problems. Here is a more extensive summary of the differences.

*In fact that paper claimed that the purpose of gprof was to "help the user evaluate alternative implementations of abstractions". It did not claim to help the user locate the code needing an alternative implementation, at a finer level than functions.
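To make the procedure concrete, here is a minimal sketch of random-pause sampling in Python (my illustration, not the author's tool; the `workload` and `slow_helper` functions are invented stand-ins for a real program and its hot spot). The sampler snapshots a worker thread's stack at intervals and counts how often each function appears anywhere on it:

```python
import sys
import threading
import time
from collections import Counter

def slow_helper():
    # Deliberate hot spot: nearly all wall-clock time is spent here.
    time.sleep(0.005)

def workload(stop):
    while not stop.is_set():
        slow_helper()

def sample_stacks(thread_id, n_samples=40, interval=0.01):
    """Take n timed snapshots of one thread's call stack and count
    how often each function appears anywhere on the stack."""
    hits = Counter()
    for _ in range(n_samples):
        time.sleep(interval)
        frame = sys._current_frames().get(thread_id)
        while frame is not None:
            hits[frame.f_code.co_name] += 1
            frame = frame.f_back
    return hits, n_samples

stop = threading.Event()
worker = threading.Thread(target=workload, args=(stop,))
worker.start()
hits, n = sample_stacks(worker.ident)
stop.set()
worker.join()

# If a call site shows up on fraction f of samples, eliminating it
# gives roughly a 1 / (1 - f) speedup (f = 0.6 -> 2.5x, as above).
print(f"slow_helper seen on {hits['slow_helper'] / n:.0%} of samples")
```

The fraction of samples containing a call site estimates its share of wall-clock time, which is the percentage the answer is talking about; no call counts, graphs, or precise timing are needed.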


My second most controversial opinion is this, or it might be if it weren't so hard to understand.

Community
  • 1
  • 1
Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
  • I can add one more type of reaction: "This is a great technique, but why not use one of the tools that automates it?" – Crashworks Nov 03 '09 at 05:31
  • @Crash: Happy Halloween! You're right, that is another reaction I get, and of course the answer is: "You could if they exist". I don't want much: 1) take *and retain* stackshots, 2) rank statements (not functions) by inclusive time (i.e. % of samples containing them), 3) let you pick representative stackshots and study them. – Mike Dunlavey Nov 03 '09 at 13:49
  • ... I built one ages ago, to run under DOS. It didn't do (3) but it had a "butterfly view" between statements (not functions). The real value was that it would focus my attention on costly call sites, and then I would take manual samples until one of those showed up under the debugger, and then I could really look to see what was going on, because just knowing the location was not enough. – Mike Dunlavey Nov 03 '09 at 13:53
  • ... as a recent example, this C# app takes its time starting up. Half a dozen stackshots show about half the time is spent looking up strings in a resource and converting them to string objects, so they can be internationalized. What the stack sample by itself doesn't show is how often the string is something you would never want to internationalize, which in this case is most of the time. Just finding a slow function, or looking at numbers after a run, would never reveal that. – Mike Dunlavey Nov 03 '09 at 13:59
  • @Crash: Actually there's a tool called RotateRight/Zoom that is close to doing it how I think is right. It takes and retains stackshots. You can manually control when it samples. It has a butterfly view that can work at the statement level. It gives you total time as a percent, which is the fraction of samples containing the line. – Mike Dunlavey Dec 12 '09 at 04:27
  • People with a low boredom threshold might press `Ctrl+C` after one second, which may not be a representative sample of the program as a whole. – Andrew Grimm Feb 11 '10 at 02:17
  • @Andrew-Grimm: The problem, when removed, will save some %. Pick a %. 20%, 50%, 90%, 10%? Whatever it is, that is (at least) the probability that each `^C` will see it. One way is take 20 samples - 20 * x%/100 will show it. Another way is, just take samples until something appears more than once. It's a big one, guaranteed. – Mike Dunlavey Feb 15 '10 at 20:00
  • ... **one** sample is not enough **unless** you know there is a big (high percentage) problem. In the limit, if you know there is an infinite loop, it only takes one sample to see it. In general, you don't know, so take multiple samples. – Mike Dunlavey Feb 15 '10 at 21:26
  • If all you're interested in is "is there enough space in this room" then you definitely need to know how big the elephants are. Measuring and capturing go well together - you don't need to commit yourself to only using one technique. – Jon Skeet Mar 22 '10 at 19:53
  • @Jon: That's just a metaphor I'm using to try to get the idea across that if something's taking too long, stackshots can find the problem with precision of location, but not necessarily precision of time measurement. I've seen one profiler that does this (Zoom), but I haven't seen them all. Mainly I'm zealot-ing for an orthogonal way of thinking about performance tuning - to expect big speedup factors, which are typically mid-stack lines of code doing stuff you didn't realize. – Mike Dunlavey Mar 22 '10 at 20:42
  • @Jon ... and there's a central phenomenon that I never hear discussed on SO (magnification), and it's the route to big speedups. If there's a series of problems accounting for 50%, 25%, 12.5%, 6.25% of time, each time you fix the biggest one, the rest get twice as big (thus easier to find). If any one of these along the way is not something your profiler can pinpoint, you're stuck, not getting the full speedup. – Mike Dunlavey Mar 22 '10 at 21:00
  • @Mike: Absolutely. Most profilers I've used *have* shown figures as "percentage of time spent in method" mind you - with raw figures as well, but they tend not to be as useful. But yes, it's certainly possible to find big speed-ups. I recently found some in Noda Time :) – Jon Skeet Mar 22 '10 at 21:53
  • @Jon: Right. What I like about Zoom is it gives you % time (wall-clock) *at the line-of-code level*, it ignores recursion (yay!), and it has a butterfly view, although it's a function-level not line-level butterfly. But still, those things are cute & helpful, but when I've got serious tuning to do, the fact that you can see all the variables, and you can read the *why* off of individual stack samples, is what, for me, makes all the difference for the manual method. Cheers. – Mike Dunlavey Mar 22 '10 at 23:01
22

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.

Omu
  • 69,856
  • 92
  • 277
  • 407
  • This should be a rule in any position that has anything to do with using a computer. Not just restricted to programmers. – awright18 Mar 04 '10 at 01:28
22

If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through sourcecode or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?

eswald
  • 8,368
  • 4
  • 28
  • 28
  • 5
    IMHO, if you need code completion so badly, it's a code smell, or even a design smell: it indicates that the design has grown too complicated, too interdependent, too tightly coupled to other modules' responsibilities. It's a bit controversial too: refactor it until it fits into your brain! – vincent Jan 05 '09 at 01:08
  • 1
    Code completion slows typing. Even set to zero delay, there's the tiniest pause while you wait for code completion. I agree that if you need code completion on your own code, that may well be a sign something needs simplification. But libraries are so large now, I think it helps more than hurts. – Kendall Helmstetter Gelner Jan 05 '09 at 05:40
  • 2
    @vincent: Do you never use massive libraries (.NET Framework / Windows API etc)? – erikkallen Jan 05 '09 at 11:49
  • I'm using Django, and RoR before. Both encourage cohesion and small files. At the same time I'm helping out a beginner with VB.net, and I have to say VS is impressive, and it certainly influences the code style itself ; but code completion has to be a double-edged sword. ( BTW, I *HATE* eclipse ) – vincent Jan 05 '09 at 22:18
  • VS has really fast completion @Kendall: it doesn't impede my typing. Half the time I write Con.Wr[Down]( for Console.WriteLine(. That's 10 keystrokes less. @vincent, I agree, Eclipse needs to improve their code completion. – Jonathan C Dickinson Jan 29 '09 at 09:34
  • I work with only one other developer on a project with 240k lines of code and almost a thousand files. We couldn't live without code completion. – Matthew Iselin Aug 03 '09 at 23:47
22

The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. And that route implies that there will be times that the product owner asks for something that he/she feels is the next revolution in computing leaving someone to either agree with that fact, or explain the more likely reason of why no one does something a certain way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since the communication has improved due to disagreements on both sides of the table, it is more likely that we come together earlier in the process and come to a mutually beneficial solution to the product definition.

Joseph Ferris
  • 12,576
  • 3
  • 46
  • 72
21

It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.

Wayne Molina
  • 19,158
  • 26
  • 98
  • 163
  • I agree, with the caveat (and I'm turning and looking in the direction of several teams in Redmond, Washington) that Mort is often unfairly scoped and not always well understood. – Gabriel Jun 01 '09 at 19:10
  • 1
    I'm with you Wayne, though to stay in the industry, I think we all need to go Elvis and Einstein at times. And we need to put in effort outside of work too. I rested on my laurels for a while (got married, moved, had other stuff going on) and I can see tech moving beyond me and now I have to play catch up. Tech is moving too fast for extra effort not to be put in. I'm learning and doing side projects again, and I'm having fun. But I do resent the 14 hour a day folks. They will blossom, wither, and then fade. Balance is the key, but the days of being exclusively a Mort are numbered. – infocyde Jun 25 '09 at 21:36
21

Don't comment your code

Comments are not code and therefore when things change it's very easy to not change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or explaining why

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper
Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
rball
  • 6,925
  • 7
  • 49
  • 77
  • Your "explaining why" rationale is also subject to change if the API you are working with, for example, gets updated or improved. – dreftymac Jan 04 '09 at 04:52
  • In my small example I'm trying to show why I already did what I did. Like there's a better way to grab data, but this is the only way right now. Kind of like a note to refactor or why something happened. Also it's mainly related to my own code and not an external dependency. – rball Jan 06 '09 at 17:11
  • 14
    Icky. Don't declare a variable if you're only going to use it once. Your suggestion is not much better than, "int i,this_is_a_counter;". If you're forced to *add* extra code to get rid of comments, you've made things MORE complicated! – Brian Jan 12 '09 at 22:21
  • I have to agree with Brian, nothing worse than having a bunch of one-time-use variables. – James McMahon Jan 13 '09 at 20:00
  • 2
    I'm sick of reading this crap. The reality is that the large majority of code out there is badly written, let alone reasonably refactored. If you can't write decent (understandable) code at least have the decency of adding comments. – Captain Sensible Jan 26 '09 at 10:05
  • 8
    Why are one-time variables bad? They explain what you do, they don't cost anything (if you have a half decent compiler), and you can easily use them again for the same thing. Without the firstTimeOnPage, I would be very likely to put in the if (data == null) condition somewhere else as well. – erikkallen May 19 '09 at 09:59
  • -1: Comments are good. Comments are a cornerstone of code. I'd rather spend 10 seconds reading a one-line comment than spend two hours trying to figure out what some really complex code does. – tsilb Oct 17 '09 at 07:17
  • 3
    You might spend 10 seconds reading a one-line comment and then 3 hours finding out that the comment is outdated and led you down the wrong path. A well named variable or method is preferable, then I know what your intentions were and know that it hasn't changed. Also easily refactorable. – rball Oct 19 '09 at 15:48
  • 2
    @brian, one time variables can give names to faceless expressions, which is nice, especially in long parameter lists. – Thorbjørn Ravn Andersen Oct 23 '09 at 18:15
  • @rball: I agree and disagree, depending on how declarative or domain-specific the language is. You have a functional spec somewhere, if only in your head. If the language is declarative enough to directly encode the functional spec, then there's no need for comments. Usually, that is not the case, so IMO the purpose of comments is to express the mapping between implementation and functional spec, to the extent that the code itself is not able to. That way, when the spec changes, as it always does, you know what code to change. – Mike Dunlavey Mar 12 '10 at 17:38
21

A Good Programmer Hates Coding

Similar to "A Good Programmer is a Lazy Programmer" and "Less Code is Better." But by following this philosophy, I have managed to write applications which might otherwise use several times as much code (and take several times as much development time). In short: think before you code. Most of the parts of my own programs which end up causing problems later were parts that I actually enjoyed coding, and thus had too much code, and thus were poorly written. Just like this paragraph.

A Good Programmer is a Designer

I've found that programming uses the same concepts as design (as in, the same design concepts used in art). I'm not sure most other programmers find the same thing to be true; maybe it is a right brain/left brain thing. Too many programs out there are ugly, from their code to their command line user interface to their graphical user interface, and it is clear that the designers of these programs were not, in fact, designers.

Although correlation may not, in this case, imply causation, I've noticed that as I've become better at design, I've become better at coding. The same process of making things fit and feel right can and should be used in both places. If code doesn't feel right, it will cause problems because either a) it is not right, or b) you'll assume it works in a way that "feels right" later, and it will then again be not right.

Art and code are not on opposite ends of the spectrum; code can be used in art, and can itself be a form of art.

Disclaimer: Not all of my code is pretty or "right," unfortunately.

  • 1
    Definitely agree! Making beautiful applications requires beautiful code. – Matt Dec 19 '09 at 09:31
  • 1
    Only just seen this: agreed 100%. Ugly code is far more likely to be buggy. An appreciation of elegance and beauty is essential to good coding. – Keith Williams Apr 09 '10 at 17:13
20

Boolean variables should be used only for Boolean logic. In all other cases, use enumerations.


Boolean variables are used to store data that can only take on two possible values. The problems that arise from using them are frequently overlooked:

  • Programmers often cannot correctly identify when some piece of data should only have two possible values
  • The people who instruct programmers what to do, such as program managers or whomever writes the specs that programmers follow, often cannot correctly identify this either
  • Even when a piece of data is correctly identified as having only two possible states, that guarantee may not hold in the future.

In these cases, using Boolean variables leads to confusing code that can often be prevented by using enumerations.

Example

Say a programmer is writing software for a car dealership that sells only cars and trucks. The programmer develops a thorough model of the business requirements for his software. Knowing that the only types of vehicles sold are cars and trucks, he correctly identifies that he can use a boolean variable inside a Vehicle class to indicate whether the vehicle is a car or a truck.

class Vehicle {
 bool isTruck;
 ...
}

The software is written so when isTruck is true a vehicle is a truck, and when isTruck is false the vehicle is a car. This is a simple check performed many times throughout the code.

Everything works without trouble, until one day when the car dealership buys another dealership that sells motorcycles as well. The programmer has to update the software so that it works correctly considering the dealership's business has changed. It now needs to identify whether a vehicle is a car, truck, or motorcycle, three possible states.

How should the programmer implement this? isTruck is a boolean variable, so it can hold only two states. He could change it from a boolean to some other type that allows many states, but this would break existing logic and possibly not be backwards compatible. The simplest solution from the programmer's point of view is to add a new variable to represent whether the vehicle is a motorcycle.

class Vehicle {
 bool isTruck;
 bool isMotorcycle;
 ...
}

The code is changed so that when isTruck is true a vehicle is a truck, when isMotorcycle is true a vehicle is a motorcycle, and when they're both false a vehicle is a car.

Problems

There are two big problems with this solution:

  • The programmer wants to express the type of the vehicle, which is one idea, but the solution uses two variables to do so. Someone unfamiliar with the code will have a harder time understanding the semantics of these variables than if the programmer had used just one variable that specifies the type entirely.
  • Solving this motorcycle problem by adding a new boolean doesn't make it any easier for the programmer to deal with such situations that happen in the future. If the dealership starts selling buses, the programmer will have to repeat all these steps over again by adding yet another boolean.

It's not the developer's fault that the business requirements of his software changed, requiring him to revise existing code. But using boolean variables in the first place made his code less flexible and harder to modify to satisfy unknown future requirements (less "future-proof"). When he implemented the changes in the quickest way, the code became harder to read. Using a boolean variable was ultimately a premature optimization.

Solution

Using an enumeration in the first place would have prevented these problems.

enum EVehicleType { Truck, Car }

class Vehicle {
    EVehicleType type;
    ...
}

To accommodate motorcycles in this case, all the programmer has to do is add Motorcycle to EVehicleType, and add new logic to handle the motorcycle cases. No new variables need to be added. Existing logic shouldn't be disrupted. And someone who's unfamiliar with the code can easily understand how the type of the vehicle is stored.
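
To make this concrete, the new logic can be a single switch over the enumeration. This is a hypothetical sketch, not code from the answer: the Vehicle layout follows the example above, and Describe is an invented helper.

```cpp
#include <cassert>
#include <string>

// Sketch following the example above; Motorcycle is the enumerator
// added when the second dealership comes along.
enum EVehicleType { Truck, Car, Motorcycle };

class Vehicle {
public:
    EVehicleType type;
};

// One switch holds the whole decision: supporting buses later means
// adding one enumerator and one case, not another boolean flag.
std::string Describe(const Vehicle& v) {
    switch (v.type) {
        case Truck:      return "truck";
        case Car:        return "car";
        case Motorcycle: return "motorcycle";
    }
    return "unknown";
}
```

As a bonus, most compilers can warn when a switch over an enum fails to handle an enumerator, a safety net a pile of boolean checks can't offer.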

Cliff Notes

Don't use a type that can only ever store two different states unless you're absolutely certain two states will always be enough. Use an enumeration if there are any possible conditions in which more than two states will be required in the future, even if a boolean would satisfy existing requirements.

Chris
  • 747
  • 3
  • 11
  • 23
  • I guess this is not very controversial. – Ikke Nov 24 '09 at 21:15
  • The argument isn't controversial per se, but try writing your code like that and see if your team object. I'd bet 9/10 teams would try and argue you back to booleans. – David Feb 19 '10 at 03:18
  • 1
    Of course, OOP guys in the corner would mutter something along the lines of "class Truck extends/implements Vehicle, class Car extends/implements Vehicle..." – Ivan Vrtarić Mar 12 '10 at 10:09
  • 1
    I worked on a project that used a collection of booleans to try to distinguish among models of printer. It was ... execrable. Nobody would want to do that after having seen it in action. But here's some controversy for you: In languages which allow it, it's perfectly reasonable to use a bool for one of three values: true, false, and don't know. – Integer Poet Mar 15 '10 at 19:19
  • Thanks. Never thought about that. I guess I should give enums a better look. – Sylverdrag Jun 08 '10 at 06:01
  • Just to make things clear to me. Should I use `_Bool isGirl;` or `enum { boy, girl }; typedef unsigned gender;`? –  Dec 20 '10 at 14:43
20

I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of controls and adds a large block of encoded data at the start of every web page. The more controls a page has, the larger the ViewState data becomes. Most people don't give it a second look, but it's a large chunk of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on every ASP control that isn't using it; the only alternative is writing custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really, as there are probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)

Kieran Senior
  • 17,960
  • 26
  • 94
  • 138
  • Could you highlight your controversial opinion... is it "viewstate is bad" or something else? – Ed Guiness Jan 02 '09 at 14:07
  • Nope, it's "ViewState is enabled by default, when I really don't think it should be, but having it disabled by default required custom controls" – Kieran Senior Jan 02 '09 at 14:21
  • I expect anyone who has worked on ASP.NET would agree with this. We have a page to search a third party system that has some LARGE drop down lists on it. The ViewState doubled the already 200Kb page size. – pipTheGeek Jan 02 '09 at 14:34
  • I don't think that experienced webforms developers will find this particularly controversial...most of us will agree with you! – Mark Brittingham Jan 02 '09 at 15:20
  • Yup, we encounter the page size doubling from time to time, and sometimes even more. The page renders slower, more bandwidth is used, and it's a nightmare to track down problems when you're viewing the rendered page source. – Kieran Senior Jan 02 '09 at 15:29
  • The intersting thing about this is that in the majority of cases ViewState is not needed at all! – etsuba Jan 02 '09 at 19:55
  • Don't throw so much crap on a page if Viewstate is really a problem. You probably have a design problem if you really have that much viewstate stuff on a page. – Paul Mendoza Jan 03 '09 at 19:51
  • Have you tried programming without ViewState? I can promise you that 5 minutes with JSP will make you *run* back to ViewState. Seriously, the ViewState is *NEVER* the problem, the problem is the developer using the ViewState! – Thomas Hansen Jan 10 '09 at 20:51
  • @Paul, I insanely agree! Don't throw so much crap in your page if you're having ViewState problems - go back to design! – Thomas Hansen Jan 10 '09 at 20:53
  • 2
    Try ASP.NET MVC, it's a joy to program with. – Dave Jan 13 '09 at 23:59
  • You do not have to turn ViewState off for each and every control. You can do it in the @Page directive. – xanadont Jan 06 '10 at 17:02
20

My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly OOP", we should be looking at which language features are shown by experiment to actually increase productivity, rather than trying to force our tools toward some imagined ideal language.

Likewise, instead of insisting that our programs conform to some platonic ideal of a "truly object-oriented program", we should focus on sound engineering principles: making our code easy to read and understand, and using whatever features of a language are productive and helpful, regardless of whether they are "OOP" enough.

Adam Neal
  • 2,147
  • 7
  • 24
  • 39
Breton
  • 15,401
  • 3
  • 59
  • 76
  • It sounds like you're mixing programming methodologies and language design philosophies, while also recognizing the damage of zealotry. As a result, your potentially interesting thoughts are cluttered and unclear. – Jay Bazuzi Jan 03 '09 at 17:35
  • The "Truly XYZ" idiom is usually a case of the "No True Scotsman" fallacy. As far as the rest, have you read http://xahlee.org/Periodic_dosage_dir/t2/oop.html? Also, this seems very similar to a perlmonks post, have you written on this before? – dreftymac Jan 03 '09 at 19:21
  • a Language is user interface that can make a programming methodology easier. An OOP language, therefore, is a language designed to make OOP programming easier, making them closely related subjects. This position was argued better by Apocalisp, elsewhere in this question. – Breton Jan 03 '09 at 22:39
  • I've never heard anyone pontificate on the phrase "truly object oriented" in the past 10 years I've been programming. Never. Not even once. Are you actually quoting some obnoxious manager? – Juliet Jan 04 '09 at 04:05
  • Anyone who started with Java or C++ and then tried Lua, or JavaScript, or some other language that doesn't have some arbitrary Java feature. Anyone entrenched in the Java world who has a self-superior view that singletons are a terrible idea. Anyone who's read the GoF book and thought it was the future – Breton Jan 04 '09 at 23:06
  • Almost, IMHO. I think OOP is the ideal way to deal with some aspects of programming, but it's not what it's made out to be: It's not a replacement for every methodology and/or piece of code you ever come across; It's not immune from being taken too far; It's not your master; It's not irreplaceable. – jTresidder Jan 07 '09 at 22:04
  • Do you come from a VB6 background and never embraced OOP? – Chad May 24 '09 at 05:48
  • 1
    Incorrect. There's nothing wrong with OOP, it's just a strategy. What the problem is, is the attitude that I should have "embraced" it, or the only alternative is I'm some backwards beginner. It is not the end all be all, it is not a religion, and I don't have to be crucified in order to expunge me from the pool of programmers so that all "right" thinking programmers can live free of sin. I posted my answer to this question because it is the most controversial opinion I have. That was the question. – Breton May 26 '09 at 02:22
  • 1
    the reason it's the worst thing to happen to programming is that it prevents programmers from looking at other solutions that may actually be better suited to the problem, and it prevents us from looking at or accepting new paradigms that might be better suited to most problems. – Breton May 26 '09 at 02:25
  • 1
    I hate when newcomers lecture me about the greatness of OOP when I program in OO languages from mid '80s. They are totally blind to OOP shortcomings, they don't know that "OOP" is an ill-defined concept and, worst of all, they ignore a whole world of options w.r.t. programming paradigms. – MaD70 Nov 06 '09 at 00:55
  • 2
    +1 Wish I could upvote more. This field is rife with bandwagons, gurus, "right thinking", and occasionally good ideas made into religions. To a mechanical/electrical engineer (like me) this is so weird. I assume if something is true there's a scientific reason why. I also assume inventiveness is a good thing. Very little of that in this field. – Mike Dunlavey Feb 18 '10 at 14:59
19

C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that: it forces students to think about memory and all the low-level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#

hasen
  • 161,647
  • 65
  • 194
  • 231
  • 1
    so everybody should suffer, because you have suffered? It's always nice to learn useless things, but come on. – IAdapter Jan 03 '09 at 04:00
  • Not really, I really loved C++ back in the day, I was in denial when I heard from a prof that it's the worst language he's ever seen. – hasen Jan 03 '09 at 08:18
  • 9
    +1: Everyone should learn C first because programming isn't for everyone and it isn't for anyone that can't grasp C. – Robert Gamble Jan 05 '09 at 04:38
  • Blast them with raw machine code. Suffer!!! The assembler course was the most fun I had (during class time) in university. – Jonathan C Dickinson Jan 29 '09 at 09:40
  • Mythology. Before encountering C I learned the assembly of 2/3 CPUs and familiarized with others. Some CPUs are a pleasure to program because of their orthogonal instruction sets, others are a pain but less idiosyncratic than C. C fails for its intended use, i.e. a portable assembly. – MaD70 Nov 05 '09 at 23:05
  • .. and I find pathetic the elitism that too many programmers show. – MaD70 Nov 05 '09 at 23:07
  • My university taught programming almost exclusively in Java. I felt simultaneously aroused and cheated when I finally got around to learning C and C++. – iandisme Dec 09 '09 at 20:50
  • I disagree. It's hard to get first-timers excited about memory allocation. Start with a language where you can get near-instant gratification. The web languages are good for this. – Matt Dec 19 '09 at 09:14
  • @Matt: you're not supposed to agree ;) – hasen Dec 19 '09 at 09:53
  • I did a lot of teaching introductory CS. What I found was most useful was first a few weeks on a decimal machine simulator, to set up the basic mental framework of addresses, memory, instructions, and stepwise execution. Then we did Basic (sorry), then Pascal. I like C (and C++) but those are hell to teach to newbies, because there are too many subtle ways for students to get confused, like the difference between pointers and array referencing, and nested types. It's not acceptable to say "sink or swim" - they pay tuition. – Mike Dunlavey Mar 08 '10 at 14:18
19

You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
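
That separation can be as small as a single storage interface the business logic codes against. A minimal sketch with invented names (RecordStore, InMemoryStore), using an in-memory map to stand in for the flat-file version:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical interface: business logic depends only on this, not on
// whether the records live in a flat file or a database.
class RecordStore {
public:
    virtual ~RecordStore() {}
    virtual void Put(const std::string& key, const std::string& value) = 0;
    virtual std::string Get(const std::string& key) const = 0;
};

// Simplest implementation that works today; a database-backed version
// can replace it later without touching any callers.
class InMemoryStore : public RecordStore {
public:
    void Put(const std::string& key, const std::string& value) override {
        records_[key] = value;
    }
    std::string Get(const std::string& key) const override {
        auto it = records_.find(key);
        return it == records_.end() ? std::string() : it->second;
    }
private:
    std::map<std::string, std::string> records_;
};
```

Because callers hold only a RecordStore reference, swapping the implementation is a one-line change at the point where the store is constructed.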


bmb
  • 6,058
  • 2
  • 37
  • 58
  • True, but Sqlite is very portable too. I'm not gonna start with flat files if there is a change it should be moved to Sqlite. – tuinstoel Jan 03 '09 at 22:37
  • There are other benefits of a DB. Shared access across a network for a client/server program. Easy access and manipulation of data (although technologies like LINQ help with that). – Cameron MacFarland Jan 04 '09 at 08:26
  • There are thousands of benefits of a database and reasons why we need them most of the time. But not *always*. – bmb Jan 04 '09 at 15:58
  • having a database from the start is easier than first having proper separation between data storage and business logic with flat files so that you can switch to a database later :) – hasen Feb 15 '09 at 19:42
  • Are you saying it's easier to do it wrong with a database than it is to do it right without one? – bmb Feb 16 '09 at 00:52
  • 3
    I am 100% convinced that developers over use databases. The crutch that kills. – Stu Thompson Mar 30 '09 at 11:40
  • 1
    @Stu Thompson, I'm not. At work I'm refactoring an application so that it stores its data in a database instead of xml files. It is a lot of work and I hope it is the last time that I have to do this. – tuinstoel Sep 25 '09 at 09:40
  • tuinstoel, don't blame XML files for a missing or poorly designed data access layer. – bmb Sep 25 '09 at 16:53
  • @bmb, Even refactoring 'just' a data access layer can be a lot of work. And it is totally unnecessary work. – tuinstoel Sep 26 '09 at 07:27
17

Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.
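
One way to follow this advice without fighting the build system is an assertion macro that simply ignores NDEBUG. A minimal sketch (RELEASE_ASSERT is an invented name, not a standard facility):

```cpp
#include <cstdio>
#include <cstdlib>

// Unlike assert(), this check survives -DNDEBUG. On failure it reports
// the impossible state and stops, rather than muddling on and hoping
// for the best with the customer's live data.
#define RELEASE_ASSERT(cond)                                        \
    do {                                                            \
        if (!(cond)) {                                              \
            std::fprintf(stderr, "assertion failed: %s (%s:%d)\n",  \
                         #cond, __FILE__, __LINE__);                \
            std::abort();                                           \
        }                                                           \
    } while (0)
```

The sole-exception clause in the answer still applies: keep a check like this out of genuinely hot paths where its time or space cost is unacceptable.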

David Basarab
  • 72,212
  • 42
  • 129
  • 156
fizzer
  • 13,551
  • 9
  • 39
  • 61
  • "has never been tested" You do pre-release testing with assertions on and accept the assertion being triggered as passing the test? Weird idea. If you do that then I agree with you, but I don't understand why you are doing this. – jwpfox Jan 03 '09 at 13:24
  • No, I'm just assuming that a failed assertion during testing causes a build to be rejected. Therefore, if one happens in the wild, the program has necessarily entered a state outside of test coverage. – fizzer Jan 03 '09 at 13:37
  • If during testing your assertions never failed and it does fail during production code, there is a problem with testing, but nevertheless, the error should be logged and the applications should end. Assertions or code that warrants the same should be in production. I agree. – David Rodríguez - dribeas Jan 05 '09 at 12:04
  • 1
    The problem is when the action of doing the assertion costs something that would otherwise slow down your code. If it is not in a hot path, I totally agree, the asserts should always be on. – nosatalian May 31 '09 at 02:07
  • ++ I've followed this path, in the spirit of "hope for the best - plan for the worst". We test to the very best of our ability, but never assume we have found *every* possible problem. Assert (or throwing an exception) is a way of guarding against doing further damage if a problem occurs (heaven forbid). – Mike Dunlavey Mar 22 '10 at 21:29
  • It depends. Software that controls pacemakers or nuclear power stations should not be written like that. – MarkJ Jul 17 '10 at 21:04
17

All source code and comments should be written in English

Writing source code and/or comments in languages other than English makes it less reusable and more difficult to debug if you don't understand the language they are written in.

Same goes for SQL tables, views, and columns, especially when abbreviations are used. If they aren't abbreviated, I might be able to translate the table/column name on-line, but if they're abbreviated all I can do is SELECT and try to decipher the results.

Scott
  • 6,411
  • 6
  • 39
  • 43
  • If English is the main language of wherever you work, I guess. Otherwise, that's just stupid. This suggestion seems pointless imo. – Coding With Style Jul 04 '09 at 22:10
  • Especially when you code ABAP in SAP-Systems it's always funny to read some German comments, that nobody out of German speaking regions will ever understand. (I'm a native German speaker so it's double funny) – capfu Jul 23 '09 at 00:52
  • 3
    All comments in English is great - if you speak English, and the maintainers will as well. I am a native English speaker, but occasionally plop other languages in just because I can. If I were coding for an app that would be used and eventually maintained in, say, France - I'd expect the comments to be in French – warren Oct 22 '09 at 03:57
  • Using multiple languages in code makes it harder to read as you have to switch between the two languages in your head. English only (with native terms if needed in parenthesis). – Thorbjørn Ravn Andersen Oct 23 '09 at 19:46
  • That's not controversial, it's simply idiotic when you know that a piece of code will never leave a non-English-speaking country. I know perfectly well that my English sucks and I don't want to inflict it on my fellow countrymen. Of course, if I'm quoting documentation in English I don't translate it. – MaD70 Nov 06 '09 at 00:00
  • This only makes sense for open source application where you expect (or hope) to get a number of people from all over the place to help. Otherwise just use what ever language suits you best. – hasen Mar 08 '10 at 15:16
  • You guys may not intend for your code to leave your country, but none of us can see the future. Our ERP system is written half in Dutch and half in English because a Dutch company bought an American company and rolled two different products into one. How can I be expected to know what gbkmut means? – Scott Mar 08 '10 at 16:03
  • If you work for a customer and that customer has a set of terms he is using in his trade, then your code will be using them. If that customer is not using English, you will end up translating them to and from English. This will probably end up causing bugs and misunderstandings. Yes it sucks adding Norwegian (in my case) domain names in the code and it makes your head spin for a while, but at least you are on the same page as your customer. – Knubo Nov 11 '10 at 19:55
16

Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.

Paul Mendoza
  • 5,709
  • 12
  • 53
  • 82
  • 5
    If you're writing abstraction using spaghetti code, then you're doing something very, very, wrong. – JesperE Feb 27 '09 at 20:01
16

The word 'evil' is an abused and overused word on Stack Overflow and similar forums.

People who use it have too little imagination.

tuinstoel
  • 7,248
  • 27
  • 27
16

Newer languages, and managed code do not make a bad programmer better.

LarryF
  • 4,925
  • 4
  • 32
  • 40
15

I generally hold pretty controversial, strong and loud opinions, so here's just a couple of them:

"Because we're a Microsoft outfit/partner/specialist" is never a valid argument.

The company I'm working in now identifies itself, first and foremost, as a Microsoft specialist. So the aforementioned argument gets thrown around quite a bit, and I've yet to see a context where it's valid.

I can't see why it's a reason to promote Microsoft's technology and products in every applicable corner, overriding customer and employee satisfaction, and general pragmatics.

This just a cornerstone of my deep hatred towards politics in software business.

MOSS (Microsoft Office Sharepoint Server) is a piece of shit.

Kinda echoes the first opinion, but I honestly think MOSS, as it is, should be shot out of the market. It costs gazillions to license and set up, pukes on web standards and makes developers generally pretty unhappy. I have yet to see a MOSS project that has an overall positive outcome.

Yet time after time, a customer approaches us and asks for a MOSS solution.

theiterator
  • 335
  • 5
  • 12
15

I really dislike it when people tell me to use getters and setters instead of making the variable public, when you should be able to both get and set the class variable.

I totally agree with it if it's for changing a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something);. I think a.x = something; is both easier to read and prettier than set/get in the same example.

I don't see the reason for writing:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, takes more time when you write it over and over again, and just makes the code harder to read.

David Basarab
  • 72,212
  • 42
  • 129
  • 156
martiert
  • 1,636
  • 2
  • 18
  • 23
  • Agreed. Getters and setters violate encapsulation just as much as exposing objects directly does. There is no real point to them (except maybe in an external interface). – Ferruccio Jan 02 '09 at 13:37
  • 13
    There's actually a good reason to use setters: You can do some checking on constraints before assigning the new value to your variable. Even if your current code doesn't require it, it will be much easier to add such checks when there's a setter. – Jorn Jan 02 '09 at 13:43
  • 3
    I was very glad there was a setter on a variable once when I had to make sure some processing was done when it changed. – David Thornley Jan 02 '09 at 14:51
  • 1
    Actually, I think Ruby has something that gets you both - it's called virtual attributes. It allows you to have checks on your assignments and still be able to access the data as if it were a public member. – Cristián Romo Jan 03 '09 at 15:35
  • Python lets you do that as well. –  Jan 05 '09 at 11:41
  • Setters allow you to add contention in multithreading environments. Just lock when you set. Of course, it is not always the case that your code will end up being accessed by multiple threads, or is it? – David Rodríguez - dribeas Jan 05 '09 at 13:57
  • But this being 2009, who's still using an IDE that does not create the getters and setters on the press of a key...? – Arjan Apr 21 '09 at 22:47
  • It's not just that I have to write the code, but the getters and setters obfuscates the code itself by, in 95% of the time of my applications, taking up space and just being plane ugly. – martiert Apr 21 '09 at 23:20
  • I guess C# gives you a easy way to have both, is this Java? – rball May 19 '09 at 15:59
  • I had / have this opinion in some cases, but, one VERY important fact for me is that you can't 'override' a public variable. If the class in question is final, sealed, whatever - cool... AND if you're basically saying extenders should never be able to do anything on set / get ... ever ... – Gabriel Jun 01 '09 at 18:48
  • In many languages you can change a public field to a property without requiring any changes to code that consumes it. You would, however, force a recompile (in non-interpreted languages at least), which adds some constraints if you're shipping opaque libraries to external customers. – Richard Berg Jun 12 '09 at 05:53
  • 4
    And you set a breakpoint on a public field how, exactly? Setters are brilliant for exactly this reason - you can easily see what code is influencing a value. – Mark Jul 07 '09 at 13:52
  • You *must* use getters and setters when you code to an interface! – Thorbjørn Ravn Andersen Oct 23 '09 at 19:41
  • 1. Use an editor that shortens the process 2. Using setters and getters are much more safe than directly accessing the variable: what if you write a class with a variable inside: counter, and incorporate it into code (maybe in 100 classes) and now suddenly decide that counter cannot be negative ? using setter can help solve problems like these... 3. Sometimes exposing variables can be dangerous; eg: Exposing TOS in a stack class – Salvin Francis Dec 15 '09 at 05:33
  • @Richard Berg In VB6 you could change a public field to a property and vice versa without requiring any changes to code that consumes it, not even a recompile. It's one of the few areas where VB6 was IMHO better than .Net – MarkJ Jul 17 '10 at 21:15
  • @Thorbjørn -- not necessarily. Just because the designers of C#/Java decided to disallow fields in interfaces doesn't make it an inherently bad idea. Direct access is the dominant idiom in languages as diverse as C and Ruby. – Richard Berg Jul 20 '10 at 04:09
  • @Mark -- set a data breakpoint. Your CPU has hardware interrupts for this exact purpose. Getting it to work in a managed language is a little challenging, but not any harder than the problems inherent to soft-mode debugging generally. – Richard Berg Jul 20 '10 at 04:10
  • @Richard Berg: I don't get you - direct access *is* a dominant idiom for C, but definitely not for Ruby - actually, without reflection, there is no way in Ruby to do direct access. What Ruby does is give you an extremely easy way (`attr_accessor :x`) to generate getters/setters for an attribute which are syntactically transparent; i.e. you'd still use `p.x` and `p.x = 3` instead of `p.getX()` and `p.setX(3)`, but they're still methods. "Direct" instance variable would be `@x`, and you can't use a dot notation with it (i.e. `p.@x` is ungrammatical). – Amadan Aug 11 '10 at 01:03
  • I agree. I think your point is valid as long as we are not talking about exposed interfaces (where obviously you want to provide as much encapsulation as possible). For internal code I prefer to use setters/getters when there is actual checking of constraints before setting/getting anything. I don't like setters/getters that do nothing because you have to browse the code to actually see if they do something... – matias Nov 11 '10 at 21:13
15

Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.

Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
  • 4
    You must have a really large screen then. Do you also think that a class can have no more than 3 or 4 methods, because no more than that clearly fits in the 41 lines on my screen? Voting up, because this is really controversial. – Rene Saarsoo Jan 03 '09 at 19:40
  • Rene: thanks for disagreeing with me without dismissing my answer out of hand. I sense an open mind. – Jay Bazuzi Jan 03 '09 at 20:17
  • 1
    I have to disagree as well. I write a lot of Python classes and not many of them fit on my screen. Of course, I'm not counting my netbook's screen because that would just be unfair to me. =P –  Jan 05 '09 at 11:12
  • Screen size varies widely depending on your visual acuity. I keep my screens running at 1680×1250, and use Consolas 8pt. What I can see on one screen is likely *much* more than a guy running at 640×480 using Courier New 10pt. – Mike Hofer Jan 06 '09 at 12:24
  • Make that, "Screen capacity varies widely depending on your visual acuity and display settings." :-) Not enough coffee yet. :-) – Mike Hofer Jan 06 '09 at 12:25
  • @Mike: it's true, screen capacity varies. To follow my guideline, you have to decide which screen you want to fit on. On a team, you have to make that decision together. Still, the principle is sound: I want to be able to look at a whole class and comprehend it in its entirety, without scrolling. – Jay Bazuzi Jan 06 '09 at 15:40
  • This might be quite challenging to implement in some languages that are more verbose (require more plumbing), but I admire the general sentiment. – Rob Williams Jan 06 '09 at 22:53
  • @Rob: thanks, and you're right. In some languages you can Extract Class and get some compactness, hopefully for the benefit of your code. In others (C++ I'm looking at you!) even simple classes have to work very hard to function. – Jay Bazuzi Jan 07 '09 at 05:25
  • Do you have any other rules to go with this? The list of classes in an API should fit on one screen? What is it in the class that you need to see anyway, surely the name tells you all about what it can do! What need for to look at the methods on a list. – Greg Domjan Jan 07 '09 at 07:05
  • Some other rules that may fit: "Methods should have one statement" and "blocks should have only one statement" and "switch cases much be trivial" and "each 'enum' type should be mentioned in a conditional only once". :-) – Jay Bazuzi Jan 08 '09 at 00:06
  • Ouch. It can be hard enough to make a method fit on the screen, never mind an entire class (my main language is Java BTW) – finnw Jan 17 '09 at 16:45
  • 9
    For some of my classes, I can barely fit the member list on the screen. If an obect is to represent something, it should do so in its entirety. Breaking it up into many smaller classes is just adding visual complexity (right click > go to definition - ad nauseum) where it need not exist. – Steven Evers Jan 23 '09 at 22:31
  • @SnOrfus: I bet that there are bits of self-contained, general-purpose, reusable bits of functionality in those big classes, that would make COMPLETE SENSE as a new class. You wouldn't be confused when looking at a reference to one, because the name and its functionality would be obvious. – Jay Bazuzi Jan 24 '09 at 18:28
  • 1
    I think this is baiting. The implication is that a class should have a limit to the number of attributes it can have because their declaration eats into the space for method bodies. This sounds like a language troll as in, any language that can't fit a class onto one screen isn't fit to use. Try coding something complex like the contact details for a person which includes an international address including phone numbers, email, fax, etc. – Kelly S. French Jul 16 '09 at 15:37
  • r u talking abt classes in c++ where function body is declared outside the class? then may be u r right... – Amarghosh Sep 10 '09 at 08:18
  • @Amarghosh No, that's not what I'm talking about. It's not possible to do this in C++ because the language is too complex and unwieldy. Also, I wish you would write English. – Jay Bazuzi Sep 10 '09 at 20:58
  • 1
    Not if you're programming for a mobile phone. – Daniel Daranas Dec 21 '09 at 17:28
15

The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Any time we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test, we have provided waaaay more value.

Todd Friedlich
  • 662
  • 1
  • 5
  • 13
14

A Developer should never test their own software

Development and testing are two diametrically opposed disciplines. Development is all about construction, and testing is all about demolition. Effective testing requires a specific mindset and approach where you are trying to uncover developer mistakes, find holes in their assumptions, and flaws in their logic. Most people, myself included, are simply unable to place themselves and their own code under such scrutiny and still remain objective.

Bruce McLeod
  • 1,362
  • 15
  • 21
  • 2
    Do you include unit testing in that? Do you not see any value in unit testing? If so, I don't agree. I would agree that a developer shouldn't be the *only* tester of their software (where possible, of course). – Jon Skeet May 29 '09 at 06:12
  • 6
    Jon, I am talking from the point of view that yes they SHOULD do unit testing but no they should NOT be the only tester of their code. As you rightly point out, if they are the only one then they don't have much choice. This question did ask for your most controversial opinion so I think that mine is right up there. The other key point is that the "we don't need no stinking testers" attitude, 'cause the devs or anyone can just do it, is completely wrong as well – Bruce McLeod May 29 '09 at 13:44
  • I suggest you reword the rule to "should never be RESPONSIBLE for testing their own software", as your current wording could imply you were not allowed to test your programs at all. – Thorbjørn Ravn Andersen Oct 23 '09 at 18:17
13

Object Oriented Programming is overused

Sometimes the best answer is the simple answer.

  • For most competent worldly-wise OO devs, classes are only broken out from a root class once it becomes apparent that complexity is becoming hard to manage. Oddly (or not so oddly), it is often at that very point that it becomes apparent just _what_ needs to be broken out. And until you do break out from a root class, you _are_ programming procedurally (at least within the context of that class). Premature proliferation of classes during the development process is something that OO greenhorns do. – Engineer Sep 17 '10 at 15:38
13

Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical Issues in Database Management, cited in point 3 at the page that you link) as one of the criteria for choosing a key that of stability (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint in comments.

You don't know what he wrote and you have not bothered to check, otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think, use your brain instead of a dogmatic/cookbook/words-of-guru approach".
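The stability criterion, and the "why not both?" compromise raised in the comments, can be sketched with Python's built-in sqlite3 (the `person` table and its columns are hypothetical, purely for illustration): a surrogate `id` acts as the immutable primary key, while the natural candidate key (SSN) is enforced with a UNIQUE constraint but may be absent (children) or corrected later without touching the primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        id   INTEGER PRIMARY KEY,  -- surrogate key: stable, never changes
        ssn  TEXT UNIQUE,          -- natural candidate key; may be NULL (e.g. children)
        name TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO person (ssn, name) VALUES (?, ?)", (None, "Child A"))
conn.execute("INSERT INTO person (ssn, name) VALUES (?, ?)", ("123-45-6789", "Adult B"))

# The SSN turns out to be wrong: correct it without touching person.id,
# so any foreign keys referencing person.id remain valid.
conn.execute("UPDATE person SET ssn = ? WHERE name = ?", ("987-65-4321", "Adult B"))

rows = conn.execute("SELECT id, ssn FROM person ORDER BY id").fetchall()
print(rows)  # [(1, None), (2, '987-65-4321')]
```

The UNIQUE constraint still rejects duplicate natural keys, so nothing is lost by demoting the natural key from primary-key duty.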

MaD70
  • 4,078
  • 1
  • 25
  • 20
Otávio Décio
  • 73,752
  • 17
  • 161
  • 228
  • Yes! His ideas about hierarchical data structures are academically elegant and totally useless. – Charles Bretana Jan 02 '09 at 14:28
  • Well, I like Celko but I agree with you re: surrogate primary keys! – Mark Brittingham Jan 02 '09 at 15:19
  • Agree in part, surrogate keys are definitely more convenient when accessing data, but I try to identify a natural key as well and usually set it up as a constraint. So why not both?! – tekiegreg Jan 03 '09 at 16:34
  • I have no problems with natural keys to be used for convenience, but primary keys should be immutable. I once had a system that used SSN's as PK's, and sometimes persons wouldn't have one (as children) and then they would. Try to change a PK, what a mess... – Otávio Décio Jan 03 '09 at 16:56
  • I can agree with the concept that once your autonumber keys get mismatched, there's no way to fix them. But the solution isn't "natural" keys; the solution is never to expose the keys to your users. – Ryan Lundy Jan 04 '09 at 03:21
  • I wish I could go back a few years on my current project and tell myself not to use a natural key. Now we're stuck with it and kludging around it. +1 – Marcus Downing Jan 09 '09 at 03:08
    @ocdecio: Fabian Pascal gives (in chapter 3 of his book, as cited in point 3 at the page that you link) as one of the criteria for choosing a key that of **stability** (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint. So you actually agree with him, even though you think otherwise. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, **think**, use your brain instead of a dogmatic/cookbook/words-of-guru approach". – MaD70 Oct 22 '09 at 04:50
  • One of the classic mistakes is to assume that just because a candidate natural key, such as SSN, is by definition unique, that you will receive unique values. People may lie or make mistakes and you then have a chance of collision when the "real person" comes along. – Andy Dent Dec 21 '09 at 03:46
13

New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.

sapientpants
  • 445
  • 2
  • 12
13

Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.

commondream
  • 504
  • 3
  • 5
13

I have a few... there's exceptions to everything so these are not hard and fast but they do apply in most cases

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow Web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of Web surfers are using IE or Firefox, and since both of those browsers are forgiving of non-standard, non-strict, invalid HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc.) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.

nerdabilly
  • 1,248
  • 4
  • 15
  • 34
  • I wish I could vote to make you God. Really, this is amazing stuff. – Jonathan C Dickinson Jan 15 '09 at 07:54
  • 1
    On the other hand the best OSS packages are huge force multipliers. These are the well-designed, well-maintained ones that have big communities of users and developers (and real published books). Some examples of these are Rhino (Javascript interpreter), Xerces (XML Parser), Restlet (REST Web Services), and jQuery (Javascript GUI development). Others really do suck, like Axis 1.x. – Jim Ferrans May 19 '09 at 02:34
  • Screen readers and other accessibility tools perform better if the HTML conforms to standards. As for OSS .. your reasoning is deeply flawed in applying your own negative experience to all OSS works. Sure modifying OSS projects can be difficult (impossible for many) but I've lost count of the OSS libraries I've used to save myself tons of work on various projects. If most OSS is useless it is only because there is so much of it. There is a lot of very useful OSS out there. – Kris May 31 '09 at 00:17
  • Everything WWW sucks anyway, so for the first point I cannot care less. +100 for the second. – MaD70 Nov 05 '09 at 23:41
  • 3
    long live the `sudo apt-get install` – hasen Nov 06 '09 at 06:22
13

Programming is in its infancy.

Even though programming languages and methodologies have been evolving very quickly for years now, we still have a long way to go. The signs are clear:

  1. Language Documentation is spread haphazardly across the internet (stackoverflow is helping here).

  2. Languages cannot evolve syntactically without breaking prior versions.

  3. Debugging is still often done with printf.

  4. Language libraries or other forms of large scale code reuse are still pretty rare.

Clearly all of these are improving, but it would be nice if we could all agree that this is the beginning and not the end. =)

Evan Moran
  • 3,825
  • 34
  • 20
  • 1
    I have upvoted it although I believe this is completely uncontroversial to anyone who knows a *minimum* about programming methodology and history. We've got a long road ahead, hence the many insulting jokes about programmers’ abilities compared to architects, airplane pilots etc. – Konrad Rudolph Apr 11 '09 at 20:18
  • 1
    Actual there are many who would say the opposite. Everything interesting to do with programming languages was done in 60s with Lisp. We are just waiting for people to figure this out - Witness the growing popularity of Python/Java closures, etc. So this _is_ controversial. – nosatalian May 31 '09 at 02:37
  • printf debugging is actually mentioned on a higher-rated comment in this thread as being a good idea – OJW Dec 06 '09 at 22:10
13

If you want to write good software then step away from your computer

Go and hang out with the end users and the people who want and need the software. Only from them will you understand what your software needs to accomplish and how it needs to do that.

  • Ask them what they love & hate about the existing processes.
  • Ask them about the future of their processes, where it is headed.
  • Hang out and see what they use now and figure out their usage patterns. You need to meet and match their usage expectations. See what else they use a lot, particularly if they like it and can use it efficiently. Match that.

The end user doesn't give a rat's how elegant your code is or what language it's in. If it works for them and they like using it, you win. If it doesn't make their lives easier and better - they hate it, you lose.

Walk a mile in their shoes - then go write your code.

CAD bloke
  • 8,578
  • 7
  • 65
  • 114
  • Great answer - I always try to do this... but sometimes you got to protect users from their own ideas. Because e.g. in business software (financial) I always encounter some users with the tendency to wish for "creative bookkeeping". Hehe. – capfu Jul 23 '09 at 00:58
  • This is why I love being a domain expert. For my whole career I've worked alongside people who use the type of software I write. – Jeanne Pindar Oct 17 '09 at 17:42
  • @Jeanne: Ditto - my major project is based on what I do for a living. I do a lot of talking to myself. – CAD bloke Oct 18 '09 at 10:40
12

Greater-than operators (>, >=) should be deprecated

I tried coding with a preference for less-than over greater-than for a while and it stuck! I don't want to go back, and indeed I feel that everyone should do it my way in this case.

Consider common mathematical 'range' notation: 0 <= i < 10

That's easy to approximate in code now and you get used to seeing the idiom where the variable is repeated in the middle joined by &&:

if (0 <= i && i < 10)
    return true;
else
    return false;

Once you get used to that pattern, you'll never look at silliness like

if ( ! (i < 0 || i >= 10))
    return true;

the same way again.

Long sequences of relations become a bit easier to work with because the operands tend towards nondecreasing order.

Furthermore, a preference for operator< is enshrined in the C++ standards. In some cases equivalence is defined in terms of it! (as !(a<b || b<a))
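It may be worth noting that Python bakes this convention into the language: comparisons chain, so the mathematical range notation above can be written verbatim. A small illustration (the `in_range` helper is just for demonstration):

```python
def in_range(i, lo=0, hi=10):
    # Chained comparison: evaluates as (lo <= i) and (i < hi),
    # matching the mathematical notation lo <= i < hi.
    return lo <= i < hi

print(in_range(5))    # True: 0 <= 5 < 10
print(in_range(10))   # False: the upper bound is exclusive
print(in_range(-1))   # False: below the lower bound
```

Note that the operands read left-to-right in nondecreasing order, which is exactly the readability argument being made here.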

Marsh Ray
  • 2,827
  • 21
  • 20
  • Ick, no. If I want code to throw an exception when a string is over a certain length (for example) I'd *far* rather use `if (text.Length > 30) { throw new ... }` than `if (!(text.Length <= 30)) { throw new ... }`. – Jon Skeet Jul 29 '09 at 16:11
  • 1
    `if (30 < text.Length) throw ....` is another option. Actually, I prefer `(!(text.Length <= 30))` because it nicely matches `assert(text.Length <= 30)`. Think about when multiple conditions get compounded. Keeping the error checking logic in that 'positive assertion' sense helps reduce logic bugs. I know it looks a little strange the first time. It's controversial and I don't push it on others. But try it with an open mind and you might grow to like it better. Or you might not. :-) – Marsh Ray Jul 29 '09 at 16:57
  • to be pedantic, `if(text.Length > 30)` is equivalent to `if(30 <= text.Length)` because the comparison goes from *exclusive* to *inclusive* – warren Oct 22 '09 at 04:25
  • s/is equivalent/is not equivalent/ is I think what you meant. In any case, I never said those two were or were not equivalent. – Marsh Ray Oct 22 '09 at 15:00
  • 1
    Why not just return your if-condition? – GManNickG Jan 11 '10 at 21:42
  • I would if that was really what was needed. Perhaps my example was a bit too trivial. Imagine something more interesting and useful in the if/else bodies. – Marsh Ray Jan 12 '10 at 03:51
  • It's language dependent, but in C++ `3 > getAirplane()` throws a compiler error, but `getAirplane() < 3` might not depending on which constructors are defined for your Airplane class. – thebretness Feb 26 '10 at 01:00
12

Software is like toilet paper. The less you spend on it, the bigger of a pain in the ass it is.

That is to say, outsourcing is rarely a good idea.

I've always figured this to be true, but I never really knew the extent of it until recently. I have been "maintaining" (read: "fixing") some off-shored code recently, and it is a huge mess. It is easily costing our company more than the difference had it been developed in-house.

People outside your business will inherently know less about your business model, and therefore will not do as good a job programming any system that works within your business. Also, they know they won't have to support it, so there's no incentive to do anything other than half-ass it.

iandisme
  • 6,346
  • 6
  • 44
  • 63
  • @iandisme - Probably you didn't spare some time to tell those guys what your business is? Another point: why did you sign such a contract where they just develop some sh** and flee? You should have done a long-term contract with development, maintenance and support clubbed together. As a customer, controlling quality was in your hands. – Pradeep Dec 18 '10 at 08:51
  • @ Seventh Element - Don't blame India because somebody else didn't manage his offshoring project and quality properly. – Pradeep Dec 18 '10 at 08:55
  • @Pradeep - I didn't have anything to do with setting up the contract. Either way, your point adds to my original statement: Doing it right the first time would have been more expensive up-front, but worth it. – iandisme Dec 20 '10 at 20:25
12

Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.

Keith Williams
  • 2,257
  • 3
  • 19
  • 29
  • 1
    Programming has a lot in common with cleaning your room. The same principles of organization apply. – Alex Baranosky Jan 04 '09 at 20:11
  • Maybe... rather than dealing with your accounts as bits of paper you abstract them into folders, and encapsulate them in a filing cabinet or box. If you find a way to unit test laundry, let me know! – Keith Williams Jan 06 '09 at 14:12
  • Generally having a plan before building a web site/desktop app/house/nuclear sub is always a good idea! Mapping things out, either with sketches on a pad of paper, a wireframe, Visio, work flow, mind map, whatever. And the training of users... I see this missed by even the most brilliant programmers. User acceptance in the long run determines your app's success. If they don't understand it, no matter what it does or how well it is done, your app will fail. – infocyde Jun 25 '09 at 21:46
12

Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.

John MacIntyre
  • 12,910
  • 13
  • 67
  • 106
  • That is not controversial; that is fact! – icelava Jan 07 '09 at 13:31
  • 2
    Most non-consulting programmers are stuck in a rut and live in a company bubble maintaining dinosaur code while never being exposed to anything that challenges their assumptions; except for the occasional outside consultant. How's that for controversial? ;-) – Captain Sensible Jan 26 '09 at 10:37
  • 2
    @Diego; true and consultants have an opportunity to become amazing programmers with everything they are exposed to. But in my experience, I've seen too much crap written by hacks who just picked up enough knowledge to make it work, knowing they'd never have to maintain it, and they just don't care. – John MacIntyre Jan 26 '09 at 18:11
  • 1
    I consulted for many years. There were cases where the company programmers were good but didn't understand how I was doing things, and so were inclined to criticize. Nevertheless, I'm inclined to agree with you - there are half-hearted programmers in contracting positions. – Mike Dunlavey Mar 08 '10 at 14:10
12

Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.

Chris
  • 6,702
  • 8
  • 44
  • 60
  • 2
    Better non-development staff with management skills than developer staff without management skills. – tuinstoel Jan 05 '09 at 15:44
  • So you reckon every company that employs any developers should have a developer as CEO? – finnw Jan 17 '09 at 18:10
  • Yes, if you're going to manage people with a special skill set it would be helpful if you also had a background in that skill set. Would you hire a CEO with no management experience? – Chris Jan 22 '09 at 17:24
  • 3
    C-level comparisons are weak. More realistic would be "Would you hire an untrained mechanic to manage mechanics?" Well... yes. I'm not saying that non-developers make better managers of developers, or that management & development abilities are mutually exclusive, but rather *the ability to manage an employee is significantly more important than the ability to do the employee's work.* – Stu Thompson Apr 28 '09 at 20:25
12

Most Programmers are Useless at Programming

(You did say 'controversial')

I was sat in my office at home pondering some programming problem and I ended up looking at my copy of 'Complete Spectrum ROM Disassembly' on my bookshelf and thinking:

"How many programmers today could write the code used in the Spectrum's ROM?"

The Spectrum, for those unfamiliar with it, had a Basic programming language that could do simple 2D graphics (lines, curves), file I/O of a sort, and floating point calculations including transcendental functions, all in 16K of Z80 code (a <5 MHz 8-bit processor that had no FPU or integer multiply). Most graduates today would have trouble writing a 'Hello World' program that was that small.

I think the problem is that the absolute number of programmers that could do that has hardly changed but as a percentage it is quickly approaching zero. Which means that the quality of code being written is decreasing as more sub-par programmers enter the field.

Where I'm currently working, there are seven programmers including myself. Of these, I'm the only one who keeps up to date by reading blogs, books, this site, etc. and doing programming 'for fun' at home (my wife is constantly amazed by this). There's one other programmer who is keen to write well-structured code (interestingly, he did a lot of work using Delphi) and to refactor poor code. The rest are, well, not great. Thinking about it, you could describe them as 'brute force' programmers - they will force inappropriate solutions until they work after a fashion (e.g. using C# arrays with repeated Array.Resize to dynamically add items instead of using a List).

Now, I don't know if the place I'm currently at is typical, although from my previous positions I would say it is. With the benefit of hindsight, I can see common patterns that certainly didn't help any of the projects (lack of peer review of code for one).

So, 5 out of 7 programmers are rubbish.

Skizz

Skizz
  • 69,698
  • 10
  • 71
  • 108
  • 1
    There are fewer programmers with the skillset to tackle a problem that no longer matters. Now we have higher levels of abstraction that allow the big picture to come together in more loosely coupled, highly OO ways. It's not that I'm not smart enough to write it, it's that I can write something better – Steve Mar 13 '09 at 04:48
  • 1
    BIOSes and hardware drivers probably feature a lot of assembler. Many embedded systems are assembler-only (or primitive C compilers if you're lucky). Even with high-level OO, how many coders could write the equivalent of a Spectrum Basic interpreter? – Skizz Mar 13 '09 at 09:35
12

A programming task is only fun while it's impossible, that is, up until the point where you've convinced yourself you'll be able to solve it successfully.

This, I suppose, is why so many of my projects end up halfway finished in a folder called "to_be_continued".

Mia Clarke
  • 8,134
  • 3
  • 49
  • 62
11

Copy/Paste IS the root of all evil.

OscarRyz
  • 196,001
  • 113
  • 385
  • 569
11

Junior programmers should be assigned to doing object/ module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.

kloucks
  • 1,549
  • 9
  • 11
  • 2
    I will tell you from having dealt with entry-level and junior developers that they learn precisely nothing by performing "maintenance and bug fixes"; they never develop any skills. Letting juniors build an app from scratch teaches them an incredible amount in a short period of time. – Juliet Jan 02 '09 at 18:13
  • Quite so. Aptitude has very little to do with experience, which often just entrenches bad habits. – ChrisA Jan 02 '09 at 22:21
  • 1
    I would say the exact opposite. Let them write implementations of existing interfaces, that must pass existing unit tests. They will pick up some design skills just by working with the senior developer's designs for a few months. – finnw Jan 17 '09 at 17:38
  • @Juliet, absolute rubbish. When I was an entry-level developer I did maintenance and bug-fix work and learnt directly why consistency and separation of concerns are so essential in software. Maintaining code with "issues" is THE best way to improve your own designs. – Ash Aug 06 '09 at 14:55
  • Nothing teaches you the value of doing things the right way like the pain of doing things the wrong way and then having to live with the results. – Jeremy Friesner Nov 05 '09 at 22:32
11

SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL that is written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.

Kluge
  • 3,567
  • 3
  • 24
  • 21
  • +1 for the abomination. Anything that's harder to read than write has got to be wrong. – ChrisA Jan 02 '09 at 19:40
  • This is a statement that two things that had been around for a long time, and have been heavily used, would be much better done if they'd known then what we know now. That is much closer to being a tautology than a controversy. – David Thornley Oct 13 '09 at 21:33
  • 1
    html layout is a lot easier than assembling widgets in C++ – hasen Nov 06 '09 at 06:24
11

Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.

This one is definitely controversial. Every time I bring it up, people tear me apart.

Bill the Lizard
  • 398,270
  • 210
  • 566
  • 880
  • I worked on a project that I consider to be an exception to this rule, but it did mean constantly hitting against all the reasons I mostly agree with you. They're not a good solution, 99% of the time. – Marcus Downing Jan 09 '09 at 03:17
  • SQL is just another language? Tough to reason with that mindset. – Lurker Indeed Jan 12 '09 at 18:00
  • SPROCs eliminate SQL injection attacks. In MSSQL they are pre-compiled (and hence faster). @Christopher, can you give me the address of any websites that you built? I want to make some money :P. – Jonathan C Dickinson Jan 29 '09 at 09:51
11

A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.
Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
cookre
  • 705
  • 1
  • 5
  • 7
  • "Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax." - you just broke many hearts; some people learn a new language every year. – IAdapter Jan 03 '09 at 04:04
  • And it gets easier and easier, doesn't it? – cookre Jan 03 '09 at 06:13
  • 3
    "you finally realize that all programming languages are the same" -- you hear that a lot from people who have only programmed in C#, C++, flavors of VB, Java, and maybe Python. Then you finally learn Haskell, Ocaml, Erlang, Prolog, and Lisp, and you feel like an idiot for having missed so much. – Juliet Jan 04 '09 at 03:30
  • 1
    It's always nice to have lots of toys, but we know they all serve the same purpose - to entertain us in some way. Likewise with every programming language I've seen over the past forty some odd years. As mentioned above, it's all about algorithm - not syntax. – cookre Jan 04 '09 at 21:17
  • @cookre: try to use algorithms designed to be expressed in an imperative programming language (PL) with a pure lazy functional PL like Haskell or in a (constraint) logic PL like Prolog (and derivatives) or in a PL designed for fault tolerance and massive concurrency, like Erlang and you will discover that semantics differences are all that really counts. – MaD70 Nov 05 '09 at 23:34
11

The worst thing about recursion is recursion.

Ferruccio
  • 98,941
  • 38
  • 226
  • 299
Mike
  • 41
  • 1
  • 6
11

This one is mostly web related but...

Use Tables for your web page layouts

If I were developing a gigantic site that needed to squeeze out performance I might think about it, but nothing gives me an easier way to get a consistent look in the browser than tables. The majority of applications that I develop are for around 100-1000 users, with perhaps 100 at a time max. The extra bloat of the tables isn't killing my server by any means.

rball
  • 6,925
  • 7
  • 49
  • 77
  • 2
    It's not so much about code bloat but more about letting the page degrade gracefully. – Ólafur Waage Jan 07 '09 at 11:13
  • And you think div's and css does this? I don't. – rball Jan 07 '09 at 20:53
  • 1
    I always try to make a layout that avoids tables, and I always fail. Div-based layouts just don't have the flexibility of a table. +1 – Marcus Downing Jan 09 '09 at 04:15
  • 2
    Marcus: Are you kidding? Use tables for what they were meant for - tabular data. – Tom Apr 04 '09 at 12:47
  • I'm starting to believe in using CSS frameworks like blueprint and 960. These seem to be giving me the consistency along with it being a lot easier to make the layout. Seems to be meeting my needs so I'm pretty jazzed. – rball Jan 11 '10 at 18:48
11

coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: The number of lines of code written per day is not a linear measure of a programmer's productivity; a programmer who writes 100 lines in a day is quite likely a better programmer than one who writes 20, but one who writes 5,000 is almost certainly a bad programmer.

Miserable Variable
  • 28,432
  • 15
  • 72
  • 133
  • 1
    Very much agree with this. Did you see that recent thread where the consensus seemed to be that if you can't touch type at 80wpm you aren't a real programmer? Complete nonsense, although people seem to like that sort of testosterone-driven "productivity". – ChrisA Jan 07 '09 at 17:53
  • @ChrisA: I actually read that thread and came back to write this response. While coding, I like to take time dotting my i's and crossing my t's, so to say. – Miserable Variable Jan 08 '09 at 06:05
  • The typing issue isn't that typing faster allows you to type more code. The issue is that if typing is really a second nature, all of your attention can be on what you are coding rather than on typing. Conversely if you are constantly looking at the keyboard and correcting typos, you are wasting a lot of your attention on typing. Your train of thought is interrupted all the time by the mechanical action of typing. Doesn't mean that you are a bad programmer, but you are certainly not as good as you could be if 30% of your attention is stuck on the keyboard. Programmer, master your tools. – Sylverdrag Jun 08 '10 at 05:46
11

90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: programming abilities vary AT LEAST as much as sports abilities. How many of us could jump onto a professional team and actually improve its chances?

Bill K
  • 62,186
  • 18
  • 105
  • 157
11

Estimates are for me, not for you

Estimates are a useful tool for me, as development line manager, to plan what my team is working on.

They are not a promise of a feature's delivery on a specific date, and they are not a stick for driving the team to work harder.

IMHO if you force developers to commit to estimates you get the safest possible figure.

For instance -

I think a feature will probably take me around 5 days. There's a small chance of an issue that would make it take 30 days.

If the estimates are just for planning then we'll all work to 5 days, and account for the small chance of an issue should it arise.

However - if meeting that estimate is required as a promise of delivery what estimate do you think gets given?

If a developer's bonus or job security depends on meeting an estimate do you think they give their most accurate guess or the one they're most certain they will meet?

This opinion of mine is controversial with other management, and has been interpreted as me trying to worm my way out of having proper targets, or me trying to cover up poor performance. It's a tough sell every time, but one that I've gotten used to making.

Keith
  • 150,284
  • 78
  • 298
  • 434
  • +1 "do you want the estimate for average case or worst case?" "average case" "then don't treat that estimate as a hard limit" duh! – OJW Dec 06 '09 at 22:07
11

I don't know if it's really controversial, but how about this: method and function names are the best kind of commentary your code can have; if you find yourself writing a comment, turn the piece of code you're commenting into a function/method.

Doing this has the pleasant side-effect of forcing you to decompose your program well, avoids having comments that can quickly become out of sync with reality, gives you something you can grep the codebase for, and leaves your code with a fresh lemon odour.
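A minimal Java sketch of the refactoring described (the leap-year example and the helper names are illustrative, not from the answer):

```java
public class CommentToMethod {
    // Before: a comment explains the intent of an opaque expression.
    static boolean isLeapYearCommented(int year) {
        // leap years are divisible by 4, except centuries not divisible by 400
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // After: the comment has been turned into named methods; the call
    // site now documents itself and can be grepped for.
    static boolean isLeapYear(int year) {
        return isDivisibleBy(year, 4) && !isCentury(year) || isDivisibleBy(year, 400);
    }

    static boolean isCentury(int year)            { return year % 100 == 0; }
    static boolean isDivisibleBy(int n, int d)    { return n % d == 0; }
}
```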

Keith Gaughan
  • 21,367
  • 3
  • 32
  • 30
  • This can be taken too far. Often there is a subtle business case for a particular method or implimentation strategy that you cannot convey without several lines of comments. – Tom Leys May 28 '09 at 21:49
  • Quite true, but it's a rule of thumb rather than a hard rule. Indicating subtleties is, after all, what comments are best used for. – Keith Gaughan May 29 '09 at 10:08
10

Recursion is fun.

Yes, I know it can be an ineffectual use of stack space, and all that jazz. But sometimes a recursive algorithm is just so nice and clean compared to its iterative counterpart. I always get a bit gleeful when I can sneak a recursive function in somewhere.
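A Java illustration of the trade-off (my example, not the answer's): summing a binary tree recursively versus with an explicit stack. The recursive version mirrors the shape of the data; the iterative one has to simulate the call stack by hand.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TreeSum {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // Recursive version: three lines that read like the definition of a tree.
    static int sumRecursive(Node n) {
        if (n == null) return 0;
        return n.value + sumRecursive(n.left) + sumRecursive(n.right);
    }

    // Iterative counterpart: same result, but the explicit stack obscures it.
    static int sumIterative(Node root) {
        int total = 0;
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            total += n.value;
            if (n.left != null) stack.push(n.left);
            if (n.right != null) stack.push(n.right);
        }
        return total;
    }
}
```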

Community
  • 1
  • 1
Stu Thompson
  • 38,370
  • 19
  • 110
  • 156
  • "Ineffectual use of stack space" -- only in crappy languages. See http://en.wikipedia.org/wiki/Tail_recursion – Juliet Oct 14 '09 at 16:51
  • 1
    That's what's great about being a programmer - cheap thrills :-) At least Electrical Engineers get to sniff rosin smoke. – Mike Dunlavey Nov 03 '09 at 19:12
  • @Juliet: Only crap languages? So all languages that don't have tail recursion are crap? Spare me. – Stu Thompson Nov 06 '09 at 06:29
10

Making software configurable is a bad idea.

Configurable software allows the end-user (or admin, etc.) to choose among too many options, which may not all have been tested together (or rather, if there are more than a very small number, I can guarantee they will not have been tested).

So I think software which has its configuration hard-coded (but not necessarily shunning constants etc) to JUST WORK is a good idea. Run with sensible defaults, and DO NOT ALLOW THEM TO BE CHANGED.

A good example of this is the number of configuration options on Google Chrome - however, this is probably still too many :)

MarkR
  • 62,604
  • 14
  • 116
  • 151
10

Microsoft should stop supporting anything dealing with Visual Basic.

Omar
  • 39,496
  • 45
  • 145
  • 213
10

Intranet Frameworks like SharePoint make me think the whole corporate world is one giant ostrich with its head in the sand

I'm not only talking about MOSS here, I've worked with some other CORPORATE INTRANET products, and absolutely not one of them are great, but SharePoint (MOSS) is by far the worst.

  • Most of these systems don't easily bridge the gap between Intranet and Internet. So as a remote worker you're forced to VPN in. External customers just don't have the luxury of getting hold of your internal information first hand. Sure this can be fixed at a price $$$.
  • The search capabilities are always pathetic. A lot of the time other departments simply don't know what information is out there.
  • Information fragments, people start boycotting workflows or revert to email
  • SharePoint development is the most painful form of development on the planet. Nothing sucks like SharePoint. I've seen a few developers contemplating quitting IT after working for over a year with MOSS.
  • No matter how the developers hate MOSS, no matter how long the most basic of projects take to roll out, no matter how novice the results look, and no matter how unsearchable and fragmented the content is:

EVERYONE STILL CONTINUES TO USE AND PURCHASE SHAREPOINT, AND MANAGERS STILL TRY VERY HARD TO PRETEND IT'S NOT SATAN'S SPAWN.

Microformats

Using CSS classes originally designed for visual layout to carry both visual and contextual data is a hack, loaded with ambiguity. I'm not saying the functionality should not exist, but fix the damn base language. HTML wasn't hacked to produce XML - instead the XML language emerged. Now we have these eager script kiddies hacking HTML and CSS to do something they weren't designed to do. That's still fine, but I wish they would keep these things to themselves and not make a standard out of it. To sum up - butchery!

JL.
  • 78,954
  • 126
  • 311
  • 459
  • Your programming opinion doesn't look very controversial to me. In fact I can't even see what your programming opinion is. – Windows programmer Dec 15 '09 at 00:28
  • I agree with your attacks on sharepoint. In my dealings with the beast, there is a lot of confusion about what it can and should do. I guess that comes from the office world were people abuse, word, excel, and access to do ungodly things that should be handled by programmers creating real applications. The running joke around sharpoint's abilities at my work is that it can "wash your car", or "mow your lawn" or that it has infinite super powers. – awright18 Mar 04 '10 at 01:25
  • I agree that this is not controversial. As a MOSS dev I can only conclude that SP was written by Microsoft's best team of monkeys with down syndrome. – Jacobs Data Solutions Apr 01 '10 at 20:10
  • What is controversial is that MOSS is considered by most business users to be a perfect all round intranet solution, but honestly its a pile of dog crap under the hood. – JL. Apr 04 '10 at 14:54
10

Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.

Shawn
  • 19,465
  • 20
  • 98
  • 152
  • I concur: you can't version stored procedures, and having 200+ stored procedures in a large project becomes a maintenance nightmare. Embedded SQL is ok for small projects, but I'd rather use an ORM to write my queries for me. – Juliet Jan 02 '09 at 17:05
  • Princess: I must disagree with your statement that you can't version stored procedures. I version them myself by keeping the SQL for them in source code control. If you make a change to the database, re-export the script for it and check it into the repository. – Mike Hofer Jan 02 '09 at 18:01
  • I agree about versioning stored procedures. If you are writing SP, you need to take it upon yourself to version them in source control. – casperOne Jan 02 '09 at 19:24
  • Out of *your* database? There speaks a 1970s DBA – ChrisA Jan 02 '09 at 22:23
  • We can version SPs. The build process moves them from source control into the database. – Joshua Jan 02 '09 at 22:44
  • In DB2/400 stored procedures are an interface to native code on the system... In other words, hard to move over to the calling system. – Thorbjørn Ravn Andersen Oct 23 '09 at 19:44
10

The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.

HTTP 410
  • 17,300
  • 12
  • 76
  • 127
10

How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.
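To illustrate in Java (a sketch of mine; the answer is not language-specific): the GC will eventually reclaim the memory of an unreferenced reader, but nothing guarantees the underlying handle (a file, socket, database connection...) is released promptly. Deterministic disposal via try-with-resources does not depend on the collector at all.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResourceDemo {
    // Leaky style: the GC may reclaim 'reader' eventually, but until a
    // finalizer runs (if ever) the underlying resource stays open.
    static String firstLineLeaky(String text) throws IOException {
        BufferedReader reader = new BufferedReader(new StringReader(text));
        return reader.readLine(); // reader is never closed
    }

    // Deterministic style: try-with-resources closes the reader at the
    // end of the block, whether or not the GC ever runs.
    static String firstLineSafe(String text) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        }
    }
}
```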

Nemanja Trifunovic
  • 24,346
  • 3
  • 50
  • 88
  • I've seen 50mb leaked because some library programmer hooked an event and didn't make absolutely sure to unhook it. – Joshua Jan 02 '09 at 22:49
  • 8gb RAM is nothing to a repetitive leak on a server under high load. – Kendall Helmstetter Gelner Jan 05 '09 at 05:26
  • 1
    I guess it refers to the RAII idiom. In that case I must adhere to the proposal. RAII is a solution for all resources, GC is a partial solution for memory resources only. – David Rodríguez - dribeas Jan 05 '09 at 14:04
  • 1
    +1 to that. Before GC, programmers took care of leaks before deployment. These days, applications are deployed and then when a 100 users are using the application, we discover that we've run out of database connections. – Agnel Kurian Jan 07 '09 at 10:58
  • Anyone who expects garbage collection to handle all resource management has desperately misunderstood garbage collection. GC is only for managing *memory* – benjismith Jan 21 '09 at 22:30
  • 1
    I'd give a +1 if you had said: "GC because it's not available for all resoures; only memory. So you can leak DB connections." GC has solved 100 issues and introduced 20 new ones, so it's still an advantage. – Aaron Digulla Feb 27 '09 at 15:56
  • Which "100 issues"? It has solved only one - memory management, and IMHO even that poorly. – Nemanja Trifunovic Feb 27 '09 at 17:11
10

Explicit self in Python's method declarations is poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with an apparent off-by-one error in the reported number of arguments.

Community
  • 1
  • 1
Kornel
  • 97,764
  • 37
  • 219
  • 309
  • I've certainly forgotten to type "self" many times myself, but what would you have done instead? You can't just imply self in all method declarations because of classmethods and staticmethods. – Kiv Jan 03 '09 at 02:18
  • I often mistype it as `slef` and I get errors because `self` is undeclared – hasen Jan 04 '09 at 03:30
  • I think that `def` in `class` should imply `self`, and other types of methods could use different/additional keyword, like `defstatic`/`static def`. – Kornel Jan 05 '09 at 15:41
  • 1
    It's actually due to an implementation problem early on in the language design -- apparently Guido and team could not figure out how to bind the implicit self parameter to its enclosing environment, short of just passing it explicitly. Hope I got that right, not a compiler/translator guru. – cygil Mar 16 '09 at 03:45
  • Please read around and reconsider your opinion: http://effbot.org/pyfaq/why-must-self-be-used-explicitly-in-method-definitions-and-calls.htm and http://www.artima.com/weblogs/viewpost.jsp?thread=214325 are two good places to start. – WhatIsHeDoing Apr 19 '09 at 00:50
  • @Daz: links you've given talk about either body of a function (but I'm talking about declaration of arguments) or semantics of functions being 1st class (which is completely orthogonal issue to the syntax). – Kornel May 22 '09 at 00:10
10

My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.
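A sketch of the complaint in Java itself: the classic reader-wrapping idiom for getting one line out of a file, next to the one-liner that later APIs (java.nio.file.Files, Java 7+; Path.of, Java 11+) eventually provided.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReadLines {
    // Classic idiom: wrap a FileReader in a BufferedReader, remember to
    // close it in a finally block - many lines for one line of input.
    static String firstLineClassic(String path) throws IOException {
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(path));
            return reader.readLine();
        } finally {
            if (reader != null) reader.close();
        }
    }

    // Later APIs collapsed the boilerplate to a single call.
    static List<String> allLines(String path) throws IOException {
        return Files.readAllLines(Path.of(path), StandardCharsets.UTF_8);
    }
}
```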

Jeremy Wall
  • 23,907
  • 5
  • 55
  • 73
10

The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.

Agnel Kurian
  • 57,975
  • 43
  • 146
  • 217
10

Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly: exhaustive libraries so complicated to use that I tear my hair out, and simple, easy-to-use libraries that don't quite do what I need them to do.

Starkii
  • 1,149
  • 2
  • 9
  • 17
10

Most developers don't have a clue

Yup .. there you go. I've said it. I find that from all the developers that I personally know .. just a handful are actually good. Just a handful understand that code should be tested ... that the Object Oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway ... I'm glad initiatives like stackoverflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc ...

But nope ... the majority of developers just learn a language that they are required by their job and stick with it until they themselves become old and grumpy developers that have no clue what's going on. All they'll get is a big paycheck since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!

SpoBo
  • 2,100
  • 2
  • 20
  • 28
10

Coding is an Art

Some people think coding is an art, and others think coding is a science.

The "science" faction argues that, since the target is to obtain the optimal code for a situation, coding is the science of studying how to obtain this optimum.

The "art" faction argues there are many ways to obtain the optimal code for a situation, the process is full of subjectivity, and that to choose wisely based on your own skills and experience is an art.

Jonathan
  • 11,809
  • 5
  • 57
  • 91
  • 1
    Electronics designers will always tell you that designing electronic circuits is 'an imprecise science'. I think the opposite is true of constructing computer programs - it is an exact art. I think this partly because I don't know where my programming ability comes from. I sit at the keyboard and "it just happens". I'm not following any rules or processes when I write code, therefore it is an art. But whatever I write has to be exactly right, or it will not work. Hence, it is an exact art. – Tim Long May 17 '09 at 04:46
9

MIcrosoft is not as bad as many say they are.

Aftershock
  • 5,205
  • 4
  • 51
  • 64
9

Programming is neither art nor science. It is an engineering discipline.

It's not art: programming requires creativity for sure. That doesn't make it art. Code is designed and written to work properly, not to be emotionally moving. Except for whitespace, changing code for aesthetic reasons breaks your code. While code can be beautiful, art is not the primary purpose.

It's not science: science and technology are inseparable, but programming is in the technology category. Programming is not systematic study and observation; it is design and implementation.

It's an engineering discipline: programmers design and build things. Good programmers design for function. They understand the trade-offs of different implementation options and choose the one that suits the problem they are solving.


I'm sure there are those out there who would love to parse words, stretching the definitions of art and science to include programming or constraining engineering to mechanical machines or hardware only. Check the dictionary. Also "The Art of Computer Programming" is a different usage of art that means a skill or craft, as in "the art of conversation." The product of programming is not art.

Paul
  • 339
  • 2
  • 4
9

That most language proponents make a lot of noise.

Varun Mahajan
  • 7,037
  • 15
  • 43
  • 65
9

My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements and discourage others from using them because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are always compiled automagically to hash jump tables so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.
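The compiler claim above is about C#; as a hedged illustration (my example), here is the same idea in Java, whose compiler likewise emits tableswitch/lookupswitch bytecode for an int switch rather than a chain of comparisons. Organized and grouped, a long flat switch like this stays perfectly readable:

```java
public class SwitchDispatch {
    // A sparse int switch: the compiler emits a single dispatch
    // instruction, not N sequential if/else comparisons.
    static String statusText(int code) {
        switch (code) {
            case 200: return "OK";
            case 201: return "Created";
            case 301: return "Moved Permanently";
            case 400: return "Bad Request";
            case 403: return "Forbidden";
            case 404: return "Not Found";
            case 500: return "Internal Server Error";
            default:  return "Unknown";
        }
    }
}
```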

Tamas Czinege
  • 118,853
  • 40
  • 150
  • 176
  • 2
    Define long. I've seen a 13,000 line switch statement (admittedly it was C++ but still...) – Cameron MacFarland Jan 02 '09 at 15:14
  • Well, (in c#) if the switch statement is generated (as opposed to manually edited), I see nothing wrong with a 13K line switch statement to be honest. It's going to end up as a hashtable anyway. – Tamas Czinege Jan 02 '09 at 15:19
  • 2
    Of course, if it has 13K lines because there is loads of code in each "case" clause, that's totally different. It should be refactored then. – Tamas Czinege Jan 02 '09 at 15:21
  • Actually, I do. Was it either that or if, and replacing all if's with switch's would have been a bit too verbose, even for python? – JB. Jan 02 '09 at 16:42
  • 1
    What I want a compiler to do is generate good assembly code for me, and switch is how I tell it I want a jump table. That said, it's easy to think you're doing things for "performance" reasons when in fact you'll never notice the difference. – Mike Dunlavey Jan 02 '09 at 16:49
  • @Mike: if you you have a switch statement with thousands of cases, you _will_ notice the performance difference between a jump table and a series of if-else statements. – Tamas Czinege Jan 02 '09 at 17:13
  • 1
    How can you have thousands of cases? I can't imagine it, do you have an example? – tuinstoel Jan 04 '09 at 21:16
  • 2
    @tuinstoel: It's not that hard to imagine it if you try. Before the rise of floating point units, it was a common practice to keep trigonometric functions in lookup tables. I think that keeping the results of complex math functions in premade lookup tables still makes sense today. – Tamas Czinege Jan 05 '09 at 13:41
  • Great answer. Agree completely. – Jonathan C Dickinson Jan 29 '09 at 09:26
9

Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite, Amazon SimpleDB, or Google App Engine's data storage.)

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.
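As a small illustration of Pike's point (my example, in Java): once you pick the right structure - a map from word to count - the word-frequency "algorithm" is self-evident, with no clever searching or sorting in sight.

```java
import java.util.HashMap;
import java.util.Map;

public class WordCount {
    // The data structure carries the design; the loop is trivial.
    static Map<String, Integer> frequencies(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : text.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1, Integer::sum); // insert or increment
            }
        }
        return counts;
    }
}
```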

Christopher Mahan
  • 7,621
  • 9
  • 53
  • 66
  • 1
    It depends on the rawness of your original data. If the data is accumuleted by data entry in a UI it is true. But if you do something like Text Mining you need to process your data first, algos become more important. – tuinstoel Jan 02 '09 at 15:47
  • tuinstoel: ok, but text mining is eminently parallelisable, so the algo should be ultra simple and then be run by a few hundreds or thousand processes. Image processing needs solid algos though. – Christopher Mahan Jan 02 '09 at 16:03
  • I would agree if you also mean that data should be kept as minimal and normalized as reasonable. I see far too much data structure whose ostensible purpose is "better performance" that causes the opposite. – Mike Dunlavey Jan 02 '09 at 16:19
  • 1
    +1 If I was speaking to an assembly of CS Freshmen my first advice would be to "Know Thou Data_Structures" Amen Brother. – WolfmanDragon May 23 '09 at 18:22
  • 1
    Brooks, in "The Mythical Man-Month", had a comment that he'd be confused if you hid your tables and showed him your flow charts, but if you showed him your tables he wouldn't need to see your flow charts. This should give you an idea of how old this idea is. – David Thornley Oct 13 '09 at 21:39
9

Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended to teach us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I mean is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't come at me with that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (when it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.

isekaijin
  • 19,076
  • 18
  • 85
  • 153
  • 1
    C may be fast to execute, but dynamic, interpreted languages are faster to develop in. I think you're being a little close-minded here. – Kiv Jan 03 '09 at 02:23
  • C is NOT the right tool for everything! it's not the tool for web development! there's _that_ at least! – hasen Jan 03 '09 at 03:12
  • What are dynamic, interpreted languages good for, besides Web development? Note, I happen to hate Web apps. – isekaijin Jan 03 '09 at 03:39
  • 1
    Sure, dynamic languages should be burned. From now on I shall always compile my shell scripts to machine code. – Rene Saarsoo Jan 03 '09 at 20:11
  • Dynamic languages are good for different jobs. They tend to be ideal for quick and dirty throw away scripts for admin stuff, as well they tend to be better geared for applications that require a lot of string manipulation and need to be developed quickly. – Rontologist Jan 09 '09 at 19:12
  • 3
    That's 3 opinions in one answer, and they're all dupes – finnw Jan 17 '09 at 17:55
9

Preconditions for arguments to methods/functions should be part of the language rather than programmers checking it always.
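For contrast, here is the status quo the opinion argues against, sketched in Java (names are illustrative): precondition checks are hand-written library calls and throw statements repeated at every entry point, rather than declarative, language-level contracts.

```java
import java.util.Objects;

public class Preconditions {
    // Today the programmer writes each check by hand; the opinion above
    // argues the language itself should express and enforce these.
    static double sqrt(double x) {
        if (x < 0) {
            throw new IllegalArgumentException("x must be non-negative: " + x);
        }
        return Math.sqrt(x);
    }

    static String greet(String name) {
        Objects.requireNonNull(name, "name"); // library-level null check
        return "Hello, " + name;
    }
}
```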

kal
  • 28,545
  • 49
  • 129
  • 149
9

Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).

skiphoppy
  • 97,646
  • 72
  • 174
  • 218
9

For a good programmer, language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java", and so on.
What I think is that a good programmer is flexible and able to write good programs in any programming language that he might have to learn in his life.

agnieszka
  • 14,897
  • 30
  • 95
  • 113
  • On the other hand, I *would* change company if, say, I was told that the rest of my job (forever) would be in GWBasic. There's a significant difference in how easy it is to express designs in different languages. – Jon Skeet Jan 06 '09 at 08:55
  • yeah, of course it's not applicable to all situations. but still a programmer has to be flexible to some extent because this is what computer science is all about - constant change. – agnieszka Jan 07 '09 at 12:19
  • Totally agreed. I hate those religious language wars :/ – driAn Jan 07 '09 at 14:31
  • While I agree that a good programmer can understand any language, working with it 40+ hours a week is a different story. I can understand VB.NET just fine, but I don't want to spend most of my day plowing through it! – Cameron MacFarland Jan 08 '09 at 04:35
  • 2
    I can agree with this. The real truth here is that there is a tool for every job. Sometimes that tool may be Perl. Sometimes it may be vbScript, sometimes Java, sometimes C#, and sometime even C++... The good developer knows WHICH tool is right for the job. – LarryF Jan 14 '09 at 23:57
  • While it may be true that you can learn the *syntax* of a new language in a few hours, you can't learn a *language* in a few hours. It takes years to master a new language with all the corner cases, etc. – Aaron Digulla Feb 27 '09 at 14:55
  • "A good carpenter can cut wood with a hammer..." (I'm sure: carpenters are much more knowledgeable than programmers.) – MaD70 Nov 06 '09 at 00:13
9

Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.

Jason Baker
  • 192,085
  • 135
  • 376
  • 510
  • 3
    Not controversial enough. To be controversial, replace sometimes with always. – Coding With Style Jul 04 '09 at 22:11
  • My problem is that otherwise good ideas become bandwagons. My favorite example is OOP, a useful idea that became a binge. In most of the performance tuning I do, the culprit, ultimately, is that a Queen Mary was built, when a rowboat would have sufficed. – Mike Dunlavey Feb 06 '10 at 16:14
  • @Mike Dunlavey - I agree 100%. But it's not fair to reject an idea on that basis (which a lot of people do). – Jason Baker Feb 06 '10 at 16:34
  • ... talk about old-time code, how about this: `//SYSUT2 DD UNIT=(TAPE1600,,DEFER),VOL=SER=SPROOOF,LABEL=(1,NL),DISP=(,KEEP)` cranked out standing up at a keypunch. – Mike Dunlavey Feb 06 '10 at 17:48
9

VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of overcomplicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.

johnc
  • 39,385
  • 37
  • 101
  • 139
9

Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s auto-implemented properties are a step in the right direction, but for nice DTOs with fields, properties, and constructor parameters you still need a lot of redundancy.
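To make the complaint concrete, here is a minimal, hypothetical Java bean of the kind being criticized: one logical field spelled out four times (field, constructor parameter, getter, setter), all of it hand-written or generated.

```java
// Hypothetical DTO illustrating the redundancy described above: the
// single logical field "name" appears as a field, a constructor
// parameter, a getter, and a setter.
class PersonBean {
    private String name;

    PersonBean(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    void setName(String name) {
        this.name = name;
    }
}
```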

Lemmy
  • 3,177
  • 3
  • 17
  • 16
  • 3
    Code Generation is bad... so do you hate compilers also? (Hint: code generation is a broad subject, don't be deceived by crappy languages/frameworks). – MaD70 Nov 05 '09 at 23:45
9

If you need to read the manual, the software isn't good enough.

Plain and simple :-)

Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
Simon P
  • 1,196
  • 1
  • 12
  • 26
  • 1
    I agree that there is a lot of software that could do without a manual if it had been designed with a greater emphasis on usability. But even when you can figure out stuff without a manual, having a manual might let you figure out stuff quicker! – Captain Sensible Jan 26 '09 at 14:15
  • I kinda agree with this. I've seen quite a few applications that were badly designed, but sometimes the bad design is by the customer's own request (this is how our previous application worked and we don't want to change too much even if it sucks). In these circumstances, whether something is good enough or not is not decided by the development team but by the customer. You may disagree, but the customer is always right ;-) – Captain Sensible Apr 22 '10 at 08:31
8

Programming: It's a fun job.

I seem to see two generalized groups of developers. Those who don't love it, but are competent and the money is good. The other group loves it to a point that is kinda creepy. It seems to be their life.

I just think it's a well-paying job that is usually interesting and fun. There is all kinds of room to learn something new every minute of every day. I can't think of another job I would prefer. But it is still a job. Compromises will be made, and the stuff you produce will not always be as good as it could be.

Given my druthers, I would be on a beach drinking beer or playing with my kids.

ElGringoGrande
  • 638
  • 6
  • 13
8

Assembler is not dead

In my job (copy protection systems), assembler programming is essential. I have worked with many HLL (high-level language) copy protection systems, and only assembler gives you the real power to utilize all the possibilities hidden in the code (like code mutation and other low-level stuff).

Also, many code optimizations are possible only with assembler programming. Look at the sources of any video codec: they are written in assembler and optimized to use MMX/SSE/SSE2 opcodes. Many game engines use assembler-optimized routines; even the Windows kernel has SSE-optimized routines:

NTDLL.RtlMoveMemory

.text:7C902CD8                 push    ebp
.text:7C902CD9                 mov     ebp, esp
.text:7C902CDB                 push    esi
.text:7C902CDC                 push    edi
.text:7C902CDD                 push    ebx
.text:7C902CDE                 mov     esi, [ebp+0Ch]
.text:7C902CE1                 mov     edi, [ebp+8]
.text:7C902CE4                 mov     ecx, [ebp+10h]
.text:7C902CE7                 mov     eax, [esi]
.text:7C902CE9                 cld
.text:7C902CEA                 mov     edx, ecx
.text:7C902CEC                 and     ecx, 3Fh
.text:7C902CEF                 shr     edx, 6
.text:7C902CF2                 jz      loc_7C902EF2
.text:7C902CF8                 dec     edx
.text:7C902CF9                 jz      loc_7C902E77
.text:7C902CFF                 prefetchnta byte ptr [esi-80h]
.text:7C902D03                 dec     edx
.text:7C902D04                 jz      loc_7C902E03
.text:7C902D0A                 prefetchnta byte ptr [esi-40h]
.text:7C902D0E                 dec     edx
.text:7C902D0F                 jz      short loc_7C902D8F
.text:7C902D11
.text:7C902D11 loc_7C902D11:                           ; CODE XREF: .text:7C902D8Dj
.text:7C902D11                 prefetchnta byte ptr [esi+100h]
.text:7C902D18                 mov     eax, [esi]
.text:7C902D1A                 mov     ebx, [esi+4]
.text:7C902D1D                 movnti  [edi], eax
.text:7C902D20                 movnti  [edi+4], ebx
.text:7C902D24                 mov     eax, [esi+8]
.text:7C902D27                 mov     ebx, [esi+0Ch]
.text:7C902D2A                 movnti  [edi+8], eax
.text:7C902D2E                 movnti  [edi+0Ch], ebx
.text:7C902D32                 mov     eax, [esi+10h]
.text:7C902D35                 mov     ebx, [esi+14h]
.text:7C902D38                 movnti  [edi+10h], eax

So the next time you hear that assembler is dead, think about the last movie you watched or the game you played (and its copy protection, heh).

simon
  • 12,666
  • 26
  • 78
  • 113
Bartosz Wójcik
  • 1,079
  • 2
  • 13
  • 31
8

Writing it yourself can be a valid option.

In my experience there seems to be too much enthusiasm for using 3rd-party code to solve a problem. The option of solving the problem themselves does not usually cross people's minds. Don't get me wrong, though: I am not advocating never using libraries. What I am saying is: among the possible frameworks and modules you are considering, add the option of implementing the solution yourself.

But why would you code your own version?

  • Don't reinvent the wheel. But, if you only need a piece of wood, do you really need a whole cart wheel? In other words, do you really need openCV to flip an image along an axis?
  • Compromise. You usually have to make compromises concerning your design, in order to be able to use a specific library. Is the amount of changes you have to incorporate worth the functionality you will receive?
  • Learning. You have to learn to use these new frameworks and modules. How long will it take you? Is it worth your while? Will it take longer to learn than to implement?
  • Cost. Not everything is free, and that includes your time. Consider how much time the software you are about to use will save you, and whether it is worth its price. (Also remember that you have to invest time to learn it.)
  • You are a programmer, not ... a person who just clicks things together (sorry, couldn't think of anything witty).

The last point is debatable.
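As a sketch of the "piece of wood" bullet above: flipping an image along an axis needs only a few lines over a raw pixel array, no imaging library required. The representation here (one int per pixel, rows of equal or unequal length) is an assumption for illustration, not taken from OpenCV.

```java
// Minimal horizontal flip over a 2D pixel array: each row is mirrored.
class ImageFlip {
    static int[][] flipHorizontal(int[][] pixels) {
        int h = pixels.length;
        int[][] out = new int[h][];
        for (int y = 0; y < h; y++) {
            int w = pixels[y].length;
            out[y] = new int[w];
            for (int x = 0; x < w; x++) {
                out[y][x] = pixels[y][w - 1 - x]; // mirror within the row
            }
        }
        return out;
    }
}
```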

Stefan Schmidt
  • 1,152
  • 11
  • 18
8

Relational database systems will be the best thing since sliced bread...

... when we (hopefully) get them, that is. SQL databases suck so hard it's not funny.

What I find amusing (if sad) is certified DBAs who think an SQL database system is a relational one. Speaks volumes for the quality of said certification.

Confused? Read C. J. Date's books.

edit

Why is it called Relational and what does that word mean?

These days, a programmer (or a certified DBA, wink) with a strong (heck, any) mathematical background is an exception rather than the common case (I'm an instance of the common case as well). SQL with its tables, columns and rows, as well as the joke called Entity/Relationship Modelling, just add insult to injury. No wonder the misconception that Relational Database Systems are called that because of some Relationships (Foreign Keys?) between Entities (tables) is so pervasive.

In fact, Relational derives from the mathematical concept of relations, and as such is intimately related to set theory and functions (in the mathematical, not any programming, sense).

From http://en.wikipedia.org/wiki/Finitary_relation:

In mathematics (more specifically, in set theory and logic), a relation is a property that assigns truth values to combinations (k-tuples) of k individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.

An example of a ternary relation (i.e., between three individuals) is: "X was-introduced-to Y by Z", where (X,Y,Z) is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

Wikipedia makes it perfectly clear: in a SQL DBMS, such a ternary relation would be a "table", not a "foreign key" (I'm taking the liberty to rename the "columns" of the relation: X = who, Y = to, Z = by):

CREATE TABLE introduction (
  who INDIVIDUAL NOT NULL
, to INDIVIDUAL NOT NULL
, by INDIVIDUAL NOT NULL
, PRIMARY KEY (who, to, by)
);

Also, it would contain (among others, possibly), this "row":

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Beatrice Wood'
, 'Henri-Pierre Roché'
, 'Marcel Duchamp'
);

but not this one:

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Karl Marx'
, 'Friedrich Engels'
, 'Queen Victoria'
);

Relational Database Dictionary:

relation (mathematics) Given sets s1, s2, ..., sn, not necessarily distinct, r is a relation on those sets if and only if it's a set of n-tuples each of which has its first element from s1, its second element from s2, and so on. (Equivalently, r is a subset of the Cartesian product s1 x s2 x ... x sn.)

Set si is the ith domain of r (i = 1, ..., n). Note: There are several important logical differences between relations in mathematics and their relational model counterparts. Here are some of them:

  • Mathematical relations have a left-to-right ordering to their attributes.
  • Actually, mathematical relations have, at best, only a very rudimentary concept of attributes anyway. Certainly their attributes aren't named, other than by their ordinal position.
  • As a consequence, mathematical relations don't really have either a heading or a type in the relational model sense.
  • Mathematical relations are usually either binary or, just occasionally, unary. By contrast, relations in the relational model are of degree n, where n can be any nonnegative integer.
  • Relational operators such as JOIN, EXTEND, and the rest were first defined in the context of the relational model specifically; the mathematical theory of relations includes few such operators.

And so on (the foregoing isn't meant to be an exhaustive list).

just somebody
  • 18,602
  • 6
  • 51
  • 60
  • Would you agree that today's RDMSs *support* the relational model however rarely are the schema designers implementing it? – Jé Queue Dec 15 '09 at 05:38
  • You're asking a loaded question, but do you consider DB2 &| Oracle systems that don't support a true *relation*al model? – Jé Queue Dec 15 '09 at 23:09
  • yes. SQL database systems are just that: SQL database systems, not relational database systems. – just somebody Dec 15 '09 at 23:43
  • Do you mean Object Databases when you say relational databases? That is, db4o et al.? Relational Database system in my opinion are systems where you model relations between entities, also known as Foreign Keys and Update/Delete Cascades. Sadly, most of the time these entities are flat 2-Dimensional tables in RDBMS... – Michael Stum Dec 16 '09 at 00:11
  • @Michael Stum: no, see expanded answer, and excuse me if it's not very coherent. It's well past midnight here and I'm almost done with second bottle of wine. – just somebody Dec 16 '09 at 01:55
  • @just somebody, all is still well, you should start another question here to discuss further. i.e. Oracle can still support the model you've described above a classic RELATION model (not relational). (Non)Existence can be checked and this can be employed in set queries minus/intersect/&c. Yes, SQL is used to put these all together. Not all schemas are written with foreign keys in the tables. – Jé Queue Dec 16 '09 at 04:49
8

Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards
Alex B
  • 24,678
  • 14
  • 64
  • 87
  • 1
    The reason OODB didn't take off for web apps is because web apps are the single area where scalability and speed matter most - and OODB fall flat when load gets high. That's why MySQL took off instead of something more robust like Postgres, because of sheer read speed and scalability. – Kendall Helmstetter Gelner Jan 05 '09 at 05:31
  • 1
    kendall, that's just trash. the biggest databases in the world have traditionally been oodbs. they handle all kinds of workload. – nes1983 Apr 12 '09 at 10:02
  • Only deep ignorance can prevent someone to implement such things even in SQL, which is a badly designed language and not faithful to relational data model. – MaD70 Nov 06 '09 at 00:28
8

We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.

Performance trumps development standards, that's my controversial view.

jjriv
  • 61
  • 3
  • An excellent example of how sometimes rules are made to be broken. Do everything right but be prepared to do some things wrong from necessity. – Kendall Helmstetter Gelner Jan 05 '09 at 05:43
  • 1
    Performance trumps development standards -- if it is too poor to stand. As long as performance is not a problem, there is no need to fix it. – Aaron Digulla Feb 27 '09 at 14:53
  • Don't forget, what is considered "right" in terms of development standards was just somebody's common-sense temporary opinion that happened to get picked up by a lot of people. It is not a commandment from "on high" - common sense can change but is always useful. Good work. – Mike Dunlavey Feb 06 '10 at 15:56
8

I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping what's above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong, in either transparent or opaque objects, is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage, and, if handled properly, forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
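A minimal sketch of the return-code convention described above, assuming a shared result structure: the code travels up the stack together with a message a user could actually read. The names (Result, openConfig) and the error numbering are invented for illustration, not taken from any real system.

```java
// A shared result structure: a numeric code plus a human-readable
// message, so callers can check coverage exhaustively.
class Result {
    final int code;       // 0 = success, nonzero = a specific failure
    final String message; // user-facing explanation

    Result(int code, String message) {
        this.code = code;
        this.message = message;
    }

    boolean ok() {
        return code == 0;
    }
}

class Config {
    static Result openConfig(String path) {
        if (path == null || path.isEmpty()) {
            return new Result(1, "No configuration path was given.");
        }
        // Real I/O would go here; the sketch just reports success.
        return new Result(0, "OK");
    }
}
```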

--

My final objection is the use of garbage-collected languages. Don't get me wrong... I love them in some circumstances, but in general for server/MC systems they have no place in my view.

GC is not infallible - even extremely well-designed GC algorithms can hang on to objects too long, or even forever, based on non-obvious circular references in their dependency graphs.

Non-GC systems following a few simple patterns and using memory accounting tools don't have this problem, but do require more work in design and test up front than GC environments. The tradeoff here is that memory leaks are extremely easy to spot during testing in non-GC systems, while finding GC-related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections, etc.? In my environment the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in software description.

Einstein
  • 4,450
  • 1
  • 23
  • 20
  • Return codes have the problem of coupling too many elements of a chain of calls to understand what they mean. That is to say, that everything between a called function and something that might handle an error has to understand the return codes, at least to pass them along - that can be a mess. – Kendall Helmstetter Gelner Jan 05 '09 at 05:45
  • 1
    My general advice is to follow a convention and don't fall into the trap of attempting to have them indicate specific error conditions. At each level you should take steps to ensure meaning is normalized. (Which usually isn't hard/necessary if you follow a convention) – Einstein Jan 05 '09 at 06:33
  • Good error code compared with bad exception code is better. But then again, there is good exception handling code, where exceptions are thrown and caught only where it makes sense... good exception code separates error handling from the error, and need not be replicated in each function of the stack – David Rodríguez - dribeas Jan 05 '09 at 18:23
  • If a GC platform is not right for your particular situation, use good judgment and don't use it. It's as simple as that. – Captain Sensible Apr 22 '10 at 08:39
8

The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!

Enigme
  • 31
  • 3
  • Please justify your position. Note: test all paths requires that you only write paths you can test. Mindless error handlers go away. – Jay Bazuzi Jan 03 '09 at 17:41
  • Ever heard of unit tests? Using unit tests you don't need to "test all paths" after each change you made to the code. (Anyway, I think it's is impossible to test all paths except in a tiny little application) – Stefan Steinegger Nov 16 '09 at 11:41
8

Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces a community of enthusiastic evangelists.

eswald
  • 8,368
  • 4
  • 28
  • 28
Dave
  • 521
  • 1
  • 4
  • 7
8

Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient, and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up-type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten me and my colleagues into some very heated discussions, since web services are such a buzzy topic. Any project that mandates the use of web services is doomed, because it clearly already has ridiculous demands pushed down from management.

Jesse Pepper
  • 3,225
  • 28
  • 48
  • My company writes auto-insurance software, and we rely on several off-site web services to verify VIN numbers and run OFAC checks on people. We also make some of our APIs available through web services to third-party vendors. How would you suggest our software be written without web services? – Juliet Jan 04 '09 at 03:55
  • @Juliet: what in " Web services should NEVER be used within a system **where both client and server are being written** " do you not understand? It's clear that in your situation you don't control both parts of the system, so your rhetorical question is irrelevant. – MaD70 Nov 06 '09 at 01:21
8

Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.
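A minimal test-first sketch of the approach described above, without any framework (in practice this would be a JUnit test): the checking method is notionally written first, and making it pass drives the implementation. The slugify example is invented for illustration.

```java
// The test method below is written before slugify; the implementation
// is then written to make it pass. Real projects would use JUnit.
class Slug {
    static String slugify(String title) {
        return title.trim()
                    .toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")  // collapse punctuation/spaces
                    .replaceAll("(^-|-$)", "");     // trim stray hyphens
    }

    static void testSlugify() {
        if (!slugify("Hello, World!").equals("hello-world"))
            throw new AssertionError("slugify failed");
    }
}
```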

Joel Coehoorn
  • 399,467
  • 113
  • 570
  • 794
PJ Davis
  • 822
  • 5
  • 6
  • OMG how did anyone down vote this. Amazing, i'd + 1000 if i could –  Jan 04 '09 at 07:54
  • 1
    Sometimes, watching all your test go green gives you a FALSE confidence, while your code fails somewhere your test didn't anticipate. – Cameron MacFarland Jan 04 '09 at 08:29
  • @acidzombie24, you should vote for it if you think it is controversial, not when you agree it. – tuinstoel Jan 04 '09 at 11:45
  • @Cameron MacFarland there is no excuse for not doing user testing. The point of the test isn't to cover every edge case from the beginning, it's to make sure your code meets the requirements for what it's supposed to do. No matter how much you test, you'll never cover everything that could happen. – PJ Davis Jan 06 '09 at 00:26
  • @Cameron MacFarland, having a test suite helps you even when your code fails in the sense that you can easily add a new test case, correct the bug and remain sure that the bug will be detected if some dev introduce it again. – pero Jan 06 '09 at 19:03
  • 2
    You're accruing "offensive" votes... suggest you remove the profanity. – Marc Gravell May 15 '09 at 13:59
8

The code is the design

bjnortier
  • 2,008
  • 18
  • 18
8

Assembly is the best first programming language.

  • +1 for that... probably too hard for most people to grasp... nothing like weeding out the weak ones. ;) – oz10 Jan 14 '09 at 03:14
  • I learned Mostek 6502 assembly at 12 and was my second programming language (the first was an unstructured Basic - pure crap). It was easy with a book and some computer magazines (those with long listing of source code). K&R C disgust me even today. – MaD70 Nov 06 '09 at 01:09
8

A good developer needs to know more than just how to code

Wiren
  • 477
  • 3
  • 9
8

Premature optimization is NOT the root of all evil! Lack of proper planning is the root of all evil.

Remember the old naval saw

Proper Planning Prevents P*ss Poor Performance!

WolfmanDragon
  • 7,851
  • 14
  • 49
  • 61
8

Developers overuse databases

All too often, developers store data in a DBMS that should be in code or in file(s). I've seen a one-column-one-row table that stored the 'system password' (separate from the user table.) I've seen constants stored in databases. I've seen databases that would make a grown coder cry.

There is some sort of mystical awe that the offending coders have of the DBMS--the database can do anything, but they don't know how it works. DBAs practice a black art. It also allows responsibility transference: "The database is too slow," "The database did it" and other excuses are common.

Left unchecked, these coders go on to develop databases-within-databases and systems-within-systems. (There is a name for this anti-pattern, but I forget what it is.)

Stu Thompson
  • 38,370
  • 19
  • 110
  • 156
  • Guessing you are talking about EAV (Entity Attribute Value) database design, which has been the bane of my life for about a year now :) – spooner Jan 10 '10 at 21:59
8

Sometimes you have to denormalize your databases.

An opinion that doesn't sit well with most programmers, but sometimes you have to sacrifice things like normalization for performance.

Artem Russakovskii
  • 21,516
  • 18
  • 92
  • 115
7

Exceptions should only be used in truly exceptional cases

It seems like the use of exceptions has run rampant on the projects I've worked on recently.

Here's an example:

We have filters that intercept web requests. The filter calls a screener, and the screener's job is to check to see if the request has certain input parameters and validate the parameters. You set the fields to check for, and the abstract class makes sure the parameters are not blank, then calls a screen() method implemented by your particular class to do more extended validation:

public boolean processScreener(HttpServletRequest req, HttpServletResponse resp, FilterConfig filterConfig) throws Exception {
    if (!checkFieldExistence(req)) {
        return false;
    }
    return screen(req, resp, filterConfig);
}

That checkFieldExistence(req) method never returns false. It returns true if none of the fields are missing, and throws an exception if a field is missing.

I know that this is bad design, but part of the problem is that some architects here believe that you need to throw an exception every time you hit something unexpected.

Also, I am aware that the signature of checkFieldExistence(req) does declare that it throws an Exception; it's just that almost all of our methods do, so it didn't occur to me that the method might throw an exception instead of returning false. I only noticed it once I dug through the code.
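For contrast, here is a sketch of what the answer argues for: the validation step reports a missing field through its return value, since missing input is an expected case rather than an exceptional one. The parameter-map shape is simplified from the servlet API for illustration.

```java
import java.util.List;
import java.util.Map;

// Missing input is an expected condition, so it is reported as a
// boolean rather than thrown as an exception.
class FieldCheck {
    static boolean checkFieldExistence(Map<String, String> params, List<String> required) {
        for (String field : required) {
            String value = params.get(field);
            if (value == null || value.isEmpty()) {
                return false; // expected condition: report it, don't throw
            }
        }
        return true;
    }
}
```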

LGriffel
  • 101
  • 1
  • 2
  • And don't forget the overhead involved when throwing an exception as well. Throw/catch might be fairly harmless performance-wise for a single operation, but start looping over it and... ho-boy. I speak from experience. – Tullo_x86 Sep 18 '09 at 21:53
7

Controversial, eh? I reckon it's the fact that C++ streams use << and >>. I hate it. They are shift operators; overloading them in this way is plain bad practice. It makes me want to kill whoever came up with that and thought it was a good idea. GRRR.

Goz
  • 61,365
  • 24
  • 124
  • 204
7

"Comments are Lies"

Comments don't run and are easily neglected. It's better to express the intention with clear, refactored code illustrated by unit tests. (Unit tests written TDD of course...)

We don't write comments because they're verbose and obscure what's really going on in the code. If you feel the need to comment - find out what's not clear in the code and refactor/write clearer tests until there's no need for the comment...

... something I learned from Extreme Programming (assumes of course that you have established team norms for cleaning the code...)
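A small illustration of the refactoring step this describes (the discount rule and all names are invented): the comment disappears into a method name, which cannot drift out of date in the same way a comment can.

```java
class Order {
    final int itemCount;
    final boolean firstPurchase;

    Order(int itemCount, boolean firstPurchase) {
        this.itemCount = itemCount;
        this.firstPurchase = firstPurchase;
    }

    // Before refactoring, a caller would write:
    //   if (o.itemCount > 10 && o.firstPurchase) { ... } // bulk discount for new customers
    // After refactoring, the comment is replaced by a name that states the intent:
    boolean qualifiesForBulkDiscount() {
        return itemCount > 10 && firstPurchase;
    }
}
```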

Dafydd Rees
  • 6,941
  • 3
  • 39
  • 48
  • 3
    Code will only explain the "how" something is done and not the "why". It is really important to distinguish between the two. Decisions sometimes have to be made and the reason for that decision needs to live on. I find that it is important to find a middle ground. The "no comments" crowd are just as much cultists as "comment everything" crowd. – Joseph Ferris Oct 29 '09 at 19:07
  • You're right about this: "Code will only explain the "how" something is done" If I want to know what it does, I'll find the TDD-written test that's covering it. If there's a mystery about what it does and it's important enough, I'll insert a breakage (e.g. throw new RuntimeException("here it is") ) and run all the acceptance tests to see what scenarios need that code path to run. – Dafydd Rees Oct 29 '09 at 19:28
  • This is why I said comments are evil in my post http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/409825#409825 I am proud my answer is the most serious, most downvoted answer :) –  Nov 20 '09 at 12:09
  • If you want to know why something is running, just inject a bug e.g. throw new RuntimeException("HERE"); into it and run the functional tests. Read off the names of the failing system-level tests - that's why you need that piece of code. – Dafydd Rees Nov 20 '09 at 15:24
  • No, that's just more what. Good comments explain why the function works THE WAY it does, not why it exists, which is ultimately just a what. – Integer Poet Mar 15 '10 at 19:42
7

Modern C++ is a beautiful language.

There, I said it. A lot of people really hate C++, but honestly, I find modern C++ with STL/Boost style programming to be a very expressive, elegant, and incredibly productive language most of the time.

I think most people who hate C++ are basing that on bad experiences with OO. C++ doesn't do OO very well because polymorphism often depends on heap-allocated objects, and C++ doesn't have automatic garbage collection.

But C++ really shines when it comes to generic libraries and functional-programming techniques which make it possible to build incredibly large, highly-maintainable systems. A lot of people say C++ tries to do everything, but ends up doing nothing very well. I'd probably agree that it doesn't do OO as well as other languages, but it does generic programming and functional programming better than any other mainstream C-based language. (C++0x will only further underscore this truth.)

I also appreciate how C++ lets me get low-level if necessary, and provides full access to the operating system.

Plus RAII. Seriously. I really miss destructors when I program in other C-based languages. (And no, garbage collection does not make destructors useless.)

Charles Salvia
  • 52,325
  • 13
  • 128
  • 140
  • 1
    I really dislike the C++ compilers. They have terrible error messages. – thesmart Nov 04 '09 at 02:21
  • "any mainstream C-based language" would include C# and Scala, both of which are now quite good for functional programming. You should look at them again if you haven't tried the latest versions yet. – finnw Feb 11 '10 at 17:04
7

JavaScript is a "messy" language but god help me I love it.

Avi Y
  • 2,456
  • 4
  • 29
  • 35
7

Use unit tests as a last resort to verify code.

If I want to verify that code is correct, I prefer the following techniques over unit testing:

  1. Type checking
  2. Assertions
  3. Trivially verifiable code

For everything else, there's unit tests.

cdiggins
  • 17,602
  • 7
  • 105
  • 102
  • 0. Re-read your code. Seems trivial, but often can be the best at finding errors. – Matt Hamsmith Nov 24 '09 at 17:26
  • Enthusiasts of unit tests too often position their arguments as defenses for weak typing and late binding as if a disciplined engineer chooses exactly one approach to reliability. – Integer Poet Mar 15 '10 at 19:31
  • I'm very ambivalent about unit tests. My personal opinion is that zealots who want 100% code coverage for their unit tests are wasting a lot of time and money. But they're not completely useless either, so I guess I agree with the statement. – Captain Sensible Apr 22 '10 at 08:33
  • I've pretty much been forced to this conclusion by a very tight schedule. I agree that unit tests are not for everything. But having said that, the more critical a piece of code is, the wiser you'd be to write tests for it regardless. – Engineer Sep 17 '10 at 15:26
7

Not really programming, but I can't stand CSS-only layouts just for the sake of it. It's counterproductive, frustrating, and makes maintenance a nightmare of floats and margins, where changing the position of a single element can throw the entire page out of whack.

It's definitely not a popular opinion, but I'm done with my table layout in 20 minutes while the CSS gurus spend hours tweaking line-height, margins, padding and floats just to do something as basic as vertically centering a paragraph.

Rob
  • 8,042
  • 3
  • 35
  • 37
  • 1
    Whoever spends hours writing `margin: 0 auto;` is one hell of a bad css-designer... Still, tables are tables and tables store data. Not design. – F.P Nov 18 '09 at 14:18
  • 1
    That is why there are 3 different ways to use styles. For re-usability, and scope of need. – awright18 Mar 04 '10 at 01:32
7

There is no such thing as Object-Oriented programming.

Apocalisp
  • 34,834
  • 8
  • 106
  • 155
  • The problem I have with that article is that it argues that OOP doesn't model the real world properly and so it doesn't exist. I agree that OOP is a poor real-world model but that doesn't mean it doesn't exist. – Cameron MacFarland Jan 02 '09 at 14:48
  • @Cameron MacFarland: That's not what the article argues at all. It argues that there's no distinction between "OOP" and other kinds of programming, other than a rhetorical one. – Apocalisp Jan 02 '09 at 15:03
  • Why is there no reference to ADT which I believe OOP was sprung from? – epatel Jan 02 '09 at 19:03
  • 1
    @Apocalisp: You're right, I only skimmed the article. Now that I've read it properly, he compared making distinctions between code styles with making distinctions about race by using the argument made by capitalist libertarians, who believe in things that lead to slavery and killing poor people. – Cameron MacFarland Jan 03 '09 at 06:56
  • 1
    See I told you it was controversial. Enough to draw an ad hominem with a non-sequitur and a straw man in a single sentence. I'm impressed. – Apocalisp Jan 03 '09 at 20:07
  • @Cameron, actually liberals are the one killing poor people by telling them that they don't need to be responsible for their life, they just need to do what 'superior' liberals tell them to do. Liberalism is all about emotional and intellectual ego. – Lance Roberts Jan 06 '09 at 16:56
  • @Apocalisp: You impress easy. "Valid concepts are arrived at by induction" completely ignores Kant's idea of a priori concept, which is what OOP and Smurfs would be considered. Restricting concepts to facts of reality is itself a straw man argument. – Cameron MacFarland Jan 09 '09 at 14:42
  • "It is a useless distinction, in exactly the same way that “race” is a useless distinction." - And nationality, religion, sex, occupation. They are all useless distinctions if you follow the logic of the Ayn Rand article. – Cameron MacFarland Jan 09 '09 at 15:10
  • @Cameron: You've hit the nail on the head. I'm deliberately and completely in defiance of Kant because his ideas are drivel. To think is to think about something. – Apocalisp Jan 09 '09 at 15:12
  • "Java is object-disoriented" -- me – Svante Jan 12 '09 at 17:26
  • Nice answer... "No such thing as OOP"... And it's easy to prove. Just look at the assembly generated from any C++ compiler. I don't see any OOP in there... :) – LarryF Jan 14 '09 at 23:46
  • There needs to be an Object-Action Oriented Language. Actions are not Objects. It makes me angry when I write a void to modify an Object. ARRRGH............................ – WolfmanDragon May 23 '09 at 18:20
  • @epatel: perhaps because the idea that OOP was sprung from ADT is wrong. See "OOP vs ADTs" (http://www.cs.utexas.edu/~wcook/papers/OOPvsADT/CookOOPvsADT90.pdf) and "On Understanding Data Abstraction, Revisited" (http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf) by William R. Cook. – MaD70 Nov 06 '09 at 02:20
7

Write your spec when you are finished coding. (if at all)

In many projects I have been involved in, a great deal of effort was spent at the outset writing a "spec" in Microsoft Word. This process culminated in a "sign off" meeting when the big shots bought in on the project, and after that meeting nobody ever looked at the document again. These documents are a complete waste of time and don't reflect how software is actually designed. This is not to say there are not other valuable artifacts of application design. They are usually contained on index cards, snapshots of whiteboards, cocktail napkins and other similar media that provide a kind of timeline for the app design. These usually are the real specs of the app. If you are going to write a Word document (and I am not particularly saying you should), do it at the end of the project. At least it will accurately represent what has been done in the code and might help someone down the road, like the QA team or the next version's developers.

Andrew Cowenhoven
  • 2,778
  • 22
  • 27
  • "nobody ever looked at this document again" Not true. When I started at a new job I was given "the spec" folder, and told to read it as my first task. Then I started asking "where's this feature" the answer was "I didn't know that was in the spec." The spec folder was given to all new employees. – Cameron MacFarland Jan 22 '09 at 22:36
  • Yes, that happened to me too - several times. I stand corrected. – Andrew Cowenhoven Jan 30 '09 at 14:13
  • I think this is *highly* situational. Do you need a spec for an internal project that you'll write in 3 days? Probably not. Are you willing to bet a multi-million dollar project on the client understanding everything you say to the letter? I would hope not. – Jason Baker Feb 06 '10 at 16:38
7

That best practices are a hazard because they ask us to substitute slogans for thinking.

Flinkman
  • 17,732
  • 8
  • 32
  • 53
7

Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.

Captain Sensible
  • 4,946
  • 4
  • 36
  • 46
  • +1 I couldn't agree more. Building software is a social activity more than a technical one. – JuanZe Oct 13 '09 at 21:34
7

My controversial opinion: OO Programming is vastly overrated [and treated like a silver bullet], when it is really just another tool in the toolbox, nothing more!

torial
  • 13,085
  • 9
  • 62
  • 89
7

Most programming job interview questions are pointless. Especially those dreamed up by programmers.

It is a common case, at least in my and my friends' experience, that a puffed-up programmer asks you some tricky WTF he spent weeks googling for. The funny thing about that is, you get home and google it within a minute. They often try to beat you up with their sophisticated weapons instead of checking whether you'd be a capable, pragmatic team player to work with.

A similar stupidity, IMO, is when you're asked about highly accessible fundamentals, like: "Oh wait, let me see if you can pseudo-code that insert_name_here-algorithm on a sheet of paper (sic!)". Do I really need to remember it while applying for a high-level programming job? Should I efficiently solve problems, or puzzles?

ohnoes
  • 5,542
  • 6
  • 34
  • 32
  • +1, fully agree. It's also usually the case that during the interview they check to see if you are the rocket scientist they require, asking you all sorts of rough questions. Then when you get the job, you realize that what they were actually after was a coding monkey who shouldn't get too involved in business decisions. I know this is not always the case, but usually the work you end up doing is very easy compared to the interview process, where you would think they were looking for someone to develop organic rocket fuel. – JL. Apr 04 '10 at 16:13
7

Tools, Methodology, Patterns, Frameworks, etc. are no substitute for a properly trained programmer

I'm sick and tired of dealing with people (mostly managers) who think that the latest tool, methodology, pattern or framework is a silver bullet that will eliminate the need for hiring experienced developers to write their software. Although, as a consultant who makes a living rescuing at-risk projects, I shouldn't complain.

Jeff Leonard
  • 3,284
  • 7
  • 29
  • 27
  • I will second "Thou Shalt Not Complain". Those who manage based on idealistic expedience and feel-good tools always find themselves in trouble like this. Unfortunately I have noticed that no matter how many times you deliver the reality, you still need to use good people. The bottom-line bean counters always try to find the cheap/easy way out. In the end they always have to pony up the money. They either pony up to get it done correctly the first time, or they pony up to have it fixed properly by someone who charges a premium, sometimes far in excess of the cost to do it right the first time. – Axxmasterr Jul 27 '09 at 17:05
7

The simplest approach is the best approach

Programmers like to solve assumed or inferred requirements that add levels of complexity to a solution.

"I assume this block of code is going to be a performance bottleneck, therefore I will add all this extra code to mitigate this problem."

"I assume the user is going to want to do X, therefore I will add this really cool additional feature."

"If I make my code solve for this unneeded scenario it will be a good opportunity to use this new technology I've been interested in trying out."

In reality, the simplest solution that meets the requirements is best. This also gives you the most flexibility in taking your solution in a new direction if and when new requirements or problems come up.

Brad C
  • 421
  • 3
  • 4
  • Yeah, the best way to compare implementations is by their line count. People won't reuse your code unless it's less than one page long. – AareP Jun 13 '09 at 16:03
  • 1
    ++ I don't think this is controversial in one sense - everybody agrees with it. But in another sense it is controversial - because few people follow it. – Mike Dunlavey Oct 13 '09 at 22:52
6

Garbage collection is overrated

Many people consider the introduction of garbage collection in Java one of the biggest improvements compared to C++. I consider it a very minor improvement at best: well-written C++ code does all the memory management at the proper places (with techniques like RAII), so there is no need for a garbage collector.

Anders Rune Jensen
  • 3,758
  • 2
  • 42
  • 53
  • The advocates of garbage collection have an unhealthy obsession with one particular resource when RAII covers all of them. – Integer Poet Mar 15 '10 at 19:49
  • Lazy programmers suck. GC is for lazy programmers. Conclusion: you are totally right, Anders Rune Jensen. –  Dec 20 '10 at 14:58
6

Don't be shy, throw an exception. Exceptions are a perfectly valid way to signal failure, and are much clearer than any return-code system. "Exceptional" has nothing to do with how often this can happen, and everything to do with what the class considers normal execution conditions. Throwing an exception when a division by zero occurs is just fine, regardless of how often the case can happen. If the problem is likely, guard your code so that the method doesn't get called with incorrect arguments.

Mathias
  • 15,191
  • 9
  • 60
  • 92
6

Using regexes to parse HTML is, in many cases, fine

Every time someone posts a question on Stack Overflow asking how to achieve some HTML manipulation with a regex, the first answer is "Regex is an insufficient tool for parsing HTML, so don't do it". If the questioner were trying to build a web browser, this would be a helpful answer. However, usually the questioner wants to do something like add a rel tag to all the links to a certain domain, usually in a case where certain assumptions can be made about the style of the incoming markup, something that is entirely reasonable to do with a regex.

Nick Higgs
  • 1,712
  • 1
  • 18
  • 21
6

I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.

marcumka
  • 1,695
  • 3
  • 12
  • 14
  • I think you overestimate the amount of memory management that occur in modern C++. C++ now uses the RAII idiom everywhere. Memory leaks aren't really much of a concern or an issue anymore. – Doug T. Jan 02 '09 at 14:16
  • 2
    if you can't track memory leaks, you're not worthy of using high-powered tools. – Javier Jan 02 '09 at 14:16
  • agree with Doug; with some simple rules of thumb, memory leaks are mostly eliminated. – Javier Jan 02 '09 at 14:17
  • 2
    Not to mention that not all programs run exclusively on a recent version of Windows. – David Thornley Jan 02 '09 at 14:54
  • I completely agree. Using a non-memory-managed language is like taking a shortcut through a minefield rather than going a slightly longer route on a comfortable and well paved road. – glenatron Jan 02 '09 at 15:17
  • And sometimes you need to take the shortcut, no matter what. I need all the performance I can get, in what I'm paid to do. This is not true for most people. – David Thornley Jan 02 '09 at 16:09
  • Should I buy more machines to all users of the software I write? There are millions of them, and all of them want their programs to run fast. – Nemanja Trifunovic Jan 02 '09 at 17:45
  • Hey how about not worrying about performance until it actually becomes an issue, and then when it does, profile, Profile PROFILE. It is at that point when it's legitimate to decide whether to take that shortcut through the minefield. It's a cavalier waste of money and time to decide before necessary – Breton Jan 04 '09 at 10:47
  • 2
    I firmly believe that we don't need airplanes, we can always use cars, right...? And if we need to cross the open sea, we could just use a boat, right...? – Thomas Hansen Jan 10 '09 at 20:54
  • Hi.My name is Larry.It's nice to meet all of you. :) I thought I was alone in this world, then I find all of you who think just like me... As you'll see in MY answer to this question. I'm a HUGE fan of C/C++, and feel that if you can't do C/C++ right, then don't do it at all. C# is NOT required. – LarryF Jan 14 '09 at 23:53
  • 1
    Pipe-dream reasoning. Earth calling marcumka – Captain Sensible Jan 26 '09 at 10:41
  • 7
    **Right tool, right job.** Go try and code that kernel or NIC driver in C# and get back to us. Yes, there are plenty of folks who stick with the language they know, but your unqualified answer is overly broad. (And that from a Java developer!) – Stu Thompson Apr 28 '09 at 20:44
  • If we had really well written frameworks to run managed code on, then I'd say you have a good point. Sadly, the .NET framework gets more bloat heaped onto it with every release, and the truth is that C++ remains about the only way for a developer to write at a reasonably high level and be assured of [the ability to attain] good performance. – Mark Jul 07 '09 at 13:50
  • Memory leaks are not possible in C++ if you use the right techniques: use RAII/smart pointers instead of raw pointers/handles. In the worst case, use Valgrind – blwy10 Oct 15 '09 at 10:57
6

Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.

Jeff Warnica
  • 772
  • 5
  • 11
  • I think you have some misconceptions about DI. You should watch Misko Hevery's Clean Code talks. – Craig P. Motlin Jan 02 '09 at 18:37
  • I agree about globals. The problem is not the concept of a global itself, but what type of thing is made global. Used correctly, globals are very powerful. – Gene Roberts Jan 02 '09 at 21:15
  • Perhaps I am. But if you had globals, you wouldn't need DI. I'm entirely prepared to believe that I'm misunderstanding a technology that solves a self-imposed problem. – Jeff Warnica Jan 04 '09 at 05:24
  • We use Globals all the time in java, every time we use a final public static in place of a Constant (C, C++, C#). I think the thought is that if it needs to be global then it should be a static. I can (Mostly) agree with this. – WolfmanDragon Mar 30 '09 at 19:21
6

I think that using regions in C# is totally acceptable for collapsing your code while in VS. Too many people say it hides your code and makes things hard to find. But if you use regions properly, they can be very helpful for identifying sections of code.

6

According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).

Cody Gray - on strike
  • 239,200
  • 50
  • 490
  • 574
Bill the Lizard
  • 398,270
  • 210
  • 566
  • 880
  • I'm proud to say I've read all the programming books I own. Even the monstrous Programming Python and Programming Perl. –  Jan 05 '09 at 11:42
  • I have a B.A. in English. It is likely that I'm a better programmer for it. Is that what you mean? – postfuturist Jan 10 '09 at 00:41
  • You over-estimate the value of education. I've been a full-time programmer for 15 years and am self-taught. When I meet developers who are fresh out of school, I sometimes wonder if their whole education wasn't a big waste of time. They know next to nothing about "the real world", can seldom work independently, and their skills are average at best. – Captain Sensible Apr 22 '10 at 08:43
  • @Seventh Element: I would expect someone fresh out of school with no work experience to have average skills. Comparing a fresh graduate to someone with 15 years of work experience is comparing apples to oranges. I worked as a programmer for 8 years before going back to school to get my degree. I think I have a pretty strong grasp of the value of my education *to me*. You get out of it what you put into it. – Bill the Lizard Apr 22 '10 at 12:14
6

You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach yourself Advanced Power SQLJava#BeansServer in 3.14159 minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They don't make any attempt at keeping their skill sets up to date, so they never learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific, horrific maintenance burdens for programmers for years, and that those burdens can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It" (TMTOWTDI). People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist in every programming language, and easy-to-learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for a while, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and to look for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.

skiphoppy
  • 97,646
  • 72
  • 174
  • 218
6

Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something which should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.

JesperE
  • 63,317
  • 21
  • 138
  • 197
  • I have to disagree. A really good warning system will warn you about things that are probably bad code, but may not be depending on how you use them. If you have lint set to full, I believe there are even cases where you can't get rid of all the warnings. – Bill K Jan 09 '09 at 18:02
  • That would mean I would have to throw out my C# compiler. I have 2 (AFAIK, unfixable) warnings about environment references (that are indeed set correctly) that don't appear to break anything. Unless -Werror merely suppresses warnings and doesn't turn them into errors >_> –  Jan 09 '09 at 20:53
  • Finally, someone disagrees. It wouldn't really be a controversial opinion otherwise, now would it? – JesperE Jan 09 '09 at 22:35
  • Doesn't your C# compiler allow you to disable the warnings? If you know they are unfixable and "safe", why should the compiler keep warning? And yes, -Werror turns all warnings into errors. – JesperE Jan 09 '09 at 22:37
  • I try to get the warnings down to zero but some warnings are 50:50: They make sense in case A but not in case B. So I end up sprinkling my code with "ignore warning"... :( – Aaron Digulla Feb 27 '09 at 16:00
  • Well, as long as I'm the one writing the compiler, then I agree with you. But if someone else wrote the compiler, I would want the ability to disagree with them when they claim perfectly valid constructs are warning-worthy. – nosatalian May 31 '09 at 02:40
  • That is why most compilers allow you to disable warnings. That's fine. What I mean is that you either disable the warning or fix it. Don't just leave it there. – JesperE May 31 '09 at 19:15
6

VB sucks
While not terribly controversial in general, when you work in a VB house it is

gillonba
  • 897
  • 9
  • 24
  • 2
    That this is not generally controversial shows how generally up themselves so many programmers are. Have a preference - fine. But when it comes down to whether you have a word (that you don't even have to type) or a '}' to terminate a block, it's just a style choice... – ChrisA Jan 07 '09 at 14:15
  • ... plenty of VB programmers suck, though. As do plenty of C# programmers. – ChrisA Jan 07 '09 at 14:16
  • VB doesn't suck. People who use VB like VBA suck. – Chris Jan 08 '09 at 03:30
  • VB *does* suck. So many things have been shoe-horned into what was originally a simple instructional language to allow novices to enter the domain of professionals that it's no longer appropriate for either novices nor professionals. – P Daddy Jan 09 '09 at 13:57
  • It's not the language that sucks but a lot of the programmers that (used to) program in VB. – Captain Sensible Jan 26 '09 at 10:39
6

Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11ms. That is too much. Imagine you know this super graph algorithm which will answer a specific question (one which might not even be expressible in SQL!) in due time. But even if your algorithm is linear, and interesting algorithms are not linear, forget about combining it with a relational database, as enumerating a large table will take you hours!

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: use an object DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster, because loading the objects will not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.

nes1983
  • 15,209
  • 4
  • 44
  • 64
  • Wow that is controversial! Surprised you haven't been flamed by the other DBAs here ;) – Meff Jan 11 '09 at 20:36
  • Even more important than performance: Development is much much faster with oo-databases! – wilth Apr 07 '09 at 08:02
  • "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments", Justin Kruger and David Dunning, Cornell University, Journal of Personality and Social Psychology, 1999, Vol. 77, No. 6., 121-1134. Fortunately it is curable (I'm the evidence): ".. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities." – MaD70 Nov 06 '09 at 01:42
6

Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately it is also highly controversial for management, who believe that "more hands => more code => $$$!"

Martin Wickman
  • 19,662
  • 12
  • 82
  • 106
  • I sometimes dream about extreme extreme programming. How cool would it be if everyone in a group sat down to do the architecture and implementation as a group (4-8 devs). I wonder if it would work or be completely dysfunctional. I tend to think it could work with the "right" group. – oz10 Jan 14 '09 at 03:18
6

1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.

davethegr8
  • 11,323
  • 5
  • 36
  • 61
6

As there are hundreds of answers to this mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator, I am perfectly able to write a good website in Notepad and am able to write basic scripts in several languages. Not only that but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.

Mike B
  • 12,768
  • 20
  • 83
  • 109
  • Goes for GUI design too. Especially with new technologies like WPF making GUI design more like web design with CSS like files defining styles for the interface. – Cameron MacFarland Jan 13 '09 at 23:11
  • I completely agree, unfortunately, I find at most companies I'm the developer and the designer at the same time. Its like saying "hey, you're a good writer, you'd be a great illustrator too!" -- ummm, no. – Juliet Feb 04 '09 at 00:35
6

You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!

Noel Walters
  • 1,843
  • 1
  • 14
  • 20
6

It's not the tools, it's you

Whenever developers try to do something new, like drawing UML diagrams, making charts of any sort, or managing a project, they first look for the perfect tool to solve the problem. After endless searching that turns up no such tool, their motivation starves. All that is left are complaints about the lack of usable software: the plan to get organized died in the absence of a piece of software.

In the end, organization is about you. If you are used to organizing, you can do it with or without the aid of software (and most people do without). If you are not used to organizing, nobody can help you.

So "not having the right software" is just the simplest excuse for not being organized at all.

Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
Norbert Hartl
  • 10,481
  • 5
  • 36
  • 46
  • I think this is true in spite of people agreeing with it (figure that out). I make a pest of myself telling people that to do performance tuning you do not need a tool, in fact you may do better without one. – Mike Dunlavey Oct 31 '09 at 17:42
6

Reflection has no place in production code

Reflection breaks static analysis including refactoring tools and static type checking. Reflection also breaks the normal assumptions developers have about code. For example: adding a method to a class (that doesn't shadow some other method in the class) should never have any effect, but when reflection is being used, some other piece of code may "discover" the new method and decide to call it. Actually determining if such code exists is intractable.

I do think it's fine to use reflection in tests and in code generators.

Yes, this does mean that I try to avoid frameworks that use reflection. (it's too bad that Java lacks proper compile-time meta-programming support)
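
A small Python sketch of the hazard described above (the class and method names are invented for illustration): the call target is assembled from a string at runtime, so refactoring tools and static analysis can't see it, and merely adding a method changes which calls succeed.

```python
class Handler:
    def handle_text(self, payload):
        return "text:" + payload

def dispatch(obj, kind, payload):
    # Reflection: look up a method by a name built at runtime.
    # A rename refactoring of handle_text would silently break this.
    method = getattr(obj, "handle_" + kind, None)
    if method is None:
        raise ValueError("no handler for " + kind)
    return method(payload)

print(dispatch(Handler(), "text", "hi"))   # text:hi

# Adding a method (shadowing nothing) suddenly changes behaviour:
# dispatch() now "discovers" it -- exactly the effect described above.
Handler.handle_json = lambda self, payload: "json:" + payload
print(dispatch(Handler(), "json", "{}"))   # json:{}
```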

Laurence Gonsalves
  • 137,896
  • 35
  • 246
  • 299
  • Wouldn't this negate the possibility of developing an application that supports 3rd party plugins? – Steven Evers Jun 19 '09 at 18:13
  • You're right, I should have been more clear. When I said "reflection" I meant java.lang.reflect. For plug-ins you just need Class.forName() and Class.newInstance(). I still consider the latter a "bad smell" (it's overused) but if you're implementing a system with third-party plugins then that's the way to do it. – Laurence Gonsalves Jun 20 '09 at 01:14
5

Notepad is a perfectly fine text editor. (And sometimes WordPad, for non-Windows line breaks)

  • Edit config files
  • View log files
  • Development

I know people who actually believe this! They will however use an IDE for development, but continue to use Notepad for everything else!

TJR
  • 3,617
  • 8
  • 38
  • 41
  • 1
    That's fair enough, notepad is good at what it does, and what it does is plain text editing. However, when you're editing config files, you want something that can handle indents a little better, maybe some syntax highlighting. With log files, a regex search is invaluable. – Jasarien Oct 13 '09 at 21:28
  • yep and thats why I use EditPlus www.editplus.com great editor!! – Dalbir Singh Oct 22 '09 at 15:48
  • That's why i only use textpad! www.textpad.com awesome for old skoolers! – crosenblum Jan 04 '10 at 20:01
5

All project managers should be required to have coding tasks

On teams I have worked on where the project manager was actually a programmer, one who understood the technical issues well enough to take on coding tasks, decisions were free of the communication disconnect that often occurs on teams where the project manager never touches the code.

Edward Tanguay
  • 189,012
  • 314
  • 712
  • 1,047
  • you: "boss, the code you just checked in is sub-par. please get it up to the standard, or I'll have to back it out." him: "about that raise you wanted..." – just somebody Dec 15 '09 at 01:24
5

If it isn't worth testing, it isn't worth building

Chirantan
  • 15,304
  • 8
  • 49
  • 75
5

Open Source software costs more in the long run

For regular Line of Business companies, Open Source looks free but has hidden costs.

When you take into account inconsistency of quality, variable usability and UI/UX, difficulties of interoperability and standards, increased configuration, associated increased need for training and support, the Total Cost of Ownership for Open Source is much higher than commercial offerings.

Tech-savvy programmer-types take the liberation of Open Source and run with it; they 'get it' and can adopt it and customise it to suit their purposes. On the other hand, businesses that are primarily non-technical, but need software to run their offices, networks and websites, are running the risk of a world of pain and heavy costs in terms of lost time, productivity and (eventually) support fees and/or the cost of abandoning the experiment altogether.

Gordon Mackie JoanMiro
  • 3,499
  • 3
  • 34
  • 42
  • A lot of the cost saving from OSS comes from being able to fix bugs in 3rd party tools. It's not just about license fees. – finnw Feb 11 '10 at 17:02
  • You've undermined your claim to controversy here simply by pointing out that not every tool is best for every job. You need less reason and more dogma. Instead, tell us SQL Server is industrial-strength and MySQL is just a toy. Stack Overflow needs more page views and you are not helping. – Integer Poet Mar 15 '10 at 19:35
  • WTF?? Who mentioned SQL databases? Page views? This comment is baffling. – Gordon Mackie JoanMiro Mar 15 '10 at 21:35
5

Writing extensive specifications is futile.
It's pretty difficult to write correct programs, but compilers, debuggers, unit tests, testers etc. make it possible to detect and eliminate most errors. On the other hand, when you write specs at a level of detail comparable to a program's (i.e. pseudocode, UML), you are mostly on your own. Consider yourself lucky if you have a tool that helps you get the syntax right.

Extensive specifications are most likely bug-riddled.
The chance that the writer got it right on the first try is about the same as the chance that a similarly large program is bug-free without ever being tested. Peer reviews eliminate some bugs, just like code reviews do.

Erich Kitzmueller
  • 36,381
  • 5
  • 80
  • 102
  • 1
    This is controversial only to the extent that you expect a specification to resemble the finished product. If instead the purpose is to make you think through the issues involved, then specifications work great. This is especially true if the finished product doesn't suck, doesn't resemble the spec, and you look back and realize you were able to change your mind effectively because you had gone through the exercise of writing the spec. Note: this only works if you have only smart people on your team. – Integer Poet Mar 15 '10 at 19:25
5

Lower camelCase is stupid and unsemantic

Using lower camelCase makes the name/identifier ("name" used from this point) look like a two-part thing. Upper CamelCase however, gives the clear indication that all the words belong together.

Hungarian notation is different ... because the first part of the name is a type indicator, and so it has a separate meaning from the rest of the name.

Some might argue that lower camelCase should be used for functions/procedures, especially inside classes. This is popular in Java and object oriented PHP. However, there is no reason to do that to indicate that they are class methods, because BY THE WAY THEY ARE ACCESSED it becomes more than clear that these are just that.

Some code examples:

# Java
myobj.objMethod() 
# doesn't the dot and parens indicate that objMethod is a method of myobj?

# PHP
$myobj->objMethod() 
# doesn't the pointer and parens indicate that objMethod is a method of myobj?

Upper CamelCase is useful for class names, and other static names. All non-static content should be recognised by the way they are accessed, not by their name format(!)

Here's my homogeneous code example, where name behaviours are indicated by things other than their names... (also, I prefer underscores to separate words in names).

# Java
my_obj = new MyObj() # Clearly a class, since it's upper CamelCase
my_obj.obj_method() # Clearly a method, since it's executed
my_obj.obj_var # Clearly an attribute, since it's referenced

# PHP
$my_obj = new MyObj()
$my_obj->obj_method()
$my_obj->obj_var
MyObj::MyStaticMethod()

# Python
MyObj = MyClass # copies the reference of the class to a new name
my_obj = MyObj() # Clearly a class, being instantiated
my_obj.obj_method() # Clearly a method, since it's executed
my_obj.obj_var # clearly an attribute, since it's referenced
my_obj.obj_method # Also, an attribute, but holding the instance method.
my_method = my_obj.obj_method # Instance method
my_method() # Same as my_obj.obj_method()
MyClassMethod = MyObj.obj_method # Attribute holding the class method
MyClassMethod(my_obj) # Same as my_obj.obj_method()
MyClassMethod(MyObj) # Same as calling MyObj.obj_method() as a static classmethod

So there goes, my completely obsubjective opinion on camelCase.

Tor Valamo
  • 33,261
  • 11
  • 73
  • 81
5

Programmers need to talk to customers

Some programmers believe that they don't need to be the ones talking to customers. It's a sure way for your company to write something absolutely brilliant that nobody can work out the purpose of, or how it was intended to be used.

You can't expect product managers and business analysts to make all the decisions. In fact, programmers should be making 990 out of the 1000 (often small) decisions that go into creating a module or feature, otherwise the product would simply never ship! So make sure your decisions are informed. Understand your customers, work with them, watch them use your software.

If you're going to write the best code, you want people to use it. Take an interest in your user base and learn from the "dumb idiots" who are out there. Don't be afraid, they'll actually love you for it.

Vincent
  • 2,963
  • 1
  • 19
  • 26
5

In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)

Giraffe
  • 1,993
  • 3
  • 20
  • 20
  • That's good. Along the same lines is casually insulting people in the name of "truth". That particular virus seems to have a reservoir in grad schools, like the one I attended. – Mike Dunlavey Nov 03 '09 at 14:38
5

Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.
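
A rough Python sketch of the first claim (the function names are invented; `sys._current_frames` is a real, if CPython-specific, API): repeatedly "halting" a busy thread and noting where it is tends to pinpoint the hot spot, exactly as pausing in a debugger would.

```python
import collections
import sys
import threading
import time

def busy():
    # The deliberate "performance problem": a slow inner loop.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

samples = collections.Counter()

def sample(thread_ident, n=30, interval=0.005):
    # Poor man's profiler: periodically grab the target thread's
    # innermost frame, as repeatedly pausing a debugger would.
    for _ in range(n):
        frame = sys._current_frames().get(thread_ident)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

worker = threading.Thread(target=lambda: [busy() for _ in range(10)])
worker.start()
sample(worker.ident)
worker.join()
print(samples.most_common(1))  # 'busy' should dominate the samples
```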

Jim Ferrans
  • 30,582
  • 12
  • 56
  • 83
Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
  • By manually halting you're acting as a simple sampling profiler, so there's certainly some logic behind it, but I tend to find that instrumenting profilers give better results on the whole (albeit with more performance impact on the running application). – Greg Beech Jan 02 '09 at 14:59
  • Yes it is a sampling method. The difference is that you're trading precision of timing for precision of insight. Concern about slowing down the app is confusing means with ends. You're trying to find cycles spent for poor reasons. This does not require running fast. – Mike Dunlavey Jan 02 '09 at 15:07
  • I would humbly assert, from logic as well as experience, low-frequency sampling of the program state beats any profiler for the purpose of finding things that can be optimized. However, for asynchronous message-driven software, other methods are needed. – Mike Dunlavey Jan 02 '09 at 15:25
  • What I do think profilers are very good for is monitoring program health, to see if performance problems are creeping in as development proceeds. – Mike Dunlavey Jan 02 '09 at 15:30
  • The "best" way to analyze requirements varies both on who is giving them, and who is receiving them. Therefore discussion around the "best" way to do that is not very quantifiable. – Kendall Helmstetter Gelner Jan 05 '09 at 05:49
  • @Kendall: I've never seen "any" work in how to analyze requirements, and propose and evaluate alternative solutions, let alone "best". If we were doctors, we would know all about treatments but diagnosis would be "obvious". – Mike Dunlavey Jan 05 '09 at 13:57
5

Arrays should by default be 1-based rather than 0-based. This is not necessarily the case with system implementation languages, but languages like Java swallowed more C oddities than they should have. "Element 1" should be the first element, not the second, to avoid confusion.
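
A tiny Python sketch of both sides of that trade-off (the list contents are invented for illustration): the "element 1" confusion the paragraph objects to, and the half-open-range algebra usually offered in 0-based indexing's defence.

```python
items = ["first", "second", "third"]

# With 0-based arrays, "element 1" is the *second* element --
# the confusion objected to above:
assert items[1] == "second"

# What 0-based buys in exchange: half-open ranges [lo, hi) whose
# length is hi - lo, and splits that recombine with no +1/-1:
lo, mid, hi = 0, 1, len(items)
assert items[lo:mid] + items[mid:hi] == items
assert len(items[lo:hi]) == hi - lo
print("both properties hold")
```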

Computer science is not software development. You wouldn't hire an engineer who studied only physics, after all.

Learn as much mathematics as is feasible. You won't use most of it, but you need to be able to think that way to be good at software.

The single best programming language yet standardized is Common Lisp, even if it is verbose and has zero-based arrays. That comes largely from being designed as a way to write computations, rather than as an abstraction of a von Neumann machine.

At least 90% of all comparative criticism of programming languages can be reduced to "Language A has feature C, and I don't know how to do C or something equivalent in Language B, so Language A is better."

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.

David Basarab
  • 72,212
  • 42
  • 129
  • 156
David Thornley
  • 56,304
  • 9
  • 91
  • 158
  • Your last sencence is +1. The rest is IMHO wrong because zero-based indices are very useful because make cause the indices of a container of size N to be the set of integers in the half-open interval [0, N[. This has some nice mathematical/algorithmic/practical consequences. – Konrad Rudolph Jan 02 '09 at 15:10
  • Personally, I haven't seen as much use for the half-open intervals as you have. If you could leave a pointer in a comment, I'd be interested. – David Thornley Jan 02 '09 at 15:39
  • +1 because A) I disagree with paragraph 1, so I guess it answers the question, and, 2) I like the other paragraphs :) – Mike Dunlavey Jan 02 '09 at 16:23
  • Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. - Stan Kelly-Bootle – Gavin Miller Jan 02 '09 at 17:40
  • Yup, +1 for your final sentence. –  Jan 02 '09 at 19:46
  • +1 for learning math, -1 for saying Lisp is best (it takes more than parentheses to make a good language) – Lance Roberts Jan 06 '09 at 16:37
  • in smalltalk arrays start with 1 – nes1983 Jan 25 '09 at 18:46
  • It's just a convention and it doesn't matter. – Captain Sensible Jan 26 '09 at 10:19
  • 1
    Can't agree with the 1-based arrays, either. Would make add/remove elements much more complex (because you'd have to rebase your indexes during the operation). I'd opt for -1 being the last element in an array, though :) – Aaron Digulla Feb 27 '09 at 15:54
  • What's the difference between 0-based and 1-based arrays for add/remove? Python's notation using negative numbers for measuring from the end is kinda neat. – David Thornley Feb 27 '09 at 16:54
5

Goto is OK! (is that controversial enough)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this but still.
Also, goto is the building block of Assembly language. No if statements for you! :)

Lucas Jones
  • 19,767
  • 8
  • 75
  • 88
  • bash has break n; and continue n; instead. imho the only reason to use goto is when you don't have those (or don't have labelled break/continue) – Johannes Schaub - litb Jan 02 '09 at 17:04
  • In assembly everything is implemented as goto (jump/branch). Most languages have if and some form of loop, but many are lacking try/catch or break/continue all of which can be implemented by the goto. Admittedly it can be used really badly so be careful :) – Cervo Jan 02 '09 at 18:52
  • I see headaches in making gotos in a language that is parsed while running. – Joshua Jan 02 '09 at 22:45
  • 2
    @Joshua, you mean interpreted languages? A language like Basic used to be an interpreted language and it certainly had the goto statement. How old are you? – tuinstoel Jan 04 '09 at 21:19
  • @Joshua, I'd say it was simpler. I wrote a simple interpreted language (by "simple", I mean "didn't really do anything at all" :D) which had goto. No conditions though. – Lucas Jones Feb 15 '09 at 11:48
  • and there are `cmp` statements (`if` statements) in Assembly - otherwise you'd never know when to `jmp` – warren Oct 22 '09 at 05:16
5

Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.
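
A minimal Python sketch of keeping the schema an implementation detail (the class, table, and method names are invented; `sqlite3` stands in for whatever database the facade hides): the business layer asks for domain operations, and only the repository knows how the tables are laid out.

```python
import sqlite3

class AccountRepository:
    # The facade: only this class knows the table layout, so the
    # schema can change without touching the business layer.
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE accounts "
            "(id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

    def open_account(self, owner, balance):
        cur = self.conn.execute(
            "INSERT INTO accounts (owner, balance) VALUES (?, ?)",
            (owner, balance))
        return cur.lastrowid

    def balance_of(self, account_id):
        row = self.conn.execute(
            "SELECT balance FROM accounts WHERE id = ?",
            (account_id,)).fetchone()
        return row[0]

# Business-layer code speaks in domain terms only:
repo = AccountRepository(sqlite3.connect(":memory:"))
acct = repo.open_account("alice", 100.0)
print(repo.balance_of(acct))  # 100.0
```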

Mike Hofer
  • 16,477
  • 11
  • 74
  • 110
  • 1
    I agree with the principal, but the problem is in real world IT development you often have existing data stores that you must make use of - while total constraint to existing code might be bad you can save a ton of development effort if you conform to data standards that exist when you can. – Kendall Helmstetter Gelner Jan 05 '09 at 05:28
  • Hey, someone who understands the real purpose of stored procedures! – Lurker Indeed Jan 12 '09 at 18:04
  • 2
    Hmmm. Take the data out of a system and what do you have? A system that computes nothing. Put bad data into your system and what happens? Crash. Analogy: Bake your bricks (create strong data types) and mix your cement (enforce the constraints), then design/build your system with perfect blocks. – Triynko Apr 17 '09 at 02:53
5

Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if 'experts' in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".

  • 1
    Couldn't agree with this more! It shows just how fashion conscious our industry really is. When I looked into what all the AJAX fuss was about I discovered I had already been doing it for 2 years. But it takes a marketing style buzzword to make stuff happen. – AnthonyWJones Jan 02 '09 at 21:24
  • A vision on the history of AJAX: http://www.theregister.co.uk/2008/11/27/microsoft_ignored_ajax/ – tuinstoel Jan 03 '09 at 07:54
  • 1
    I remember when it was called DHTML :P – GhassanPL Jan 09 '09 at 18:21
5

Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
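
A small Python sketch of that idea (the type and function names are invented for illustration): encode what the data is *for*, and the representation's width stops mattering while wrong usage becomes visible.

```python
from dataclasses import dataclass

# A domain type for "day of month": what matters is the *meaning*
# (1..31), not whether the storage is char, short, int, or long.
@dataclass(frozen=True)
class DayOfMonth:
    value: int
    def __post_init__(self):
        if not 1 <= self.value <= 31:
            raise ValueError(f"not a day of month: {self.value}")

@dataclass(frozen=True)
class Month:
    value: int
    def __post_init__(self):
        if not 1 <= self.value <= 12:
            raise ValueError(f"not a month: {self.value}")

def format_date(day: DayOfMonth, month: Month) -> str:
    return f"{day.value:02d}/{month.value:02d}"

print(format_date(DayOfMonth(7), Month(3)))  # 07/03
# With plain ints, swapped day/month arguments look fine;
# with domain types, a type checker flags the mistake.
```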

David Thornley
  • 56,304
  • 9
  • 91
  • 158
  • It makes programs quite quite slower. Compare python to C or C++ and you'll see a huge performance difference when working with integers. It will avoid overflows at the expense of full checking all the time. That is a source of premature-pessimization in many cases. – David Rodríguez - dribeas Jan 05 '09 at 13:53
  • 1
    In at least Common Lisp, you can specify data types later, once you get the program working correctly. That's how CMU Common Lisp beat out a Fortran compiler in a number-crunching contest once. – David Thornley Jan 09 '09 at 15:57
  • That's basically Alan Perlis: "Functions delay binding: data structures induce binding. Moral: Structure data late in the programming process." – just somebody Dec 15 '09 at 02:17
5

Inheritance is evil and should be deprecated.

The truth is aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.

vava
  • 24,851
  • 11
  • 64
  • 79
  • When I teach this, I make a big point of telling people that I'm only teaching it because they have to know the syntax to do it. There are other things we have to teach because there is special syntax involved, and people take what they learn from special syntax and use it all the time. – brian d foy Jan 03 '09 at 05:11
  • My controversial opinion in this regard is anyone who describes a technology as "evil" is evil. Patterns don't kill people, people kill people. – dreftymac Jan 03 '09 at 05:11
  • I don't think I agree, but I found your post interesting: upvoted. – Jay Bazuzi Jan 03 '09 at 17:37
  • "Static typed OOP languages can't avoid inheritance," -- OCaml is a statically typed OOP language, but it also supports structural typing ((http://en.wikipedia.org/wiki/Structural_type_system), which is more or less "duck typing for static languages". It also downplays the role of inheritance. – Juliet Jan 04 '09 at 03:46
  • Even in statically typed languages inheritance is overused. Prefer composition to inheritance in each and any language. – David Rodríguez - dribeas Jan 05 '09 at 18:28
  • "Static typed OOP languages can't avoid inheritance," Of course they can, with interfaces, delegations and programming by contract. Apart from that, and the "in all cases" part (I'd have said "in most cases"), I agree. – fbonnet Jan 15 '09 at 09:03
5

Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat The finest example of Microsoft's efforts for web site development (Expressionless Web 2) is also the finest example of slow, bloated cr@pw@re ever written. (try Web Studio instead)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from key are confused. Some people don't mind this and others just hate it so you'll find different policies in different code so you never know what to call something.

I fall into the former category. I choose names VERY carefully and if you cannot figure out in one glance that the K belongs to Key then english is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

  • It makes C functions very different from any C++-related names.

Example

int some_bloody_function() { }

These "standards" and conventions are simply arbitrary decisions handed down through time. While they make a certain amount of logical sense, they clutter up code and make something that should be short and sweet to read clumsy, long-winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores but for global variables only as they are few and far between and they stick out clearly. Other than that, a well thought out CamelCaps() function/ variable name has yet to let me down!

Mike Trader
  • 8,564
  • 13
  • 55
  • 66
  • Any justification for your positions? – Jay Bazuzi Jan 03 '09 at 17:33
  • So you see no value in using style (camelCase vs CamelCase vs ALL_CAPS) to indicate whether the reference is to a Class a variable an const or whatever? I can't agree. It seems you may not be aware of naming conventions as an idea. e.g. http://www.possibility.com/Cpp/CppCodingStandard.html#names – jwpfox Jan 04 '09 at 11:44
5

A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non-technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs are in general harder to write, harder to read and harder to optimize than their 3GL equivalents.

4GLs should be avoided as far as possible.

Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
Alterlife
  • 6,557
  • 7
  • 36
  • 49
5

Debuggers are a crutch.

It's so controversial that even I don't believe it as much as I used to.

Con: I spend more time getting up to speed on other people's voluminous code, so anything that help with "how did I get here" and "what is happening" either pre-mortem or post-mortem can be helpful.

Pro: However, I happily stand by the idea that if you don't understand the answers to those questions for code that you developed yourself or that you've become familiar with, spending all your time in a debugger is not the solution, it's part of the problem.

Before hitting 'Post Your Answer' I did a quick Google check for this exact phrase, it turns out that I'm not the only one who has held this opinion or used this phrase. I turned up a long discussion of this very question on the Fog Creek software forum, which cited various luminaries including Linus Torvalds as notable proponents.

Liudvikas Bukys
  • 5,790
  • 3
  • 25
  • 36
  • I totally agree, though I'd go a bit further: *testing your code* is a crutch. I know too many programmers who don't concentrate enough when writing code, and rely on failed compiles and runtime errors to save them... And how many bugs *don't* get caught? – Artelius Jan 16 '10 at 07:41
  • -1 There is nothing wrong with using a crutch when your leg is broken - why should there be anything wrong with using one when your code is broken? – Kramii Apr 08 '10 at 11:18
5

There are far too many programmers who write far too much code.

keithb
  • 103
  • 4
5

Separation of concerns is evil :)

Only separate concerns if you have a good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.

  • +1 for the Dijkstra quote... but I disagree with you... so +1 for the controversial opinion... everything in moderation. – oz10 Jan 14 '09 at 03:16
5

I hate universities and institutes offering short courses teaching programming to newcomers. It is an outright disgrace and shows contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE, by books like Morris Mano's Computer System Architecture, and then taught the concept of instructing the machine to solve problems, instead of etching the semantics and syntax of one programming language into them.

Also, I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems, or even discounted office suites, let alone the development software juggernaut (compilers, IDEs etc). This prompts theft and piracy, and makes the act of copying and stealing software from their institutes' libraries seem justified.

Again, they are taught to use certain products, not the fundamental ideas.

Think about it: what if you were taught only that 2 x 2 is 4, and not the concept of multiplication?

Or if you were taught how to measure the length of a pole inclined against the compound wall of your school, but not the Pythagorean theorem?

TheVillageIdiot
  • 40,053
  • 20
  • 133
  • 188
5

Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)

fbonnet
  • 2,325
  • 14
  • 23
5

Microsoft Windows is the best platform for software development.

Reasoning: Microsoft spoils its developers with excellent and cheap development tools, the platform and its APIs are well documented, and the platform is evolving at a rapid rate, which creates a lot of opportunities for developers. The OS has a large user base, which is important for obvious commercial reasons; there is a big community of Windows developers; and I haven't yet been fired for choosing Microsoft.

Captain Sensible
  • 4,946
  • 4
  • 36
  • 46
  • There are plenty of free well documented stuff for Linux, and the message boards are filled with activity. I've done both and I always have just as much hassle setting up a development environment on either OS. – Bernard Igiri Feb 04 '09 at 15:44
  • 1
    I think the keyword here is "spoils". Little else in my life (no even the bullies at school) have caused me so much pain and suffering as anything which originated from M$. – Aaron Digulla Mar 02 '09 at 09:04
  • Microsoft Windows is the best platform for developing Desktop Applications. That isn't controversial. It is the worst platform for developing anything low level - such as filesystems, or kernel code. It is also worse in general for webapps. – nosatalian May 31 '09 at 02:34
  • 1
    The only platform that 'spoils' its dev with "excellent and cheap development tools" is Apple with Xcode. Sure - VisualStudio Express is free. But VS isn't. Linux tools are just as free as the Mac OS X ones, but harder to setup merely because you don't just copy Xcode to your Applications folder and start going. – warren Oct 22 '09 at 04:59
  • I never had the same experience with windows. I switched to Linux and am much happier with it. – Shawn Buckley Jan 19 '10 at 00:07
5

I can live without closures.

Looks like nowadays everyone and their mother wants closures to be present in a language, because they are the greatest invention since sliced bread. And I think it is just hype.
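For readers weighing that claim, here is a minimal sketch of what a closure actually buys, in Python with illustrative names: a function that captures and privately retains state, without needing a class or a global.

```python
def make_counter(start=0):
    # 'count' survives after make_counter returns: the inner
    # function closes over it instead of relying on a global.
    count = start

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

tick = make_counter()
tick()  # returns 1
tick()  # returns 2
```

Whether this is worth new language machinery, or just a fashionable substitute for a small object, is exactly the dispute in this answer and its comments.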

serg
  • 109,619
  • 77
  • 317
  • 330
  • I thought along the same lines before I used LINQ, at which point I became a complete convert. – Jon Skeet Mar 16 '09 at 06:18
  • I agreed before I used them with multithreading in C#. Access to the previous thread's local variables is enormously useful and greatly simplifies syntax. – Steve Apr 04 '09 at 12:52
5

Keep your business logic out of the DB. Or at a minimum, keep it very lean. Let the DB do what it's intended to do. Let code do what code is intended to do. Period.

If you're a one-man show (basically arrogant and egotistical, not listening to the wisdom of others just because you're in control), do as you wish. I don't believe you're that way, since you're asking to begin with. But I've met a few when it comes to this subject and felt the need to specify.

If you work with DBAs but do your own DB work, keep clearly defined partitions between your business objects, the gateway between them and the DB, and the DB itself.

If you work with DBAs and aren't allowed to do your own DB work (either by policy or because they're prima donnas), you're very close to being a fool, placing your reliance on them to get anything done by putting code-dependent business logic in your DB entities (sprocs, functions, etc.).

If you're a DBA, make developers keep their DB entities clean & lean.

Community
  • 1
  • 1
TheHolyTerrah
  • 2,859
  • 3
  • 43
  • 50
  • I'm keeping my fingers crossed I don't have to work with you, or ever maintain your leavings. Being a one man show should be an incentive to do as well as possible--because without other developers to cross-check your work you are already predisposed to writing strange and queer code. – STW Apr 21 '09 at 20:19
  • 1
    As for the database: if your database is just a bucket that holds anything then I agree that business logic has no place (SQLite is a great DB for these systems)--however if you are holding business data in the database then it is ultimately the DBs responsibility to ensure that its contents are valid. This is never more true than in cases where a database is consumed or maintained by multiple clients. – STW Apr 21 '09 at 20:20
  • LOL...by saying, "do as you wish", I wasn't saying I do so. I was pointing more toward those who believe they're an island and don't need to listen to anyone. Basically, arrogant & egotistical dev's who believe their crack don't stink. I've met a few. My apologies for not clarifying. I'll edit my statement after this comment. – TheHolyTerrah Apr 22 '09 at 14:41
  • And sorry, but I disagree with your last statement. It's not up to the DB to validate data beyond relational database theory of data retention. It "can", but ultimately it's up to those placing the data there. Most enterprise orgs don't allow their dev's the DBA hat. The DBA's make sure things are run properly according to standards and know nothing of the business behind the data. – TheHolyTerrah Apr 22 '09 at 14:43
  • In those cases, business logic should predominantly be kept where it can be controlled by those who know the business logic: in front of the DAL and out of the database. – TheHolyTerrah Apr 22 '09 at 14:43
  • 1
    @Bodosky: If integrity of data is spread in each application that access the data I wish good luck to your clients/employer. A DB **Architect** necessarily needs to know intimately about the business, a DB **Administrator** not. – MaD70 Nov 06 '09 at 02:05
  • Agreed. However, most enterprises would be lucky to get an architect vs. an admin. Why? Because the enterprise just won't loosen the purse strings enough to pay those folks what they're worth to keep them around long enough to have a vested interest and thereby become intimate with the business. So they end up with DB admins who aren't as interested in the business as they are in RDBMS principles. – TheHolyTerrah Nov 11 '09 at 17:19
  • It's not unusual in an Oracle shop to have large parts of the application inside the DB. PL/SQL is actually a good language to express business logic. – Erich Kitzmueller Nov 18 '09 at 20:34
  • I'm actually an Oracle n00b (slightly more than a year now). And I'm finding out that PL/SQL is much different than SQL Server in a lot of ways. So my paradigm is slowly shifting concerning your comment. At least where Oracle is concerned. However, even in the shop I'm at now, there's minimal BL and it all resides in packages. I'd be very curious to see how performance is affected by tens of millions of transactions per day. – TheHolyTerrah Nov 27 '09 at 16:22
5

BAD IDEs make the programming language weak

Good programming IDEs really make working with certain languages easier and better to oversee. I have been a bit spoiled in my professional career: the companies I worked for always had the latest Visual Studio ready to use.

For about 8 months, I have been doing a lot of Cocoa next to my work, and the Xcode editor makes working with that language just way too difficult. Overloads are difficult to find, and the overall way of handling open files makes your screen really messy, really fast. It's really a shame, because Cocoa is a cool and powerful language to work with.

Of course die-hard Xcode fans will now vote down my post, but there are so many IDEs that are really a lot better.

People making a switch to IT who just shouldn't

This is a copy/paste from a blog post of mine, made last year.


The experiences I have are mainly with the Dutch market, but they might also apply to any other market.

We (as I group all Software Engineers together) are currently in a market that might look very good for us. Companies are desperately trying to get Software Engineers (from now on: SEs), no matter the price. If you switch jobs now, you can demand almost anything you want. In the Netherlands there is now a trend of even giving 2 lease cars with a job, just to get you to work for them. How weird is that? How am I gonna drive 2 cars at the same time??

Of course this sounds very good for us, but it also creates a very unhealthy situation.

For example: if you are currently working for a company which is growing fast and you are trying to attract more co-workers, to finally get some serious software development off the ground, there is no one to be found without offering sky-high salaries. Trying to find quality co-workers is very hard. A lot of people are attracted to our kind of work because of the good salaries, but this also means that a lot of people without the right passion are entering our market.

Passion, yes, I think that is the right word. When you have passion for your job, your job won't stop at 05:00 PM. You will keep refreshing all of your development RSS feeds all night. You will search the internet for the latest technologies that might be interesting to use at work. And you will start about a dozen new 'promising' projects a month, just to see if you can master that latest technology you read about a couple of weeks ago (and find a useful way of actually using it).

Without that passion, the market might look very nice (because of the cars, the money and, of course, the hot girls we attract), but I don't think the job will stay interesting for very long, unlike, let's say, fireman or fighter pilot.

It might sound like I am trying to protect my own job here, and partly that is true. But I am also trying to protect myself against the people I don't want to work with. I want to have heated discussions about stuff I read about. I want to be able to spar with people that have the same 'passion' for the job as I have. I want colleagues who are working with me for the right reasons.

Where are the people I am looking for?!

Wim Haanstra
  • 5,918
  • 5
  • 41
  • 57
  • Cocoa isn't a language - it's an API http://en.wikipedia.org/wiki/Cocoa_(API) in Objective-C – warren Oct 22 '09 at 05:08
5

Commenting is bad

Whenever code needs comments to explain what it is doing, the code is too complicated. I try to always write code that is self-explanatory enough to not need very many comments.
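One way to illustrate the claim (a hypothetical Python example): instead of commenting a cryptic expression, give the code a name that carries what the comment would have said.

```python
SECONDS_PER_DAY = 86_400

# Comment-dependent version this replaces:
#   def f(ts):
#       # round the timestamp down to midnight
#       return ts - (ts % 86400)

def truncate_to_midnight(timestamp: int) -> int:
    # The function and constant names now say what the old comment
    # had to explain, so the comment is no longer needed.
    return timestamp - (timestamp % SECONDS_PER_DAY)
```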

Zifre
  • 26,504
  • 11
  • 85
  • 105
  • I was going to vote this down, but then I realized these are SUPPOSED to be controversial, and voted it up. – GoatRider Apr 04 '09 at 12:52
  • 1
    I don't think good code replaces comments any more than comments replace good code. You have to do both. Plus, these days there's a half decent chance that your comments might well be generating the documentation (and IntelliSense) so you'd better get used to adding those comments! – Tim Long May 17 '09 at 04:55
5

HTML 5 + JavaScript will be the most used UI programming platform of the future. Flash, Silverlight, Java applets, etc. are all going to die a silent death.

HashName
  • 651
  • 7
  • 17
5

Nobody Cares About Your Code

If you don't work on a government security clearance project and you're not in finance, odds are nobody cares what you're working on outside of your company/customer base. No one's sniffing packets or trying to hack into your machine to read your source code. This doesn't mean we should be flippant about security, because there are certainly a number of people who just want to wreak general havoc and destroy your hard work, or access stored information your company may have such as credit card data or identity data in bulk. However, I think people are overly concerned about other people getting access to your source code and taking your ideas.

brokenbeatnik
  • 720
  • 6
  • 15
  • Hmmm, so basically you've combined "don't take yourself so seriously, nobody else does" with "it's not the implementation that is valuable but the idea". – STW Apr 21 '09 at 21:01
  • ...and I forget "why lock the door, if someone wants to break in it's one more thing to have to replace" – STW Apr 21 '09 at 21:02
  • I disagree with your assessment. To follow your analogy, it's more like thinking someone wants to break into your house to steal some timbers out of or take pictures of your collection of model ships that you painstakingly built because the finished ships might be valuable on the open market. If they bother to break in, they'd much rather just take your cash or TV. My third sentence clearly states that I think security is still important, just for different reasons. – brokenbeatnik Apr 22 '09 at 15:30
4

Linq2Sql is not that bad

I've come across a lot of posts trashing Linq2Sql. I know it's not perfect, but what is?

Personally, I think it has its drawbacks, but overall it can be great for prototyping, or for developing small to medium apps. When I consider how much time it has saved me from writing boring DAL code, I can't complain, especially considering the alternatives we had not so long ago.

Dkong
  • 2,748
  • 10
  • 54
  • 73
4

There is no difference between software developer, coder, programmer, architect ...

I've been in the industry for more than 10 years and still find it absolutely idiotic to try to distinguish between these "roles". You write code? You're a developer. You spend all day drawing fancy UML diagrams? You're a... well, I have no idea what you are; you're probably just trying to impress somebody. (Yes, I know UML.)

user188658
  • 99
  • 4
4

"Programmers must do programming on the side, or they're never as good as those who do."

As kpollock said, imagine saying that for doctors, or soldiers...

The main thing isn't so much whether they code, but whether they think about it. Computing science is an intellectual exercise; you don't necessarily need to code to think about the problems that make you better as a programmer.

It's not like Einstein got to play with particles and waves when he was off his research.

Calyth
  • 1,673
  • 3
  • 16
  • 26
4

Ternary operators absolutely suck. They are the epitome of lazy-ass programming.

user->isLoggedIn() ? user->update() : user->askLogin();

This is so easy to screw up. A little change in revision #2:

user->isLoggedIn() && user->isNotNew(time()) ? user->update() : user->askLogin();

Oh yeah, just one more "little change."

user->isLoggedIn() && user->isNotNew(time()) ? user->update() 
    : user->noCredentials() ? user->askSignup()
        : user->askLogin();

Oh crap, what about that OTHER case?

user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned() ? user->update() 
    : user->noCredentials() || !user->isBanned() ? user->askSignup()
        : user->askLogin();

NO NO NO NO. Just save us the code change. Stop being freaking lazy:

if (user->isLoggedIn()) {
    user->update();
} else {
    user->askLogin();
}

Because doing it right the first time will save us all from having to convert your crap ternaries AGAIN and AGAIN:

if (user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned()) {
    user->update();
} else {
    if (user->noCredentials() || !user->isBanned()) {
        user->askSignup();
    } else {
        user->askLogin();
    }
}
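The comments below suggest a narrower rule: use the conditional form only for choosing a value, never for branching with side effects. A minimal sketch of that rule (written as Python's conditional expression; the names are illustrative):

```python
def greeting(is_married: bool) -> str:
    # A conditional expression choosing a simple value: no side effects,
    # no chaining, so unlike the branching examples above it stays readable
    # even as the surrounding code changes.
    title = "Mrs." if is_married else "Mr."
    return "Hello, " + title
```

The moment a second condition or a side effect appears, fall back to a plain if/else.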
thesmart
  • 2,993
  • 2
  • 31
  • 34
  • 4
    That'd be the issue of using the wrong paradigm for what you're trying to do. If you want to branch, use a goddamn `if`. If you want to print slightly differnt text (Say "Mr." or "Mrs" in a greeting), use the conditional operator – 3Doubloons Nov 26 '09 at 06:31
  • use them for assignment, and not for branching. its a good replacement for `if(c) { x=a; } else { x=b; }`, which becomes `x=c?a:b;` but not for anything else! – Frunsi Dec 15 '09 at 00:48
  • Nope. I'm sorry. I agree completely with the OP in that the ternary operator sucks, because you are giving some nameless/faceless dev out there the opportunity to make code much harder to read. And that's on top of the fact that, as he says, its a duplicated language feature anyway. Its okay to be impressed by this sort of stuff when you're in college. As a professional, you're part of a greater development machine that relies on readability. – Engineer Sep 17 '10 at 15:34
4

Procedural programming is fun. OOP is boring.

Peter
  • 5,138
  • 5
  • 29
  • 38
4

Small code is always better, but then complex ?: chains instead of if-else made me realize that sometimes longer code is more readable.

Vinay Pandey
  • 8,589
  • 9
  • 36
  • 54
4

Zealous adherence to standards stands in the way of simplicity.

MVC is over-rated for websites. It's mostly just VC, sometimes M.

Justin Johnson
  • 30,978
  • 7
  • 65
  • 89
4

Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.

Alexey Romanov
  • 167,066
  • 35
  • 309
  • 487
4

The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error is never materialized.

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx

casperOne
  • 73,706
  • 19
  • 184
  • 253
  • I like it! Now I wish that all the IDisposable objects in the framework would do this. – Jay Bazuzi Jan 02 '09 at 22:33
  • On a related note, MemoryStream is disposable but safe to leak. Think about it. – Joshua Jan 02 '09 at 22:51
  • Joshua: The fact that MemoryStream is disposable is an implementation detail, and as we all know, it's not good practice to rely on implementation details if you don't have to. It could very easily be changed to use a unmanaged memory pointer for it's buffer in the future. Think about that. =) – casperOne Jan 03 '09 at 00:18
  • I would prefer that all types that implement IDisposable were forced to be stack allocated, or some similar concept. – Daniel Paull Jan 03 '09 at 01:13
4

QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far too few of them... far too few.)

Sam
  • 2,939
  • 19
  • 17
4

Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.

  • I don't know how controversial that opinion is. What you describe seems to be the well-known “Spike Solution” pattern http://c2.com/xp/SpikeSolution.html and is a good pattern to have. – bignose Apr 14 '09 at 04:11
4

Reuse of code is inversely proportional to its "reusability", simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. I.e., is it better to have a system reset after crashing, or to have it hang indefinitely because the failure handler has a bug?
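The fail-fast position in the second paragraph can be sketched as a guard that deliberately crashes on a broken invariant rather than limping on (a minimal illustration in Python; the function and its invariant are hypothetical):

```python
def apply_discount(price: float, discount: float) -> float:
    # Fail fast: a discount outside [0, 1] means a bug upstream.
    # Crashing here leaves a clear stack trace at the point of failure,
    # instead of silently propagating a corrupt price through the system.
    assert 0.0 <= discount <= 1.0, f"invalid discount: {discount}"
    return price * (1.0 - discount)
```

A buggy "recovery" path that clamps or ignores the bad value would hide the defect, which is precisely the hang-forever scenario the answer warns about.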

Matthias Wandel
  • 6,383
  • 10
  • 33
  • 31
  • "failures should take down the system" - you're definitely on crack with this one! My entire system should ***NEVER*** die because **one** component hicoughed – warren Oct 22 '09 at 05:12
4

Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, ROR is not all it is cracked up to be by the Blogsphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.

Alex UK
  • 57
  • 2
  • oop is really bad for small-size software because it has so much overhead. but, my prof said that it's super good for large scale software, and I think you can tell by my wording that I don't know so I will just believe my prof until proven false =P – hasen Jan 29 '09 at 11:28
4

Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess or an estimate; it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes that our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than historically the culture was towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (but would never in a million years admit that it was true, a "plot by management" is a better reason for most people).

How long will we keep shooting ourselves in the foot, before we wake up and realize that we the ones holding the gun, pointing it and also pulling the trigger?

Paul.

Paul W Homer
  • 2,728
  • 1
  • 19
  • 25
  • That's just a lesson one has to learn through time and experience. Nevertheless, the "problem" won't get fixed because the "novices" don't realize or call it out, and too many "experienced" suffer from "not invented here" syndrome. By the way, this influences *every* profession to some extent. – dreftymac Jan 04 '09 at 04:42
  • You might want to check what the original meaning of "shoot yourself in the foot" means (as opposed to the 'new' meaning) and then think if maybe creating a bit of pain and confusion for the return of long-term survival is what is going on here. There is a survival strategy in hard to maintain code. – jwpfox Jan 04 '09 at 11:31
  • That type of survival strategy only works in a few large static corporate environments. If hard-to-maintain code causes the project to fail and be disbanded, it provides no long term gain. But even if it works, it's a miserable existence ... – Paul W Homer Jan 04 '09 at 17:35
  • 1
    Kudos for pointing this out. The truth is that sloppiness and heroism in software development are NOT self-evident. It's an effect of the (SW development) culture of the 60s/70s. – Thorsten79 Jan 05 '09 at 12:39
  • "If we were building houses like we're building software, the first woodpecker would be the end of mankind." -- dunno who said that but he is still right ;) – Aaron Digulla Mar 02 '09 at 14:14
  • You sense the disease but the diagnosis is incorrect: writing software is **not** a manufacturing process, period. It is a wrong analogy. "Manufacturing" is reproducing a physical "thing" n times, starting from a blueprint. Now this process is not perfect, so you need to control this process of reproduction. Writing software is more akin to design, i.e. producing the blueprint. Given the blueprint (the program) a computer perfectly reproduce it, i.e. it accurately solves every instance of the problem for which it was designed (it "manufacture" each solution, given the blueprint). – MaD70 Nov 06 '09 at 02:50
  • Now, designing something in engineering disciplines is certainly a creative process but equally certainly it is **not** unconstrained, undisciplined. For example: structural engineers use math, sciences and other disciplines. Their practice is founded on knowledge, theory, experience. What you correctly describe, with an uneasy that I concur, is a field not even at a level of good craftsmanship, not engineering and certainly not art. – MaD70 Nov 06 '09 at 03:05
4

Uncommented code is the bane of humanity.

I think that comments are necessary for code. They visually divide it up into logical parts, and provide an alternative representation when reading code.

Documentation comments are the bare minimum, but using comments to split up longer functions helps when writing new code and allows quicker analysis when returning to existing code.

Jeff M
  • 700
  • 3
  • 10
  • "using comments to split up longer functions" means your functions are too long. – Jay Bazuzi Jan 05 '09 at 15:44
  • If you can't understand code WITHOUT comments, you can't understand it WITH, either. – Aaron Digulla Mar 02 '09 at 14:14
  • Voted up, because this surely is controversial; I disagree with you :-) I'm on the side that says “Don't comment bad code, re-write it so it's clear”. If your justification for comments is to break up code visually, that's far better done with separate well-named functions with whitespace between. – bignose Apr 14 '09 at 04:15
4

We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But, that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer." or when a job offer says, "Java developer". After a few years experience of being a developer, new languages and APIs really shouldn't be intimidating and going from never seeing a language to being productive with it shouldn't take very long at all. I know this is controversial but it's my opinion.

  • You're correct, but after investing years mastering a language, starting over in a new language has somewhat less appeal. It isn't necessarily fear, but the joy of higher order productivity that stems the desire to learn something new. – oz10 Jan 14 '09 at 03:23
  • That said, hacks cling to their one language like Grandpa to his comb-over. – oz10 Jan 14 '09 at 03:24
  • Someone who calls himself a Java developer (substitute with language of choice) means that he/she is an expert in the "platform", not just the language. But it sounds kinda stupid to say I'm a "Java platform" programmer. The language is only a tiny fraction of the platform. – Captain Sensible Jan 26 '09 at 10:56
  • Introducing a new language with little syntactic and semantic variation (w.r.t. mainstream) every year (just an hyperbole) is totally, utterly cretin, an enormous vast of resources. Nothing controversial here, is the usual way with which this "industry" distracts people from real issues. – MaD70 Nov 06 '09 at 01:28
4

When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.

Jami
  • 301
  • 2
  • 5
  • 13
4

Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is an interruption, and there is usually nothing for them to do anyway. In that case, I just log it and deal with it when it shows up in the logs.
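A sketch of the described pattern, in Python with hypothetical names: the exception is swallowed at the UI boundary, but logged so it can be diagnosed later.

```python
import logging

logger = logging.getLogger(__name__)

def refresh_recent_items_widget(load_items):
    """Update a cosmetic UI element; a failure here is not worth
    interrupting the user, so log it and carry on."""
    try:
        return load_items()
    except Exception:
        # Swallowed, but never silently: the log keeps the evidence.
        logger.exception("recent-items widget failed to refresh")
        return []
```

The key distinction from the anti-pattern in the comment below is that nothing is discarded: every swallowed exception leaves a full traceback in the log.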

John MacIntyre
  • 12,910
  • 13
  • 67
  • 106
  • I always took the 'rule' as don't do the following, rather than "don't raise to the user": try {evil();} catch(Exception e){//swallow} – Stu Thompson Apr 28 '09 at 20:17
4

"Everything should be made as simple as possible, but not simpler." - Einstein.

JeffO
  • 7,957
  • 3
  • 44
  • 53
4

"Programmers are born, not made."

Elroy
  • 605
  • 4
  • 12
  • 20
4

I believe in the Zen of Python

Andrew Szeto
  • 1,199
  • 1
  • 9
  • 13
4

It IS possible to secure your application.

Every time someone asks a question about how to either prevent users from pirating their app, or secure it from hackers, the answer is that it's impossible. Nonsense. If you truly believe that, then leave your doors unlocked (or just take them off the house!). And don't bother going to the doctor, either. You're mortal - trying to cure a sickness is just postponing the inevitable.

Just because someone might be able to pirate your app or hack your system doesn't mean you shouldn't try to reduce the number of people who will do so. What you're really doing is making it require more work to break in than the intruder/pirate is willing to do.

Just like a deadbolt and ADT on your house will keep the burglars out, reasonable anti-piracy and security measures will keep hackers and pirates out of your way. Of course, the more tempting it would be for them to break in, the more security you need.

Jon B
  • 51,025
  • 31
  • 133
  • 161
  • 1
    It is not possible to make an application 100% secure because, in the end, applications are just a collection of bits on a storage device that can be copied and modified. Encryption is not copy protection. It's a trade off between the inevitable pirate and time to develop the defenses. – Skizz Mar 18 '09 at 14:55
  • @Skizz: My point is that the impossibility of 100% security is not a reason to give up on "ample" security. You can make your app not worth pirating/hacking just like you can make your house not worth breaking into. – Jon B Mar 18 '09 at 15:43
4

Getting paid to program is generally one of the worst uses of a man's time.

For one thing, you're in competition with the Elbonians, who work for a quarter a day. You need to convince your employer that you offer something the Elbonians never can, and that your something is worth a livable salary. As the Elbonians get more and more overseas business, the real advantage wears thin, and management knows it.

For another thing, you're spending time solving someone else's problems. That's time you could spend advancing your own interests, or working on problems that actually interest you. And if you think you're saving the world by working on the problems of other men, then why don't you just get the Elbonians to do it for you?

Last, the great innovations in software (visicalc, Napster, Pascal, etc) were not created by cubicle farms. They were created by one or two people without advance pay. You can't forcibly recreate that. It's just magic that sometimes happens when a competent programmer has a really good idea.

There is enough software. There are enough software developers. You don't have to be one for hire. Save your talents, your time, your hair, your marriage. Let someone else sell his soul to the keyboard. If you want to program, fine. But don't do it for the money.

Ian
  • 4,421
  • 1
  • 20
  • 17
  • > "Last, the great innovations in software (visicalc, Napster, Pascal, etc)" - so many examples to the contrary that I won't even start. Bell labs to name just one location. But if I read between the lines well then I agree with you: you need a new job. – Steven Evers Jun 19 '09 at 18:17
  • +1. controversial but interesting view (at least the 2 commenters above don't seem to agree). Ian makes some good points if you ask me. – Wouter van Nifterick Jul 12 '09 at 00:15
3

"Using stored procedures is easier to maintain and deploy" vs. "Using an ORM is the OO way, therefore it is good"

I've heard this a lot on many of my projects; whenever these statements come up, the argument is always hard to settle.

asyncwait
  • 4,457
  • 4
  • 40
  • 53
3

I don't care how powerful a programming language is if its syntax is not intuitive and I can't set it aside for some period of time and come back to it without too much effort at refreshing on the details. I would rather a language itself be intuitive than it be cryptic but powerful for creating DSL's. A computer language is a user interface for ME, and I want it designed for intuitive ease of use like any other user interface.

Anon
  • 11,870
  • 3
  • 23
  • 19
3

Understanding "what" to do is at least as important as knowing "how" to do it, and almost always it's much more important than knowing the 'best' way to solve a problem. Domain-specific knowledge is often crucial to write good software.

  • Oops, I read question earlier, and then all the responses, and my question seemed to fit. I just read the initial question again, and I'm not sure it really answers that. Delete it if not, and sorry for the noise. –  Jul 08 '09 at 23:51
3

Defects and Enhancement Requests are the Same

Unless you are developing software on a fixed-price contract, there should be no difference, when prioritizing your backlog, between "bug", "enhancement" and "new feature" requests. OK - maybe that's not controversial, but I have worked on enterprise IT projects where the edict was that "all open bugs must be fixed in the next release", even if that left no developer time for the most desirable new features. So a problem which was encountered by 1% of the users, 1% of the time, took precedence over a new feature which might be immediately useful to 90% of the users. I like to take my entire project backlog, put estimates around each item and take it to the user community for prioritization - with items not classified as "defect", "enhancement", etc.

Ed Schembor
  • 8,090
  • 8
  • 31
  • 37
3

Software development is an art.

David
  • 2,533
  • 2
  • 18
  • 16
3

In almost all cases, comments are evil: http://gooddeveloper.wordpress.com/

Ray Tayek
  • 9,841
  • 8
  • 50
  • 90
3

I'm always right.

Or call it design by discussion. But if I propose something, you'd better be able to demonstrate why I'm wrong, and propose an alternative that you can defend.

Of course, this only works if I'm reasonable. Luckily for you, I am. :)

chris
  • 36,094
  • 53
  • 157
  • 237
3

Usability problems are never the user's fault.

I cannot count how often a problem turned up when some user did something that everybody in the team considered "just a stupid thing to do". Phrases like "why would somebody do that?" or "why doesn't he just do XYZ" usually come up.

Even though many are weary of hearing me say this: if a real-life user tried to do something that either did not work, caused something to go wrong or resulted in unexpected behaviour, then it can be anybody's fault, but not the user's!

Please note that I do not mean people who intentionally misuse the software. I am referring to the presumable target group of the software.

galaktor
  • 1,506
  • 14
  • 16
3

Delphi is fun

Yes, I know it's outdated, but Delphi was and is a very fun tool to develop with.

Bab Yogoo
  • 6,669
  • 6
  • 22
  • 17
  • We still code in Delphi for our business. Lots of Delphi pros out there :) – Tom A Oct 10 '09 at 19:25
  • Delphi isn't just fun, it's still the best way to build Windows applications. If you want to say something controversial, the outdated bit will get you the votes. Outdated? Yeah. Unicode. Cross platform coming soon. 64 bit coming soon. More developers at Embarcadero building and improving than at any other time in Delphi's history. Yeah. Outdated. BLEAH! – Warren P Apr 01 '10 at 00:21
3

Lower level languages are inappropriate for most problems.

Imagist
  • 18,086
  • 12
  • 58
  • 77
3

Programmers should never touch Word (or PowerPoint)

Unless you are developing a word or a document processing tool, you should not touch a Word processor that emits only binary blobs, and for that matter:

Generated XML files are binary blobs

Programmers should write plain text documents. The documents a programmer writes need to convey intention only, not formatting. They must be producible with the programming tool-chain: editor, version control, search utilities, build system and the like. When you already have and know how to use that tool-chain, every other document production tool is a horrible waste of time and effort.

When there is a need to produce a document for non-programmers, a lightweight markup language should be used such as reStructuredText (if you are writing a plain text file, you are probably writing your own lightweight markup anyway), and generate HTML, PDF, S5, etc. from it.

Chen Levy
  • 15,438
  • 17
  • 74
  • 92
3

Detailed designs are a waste of time, and if an engineer needs them in order to do a decent job, then it's not worth employing them!

OK, so a couple of ideas are thrown together here:

1) The old idea of waterfall development - where you supposedly did all your design up front, resulting in glorified, extremely detailed class diagrams, sequence diagrams, etc. - was a complete waste of time. As I once said to a colleague, I'll be done with the design once the code is finished. I think this is partly what agile is a recognition of: that the code is the design, and that any decent developer is continually refactoring. This, of course, makes the idea of keeping your class diagrams up to date laughable - they always will be out of date.

2) management often thinks that you can usefully take a poor engineer and use them as a 'code monkey' - in other words they're not particularly talented, but heck - can't you use them to write some code. Well.. no! If you have to spend so much time writing detailed specs that you're basically specifying the code, then it will be quicker to write it yourself. You're not saving any time. If a developer isn't smart enough to use their own imagination and judgement they're not worth employing. (Note, I'm not talking about junior engineers who are able to learn. Plenty of 'senior engineers' fall into this category.)

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Phil
  • 2,675
  • 1
  • 25
  • 27
  • ++ I liken spec-writing to driving a car at night in a fog. You can only see so far ahead, and turning up the brightness of the lights does not help. The supply of information is simply limited. It's worth getting as much as you can, but what you really have to be able to do is adapt when more information becomes available as you proceed. – Mike Dunlavey Oct 23 '09 at 21:04
  • ... I was once handed a design like that. The design doc was about 2 inches thick in paper and projected to take 18 mm to develop. I talked them into writing a code-generator. The *final source* was 1/2 inch thick, was done in 4 mm, and had blazing performance. – Mike Dunlavey Oct 23 '09 at 21:07
  • ... That's why I believe in prototyping, rapid or not. When I'm developing some new product, I like to be able to do at least 3 throw-away versions, because that's how I can see deeper into the fog. Good post! – Mike Dunlavey Oct 30 '09 at 14:50
  • Thanks Mike, and agree with what you're saying - it's impractical to expect to be able to get all the design right up front - you've got to 'try something', then rework it as you discover more about the requirements and both how best to implement it, and often also how the technologies you're using are best used. – Phil Oct 30 '09 at 19:39
3

When Creating Unit tests for a Data Access Layer, data should be retrieved directly from the DB, not from mock objects.

Consider the following:

IList<Customer> GetCustomers()
{
  List<Customer> res = new List<Customer>();

  DbCommand cmd = /* initialize command */;
  IDataReader r = cmd.ExecuteReader();

  while (r.Read())
  {
     Customer c = ReadFieldsIntoCustomer(r);
     res.Add(c);
  }

  return res;
}

In a unit test for GetCustomers, should the call to cmd.ExecuteReader() actually access the DB, or should its behavior be mocked?

I reckon that you shouldn't mock the actual call to the DB if the following holds true:

  1. A test server and the schema exist.
  2. The schema is stable (meaning you are not expecting major changes to it)
  3. The DAL has no smart logic: queries are constructed trivially (config/stored procs) and the deserialization logic is simple.

From my experience the great benefit of this approach is that you get to interact with the DB early, experiencing the 'feel', not just the 'look'. It saves you lots of headaches afterwards and is the best way to familiarize oneself with the schema.

Many might argue that as soon as the execution flow crosses process boundaries, it ceases to be a unit test. I agree it has its drawbacks, especially when the DB is unavailable and then you cannot run the unit tests.

However, I believe that this should be a valid thing to do in many cases.

Vitaliy
  • 8,044
  • 7
  • 38
  • 66
3

Programmers should avoid method hiding through inheritance at all costs.

In my experience, virtually every place I have ever seen inherited method hiding used, it has caused problems. Method hiding results in objects behaving differently when accessed through a base type reference vs. a derived type reference - this is generally a Bad Thing. While many programmers are not formally aware of it, most intuitively expect that objects will adhere to the Liskov Substitution Principle. When objects violate this expectation, many of the assumptions inherent to object-oriented systems can begin to fray. The most egregious cases I've seen are when the hidden method alters the state of the object instance. In these cases, the behavior of the object can change in subtle ways that are difficult to debug and diagnose.

Ok, so there may be some infrequent cases where method hiding is actually useful and beneficial - like emulating return type covariance of methods in languages that don't support it. But the vast majority of the time, when developers use method hiding it is either out of ignorance (or accident) or a way to hack around some problem that probably deserves better design treatment. In general, the beneficial cases of method hiding I've seen (not to say there aren't others) are when a side-effect-free method that returns some information is hidden by one that computes something more applicable to the calling context.

Languages like C# have improved things a bit by requiring the new keyword on methods that hide a base class method - at least helping avoid involuntary use of method hiding. But I find that many people still confuse the meaning of new with that of override - particularly since in simple scenarios their behavior can appear identical. It would be nice if tools like FxCop actually had built-in rules for identifying potentially bad usage of method hiding.

By the way, method hiding through inheritance should not be confused with other kinds of hiding - such as through nesting - which I believe is a valid and useful construct with fewer potential problems.
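C#'s new-keyword hiding has a close analogue in Java's static methods, which are hidden rather than overridden. A minimal sketch (class and method names are made up for illustration) of how hiding breaks the usual "the object decides which method runs" expectation:

```java
// Hiding vs. overriding in Java: static methods are hidden, so the
// compile-time reference type -- not the actual object -- picks the
// implementation. Instance methods are overridden and dispatch normally.
class Base {
    static String kind() { return "base"; }
    String virtualKind() { return "base"; }
}

class Derived extends Base {
    static String kind() { return "derived"; }   // hides Base.kind
    @Override
    String virtualKind() { return "derived"; }   // overrides Base.virtualKind
}

public class HidingDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        // Overriding: the actual object decides -- substitution holds.
        System.out.println(b.virtualKind());     // derived
        // Hiding: the reference type decides -- the object's real type
        // is ignored, which is exactly the LSP surprise described above.
        System.out.println(Base.kind());         // base
        System.out.println(Derived.kind());      // derived
    }
}
```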

LBushkin
  • 129,300
  • 32
  • 216
  • 265
3

Anonymous functions suck.

I'm teaching myself jQuery and, while it's an elegant and immensely useful technology, most people seem to treat it as some kind of competition in maximizing the use of anonymous functions.

Function and procedure naming (along with variable naming) is the greatest expressive ability we have in programming. Passing functions around as data is a great technique, but making them anonymous and therefore non-self-documenting is a mistake. It's a lost chance for expressing the meaning of the code.
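The same point holds outside jQuery. A minimal Java sketch (all names hypothetical) contrasting an anonymous lambda with one bound to a descriptive name:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class NamedFunctions {
    // Named: the intent travels with the function and can be reused/tested.
    static final Predicate<Integer> QUALIFIES_FOR_FREE_SHIPPING =
            total -> total > 100;

    static List<Integer> freeShippingOrders(List<Integer> orderTotals) {
        return orderTotals.stream()
                          .filter(QUALIFIES_FOR_FREE_SHIPPING)
                          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Anonymous: the reader must re-decode "o -> o > 100" at every site.
        List<Integer> anonymous = List.of(120, 45, 300, 80).stream()
                                      .filter(o -> o > 100)
                                      .collect(Collectors.toList());
        System.out.println(anonymous);                                     // [120, 300]
        System.out.println(freeShippingOrders(List.of(120, 45, 300, 80))); // [120, 300]
    }
}
```

Both produce the same result; the named version simply documents why the filter exists.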

Larry Lustig
  • 49,320
  • 14
  • 110
  • 160
  • 2
    While I haven't used jQuery, I have to disagree with the general principle. The ability to express (say) a projection or a filter *right where you're using it* rather than having to introduce a separate function is one of the nicest features in C# 2 and 3. (Nicer in 3 than 2, as lambda expressions are neater than anonymous methods.) – Jon Skeet Oct 14 '09 at 20:23
  • @Jon: Well, I guess that officially makes my opinion controversial. I still disagree. While it's nice to be able to express functionality that way, for all but the most trivial cases it fundamentally detracts from the readability of the code. If you could name the function in place that would help with the issue of expressing your purpose, but it still wouldn't eliminate the problem of actually reading functions nested in the parameter list of other functions which, in turn, are often nested inside another functions parameter list. – Larry Lustig Oct 14 '09 at 20:36
  • You can name inline functions in JavaScript, if you want to. Just include a name between "function" and the arguments: var s = function square(a) { return a * a; }; – Mark Bessey Oct 23 '09 at 21:27
3

Never change what is not broken.

Varma
  • 771
  • 1
  • 9
  • 19
  • 1
    What if it works, but is unmaintanable, ugly, difficult to understand and likely to break if something else changes? – simon Oct 19 '09 at 09:14
  • That is the exact reason why I posted this as "controversial". – Varma Oct 20 '09 at 05:06
  • "Refactor Mercilessly". XP Manifesto. But only if you have comprehensive unit tests in place... – Engineer Sep 17 '10 at 15:43
3

If you have ever let anyone from rentacoder.com touch your project, both it and your business are completely devoid of worth.

Azeem.Butt
  • 5,855
  • 1
  • 26
  • 22
3

I have two:

Design patterns are sometimes a way for bad programmers to write bad code - the "when you have a hammer, all the world looks like a nail" mentality. If there is something I hate to hear, it is two developers creating a design by patterns: "We should use Command with a Facade ...".

There is no such thing as "premature optimization". You should profile and optimize your code before you reach the point where it becomes too painful to do so.

Dror Helper
  • 30,292
  • 15
  • 80
  • 129
  • 1
    Premature optimization does indeed exist and is very much a problem. With very few exceptions, your goal is to satisfy a function as per business requirements. Make it work, make it right, then make it faster. Optimizing without understanding the whole application profile is like throwing money out of a window. Let me know where you work, because I'll be downstairs with a net to catch some of it. ;-) – Joseph Ferris Oct 29 '09 at 19:09
  • You're right - but only some of the time... I've seen the "premature optimization" card used way too many time to create a bad very hard to improve application flow. If you can write it better the first time, why not do so? – Dror Helper Oct 29 '09 at 20:30
  • 1
    I think the best rule is to always make things as simple as possible. It is much easier to optimize simple code than to simplify optimized code. – thesmart Nov 04 '09 at 02:25
3

There is only one design pattern: encapsulation

For example:

  • Factory method: you've encapsulated object creation
  • Strategy: you encapsulated different changeable algorithms
  • Iterator: you encapsulated the way to sequentially access the elements in the collection.
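As an illustration of the Strategy bullet above, Java's own Comparator is exactly such an encapsulated, swappable algorithm (the helper name here is made up):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StrategyDemo {
    // The stable part: one sort routine. The changeable part -- the
    // ordering rule -- is encapsulated behind the Comparator interface.
    static List<String> sorted(List<String> names, Comparator<String> strategy) {
        List<String> copy = new ArrayList<>(names);
        copy.sort(strategy);
        return copy;
    }

    public static void main(String[] args) {
        List<String> names = List.of("Carol", "al", "Bob");
        // Natural order: uppercase letters sort before lowercase in ASCII.
        System.out.println(sorted(names, Comparator.naturalOrder()));     // [Bob, Carol, al]
        System.out.println(sorted(names, String.CASE_INSENSITIVE_ORDER)); // [al, Bob, Carol]
    }
}
```

The caller never sees which concrete comparison rule runs; it only sees the encapsulating interface.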
flybywire
  • 261,858
  • 191
  • 397
  • 503
  • 1
    wrong. the only design pattern is "take out duplicate code and put it in an external function/method/object" – hasen Nov 13 '09 at 21:31
3

Size matters! Embellish your code so it looks bigger.

fastcodejava
  • 39,895
  • 28
  • 133
  • 186
3

Java is the COBOL of our generation.

Everyone learns to code in it. There's code written in it running at big companies that will try to keep it running for decades. Everyone comes to despise it compared to all the other choices out there, but they are forced to use it anyway because it pays the bills.

Kelly S. French
  • 12,198
  • 10
  • 63
  • 93
  • 3
    COBOL is still the COBOL of our generation. Maybe Java will be the COBOL three generations from now... But then, so will C#. – Kobi Nov 15 '09 at 07:13
  • I would say PHP is the COBOL of our generation. It has an important property in common - it was designed to be coded by people who were not full-time coders. Unlike Java and C# which borrow heavily from C++. – finnw Feb 11 '10 at 16:57
3

Macros, Preprocessor instructions and Annotations are evil.

One syntax and language per file please!

// does not apply to Make files, or editor macros that insert real code.

Chris Cudmore
  • 29,793
  • 12
  • 57
  • 94
  • Everyone agrees that the pre-processor is evil... except the people who would never be found on Stack Overflow. They love it. – Integer Poet Mar 15 '10 at 19:26
  • How about this: C is evil. And C++ is even more evil. However, C is a necessary evil, and C++ an unnecessary one. – Warren P Apr 01 '10 at 00:05
3

Storing XML in a CLOB in a relational database is often a horrible cop-out. Not only is it hideous in terms of performance, it shifts responsibility for correctly managing structure of the data away from the database architect and onto the application programmer.

Tim
  • 953
  • 7
  • 11
3

Development is 80% about the design and 20% about coding

I believe that developers should spend 80% of their time designing, at a fine level of detail, what they are going to build, and only 20% actually coding what they've designed. This will produce code with near-zero bugs and save a lot on the test-fix-retest cycle.

Getting to the metal (or IDE) early is like premature optimization, which is known to be the root of all evil. Thoughtful upfront design (I'm not necessarily talking about an enormous design document; simple drawings on a whiteboard will work as well) will yield much better results than just coding and fixing.

Dima Malenko
  • 2,805
  • 2
  • 27
  • 24
3

Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad; this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code, so he will make a function and reuse it instead.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.
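A minimal Java sketch (class and field names hypothetical) of why a classic singleton is structurally just a global variable:

```java
// A textbook singleton: one mutable object reachable from everywhere
// through a static accessor -- i.e. a global variable with ceremony.
class Config {
    private static final Config INSTANCE = new Config();
    private Config() {}                     // no public constructor
    static Config getInstance() { return INSTANCE; }

    String env = "production";              // shared mutable state
}

public class SingletonDemo {
    public static void main(String[] args) {
        Config.getInstance().env = "debug";
        // Every caller observes the mutation -- the defining trait of
        // global state, and the reason singletons hide coupling.
        System.out.println(Config.getInstance().env);                      // debug
        System.out.println(Config.getInstance() == Config.getInstance()); // true
    }
}
```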

martinus
  • 17,736
  • 15
  • 72
  • 92
  • 1
    I have noticed a definite inverse relationship between design/coding skill and skill in using a debugger (which is not the same as having debugging skills). – Ferruccio Jan 02 '09 at 13:46
  • I second the Copy&Paste removal. I often see my co-workers copy&paste 20 lines of code and just change a value. – Rauhotz Jan 02 '09 at 13:49
  • Copy/paste is an instant Red Flag in my opinion. If code is duplicated, it should either a) be factored using OO methods; or b) model-driven/generated/dsl-defined. – Dmitri Nesteruk Jan 02 '09 at 13:52
  • 3
    I agree, all the code you see in stackoverflow should not be tested code because if it is tested it is copied from an IDE and copying from an IDE should be impossible:) So please post only untested code on SO! – tuinstoel Jan 02 '09 at 14:08
  • @tuinstoel: So maybe it should be "copy but not paste"? :) – Jon Skeet Jan 02 '09 at 14:16
  • maybe it should just not allow copy & paste of sourcecode within the same application, might be fun to write an eclipse plugin that prevents this :-) – martinus Jan 02 '09 at 14:19
  • 9
    There is no way testing can replace the usefulness of debuggers and debugging. – Tim Jan 02 '09 at 14:21
  • Singletons look really mental when bound to WPF too (all that x:Static stuff). – Dmitri Nesteruk Jan 02 '09 at 14:32
  • 1
    Ok, so you remove all debuggers, and all alternate systems for debugging. (if the easy way is bad, then the hard ways must be worse, no?) Then in testing you discover a bug. Now what do you do? Cancel the project? – Charles Bretana Jan 02 '09 at 14:34
  • @charles, when I discover a bug I try reproduce the behavior in a unit test. Then I fix it. If you need a debugger it is just a sign that you need better tests or refactor the code that it is easier to understand. – martinus Jan 02 '09 at 14:39
  • Sometimes I have to maintain and extend programs that make extensive use of complex pointer arithmetic. You can pry my debugger from my cold, dead hands. And if any developer mentions "global" in the same room I am, he can consider himself slapped. – Leonardo Herrera Jan 02 '09 at 14:40
  • @Jon Skeet, if only copy is possible I can't paste from SO:) – tuinstoel Jan 02 '09 at 14:41
  • 8
    Right.. Get rid of debuggers - so that you can't see the results of your code until then end, rather than step your way through to see exactly WHERE the problem crops up. I'll take debuggers over dozens of "temporary, interim display statements" *ANY* day. – David Jan 02 '09 at 14:44
  • 2
    Debuggers can be excellent for understanding how current code is working (I generally don't need them much for my own independent code), and cut/paste is part of refactoring. – David Thornley Jan 02 '09 at 14:47
  • Without any way to debug it, how can you tell what to change to fix it? are you prescient? if so, why did you put the bug in there in the first place? "Debugging" and "Debuggers" are by defintion, the tools we use to figure out what is causing the bug. Without them, you can't fix any bug. – Charles Bretana Jan 02 '09 at 14:53
  • Except perhaps by random shotgun approach and a LOT of luck (Just change something, test again, and repeat until bug goes away...) – Charles Bretana Jan 02 '09 at 14:55
  • 1
    And outputting variable values or "I am Here" statements to a text file IS a debugger too! – Charles Bretana Jan 02 '09 at 14:56
  • "Debuggers should be forbidden." -- and how do you find bugs that are not yours but come from the library/platform? – niXar Jan 02 '09 at 15:14
  • 1
    Wow. This is like saying "if a hammer can't do the job, it isn't worth doing." Seriously, how would you track a memory overwrite originating outside of your object with unit tests? – Mark Brittingham Jan 02 '09 at 15:18
  • 1
    Probably the term "Debugger" is just wrong. I have yet to see a tool, that removes bugs from (de-bugs) my program. – Simon Lehmann Jan 02 '09 at 15:20
  • 1
    @simon: `rm` or `del` will remove all bugs. Granted, it also removes the rest of the program, but such is the price for a bugless program :) – Will Mc Jan 02 '09 at 16:49
  • 3
    IMO, you can only discover bugs with unittesting, not locate them. After you found a bug with unittesting, you use debugging/debuggers to find where the bug actualy is located – Ikke Jan 02 '09 at 17:00
  • Steve Macguire uses the entirety of chapter 4 of "Writing Solid Code" to promote the idea of stepping through new or changed code in a debugger. It's good advice. Debugger *abuse* is a different story. I've seen that too, but wouldn't propose doing away with the tool because some will abuse it. – JeffK Jan 02 '09 at 17:10
  • 2
    +1: definitely a controversial opinion based on the comments on this post :) – Juliet Jan 02 '09 at 17:47
  • 1
    Hmmm... When I started writing basic on a dumb terminal back in 1979, we didn't have a debugger nor did I have a copy/paste, but that doesn't mean I wrote better code back then. – Kluge Jan 02 '09 at 17:50
  • Um, and I did use a singleton in code I was working on last night. And I'm not slapping myself. And I might use another singleton sometime in the future too. There are reasons for global state, although darn few are good ones. – David Thornley Jan 02 '09 at 22:29
  • 1
    Eek - no copy and paste? What happens when I decide I need to move a block of code from one class to another? You're gonna make me retype it all out by hand? No debugger...yeah, I could probably work around that, it would be a pain though. I could probably live without Singletons too. – BenAlabaster Jan 02 '09 at 23:05
  • @balabaster that's cut & paste, not copy & paste. So I guess I would allow that ;-) – martinus Jan 02 '09 at 23:36
  • I agree with balabaster--we need cut & paste for refactoring or the like. – Loren Pechtel Jan 03 '09 at 05:01
  • 1
    Those are the strangest, most archaic things I've read in a while. Chances of writing bugs are just as great when manually typing as when copy/pasting, There's nothing forcing someone to write good code if they have no debugger, and although a singleton may bot be necessary, does that make it bad? – Jeremy Jan 03 '09 at 21:06
  • Sounds like someone who doesn't understand debugging. How can my copying and pasting my own code be bad? As far as copying and pasting others, I think you need to test it, understand it, and reduce it to what is necessary for your application before using it in your project. – bruceatk Jan 04 '09 at 02:06
  • @bruceatk, copy & paste code is very bad. suppose you ind a bug and fix one of the copied pieces, then it is difficult to track down all the copies to fix the bug there too. – martinus Jan 04 '09 at 07:29
  • Regarding Singletons, that may be wrong in languages like C# or Java, but with less OOP strict languages like Javascript or Scala, using singletons is okay. In JS every object is a singleton! (and classed using prototypes, at least in JS 1.x) And Scala has a singleton type called object. – Alcides Jan 04 '09 at 22:00
  • There's nothing wrong with Singletons in themselves. I suspect you're upset with some particular abuse of them, not the concept. – chaos Jan 04 '09 at 23:30
  • @martinus, maybe it is for you, but I copy and paste my own code all the time. I've never had a problem having to go back and fix stuff. I've been doing it for almost 30 years. I see no practical reason to change now. – bruceatk Jan 05 '09 at 00:25
  • I want to up-vote your Singleton comment and down-vote your Debugger comment... You need the debugger to figure out why that core dump exists in the first place (only trivial code is 100% testable). – Tom Jan 05 '09 at 02:44
  • @martinus, it's not that copy paste is bad but in your example, the programmer should use a common function, rather than duplicating a chunk of code. That way if the function has a bug, you fix it in one place. But there's copy paste scenarios where you wouldn't use a common function line one liners – Jeremy Jan 05 '09 at 02:46
  • How can I fix a driver without a debugger... Write a unit test that reproduces... Wait. – Edouard A. Jan 07 '09 at 14:24
  • Fundamentalisme isn't the way forward! – Captain Sensible Jan 26 '09 at 10:01
  • 4
    I agree with getting rid of copy-paste as long as you can still cut-paste. Cutting and pasting code is essential to refactoring and keeping the code in a clean state. – Sergio Acosta Mar 11 '09 at 08:57
  • Are you nuts? I'll vote you up just because I disagree so strongly (and that makes it controversial - to me anyway). I need those tools. It would be similar to punishing everyone by taxing junk food because some people can't control themselves. – Doug L. Sep 20 '09 at 09:28
  • I agree that you shouldn't need a debugger for app code you wrote. But you need one to make sense of corefiles, you need one for driver work, and you damn well need one to make sense of weird, uncommented and undocumented code some other bloke (who has long since left the company) perpetrated. There's not only *creators* out there, but *maintainers* as well, and the debugger is our best friend. – DevSolar Jan 21 '10 at 15:05
3

MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an Oreilly book about the most prominent Java web framework ever; furthermore I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  $controller->setMto($controller->applyInputToModel());
  $controller->mto->applyModelToView();
}
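The same "MVC as mediator" core can be sketched in a few lines of Java (all interface and method names below are hypothetical, mirroring the PHP sendResponse above):

```java
// A minimal "web MVC is just a mediator" core: the controller mediates
// between request input and the model, and the model renders the view.
interface Model {
    String applyModelToView();          // model state -> rendered view
}

interface Controller {
    Model applyInputToModel();          // request input -> model changes
}

final class FrontDoor {
    // The entire reusable core: mediate controller -> model -> view.
    static String sendResponse(Controller c) {
        return c.applyInputToModel().applyModelToView();
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        // A trivial page controller built from two lambdas.
        Controller hello = () -> () -> "<h1>hello</h1>";
        System.out.println(FrontDoor.sendResponse(hello)); // <h1>hello</h1>
    }
}
```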
Dexygen
  • 12,287
  • 13
  • 80
  • 147
  • Your bio is scary - all washed up at 20! Here is my own anti-MVC screed. http://stackoverflow.com/questions/371898/how-does-differential-execution-work – Mike Dunlavey Nov 03 '09 at 19:00
3

Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I have a hard time figuring out all your switching between echoing and ?><?php-ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: They are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary language include the secondary one.

The reason why you jump between PHP, Javascript and HTML all the time is because you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration-venting topic :)

  • What? To build a dynamic, server-side generated website you'll need all three (Unless you use another system.) For PHP, you've got your templating, server power etc. For HTML you have the basis of the actual site. JS: Dynamically loaded content, special features (syntax highlighting). –  Jan 09 '09 at 20:59
3

Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
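The C# debate centres on `var`, but the same trade-off shows up anywhere types can be inferred. A hedged sketch of where the line is often drawn (Python with a checker such as mypy in mind; the names are mine): infer the locals, annotate the boundaries.

```python
def parse_port(value: str) -> int:
    """Annotate the boundary: the signature is the public contract."""
    return int(value)

# Locals can be left to inference -- repeating the type here would add
# nothing that a reader (or a checker such as mypy) can't already see.
port = parse_port("8080")
doubled = port * 2
print(doubled)  # → 16160
```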

Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
JaredPar
  • 733,204
  • 149
  • 1,241
  • 1,454
  • I'd love to see reasoning about this. Very controversial, and room for lots of good points from both sides. – Jon Skeet Jan 04 '09 at 00:45
  • @Jon, added a blog link to the reasons I feel this way. – JaredPar Jan 04 '09 at 00:57
  • Jared, your blog post is about local variable declaration with `var`, but your title is much more general. Please clarify. – Jay Bazuzi Jan 04 '09 at 20:31
  • @Jay, most of the problem with type inference is around "var" vs. overload resolution and generic method type inference. I really should have added a sample or two to the article though it was discussed in the comments. – JaredPar Jan 05 '09 at 20:41
3

Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute, but I'm afraid I can't help but despise them, and unless someone can come up with a brilliant justification or example that I haven't already heard, I will never write one. I recently came across this thread, and I must say reading the examples of the highest-voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ: I find that they unequivocally reduce readability and OO-ness, by virtue of the fact that they are at their core a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object; don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, then string should have that method, because that call is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, so the extension methods that arise from LINQ are understandable; but in general, chained method calls reduce readability and lead to code of the sort we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.
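For contrast, the honest alternative the answer argues for -- a utility function that visibly acts on the object, rather than something pretending to be one of its methods -- might look like this (Python used for illustration; the function and module framing are hypothetical):

```python
# An "honest" utility: a plain function that visibly acts on a string,
# rather than something dressed up to look like a method of str itself.
def sort_backwards(text):
    """Return the characters of text sorted in reverse order."""
    return "".join(sorted(text, reverse=True))

# The call site makes no pretence that str grew a new method.
print(sort_backwards("badc"))  # → dcba
```

The call site is marginally noisier than a chained method, but it tells the reader exactly where the behaviour lives.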

Community
  • 1
  • 1
Stephen Martin
  • 9,495
  • 3
  • 26
  • 36
3

Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistics suites, Enterprise Library or other productivity, data-layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning lots of mistakes in the meantime, and plenty of resulting defects and performance bottlenecks! And a heck of a lot longer to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All these chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code for that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not to write test code).

icelava
  • 9,787
  • 7
  • 52
  • 74
3

XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever downloads your page is going to treat it as HTML, so make it HTML.

And with that, feel free and happy not to close your <li>; it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will create more readable, less boilerplate code, and you don't lose a thing. HTML parsers work fine!

And when you are done, move on to HTML5. It is better.

  • I agree with this. For a while I tried using XHTML on my personal website, but it was too much work for practically no benefit (I just used it to make sure I kept the markup well-formed). I do close all the tags though, but that's just to satisfy my own neuroses. – Matthew Crumley Jan 07 '09 at 22:27
  • 1
    I can't agree less. XML makes the code work *much* nicer with validators and this in turn makes debugging complex nested structures much easier. Perhaps other people can work without this but for me, advanced HTML documents benefit a lot from XML and its strictness. – Konrad Rudolph Jan 08 '09 at 20:27
  • 1
    I've never thought of XHTML as XML at all. I simply consider HTML and XHTML to be the same thing until I see lazy HTML code. Not closing your tags is a bad habit and doesn't improve readability at all... especially when dealing with a large file. Tags should all be lowercase as well. –  Jan 09 '09 at 20:47
3

Hibernate is useless and damaging to the minds of developers.

sproketboy
  • 8,967
  • 18
  • 65
  • 95
3

This one is not exactly on programming, because html/css are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then apply CSS on top of it.

hasen
  • 161,647
  • 65
  • 194
  • 231
  • I used to think this way until I really got deep into CSS to see if I could prove myself wrong. I did and he helped to land me a job that required tableless layouts for accessiblity reasons. Do you have any examples on what you can't do? – JamesEggers Jan 09 '09 at 23:06
  • see, this "deep" thing is just hacks and black magic; you end up with an unmaintainable CSS mess, and if you change an attribute by mistake, the whole thing could collapse into a hairy mess, even if the attribute doesn't seem too important. – hasen Jan 10 '09 at 12:10
  • upvoted because I cant decide whether to agree or disagree; controversial indeed – iandisme Dec 09 '09 at 20:36
3

The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.

ZombieSheep
  • 29,603
  • 12
  • 67
  • 114
3

Upfront design - don't just start writing code because you're excited to write code

I've seen SO many apps that are poorly designed because the developer was so excited to get coding that they just opened up a white page and started writing code. I understand that things change during the development lifecycle. However, it's difficult working with applications that have several different layouts and development methodologies from form to form, method to method.

It's difficult to hit the target your application is meant to handle if you haven't clearly defined the task and how you plan to code it. Take some time (and not just 5 minutes) and make sure you've laid out as much of it as you can before you start coding. This way you'll avoid a spaghetti mess that your replacement will have to support.

Bill Martin
  • 4,825
  • 9
  • 52
  • 86
3

Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.

WOPR
  • 5,313
  • 6
  • 47
  • 63
  • While my experience agrees with your explanation (based on only 2 data points however), I don't agree with your assessment that not being wedded to their code is all it takes to be a better programmer. – Cameron MacFarland Jan 13 '09 at 23:04
  • Good programmers form a strong emotional attachment to their code because they're passionate about creating the best solutions they can. It almost represents the best they can be .. hence the ego attachment. It's both blinding to better solutions and a key driver to improve. IMHO; it's mostly good! – John MacIntyre Jan 22 '09 at 19:24
  • 1
    I've not seen a correlation between sex and code-base ownership, either. (Though only two data points, also.) Care to expand on your answer? – Stu Thompson Apr 28 '09 at 20:11
3

If you can only think of one way to do it, don't do it.

Whether it's an interface layout, a task flow, or a block of code, just stop. Do something to collect more ideas, like asking other people how they would do it, and don't go back to implementing until you have at least three completely different ideas and at least one crisis of confidence.

Generally, when I think something can only be done one way, or think only one method has any merit, it's because I haven't thought through the factors which ought to be influencing the design thoroughly enough. If I had, some of them would clearly be in conflict, leading to a mess and thus an actual decision rather than a rote default.

Being a solid programmer does not make you a solid interface designer

And following all of the interface guidelines in the world will only begin to help. If it's even humanly possible... There seems to be a peculiar addiction to making things 'cute' and 'clever'.

Kim Reece
  • 1,260
  • 9
  • 11
3

Programmers take their (own little limited stupid) programming language as a sacrosanct religion.

It's so funny how programmers take these discussions almost like religious believers do: no criticism allowed, (often) no objective discussion, (very often) arguing based upon very limited or absent knowledge and information. For confirmation, just read the previous answers, and especially the comments.

Also funny, and another confirmation: by the definition of the question "give me a controversial opinion", any controversial opinion should NOT qualify for negative votes - actually the opposite: the more controversial, the better. But how do our programmers react? Like Pavlov's dogs, voting negative on disliked opinions.

PS: I upvoted some others for fairness.

blabla999
  • 3,130
  • 22
  • 24
  • P.S.: I don't have the habit to salivate on Stackoverflow... ok, it is a site for perverts, but is not porn. – MaD70 Nov 06 '09 at 01:50
3

Member variables should never be declared private (in java)

If you declare something private, you prevent any future developer from deriving from your class and extending the functionality. Essentially, by writing "private" you are implying that you know more now about how your class can be used than any future developer might ever know. Whenever you write "private", you ought to write "protected" instead.

Classes should never be declared final (in java)

Similarly, if you declare a class as final (which prevents it from being extended -- prevents it from being used as a base class for inheritance), you are implying that you know more than any future programmer might know about what is the right and proper way to use your class. This is never a good idea. You don't know everything. Someone might come up with a perfectly suitable way to extend your class that you didn't think of.

Java Beans are a terrible idea.

The java bean convention -- declaring all members as private and then writing get() and set() methods for every member -- forces programmers to write boilerplate, error-prone, tedious, and lengthy code where no code is needed. Just make your member variables public! Trust in your ability to change it later, if you need to change the implementation (hint: 99% of the time, you never will).
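The "change it later" claim is easiest to defend in languages with properties, where a public field can later become a computed attribute without touching callers. A sketch of that idea in Python (class names are mine):

```python
class Point:
    """Starts life with plain public attributes -- no boilerplate."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointV2:
    """Later, x becomes a computed attribute -- callers are unchanged."""
    def __init__(self, x, y):
        self._x = x
        self.y = y

    @property
    def x(self):
        # Room for validation, caching, logging... added after the fact.
        return self._x

p = PointV2(3, 4)
print(p.x + p.y)  # → 7
```

In Java, swapping a public field for a getter does change every call site, which is the usual argument for writing the bean boilerplate up front; in a language with properties that insurance costs nothing.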

user26294
  • 5,532
  • 3
  • 22
  • 18
  • 1
    Protected member variables are so very wrong! It breaks encapsulation and leads to SERIOUS problems. Only methods should ever be declared as protected. – Captain Sensible Jan 26 '09 at 15:27
3

Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.

benjismith
  • 16,559
  • 9
  • 57
  • 80
3

Good Performance VS Elegant Design

They are not mutually exclusive, but I can't stand over-designed class structures/frameworks that have absolutely no clue about performance. I don't need a string of new This(new That(new Whatever())); to create an object that will tell me it's 5 AM, and oh, by the way, it's 217 days until Obama's birthday, and the weekend is 2 days away. I only wanted to know if the gym was open.

Having a balance between the two is crucial. The code needs to get nasty when you have to squeeze everything out of the processor for something intensive, such as reading terabytes of data. Save the elegance for the places that consume the 10% of resources, which is probably more than 90% of the code.

Jeremy Edwards
  • 14,620
  • 17
  • 74
  • 99
  • Ironically, reading a lot of data is often *not* CPU intensive, because CPUs are so much faster than disks :) – Jon Skeet Jan 26 '09 at 07:18
  • I agree, I wasn't super keen in making that distinguishment when writing my response but there are also many times that just the memory overhead will cause you to start swapping. – Jeremy Edwards Jan 27 '09 at 04:54
3

Software Development is a VERY small subset of Computer Science.

People sometimes seem to think the two are synonymous, but in reality there are so many aspects to computer science that the average developer rarely (if ever) gets exposed to. Depending on one's career goals, I think there are a lot of CS graduates out there who would probably have been better off with some sort of Software Engineering education.

I value education highly, have a BS in Computer science and am pursuing a MS in it part time, but I think that many people who obtain these degrees treat the degree as a means to an end and benefit very little. I know plenty of people who took the same Systems Software course I took, wrote the same assembler I wrote, and to this day see no value in what they did.

Tina Orooji
  • 1,842
  • 3
  • 15
  • 14
  • ++ I wish it worked both ways. I think software engineers could learn a lot from CS, and I *know* CS could learn a lot from SE, such as what *actually matters* to the real world. – Mike Dunlavey Oct 23 '09 at 21:19
3

That, erm, people should comment their code? It seems to be pretty controversial around here...

The code only tells me what it actually does; not what it was supposed to do

The time I see a function calculating the point value of an Australian Bond Future is the time I want to see some comments that indicate what the coder thought the calculation should be!
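A minimal illustration of the point (the function and the discount rule are invented): the comment records what the calculation was supposed to be, which is exactly what you need when the code turns out to be wrong.

```python
def price_with_discount(price, quantity):
    # Intent: orders of 10 or more units get a 5% discount on the whole
    # order; smaller orders pay full price. If the code below misbehaves,
    # a reviewer can check it against that rule instead of guessing.
    total = price * quantity
    if quantity >= 10:
        total = total * 95 / 100
    return total

print(price_with_discount(2.0, 10))  # → 19.0
print(price_with_discount(2.0, 5))   # → 10.0
```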

oxbow_lakes
  • 133,303
  • 56
  • 317
  • 449
3

Enable multiple checkout. If we instill enough discipline in developers, we will get much more efficiency from this setting through source control's automatic merging.

sesame
  • 815
  • 1
  • 10
  • 18
  • If you have enough discipline you don't need source control. But nobody can have that much discipline. – GoatRider Apr 04 '09 at 13:06
  • The main ability of source control is to return to any earlier state of the design. A current medium or large software project cannot run without it. Enabling the multiple checkout option, though, is just my preferred little setting. – sesame Apr 09 '09 at 05:23
3

Schooling ruins creativity *

*"Ruins" means "potentially ruins"

Granted, schooling is needed! Everyone needs to learn stuff before they can use it - however, all those great ideas you had about how to do a certain strategy for a specific business-field can easily be thrown into that deep brain-void of ours if we aren't careful.

As you learn new things and acquire new skills, you are also boxing your mindset into those new things and skills, since they apparently are "the way to do it". Being humans, we tend to listen to authorities - be it a teacher, a consultant, a co-worker or even a site/forum you like. We should ALWAYS be aware of that "flaw" in how our mind works. Listen to what other people say, but don't take what they say for granted. Always keep a critical point of view on every new piece of information you receive.

Instead of thinking "Wow, that's smart. I will use that from now on", we should think "Wow, that's smart. Now, how can I use that in my personal toolbox of skills and ideas".

cwap
  • 11,087
  • 8
  • 47
  • 61
3

It takes less time to produce well-documented code than poorly-documented code

When I say well-documented I mean with comments that communicate your intention clearly at every step. Yes, typing comments takes some time. And yes, your coworkers should all be smart enough to figure out what you intended just by reading your descriptive function and variable names and spelunking their way through all your executable statements. But it takes more of their time to do it than if you had just explained your intentions, and clear documentation is especially helpful when the logic of the code turns out to be wrong. Not that your code would ever be wrong...

I firmly believe that if you time it from when you start a project to when you ship a defect-free product, writing well-documented code takes less time. For one thing, having to explain clearly what you're doing forces you to think it through clearly, and if you can't write a clear, concise explanation of what your code is accomplishing then it's probably not designed well. And for another purely selfish reason, well-documented and well-structured code is far easier to dump onto someone else to maintain - thus freeing the original author to go create the next big thing. I rarely if ever have to stop what I'm doing to explain how my code was meant to work because it's blatantly obvious to anyone who can read English (even if they can't read C/C++/C# etc.). And one more reason is, frankly, my memory just isn't that good! I can't recall what I had for breakfast yesterday, much less what I was thinking when I wrote code a month or a year ago. Perhaps your memory is far better than mine, but because I document my intentions I can quickly pick up wherever I left off and make changes without having to first figure out what I was thinking when I wrote it.

That's why I document well - not because I feel some noble calling to produce pretty code fit for display, and not because I'm a purist, but simply because end-to-end it lets me ship quality software in less time.

JayMcClellan
  • 1,601
  • 1
  • 11
  • 10
  • It does indeed take less time, but also more skill, but that is the case with all languages, whether spoken or written. Words do not inherently have meaning; rather, they have meanings that individuals associate with them. If I were to comment that "cows are retromingent", the huge majority of people would not know what I meant, whereas saying "when cows pee it goes backwards" would give a better understanding. I comment in less time by typing faster :-D – STW Apr 21 '09 at 20:59
  • ..and my apologies for that comment, but that term is one of the few things I learned in college so I have to jump at the opportunity to use it. Time for me to masticate. – STW Apr 21 '09 at 21:00
3

Never let best practices or pattern obsession enslave you.

These should be guidelines, not laws set in stone.

And I really like the patterns, and the GoF book more or less says it that way too: stuff to browse through, providing a common jargon. Not a ball-and-chain gospel.

Marco van de Voort
  • 25,628
  • 5
  • 56
  • 89
3

Sometimes it's okay to use regexes to extract something from HTML. Seriously, wrangle with an obtuse parser, or use a quick regex like /<a href="([^"]+)">/? It's not perfect, but your software will be up and running much quicker, and you can probably use yet another regex to verify that the match that was extracted is something that actually looks like a URL. Sure, it's hacky, and probably fails on several edge-cases, but it's good enough for most usage.

Based on the massive volume of "How use regex get HTML?" questions that get posted here almost daily, and the fact that every answer is "Use an HTML parser", this should be controversial enough.
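The quick-and-dirty approach looks something like this (a sketch in Python; the pattern is the one from the answer, and the sample markup is mine):

```python
import re

# Quick-and-dirty link extraction: fine for simple, well-behaved markup,
# but it will miss single-quoted or unquoted attributes, extra attributes
# before href, and other edge cases -- the trade-off the answer accepts.
HREF_RE = re.compile(r'<a href="([^"]+)">')

def extract_links(html):
    return HREF_RE.findall(html)

html = '<p><a href="http://example.com/">home</a> <a href="/about">about</a></p>'
print(extract_links(html))  # → ['http://example.com/', '/about']
```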

Chris Lutz
  • 73,191
  • 16
  • 130
  • 183
3

Cleanup and refactoring are very important in (team) development

A lot of work in team development has to do with management. If you are using a bug tracker, it is only useful if someone takes the time to close and structure things and lower the number of tickets. If you are using source code management, somebody needs to clean up and restructure the repository quite often. If you are programming, there should be people who care about refactoring the lazily produced stuff of others. This is part of most aspects of software development.

Everybody agrees to the necessity of this kind of management. And it is always the first thing that is skipped!

Norbert Hartl
  • 10,481
  • 5
  • 36
  • 46
3
  1. Good architecture is grown, not designed.

  2. Managers should make sure their team members always work below their state of the art, whatever that level is. When people work within their comfort zone they produce higher quality code.

Andriy Volkov
  • 18,653
  • 9
  • 68
  • 83
  • But if you never try to do something different, you'll never expand your comfort zone. I found getting out of my "comfort zone" to be quite enjoyable (though certainly not productive, it *is* needed, sometimes). – luiscubal Jul 11 '09 at 00:14
3

Many developers have an underdeveloped sense of where to put things, resulting in messy source code organization at the file, class, and method level. Further, a sizable percentage of such developers are essentially tone-deaf to issues of code organization. Attempts to teach, cajole, threaten, or shame them into keeping their code clean are futile.

On any sufficiently successful project, there's usually a developer who does have a good sense of organization very quietly wielding a broom to the code base to keep entropy at bay.

Dave W. Smith
  • 24,318
  • 4
  • 40
  • 46
3

My controversial opinion is probably that John Carmack (ID Software, Quake etc.) is not a very good programmer.

Don't get me wrong, he's a very smart programmer in my opinion, but after I noticed the line "#define private public" in the Quake source code I couldn't help but think he's a guy that gets the job done no matter what, but by my definition not a good programmer :) This opinion has gotten me into a lot of heated discussions though ;)

Led
  • 2,002
  • 4
  • 23
  • 31
  • If true then I'd be inclined to agree. That looks pretty bad. – spender Jun 23 '09 at 22:41
  • I don't know many programs that are this performance-optimized, dealing with graphics and sound and everything, (some of it) platform-independent, and still **as stable** as Doom and Quake and everything produced by id Software. Really. I wish all software were made like this. Even the usability is great. – Stefan Steinegger Nov 16 '09 at 16:24
3

Software is not an engineering discipline.

We never should have let the computers escape from the math department.

james woodyatt
  • 2,170
  • 17
  • 17
2

Software-Reuse is the most important way to optimize software-development

Somehow, software reuse seemed to be in vogue for some time, but it lost its charm when many companies found out that just writing PowerPoint presentations with reuse slogans doesn't actually help. They reasoned that software reuse is just not "good enough" and can't live up to their dreams. So it seems it is not in vogue any more -- it was replaced by plenty of project-management newcomers (Agile, for example).

The fact is that any really good developer performs some kind of software reuse by himself. I would say any developer not doing software reuse is a bad developer!

I have experienced myself how much performance and stability software reuse can bring to development. But of course, a set of PowerPoints and half-hearted confessions from management does not suffice to realise its full potential in a company.

I have linked a very old article of mine about software reuse (see title). It was originally written in German and translated afterwards -- so please excuse the writing where it is not that good.

oxbow_lakes
  • 133,303
  • 56
  • 317
  • 449
Juergen
  • 12,378
  • 7
  • 39
  • 55
  • One of the problems with software-reuse is that it requires advanced reading and adapting skills, which aren't easy. Also, using libraries as dependencies can be a nightmare if those libraries aren't stable. – luiscubal Jul 11 '09 at 00:21
  • Yes, advanced reading skills are difficult for most programmers ;-) Your second point is a good one. Reuse does not come without a price tag, of course. It is not like some companies think, that a directive is all it takes to make reuse happen. That is also the reason why many were disappointed by reuse. Something for nothing does not work in IT either! – Juergen Jul 11 '09 at 13:29
2

It is OK to use short variable names

But not for indices in nested loops.
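A small sketch of the distinction (examples are mine): a single letter is fine when it lives for one line, but nested loop indices earn real names.

```python
grid = [[1, 2], [3, 4]]

# Fine: `n` lives for one line, right next to its definition.
doubled = [n * 2 for row in grid for n in row]

# Nested loops: `row_idx`/`col_idx` beat `i`/`j`, which are easy to swap
# by accident when the loops are several lines apart.
flipped = []
for row_idx in range(len(grid)):
    for col_idx in range(len(grid[row_idx])):
        flipped.append(grid[col_idx][row_idx])

print(doubled, flipped)  # → [2, 4, 6, 8] [1, 3, 2, 4]
```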

quant_dev
  • 6,181
  • 1
  • 34
  • 57
  • Not for indices in nested loops? Why? It's easy to distinguish them when the definition is near the usage. Well, I can only think of i and j as a bad choice, because they look so similar. – Frunsi Dec 15 '09 at 00:30
  • Because it is easy to forget which variable belongs to which loop. – quant_dev Dec 16 '09 at 07:36
2

Functional programming is NOT more intuitive or easier to learn than imperative programming.

There are many good things about functional programming, but I often hear functional programmers say it's easier to understand functional programming than imperative programming for people with no programming experience. From what I've seen it's the opposite: people find trivial problems hard to solve because they don't get how to manage and reuse their temporary results when they end up in a world without state.
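A toy illustration of the hurdle (a Python sketch using a fold via functools.reduce): the imperative version mutates an accumulator, while the functional version threads the "state" through as a value; handing intermediate results along like this is exactly the step newcomers trip over.

```python
from functools import reduce

data = [3, 1, 4, 1, 5]

# Imperative: a mutable accumulator, updated in place.
total = 0
for n in data:
    total += n

# Functional: the "state" is just a value threaded through the fold.
total_fp = reduce(lambda acc, n: acc + n, data, 0)

print(total, total_fp)  # → 14 14
```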

Laserallan
  • 11,072
  • 10
  • 46
  • 67
  • Controversial? Functional programming sucks. That's controversial. However, "functional programming is hard". That's a tautology. – Warren P Apr 01 '10 at 00:22
  • Part of why it's hard is because we're damaged (pre-wired) for our iterative, procedural programming. It may be that functional programming is actually easier to absorb for a neophyte than procedural programming is. Are there any studies out there on this? – Warren P Apr 01 '10 at 00:23
2

Developing on .NET is not programming. It's just stitching together other people's code.

Having come from a coding background where you were required to know the hardware, and where this is still a vital requirement in my industry, I view high-level languages as simply assembling someone else's work. Nothing essentially wrong with this, but is it 'programming'?

MS has made a mint from doing the hard work and presenting 'developers' with symbolic instruction syntax. I now seem to know more and more developers who appear constrained by the existence or non-existence of a class to perform a job.

My opinion comes from the notion that to be a programmer you should be able to program at the lowest level your platform allows. So if you're programming .NET then you need to be able to stick your head under the hood and work out the solution, rather than rely on someone else creating a class for you. That's simply lazy and does not qualify as 'development' in my book.

  • Stated, but not reasoned. -1 – Jay Aug 15 '09 at 06:51
  • Added some reason to the opinion. –  Aug 16 '09 at 22:44
  • You may understand assembly but do you get how the hardware works? How electrons flow into different gates, how circuits are manufactured? Its all about choosing what you want to accomplish and the level of abstraction you need to achieve it – Eric Aug 24 '09 at 03:56
  • I wouldn't say that stating that some developers who program using .NET (maybe even a lot, *maybe* even the majority) are just stiching is necessarily controversial. Heck, I'd probably agree with you. Extending that to *everyone* though as you have done! Now, that's controversial! There's a lot of very smart engineers who program using .NET. Also, I'd disagree that you need to be able to program to the lowest level of the platform. You need to know enough to understand the factors that affect your app. – Phil Sep 24 '09 at 20:46
  • 2
    This is just ridiculous. Let me counter it: low-level programming is not programming. It is just stitching CPU instructions together. – reinierpost Dec 04 '09 at 20:11
  • Case in point - Microsoft's top developers prefer old-school coding methods - http://shar.es/aE0Qj –  Dec 07 '09 at 00:02
  • @Gerard nah, down voting means you don't agree. It's what I'm doing with all the C++ promoting answers and C downmoting answers. –  Dec 20 '10 at 15:00
2

I'd rather be truly skilled/experienced in an older technology that allows me to solve real-world problems effectively, as opposed to new "fashionable" technologies that are still going through their adolescent stage.

Ash
  • 60,973
  • 31
  • 151
  • 169
2

Development projects are bound to fail unless the team of programmers, as a whole, is given complete empowerment to make all decisions related to the technology being used.

Kiffin
  • 1,048
  • 1
  • 15
  • 21
2

I'd say that my most controversial opinion on programming is that I honestly believe you shouldn't worry so much about throw-away code and rewriting code. Too many times people feel that if you write something down, then changing it means you did something wrong. But the way my brain works is to get something very simple working, and update the code slowly, while ensuring that the code and the test continue to function together. It may end up actually creating classes, methods, additional parameters, etc., that I know full well will go away in a few hours. But I do it because I want to take only small steps toward my goal. In the end, I don't think I spend any more time using this technique than the programmers who stare at the screen trying to figure out the best design up front before writing a line of code.

The benefit I get is that I'm not having to constantly deal with software that no longer works because I happen to break it somehow and am trying to figure out what stopped working and why.

zumalifeguard
  • 8,648
  • 5
  • 43
  • 56
2

If you haven't read a man page, you're not a real programmer.

Ross Light
  • 4,769
  • 1
  • 26
  • 37
2

One class per file

Who cares? I much prefer entire programs contained in one file rather than a million different files.

Ravi
  • 625
  • 1
  • 6
  • 19
2

80% of bugs are introduced in the design stage.
The other 80% are introduced in the coding stage.

(This opinion was inspired by reading Dima Malenko's answer. "Development is 80% about the design and 20% about coding", yes. "This will produce code with near zero bugs", no.)

Windows programmer
  • 7,871
  • 1
  • 22
  • 23
2

Best practices aren't.

Jé Queue
  • 10,359
  • 13
  • 53
  • 61
2

To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.

2

That software can be bug free if you have the right tools and take the time to write it properly.

too much php
  • 88,666
  • 34
  • 128
  • 138
2

Opinion: Not having function definitions, and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled ones. Requiring a return type and a function argument list is great for things like IntelliSense auto-documenting your code, but it is also a restriction.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.
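As a minimal illustration of that flexibility in Python (the function and values here are invented for the example): one untyped function whose behavior adapts to whatever its arguments support, with no declared return type.

```python
# One untyped function handles any argument that supports "*":
# duck typing stands in for a declared argument list and return type.
def scale(value, factor):
    """Scale a value, relying on what the value can do, not what it is."""
    return value * factor

print(scale(21, 2))        # 42 (ints multiply)
print(scale(2.5, 2))       # 5.0 (floats multiply)
print(scale("ab", 3))      # 'ababab' (strings repeat)
print(scale([1, 2], 2))    # [1, 2, 1, 2] (lists repeat)
```

The same code path returns four different types depending on its input, which is exactly the kind of flexibility (and risk) the answer describes.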

J.J.
  • 4,856
  • 1
  • 24
  • 29
2

You can't write a web application without a remote debugger

Web applications typically tie together interactions between multiple languages on the client and server side, require interaction from a user and often include third-party code that can be anything from a simple API implementation to a byzantine framework.

I've lost count of the number of times I've had another developer sat with me while I step into and follow through what's actually going on in a complex web application with a decent remote debugger to see them flabbergasted and amazed that such tools exist. Often they still don't take the trouble to install and setup these kinds of tools even after seeing them in action.

You just can't debug a non-trivial web application with print statements. Times ten if you didn't write all the code in your application.

If your debugger can step through all the various languages in use and show you the http transactions taking place then so much the better.

You can't develop web applications without Firebug

Along similar lines, once you have used Firebug (or a very near equivalent), you look on anyone trying to develop web applications without it with a mixture of pity and horror. Particularly with Firebug showing computed styles: if you remember back to NOT using it and spending hours randomly changing various bits of CSS and adding "!important" in too many places to be funny, you will never go back.

reefnet_alex
  • 9,703
  • 5
  • 33
  • 32
  • I agree to firebug, but ive been a web dev. for 3 years and done everything from mid to large. I've never felt the need to use a remote debugger – Ali Jan 09 '09 at 18:03
  • exackly - and before you used firebug I bet you didn't realise you needed it either :) seriously though, give it a try and then say that – reefnet_alex Jan 09 '09 at 21:06
  • Since I can unit test my whole webapp, why would I need a remote debugger? I can run any line of code I want locally... – Aaron Digulla Mar 03 '09 at 14:59
  • 1) remote doesn't mean "not local" in this case, it means it running the debugger on the php interpreter as run up by your web server and following all the interactions with the browser through. whether running locally or on a live server you need a remote debugger to see what's actually happening – reefnet_alex Mar 12 '09 at 13:02
  • 2) live server != dev machine: there are some bugs you will only see on your live (or exact copy of your live) server – reefnet_alex Mar 12 '09 at 13:04
2

Believe it or not, my belief that, in an OO language, most of the (business logic) code that operates on a class's data should be in the class itself is heresy on my team.
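A minimal Python sketch of the contrast (the account example is invented, not from the answer): the same withdrawal rule, once written as free-floating logic next to an anemic data class, and once living in the class that owns the data.

```python
# Anemic style: the class is just a bag of data...
class AnemicAccount:
    def __init__(self, balance):
        self.balance = balance

# ...and the business logic that operates on that data floats elsewhere.
def withdraw_from(account, amount):
    if amount > account.balance:
        raise ValueError("insufficient funds")
    account.balance -= amount

# The answer's preferred style: the logic lives with the data it operates on.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account(100)
acct.withdraw(30)
print(acct.balance)  # 70
```

In the second version the invariant (never overdraw) is enforced in exactly one place, instead of in every caller that happens to touch the balance.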

moffdub
  • 5,284
  • 2
  • 35
  • 32
  • I second your opinion. I can't stand it when someone goes with the excuse that "Classes should be minimal, clean, simple" and writes a close to useless class that merely aggregates data - and then builds the logic about this data everywhere else. – Daniel Daranas Jan 10 '09 at 00:28
  • Hmm. So, the `cut` method should be a member of which class? `meat`, `vegetable`, `knife`, `scissors`, `kitchen_table`, `workbench` ...? – Svante Jan 12 '09 at 16:47
  • 2
    Without any further information, I'd say that the knife cuts a cuttable object: Knife.cut(ICuttable something). Of course, if you only have one cuttable object, like meat, and many things that cut the meat, then you want Meat.cutWith(ICutter something). – moffdub Jan 17 '09 at 00:54
2

2 space indent.

No discussion. It just has to be that way ;-)

Fergie
  • 5,933
  • 7
  • 38
  • 42
  • How about a compromise? You add a space and I give up a space and everybodies happy? ;-) – Captain Sensible Jan 26 '09 at 15:25
  • There was an argument among the Delphi programmers at my company about whether to use 2-space indents or 4-space indents. We settled on 3-spaces to offend all parties equally. – Juliet Feb 04 '09 at 00:39
  • 3-space is the ugliest indent I've ever seen. It just does not fit into my happy 2^n world. 2 is just for those who want to write code with 20-way and more indentation, i.e. for those whose application consists of one monster class. Or a monster function. – Sebastian Mach Mar 19 '09 at 15:20
2

Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.

fbonnet
  • 2,325
  • 14
  • 23
2

Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]
fbonnet
  • 2,325
  • 14
  • 23
2

What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)

Dominic Cronin
  • 6,062
  • 2
  • 23
  • 56
2

Dependency Management Software Does More Harm Than Good

I've worked on Java projects that included upwards of a hundred different libraries. In most cases, each library has its own dependencies, and those dependent libraries have their own dependencies too.

Software like Maven or Ivy supposedly "manage" this problem by automatically fetching the correct version of each library and then recursively fetching all of its dependencies.

Problem solved, right?

Wrong.

Downloading libraries is the easy part of dependency management. The hard part is creating a mental model of the software, and how it interacts with all those libraries.

My unpopular opinion is this:

If you can't verbally explain, off the top of your head, the basic interactions between all the libraries in your project, you should eliminate dependencies until you can.

Along the same lines, if it takes you longer than ten seconds to list all of the libraries (and their methods) invoked either directly or indirectly from one of your functions, then you are doing a poor job of managing dependencies.

You should be able to easily answer the question "which parts of my application actually depend on library XYZ?"

The current crop of dependency management tools do more harm than good, because they make it easy to create impossibly-complicated dependency graphs, and they provide virtually no functionality for reducing dependencies or identifying problems.

I've seen developers include 10 or 20 MB worth of libraries, introducing thousands of dependent classes into the project, just to eliminate a few dozen lines of simple custom code.

Using libraries and frameworks can be good. But there's always a cost, and tools which obscure that cost are inherently problematic.

Moreover, it's sometimes (note: certainly not always) better to reinvent the wheel by writing a few small classes that implement exactly what you need than to introduce a dependency on a large general-purpose library.

benjismith
  • 16,559
  • 9
  • 57
  • 80
  • I disagree. I think DM is a good thing, except Maven did it wrong. So much code depends on other stuff, but we still after so many year haven't figured out how to manage it all. That's one thing we need to fix for SW dev to move forward. – Pyrolistical Mar 23 '09 at 22:02
2

There are some (very few) legitimate uses for goto (particularly in C, as a stand-in for exception handling).

DWright
  • 9,258
  • 4
  • 36
  • 53
2

Never implement anything as a singleton.

You can decide not to construct more than one instance, but always ensure your implementation can handle more.

I have yet to find any scenario where using a singleton is actually the right thing to do.

I got into some very heated discussions over this in the last few years, but in the end I was always right.
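One way to get the convenience of a single shared instance without implementing a singleton, sketched in Python (the `Config` class and its fields are invented for the example): keep the constructor public and make the shared instance a convention, not an enforced constraint.

```python
class Config:
    """A class that CAN be shared as one instance,
    but whose implementation handles any number of instances."""
    _default = None

    def __init__(self, settings=None):
        self.settings = dict(settings or {})

    @classmethod
    def default(cls):
        # Lazily create one shared instance, without forbidding others.
        if cls._default is None:
            cls._default = cls()
        return cls._default

# Production code can share one instance...
Config.default().settings["debug"] = False
# ...while tests remain free to construct isolated ones.
test_config = Config({"debug": True})
print(Config.default() is Config.default())  # True: shared
print(test_config is Config.default())       # False: not forced
```

This is the distinction the answer draws: choosing to construct only one instance is a usage decision, while a singleton bakes that decision into the implementation.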

Andreas
  • 1,379
  • 10
  • 12
  • I can give you one: to abstract away, transparently, support for different JVM versions, using reflection to load a support instance for J5, gracefully defaulting to one for J2 if the JVM is < J5 - e.g. getting time is nano seconds. – Lawrence Dol Feb 06 '09 at 09:54
2

Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated stored procedures, or massive LINQ queries that return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.

andy
  • 8,775
  • 13
  • 77
  • 122
2

Automatic Updates Lead to Poorer Quality Software that is Less Secure

The Idea

A system to keep users' software up to date with the latest bug fixes and security patches.

The Reality

Products have to be shipped by fixed deadlines, often at the expense of QA. Software is then released with many bugs and security holes in order to meet the deadline in the knowledge that the 'Automatic Update' can be used to fix all the problems later.

Now, the piece of software that really made me think of this is VS2K5. At first, it was great, but as the updates were installed the software slowly got worse. The biggest offence was the loss of macros - I had spent a long time creating a set of useful VBA macros to automate some of the code I write - but apparently there was a security hole, and instead of fixing it the macro system was disabled. Bang goes a really useful feature: recording keystrokes and replaying them repeatedly.

Now, if I were really paranoid, I could see Automatic Updates as a way to get people to upgrade their software by slowly installing code that breaks the system more often. As the system becomes more unreliable, users are tempted to pay out for the next version with the promise of better reliablity and so on.

Skizz

Skizz
  • 69,698
  • 10
  • 71
  • 108
  • No, software is released early, before being fully tested, because it can be updated later - there is less emphasis on creating bug-free code and more on getting it released. This gives a 'window of opportunity' to malicious code. – Skizz Mar 11 '09 at 13:46
  • It's not as if MS had a shining reputation for secure, bug-free software before automatic updates came along. – Nate C-K Jan 07 '10 at 19:02
  • I'd actually say the reverse is true, overall security and stability have improved in more recent MS software. – Nate C-K Jan 07 '10 at 22:08
2

MS Access* is a Real Development Tool and it can be used without shame by professional programmers

Just because a particular platform is a magnet for hacks and secretaries who think they are programmers shouldn't besmirch the platform itself. Every platform has its benefits and drawbacks.

Programmers who bemoan certain platforms or tools or belittle them as "toys" are more likely to be far less knowledgeable about their craft than their ego has convinced them they are. It is a definite sign of overconfidence for me to hear a programmer bash any environment that they have not personally used extensively enough to know well.

* Insert just about any maligned tool (VB, PHP, etc.) here.

JohnFx
  • 34,542
  • 18
  • 104
  • 162
  • I agree by proxy... a former colleague manipulated and massaged an Access-backed production system into a highly efficient system that was perfectly suited for its needs. Although with the availability of other desktop-based DB platforms such as SQL Compact (aka SQL Compact Edition, aka SQL Mobile) Access is becomming more of a developer's occasional assistant than his tool. It's like a toothpick kind of--developers can use it, and maybe even use it professionally (give me back my CD!)... – STW Apr 21 '09 at 20:14
  • I do have to disagree about PHP, at least in the pre/early asp.net days. PHP was a very valid competitor to classic ASP, and it wasn't until ASP.NET came along and IIS 6 was released that PHP began to lose its functionality. LAMP blew away IIS/asp in my opinion, and judging by the dominance of Apache servers running the web I would say the internet would more or less agree. – STW Apr 21 '09 at 20:16
  • I got one agree and one disagree comment. I should at least get an upvote for that since the OP wanted controversial opinions. =) – JohnFx Apr 21 '09 at 20:55
  • disagree.i decided to not to use it when i was 11.because autonumber s..ks. – Behrooz Dec 14 '09 at 19:50
  • Ok, I agree, but they really should standardize the SQL. Its crap having to work with access specific queries. – JL. Apr 04 '10 at 16:14
  • @JL: Access SQL is a superset of ANSI SQL as far as I know. So you don't HAVE to use Access specific SQL. – JohnFx Apr 04 '10 at 17:09
2

...That the "clarification of ideas" should not be the sole responsibility of the developer...and yes xkcd made me use that specific phrase...

Too often we are handed projects that are specified in pseudo-meta-sorta-kinda-specific "code", if you want to call it that. There are often product managers who draw up the initial requirements for a project and perform next to 0% of basic logic validation.

I'm not saying that the technical approach shouldn't be drawn up by the architect, or that the specific implementation shouldn't be the responsibility of the developer, but rather that it should be the product manager's responsibility to ensure that their requirements are logically feasible.

Personally I've been involved in too many "simple" projects that encounter a little scope creep here and there and then come across a "small" change or feature addition which contradicts previous requirements--whether implicitly or explicitly. In these cases it is all too easy for the person requesting the borderline-impossible change to become enraged that developers can't make their dream a reality.

STW
  • 44,917
  • 17
  • 105
  • 161
2

switch-case is not object oriented programming

I often see a lot of switch-case or awful big if-else constructs. This is merely a sign of not putting state where it belongs, and of not using the real and efficient switch-case construct that is already there: method lookup via the vtable.
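A minimal Python sketch of the substitution the answer suggests (the shape example is invented): the dispatch that the if-else chain does by hand is done instead by ordinary method lookup.

```python
# The if-else/switch version: dispatch is written out by hand.
def area_switch(shape):
    if shape["kind"] == "square":
        return shape["side"] ** 2
    elif shape["kind"] == "circle":
        return 3.14159 * shape["radius"] ** 2
    raise ValueError("unknown shape")

# The polymorphic version: method lookup does the dispatch.
class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

shapes = [Square(3), Circle(1)]
print([s.area() for s in shapes])  # [9, 3.14159]
```

Adding a new shape in the first version means editing the switch; in the second it means adding a class, with no existing code touched.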

Norbert Hartl
  • 10,481
  • 5
  • 36
  • 46
2

To be really controversial:

You know nothing!

or in other words:

I know that I know nothing.

(this could be paraphrased in many kinds but I think you get it.)

When starting with computers/developing, IMHO there are three stages everyone has to walk through:

The newbie: knows nothing (this is fact)

The intermediate: thinks he knows something/very much(/all) (this is conceit)

The professional: knows that he knows nothing (because as a programmer, most of the time you have to work on things you have never done before). This is no bad thing: I love familiarizing myself with new things all the time.

I think as a programmer you have to know how to learn - or better: To learn to learn (because remember: You know nothing! ;)).

Inno
  • 2,567
  • 5
  • 32
  • 44
  • Strange logic, I agree be humble and learn, but to say you know nothing would just be silly. – JL. Apr 04 '10 at 16:04
2

Design patterns are bad.

Actually, design patterns aren't.

You can write bad code, and bury it under a pile of patterns. Use singletons as global variables, and states as goto's. Whatever.

A design pattern is a standard solution for a particular problem, but requires you to understand the problem first. If you don't, design patterns become a part of the problem for the next developer.

G B
  • 2,951
  • 2
  • 28
  • 50
1

You'll never use enough languages, simply because every language is the best fit for only a tiny class of problems, and it's far too difficult to mix languages.

Pet examples: Java should be used only when the spec is very well thought out (because of lots of interdependencies meaning refactoring hell) and when working with concrete concepts. Perl should only be used for text processing. C should only be used when speed trumps everything, including flexibility and security. Key-value pairs should be used for one-dimensional data, CSV for two-dimensional data, XML for hierarchical data, and a DB for anything more complex.

l0b0
  • 55,365
  • 30
  • 138
  • 223
1

I believe that the "Let's Rewrite The Past And Try To Fix That Bug Pretending Nothing Ever Worked" is a valuable debugging mantra in desperate situations:

https://stackoverflow.com/questions/978904/do-you-use-the-orwellian-past-rewriting-debugging-philosophy-closed

Community
  • 1
  • 1
JCCyC
  • 16,140
  • 11
  • 48
  • 75
1

Remove classes. Number of classes (methods of classes) in .NET Framework handles exception implicitly. It's difficult to work with a dumb person.

KV Prajapati
  • 93,659
  • 19
  • 148
  • 186
1

Don't use keywords for basic types if the language has the actual type exposed. In C#, this would refer to bool (Boolean), int (Int32), float (Single), long (Int64). 'int', 'bool', etc are not actual parts of the language, but rather just 'shortcuts' or 'aliases' for the actual type. Don't use something that doesn't exist! And in my opinion, Int16, Int32, Int64, Boolean, etc makes a heck of a lot more sense than 'short', 'long', 'int'.

David Anderson
  • 13,558
  • 5
  • 50
  • 76
  • 4
    `int`, `bool` etc most certainly *are* part of the C# language. They're right there in the specification! They may not be part of the underlying platform, but they're definitely part of the C# language. – Jon Skeet Jul 29 '09 at 05:22
  • I think platform is what I meant. * looks around * Thanks for the clarification! – David Anderson Jul 29 '09 at 06:08
1

When many new technologies appear on the scene I only learn enough about them to decide if I need them right now.

If not, I put them aside until the rough edges are knocked off by "early adopters" and then check back again every few months / years.

Ash
  • 60,973
  • 31
  • 151
  • 169
  • In what sense is this an controversial opinion? – Ikke Sep 09 '09 at 07:25
  • @Ikke, Why? Surely this makes me an out of touch dinosaur, scared of change and clinging to out-dated and obsolete technologies? (I've lost count of how many projects I've worked on use new technologies because "they're cool" and will solve all our problems.) – Ash Sep 10 '09 at 10:05
1

Agile sucks.

tsilb
  • 7,977
  • 13
  • 71
  • 98
1

Jon Bentley's 'Programming Pearls' is no longer a useful tome.

http://tinyurl.com/nom56r

Jim G.
  • 15,141
  • 22
  • 103
  • 166
  • Interesting opinion. I guess on the details I agree, but in terms of overall attitude, I think we can learn from it. I think we programmers tend to run in channels, and Jon has an attitude of inventiveness and questioning accepted "wisdom". (Not to mention **fun**.) – Mike Dunlavey Oct 13 '09 at 22:41
  • being extremely familair with C syntax carries-over to many languages. And anyone who thinks this is aimed at graduate students is off their rocker - I only know *one* person who read it in grad school. Almost ***everyone*** I know who has read it and/or owns a copy did so either before graduating college, or because they jumped to development from another field, or because they just wanted to. – warren Oct 23 '09 at 04:20
1

I think Java should have supported system-specific features via thin native library wrappers.

Phrased another way, I think Sun's determination to require that Java only support portable features was a big mistake from almost everyone's perspective.

A zillion years later, SWT came along and solved the basic problem of writing a portable native-widget UI, but by then Microsoft had been forced to fork Java into C#, and lots of C++ had been written that could otherwise have been done in civilized Java. Now the world runs on a blend of C#, VB, Java, C++, Ruby, Python and Perl. All the Java programs still look and act weird, except for the SWT ones.

If Java had come out with thin wrappers around native libraries, people could have written the SWT-equivalent entirely in Java, and we could have, as things evolved, made portable apparently-native apps in Java. I'm totally for portable applications, but it would have been better if that portability were achieved in an open market of middleware UI (and other feature) libraries, and not through simply reducing the user's menu to junk or faking the UI with Swing.

I suppose Sun thought that ISV's would suffer with Java's limitations and then all the world's new PC apps would magically run on Suns. Nice try. They ended up not getting the apps AND not having the language take off until we could use it for logic-only server back-end code.

If things had been done differently maybe the local application wouldn't be, well, dead.

DigitalRoss
  • 143,651
  • 25
  • 248
  • 329
1

QA can be done well, over the long haul, without exploring all forms of testing

Lots of places seem to have an "approach", how "we do it". This seems to implicitly exclude other approaches.

This is a serious problem over the long term, because the primary function of QA is to file bugs -and- get them fixed.

You cannot do this well if you are not finding as many bugs as possible. When you exclude methodologies, for example, by being too black-box dependent, you start to ignore entire classes of discoverable coding errors. That means, by implication, you are making entire classes of coding errors unfixable, except when someone else stumbles on it.

The underlying problem often seems to be management + staff. Managers with this problem seem to have narrow thinking about the computer science and/or the value proposition of their team. They tend to create teams that reflect their approach, and a whitelist of testing methods.

I am not saying you can or should do everything all the time. Lets face it, some test methods are simply going to be a waste of time for a given product. And some methodologies are more useful at certain levels of product maturity. But what I think is missing is the ability of testing organizations to challenge themselves to learn new things, and apply that to their overall performance.

Here's a hypothetical conversation that would sum it up:

Me: You tested that startup script for 10 years, and you managed to learn NOTHING about shell scripts and how they work?!

Tester: Yes.

Me: Permissions?

Tester: The installer does that

Me: Platform, release-specific dependencies?

Tester: We file bugs for that

Me: Error handling?

Tester: when errors happen, customer support sends us some info.

Me: Okay...(starts thinking about writing post in stackoverflow...)

Community
  • 1
  • 1
benc
  • 1,381
  • 5
  • 31
  • 39
1
  • Soon we are going to program in a world without databases.

  • AOP and dependency injection are the GOTO of the 21st century.

  • Building software is a social activity, not a technical one.

  • Joel has a blog.

JuanZe
  • 8,007
  • 44
  • 58
1

You only need 3 to 5 languages to do everything. C is a definite. Maybe assembly, but you should know it and be able to use it. Maybe JavaScript and/or Java if you code for the web. A shell language like bash, and one HLL like Lisp, might also be useful. Anything else is a distraction.

Rob
  • 14,746
  • 28
  • 47
  • 65
1

Apparently it is controversial that IDEs should check whether they can link the code they create before wasting time compiling

But I'm of the opinion that I shouldn't compile a zillion lines of code only to realize that Windows has a lock on the file I'm trying to create because another programmer has some weird threading issue that requires him to Delay Unloading DLLs for 3 minutes after they aren't supposed to be used.

Community
  • 1
  • 1
Peter Turner
  • 11,199
  • 10
  • 68
  • 109
  • You're asking for a language with knowledge of platforms and implementation details. They don't work that way. – Integer Poet Mar 15 '10 at 19:33
  • No, I'm asking for an IDE with knowledge of platforms and implementation details. But thanks for the controversy! I didn't realize this question was finally deleted. – Peter Turner Mar 15 '10 at 21:33
1

XML and HTML are the "assembly language" of the web. Why still hack it?

It seems fairly obvious that very few developers these days learn or code in assembly language, because it's primitive and takes you far away from the problem you have to solve at a high level. So we invented high-level languages that encapsulate those lower-level entities to boost our productivity through language elements that we can relate to more at a higher level. Just like we can do more with a computer than with just its constituent motherboard or CPU.

With the Web, it seems to me developers are still reading, writing and hacking HTML, CSS, XML, schemas, etc.

I see these as the equivalent of the "assembly language" of the Web, or its substrates. Should we be done with it? Sure, we need to hack it sometimes when things go wrong. But surely that's an exception. I assert that we are replacing lower-level assembly language at the machine level with its equivalent at the Web level.

1

Neither Visual Basic or C# trumps the other. They are pretty much the same, save some syntax and formatting.

Brad
  • 1,357
  • 5
  • 33
  • 65
  • 1
    Now... They weren't always so feature similar. So you have to fight what many of us learned once upon a time. – Jon Adams Nov 13 '09 at 22:39
1

I think we should move away from C. It's too old! But the old dog is still barking louder than ever!

  • It is probably still one of the best languages to write an operating system in assuming (1) you are starting from scratch, (2) you want it to be fast but do not have time to write it in assembly, and (3) want to work on maintaining and editing operating systems written in C. – Noctis Skytower Dec 15 '09 at 00:25
  • One word: no. Oh wait, that were three words. –  Dec 20 '10 at 14:46
1

Associative Arrays / Hash Maps / Hash Tables (+whatever its called in your favourite language) are the best thing since sliced bread!

Sure, they provide fast lookup from key to value. But they also make it easy to construct structured data on the fly. In scripting languages it's often the only (or at least the most used) way to represent structured data.

IMHO they were a very important factor for the success of many scripting languages.

And even in C++, std::map and std::tr1::unordered_map have helped me write code faster.
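A minimal sketch, in Python, of constructing structured data on the fly with nothing but associative arrays (the data here is invented for the example):

```python
# Nested dicts give structured data with no schema declared up front.
config = {
    "server": {"host": "example.com", "port": 8080},
    "users": {"alice": {"role": "admin"}, "bob": {"role": "guest"}},
}

# Fast key -> value lookup, arbitrarily deep...
print(config["server"]["port"])  # 8080

# ...and trivially extensible at runtime.
config["users"]["carol"] = {"role": "guest"}
print(sorted(config["users"]))   # ['alice', 'bob', 'carol']
```

This is the pattern the answer credits for much of scripting languages' success: one data structure covering lookup tables, records, and trees alike.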

Frunsi
  • 7,099
  • 5
  • 36
  • 42
1

C++ is future killer language...

... of dynamic languages.

nobody owns it, has a growing set of features like compile-time (meta-)programming or type inference, callbacks without the overhead of function calls, doesn't enforce a single approach (multi-paradigm). POSIX and ECMAScript regular expressions. multiple return values. you can have named arguments. etc etc.

things move really slowly in programming. it took JavaScript 10 years to get off the ground (mostly because of performance), and most of people who program in it still don't get it (classes in JS? c'mon!). i'd say C++ will really start shining in 15-20 years from now. that seems to me like about the right amount of time for C++ (the language as well as compiler vendors) and critical mass of programmers who today write in dynamic languages to converge.

C++ needs to become more programmer-friendly (compiler errors generated from templates or compile times in the presence of same), and the programmers need to realize that static typing is a boon (it's already in progress, see other answer here which asserts that good code written in a dynamically typed language is written as if the language was statically typed).

just somebody
  • 18,602
  • 6
  • 51
  • 60
1

Simplicity Vs Optimality

I believe it's very difficult to write code that's both simple and optimal.

Salvin Francis
  • 4,117
  • 5
  • 35
  • 43
1

Python does everything that other programming languages do in half the dev time... and so does Google!!! Check out Unladen Swallow if you disagree.

Wait, this is a fact. Does it still qualify as an answer to this question?

orokusaki
  • 55,146
  • 59
  • 179
  • 257
  • Well, actually, Python still needs a bunch of C modules for some functionality. – Tor Valamo Dec 27 '09 at 07:48
  • 2
    Unladen swallow is not ready for prime time except at certain places inside google, and the "2 to 10 times" faster than interpreted python doesn't come anywhere close to real-native-code speeds for most every work load out there that is not web-slinger centric. If "everything" means "the web crap I think of as programming" then, yeah, Python can do that. And I love python. But I also see that performance-just-as-fast-as-native thing as a crock. Oh and don't forget about the global interpreter lock (GIL). – Warren P Apr 01 '10 at 00:07
1

That (at least during initial design), every database table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as a primary key and as foreign keys in other dependent tables, some column (attribute) or subset of the table attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?

Charles Bretana
  • 143,358
  • 22
  • 150
  • 216
1

(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
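A minimal Python sketch of the trade-off, using the named tuples mentioned in the comments (the `Point` example is invented):

```python
from collections import namedtuple

# Bare tuple: callers must remember that index 0 is x and index 1 is y.
point = (3, 4)
print(point[0])     # 3 -- a meaningless index

# Named alternative: same lightweight container, self-describing access.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=3, y=4)
print(p.x, p.y)     # 3 4
```

A namedtuple still behaves as a tuple (it even compares equal to one), so it addresses the "meaningless index" complaint without the ceremony of a full class.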

Hermit
  • 1,214
  • 13
  • 24
  • I'm glad you qualified this. Thank goodness for Python 2.6 adding [named tuples](http://docs.python.org/library/collections.html#collections.namedtuple). – bignose Apr 14 '09 at 03:27
  • Hey that's cool. I didn't know there was a such thing as a named tuple. I think for a tuple-perfect-storm you should design a GUI library in python that expects 2-tuples in x,y and y,x order in various places. :-) – Warren P Apr 01 '10 at 00:18
1

Exceptions considered harmful.

Jim In Texas
  • 1,524
  • 4
  • 20
  • 31
  • 4
    Checked exceptions. Unchecked exceptions are fantastic and do a great job of stabilizing your app. – Bill K Jan 09 '09 at 17:55
1

Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).

1
  • Xah Lee: actually has some pretty noteworthy and legitimate viewpoints if you can filter out all the invective, and rationally evaluate statements without agreeing (or disagreeing) based solely on the personality behind the statements. A lot of my "controversial" viewpoints have been echoed by him, and other notorious "trolls" who have criticized languages or tools I use(d) on a regular basis.

  • [Documentation Generators](http://en.wikipedia.org/wiki/Comparison_of_documentation_generators): ... the kind where the creator invented some custom-made, especially-for-documenting-source-code, roll-your-own syntax (including, but not limited to, JavaDoc) are totally superfluous and a waste of time because:

    • 1) They are underused by the people who should be using them the most; and
    • 2) All of these mini documentation languages could easily be replaced with YAML
dreftymac
  • 31,404
  • 26
  • 119
  • 182
1

I think its fine to use goto-statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.

woop
  • 36
  • 1
  • The key concept is "in a sane way". I would be shy of this idea if it were running for Grand Poo-Bah, but I understand Linus Torvalds agrees with it passionately :-) – Mike Dunlavey Oct 30 '09 at 15:07
1

Hardcoding is good!

Really, it's more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files is astonishing. Really, how often will you change the freezing point of water or the speed of light?

For C programs, just hard-code these types of values into a header file; for Java, into a static class, etc.

When these parameters have a drastic effect on your program's behaviour, you really want to do a regression test on every change, and this seems more natural with hard-coded values. When things are stored in parameter/property files, the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!
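A minimal Python sketch of what the answer suggests (a static constants module instead of a parameter file; the module and function names are invented for illustration):

```python
# physical_constants.py -- hard-coded values that never change at runtime.
# Changing one of these is a code change, so it goes through review and
# regression testing like any other change -- which is the point.

SPEED_OF_LIGHT_M_PER_S = 299_792_458   # exact, by definition of the metre
WATER_FREEZING_POINT_K = 273.15        # at standard atmospheric pressure

def celsius_to_kelvin(temp_c: float) -> float:
    """Convert Celsius to Kelvin using the hard-coded offset."""
    return temp_c + WATER_FREEZING_POINT_K
```

Because the values live in code, nobody can "tune" them in production without the change being visible in version control.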

James Anderson
  • 27,109
  • 7
  • 50
  • 78
  • 5
    Q - "how often will you change the freezing point of water" A - Every time you change altitude (barometric pressure) or salt density or... (assumptions start with those three letters for a reason) – jwpfox Jan 06 '09 at 06:17
  • 5
    the speed of light depends on the medium it's traveling through – Ferruccio Jan 07 '09 at 07:46
  • 2
    The assumption that a constant won't change (like in this post, indicated by the responses) is EXACTLY the problem and the reason you should just never hardcode. – Bill K Jan 09 '09 at 18:01
1

Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers who, if they knew they could be screwing up dozens of people, would be very careful about the changes they make, but instead get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)

Jesse Pepper
  • 3,225
  • 28
  • 48
  • Approvals are the bad thing? Or you just don't trust one person to do the approvals? I'd say "one person can never approve anything". Meaningful approval means everybody should have the ability to black-ball, and approval should be by stake-holder consensus. Then everybody is to blame when it fails, which it still will. :-) How's that for punchy? – Warren P Apr 01 '10 at 00:15
1

As most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters essentially making a sieve of them, I'd like to tell them to do stuff.

This seems to go straight against good enterprise practice, with its dumb entity objects and thicker service layer (that does plenty of asking). Hmmm, thoughts?
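A minimal Python sketch of the contrast (the account/fee example and all names are invented for illustration):

```python
# "Ask": the caller pulls state out via getters and applies the rule itself,
# so the fee policy leaks into every call site.
class AccountData:
    def __init__(self, balance):
        self.balance = balance

def apply_fee_asking(account, fee):
    if account.balance >= fee:       # the caller does the thinking
        account.balance -= fee

# "Tell": the object is told what to do and owns its own rules.
class Account:
    def __init__(self, balance):
        self._balance = balance

    def apply_fee(self, fee):
        """Deduct a fee if funds allow; the rule lives with the data."""
        if self._balance >= fee:
            self._balance -= fee

    @property
    def balance(self):
        return self._balance
```

With the "tell" style, the service layer shrinks to a sequence of commands instead of get/check/set choreography, and the invariant (never overdraw) is enforced in exactly one place.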

  • I actually agree with this. If you find too much logic happening outside of the class via querying accessor functions something may be wrong with the design. However, there are different ways of using objects... sometimes you just want an object that holds state and doesn't do anything else. – oz10 Jan 14 '09 at 03:27
  • Primitive getters setters are important in complex applications, the tell methods should be composed of them. Each level of indirection is a blessing in reducing complexity. Ignore this in a complex application (i.e., not business transaction processing or web sites) at your own peril. – Hassan Syed Apr 11 '10 at 03:05
1

Opinion: Duration in the development field does not always mean the same as experience.

Many trades look at "years of experience" in a language. Yes, 5 years of C# can make sense since you may learn new tricks and what not. However, if you are with the company and maintaining the same code base for a number of years, I feel as if you are not gaining the amount of exposure to different situations as a person who works on different situations and client needs.

I once interviewed a person who prided himself on having 10 years of programming experience and worked with VB5, 6, and VB.Net...all in the same company during that time. After more probing, I found out that while he worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people with only 2 years in the field who, having worked on multiple projects, have more "experience" than he does.

JamesEggers
  • 12,885
  • 14
  • 59
  • 86
1

Software engineers should not work with computer science guys

Their differences:

  • SEs care about code reusability, while CSs just suss out code
  • SEs care about performance, while CSs just want to have things done now
  • SEs care about whole structure, while CSs do not give a toss

...

  • I'm a computer science guy and I can agree whole heartedly. The CS guys come in, the SE guys are jealous/threatened so they spend all their time trying to prove that the CS guy can't program using "good software engineering" (whatever that is)... meanwhile the CS guy thinks his job is sh*t because – oz10 Jan 14 '09 at 03:12
  • he would rather be working on algorithms, AI, or something else interesting/useful at a larger scope. Not solving some stupid brainless SE problem that could have been avoided. Best to keep separated. – oz10 Jan 14 '09 at 03:13
  • "Don't call me a computer scientist. I'm a coder. I hate computer scientists. If I wanted to deal with people who're more concerned with correctness according to some set of made-up rules than with functionality, I'd go to a church." – chaos Apr 29 '09 at 19:06
1

Managers know everything

It's been my experience that managers usually didn't get there by knowing code. No matter what you tell them, it's too long, not right, or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.

Adam Hawes
  • 5,439
  • 1
  • 23
  • 30
1

Haven't tested it yet for controversy, but there may be potential:

The best line of code is the one you never wrote.

flq
  • 22,247
  • 8
  • 55
  • 77
1

I don't believe that any question related to optimization should be flooded with a chant of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.

Demur Rumed
  • 351
  • 3
  • 9
1

Here's mine:

"You don't need (textual) syntax to express objects and their behavior."

I subscribe to the ideas of Jonathan Edwards and his Subtext project - http://alarmingdevelopment.org/

Bjarke Ebert
  • 1,920
  • 18
  • 26
  • Johnathan Edwards? Okay, I'm getting a class: "Person" Anyone here need a "Person" class? This Person class passed quite abruptly, something to do with the chest. The Person class also has a Validate() method, it wants me to validate it. You should always validate the classes around you every day! – Peter Morris Feb 08 '09 at 21:21
1

People complain about removing 'goto' from the language. I happen to think that any sort of conditional jump is highly overrated, and that 'if', 'while', 'switch' and the general-purpose 'for' loop should be used with extreme caution.

Every time you make a comparison and a conditional jump, a tiny bit of complexity is added, and this complexity adds up quickly once the call stack gets a couple hundred items deep.

My first choice is to avoid the conditional, but if it isn't practical my next preference is to keep the conditional complexity in constructors or factory methods.

Clearly this isn't practical for many projects and algorithms (like control flow loops), but it is something I enjoy pushing on.

-Rick
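The "keep the conditional in a factory" preference above can be sketched in Python; the writer classes and the format-dispatch table here are invented for illustration:

```python
class CsvWriter:
    """Render rows as comma-separated lines."""
    def write(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

class TsvWriter:
    """Render rows as tab-separated lines."""
    def write(self, rows):
        return "\n".join("\t".join(map(str, r)) for r in rows)

def make_writer(fmt: str):
    """The only branch lives here, in the factory: dispatch by lookup,
    not by if/elif chains scattered across the codebase."""
    writers = {"csv": CsvWriter, "tsv": TsvWriter}
    return writers[fmt]()

# Call sites never branch on the format again:
writer = make_writer("csv")
print(writer.write([[1, 2], [3, 4]]))
```

Once the object is constructed, polymorphic dispatch replaces the comparisons that would otherwise be repeated at every use site.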

Rick
  • 3,285
  • 17
  • 17
1

There is a difference between a programmer and a developer. An example: a programmer writes pagination logic, a developer integrates pagination on a page.

1

There are only 2 kinds of people who use C (/C++): Those who don't know any other language, and those who are too lazy to learn a new one.

  • I worked with a guy who was doing C/C++ for 15 years, and flat out refused to learn anything else. He considered anything other than C/C++ to be a child's toy, which included any managed frameworks .NET or Java and any web related technology. Therefore the only thing he could program is Win32 desktop applications. And he made it very clear he's not going to learn anything new, and will be doing C/C++ to his retirement. – WebMatrix Apr 21 '09 at 14:52
  • 1
    And those who feel C++ is still the best way despite knowing Java and C#. – luiscubal Jul 11 '09 at 00:28
1

"else" is harmful.
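One common way to drop `else` while keeping the purely top-to-bottom reading order the comments below argue for is guard clauses with early returns. A hedged Python sketch (the function and its rules are invented for illustration):

```python
def shipping_cost(order_total: float, is_member: bool) -> float:
    """Each condition is handled and exited immediately; no else needed,
    and the reader never has to match an else back to its if."""
    if order_total <= 0:
        raise ValueError("order total must be positive")
    if is_member:
        return 0.0
    if order_total >= 50:
        return 0.0
    return 5.0
```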

  • What do you propose as an alternative? – Matt Grande Jun 05 '09 at 19:27
  • A series of if()s, each fully enumerating the situation where it applies. Else, of course, only states the condition implicitly; the reader has to maintain state in his own head, and there's a pretty low limit before people are overwhelmed and start getting it wrong. Another way is to have if()s which set a state variable, and then switch on that state. –  Jun 06 '09 at 07:44
  • Another issue here is reading code from top to bottom. Multiple elses, especially with chunks of code of any size in them, require the reader to scoot up and down the code, matching elses to ifs. I find it immeasurably better to have a purely linear flow of code. –  Jun 06 '09 at 07:46
  • 1
    Consider --if(a) a = false; else print("x");-- and --if(a) a = false; if(!a) print("x");-- They are not the same thing. As for issues with understanding code, I believe proper indentation solves most of the problems. – luiscubal Jul 11 '09 at 00:30
  • +1.. interesting point.. At university I was taught to -always- write an else statement explicitly to improve testability (an omitted else clause should be considered as executing a null statement in code path analysis), but I agree that for readability it can sometimes indeed be better to refactor. – Wouter van Nifterick Jul 12 '09 at 00:10
  • @Wouter van Nifterick: I have to criticize that point of view too, however. If the if statement has a return statement, then you can just write code outside the if stuff. – luiscubal Jul 17 '09 at 17:40
  • -1, else is valid for controlling program flow - excuse the pun, but how ELSE are you going to do it? – JL. Apr 04 '10 at 16:07
  • Please see the second comment. With regard to your point about program flow, all you've stated is what else is to be used for. That does not indicate in any way whether or not it is harmful. GOTO is also intended for controlling program flow. It has it's place (infinite loops) but beyond that, over-use is harmful. –  Apr 04 '10 at 17:09
1

That the Law of Demeter, considered in context of aggregation and composition, is an anti-pattern.

chaos
  • 122,029
  • 33
  • 303
  • 309
1

I am of the opinion that there are too many people making programming decisions who shouldn't be worrying about implementation.

Andrew Sledge
  • 10,163
  • 2
  • 29
  • 30
0

You must know C to be able to call yourself a programmer!

navigator
  • 1,588
  • 1
  • 13
  • 19
  • 5
    Completely disagree. C isn't the be-all-and-end-all of programming. There were many languages before it, and there are many languages after it that will suit different situations better than C will. Also, programming is about the analytical problem solving, and not just writing code in a particular language. – Jasarien Oct 13 '09 at 21:36
  • Like Jasarien I'm completely disagree. C is another language, is not THE language. – Lucas Gabriel Sánchez Oct 14 '09 at 12:19
  • Actually, C is pretty much THE language for some tasks, although certainly not for all. There is a lot of documentation and tutorials online - specially on low-level stuff - which are way harder to understand without C knowledge. – luiscubal Oct 15 '09 at 18:15
  • More people use C than any other language and it's used on more projects than any other language. – Rob Oct 30 '09 at 12:37
  • agreed. I wonder, would you say you can call yourself a programmer if you know D and not C? (D doesn't hide anything from you, just like C). –  Dec 24 '09 at 09:05
  • Depends on what you want to make. High level Windows GUI applications should not be made in C, low level ICU programming, C is required. – Petah Oct 15 '10 at 02:50
0

C must die.

Voluntarily programming in C when another language (say, D) is available should be punishable for neglect.

reinierpost
  • 8,425
  • 1
  • 38
  • 70
  • 6
    Disagree. If C is the language you are more comfortable in, and is suitable for the task, then C is the language that would make most sense for you to develop in. If you're already proficient in C, then why waste the time learning D (as you put it) if you could complete the task to an acceptable standard using C? – Jasarien Oct 13 '09 at 21:38
  • The answer is real easy: you and other people will forever have to clean up the things D helps you prevent in your C code, unless you belong to the top 0.5% of C programmers who never makes such mistakes in the first place. (it may be 0.05%, I'm not sure). There are certainly tools for C which help prevent such mistakes as well. The trouble is you can't count on other people having used them. – reinierpost Oct 14 '09 at 13:47
0

The C++ STL library is so general purpose that it is optimal for no one.

dicroce
  • 45,396
  • 28
  • 101
  • 140
  • 'The' and 'STL library' don't belong in that sentence. Remove them. –  Dec 20 '10 at 14:51
0

The human brain is the master key to all locks.

There is nothing in this world that can move faster than your brain. Trust me, this is not philosophical but practical. As far as opinions are concerned, mine are as follows:

1) Never go outside the boundary specified by the programming language. A simple example would be pointers in C and C++: don't misuse them, as you are likely to get the DAMN SEGMENTATION FAULT.

2) Always follow the coding standards. Yes, what you are reading is correct: coding standards do a lot for your program. After all, your program is written to be executed by a machine but to be understood by some other brain :)

Sachin
  • 20,805
  • 32
  • 86
  • 99
0

Once i saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I stated that he cannot assume that in the general case, but he just laughed.
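The same trap exists outside C#. A hedged Python sketch of the distinction (the `CiString` class is invented for illustration): an ordering under which two values compare as "equal" for sorting purposes without being equal objects.

```python
from functools import total_ordering

@total_ordering
class CiString:
    """String with case-insensitive *ordering* but case-sensitive equality."""
    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        # Ordering ignores case...
        return self.value.lower() < other.value.lower()

    def __eq__(self, other):
        # ...but equality is deliberately case-sensitive.
        return self.value == other.value

a, b = CiString("Hello"), CiString("HELLO")
print(not a < b and not b < a)   # neither precedes the other: compare "equal"
print(a == b)                    # yet they are not equal objects
```

Whether this is a broken `IComparable` implementation or a legitimate separate ordering (better expressed as an `IComparer` in C#, or a sort key in Python) is exactly the argument in the comments below.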

Rauhotz
  • 7,914
  • 6
  • 40
  • 44
  • I'd be interested in hearing your reasoning here - as well as which CompareTo method you're talking about. – Jon Skeet Jan 02 '09 at 14:03
  • I'm talking about the C# IComparable.CompareTo method. Don't expect that two IComparable-implementing objects are equal if the CompareTo method returns zero. They just have the same order. – Rauhotz Jan 02 '09 at 14:09
  • Then your implementation of IComparable is broken. The docs state that a return value of zero means "This instance is equal to obj." I'm not saying that there aren't broken implementations out there, but your colleague can reasonably point to the docs... – Jon Skeet Jan 02 '09 at 14:12
  • 2
    I'd argue that if things don't have a natural equality/ordering relationship, it's better to have a separate IComparer implementation, which can express this explicitly. There are certainly tricky edge cases - is 1.000m equal to 1.0m for example? – Jon Skeet Jan 02 '09 at 14:14
  • that's a good case of narrow-minded view. check the lots of 'compare' predicates in Scheme – Javier Jan 02 '09 at 14:21
  • Jon, could you be so kind and point me the lines in the docs, where it says "a.CompareTo(b) == 0 implies a.Equals(b) == true"? – Rauhotz Jan 02 '09 at 14:22
  • Sure. The docs for IComparable.CompareTo mean that "a.CompareTo(b) == 0" implies "a is equal to b". The docs for object.Equals mean that "a.Equals(b)" should return true if a is equal to b. It can be argued that the documentation is too narrow or incomplete (Java's docs are more careful on this front) – Jon Skeet Jan 02 '09 at 14:34
  • but the documentation really does seem fairly clearly limiting. It does say that the meaning of "equals" depends on the implementation, but it's at the very least confusing for "equals" to mean something different *within the same type* between two methods. – Jon Skeet Jan 02 '09 at 14:35
  • That's why I think it's clearer to implement non-natural orderings (i.e. where equality within ordering doesn't mean equality between objects) via IComparer instead of IComparable. – Jon Skeet Jan 02 '09 at 14:36
  • JavaDocs where it's nice and clear (compared with the MSDN ones for IComparable): http://java.sun.com/javase/6/docs/api/java/lang/Comparable.html It even says how to document times when you violate consistency with equals. – Jon Skeet Jan 02 '09 at 14:40
  • Of course, given the number of comments discussing this line, I think we're justified in considering it at least a bit unclear. – David Thornley Jan 02 '09 at 16:23
  • On the other hand, 7 of the comments before this one were mine :) – Jon Skeet Jan 02 '09 at 19:46
0

System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework and it's adding complexity. Use a nice code generator from somewhere to generate the data layer and the Unit of Work sits on the object collections (DataTable and DataSet)--no mystery.

Mark A Johnson
  • 958
  • 9
  • 31
  • You've obviously never used a DataSet then :P – Cameron MacFarland Jan 04 '09 at 08:23
  • I have to disagree. IMO the DataSet is overkill for the vast majority of operations. And before it's asked, yes, I have used it. – Mike Hofer Jan 04 '09 at 10:43
  • By the same reasoning, LINQ to SQL, Entity Framework, NHibernate, etc. are also overkill for the "vast majority" of operations. BTW, did you mean the "vast majority" of all operations or the "vast majority" of places where I'd use DDD? – Mark A Johnson Jan 09 '09 at 20:53
0

Not everything needs to be encapsulated into its own method. Sometimes it is OK to have a method do more than one thing.

  • reminds me of an old manager of mine who abstracted himself out of a job. He spent months abstracting an app to make it "perfect" but in the end got nothing done. – Neil N Feb 19 '09 at 22:37
0

Don't worry too much about which language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, then you're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum"?

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java, which, when you think about it, is only one namespace out of over 100.

Master one or be ok at lots.

Razor
  • 17,271
  • 25
  • 91
  • 138
0

Use of design patterns and documentation

In web development, what's the use of these things? I've never felt any need for them.

0

Controversial to self, because some things are better left unsaid, so you won't be painted by others as too egotistical. However, here it is:

If it is to be, it begins with me

Hao
  • 8,047
  • 18
  • 63
  • 92
0

Programming is so easy a five year old can do it.

Programming in and of itself is not hard, it's common sense. You are just telling a computer what to do. You're not a genius, please get over yourself.

mweiss
  • 1,363
  • 7
  • 14
  • 1
    I'm not a genius and I don't need to get over myself. In fact, not a day goes by that I don't question myself and wonder if just maybe I am a moron. And that's because I'm trying to tell a computer what it should do, and me realising that I'm not explaining it well enough. – Captain Sensible Jan 26 '09 at 14:20
  • 2
    Please submit your five year olds resume to my HR personell. ;) – Eddie Parker Jan 27 '09 at 03:36
  • Explain x = 4 * 7 to a 5-year old. – Cameron MacFarland Feb 06 '09 at 14:35
  • this is a pretty controversial opinion - so why the downvote? i'm confused – Simon_Weaver Feb 10 '09 at 10:02
  • Maybe if you program like a five year old. I think a programmer requires a certain amount of maturity and understanding to help them understand it all a little more clearly; like why Bubble Sort is not scalable and what an Octree is and what it is useful for... – Robert Massaioli May 26 '09 at 03:30
  • I started when I was 4, so +1. – Coding With Style Jul 04 '09 at 07:59
  • 3
    Programming *can* be done by a 5-year-old. *Good* programming takes experience, self-discipline, and self-criticism, not traits found in your average 5-year-old (or many professionals, either). – DevSolar Oct 16 '09 at 09:05
  • This isn't controversial, it's both stupid and factually incorrect. You are only telling a computer "what" to do if you are using a purely function language. You'll find that writing functional code takes a bit more than "common sense" and more mental capacity than that of a five year old. If you are using an imperative language such as C then you are not just telling the computer what to do, you have to explicitly state "how" to do "what" you want. – Tim Dec 09 '09 at 19:54
  • 4
    i started programming when i was 1. i used a 1bit stream to tell my mother change my pampers. it was {guess}. – Behrooz Dec 14 '09 at 20:17
0

"Don't call virtual methods from constructors". This is only sometimes a PITA, but is only so because in C# I cannot decide at which point in a constructor to call my base class's constructor. Why not? The .NET framework allows it, so what good reason is there for C# to not allow it?

Damn!

Peter Morris
  • 20,174
  • 9
  • 81
  • 146
0

Logger configs are a waste of time. Why have them if it means learning a new syntax, especially one that fails silently? Don't get me wrong, I love good logging. I love logger inheritance and adding formatters to handlers to loggers. But why do it in a config file?

Do you want to make changes to logging code without recompiling? Why? If you put your logging code in a separate class, file, whatever, what difference will it make?

Do you want to distribute a configurable log with your product to clients? Doesn't this just give too much information anyway?

The most frustrating thing about it is that popular utilities written in a popular language tend to write good APIs in the format that language specifies. Write a Java logging utility and I know you've generated the javadocs, which I know how to navigate. Write a domain specific language for your logger config and what do we have? Maybe there's documentation, but where the heck is it? You decide on a way to organize it, and I'm just not interested in following your line of thought.

David Berger
  • 12,385
  • 6
  • 38
  • 51
  • 2
    "Do you want to make changes to logging code without recompiling?Why?" All the time. I have a deployed server that has no reason to log the finest detail when it's serving production traffic, but I have to be able to turn logging on when something goes wrong. Perhaps you just don't work on the type of applications for which this is necessary, but it's not a superfluous feature. – Kai Apr 25 '09 at 21:48
  • Fair enough. That's actually a scenario that I have some experience with...but the difference is that the compile time in the cases I deal with is < 2 min. I know I have to restart the server if I change the logging config...recompiling doesn't seem like such a big deal to me in light of that. – David Berger Apr 25 '09 at 22:34
0

Apparently mine is that Haskell has variables. This is both "trivial" (according to at least eight SO users) (though nobody can seem to agree on which trivial answer is correct), and a bad question even to ask (according to at least five downvoters and four who voted to close it). Oh, and I (and computing scientists and mathematicians) am wrong, though nobody can provide me a detailed explanation of why.

Community
  • 1
  • 1
cjs
  • 25,752
  • 9
  • 89
  • 101
  • Even though I respect math, I'd have to disagree. Those aren't variables. Those are constants. Variables should be... well... variable. I believed Haskell has no variables because "x = x + 1" isn't possible. You use functions, you don't *really* change the value of x. HOWEVER, that post mentioned IORef, so maybe Haskell *does* have variables... – luiscubal Jul 11 '09 at 00:18
  • Well, go put an answer up on the question to which I linked showing why, in the definition "double x = x * 2", "x" is a constant. – cjs Jul 15 '09 at 18:04
  • 1
    "double x = x * 2" makes no sense in any language. Not even C. – luiscubal Jul 17 '09 at 17:44
  • It's an equation, saying that the left and right sides are equivalent (i.e., "double 3" means the same thing as "3 * 2"), and not only does it make perfect sense in mathematics, but it's perfectly valid Haskell code. – cjs Aug 05 '09 at 11:22
  • 1
    So haskell is single-assignment within the bounds of a particular scope, and you can only "change" the value of x by reintroducing a new inner scope, which is what "double x= x *2" really does, right? It doesn't change the value of x at all, it just overloads the identifier x with a new (temporary) value at a particular scope. – Warren P Apr 01 '10 at 00:04
-1

That WordPress IS a CMS (technically, therefore indeed).

https://stackoverflow.com/questions/105648/wordpress-is-it-a-cms

Community
  • 1
  • 1
madcolor
  • 8,105
  • 11
  • 51
  • 74
  • Not exactly, it is a CMS focussed on blogging. Like MySpace is a social network focussed on music. And they are both terrible. –  Dec 20 '10 at 14:55
-1

To quote the late E. W. Dijkstra:

Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

Computer Science is no more about computers than astronomy is about telescopes.

I don't understand how one can claim to be a proper programmer without being able to solve pretty simple maths problems such as this one. A CRUD monkey - perhaps, but not a programmer.

Community
  • 1
  • 1
Andrew not the Saint
  • 2,496
  • 2
  • 17
  • 22
-1

Copy/pasting is not an antipattern; in fact, it helps you make fewer bugs.

My rule of thumb: only type something that cannot be copy/pasted. If creating a similar method, class, or file, copy an existing one and change what's needed. (I am not talking about duplicating code that should have been put into a single method.)

I usually never even type variable names, either copy/pasting them or using IDE autocompletion. If I need some DAO method, I copy a similar one and change what's needed (even if 90% will be changed). This may look like extreme laziness or lack of knowledge to some, but I almost never have to deal with problems caused by misspelling something trivial, and those are usually tough to catch (if not detected at the compile level).

Whenever I step away from my copy/pasting rule and start typing stuff, I always misspell something (it's just statistics: nobody can write perfect text off the bat) and then spend more time trying to figure out where.

serg
  • 109,619
  • 77
  • 317
  • 330
-2

A real programmer loves open-source like a soulmate and loves Microsoft as a dirty but satisfying prostitute

Andrew not the Saint
  • 2,496
  • 2
  • 17
  • 22
-2

"Good coders code and great coders reuse." That is what's happening right now. But the "good coder" is the only one who enjoys that code, and the "great coders" are left only to find the bugs in it, because they don't have time to think and code, but they do have time to find the bugs in that code.

So don't criticize!

Create your own code how YOU want.

Access Denied
  • 886
  • 1
  • 13
  • 24
  • 2
    In the working world it is not an option to rewrite code "the way you want it" you have to deal with what is there regardless of who wrote it. The rest of your post is incomprehensible. – jwpfox Jan 04 '09 at 11:26
  • I totally disagree with you: do not reinvent the wheel, they say! – Luis Filipe Aug 17 '09 at 14:22
  • I totally agree that this is the most controversial statement. – nate c Nov 24 '10 at 22:24
-4

Software sucks due to a lack of diversity. No offense to any race, but things work pretty well when a profession is made up of different races and both genders. Just look at the fight against overusing non-renewable energy: it is going great because everyone is contributing, not just the "stereotypical guy".

  • I agree that programming is a privileged white collar job that attracts privileged people, and that it's an ol' boys club. But this really only hurts the quality of life at the workplace (NO, I do not want to talk about anime at work), not the quality of software. – temp2290 Oct 13 '09 at 21:30
  • Wow... I don't know where you people are working (and no, not that "you people"). The last few places that I have worked are diversified and definitely not a privileged position. Maybe if you are a COBOL programmer from the 60s... – Joseph Ferris Oct 29 '09 at 19:14
-4

Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.

Eric Mills
  • 121
  • 1
  • 1
  • 5
  • What does this even mean? "Hey, I just released a patch that deleted the customer's requested functionality because I felt like it but it's ok because I have documented it and told you that I did it." Is that the kind of thing you were suggesting? – jwpfox Jan 05 '09 at 03:48
  • This could happen if a programmer has poor judgement, but I ultimately believe developers have better judgement than they are given credit for. They should be allowed to fix bugs without a bunch of friction. I believe in trust over regulation with the developers I work with. – Eric Mills Jan 07 '09 at 15:11
  • +1. Why the downvotes? Maybe doing the kind of work that demands that level of scrutiny removes the ability to see that there's more than one kind of coding environment? There's no manager to lean on when your world-view-interpreter algorithms are wonky. – jTresidder Jan 07 '09 at 21:43
  • I could count on one hand the number of programmers I know that I would trust in that sort of environment - too many cowboys out there. – Evan Jan 08 '09 at 07:48
  • Ok, I would start by modifying all the code you wrote. It would be interesting to see if you would still feel the same way. – Captain Sensible Jan 26 '09 at 10:52
  • 4
    @Eric Mills: Go work for a bank, or qualify your answer. Maybe you are unaware or underestimating the impact erroneous (or even malicious) code changes can have on a company. Hours of work lost, bazillions of space credits blown. Careers have been destroyed over these kinds of things, people fired on the spot. Probably not something you'll understand until you are personally responsible for an insanely important system...and some cowboy wants to tweak it at will. – Stu Thompson Apr 28 '09 at 19:59
  • At least, in all systems i worked with, this logic would be a very bad policy. Could you provide us with an environment where you would like this to occur? – Luis Filipe Aug 17 '09 at 14:25
  • I wouldn't even want myself in that kind of environment. Forgetting matters of trust or judgment for a second, with any project with more than 1 dev you run into concurrency issues. We both coded well, but our updates clashed . . . in production. And do you truly think QA before release is useless? Any important or sizable project has those checks for a reason. Some non-important, non-sizable projects with 1 or 2 devs and a knowledgeable user base (e.g., some games) can and do practice this, but they are exceptions - not the rule. – Ethel Evans Dec 22 '10 at 01:12
-8

My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While...EndWhile" in a language confuses new programmers.
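The claimed equivalence can be sketched as follows. Since the thread names no particular language, this is a Python sketch, with a bottom-tested `while True:` loop standing in for the Repeat...Until construct that Python lacks:

```python
def while_loop(cond, body):
    """Replicate `while cond(): body()` using only a repeat-style
    (bottom-tested) loop plus a boolean flag, as the answer suggests."""
    done = False
    while True:          # "Repeat"
        if cond():       # guard the body with the original while-condition
            body()
        else:
            done = True  # condition failed: flag the loop to stop
        if done:         # "...Until done"
            break

# Example: behaves exactly like `while i < 3:`
result = []
i = 0
def cond(): return i < 3
def body():
    global i
    result.append(i)
    i += 1

while_loop(cond, body)
print(result)  # [0, 1, 2]
```

Note the flag-based version also preserves while's pretest behavior: if the condition is false on entry, the body never runs, because the very first pass through the repeat-style loop just sets the flag.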

Update - Extra Notes

One common mistake new programmers make with While is to assume that the code will break as soon as the tested condition becomes false. So if the While test becomes false halfway through the body, they expect an immediate break out of the While loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered about which of the two loop types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "Repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)

seanyboy
  • 5,623
  • 7
  • 43
  • 56
  • What if you're unaware of when a condition is false? And where has Repeat come from? While works on the English basis of "while this condition is true, do this" – Kieran Senior Jan 02 '09 at 13:41
  • You could replace all constructs with goto. – Toon Krijthe Jan 02 '09 at 13:56
  • Not only do I like WHILE but I would also borrow Nemerle's UNLESS and put it into C#. – Dmitri Nesteruk Jan 02 '09 at 14:00
  • a language designed for mediocre or inexperienced programmers gets only mediocre and inexperienced users. – Javier Jan 02 '09 at 14:23
  • I haven't seen Repeat...Until since BBC BASIC! VB now has Do...Loop, Repeat...Until and While...Wend should both be removed. It bugs me though when I see, Do While Not ... instead of Do Until ... – pipTheGeek Jan 02 '09 at 14:47
  • The first question I usually ask when I see a While loop is "Will it break during the loop or after the check?" The reason for this is I've used a language or two before that immediately broke out of the loop when the condition returned false. –  Jan 02 '09 at 17:39
  • 5
    This is nonsense. Neither repeat nor while will break in the middle so your argument is absurd. Basically the developers need to be instructed in the use of break/exit/goto to exit a loop early. As for testing condition at the beginning/end both have their uses. – Cervo Jan 02 '09 at 18:20
  • Also do { statements } while (!condition) is the same as do { statements } until (condition) so I don't know what the complaint is. – Cervo Jan 02 '09 at 18:21
  • Actually, I'm not sure if it's the same or not, but I never use do ... while blocks, so I think perhaps I agree with you. :) – skiphoppy Jan 03 '09 at 10:25
  • 2
    "One common ... flags false" - How common is this? In what language? Perhaps the answer for those who have this idea when it's false is "RTFM!". This is just a bad solution looking for a problem it can't find. – jwpfox Jan 04 '09 at 11:19
  • A while with a repeat is a if then repeat until condition not a while + bool – Marco van de Voort May 07 '09 at 06:47
-10

If it's not native, it's not really programming

By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it's a script.

This was just simple common sense, completely non-controversial, back before Java came out. Suddenly there was a scripting language with a large enough feature set to accomplish tasks that had previously been exclusively the domain of programs. In response, Microsoft developed the .NET framework and some scripting languages to run on it, and managed to muddy the waters further by slowly reducing support for true programming among their development tools in favor of .NET scripting.

Even though it can accomplish a lot of things that you previously had to write programs for, managed code of any variety is still scripting, not programming, and "programs" written in it do and always will share the performance characteristics of scripts: they run more slowly and use up far more RAM than a real (native) program would take to accomplish the same task.

People calling it programming are doing everyone a disservice by dumbing down the definition. It leads to lower quality across the board. If you try and make programming so easy that any idiot can do it, what you end up with are a whole lot of idiots who think they can program.

Mason Wheeler
  • 82,511
  • 50
  • 270
  • 477
  • 19
    This sounds like argumentative nonsense to me. Suppose I compile a program which satisfies your definition... but then run it in VMWare or something like that. Does that make it a "script" because it's running virtually? Of course not. Likewise if you're dismissing Java as "not programming" would your view change if at any point anyone brought out a "Java CPU" (if such things don't exist already)? Yes, there are plenty of arguments for not trying to "dumb down" programming too much - but the way you're putting it takes that *much* too far. – Jon Skeet May 03 '09 at 07:47
  • With all due respect for you and your obvious intelligence, I have to disagree. A VM is just an abstraction of the hardware. The program is still capable of running directly on the hardware and talking to it. By contrast, if someone built a Java CPU, you still wouldn't be able to write an OS or device drivers for it in Java. (No pointers, etc.) – Mason Wheeler May 03 '09 at 12:55
  • 2
    So it would have to be able to do *more* than just Java - but it would still be able to execute Java code natively. Would all the "non-programmers" in the world who are currently writing Java suddenly become programmers in your view? Sorry, I still can't see this as a sensible or useful delineation at all. – Jon Skeet May 15 '09 at 20:47
  • You might also want to try convincing the Wikipedians, who certainly include scripts as programs, even leaving aside the question of whether Java is a script or not: http://en.wikipedia.org/wiki/Computer_program – Jon Skeet May 15 '09 at 20:51
  • 2
    I seem to remember that UCSD Pascal compiled to p-code, which was then interpreted, but Pascal has certainly always been considered a programming language and not a scripting language. The college I was at did also have something they called a Pascal Microengine, which could execute p-code natively. So the distinction is somewhat arbitrary and defies definition. – Tim Long May 17 '09 at 04:42
  • Gee, a Delphi programmer ridiculing code that runs on a framework! What a surprise! Self-deluded, elitist crap. – Ash Aug 06 '09 at 14:33
  • Delphi has a framework too. It's called the VCL. The difference is that it's native code, and it tends to add a few hundred kilobytes to your application, as opposed to .NET, which adds a few hundred MEGABYTES of dependencies. – Mason Wheeler Aug 06 '09 at 14:42
  • also, what about the Burroughs machines that ran COBOL natively as their assembly language? – warren Oct 23 '09 at 04:25
  • 2
    www.ajile.com. Hardware CPU runs java natively, realtime with direct access to the hardware. – Tim Williscroft Apr 09 '10 at 03:46
-12

Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.
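Taken literally, the rule means extracting until every "method" is a single line. A tongue-in-cheek Python sketch of what that refactor looks like (all names here are hypothetical):

```python
# Before: a two-line method body - a "code smell" under this rule.
def greet(name):
    cleaned = name.strip().lower()
    return "Hello, " + cleaned

# After: extract until each "method" is exactly one line.
cleaned = lambda name: name.strip().lower()
greet_one_liner = lambda name: "Hello, " + cleaned(name)

print(greet_one_liner("  ALICE "))  # Hello, alice
```

The two versions behave identically; the debate is whether the second is clearer or just more indirection.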

Jay Bazuzi
  • 45,157
  • 15
  • 111
  • 168
  • 3
    Or you could make your entire program one (reaaaly long) line of code. That's always fun. – Kiv Jan 03 '09 at 02:24
  • BAKA!! even in a functional language, like haskell, you can have several lines in a function! – hasen Jan 03 '09 at 02:34
  • When one combines the rule that a class should fit on the screen and every method has only one line a class can contain only approximately 7 lines of code. – tuinstoel Jan 03 '09 at 02:42
  • 7
    I'm amused that this is currently the lowest-ranked answer; I think I've succeeded at the "controversial" part. – Jay Bazuzi Jan 03 '09 at 18:48
  • 1
    It is indeed controversial, so I up. – tuinstoel Jan 04 '09 at 21:02
  • 6
    I agree completely, when will people see the light? I use Perl so I don't know how to write a function with more than one line of code, also, what is this "Refactor" thing you speak of? :-O – Robert Gamble Jan 05 '09 at 04:41
  • 3
    You must be a functional programmer... but one line per function is still a little extreme ;) – oz10 Jan 14 '09 at 03:10
  • I'm sorry this is nonsense. -1 from me – Friedrich Jan 15 '09 at 08:58
  • 23
    It's not controversial - it's inane. – Lawrence Dol Feb 19 '09 at 00:45
  • 1
    That depends on your definition of "line". For some methods even a single line is too much. – G B Jun 05 '09 at 16:55
  • No method I've ever written (as far as I recall) has just one line of code =) – Jader Dias Sep 24 '09 at 16:20
  • int screwYou() { printf("This is balls...\n"); } – Jasarien Oct 13 '09 at 21:40
  • Typically, when I write a *VOID* dummy method, just due to formatting conventions, it takes at least two lines. Non-void functions typically take three lines. Of course, like Kiv said, you can have 10.000 characters in a single line - so "lines" might not be the best metric for program size counting. – luiscubal Oct 15 '09 at 18:17
  • This is controversial because I do not think you can apply this type of statement to all languages. – atconway Sep 29 '10 at 14:44
  • @atconway: C++ fails, because you can't do anything useful in one statement. Perl fails because even one line is confusing. (To all: there is sanity behind this, but I was going for shock value.) – Jay Bazuzi Oct 03 '10 at 02:56