203

anti-pattern: there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:

  • Some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
  • A refactored solution that is clearly documented, proven in actual practice and repeatable.

Vote for the TDD anti-pattern that you have seen "in the wild" one time too many.
See the blog post by James Carr and the related discussion on the testdrivendevelopment Yahoo group.

If you've found an 'unnamed' one.. post 'em too. One post per anti-pattern please to make the votes count for something.

My vested interest is to find the top-n subset so that I can discuss 'em in a lunchbox meet in the near future.

Gishu
  • Aaron, you seem to be all over this one :) Would it be a good idea to add the tag-lines or slogans as comments so that we can have less scrolling.. what say? – Gishu Dec 02 '08 at 13:06
  • This is coming up rather well.. thanks guys n gals. Keep 'em coming.. one of the most informative SO posts IMHO – Gishu Dec 04 '08 at 04:59
  • +1 love this thread!!! And most of these are so true and prevalent too! – Chii Dec 05 '08 at 02:55
  • Nice thread, why is this community wiki though??? – Quibblesome Apr 17 '09 at 12:33
  • Coz it is kind of a poll - you wouldn't wanna be harvesting rep just coz you posted the most common type of anti-pattern ;) – Gishu Jun 20 '09 at 04:52
  • Most answers I see are **unit-testing anti-patterns** but not TDD anti-patterns. For example, [Happy Path](http://stackoverflow.com/a/333944/519334) is a QA anti-pattern but totally valid for TDD. In my opinion, TDD is to implement just enough to make it work, preferring the happy path and ignoring code coverage. So can we change the question title so that it better fits the answers :-) – k3b Sep 14 '13 at 12:34
  • @k3b - Agreed. Not anti-patterns for the test driven practice. – Gishu Jun 25 '14 at 06:19

31 Answers

70

Second Class Citizens - test code isn't refactored as thoroughly as production code; it contains a lot of duplicated code, making the tests hard to maintain.

Ilja Preuß
67

The Free Ride / Piggyback -- James Carr, Tim Ottinger
Rather than writing a new test case method to test another, distinct feature, a new assertion (and its corresponding actions, i.e. the Act steps from Arrange-Act-Assert) rides along in an existing test case.
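
A minimal sketch of how this tends to look (the Account class and its methods are hypothetical):

import org.junit.Test;
import static org.junit.Assert.*;

public class AccountTest {
    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.getBalance());

        // The Free Ride: a distinct feature (withdrawal) piggybacks on this
        // test instead of getting its own Arrange-Act-Assert test case.
        account.withdraw(40);
        assertEquals(60, account.getBalance());
    }
}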

Konerak
Gishu
  • Yeah, that's my favorite one. I do it all the time. Oh... wait... you said that this was a *bad* thing. :-) – guidoism Sep 10 '10 at 22:37
  • I'm not so sure this is an anti-pattern. All invariants must be `true` after every possible mutator call. So you will want to check that every invariant is `true` after every combination of mutator and input data that you are testing. But you will want to reduce duplication, and ensure you check *all* the invariants, including those that do not *currently* cause test failures. So you put them all in a `checkInvariants()` verification function and use that in every test. The code changes and another invariant is added. You put that in the function too, of course. But it is a freerider. – Raedwald Jan 27 '11 at 18:41
  • @Raedwald - Over time, the test name no longer matches all the things it tests. Also you have some thrashing due to intertwining tests; a failure does not point out the exact cause of failure. e.g. a canonical example of this test would read something like Opaque Superset of all Arrange steps >> Act >> Assert A >> Act some more >> Assert B >> Act some more >> Assert C. Now ideally if A and C are broken, you should see 2 test failures. With the above test, you'd see only one, then you fix A and on the next run, it'd tell you that now C is broken. Now imagine 5-6 distinct tests fused together.. – Gishu Jan 28 '11 at 11:03
  • "the test name no longer matches all the things it tests" Only if the test is named for the post condition that was originally present. If you name for the combination of method-name, set-up state and input data (method arguments), there is no problem. – Raedwald Jan 28 '11 at 16:38
  • "a failure does not point out the exact cause of failure" no *assertion* failure ever indicates the *cause* of a failure. That requires some delving into the implementation details: debugging for a regression failure, your knowledge of the development state for some TDD work. – Raedwald Jan 28 '11 at 16:43
  • "Arrange steps >> Act >> Assert A >> Act some more >> Assert B >> Act some more >> Assert C" you seem to be talking about a different kind of anti-pattern (called *greedy test*, IIRC) here, in which additional assertions **and actions** have been added. I'm dead against that anti-pattern. But this reply is about "a new assertion rides along in an existing test case". – Raedwald Jan 28 '11 at 16:45
  • @Raedwald - "greedy test" is what this post is about - the emphasis is on "orthogonal/new feature/functionality" and not on "an assertion". The other scenario is relatively rare. To prevent ambiguity, I'll update the post. In my experience, you can write tests that isolate a failure down to a specific source line(/method). Also I've moved to testing behavior (instead of methods) - leads to less brittle tests. – Gishu Jan 30 '11 at 04:15
64

Happy Path

The test stays on happy paths (i.e. expected results) without testing for boundaries and exceptions.

JUnit Antipatterns
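
A minimal sketch (the Calculator class is hypothetical), where only the expected case is ever exercised:

import org.junit.Test;
import static org.junit.Assert.*;

public class CalculatorTest {
    // The only test: clean input, expected output. Boundaries
    // (zero divisor, negative numbers, overflow) are never tested.
    @Test
    public void dividesTwoPositiveNumbers() {
        assertEquals(2, Calculator.divide(10, 5));
    }
}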

Potherca
Geoglyph
  • Cause: Either exaggerated time constraints or blatant laziness. Refactored solution: Get some time to write more tests to get rid of the false positives. The latter cause needs a whip. :) – Spoike Dec 15 '08 at 06:36
59

The Local Hero

A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.

The Hidden Dependency

Closely related to the local hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn’t populated, the test will fail and leave little indication to the developer what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.


Sadly seen this far too many times with ancient .dlls which depend on nebulous and varied .ini files which are constantly out of sync on any given production system, let alone extant on your machine without extensive consultation with the three developers responsible for those dlls. Sigh.
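
A sketch of a Local Hero (the ConfigLoader class and the path are hypothetical):

import org.junit.Test;
import static org.junit.Assert.*;
import java.io.File;

public class ConfigLoaderTest {
    // Passes only on the machine where this path happens to exist.
    @Test
    public void loadsDatabaseConfig() {
        File ini = new File("C:/dev/jim/project/db.ini");
        assertTrue(new ConfigLoader().load(ini).isValid());
    }
}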

Konerak
annakata
58

Chain Gang

A couple of tests that must run in a certain order, i.e. one test changes the global state of the system (global variables, data in the database) and the next test(s) depends on it.

You often see this in database tests. Instead of doing a rollback in teardown(), tests commit their changes to the database. Another common cause is that changes to the global state aren't wrapped in try/finally blocks which clean up should the test fail.
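
A self-contained sketch of the same problem using in-memory global state (the database variant looks identical, with commits in place of the static list):

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.ArrayList;
import java.util.List;

public class OrderProcessingTest {
    // Shared mutable state that survives from one test to the next.
    static List<String> processedOrders = new ArrayList<String>();

    @Test
    public void step1_processOrder() {
        processedOrders.add("ORDER-1"); // mutates global state, never cleaned up
        assertEquals(1, processedOrders.size());
    }

    @Test
    public void step2_archiveOrder() {
        // Fails if step1 didn't run first, or if the runner reorders tests.
        assertEquals("ORDER-1", processedOrders.get(0));
    }
}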

Konerak
Aaron Digulla
  • this one is just plain nasty.. Breaks the 'tests must be independent' notion. But I've read about it in multiple places.. guess 'popular TDD' is pretty messed up – Gishu Dec 02 '08 at 12:33
56

The Mockery
Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out what isn't being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn't even being tested at all; instead, data returned from mocks is what is being tested.

Source: James Carr's post.
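
A sketch using Mockito (PriceService and its collaborators are hypothetical); note that the assertion only verifies the value stubbed two lines earlier:

import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class PriceServiceTest {
    @Test
    public void calculatesFinalPrice() {
        TaxCalculator tax = mock(TaxCalculator.class);
        DiscountPolicy discount = mock(DiscountPolicy.class);
        when(tax.apply(100.0)).thenReturn(110.0);
        when(discount.apply(110.0)).thenReturn(99.0);

        PriceService service = new PriceService(tax, discount);

        // Every collaborator is faked; we are effectively testing
        // the mocks' return values, not the system under test.
        assertEquals(99.0, service.finalPrice(100.0), 0.001);
    }
}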

Konerak
Gishu
  • I believe the cause for this is that your class under test has way too many dependencies. Refactored alternative is to extract code that can be isolated. – Spoike Dec 15 '08 at 06:27
  • @Spoike; If you're in a layered architecture that really depends on the role of the class; some layers tend to have more dependencies than others. – krosenvold Dec 22 '08 at 11:51
  • I saw recently, in a respected blog, the creation of a mock entity set up to be returned from a mock repository. WTF? Why not just instantiate a real entity in the first place. Myself, I just got burned by a mocked interface where my implementation was throwing NotImplementedExceptions all around. – Thomas Eyde Jun 02 '09 at 16:24
40

The Silent Catcher -- Kelly?
A test that passes if an exception is thrown.. even if the exception that actually occurs is different from the one the developer intended.
See Also: Secret Catcher

[Test]
[ExpectedException(typeof(Exception))]
public void ItShouldThrowDivideByZeroException()
{
   // some code that throws another exception yet passes the test
}
Gishu
  • That one's tricky and dangerous (ie makes you think you tested code that always explodes every time it's run). That's why I try to be specific about both an exception class and something unique within the message. – Joshua Cheek Jan 20 '15 at 13:05
34

The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage, but knows so much about what is going on in the object that any attempt to refactor will break the existing test and require any change to be reflected in the unit test.


'how do I test my member variables without making them public... just for unit-testing?'
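
A sketch (the Order class is hypothetical) where the assertions are pinned to internal representation rather than observable behaviour:

import org.junit.Test;
import static org.junit.Assert.*;

public class OrderTest {
    @Test
    public void addLineItemPopulatesInternalCache() {
        Order order = new Order();
        order.addLineItem("SKU-1", 2);

        // getInternalCache() and isDirty() were exposed "just for unit-testing";
        // any refactoring of the internals breaks this test.
        assertEquals(1, order.getInternalCache().size());
        assertTrue(order.isDirty());
    }
}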

Gishu
  • Cause: Absurd reliance on white-box testing. There are tools for generating these kinds of tests, like Pex on .NET. Refactored solution: Test for behavior instead, and if you really need to check boundary values then let automated tools generate the rest. – Spoike Dec 15 '08 at 06:33
  • Before Moq came around, I had to abandon mocking frameworks in favor of handwriting my mocks. It was just too easy to tie my tests to the actual implementation, making any refactoring next to impossible. I can't tell the difference, other than with Moq, I rarely make these kinds of mistakes. – Thomas Eyde Jun 02 '09 at 16:29
34

Excessive Setup -- James Carr
A test that requires a huge setup in order to even begin testing. Sometimes several hundred lines of code are used to prepare the environment for one test, with several objects involved, which can make it difficult to really ascertain what is tested due to the “noise” of all of the setup going on. (Src: James Carr's post)
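
A condensed sketch (all names are hypothetical); imagine the setUp() running to several hundred lines:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

public class InvoiceGeneratorTest {
    Customer customer;
    Catalog catalog;
    TaxRegistry taxes;
    InvoiceGenerator generator;

    @Before
    public void setUp() {
        customer = new Customer("Alice", "DE");
        catalog = new Catalog();
        catalog.add(new Product("SKU-1", 10.0));
        taxes = new TaxRegistry();
        taxes.register("DE", 0.19);
        // ...dozens more lines wiring currencies, feature flags and
        // collaborators, most of it irrelevant to the assertion below...
        generator = new InvoiceGenerator(catalog, taxes);
    }

    @Test
    public void invoiceContainsOneLine() {
        assertEquals(1, generator.generate(customer, "SKU-1").lineCount());
    }
}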

Konerak
Gishu
  • I understand that excessive test setup usually points to a) poorly structured code or b) insufficient mocking, correct? – Topher Hunt Jun 24 '14 at 13:55
  • Well, every situation could be different. It could be due to high coupling. But usually it is a case of overspecification, specifying (mock expectations for) each and every collaborator in the scenario - this couples the test to the implementation and makes them brittle. If the call to the collaborator is an incidental detail to the test, it should not be in the test. This also helps in keeping the test short and readable. – Gishu Jun 25 '14 at 06:14
32

Anal Probe

A test which has to use insane, illegal or otherwise unhealthy ways to perform its task, like: reading private fields using Java's setAccessible(true), extending a class to access protected fields/methods, or having to put the test in a certain package to access package-global fields/methods.

If you see this pattern, the classes under test use too much data hiding.

The difference between this and The Inspector is that the class under test tries to hide even the things you need to test. So your goal is not to achieve 100% test coverage but to be able to test anything at all. Think of a class that has only private fields, a run() method without arguments and no getters at all. There is no way to test this without breaking the rules.
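
A sketch of the reflection workaround (BatchJob is a hypothetical class with only private fields and a no-arg run()):

import org.junit.Test;
import static org.junit.Assert.*;
import java.lang.reflect.Field;

public class BatchJobTest {
    @Test
    public void runProcessesAllRecords() throws Exception {
        BatchJob job = new BatchJob();
        job.run();

        // No getters, no return value: reflection is the only way
        // to observe anything at all.
        Field processed = BatchJob.class.getDeclaredField("processedCount");
        processed.setAccessible(true);
        assertEquals(42, processed.getInt(job));
    }
}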


Comment by Michael Borgwardt: This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.

Aaron Digulla: I kind of agree. Maybe this entry is really better suited for a "JUnit HOWTO" wiki and not an antipattern. Comments?

Kev
Aaron Digulla
  • isn't this the same as the Inspector? – Gishu Dec 02 '08 at 12:54
  • No, the inspector strives to achieve the utmost code coverage. This one here tries to test anything at all. Think of a class which has only private fields, a run() method without arguments and no getters at all. – Aaron Digulla Dec 02 '08 at 13:01
  • Hmm.. this line 'the class under test tries to hide even the things you need to test' indicates a power struggle between the class and the test. If it should be tested.. it should be publicly reachable somehow.. via class behavior/interface.. this somehow smells of breaching encapsulation – Gishu Dec 04 '08 at 05:17
  • This most often happens when you need to access some service from a third party API. Try to write a test for the Java Mail API or MQSeries which doesn't actually modify any data or need a running server ... – Aaron Digulla Dec 04 '08 at 08:17
  • npellow: Maven2 has a plugin for that, hasn't it? – Aaron Digulla Dec 04 '08 at 08:58
  • This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries. – Michael Borgwardt Jan 10 '09 at 14:51
  • @Michael: Yes, the antipattern here is exactly that the test should be testing externally visible behavior instead of poking into internals. Such tests frequently break when the SUT is refactored... Same as the Inspector. The test author is doing the easy thing instead of the right thing.. this anti-pattern is a deodorant to patch over the design smells of the code.. Over an extended period, you have a tangled mess of tests that are a pain to maintain. – Gishu Mar 21 '10 at 07:49
  • @Gishu: Still, sometimes you *cannot* do the right thing - for instance when, as I wrote, your test involves code that you don't control. – Michael Borgwardt Mar 22 '10 at 09:12
  • @Michael: Aah.. you're speaking for scenarios involving legacy code/third party code. This post (most of it) deals with greenfield TDD if I'm not mistaken. For legacy code, it might be ok (although I'd still try to fix the design if it's a 1-2 day effort). For third party code, you definitely should not be testing it. e.g. I'd not write unit tests for classes in the .net framework... in short you don't write tests for code that you don't control. What you might want to do there is write interface level tests so that you know if a new version of the dll breaks your code. – Gishu Mar 27 '10 at 08:29
  • IDK, it must have some sort of side effect. I'd test the side effect. Not sure what you mean about testing a third party API; I'd argue you should wrap that in your own code that you can test was used correctly, then integration test that code against the third party API. Wouldn't unit test third party code. – Joshua Cheek Jan 20 '15 at 13:13
26

The Test With No Name -- Nick Pellow

The test that gets added to reproduce a specific bug in the bug tracker, which its author thinks does not warrant a name of its own. Instead of enhancing an existing, lacking test, a new test is created called testForBUG123.

Two years later, when that test fails, you may need to first try and find BUG-123 in your bug tracker to figure out the test's intent.

Konerak
npellow
  • So true. Tho that is slightly more helpful than a test called "TestMethod" – NikolaiDante Dec 03 '08 at 14:12
  • unless the bugtracker changes, and you lose the old tracker and its issue identifiers... so PROJECT-123 no longer means anything.... – Chii Dec 05 '08 at 02:50
25

The Slow Poke

A unit test that runs incredibly slow. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day. (Src: James Carr's post)

a.k.a. the tests that won't get run as frequently as they should

Konerak
Gishu
  • Some tests run slowly by their very nature. If you decide to not run these as often as the others, then make sure that they at least run on a CI server as often as possible. – Chris Vest Dec 04 '08 at 09:57
  • This is an obvious question but what are the most general ways to fix this? – Topher Hunt Jun 24 '14 at 13:58
  • This initially seems beneficial, eh? – Kev Oct 21 '14 at 15:38
  • @TopherHunt Typically the tests are slow because they have some expensive dependency (ie filesystem, database). The trick is to analyze the dependencies until you see the problem, then push the dependency up the callstack. I wrote a case study where my students took their unit-test suite from 77 seconds to 0.01 seconds by fixing their dependencies: https://github.com/JoshCheek/fast_tests – Joshua Cheek Jan 20 '15 at 13:17
20

The Butterfly

You have to test something which contains data that changes all the time, like a structure which contains the current date, and there is no way to nail the result down to a fixed value. The ugly part is that you don't care about this value at all. It just makes your test more complicated without adding any value.

The bat of its wing can cause a hurricane on the other side of the world. -- Edward Lorenz, The Butterfly Effect
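
A sketch (the Receipt class is hypothetical) where the code under test calls new Date() internally, so the expected value can never be nailed down:

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.Date;

public class ReceiptTest {
    @Test
    public void receiptContainsTimestamp() {
        // Receipt.print() embeds new Date() internally; this assertion
        // fails whenever the clock ticks between the two calls.
        String text = new Receipt(9.99).print();
        assertTrue(text.contains(new Date().toString()));
    }
}

As the comments below suggest, the usual cure is to change the code under test so the clock (or other volatile value) can be pinned down or stripped out.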

Aaron Digulla
  • What is the anti-pattern here: What does a test like this look like? Is there a fix? Is there any arguable advantage to the code-under-test to factor out a dependency like `System.DateTime.Now`, besides having simpler or more deterministic unit tests? – Merlyn Morgan-Graham Mar 03 '13 at 22:36
  • In Java, an example would be to call `toString()` on an object which doesn't override the method. That will give you the ID of the object, which depends on the memory address. Or `toString()` contains the primary key of the object and that changes every time you run the test. There are three ways to fix this: 1. Change the code you're testing, 2. use regexps to remove the variable parts of the test results or 3. use powerful tools to override system services to make them return predictable results. – Aaron Digulla Mar 04 '13 at 10:07
  • The underlying cause for this anti-pattern is that the code under test doesn't care how much effort it might be to test it. So the whim of a developer is the wing of the butterfly which causes problems elsewhere. – Aaron Digulla Mar 04 '13 at 10:08
19

Wait and See

A test that runs some set up code and then needs to 'wait' a specific amount of time before it can 'see' if the code under test functioned as expected. A testMethod that uses Thread.sleep() or equivalent is most certainly a "Wait and See" test.

Typically, you may see this if the test is testing code which generates an event external to the system such as an email, an http request or writes a file to disk.

Such a test may also be a Local Hero since it will FAIL when run on a slower box or an overloaded CI server.

The Wait and See anti-pattern is not to be confused with The Sleeper.
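
A sketch of both flavours (AsyncMailer and its onSent() callback hook are hypothetical); the second test shows the deterministic alternative using a CountDownLatch, as suggested in the comments below:

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AsyncMailerTest {
    AsyncMailer mailer = new AsyncMailer();

    @Test
    public void waitAndSee() throws Exception {
        mailer.send("hi@example.com");
        Thread.sleep(5000); // hope five seconds is enough on every box...
        assertTrue(mailer.sentLog().contains("hi@example.com"));
    }

    @Test
    public void deterministicAlternative() throws Exception {
        final CountDownLatch sent = new CountDownLatch(1);
        mailer.onSent(new Runnable() {
            public void run() { sent.countDown(); }
        });
        mailer.send("hi@example.com");
        assertTrue("send never completed", sent.await(5, TimeUnit.SECONDS));
    }
}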

npellow
  • Hmm.. well I use something like this. how else would I be able to test multi-threaded code? – Gishu Dec 04 '08 at 04:57
  • @Gishu, do you really want to unit test multiple threads running concurrently? I try to just unit test whatever the run() method does in isolation. An easy way to do this is by calling run() - which will block, instead of start() from the unit test. – npellow Dec 04 '08 at 08:36
  • @Gishu use CountDownLatches, Semaphores, Conditions or the like, to have the threads tell each other when they can move on to the next level. – Chris Vest Dec 04 '08 at 10:07
  • An example: http://madcoderspeak.blogspot.com/2008/11/my-solution-for-unclebobs-mark-iv_08.html Brew button evt. The observer is polling at intervals and raising changed events.. in which case I add a delay so that the polling thread gets a chance to run before the test exits. – Gishu Dec 04 '08 at 10:49
  • I think the cartoon link is broken. – Andrew Grimm May 04 '10 at 23:23
19

The Flickering Test (Source : Romilly Cocking)

A test which just occasionally fails, not at specific times, and is generally due to race conditions within the test. Typically occurs when testing something that is asynchronous, such as JMS.

Possibly a superset of the 'Wait and See' and 'The Sleeper' anti-patterns.

The build failed, oh well, just run the build again. -- Anonymous Developer

  • @Stuart - a must see video describing this is "Car Stalled - Try Now!" http://www.videosift.com/video/Car-Stalled-Try-it-now-Classic-Kids-in-the-Hall-sketch This pattern could also be called "Try Now!", or just - "The Flakey Test" – npellow Dec 04 '08 at 09:53
  • I once wrote a test for a PRNG that ensured a proper distribution. Occasionally, it would fail at random. Go figure. :-) – Chris Vest Dec 04 '08 at 10:14
  • Wouldn't this be a *good* test to have? If a test ever fails, you need to track down the source of the problem. I fought with someone about a test which failed between 9p and midnight. He said it was random/intermittent. It was eventually traced to a bug dealing with timezones. Go figure. – Trenton Feb 23 '09 at 19:31
  • @Christian Vest Hansen: couldn't you seed it? – Andrew Grimm Jan 15 '10 at 23:21
  • @trenton It's only a good test to have if the developers can be bothered to track it down, instead of just ignoring it (which they can get away with, as it passes most of the time). – Will Sheppard May 10 '13 at 10:37
17

Inappropriately Shared Fixture -- Tim Ottinger
Several test cases in the test fixture do not even use or need the setup / teardown. Partly due to developer inertia: it's easier to just add one more test case to the pile than to create a new test fixture.

Gishu
16

The Giant

A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many many test cases. This can be an indicator that the system under test is a God Object (James Carr's post).

A sure sign of this one is a test that spans more than a few lines of code. Often, the test is so complicated that it starts to contain bugs of its own or flaky behavior.

Konerak
Gishu
15

I'll believe it when I see some flashing GUIs
An unhealthy fixation/obsession with testing the app via its GUI 'just like a real user'

Testing business rules through the GUI is a terrible form of coupling. If you write thousands of tests through the GUI, and then change your GUI, thousands of tests break.
Rather, test only GUI things through the GUI, and couple the GUI to a dummy system instead of the real system, when you run those tests. Test business rules through an API that doesn't involve the GUI. -- Bob Martin

“You must understand that seeing is believing, but also know that believing is seeing.” -- Denis Waitley

Aaron Digulla
Gishu
  • If you thought flashing GUIs was wrong: I saw someone who wrote a jUnit test that started up the GUI and needed user interaction to continue. It hung the rest of the test suite. So much for test automation! – Spoike Dec 15 '08 at 06:38
  • I disagree. Testing GUIs is hard, but they are also a source of errors. Not testing them is just lazy. – Ray Oct 07 '09 at 00:22
  • the point here is not that you shouldn't test GUIs, but rather that you shouldn't test only via the GUI. You can perform 'headless' testing without the GUI. Keep the GUI as thin as possible - use a flavor of MVP - you can then get away with not testing it at all. If you find that you have bugs cropping up in the thin GUI layer all the time, cover it with tests.. but most of the time, I don't find it worth the effort. GUI 'wiring' errors are usually easier to fix... – Gishu Oct 08 '09 at 01:59
  • @Spoike: Guided manual tests aren't bad, nor is using jUnit (or any other unit testing framework) to drive automated testing that aren't unit tests. You just shouldn't put those in the same project, nor treat them like unit tests (e.g. run constantly, or after every build). – Merlyn Morgan-Graham Mar 03 '13 at 22:56
  • @MerlynMorgan-Graham I agree, and I didn't mean that you shouldn't test the GUI. The conviction held by team members that it was OK to mix guided manual tests with automatic ones was disturbing me. I found out later it was an excellent way to get everyone who is not used to TDD to stop using it. I find that mixing functional tests (which are volatile) with unit tests (which are supposed to be stable) is bad if you want to follow the TDD process. – Spoike Mar 04 '13 at 09:52
14

The Sleeper, aka Mount Vesuvius -- Nick Pellow

A test that is destined to FAIL at some specific time and date in the future. This is often caused by incorrect bounds checking when testing code which uses a Date or Calendar object. Sometimes, the test may fail if run at a very specific time of day, such as midnight.

'The Sleeper' is not to be confused with the 'Wait And See' anti-pattern.

That code will have been replaced long before the year 2000 -- Many developers in 1960
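
A sketch (the License class is hypothetical): the "far future" date was fine when the test was written, and the test erupts when that date arrives:

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.Calendar;
import java.util.Date;

public class LicenseTest {
    @Test
    public void licenseIsStillValid() {
        Calendar cal = Calendar.getInstance();
        cal.set(2015, Calendar.JANUARY, 1); // "far future" when written in 2008
        Date expiry = cal.getTime();

        // Passes until 2015-01-01, then FAILs forever after.
        assertTrue(new License(expiry).isValidOn(new Date()));
    }
}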

npellow
  • I'd rather call this a dormant Volcano :).. but I know what you're talking about.. e.g. a date chosen as a future date for a test at the time of writing will become a present/past date when that date goes by.. breaking the test. Could you post an example.. just to illustrate this. – Gishu Dec 04 '08 at 04:55
  • @Gishu - +1 . I was thinking the same, but couldn't decide between the two. I updated the title to make this a little clearer ;) – npellow Dec 04 '08 at 08:33
11

The Dead Tree

A test where a stub was created, but the actual test was never written.

I have actually seen this in our production code:

import junit.framework.TestCase;

class TD_SomeClass extends TestCase {
  public void testAdd() {
    assertEquals(1+1, 2);  // asserts a tautology; the real test was never written
  }
}

I don't even know what to think about that.

Reverend Gonzo
  • :) - also known as Process Compliance Backdoor. – Gishu Nov 11 '09 at 05:03
  • We had an example of this recently in a test and method-under-test that had been refactored repeatedly. After a few iterations, the test became a call to the method-under-test. And because the method now returned void, there weren't any assertions to be asserted. So basically, the test was just making sure the method didn't throw an exception. Didn't matter if it actually did anything useful or correctly. I found it in code review and asked, "So ... what are we even testing here?" – Marvo Feb 28 '15 at 23:22
11

got bit by this today:

Wet Floor:
The test creates data that is persisted somewhere, but the test does not clean up when finished. This causes tests (the same test, or possibly other tests) to fail on subsequent test runs.

In our case, the test left a file lying around in the "temp" dir, with permissions from the user that ran the test the first time. When a different user tried to test on the same machine: boom. In the comments on James Carr's site, Joakim Ohlrogge referred to this as the "Sloppy Worker", and it was part of the inspiration for "Generous Leftovers". I like my name for it better (less insulting, more familiar).
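
A sketch of the fix using JUnit's TemporaryFolder rule (mentioned in the comments below); the Exporter class is hypothetical:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import static org.junit.Assert.*;
import java.io.File;

public class ExporterTest {
    // Wet Floor version: new File(System.getProperty("java.io.tmpdir"), "export.csv")
    // gets written and never deleted, tripping up the next run (or the next user).

    // Dry version: the rule creates a fresh directory per test and
    // deletes it afterwards, whoever runs the test.
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void exportWritesCsv() throws Exception {
        File out = tmp.newFile("export.csv");
        new Exporter().writeTo(out);
        assertTrue(out.length() > 0);
    }
}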

Zac Thompson
  • You can use junit's temporary-folder-rule to avoid wet floors. – DaveFar Aug 28 '11 at 17:19
  • This kind of relates to a Continuous Integration anti-pattern. In CI, every developer should have his/her own work space and resources, and the build machine should be its own environment as well. Then you avoid things like permission problems (or maybe you end up hiding them so that they only turn up in production.) – Marvo Feb 28 '15 at 23:13
11

The Cuckoo -- Frank Carver
A unit test which sits in a test case with several others, and enjoys the same (potentially lengthy) setup process as the other tests in the test case, but then discards some or all of the artifacts from the setup and creates its own.
Advanced Symptom of : Inappropriately Shared Fixture

Gishu
10

The Secret Catcher -- Frank Carver
A test that at first glance appears to be doing no testing, due to the absence of assertions. But "the devil is in the details".. the test is really relying on an exception to be thrown, and expecting the testing framework to capture the exception and report it to the user as a failure.

[Test]
public void ShouldNotThrow()
{
   DoSomethingThatShouldNotThrowAnException();
}
Gishu
  • This can in fact be a valid test, in my opinion - especially as a regression test. – Ilja Preuß Dec 02 '08 at 14:39
  • sorry, again got this confused with the Silent Catcher... unit tests should state intent clearly about what is being tested rather than saying 'this should work'.. (+1; something is better than nothing, esp if you're in legacy regression country) – Gishu Dec 03 '08 at 06:35
  • In these kinds of tests, I at least catch the Exception and assign it to a variable. Then I assert for not null. – Thomas Eyde Jun 03 '09 at 09:50
  • Some frameworks have an `Assert.DoesNotThrow(SomeDelegateType act)` style assertion that can be used specifically in cases like this. I find this less gross than having a test case that succeeds when a constructor returns non-null, but fails when the constructor throws. A constructor will never return null. (Note: only applies to languages where a constructor is guaranteed to return non-null) – Merlyn Morgan-Graham Mar 03 '13 at 22:52
10

The Forty Foot Pole Test

Afraid of getting too close to the class they are trying to test, these tests act at a distance, separated by countless layers of abstraction and thousands of lines of code from the logic they are checking. As such they are extremely brittle, and susceptible to all sorts of side-effects that happen on the epic journey to and from the class of interest.

Konerak
10

The Turing Test

A testcase automagically generated by some expensive tool that has many, many asserts gleaned from the class under test using some too-clever-by-half data flow analysis. Lulls developers into a false sense of confidence that their code is well tested, absolving them from the responsibility of designing and maintaining high quality tests. If the machine can write the tests for you, why can't it pull its finger out and write the app itself!

Hello stupid. -- World's smartest computer to new apprentice (from an old Amiga comic).

Aaron Digulla
10

The Environmental Vandal

A 'unit' test which for various 'requirements' starts spilling out into its environment, using and setting environment variables / ports. Running two of these tests simultaneously will cause 'unavailable port' exceptions etc.

These tests will be intermittent, and leave developers saying things like 'just run it again'.

One solution I've seen is to randomly select a port number to use. This reduces the possibility of a conflict, but clearly doesn't solve the problem. So if you can, always mock the code so that it doesn't actually allocate the unsharable resource.
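
A sketch of the collision (StatusServer and its fetch() helper are hypothetical):

import org.junit.Test;
import static org.junit.Assert.*;

public class StatusServerTest {
    @Test
    public void serverReportsOk() throws Exception {
        // Fixed port: a second build on the same CI box gets
        // "Address already in use" and fails intermittently.
        StatusServer server = new StatusServer(8080);
        server.start();
        try {
            assertEquals("OK", server.fetch("/status"));
        } finally {
            server.stop(); // at least release the port if the test fails
        }
    }
}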

Aaron Digulla
gcrain
  • @gcrain.. tests should be deterministic. IMO a better approach would be to use a 'well-known-in-the-team' port for testing and cleanup before and after the test correctly such that it's always available... – Gishu Dec 04 '08 at 06:51
  • @Gishu - the problem is not that there are no setup() and teardown() methods to handle using these ports. The problem is, for example, running a CI server where multiple versions of the test run at the same time, attempting to use the same hardcoded-in-the-test port numbers – gcrain Dec 04 '08 at 22:37
9

Doppelgänger

In order to test something, you have to copy parts of the code under test into a new class with the same name and package and you have to use classpath magic or a custom classloader to make sure it is visible first (so your copy is picked up).

This pattern indicates an unhealthy amount of hidden dependencies which you can't control from a test.

I looked at his face ... my face! It was like a mirror but made my blood freeze.

Aaron Digulla
7

The Test It All

I can't believe this hasn't been mentioned till now, but tests should not break the Single Responsibility Principle.

I have come across this so many times, tests that break this rule are by definition a nightmare to maintain.

thegreendroid
7

The Mother Hen -- Frank Carver
A common setup which does far more than the actual test cases need. For example creating all sorts of complex data structures populated with apparently important and unique values when the tests only assert for presence or absence of something.
Advanced Symptom of: Inappropriately Shared Fixture

I don't know what it does ... I'm adding it anyway, just in case. -- Anonymous Developer

Gishu
6

Line hitter

At first glance, the tests cover everything and the code coverage tools confirm it with 100%, but in reality the tests only hit the code without analysing any of its output.

coverage-vs-reachable-code
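
A sketch (the Parser class is hypothetical): every line executes, coverage reports 100%, and nothing is ever asserted:

import org.junit.Test;

public class ParserTest {
    @Test
    public void parseAll() {
        Parser parser = new Parser();
        // Hits every branch of parse(), but never examines the result:
        // the coverage tool is happy, the behaviour is unverified.
        parser.parse("{\"a\":1}");
        parser.parse("");
        parser.parse(null);
    }
}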

Rrr
0

The Conjoined Twins

Tests that people call "unit tests" but that are really integration tests, since they are not isolated from dependencies (file configuration, databases, services - in other words, the parts not being tested that people got lazy about and did not isolate) and fail due to dependencies that should have been stubbed or mocked.

PositiveGuy