
I was wondering if anyone had any idea about how one would go about re-running failed JUnit tests within the same run. For example, tests 1-5 are run and all pass; then test 6 is run and fails the first time. It would then automatically be run a second time before moving on to test 7. I am using an Ant script that runs all of my tests. The tests are run on a Hudson box, if that helps at all. I read about being able to select the failed tests and put them in a new file where they are run the second time the suite is run, but that's not really what I am looking for.

Any help or pointers in the right direction would be helpful. Thank you.

<!-- ============================= -->
<!--   target: test-regression-all -->
<!-- ============================= -->
<target name="test-regression-all" description="Runs all tests tagged as regression" depends="compile">
    <mkdir dir="${target.reports.dir}"/>
    <junit printsummary="yes" haltonerror="no" haltonfailure="no" fork="yes"
               failureproperty="junit.failure" errorproperty="junit.error" showoutput="true">           
        <formatter type="xml"/>
        <classpath>
            <pathelement location="${target.build.classes.dir}"/>
            <path refid="classpath"/>
        </classpath>
        <batchtest todir="${target.reports.dir}">
           <fileset dir="${src.dir}">
              <include name="emailMarketing/AssetLibrary/*.java" />
              <include name="emailMarketing/attributes/*.java" />
              <include name="emailMarketing/contacts/*.java" />
              <include name="emailMarketing/DomainKeys/*.java" />
              <include name="emailMarketing/lists/*.java" />
              <include name="emailMarketing/messages/*.java" />
              <include name="emailMarketing/Segments/*.java" />
              <include name="emailMarketing/UploadContact/*.java" />
              <exclude name="emailMarketing/lists/ListArchive.java"/>
              <exclude name="emailMarketing/messages/MessageCreation.java" />
           </fileset>
        </batchtest>
        <jvmarg value="-Duser=${user}"/>
        <jvmarg value="-Dpw=${pw}"/>
        <jvmarg value="-Dbrowser=${browser}"/>
        <jvmarg value="-Dserver=${server}"/>
        <jvmarg value="-Dopen=${open}"/>
        <jvmarg value="-DtestType=regression"/>
    </junit>
    <junitreport todir="${target.reports.dir}">
        <fileset dir="${target.reports.dir}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report todir="${target.reports.dir}"/>
    </junitreport>
    <fail if="junit.failure" message="Test(s) failed.  See reports!"/>
    <fail if="junit.error" message="Test(s) errored.  See reports!"/>
</target>
John Miller
    But why? If it fails the first time, I would expect it to fail on all subsequent runs. If its output changes randomly, you probably need a more deterministic test. – Rob Hruska Oct 11 '11 at 19:23
  • you can create rule as it's described in the http://stackoverflow.com/questions/8295100/how-to-re-run-failed-junit-tests-immediately – user1459144 Apr 10 '13 at 22:40
  • @RobHruska I'd like to rerun failures-only because my log has debug messages, and it's nicer if (1) it's smaller (2) only contains relevant info. – djeikyb Aug 06 '13 at 20:18

4 Answers


Take a look at the Ant Retry task.

<target name="myTest1">
    <mkdir dir="${junit.output.dir}" />
    <retry retrycount="3">
        <junit haltonerror="yes" haltonfailure="yes"
               fork="no" printsummary="withOutAndErr"
               showoutput="true" tempdir="c:/tmp">
            <formatter type="xml" />
            <formatter type="brief" usefile="false" />
            <test name="MyPackage.myTest1" todir="${junit.output.dir}" />
            <classpath refid="Libs.classpath" />
        </junit>
    </retry>
</target>
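Applied to the batchtest target from the question, the same idea would be to wrap the whole <junit> call in <retry> and switch to haltonfailure="yes"/haltonerror="yes", since the retry task only re-runs its nested task when that task throws a build exception. This is a sketch, reusing the property and path names from the question; note that retry re-runs the entire <junit> invocation (all tests in the batch), not just the one test that failed:

```xml
<retry retrycount="2">
    <junit haltonfailure="yes" haltonerror="yes"
           fork="yes" printsummary="yes" showoutput="true">
        <formatter type="xml"/>
        <classpath>
            <pathelement location="${target.build.classes.dir}"/>
            <path refid="classpath"/>
        </classpath>
        <batchtest todir="${target.reports.dir}">
            <fileset dir="${src.dir}" includes="emailMarketing/**/*.java"/>
        </batchtest>
    </junit>
</retry>
```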
Rob Foster

Tests should be deterministic, such that errors are reproducible. Hence immediately rerunning a failed test will fail again.

Tests should be independent, i.e. each one should perform its own setup (and teardown). With JUnit, you usually do not have a specific order in which the tests are executed. Hence it is not necessary to rerun test 6 to set up the environment for test 7.

If you want test case prioritization, i.e. starting with the previously failed tests when rerunning after a code fix, that is a different problem from automatically retrying within the same run.

DaveFar
    Just because they are deterministic doesn't mean you should have to rerun 1000 tests if only one of them failed. As for independence, only unit tests should be independent, it's actually very useful to have dependencies for functional tests. – Cedric Beust Oct 12 '11 at 13:50
  • Immediately rerunning (i.e. without changing any code) any number of tests does not make sense ;) But thanks for the comment - I tend to put on my unit test hat when reading "junit". ...not all testing frameworks can cover the whole span of functional tests as well as your TestNG ;) – DaveFar Oct 12 '11 at 14:05
    The reason I was looking for this is because currently our test suite is running over 300+ UI tests with Sikuli. All tests can run on their own, but sometimes they fail for random reasons. For example, after logging in the web app does not load fast enough (60s), so it times out and gives a false fail. The test is failing not for the reason that is being tested, but from poor performance from the web app, internet, and the Hudson box. This is why I was wondering if it was possible to try to remove the false UI failures that are just the app loading too slowly. Thank you for the quick responses. – John Miller Oct 12 '11 at 17:20
  • IC. For unit tests, I'd consider using test doubles. But it sounds as if you are doing system testing with junit. Right? – DaveFar Oct 12 '11 at 18:43
    Correct. These are just high-impact regression tests that are running in this suite. We are only using JUnit to assert that the image we are looking for is there, using the Sikuli library. For example, creating a message and clicking send, then asserting that the popup box is there saying that the message was sent. I might just have to break up the test suites so only 60-90 are run at a time. – John Miller Oct 12 '11 at 18:52
  • I might just have to put this one on the back burner now. – John Miller Oct 12 '11 at 19:01
  • DaveBall: Actually, I do that pretty often: run testng.xml then run testng-failed.xml in debug mode. It makes it very easy to find out why certain tests failed. – Cedric Beust Oct 14 '11 at 05:27
  • 1
    JohnMiller: transient failures are a huge problem that you should do your best to eliminate. Once you start thinking that certain failures are okay, they will end up masking an important failure one day. Tests that fail transiently don't add anything to your test reports, so you should either fix them or just remove them until you can write better ones. – Cedric Beust Oct 14 '11 at 05:29

Ant 1.8.0 has a "failure" formatter for the junit task, which can be used to create a TestCase that contains just the failed tests. See http://ant.apache.org/manual/Tasks/junit.html:

The fourth formatter named failure (since Ant 1.8.0) collects all failing testXXX()
methods and creates a new TestCase which delegates only these failing methods.
The name and the location can be specified via Java System property or Ant property
ant.junit.failureCollector. The value has to point to the directory and the name of
the resulting class (without suffix). It defaults to java-tmp-dir/FailedTests.
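A sketch of how the two runs might be wired together, based on the manual text quoted above. The property name ant.junit.failureCollector and the default class name FailedTests come from the manual; the target names and the ${target.reports.dir}/${src.dir}/classpath properties are placeholders reused from the question:

```xml
<!-- First run: the "failure" formatter collects failing test
     methods into a generated FailedTests class. -->
<property name="ant.junit.failureCollector"
          value="${target.reports.dir}/FailedTests"/>

<target name="test-all">
    <junit haltonfailure="no" fork="yes">
        <classpath>
            <pathelement location="${target.build.classes.dir}"/>
            <path refid="classpath"/>
        </classpath>
        <formatter type="failure"/>
        <formatter type="xml"/>
        <batchtest todir="${target.reports.dir}">
            <fileset dir="${src.dir}" includes="**/*.java"/>
        </batchtest>
    </junit>
</target>

<!-- Second run: compile the collected FailedTests class and
     execute only the failing methods it delegates to. -->
<target name="test-failed">
    <javac srcdir="${target.reports.dir}"
           destdir="${target.build.classes.dir}"
           classpathref="classpath"/>
    <junit fork="yes">
        <classpath>
            <pathelement location="${target.build.classes.dir}"/>
            <path refid="classpath"/>
        </classpath>
        <test name="FailedTests" todir="${target.reports.dir}"/>
    </junit>
</target>
```

This reruns the failures in a second junit invocation rather than immediately after each failure, which, as noted in the question, is only an approximation of what was asked for.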
mgaert
  • I'm running into the same problem described in http://mail-archives.apache.org/mod_mbox/ant-user/201112.mbox/%3CA041ABC995E3144794836E93D9612B98053E0A79@MBX1-LWL.litle.com%3E#archives. – Noel Yap Mar 29 '13 at 18:57

Would this work?

http://stefan.samaflost.de/blog/en/Apache/Ant/rerun_tests_that_failed.html

Janne Aukia