
I'm trying to clean up our functional suite at work, and I was wondering if there is a way to have Cucumber repeat a scenario and see if it passes before moving on to the next scenario in the feature. PhantomJS is my headless WebKit browser and Poltergeist is my driver.

Basically, our build keeps failing because the box gets overwhelmed by all the tests, and during a scenario the page won't have enough time to render whatever it is we're trying to test. This produces a false positive. I know of no way to anticipate which test will hang up the build.

What would be nice is to have a hook (one idea) that runs after each scenario. If the scenario passes, great: print the results for that scenario and move on. However, if the scenario fails, try running it again just to make sure it isn't the build getting dizzy. Then, and only then, print the results for that scenario and move on to the next test.

Does anyone have any idea on how to implement that?

I'm thinking something like

    After do |scenario|
      if scenario.failed?
        result = scenario.run_again # I just made this function up; I know for a fact it doesn't actually exist (see http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html)
        if !result
          Cucumber.wants_to_quit = true
        end
      end
    end
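
Since nothing like `run_again` exists on the scenario object, the closest thing I can see doing inside a hook is recording which scenarios failed so that a second pass can retry them. A rough sketch (the file name is made up, and I'm assuming `scenario.file_colon_line` is available on the Ast scenario, which is my reading of the docs):

    # Sketch only: append each failing scenario's location to a file that a
    # second cucumber run could be pointed at, similar in spirit to the rerun
    # formatter. The file name is made up.
    After do |scenario|
      if scenario.failed?
        File.open('flaky_candidates.txt', 'a') do |f|
          f.puts scenario.file_colon_line # e.g. features/login.feature:12
        end
      end
    end

That is basically hand-rolling the rerun formatter, though, which brings me to the option below.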

The initial solution I saw for this was: How to rerun the failed scenarios using Cucumber?

This would be fine, but I would need to make sure that

    cucumber @rerun.txt

actually corrected the reports if the tests passed, like:

    cucumber @rerun.txt --format junit --out foo.xml

where foo.xml is the junit report that initially said features 1, 2 & 5 were passing while 3 and 4 were failing, but would now say 1, 2, 3, 4 & 5 are passing even though rerun.txt only said to rerun 3 and 4.
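
If nothing corrects the report out of the box, I suppose I could post-process it myself after the rerun: take the junit output from the second pass and patch the original foo.xml so that anything that passed the second time around is no longer marked as a failure. A rough, untested sketch of what I have in mind (the file names are placeholders, and it assumes each run produced a single junit XML file; if the formatter writes one file per feature, you'd loop over the directory instead):

    # Rough sketch: merge rerun results back into the original junit report.
    # File names are placeholders; adjust to match your formatter output.
    require 'rexml/document'

    original = REXML::Document.new(File.read('foo.xml'))
    rerun    = REXML::Document.new(File.read('foo_rerun.xml'))

    # Testcases that passed on the rerun have no <failure>/<error> children.
    passed_on_rerun = {}
    rerun.elements.each('//testcase') do |tc|
      key = [tc.attributes['classname'], tc.attributes['name']]
      passed_on_rerun[key] = tc.elements['failure'].nil? && tc.elements['error'].nil?
    end

    # Strip failure/error elements from the matching testcases in the original.
    original.elements.each('//testcase') do |tc|
      key = [tc.attributes['classname'], tc.attributes['name']]
      next unless passed_on_rerun[key]
      tc.delete_element(tc.elements['failure']) while tc.elements['failure']
      tc.delete_element(tc.elements['error']) while tc.elements['error']
    end

    # Recompute the suite-level failure count so the build sees corrected totals.
    original.elements.each('//testsuite') do |suite|
      failures = suite.elements.to_a('testcase').count do |tc|
        tc.elements['failure'] || tc.elements['error']
      end
      suite.attributes['failures'] = failures.to_s
    end

    File.open('foo_fixed.xml', 'w') { |f| original.write(f, 2) }

But I'd rather not maintain that myself if something already exists.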

  • Sounds like you have a load-capacity concern here, to be truthful. If your test env gets overwhelmed by a few Cucumber tests, what chance does the real environment have if multiple users happen to hit it at the same time? Maybe the focus should be on either a more robust test server that is more like production, or on addressing the issue of site performance under what is likely a pretty modest load (this is, after all, functional testing, not load testing; how many tests are running at the same time?) – Chuck van der Linden Mar 11 '14 at 19:13
  • @mpdunson Did you happen to find a way to do it? I basically know the `cucumber re-run` command, but that wouldn't work for me. The tests I have are dependent on each other due to the way the application is designed, so all my tests fail if the second test case fails for some reason. I was looking for a way, similar to yours, to run a particular scenario until it passes before going to the next one, instead of using an `until` or `unless` loop in the steps. Any idea on it? – Emjey May 31 '17 at 08:27

1 Answer


I use rerun extensively, and yes, it does output the correct features into the rerun.txt file. I have a cucumber.yml file that defines a bunch of "profiles". Note the rerun profile:

    <%
    rerun = File.file?('rerun.txt') ? IO.read('rerun.txt') : ""
    rerun_opts = rerun.to_s.strip.empty? ? "--format #{ENV['CUCUMBER_FORMAT'] || 'progress'} features" : "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} #{rerun}"
    %>

    <% standard_opts = "--format html --out report.html --format rerun --out rerun.txt --no-source --format pretty --require features --tags ~@wip" %>
    default: <%= standard_opts %> --no-source --format pretty --require features

    rerun: <%= rerun_opts %> --format junit --out junit_format_rerun --format html --out rerun.html --format rerun --out rerun.txt --no-source --require features

    core: <%= standard_opts %> --tags @core
    jenkins: <%= standard_opts %> --tags @jenkins
So what happens here is that I run cucumber. During the initial run, it'll throw all the failed scenarios into the rerun.txt file. Afterwards, I rerun only the failed tests with the following command:

    cucumber -p rerun

The only downside to this is that it requires an additional command (which you can automate, of course) and that it clutters up your test metrics if you have them in place.
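
If you'd rather not invoke the second command by hand, one way to automate it is a small Rake task that runs the default profile and, only if it failed and left something in rerun.txt, runs the rerun profile. This is just a sketch (the task name is made up):

    # Rakefile (sketch): run cucumber, then retry whatever landed in rerun.txt.
    desc "Run cucumber with an automatic rerun of failed scenarios"
    task :cucumber_with_rerun do
      # First pass: the default profile writes failed scenarios to rerun.txt.
      first_pass = system("cucumber -p default")

      unless first_pass
        if File.file?("rerun.txt") && !File.read("rerun.txt").strip.empty?
          # Second pass: only the scenarios listed in rerun.txt.
          system("cucumber -p rerun") or abort("Scenarios still failing after the rerun")
        else
          abort("cucumber failed without producing rerun.txt")
        end
      end
    end

Jenkins then only needs to call `rake cucumber_with_rerun`, and the exit status reflects the rerun rather than the first pass.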

Whitney Imura
  • Thank you for your answer. I've just upvoted it for now. I'll try it and accept the answer after it works. – mpdunson Feb 03 '14 at 15:44
  • One problem is our build doesn't see some of the previous successes. When it writes to junit_format_rerun, it overwrites what was previously in there. So unless all the features in a feature file pass on the same run, some of the information is lost. Is there an append formatter in Cucumber, or a third-party library with a Cucumber formatter intelligent enough to know where to place the successful features? I could write one and place it on GitHub, but I thought someone else may have already faced this issue. – mpdunson Feb 12 '14 at 15:41
  • Also, how do you deal with skipped features? Those don't seem to get logged in rerun.txt. – mpdunson Feb 12 '14 at 15:43
  • Maybe I'm not understanding your question correctly, but why would you want to log the skipped features in the rerun? If the feature has a skipped tag, it will never get retried (and thus, won't get logged in rerun.txt). – Whitney Imura Feb 12 '14 at 16:52
  • You're right regarding the previous successes. I'm not sure if there is something to handle this already, so you'd have to investigate that one. However, why do you want to see/log your previous successes (outside of the terminal)? The goal of rerun is just to log the failed scenarios so that it can rerun them. – Whitney Imura Feb 12 '14 at 16:54
  • Thanks for the timely response. My Jenkins build runs the features, and now I have it rerun the failed scenarios to make sure they're not flaky failures. Some of the features are skipped, not because of a tag, but because of Cucumber. However, I haven't been able to reproduce this lately, so maybe I'm mistaken and all the tests are being run. What's most important to me is that Jenkins passes if the rerun passes. This isn't happening even though I get to zero failures. However, this is more of a Jenkins problem. I want to log previous successes so Jenkins reports the full number of tests that ran. – mpdunson Feb 12 '14 at 17:34
  • Currently it says something like 20 features ran & passed (no failures) even though there were 170. This is because by the nth rerun there are only 1 or 2 failures left per feature. So even though all 40 features in a feature file passed, only 2 of them are getting documented in junit_format_rerun. – mpdunson Feb 12 '14 at 17:40
  • Oh yes, I forgot about the cases on the nth rerun. I can't think of anything off the top of my head that can do what you're asking, but now that you mention it, I'd like to have that info, too. I'll look into it and post what I find/write when I can. If you decide to do the same, that'd be great! – Whitney Imura Feb 12 '14 at 18:22
  • This does not answer the question, as the initial question indicates @mpdunson knows how to use the Cucumber rerun functionality but wants to know how to merge the results generated in the rerun with those from the initial execution, which is why I am here as well. – Ransom Aug 20 '14 at 15:23