Given a test failure, I have to extract all code statements that the test execution ran.

Let's say unit test 1 has failed. I have to extract all the code it executed. For example:

public class Driver {

  void method1() {
  }

  void method2() {
  }

  void method3() {
  }

  public static boolean TakeScreenshot(boolean flag) {
     statement1;
     statement2;

     if (flag) {
       statement_inside_flag;
       return true;
     }

     statement100;
     return false;
  }
}
@Test
public void TestThings() {
    boolean result = Driver.TakeScreenshot(true);
    Assert.assertTrue(result);
}

Is there an easy way I can do it using an open-source tool?

I want to extract the body of the code under test, i.e. in this case the output would be as follows. Some lines in TakeScreenshot were not executed because of the early return:

public class Driver {
  public static boolean TakeScreenshot(boolean flag) {
     statement1;
     statement2;

     if (flag) {
       statement_inside_flag;
       return true;
     }
  }
}
  • What does it mean to "extract the codes it executed"? Do you want to print the source code of the test into a file? Do you want to print the call stack that was performed by the test? Or something else? – Matteo NNZ May 19 '22 at 19:39
  • For what purpose do you want to do this extraction? What are you trying to achieve? – ndc85430 May 19 '22 at 19:45
  • @MatteoNNZ example added. – Exploring May 19 '22 at 19:46
  • @ndc85430 I have to build an automated tool that takes a failed test as input and outputs all the lines that led up to that failure. – Exploring May 19 '22 at 19:47
  • What is the purpose of the tool? The task you have is far from simple... – Matteo NNZ May 19 '22 at 19:48
  • The tool's purpose is to build a `dataset` for training a program repair `model`. (https://arxiv.org/abs/1901.01808) – Exploring May 19 '22 at 19:50
  • I think you want some sort of code coverage on the unit tests. For that, one well-known framework is [Jacoco](https://www.baeldung.com/jacoco). However, Jacoco creates a report for all tests, not only the ones which are red. And also, it doesn't "report the code" exactly as you wish. You can probably work your solution around it (to avoid the very complex task of rebuilding the call stack), but I don't understand well for which purpose, and the sentence "dataset/model" has a mysterious meaning to me... – Matteo NNZ May 19 '22 at 19:51
  • The model is a neural machine translation model. Details here: `SequenceR: Sequence-to-Sequence Learning for End-to-End Program Repair` – Exploring May 19 '22 at 19:53
  • @MatteoNNZ I suppose an alternate approach could be static analysis using control and data flow. But tools like Soot and Wala are very flaky and support is limited. – Exploring May 19 '22 at 19:55
  • Ah ok, just saw the link you added. So my understanding is that you want to build a tool that runs the unit tests, then extracts the failed code as text and feeds it to SequenceR so that the latter can learn where the bugs are? – Matteo NNZ May 19 '22 at 19:55
  • Yeah - but the major headache is extracting the dataset. Would love to hear your thoughts. – Exploring May 19 '22 at 19:57
  • The (very high-level) suggestion I have is: run your tests with Jacoco coverage. Then scan the Surefire reports to see the names of the tests that failed, then scan the Jacoco reports and see if, by some chance, you're able to retrieve information about the code that was called inside the coverage report. But again, your task is far from simple, and I don't even want to imagine the complexity of the code that "learns from bugs" to self-fix them, it sounds like science fiction :O – Matteo NNZ May 19 '22 at 19:59
  • @MatteoNNZ my task is to apply the SequenceR model and evaluate the result. I don't think SequenceR would work that great. But as I mentioned my task is just to do the evaluation. – Exploring May 19 '22 at 20:03
  • @MatteoNNZ But even doing the evaluation seems like a very hard task, as building the dataset is very difficult. :-( I will try what you suggested, though. – Exploring May 19 '22 at 20:04
  • Yes, I see that, but your task is pretty complex. If I were you, I'd go with that solution: JUnit run => produces XML files in target with the names of the failed tests. Jacoco coverage run => produces XML files in target with the names and the code coverage by each test. It sounds like a very nice, and very hard, task you've got... – Matteo NNZ May 19 '22 at 20:05
  • Thanks for the feedback, I'll approach it accordingly. – Exploring May 19 '22 at 20:07
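
Following the Jacoco/Surefire route suggested in the comments above, here is a rough, untested sketch of what such an extraction step might look like. It assumes the default Maven report locations (target/surefire-reports/TEST-*.xml for Surefire, target/site/jacoco/jacoco.xml for the jacoco-maven-plugin report goal); the class name FailedTestCoverage and the hard-coded source file name Driver.java are only illustrative.

import java.nio.file.*;
import java.util.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;

public class FailedTestCoverage {

    public static void main(String[] args) throws Exception {
        // 1. Collect the names of failed tests from the Surefire XML reports.
        List<String> failed = new ArrayList<>();
        try (DirectoryStream<Path> reports =
                 Files.newDirectoryStream(Paths.get("target/surefire-reports"), "TEST-*.xml")) {
            for (Path report : reports) {
                NodeList cases = parse(report).getElementsByTagName("testcase");
                for (int i = 0; i < cases.getLength(); i++) {
                    Element tc = (Element) cases.item(i);
                    // A <testcase> with a <failure> or <error> child did not pass.
                    if (tc.getElementsByTagName("failure").getLength() > 0
                            || tc.getElementsByTagName("error").getLength() > 0) {
                        failed.add(tc.getAttribute("classname") + "#" + tc.getAttribute("name"));
                    }
                }
            }
        }
        System.out.println("Failed tests: " + failed);

        // 2. Read line coverage from the Jacoco XML report. Each <line> element
        //    carries nr (source line number) and ci (covered instructions);
        //    ci > 0 means the line was executed during the run.
        NodeList sourceFiles = parse(Paths.get("target/site/jacoco/jacoco.xml"))
                .getElementsByTagName("sourcefile");
        for (int i = 0; i < sourceFiles.getLength(); i++) {
            Element sf = (Element) sourceFiles.item(i);
            if (!"Driver.java".equals(sf.getAttribute("name"))) {
                continue; // only interested in the class under test (illustrative)
            }
            NodeList lines = sf.getElementsByTagName("line");
            for (int j = 0; j < lines.getLength(); j++) {
                Element line = (Element) lines.item(j);
                if (Integer.parseInt(line.getAttribute("ci")) > 0) {
                    System.out.println("Executed line " + line.getAttribute("nr"));
                }
            }
        }
    }

    // Parse a report without trying to resolve Jacoco's external DTD reference.
    private static Document parse(Path file) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        return factory.newDocumentBuilder().parse(file.toFile());
    }
}

Note that the standard jacoco.xml aggregates coverage over the whole test run, so to attribute executed lines to one failing test you would typically re-run just that test with the Jacoco agent attached (or work with per-test session data), as the caveat in the comments points out.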

0 Answers