How to Rerun the Failures in a BDD JUnit Test Automation Framework?

Last Updated : 03 Feb, 2023

We often come across failures after executing our test scenarios through automation. These failures generally have two causes:

  • Functional issues (actual failures).
  • Intermittent issues (application slowness, a page not loading properly, a technical glitch, etc.).

Having a mechanism to automatically rerun failures is very helpful for an automation tester, as it saves a lot of the time spent manually analyzing failures and retriggering them. TestNG gives us two ways to rerun failures (a retry-listener sketch follows this list):

  • Use a TestNG retry analyzer, attached through a listener, to retry a test case immediately when it fails.
  • Trigger the failed test scenarios using the testng-failed.xml file that TestNG generates automatically after execution.
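
For reference, the listener-based retry in TestNG is typically a small IRetryAnalyzer implementation. A minimal sketch, where the class name and retry limit are illustrative:

Java

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Re-runs a failed test up to MAX_RETRIES times.
// Attach via @Test(retryAnalyzer = RetryAnalyzer.class) or a listener.
public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2; // illustrative limit
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            return true; // TestNG re-executes the failed test
        }
        return false;
    }
}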

Automatic retrying of failures is also available in the JUnit framework, where the system re-executes failed tests on its own. But I see an issue with the above approaches when the number of test scenarios being executed is very high. Let’s take an example:

  • We need to execute 500 automated test cases as part of regression. Assume each test case takes an average of 10 mins to execute, so the total execution time required is 500 * 10 = 5000 mins.
  • Assume that out of the 500 test cases, 400 pass and 100 fail. Of the 100 failures, 50 are functional issues and 50 are intermittent issues.
  • If we choose to rerun the failures automatically, the 50 test cases that failed due to functional issues are also re-executed as many times as the retry parameter allows, which adds 50 * 10 = 500 mins per retry attempt. Ideally, these test cases should not be re-executed at all, because they are genuine functional failures.

Now let’s discuss what else can be done to rerun the failures in a JUnit framework. I had to deal with a large regression test bed of more than 10,000 test cases; below is the approach I followed.

  • In the JUnit runner class, we used a plugin to generate a JSON file as output after the execution of each scenario, so at the end of the execution there is one JSON file per scenario. Here is what the JSON report looks like:
[
  {
    "line": 4,
    "elements": [
      {
        "start_timestamp": "2023-01-06T11:08:54.758Z",
        "before": [
          {
            "result": {
              "duration": 10773000000,
              "status": "passed"
            },
            "match": {
              "location": "stepdefinition.Hooks.launchBrowser(io.cucumber.java.Scenario)"
            }
          }
        ],
        "line": 6,
        "name": "Stock Analysis of given stocks",
        "description": "",
        "id": "stock-analysis;stock-analysis-of-given-stocks",
        "after": [
          {
            "result": {
              "duration": 4863000000,
              "status": "passed"
            },
            "match": {
              "location": "stepdefinition.Hooks.closeBrowser(io.cucumber.java.Scenario)"
            }
          }
        ],
        "type": "scenario",
        "keyword": "Scenario",
        "steps": [
          {
            "result": {
              "duration": 1967000000,
              "status": "passed"
            },
            "line": 7,
            "name": "we read all the company name from excel \"StockName\" and sheet name \"Set1\"",
            "match": {
              "arguments": [
                {
                  "val": "\"StockName\"",
                  "offset": 40
                },
                {
                  "val": "\"Set1\"",
                  "offset": 67
                }
              ],
              "location": "stepdefinition.StockAnalysisSteps.we_read_all_the_company_name_from_excel_and_sheet_name(java.lang.String,java.lang.String)"
            },
            "keyword": "Given "
          },
          {
            "result": {
              "duration": 215000000,
              "status": "passed"
            },
            "line": 8,
            "name": "landed on the google homepage",
            "match": {
              "location": "stepdefinition.StockAnalysisSteps.landed_on_the_google_homepage()"
            },
            "keyword": "When "
          },
          {
            "result": {
              "duration": 76622000000,
              "status": "passed"
            },
            "line": 9,
            "name": "capture all the stock statistics",
            "match": {
              "location": "stepdefinition.StockAnalysisSteps.capture_all_the_stock_statistics()"
            },
            "keyword": "Then "
          }
        ],
        "tags": [
          {
            "name": "@STOCK_ANALYSIS"
          }
        ]
      }
    ],
    "name": "Stock Analysis",
    "description": "",
    "id": "stock-analysis",
    "keyword": "Feature",
    "uri": "file:target/parallel/features/StockAnalysis_scenario001_run001_IT.feature",
    "tags": [
      {
        "name": "@STOCK_ANALYSIS",
        "type": "Tag",
        "location": {
          "line": 3,
          "column": 1
        }
      }
    ]
  }
]
  • Write a Java program that reads all the JSON files programmatically and determines whether each scenario passed or failed. If a scenario failed, store its name in an ArrayList<String>; after processing, the list holds all the failed scenarios. The Java method below extracts the overall status from the JSON report above, and a sketch that loops over every report file follows it.

Java

import java.io.FileReader;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

public static void getScenarioNameWithOverallStatus(String jsonPath) {
    JSONParser parser = new JSONParser();
    try {
        // A Cucumber JSON report is an array of features; each feature
        // holds an "elements" array of scenarios.
        JSONArray features = (JSONArray) parser.parse(new FileReader(jsonPath));
        JSONObject feature = (JSONObject) features.get(0);
        JSONArray scenarios = (JSONArray) feature.get("elements");
        for (int i = 0; i < scenarios.size(); i++) {
            JSONObject scenario = (JSONObject) scenarios.get(i);
            JSONArray steps = (JSONArray) scenario.get("steps");
            // If any step did not pass, record this scenario as failed.
            if (!getResultFromSteps(steps)) {
                storeFailedScenarios(scenario); // e.g. add the scenario name to an ArrayList<String>
            }
        }
    } catch (Exception e) {
        System.out.println("Exception : " + e);
    }
}

// Returns true only when every step in the scenario has status "passed".
private static boolean getResultFromSteps(JSONArray steps) {
    for (Object step : steps) {
        JSONObject result = (JSONObject) ((JSONObject) step).get("result");
        if (!"passed".equals(result.get("status"))) {
            return false;
        }
    }
    return true;
}
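
Since the runner produces one JSON file per scenario, the method above has to be called for every report file. A minimal sketch of that loop, assuming the reports land in a directory such as target/cucumber-reports (the directory name is illustrative):

Java

import java.io.File;

public static void processAllReports(String reportDir) {
    // Pick up every per-scenario JSON report produced by the runner.
    File[] reports = new File(reportDir).listFiles((dir, name) -> name.endsWith(".json"));
    if (reports == null) {
        return; // directory missing or not readable
    }
    for (File report : reports) {
        getScenarioNameWithOverallStatus(report.getAbsolutePath());
    }
}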


  • The next step is to store the failed scenarios. In my case I used a database; if your organization allows one, that is the best option for storing the failed scenarios. Otherwise, you can write the failed scenarios to a new JSON file that contains each scenario name as a key and any value you would like to assign, such as Failed.
  • If you decide to store the failed scenarios in a database, also generate a random alphanumeric key (I used 4 characters) and store it along with them. Note that all failed scenarios of a given build share a single key; let’s call it the rerunKey. The Java method below generates a random alphanumeric key of a specific length, and a sketch of the database insert follows it.

Java

import org.apache.commons.lang3.RandomStringUtils;

// Generates the random alphanumeric rerunKey for one build's failures.
public static String generateRandomAlphanumericString(int length) {
    try {
        return RandomStringUtils.randomAlphanumeric(length);
    } catch (Exception e) {
        return null;
    }
}


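With the key in hand, persisting the failed scenarios is one insert per scenario name. A minimal JDBC sketch, assuming a simple failed_scenarios(rerun_key, scenario_name) table; the table layout and connection handling are illustrative:

Java

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Set;

// Persists each failed scenario under the build's rerunKey.
// Assumed table: failed_scenarios(rerun_key VARCHAR, scenario_name VARCHAR)
public static void storeFailedScenarios(Connection conn, String rerunKey,
        Set<String> failedScenarios) throws Exception {
    String sql = "INSERT INTO failed_scenarios (rerun_key, scenario_name) VALUES (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        for (String scenario : failedScenarios) {
            ps.setString(1, rerunKey);
            ps.setString(2, scenario.trim());
            ps.addBatch();
        }
        ps.executeBatch(); // one round trip for the whole batch
    }
}
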
  • Now, let’s come to executing the failures using the rerunKey. We pass the rerunKey (in the database approach) in the Maven build command. Cucumber’s @Before hook is executed automatically at the beginning of every scenario; in it we can write code to fetch all the failed scenarios from the database for the given rerunKey. Here is a sample Maven command for your reference:
mvn clean verify -Denv=<environment_name> -Dtag=<tag_name> -Dfeature=<feature_folder_name> -Drerunkey="as43" exec:java
  • We now have the set of failed scenarios retrieved from the database and stored as a HashMap<String,String>, with the scenario name as the key. In the hook we check whether the current scenario is present in the map: if it is, the test executes; otherwise we skip it (a full hook sketch follows the snippet below). The below line of code can be used to skip a test scenario:
Assume.assumeTrue("Skipping the scenario", false);
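
Putting the pieces together, the hook might look like the sketch below, assuming failedScenarios is the map loaded from the database for the given rerunKey; the class and property names are illustrative:

Java

import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import java.util.HashMap;
import org.junit.Assume;

public class RerunHooks {
    // Assumed to be populated once from the database for the given rerunKey.
    private static HashMap<String, String> failedScenarios = new HashMap<>();

    @Before
    public void skipUnlessPreviouslyFailed(Scenario scenario) {
        String rerunKey = System.getProperty("rerunkey");
        if (rerunKey == null || rerunKey.isEmpty()) {
            return; // normal run: execute everything
        }
        // Rerun mode: execute only scenarios recorded as failed earlier.
        Assume.assumeTrue("Skipping the scenario",
                failedScenarios.containsKey(scenario.getName()));
    }
}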
  • The same approach can be used with JSON files instead of a database for storing the failed scenarios. In that case, we read the scenarios from the JSON file rather than the database and store them in the same HashMap<String,String> format; the remaining logic is unchanged. The code below writes the failed scenarios to a JSON file, and a matching reader sketch follows it.

Java

import java.io.FileWriter;
import java.util.Set;
import org.json.simple.JSONObject;

public static void generateFailedScenariosJSONFile(Set<String> failedScenarios, String fileLocation) {
    try {
        // One entry per failed scenario: the name is the key, "Failed" the value.
        JSONObject jsonObject = new JSONObject();
        for (String scenarioName : failedScenarios) {
            jsonObject.put(scenarioName.trim(), "Failed");
        }
        try (FileWriter file = new FileWriter(fileLocation)) {
            file.write(jsonObject.toJSONString());
        }
    } catch (Exception e) {
        System.out.println("Exception Occurred : " + e);
    }
}


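Reading that file back into a HashMap<String,String> is the mirror image of the writer above; a minimal sketch using the same json.simple library (the method name is illustrative):

Java

import java.io.FileReader;
import java.util.HashMap;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

public static HashMap<String, String> readFailedScenariosJSONFile(String fileLocation) {
    HashMap<String, String> failedScenarios = new HashMap<>();
    try {
        // The file holds scenario names as keys and "Failed" as values.
        JSONObject jsonObject = (JSONObject) new JSONParser().parse(new FileReader(fileLocation));
        for (Object key : jsonObject.keySet()) {
            failedScenarios.put((String) key, (String) jsonObject.get(key));
        }
    } catch (Exception e) {
        System.out.println("Exception Occurred : " + e);
    }
    return failedScenarios;
}
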
Now the question is: what do we gain with the above approach?

  • If the failed scenarios are stored in the database, the autogenerated rerunKey lets us easily check which scenarios are tagged with it. Take the previous example: out of 500 test cases, 400 passed and 100 failed, with 50 of the failures being functional issues (actual failures) and 50 being intermittent issues. We now have 100 test scenario names stored in the database under one rerunKey.
  • Ideally, we should not rerun the 50 scenarios with functional issues. We can quickly delete those 50 rows with a database editor, or nullify the rerunKey on the scenarios we don’t want to rerun.
  • Pass the rerunKey in the Maven command; nothing in the earlier run configuration needs to change.
  • The system will now automatically rerun only the 50 scenarios having intermittent issues.
  • The JSON file approach works the same way: just remove the 50 scenarios with functional issues from the JSON file and keep the rest as they are.
  • With this approach, we can do our analysis first and then rerun only the failures we need to, which is a great saving in execution time.

