As the name suggests, test case prioritization refers to ordering the test cases in a test suite on the basis of different factors. These factors could be code coverage, risk/critical modules, functionality, features, etc.
Why should test cases be prioritized?
As the size of software increases, its test suite also grows bigger and requires more effort to maintain. To detect bugs in software as early as possible, it is important to prioritize test cases so that the most important test cases are executed first.
Types of Test Case Prioritization:
- General Prioritization –
In this type of prioritization, test cases that are expected to remain useful for subsequent modified versions of the product are given priority. It does not require any information about the modifications made to the product.
- Version-Specific Prioritization –
Test cases can also be prioritized so that they are most useful for a specific version of the product. This type of prioritization requires knowledge of the changes that have been introduced in that version.
Prioritization Techniques:
1. Coverage-based Test Case Prioritization:
This type of prioritization is based on code coverage, i.e. test cases are prioritized on the basis of how much code they cover.
- Total Statement Coverage Prioritization –
In this technique, the total number of statements covered by a test case is used as the factor for prioritization. For example, a test case covering 10 statements is given higher priority than a test case covering 5 statements.
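As a minimal sketch (the test-case names and statement sets below are made up for illustration), total statement coverage prioritization is simply a sort by the number of statements each test case covers:

```python
# Hypothetical coverage data: each test case maps to the set of
# statement numbers it executes.
coverage = {
    "TC1": {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},  # covers 10 statements
    "TC2": {1, 2, 3, 4, 5},                   # covers 5 statements
    "TC3": {2, 3, 4, 6, 7, 8, 9},             # covers 7 statements
}

# Sort test cases in decreasing order of total statements covered.
order = sorted(coverage, key=lambda tc: len(coverage[tc]), reverse=True)
print(order)  # ['TC1', 'TC3', 'TC2']
```

TC1 runs first because it covers the most statements, regardless of how much its coverage overlaps with the others; that overlap is exactly what the additional-coverage variant below addresses.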
- Additional Statement Coverage Prioritization –
This technique first selects the test case with the maximum statement coverage, then iteratively selects the test case that covers the most statements left uncovered by the previously selected test cases. The process is repeated until all statements have been covered.
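The iteration above is a greedy set-cover-style loop. A self-contained sketch (coverage data is hypothetical):

```python
def additional_coverage_order(coverage):
    """Greedy 'additional statement coverage' prioritization:
    repeatedly pick the test case that covers the most statements
    not yet covered by the test cases chosen so far."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test case adding the most new statements.
        best = max(remaining, key=lambda tc: len(remaining[tc] - covered))
        order.append(best)
        covered |= remaining.pop(best)
        # Once no test case adds new statements, the leftovers are
        # appended in arbitrary (dictionary) order.
    return order

coverage = {
    "TC1": {1, 2, 3},
    "TC2": {1, 2, 3, 4, 5},
    "TC3": {6, 7},
}
print(additional_coverage_order(coverage))  # ['TC2', 'TC3', 'TC1']
```

Note how TC3 is chosen before TC1 even though TC1 covers more statements in total: after TC2 is selected, TC3 contributes two new statements while TC1 contributes none.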
- Total Branch Coverage Prioritization –
Here, total branch coverage is used as the factor for ordering test cases, where branch coverage refers to covering each possible outcome of a condition.
- Additional Branch Coverage Prioritization –
Similar to the additional statement coverage technique, it first selects the test case with maximum branch coverage and then iteratively selects the test case that covers the most branch outcomes left uncovered by the previously selected test cases.
- Total Fault-Exposing-Potential Prioritization –
Fault-exposing potential (FEP) refers to the ability of a test case to expose faults. Statement and branch coverage techniques do not take into account the fact that some bugs can be detected more easily than others, and that some test cases have more potential to detect bugs than others. FEP depends on:
- Whether the test case covers the faulty statements.
- The probability that a faulty statement will cause the test case to fail.
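One common way to operationalize FEP is to estimate, for each (test case, statement) pair, the probability that a fault in that statement would make the test case fail (mutation analysis is often used for such estimates), and then rank test cases by the sum of those probabilities. A hypothetical sketch, with all probabilities invented for illustration:

```python
# Hypothetical per-statement failure probabilities: fep[tc][s] is the
# estimated probability that a fault in statement s causes test case
# tc to fail. Statements a test case does not cover contribute 0.
fep = {
    "TC1": {"s1": 0.9, "s2": 0.1},
    "TC2": {"s1": 0.2, "s2": 0.4, "s3": 0.5},
}

# A test case's total FEP score is the sum of its probabilities.
scores = {tc: sum(probs.values()) for tc, probs in fep.items()}

# Rank test cases by decreasing FEP score.
order = sorted(scores, key=scores.get, reverse=True)
print(order)  # ['TC2', 'TC1']
```

TC2 ranks first: although no single statement gives it a high chance of failing, its combined fault-exposing potential (1.1) exceeds TC1's (1.0).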
2. Risk-based Prioritization:
This technique uses risk analysis to identify potential problem areas whose failure could lead to serious consequences, and test cases are prioritized with those problem areas in mind. Risk analysis involves the following steps:
- Listing the potential problems.
- Assigning a probability of occurrence to each problem.
- Estimating the severity of impact for each problem.
After performing the above steps, a risk-analysis table is formed to present the results. The table consists of columns such as Problem ID, Potential Problem Identified, Probability of Occurrence, Severity of Impact, and Risk Exposure.
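The steps above can be sketched as follows, using the common convention that risk exposure is probability of occurrence times severity of impact (the problems, probabilities, and severities here are invented examples):

```python
# Hypothetical risk-analysis table entries.
risks = [
    {"id": "P1", "problem": "Payment gateway timeout",  "probability": 0.3, "severity": 9},
    {"id": "P2", "problem": "Wrong date format in UI",  "probability": 0.8, "severity": 2},
    {"id": "P3", "problem": "Data loss on crash",       "probability": 0.1, "severity": 10},
]

# Risk exposure = probability of occurrence x severity of impact.
for r in risks:
    r["exposure"] = r["probability"] * r["severity"]

# Test cases for the highest-exposure problems are executed first.
order = [r["id"] for r in sorted(risks, key=lambda r: r["exposure"], reverse=True)]
print(order)  # ['P1', 'P2', 'P3']
```

Note that the rare but severe problem (P3) ranks below the moderately likely, severe one (P1) under this simple product formula; teams sometimes adjust weights when a failure is unacceptable regardless of likelihood.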
3. Prioritization using Relevant Slices:
This type of prioritization uses program slicing. When a program is modified, all existing regression test cases are executed to make sure the program yields the same results as before, except where it has been modified. To focus this effort, we find the part of the program that has been affected by the modification, and test cases are prioritized for this affected part. The slicing technique involves three kinds of slices:
- Execution slice –
The set of statements executed under a test case.
- Dynamic slice –
The statements executed under a test case that affect the program's output.
- Relevant slice –
The statements executed under a test case that do not currently affect the program's output, but could affect the output of the test case if they were modified.
4. Requirements-based Prioritization:
Some requirements are more important than others, or are more critical in nature, so the test cases for such requirements should be prioritized first. The following factors can be considered while prioritizing test cases based on requirements:
- Customer-assigned priority –
The customer assigns a weight to each requirement according to their needs or understanding of the product's requirements.
- Developer-perceived implementation complexity –
A priority assigned by the developer on the basis of the effort or time required to implement the requirement.
- Requirement volatility –
How frequently the requirement has changed.
- Fault proneness of requirements –
A priority based on how error-prone the requirement has been in previous versions of the software.
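One simple way to combine these four factors is a weighted score per requirement; test cases inherit the priority of the requirement they exercise. The weights and factor scores below are illustrative assumptions, not prescribed values:

```python
# Hypothetical weights for the four factors (sum to 1.0).
weights = {
    "customer_priority": 0.4,
    "implementation_complexity": 0.2,
    "volatility": 0.2,
    "fault_proneness": 0.2,
}

# Hypothetical factor scores (0-10) for two requirements.
requirements = {
    "R1": {"customer_priority": 9, "implementation_complexity": 4,
           "volatility": 2, "fault_proneness": 7},
    "R2": {"customer_priority": 5, "implementation_complexity": 8,
           "volatility": 6, "fault_proneness": 3},
}

def priority(factors):
    # Weighted sum of the factor scores.
    return sum(weights[f] * score for f, score in factors.items())

order = sorted(requirements, key=lambda r: priority(requirements[r]), reverse=True)
print(order)  # ['R1', 'R2']
```

Here R1 outranks R2 mainly because the customer-assigned priority carries the largest weight; changing the weights changes the ordering.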
Metric for Measuring the Effectiveness of a Prioritized Test Suite:
To measure how effective a prioritized test suite is, we can use a metric called APFD (Average Percentage of Faults Detected). The formula for APFD is:
APFD = 1 - (TF1 + TF2 + ... + TFm) / (n * m) + 1 / (2n)
where,
TFi = position of the first test case in test suite T that exposes fault i
m = total number of faults exposed under T
n = total number of test cases in T
APFD values range from 0 to 1 (often expressed as a percentage from 0 to 100). The higher the APFD value, the faster the fault-detection rate. Simply put, APFD indicates how quickly a test suite can identify faults or bugs in the software. If a test suite detects faults quickly, it is considered more effective and reliable.
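A direct translation of the formula, applied to a small hypothetical example:

```python
def apfd(first_positions, n):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n),
    where first_positions[i] is the 1-based position of the first
    test case in the prioritized suite that exposes fault i,
    n is the number of test cases, and m is the number of faults."""
    m = len(first_positions)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

# Example: a suite of 5 test cases exposes 4 faults, first detected
# by the test cases at positions 1, 1, 2, and 3 respectively.
print(apfd([1, 1, 2, 3], n=5))  # 0.75
```

Moving fault detection earlier in the suite raises the score: if all four faults were first exposed by the very first test case, the same suite would score 1 - 4/20 + 1/10 = 0.9.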