
Agile Testing Techniques

Last Updated : 22 Mar, 2022

Testing is an integral part of software development and goes hand in hand with requirement creation. Agile projects can follow a structured and systematic approach to testing as well. Because Agile projects usually define multiple releases, testing plays an important role in controlling the quality of the product or service.

In Agile, emphasis is placed on requirement analysis discussions, where team members are expected to work closely with the Product Owner and Business Analyst on the user stories and their acceptance criteria. Testers need to be involved right from project initiation so that they can actively participate in defining the Definition of Done, exploring user stories, and similar activities. Testers should not remain idle or wait for requirements; they should follow up on the validation criteria to understand what is expected of a new feature or of a change being made in the system. In Agile projects, every team member, irrespective of their role, has to become more proactive, enthusiastic, and forward-thinking in interpreting requirements.

Agility not only improves the software testing process but also adds flexibility and ease of use. Agility thus drives continuous process improvement, which lets the team complete more testing within each iteration and builds confidence in the process.

Key Concepts

In Agile frameworks, development and testing are carried out within the same sprint. Once the testing team has a clear idea of what is being developed, the testers work out how the system should behave after the new changes are incorporated and whether it satisfies the business needs or acceptance criteria. While the development team works on the design and code changes, the testing team works on the testing approach and prepares the test scenarios and test scripts. Once the code is ready to be tested, the testing team starts testing, which needs to be completed by the end of the sprint. This is feasible when the functionality is made available in smaller modules: the testing team is expected to test each module as and when it is provided, rather than waiting for the whole functionality to be developed.

1. Test Strategy and Planning

The test strategy is an important artifact in any testing project, and it is the responsibility of the testing team to have a structured and well-defined testing approach and strategy. Unlike in the traditional approach, in Agile the scope of the requirements, and hence of the testing, changes very often, so it is usually recommended to update the test strategy at the beginning of each sprint. Documentation of this test strategy starts in the release initiation phase and acts as a living document throughout the release. Each sprint has its own scope of stories that need to be delivered, and the testing team creates a test plan for that scope. This plan is updated in each sprint and carried through to the release.

Traditionally in software projects, system testing began only after the development team had completed integration, and the business owners carried out UAT (User Acceptance Testing) after system testing. In Agile, these boundaries are far more porous. An Agile team member may be testing one piece of functionality, writing a test plan for another, and reviewing the design for yet another story, all at the same time. Testers also participate in design discussions in Agile projects, which helps them plan their test cases better.

In Agile, towards the end of every sprint, the team participates in the sprint review with the business. In these reviews, it is primarily the responsibility of the team members involved in testing to prepare the test data sheets, run the tests in front of the business stakeholders and Product Owner, and show them how the product works and whether or not it meets the acceptance criteria. This is a key process in Agile and helps the team collect feedback from the client and other stakeholders on the user stories developed within the sprint. The testing team plays a pivotal role in this process.

2. Importance of Automation for Testing in Agile

Agile needs continuous testing of the functionalities developed in past Sprints, and automation helps complete this regression testing on time. This is especially important in Agile, which produces a higher number of builds and more frequent changes because of its iterative nature. Automation brings several benefits:

  • It minimizes the rework that can result from error-prone manual processes and brings predictability to the tests run as part of Sprint testing.
  • It covers more regression tests than could be executed manually.
  • It frees testers for other tasks: rather than retesting existing functionality, they can focus on the parts of new functionality that require manual intervention.
  • Continuous regression testing would otherwise require a huge effort, since the tests are repeated in every iteration; automation reduces this effort.

The smaller automated test cases aggregated in each sprint form, by the end of the project, an exhaustive regression suite that covers almost all the functionality from a system, regression, and acceptance testing point of view. This helps minimize the cost of the Release Sprint (the last Sprint in the Release), as the majority of the testing can be handled through the existing automation.
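The sketch below illustrates what such a sprint-level automated regression test could look like in Python with pytest; the calculate_discount function and its expected values are hypothetical and used only for illustration, not taken from any specific project.

# A minimal pytest sketch of an automated regression test.
# calculate_discount is a hypothetical stand-in for production code.
import pytest

def calculate_discount(order_total, customer_tier):
    # Returns the order total after applying the tier discount.
    rates = {"gold": 0.10, "silver": 0.05}
    return order_total * (1 - rates.get(customer_tier, 0.0))

@pytest.mark.regression  # custom marker, registered in pytest.ini
@pytest.mark.parametrize(
    "total, tier, expected",
    [
        (100.0, "gold", 90.0),     # behaviour delivered in an earlier sprint
        (100.0, "silver", 95.0),
        (100.0, "bronze", 100.0),  # unknown tier gets no discount
    ],
)
def test_discount_regression(total, tier, expected):
    assert calculate_discount(total, tier) == pytest.approx(expected)

Tagging such tests with a marker (here 'regression') lets the team run them selectively in every sprint's build, for example with 'pytest -m regression', so the automated suite grows sprint by sprint into the release-level regression pack described above.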

3. Test Coverage in Agile

Given the limited timeframe of a sprint, identifying and finalizing the test coverage for the user stories is one of the most important and challenging tasks in an Agile project. Inadequate test coverage can mean missing critical tests for some requirements. Test coverage for a user story is usually discussed and finalized during the backlog grooming sessions and later during the Sprint Planning meetings. Even then, a few test scenarios can still be missed; this risk can be mitigated by creating a traceability matrix between requirements, user stories, and test cases, and by linking test cases to user stories.
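As an illustration of how such a traceability matrix can be kept machine-checkable, the sketch below maps user story IDs to test case IDs and flags stories with no tests; all IDs are invented for the example.

# A minimal sketch of a story-to-test-case traceability check.
# Story and test case IDs are hypothetical examples.
traceability = {
    "US-101": ["TC-101-01", "TC-101-02"],  # login story
    "US-102": ["TC-102-01"],               # password reset story
    "US-103": [],                          # no test cases mapped yet
}

uncovered = [story for story, tests in traceability.items() if not tests]
if uncovered:
    print("User stories with no mapped test cases:", ", ".join(uncovered))

A check like this can be run during backlog grooming or Sprint Planning to surface coverage gaps before the sprint starts.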

Another case arises when a code change is made without a complete impact analysis or review, so the corresponding changes to test cases are missed, resulting in incomplete test coverage. This can be mitigated by analyzing the impact, reviewing the source code to identify the modules that change, and ensuring that all of the changed code has been properly tested. Code coverage tools should also be leveraged, as part of the automation effort, for verifiable thoroughness.

4. Test Data Management in Agile

Test data is one of the most important factors in testing, and creating and manipulating the data for the various test cases is one of its most challenging tasks. Agile testing within sprints becomes difficult when the system depends on another team for test data, because sprint timelines leave no room for delays in data setup. Test data is therefore set up during the sprint so that it is available at least just in time (JIT), if not earlier, by the time the developer checks in code and claims it 'as done'. Since developers usually automate their unit and integration tests, and sometimes follow a test-driven development approach to complete their user stories, a synchronized supply of test data by the testing team is essential.

Data needs are analyzed during the backlog grooming sessions and the Sprint Planning meetings to ensure that the complete data setup is identified, and the concerned team(s) are then contacted accordingly. Even this may not always be fruitful: if the data is complex and large, the team owning the data may not be able to provide adequate support. Hence the discussion should ideally start with the business at least a sprint early, or as per the applicable SLA, so that the data is available at the appropriate time.

Also, where possible, the business members responsible for the data should be included in the Scrum team for the sprint in which their support is required. This gives them more clarity on the exact data needed and lets the team gauge and refine its progress.

Moreover, since Agile is an iterative model, similar user stories may appear in different sprints; test data created in earlier sprints can then be reused directly, or with minor modifications, avoiding redundant data requests and reducing the turnaround time for test data creation and manipulation.
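One way to make such data reusable across sprints is to package it as a shared test fixture. The sketch below uses pytest and a hypothetical customer record purely for illustration.

# A minimal sketch of a reusable test data fixture (e.g., kept in conftest.py).
# The customer record is a hypothetical example.
import pytest

@pytest.fixture(scope="session")
def sample_customer():
    # Test data created once per test session and reused by
    # stories in this and later sprints.
    return {
        "customer_id": "CUST-0001",
        "tier": "gold",
        "country": "IN",
    }

def test_customer_tier_is_recognised(sample_customer):
    assert sample_customer["tier"] in {"gold", "silver", "bronze"}

Keeping such fixtures in a shared test data repository reduces redundant data requests and the turnaround time mentioned above.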

5. Impact Analysis in Agile

In traditional practice, the testing team went through the business requirements and developed test scenarios in isolation, and participated in design reviews only after the development team had completed the design, so their contribution to those reviews was minimal. In Agile, the testing team plays a more significant role in impact analysis and design reviews because they are involved in the design discussions for each story. This helps the developer with impact analysis and with debugging issues; testing team members often go down to the code level to analyze or debug an issue along with the developer. Because of the short timeframe in Agile, it is the responsibility of the entire team to ensure that things are built correctly from day one and thus contribute to a successful delivery.

Testing Practices in Agile:  

1. Types of Testing:

2. Defect Management in Agile: 

There is a myth that defect management is not needed in Agile. However, it is important to understand what happens when a bug needs more effort and cannot be fixed within the stipulated time of a sprint; in such cases defect management is needed in Agile projects too. Whenever stories become available to test, the testing team starts testing and finds bugs. The usual practice is that if a bug can be fixed within a day, no defect is raised and the development team is simply asked to work on the issue. If the issue cannot be fixed within a day, a defect is raised to make the team and the relevant stakeholders aware of it, and it is tracked formally in a defect management tool. Critical defects are discussed with the team and the Product Owner, who finalizes the criticality of the defect and of the story under which it has been logged.

Following is the depiction of how Defect management is done in Agile:

Defect Management Flow within a Sprint

The defects are prioritized on the following basis:  

  • Impact on the test coverage
  • Impact on meeting the acceptance criteria of the story

What if a defect comes up late in the sprint, or is not directly related to any of the stories taken up in the sprint? In such cases a defect is raised, and the team discusses whether it can be fixed in the same sprint and how much effort is needed to fix and retest it. If the effort is too high and the fix cannot be delivered as part of the current sprint, the defect is either moved under a related story planned for a future sprint, or converted into a new story and assigned to a future sprint.

It is very important to capture how effective the testing techniques deployed by the team are. The following are a few suggested metrics the team can track and analyze as part of its defect management activities:

Defect Removal Effectiveness (DRE): DRE is a metric that tells how many defects the testers identified in their own testing and how many slipped to the next stage, i.e. UAT, production, or the warranty period. DRE is expressed as a percentage.

Defect Removal Effectiveness (DRE) = (No. of In-process defects / Total no. of defects)* 100  
(Total number of defects = In-process defects + Defects found in Sprint/Release review)
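As a worked example (with hypothetical numbers): if the team finds 45 defects during in-sprint testing and 5 more are found in the Sprint/Release review, the total is 50 defects and DRE = (45 / 50) * 100 = 90%.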

Test Coverage (using code coverage): When user stories are tested using the black-box technique, the team only validates whether the output is correct for the given inputs. With a focus on test coverage, testers can look inside the box and analyze how much of the code was actually exercised by the test set, using commercially available coverage tools such as the 'Semantic Designs' test coverage tool, 'BullseyeCoverage', etc. A focus on test coverage thus expands the view from black-box testing towards white-box testing. At a high level, the process is:

  • Compile or instrument the code with the test coverage tool integrated.
  • Run the test cases against that instrumented code.
  • After testing is complete, check the tool report to see which paths, classes, branches, and files were covered and the overall code coverage achieved by the tests just executed.
  • Based on the report and a discussion with the developers, decide whether tests need to be added or updated so that the code the existing tests did not reach is also covered.

Many of these tools provide more than just a code coverage percentage: they also report product quality metrics (PQM) such as memory issues, dead code, and clone identification, which help developers clean up and optimize the code further. Thus, code coverage tools help testers understand what part of the code has been tested and what more needs to be tested, and help developers fine-tune and refactor the code.

Test Coverage % = % of code covered by the test suite.
Test coverage targets should be 90% to 100%.
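As a concrete illustration of the high-level process above, using Python's open-source coverage.py as the example tool (the same idea applies to the commercial tools mentioned earlier):

# Run the test suite with coverage instrumentation enabled
coverage run -m pytest
# Show which statements the tests exercised, with missing lines listed
coverage report -m
# Optionally generate an HTML report to drill into untested files
coverage html

The resulting report is then reviewed with the developers to decide where additional tests are needed.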

Automation Test Coverage: From a testing standpoint, automation test coverage gives insight into the percentage of test cases that have been automated as part of the project. Each sprint automates certain test cases, which can then be run automatically in future sprints. In this way the number of automated test cases grows sprint by sprint, reducing the number of manual test cases and giving a higher percentage of automation that can cover system-level cases during release testing, rather than executing them manually.

Automation test coverage = (Number of automated test cases / Total number of test cases) * 100
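For example (with hypothetical numbers): if 120 of the 200 test cases in the suite have been automated, automation test coverage = (120 / 200) * 100 = 60%.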

A team with a higher percentage of automation can concentrate more on exploratory and ad-hoc testing, since the regular test cases are covered through automation.

Automation Coverage Effectiveness: The effectiveness of automation is defined by the number of defects caught through automated testing. The more defects caught by running the automated test cases, the higher the effectiveness. The team works on identifying the components that could be modified so that the automation catches more defects.

Automation coverage effectiveness = (Number of defects caught by automation / Total number of defects caught) * 100
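For example (with hypothetical numbers): if automation catches 18 of the 24 defects found in a sprint, automation coverage effectiveness = (18 / 24) * 100 = 75%.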

With each sprint, the team can take the defects caught manually in the previous sprint and incorporate corresponding checks into the automation, so that the effectiveness of the automated test cases increases. If the effectiveness stays the same or decreases, the team needs to revisit its automation testing strategy.

Project/Release Initiation Phase: Testers should get involved right from project initiation to understand the requirements better:

  • Understand the high-level requirements and highlight issues (if any) in the very early phase of the project.
  • Provide inputs to the Product Owner or Business Analyst while business priorities are being set in the Product Backlog.

Understand the business priorities and help in the Release Planning:

  • Project/Release Planning and Sizing: Both developers and testers should participate in the Product Backlog grooming sessions. Developers and testers should take part together in the Planning Poker game, which helps in accurate estimates and story sizing. A customized estimation template can be used to capture the defect rate, injected defect rate, and rounds of the regression cycle based on past Agile project experience.
  • Sprint Planning: Testers should mandatorily attend the requirement walkthrough sessions so that more requirement/design findings and production issues are captured in the early phase. It is always recommended to revisit and revise estimates after the sprint-level requirement analysis. A modular approach to test planning and scripting should be adopted. A test scenario walkthrough with the team at the end of the planning phase helps every member get a broader picture.
  • Sprint Execution: Use a consolidated/modular approach for test data setup activities and a test data repository to increase reusability. Use metrics-based test execution for better tracking and coverage, and risk-based testing in critical situations. To maximize test coverage, use "Sprint Traceability Metrics." Use Agile review checklists to increase the quality of deliverables, and collaborate closely with any waterfall teams for workload balancing during a crisis.

