Engineering Manager, Quality, Creative Suites
Ajay is the Engineering Manager, Quality at Adobe Systems, Inc., where he leads testing programs.
This article describes an "analytics-driven testing coverage" approach that turns the task of measuring test coverage into an easy and near-instantaneous process. I encourage readers to consider how it could be applied in their own projects. The approach is effective not only in saving manual effort, time, and energy, but also in improving product quality by quantitatively measuring coverage on the fly, reducing redundant test cases, and identifying areas that are currently under-tested.
Delivering a well-designed, working product requires exhaustive coverage of the code base and of the product's requirements. Achieving 100% code coverage and validating the user's explicit and implicit requirements are important measures of testing thoroughness.
Code coverage measures the behavior of the test program against the source code: it checks the algorithmic flow across the code's structure and logic and examines how the program works. Functional coverage is a little different; it checks the output of the program, that is, the results achieved by the program's functionality. These results are then mapped to the product's specifications to validate whether the given output is acceptable. Functional coverage does not examine how the code is traversed while producing the end result (output).
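The distinction can be illustrated with a toy example (hypothetical function names; Python is used for illustration only). A functional check validates only the output, while a line-level trace shows which parts of the code actually ran:

```python
import sys

def classify(n):
    # Hypothetical function under test with two branches.
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

def run_with_line_trace(func, *args):
    """Run func and record which of its source lines executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

# Functional coverage: only the output is checked against the spec.
result, lines_pos = run_with_line_trace(classify, 5)
assert result == "non-negative"

# Code coverage: the trace reveals that the 'else' branch never ran,
# even though the functional check above passed.
_, lines_neg = run_with_line_trace(classify, -3)
print(sorted(lines_pos), sorted(lines_neg))
```

The functional check passes with a single input, but comparing the two traces shows that each input exercises a different set of lines; only together do they cover the whole function.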
Multiple tools are available in the market that make tracking, measuring, and analyzing code coverage easy. Analyzing these tools is outside the scope of this paper.
Measuring, tracking, and optimizing test coverage is difficult for a test manager when multiple testers work on a project and are spread across different locations. The profile of a tester further complicates this activity: black-box testers find it difficult to document what they are testing, because their test cases are a mix of defined and ad-hoc test cases.
The issues that complicate measuring, tracking, and optimizing test coverage are detailed below:
1. One-to-many (OTM) delivery: In a testing task, the software executable (build) connects the developers and testers. A build released by the developers is assigned to multiple testers, and the testers have their own targets, preferences, and test-area assignments.
Although the manager, strategist, and planner can plan a logical distribution of test cases among the testers, compiling results on the tested areas is a challenge. If a developer wants to interact directly with the testers to determine the test areas and data values, it is difficult to reach everyone involved.
There is no easy way for a developer to get instant feedback on what is tested by each tester.
2. Sequential flow: Build delivery, test execution, and result reporting happen sequentially, based on the test engineer's initiative and readiness. A tester might not be able to release results because of his or her involvement in workflow-based testing. Also, a test parameter might have multiple possible data values; a tester might start executing the test case with one specific value and then change the value during the exercise. Although she is testing all the values, it is difficult for her to record every value selection if the count is high. Recording all data values is a time-consuming activity.
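One way to offload value recording from the tester is to capture data values automatically at the point of execution. The sketch below (hypothetical test-action and parameter names, in Python) wraps a test helper so that every value it is invoked with is logged as a side effect:

```python
import functools
from collections import defaultdict

# Records every value each parameter was exercised with,
# so the tester never has to log selections by hand.
value_log = defaultdict(set)

def record_values(func):
    @functools.wraps(func)
    def wrapper(**params):
        for name, value in params.items():
            value_log[name].add(value)
        return func(**params)
    return wrapper

@record_values
def install(locale, target_drive):
    # Hypothetical test action; real logic omitted.
    return f"installing to {target_drive} [{locale}]"

# The tester simply runs test cases; values are captured automatically.
install(locale="en_US", target_drive="C:")
install(locale="fr_FR", target_drive="D:")
install(locale="en_US", target_drive="D:")

print(dict(value_log))
```

However many values the tester cycles through, the log reflects exactly what was exercised, with no extra recording effort.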
3. Multiplicity factor and the need for a dedicated person to coordinate, capture, and compile test results: The activities highlighted above are performed manually in many projects. There are hardly any automated procedures by which test results are compiled and analyzed and coverage charts are plotted. Most of the time, a testing coordinator is tasked with this responsibility. The testing coordinator works with all testers to collect the testing status and details such as:
• Test variables
• Test data
• Machine configurations (processor, 32-bit or 64-bit)
• OS variants (Windows XP, Vista, and Windows 7 with different service packs; Mac OS X 10.5, 10.6, etc.)
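Machine-configuration details like these need not be reported by hand; they can be captured at test start-up using Python's standard `platform` module. A minimal sketch:

```python
import platform
import sys

def capture_machine_config():
    """Collect the machine/OS details a coordinator would otherwise
    have to request from each tester manually."""
    return {
        "os": platform.system(),             # e.g. 'Windows', 'Darwin'
        "os_release": platform.release(),    # e.g. 'XP', '7'
        "architecture": platform.machine(),  # e.g. 'x86_64'
        "processor": platform.processor(),
        "python_bits": 64 if sys.maxsize > 2**32 else 32,
    }

config = capture_machine_config()
print(config)
```

Attaching such a snapshot to every result report removes one entire category of follow-up questions from the coordinator's workload.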
4. Time loss: Compiling, tracking, and measuring test-coverage data manually at specific intervals is time-consuming. Projects are getting larger while go-to-market time is shrinking; a project manager cannot spend time measuring how much has been achieved during testing, and a coordinator cannot always follow up with testers to get testing details. In fact, a tester is often reluctant to provide data because she may not have completed testing; she may ask for more time to report results because she is occupied with test execution. Reporting test results can become a low-priority task compared to test-execution activities. This situation delays the compilation of test results, further delaying the generation of overall test-coverage data. Without knowing the current test status, a project manager cannot make decisions to optimize the testing process, such as determining which test cases to stress and identifying redundant test cases.
5. Multiple frequencies of data reporting: Who needs test coverage data? Multiple people.
The development team is interested to know:
• What sections of the code base are tested?
• Which functional requirements are tested?
• Are all possible and logical data values used?
The testing team reviews test coverage data to determine:
• When is it time to stop testing?
• What feature requests are yet to be tested?
• Which data values are still not tested?
• Which machine configurations were used in test setup?
All these data points are needed for the project; none of these questions should be left unanswered for lack of data. Each audience has its own reasons and goals for analyzing the data.
The timing of result reporting also varies by audience. Project managers might require data at specific milestones, whereas a test manager might need data more frequently in order to shift test resources and re-prioritize testing areas and goals.
6. Incomplete test coverage: Manually compiling test-coverage data from multiple testers often results in incomplete or incorrect test results. During black-box testing, a tester often executes a substantial number of ad-hoc test cases, steps, and scenarios in addition to the planned test cases. These ad-hoc test cases are accounted for only when their steps result in abnormal behavior or application failures; if they cause no failure, they are not considered at all. Although executing ad-hoc cases requires a tester's time and effort, such test cases are not documented and therefore are not captured in the test-coverage data.
This loss of information is critical: if the steps and data values used during ad-hoc testing are not captured, a significant testing effort goes unaccounted for.
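One way to keep ad-hoc steps from vanishing is to record every executed action as an event, whether or not it triggers a failure. A minimal sketch (hypothetical action names, in Python):

```python
import json
import time

class StepRecorder:
    """Appends every executed step (planned or ad-hoc) to an event log,
    so ad-hoc coverage is captured even when nothing fails."""

    def __init__(self):
        self.events = []

    def step(self, action, planned=True, **data):
        self.events.append({
            "time": time.time(),
            "action": action,
            "planned": planned,
            "data": data,
        })

recorder = StepRecorder()
# Planned test case:
recorder.step("launch_installer", build="1234")
# Ad-hoc exploration -- captured even though it causes no failure:
recorder.step("cancel_mid_install", planned=False, screen="progress")

adhoc = [e for e in recorder.events if not e["planned"]]
print(json.dumps(adhoc, indent=2))
```

Because recording happens at execution time rather than at reporting time, the ad-hoc effort shows up in the coverage data whether or not a defect was found.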
There is no easy way to solve these problems effectively, and no optimized method by which the data is automatically measured, tracked, and analyzed without monitoring what each tester is executing.
What is Analytics?
Analytics is the process of measuring, recording, tracking, and analyzing data to study real-time patterns in the usage of an application or a workflow. Key players in this field include Google Analytics and Yahoo Analytics, among many others.
Currently, significant work is being done in the field of analytics, especially web analytics. Website owners and content providers measure and study website-usage patterns and user behavior on their sites; this data helps publishers decide on the appropriate selection and placement of advertisements. This paper attempts to apply analytics to an area where it has not yet been explored: we use analytics to identify ways to increase the testing efficiency of a product and an organization.
Measuring test coverage is difficult and time-consuming when the software executable (build) is assigned to multiple black-box and white-box testers. It is especially hard to measure what is being covered in black-box testing, and harder still to know, with empirical and substantial evidence, that the software is well tested. To achieve efficiency, a test manager must be aware of, and capable of tracking, every test case and the total coverage achieved by all the testers (inside and outside the team).
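In outline, the analytics approach instruments the module so that every exercised area emits an event, and a collector aggregates those events across testers in real time. The sketch below keeps the collector in-process for illustration; a real deployment would post events to an analytics endpoint, and all area names here are hypothetical:

```python
from collections import Counter

class CoverageCollector:
    """Aggregates 'area exercised' events from many testers."""

    def __init__(self):
        self.hits = Counter()

    def report(self, tester, area):
        # In a real system this would arrive over the network.
        self.hits[area] += 1

# Installer-side hook: called wherever an area of the module runs.
def track(collector, tester, area):
    collector.report(tester, area)

collector = CoverageCollector()
# Two testers exercising (hypothetical) installer areas:
track(collector, "tester_a", "eula_screen")
track(collector, "tester_a", "custom_install_path")
track(collector, "tester_b", "eula_screen")

# The manager sees, on the fly, which areas are hot and which are cold:
print(collector.hits.most_common())
```

Because events are aggregated as testing happens, no coordinator has to chase testers for status, and the coverage picture is always current.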
This article focuses on applying the analytics approach to an application module (in our case, an installer technology): tracking, measuring, and analyzing test coverage based on the real-time actions of the testers. These testers are mostly black-box testers who exercise the installer as part of their day-to-day testing activities, which comprise certifying features and the product as a whole while covering user-specified explicit and usage-oriented implicit requirements. These measurements are then studied to achieve the following goals:
1. Determine a quantitative measure of test coverage, which is imperative to measure quality.
2. Find untested areas of the software.
3. Identify redundant test cases that do not increase test coverage.
Achieving these goals results in the following action items for test managers:
1. Re-prioritize the test modules and test areas.
2. Reduce the count of redundant test cases.
3. Create additional cases to increase coverage.
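Goals 2 and 3 reduce to simple set operations once each test case's coverage is recorded as a set of exercised areas. A sketch, assuming such per-case data has been collected by the analytics instrumentation (all area and test-case names are hypothetical):

```python
# Per-test-case coverage, e.g. collected by the analytics instrumentation.
coverage = {
    "TC1": {"eula_screen", "default_install"},
    "TC2": {"eula_screen"},            # subset of TC1 -> redundant
    "TC3": {"custom_install_path"},
}
all_areas = {"eula_screen", "default_install",
             "custom_install_path", "uninstall"}

tested = set().union(*coverage.values())
untested = all_areas - tested          # goal 2: untested areas

# goal 3: cases whose coverage is wholly contained in another case's
redundant = [
    tc for tc, areas in coverage.items()
    if areas and any(other != tc and areas <= coverage[other]
                     for other in coverage)
]

print("untested:", untested)
print("redundant:", redundant)
```

With real coverage data in this form, a test manager can re-prioritize areas, prune redundant cases, and direct new test cases at the untested set, which are exactly the action items above.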