METRICS FOR SOFTWARE TEST PLANNING ATTRIBUTES


SOFTWARE TESTING

This chapter explains the concept of software testing along with a discussion of the different levels of testing.

The Notion of Software Testing

Software testing is an evaluation process for determining the presence of errors in computer software. Testing can never show software to be completely error-free, because exhaustive testing is rarely possible given time and resource constraints. Testing is fundamentally a comparison activity: the software is subjected to different probing inputs and its behavior for each input is evaluated against the expected outcome. Testing is the dynamic analysis of the product [18], meaning that the testing activity probes the software for faults and failures while it is actually executed. This sets it apart from static code analysis, in which analysis is performed without executing the program. As [1] points out, if you don’t execute the code to uncover possible damage, you are not a tester. The following are some established definitions of software testing:
• Testing is the process of executing programs with the intention of finding errors [2].
• A successful test is one that uncovers an as-yet-undiscovered error [2].
• Testing can show the presence of bugs but never their absence [3].
• The underlying motivation of program testing is to affirm software quality with methods that can be economically and effectively applied to both large-scale and small-scale systems [4].
• Testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item [5].
• Testing is a concurrent lifecycle process of engineering, using and maintaining testware (i.e., testing artifacts) in order to measure and improve the quality of the software being tested [6].
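As a minimal illustration of the comparison activity described above (a constructed example; apply_discount is a hypothetical unit under test, not taken from the cited sources), the software is probed with specific inputs and the actual outcome of each is compared against the expected one:

    # Hypothetical unit under test: reduce a price by a given percentage.
    def apply_discount(price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount() -> None:
        # Each case pairs a probing input with its expected outcome.
        cases = [((100.0, 10.0), 90.0), ((80.0, 0.0), 80.0), ((50.0, 100.0), 0.0)]
        for (price, percent), expected in cases:
            actual = apply_discount(price, percent)
            assert actual == expected, f"{price}, {percent}: got {actual}, expected {expected}"

    if __name__ == "__main__":
        test_apply_discount()
        print("all comparisons passed")

In Myers’ sense [2], a test that makes such a comparison fail is a successful one: it has uncovered an as-yet-undiscovered error.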
Software testing is one element of the broader topic called verification and validation (V&V). Software verification and validation uses reviews, analysis and testing techniques to determine whether a software system and its intermediate products fulfill the expected fundamental capabilities and quality attributes [7].
There are some pre-established principles about testing software. Firstly, testing is a process that confirms the existence of quality; it does not establish quality. Quality is the overall responsibility of the project team members and is established through the right combination of methods and tools, effective management, reviews and measurements. [8] quotes Brian Marick, a software testing consultant, as saying that the first mistake people make is thinking the testing team is responsible for assuring quality. Secondly, the prime objective of testing is to discover faults that prevent the software from meeting customer requirements. Moreover, testing requires planning and designing of test cases, and the testing effort should focus on the areas that are most error prone. The testing process progresses incrementally from the component level to the system level, and exhaustive testing is rarely possible due to the combinatorial nature of software [8].

Test Levels

During the software development lifecycle, testing is performed at several stages as the software evolves component by component. Reaching a defined stage in the development of the software calls for testing the capabilities developed so far. (Test-driven development takes a different approach, in which tests are written first and functionality is developed to satisfy those tests, as sketched below.) Testing at these defined stages is termed test levels, and these levels progress from individual units to the integration of units into larger components. Simple projects may consist of only one or two levels of testing while complex projects may have more [6]. Figure 2 depicts the traditional waterfall model with added testing levels.
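As a rough sketch of the test-driven rhythm mentioned above (a constructed example; leap_year and its cases are illustrative assumptions), the test is written first and fails, after which just enough functionality is implemented to make it pass:

    import unittest

    def leap_year(year: int) -> bool:
        # Written only after the test below existed and failed.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        # Written first, driving the implementation above.
        def test_examples(self):
            self.assertTrue(leap_year(2000))   # divisible by 400
            self.assertFalse(leap_year(1900))  # divisible by 100 but not 400
            self.assertTrue(leap_year(1996))
            self.assertFalse(leap_year(1999))

    if __name__ == "__main__":
        unittest.main()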
The identifiable levels of testing in the V-model are unit testing, integration testing, system testing and acceptance testing. The V-model of testing is criticized for relying on the timely availability of complete and accurate development documentation, for deriving tests from a single document, and for executing all the tests together. Despite this criticism, it is the most familiar model [1] and provides a basis for a consistent testing terminology.
Unit testing finds bugs in the internal processing logic and data structures of individual modules by testing them in an isolated environment [9]. Unit testing uses the component-level design description as a guide [8]. Unit testing a module requires the creation of stubs and drivers, as shown in Figure 3 below.
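The stub-and-driver arrangement of Figure 3 can be sketched as follows (hypothetical names; invoice_total and its collaborators are illustrative assumptions). The driver plays the role of the missing caller and exercises the unit, while the stub stands in for a lower-level module the unit depends on, so the unit runs in isolation:

    # Hypothetical unit under test: in production, the tax rate would come
    # from a separate, possibly not-yet-available module.
    def invoice_total(amount: float, tax_lookup) -> float:
        return round(amount * (1 + tax_lookup()), 2)

    # Stub: replaces the real tax module with a fixed, predictable answer.
    def tax_lookup_stub() -> float:
        return 0.25

    # Driver: stands in for the missing caller and checks the outcomes.
    def driver() -> None:
        assert invoice_total(100.0, tax_lookup_stub) == 125.0
        assert invoice_total(0.0, tax_lookup_stub) == 0.0
        print("unit test passed")

    if __name__ == "__main__":
        driver()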
According to IEEE Standard 1008-1987, unit testing activities consist of planning the general approach, resources and schedule; determining the features to be tested; refining the general plan; designing the set of tests; implementing the refined plan and design; executing the test procedure; checking for termination; and evaluating the test effort and the unit [10].
As the individual modules are integrated, there are chances of finding bugs related to the interfaces between modules: the integrated modules might not provide the desired function, data can be lost across interfaces, imprecision in calculations may be magnified, and interfacing faults might not be detected by unit testing [9]. These faults are identified by integration testing. The approaches used for integration testing include incremental integration (top-down and bottom-up) and big-bang integration. Figure 4 and Figure 5 below show the bottom-up and top-down integration strategies respectively [19].
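A single bottom-up integration step might look like the following sketch (parse_record and format_price are hypothetical, already unit-tested modules). The test exercises the two units together and targets their shared interface, where data could be lost or misinterpreted even though each unit passed in isolation:

    # Lower-level module: parses a "name; cents" record line.
    def parse_record(line: str) -> dict:
        name, price = line.split(";")
        return {"name": name.strip(), "price_cents": int(price)}

    # Higher-level module: renders a parsed record for display.
    def format_price(record: dict) -> str:
        cents = record["price_cents"]
        return f"{record['name']}: {cents // 100}.{cents % 100:02d}"

    def test_integration() -> None:
        # Exercised together: the record dict is the interface under test.
        assert format_price(parse_record("tea; 150")) == "tea: 1.50"
        # Amounts below one unit must keep their leading zero.
        assert format_price(parse_record("gum; 5")) == "gum: 0.05"
        print("integration test passed")

    if __name__ == "__main__":
        test_integration()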
The objective of system testing is to determine whether the software meets all of its requirements as stated in the Software Requirements Specification (SRS) document. The focus of system testing is at the requirements level.
As part of system and integration testing, regression testing is performed to determine whether the software still meets its requirements after changes have been made to it [9].
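A minimal regression sketch (normalize is an assumed function, and the fault history around it is invented for illustration) keeps a test for a previously fixed fault in the suite, so that re-running the suite after every change guards against the fault silently reappearing:

    import unittest

    def normalize(text: str) -> str:
        # Hypothetical history: an earlier version crashed on empty input;
        # the guard below was the fix.
        if not text:
            return ""
        return " ".join(text.split()).lower()

    class RegressionSuite(unittest.TestCase):
        def test_original_requirement(self):
            self.assertEqual(normalize("  Hello   World "), "hello world")

        def test_fixed_fault_stays_fixed(self):
            # Re-run after every change to pin the repaired behaviour.
            self.assertEqual(normalize(""), "")

    if __name__ == "__main__":
        unittest.main()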
Acceptance testing is normally done by the end customers, or with customers partially involved. It normally involves selecting tests from the system testing phase or tests defined by the users [9].

SOFTWARE TESTING LIFECYCLE

The software testing process is composed of different activities, and various authors have grouped these activities in different ways. This chapter describes the different software testing activities as parts of a software testing lifecycle by analyzing the relevant literature. After comparing the similarities between the testing activities proposed by various authors, the key phases of the software testing lifecycle are described.


The Need for a Software Testing Lifecycle

There are different test case design methods in practice today. These methods need to be part of a well-defined series of steps to ensure successful and effective software testing. Such a systematic way of conducting testing saves time and effort and increases the probability that more faults are caught [8]. These steps highlight when the different testing activities are to be planned, i.e., the effort, time and resource requirements, the criteria for ending testing, the means to report errors, and the evaluation of collected data.
There are some common characteristics inherent to the testing process which must be kept in mind. These characteristics are generic, irrespective of the test case design method chosen. Firstly, prior to the commencement of testing, formal technical reviews should be carried out to eliminate many faults early in the project lifecycle. Secondly, testing progresses from a smaller scope at the component level to a much broader scope at the system level, and while moving from the component level to the complete system, different testing techniques are applicable at specific points in time. Also, the testing personnel can be either the software developers or part of an independent testing group. The components or units of the system are tested by the software developers to ensure they behave as expected, and developers might also perform the integration testing [8] that leads to the construction of the complete software architecture. The independent testing group becomes involved after this stage, at the validation/system testing level. One last characteristic to bear in mind is that testing and debugging are two different activities: testing is the process that confirms the presence of faults, while debugging is the process that locates and corrects those faults. In other words, debugging is the fixing of faults discovered by testing. Figure 6 shows the testing and debugging cycles side by side [20].

Table of Contents

1 INTRODUCTION
1.1 BACKGROUND
1.2 PURPOSE
1.3 AIMS AND OBJECTIVES
1.4 RESEARCH QUESTIONS
1.4.1 Relationship between Research Questions and Objectives
1.5 RESEARCH METHODOLOGY
1.5.1 Threats to Validity
1.6 THESIS OUTLINE
2 SOFTWARE TESTING
2.1 THE NOTION OF SOFTWARE TESTING
2.2 TEST LEVELS
3 SOFTWARE TESTING LIFECYCLE
3.1 THE NEED FOR A SOFTWARE TESTING LIFECYCLE
3.2 EXPECTATIONS OF A SOFTWARE TESTING LIFECYCLE
3.3 SOFTWARE TESTING LIFECYCLE PHASES
3.4 CONSOLIDATED VIEW OF SOFTWARE TESTING LIFECYCLE
3.5 TEST PLANNING
3.6 TEST DESIGN
3.7 TEST EXECUTION
3.8 TEST REVIEW
3.9 STARTING/ENDING CRITERIA AND INPUT REQUIREMENTS FOR SOFTWARE TEST PLANNING AND TEST DESIGN PROCESSES
4 SOFTWARE MEASUREMENT
4.1 MEASUREMENT IN SOFTWARE ENGINEERING
4.2 BENEFITS OF MEASUREMENT IN SOFTWARE TESTING
4.3 PROCESS MEASURES
4.4 A GENERIC PREDICTION PROCESS
5 ATTRIBUTES FOR SOFTWARE TEST PLANNING PROCESS
5.1 PROGRESS
5.1.1 The Suspension Criteria for Testing
5.1.2 The Exit Criteria
5.1.3 Scope of Testing
5.1.4 Monitoring of Testing Status
5.1.5 Staff Productivity
5.1.6 Tracking of Planned and Unplanned Submittals
5.2 COST
5.2.1 Testing Cost Estimation
5.2.2 Duration of Testing
5.2.3 Resource Requirements
5.2.4 Training Needs of Testing Group and Tool Requirement
5.3 QUALITY
5.3.1 Test Coverage
5.3.2 Effectiveness of Smoke Tests
5.3.3 The Quality of Test Plan
5.3.4 Fulfillment of Process Goals
5.4 IMPROVEMENT TRENDS
5.4.1 Count of Faults Prior to Testing
5.4.2 Expected Number of Faults
5.4.3 Bug Classification
6 ATTRIBUTES FOR SOFTWARE TEST DESIGN PROCESS
6.1 PROGRESS
6.1.1 Tracking Testing Progress
6.1.2 Tracking Testing Defect Backlog
6.1.3 Staff Productivity
6.2 COST
6.2.1 Cost Effectiveness of Automated Tool
6.3 SIZE
6.3.1 Estimation of Test Cases
6.3.2 Number of Regression Tests
6.3.3 Tests to Automate
6.4 STRATEGY
6.4.1 Sequence of Test Cases
6.4.2 Identification of Areas for Further Testing
6.4.3 Combination of Test Techniques
6.4.4 Adequacy of Test Data
6.5 QUALITY
6.5.1 Effectiveness of Test Cases
6.5.2 Fulfillment of Process Goals
6.5.3 Test Completeness
7 METRICS FOR SOFTWARE TEST PLANNING ATTRIBUTES
7.1 METRICS SUPPORT FOR PROGRESS
7.1.1 Measuring Suspension Criteria for Testing
7.1.2 Measuring the Exit Criteria
7.1.3 Measuring Scope of Testing
7.1.4 Monitoring of Testing Status
7.1.5 Staff Productivity
7.1.6 Tracking of Planned and Unplanned Submittals
7.2 METRIC SUPPORT FOR COST
7.2.1 Measuring Testing Cost Estimation, Duration of Testing and Testing Resource Requirements
7.2.2 Measuring Training Needs of Testing Group and Tool Requirement
7.3 METRIC SUPPORT FOR QUALITY
7.3.1 Measuring Test Coverage
7.3.2 Measuring Effectiveness of Smoke Tests
7.3.3 Measuring the Quality of Test Plan
7.3.4 Measuring Fulfillment of Process Goals
7.4 METRIC SUPPORT FOR IMPROVEMENT TRENDS
7.4.1 Count of Faults Prior to Testing and Expected Number of Faults
7.4.2 Bug Classification
8 METRICS FOR SOFTWARE TEST DESIGN ATTRIBUTES
8.1 METRIC SUPPORT FOR PROGRESS
8.1.1 Tracking Testing Progress
8.1.2 Tracking Testing Defect Backlog
8.1.3 Staff Productivity
8.2 METRIC SUPPORT FOR QUALITY
8.2.1 Measuring Effectiveness of Test Cases
8.2.2 Measuring Fulfillment of Process Goals
8.2.3 Measuring Test Completeness
8.3 METRIC SUPPORT FOR COST
8.3.1 Measuring Cost Effectiveness of Automated Tool
8.4 METRIC SUPPORT FOR SIZE
8.4.1 Estimation of Test Cases
8.4.2 Number of Regression Tests
8.4.3 Tests to Automate
8.5 METRIC SUPPORT FOR STRATEGY
8.5.1 Sequence of Test Cases
8.5.2 Measuring Identification of Areas for Further Testing
8.5.3 Measuring Combination of Testing Techniques
8.5.4 Measuring Adequacy of Test Data
9 EPILOGUE
9.1 RECOMMENDATIONS
9.2 CONCLUSIONS
9.3 FURTHER WORK
9.3.1 Metrics for Software Test Execution and Test Review Phases
9.3.2 Metrics Pertaining to Different Levels of Testing
9.3.3 Integration of Metrics in Effective Software Metrics Program
9.3.4 Data Collection for Identified Metrics
9.3.5 Validity and Reliability of Measurements
9.3.6 Tool Support for Metric Collection and Analysis
TERMINOLOGY
REFERENCES
