Page 184 - DCAP405_SOFTWARE_ENGINEERING
Unit 11: Testing Strategies
Explain debugging
Scan the testing tools
Discuss the testing activities for conventional and object-oriented software
Introduction
Dear students, it should be clear that the philosophy behind testing is to find errors, and test
cases are devised with this purpose in mind. A test case is a set of data that the system will
process as normal input; however, the data are chosen with the express intent of determining
whether the system will process them correctly. For example, test cases for inventory handling
should include situations in which the quantities to be withdrawn from inventory exceed, equal,
and are less than the actual quantities on hand. Each test case is designed with the intent of
finding errors in the way the system will process it. There are two general strategies for testing
software: code testing and specification testing. In code testing, the analyst develops test cases
to execute every instruction and path in a program. Under specification testing, the analyst
examines the program specifications and then writes test data to determine how the program
operates under specific conditions. Regardless of which strategy the analyst follows, there are
preferred practices to ensure that the testing is useful. The levels of tests and types of test data,
combined with testing libraries, are important aspects of the actual test process.
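The inventory example above can be made concrete with a few test cases. The `withdraw` function below is hypothetical, invented purely to illustrate the three situations named: withdrawals that are less than, equal to, and in excess of the quantity on hand.

```python
def withdraw(on_hand, quantity):
    """Withdraw `quantity` items from a stock of `on_hand` items.

    Returns the remaining stock; a withdrawal that exceeds the
    quantity on hand is rejected.
    """
    if quantity > on_hand:
        raise ValueError("insufficient stock")
    return on_hand - quantity


# Three test cases, one for each situation the text describes.
assert withdraw(10, 3) == 7      # withdrawal less than quantity on hand
assert withdraw(10, 10) == 0     # withdrawal exactly equal: stock reaches zero
try:
    withdraw(10, 11)             # withdrawal exceeds stock: must be rejected
    assert False, "expected ValueError"
except ValueError:
    pass
```

Each case is chosen not to confirm that the function works, but to probe a boundary where an error is likely to hide.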
11.1 Software Testing
Software testing is the process of executing a program or system with the intent of finding
errors. Alternatively, it is any activity aimed at evaluating an attribute or capability of a program
or system and determining that it meets its required results. Software is not unlike other physical
processes where inputs are received and outputs are produced. Where software differs is in the
manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways.
By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes
for software is generally infeasible.
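As a small illustration of how software fails in ways no physical analogy predicts, consider this hypothetical function: it behaves correctly over an enormous input space, yet fails outright on one specific input that testing can easily miss.

```python
def average(values):
    """Mean of a list of numbers."""
    return sum(values) / len(values)


# The function works across vast ranges of input...
assert average([2, 4, 6]) == 4

# ...yet a single overlooked input triggers an abrupt failure
# mode: division by zero on the empty list.
try:
    average([])
    assert False, "expected ZeroDivisionError"
except ZeroDivisionError:
    pass
```

Unlike a physical part, the function does not degrade gradually as inputs approach the bad case; it is perfectly correct for every non-empty list and broken for exactly one.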
Unlike most physical systems, most of the defects in software are design errors, not manufacturing
defects. Software does not suffer from corrosion or wear and tear; generally it will not change
until it is upgraded or becomes obsolete. So once the software is shipped, the design defects, or
bugs, will lie buried in the code and remain latent until activated.
Software bugs will almost always exist in any software module of moderate size: not because
programmers are careless or irresponsible, but because the complexity of software is generally
intractable, and humans have only a limited ability to manage complexity. It is also true that for
any complex system, design defects can never be completely ruled out.
Discovering the design defects in software is equally difficult, for the same reason of complexity.
Because software and other digital systems are not continuous, testing boundary values is not
sufficient to guarantee correctness. In principle, all possible input values would need to be tested
and verified, but complete testing is infeasible. Exhaustively testing even a simple program that
adds two 32-bit integer inputs (yielding 2^64 distinct test cases) would take millions of years,
even if tests were performed at a rate of thousands per second. Obviously, for a realistic software
module, the complexity can be far beyond this example. If inputs from the real world are
involved, the problem gets worse, because timing, unpredictable environmental effects, and
human interactions are all possible input parameters that must be considered.
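The infeasibility claim above is simple arithmetic to check. The test rate below is an assumption (10,000 tests per second, matching "thousands per second" in the text); only the order of magnitude matters.

```python
# Back-of-envelope check of the exhaustive-testing claim.
cases = 2 ** 64                      # every pair of 32-bit inputs
rate = 10_000                        # assumed: 10,000 tests per second
seconds_per_year = 365 * 24 * 3600

years = cases / rate / seconds_per_year
print(f"{cases} test cases -> roughly {years:,.0f} years")
```

Even at this optimistic rate, the run takes on the order of tens of millions of years, so exhaustive testing is ruled out for all but trivial input spaces.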
A further complication has to do with the dynamic nature of programs. If a failure occurs during
preliminary testing and the code is changed, the software may now work for a test case that it
LOVELY PROFESSIONAL UNIVERSITY 177