Software testing
Software testing is the act of checking whether software meets its intended objectives and satisfies expectations.
Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor or any other stakeholder.
Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios. It cannot find all bugs.
Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
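For illustration, the sketch below uses a comparable product as its oracle: a hypothetical fast_sort() routine (the function under test) is checked against Python's built-in sorted(); every name here is illustrative rather than drawn from a real project.

```python
# A minimal sketch of an oracle-based check: the hypothetical
# fast_sort() is the function under test, and Python's built-in
# sorted() plays the role of the oracle (a "comparable product").
import random

def fast_sort(items):          # hypothetical function under test
    return sorted(items)       # stand-in implementation for the sketch

def test_against_oracle(trials=1000):
    for _ in range(trials):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        expected = sorted(data)      # the oracle's verdict
        actual = fast_sort(data)     # the behavior under test
        assert actual == expected, f"mismatch for input {data}"

test_against_oracle()
print("all oracle checks passed")
```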
Software testing can be functional or non-functional in nature.
Software testing is often dynamic in nature: running the software to verify that the actual output matches the expected output. It can also be static in nature: reviewing code and its associated documentation.
Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do?
Information learned from software testing may be used to improve the process by which software is developed.
A commonly suggested approach to automated testing is the "test pyramid," wherein most of the tests are unit tests, followed by a smaller set of integration tests and finally a few end-to-end tests.
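As a rough sketch of the pyramid's wide base, the following unit test exercises a single hypothetical add() function in isolation with Python's standard unittest module.

```python
# A minimal unit test: one small function checked in isolation,
# using Python's standard unittest module. add() is a hypothetical
# unit under test, invented for this sketch.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```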
Economics
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Goals
Software testing is typically goal driven.
Finding bugs
Software testing typically includes handling software bugs, defects in the code that cause undesirable results. Bugs generally slow testing progress and involve programmer assistance to debug and fix.
Not all defects cause a failure. For example, a defect in dead code will not be considered a failure.
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.
A single defect may result in multiple failure symptoms.
Ensuring requirements are satisfied
Software testing may reveal a requirements gap: the omission of a requirement from the design. Requirement gaps are often non-functional requirements such as testability, scalability, maintainability, performance, and security.
Code coverage
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions is not feasible, even with a simple product.
Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (usability, scalability, performance, compatibility, and reliability) can be subjective; something that constitutes sufficient value to one person may not to another.
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.
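One such technique is all-pairs (pairwise) testing. The sketch below, assuming a hypothetical product with three configuration parameters, greedily selects combinations from the full cartesian product until every pair of parameter values is covered; real pairwise tools use more sophisticated algorithms.

```python
# A rough sketch of pairwise (all-pairs) test selection for an
# invented configuration space. It greedily picks combinations from
# the full cartesian product until every pair of parameter values
# appears in at least one selected test.
from itertools import combinations, product

parameters = {                      # hypothetical configuration space
    "os": ["linux", "windows", "macos"],
    "browser": ["chrome", "firefox"],
    "locale": ["en", "de", "ja"],
}

names = list(parameters)
all_tests = [dict(zip(names, values)) for values in product(*parameters.values())]

def pairs_of(test):
    """All (parameter, value) pairs covered by one test combination."""
    return {frozenset([(a, test[a]), (b, test[b])])
            for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs_of(t) for t in all_tests))
selected = []
while uncovered:
    # Pick the combination that covers the most still-uncovered pairs.
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    selected.append(best)
    uncovered -= pairs_of(best)

print(f"{len(selected)} tests cover all pairs (vs {len(all_tests)} exhaustive)")
```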
Categories
Testing can be categorized many ways.
Automated testing
Levels
Software testing can be categorized into levels based on how much of the software system is the focus of a test.
Unit testing
Integration testing
System testing
Static, dynamic, and passive testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.
Static testing is often implicit, like proofreading; in addition, programming tools and text editors check source code structure, and compilers check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code that are applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
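As a rough illustration of the stub technique, the sketch below tests a hypothetical checkout() function before the payment module it depends on is complete, substituting a hand-written stub that returns canned responses.

```python
# A minimal sketch of dynamic testing with a stub: checkout() is the
# code under test, while the payment gateway it depends on is not yet
# implemented and is replaced by a hand-written stub. All names here
# are hypothetical.
def checkout(cart_total, gateway):
    """Charges the total and reports whether the purchase succeeded."""
    response = gateway.charge(cart_total)
    return response["status"] == "approved"

class PaymentGatewayStub:
    """Stands in for the unfinished payment module with canned replies."""
    def charge(self, amount):
        if amount <= 0:
            return {"status": "declined"}
        return {"status": "approved"}

assert checkout(25.00, PaymentGatewayStub()) is True
assert checkout(0.00, PaymentGatewayStub()) is False
print("checkout behaves as expected against the stub")
```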
Static testing involves verification, whereas dynamic testing also involves validation.
Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions. This is related to offline runtime verification and log analysis.
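A minimal sketch of passive testing, assuming an invented log format and failure signature: no test data is supplied; an existing log is simply mined for a suspicious pattern.

```python
# A rough sketch of passive testing: no inputs are sent to the system;
# instead, an existing log is mined for suspicious patterns. The log
# format and the failure signature are assumptions for illustration.
import re

log_lines = [
    "2024-05-01 10:00:01 INFO  request handled in 12ms",
    "2024-05-01 10:00:02 ERROR timeout talking to inventory service",
    "2024-05-01 10:00:03 INFO  request handled in 9ms",
    "2024-05-01 10:00:07 ERROR timeout talking to inventory service",
]

failure_signature = re.compile(r"ERROR\s+timeout")
hits = [line for line in log_lines if failure_signature.search(line)]

print(f"found {len(hits)} timeout events worth investigating:")
for line in hits:
    print(" ", line)
```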
Exploratory
Preset testing vs adaptive testing
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing) or whether each input applied to the IUT can depend dynamically on the outputs obtained during the application of the previous tests (adaptive testing).
Black/white box
Software testing can often be divided into white-box and black-box testing. These two approaches describe the point of view the tester takes when designing test cases. A hybrid approach called grey-box testing, which includes aspects of both, may also be applied.
White-box testing
White-box testing verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing.
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
- API testing – testing of the application using public and private APIs
- Code coverage – creating tests to satisfy some criteria of code coverage
- Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods – deliberately introducing small changes (mutants) into the code to check that the test suite detects them (see the sketch after this list)
- Static testing methods
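To make the mutation testing item above concrete, here is a toy sketch: a single mutant changes a boundary operator in a hypothetical is_minor() function, and a test suite that is sensitive to the boundary "kills" it. Real mutation tools generate and run many such mutants automatically.

```python
# A toy sketch of mutation testing: a mutant of the function under
# test replaces "<" with "<=", and a good test suite should kill
# (fail on) the mutant. Both functions are hypothetical.
def is_minor(age):            # original code under test
    return age < 18

def is_minor_mutant(age):     # mutant: boundary operator changed
    return age <= 18

def suite(fn):
    """Returns True if fn passes every test in the suite."""
    return fn(17) is True and fn(18) is False and fn(30) is False

assert suite(is_minor)              # the original passes the suite
assert not suite(is_minor_mutant)   # the mutant is killed at the boundary
print("the test suite kills the boundary mutant")
```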
100% statement coverage ensures that every statement in the code is executed at least once, though it does not guarantee that all branches or paths are exercised. Coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
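The following small example, using a hypothetical safe_ratio() function, shows why full statement coverage is not sufficient: one test executes every statement, yet a divide-by-zero bug survives for other inputs.

```python
# A small illustration of why statement coverage alone is not
# sufficient: one test executes every statement of safe_ratio(), yet a
# divide-by-zero bug remains for other inputs. Names are hypothetical.
def safe_ratio(a, b):
    return a / b          # bug: no guard against b == 0

def test_ratio():
    assert safe_ratio(6, 3) == 2   # executes 100% of the statements

test_ratio()                        # passes, coverage is "complete"...
# ...but safe_ratio(1, 0) would still raise ZeroDivisionError.
```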
Black-box testing
Black-box testing describes designing test cases without knowledge of the implementation and without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value either is or is not the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. They use external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
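As a sketch of two of the black-box methods listed above, the tests below are derived from an invented specification ("grade(score) returns 'pass' for scores of 50 to 100 and 'fail' for 0 to 49") using equivalence partitioning and boundary value analysis; the implementation is treated as opaque.

```python
# A minimal black-box sketch combining equivalence partitioning and
# boundary value analysis for an invented specification. Tests are
# derived purely from the spec, not from the implementation.
def grade(score):                 # implementation treated as a black box
    return "pass" if score >= 50 else "fail"

# One representative per equivalence class, plus the boundaries.
cases = [
    (0, "fail"),     # lower boundary of the failing partition
    (25, "fail"),    # representative of the failing partition
    (49, "fail"),    # just below the pass boundary
    (50, "pass"),    # boundary of the passing partition
    (75, "pass"),    # representative of the passing partition
    (100, "pass"),   # upper boundary
]

for score, expected in cases:
    assert grade(score) == expected, f"grade({score}) != {expected}"
print("all specification-derived cases pass")
```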
Black-box testing can be applied at any level of testing, although it is usually not used at the unit level.
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
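A rough sketch of the idea, with invented units and an invented validity range: each "message packet" crossing the interface is timestamped into a log and checked for type and range before being handed to the next unit.

```python
# A rough sketch of component interface testing: the "message packets"
# passed from one unit to another are logged with timestamps and
# checked against an expected type and range before being handed on.
# The units and the valid range are hypothetical.
import time

interface_log = []   # stands in for a separate log file of passed data

def pass_between_units(packet):
    """Validates and records a value flowing from unit A to unit B."""
    interface_log.append((time.time(), packet))
    if not isinstance(packet, int) or not (0 <= packet <= 1000):
        raise ValueError(f"out-of-range packet at interface: {packet!r}")
    return packet

def unit_a_outputs():
    return [10, 999, 0]          # includes extreme but legal values

for value in unit_a_outputs():
    pass_between_units(value)    # unit B would consume the result

print(f"logged {len(interface_log)} packets across the interface")
```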