Software verification and validation
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification asks: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" Software validation, on the other hand, asks: "Was X what we should have built? Does X meet the high-level requirements?"
Definitions
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference as:
- Verification: Are we building the product right?
- Validation: Are we building the right product?
Building the product right implies the use of the Requirements Specification as input for the next phase of the development process, the design process, whose output is the Design Specification. It also implies the use of the Design Specification to feed the construction process. Each time the output of a process correctly implements its input specification, the software product moves one step closer to final verification; if the output of a process is incorrect, the developers have not correctly implemented some component of that process. This kind of verification is called "artifact or specification verification".
Software verification
Taken literally, software verification would imply checking that the specifications are met by running the software, but this is not possible; one can conclude whether the specifications are met only by reviewing the software's associated artifacts.
Artifact or specification verification
The output of each software development process stage can also be subject to verification when checked against its input specification.
Examples of artifact verification:
- Of the design specification against the requirement specification: Do the architectural design, detailed design and database logical model specifications correctly implement the functional and non-functional requirements specifications?
- Of the construction artifacts against the design specification: Do the source code, user interfaces and database physical model correctly implement the design specification?
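In practice, this kind of artifact verification is often supported by a traceability check: every item in the input specification must be covered by at least one element of the output artifact. Below is a minimal sketch; the requirement IDs, design element names, and the `check_traceability` helper are all hypothetical, chosen only to illustrate the idea.

```python
# Traceability check: verify that every requirement in the requirements
# specification is implemented by at least one design element.
# All IDs and structures here are hypothetical, for illustration only.

requirements_spec = {"REQ-001", "REQ-002", "REQ-003"}

design_spec = {
    "DES-ARCH-01": {"implements": {"REQ-001"}},
    "DES-DB-02": {"implements": {"REQ-002", "REQ-003"}},
}

def check_traceability(requirements, design):
    """Return the set of requirements not covered by any design element."""
    covered = set()
    for element in design.values():
        covered |= element["implements"]
    return requirements - covered

uncovered = check_traceability(requirements_spec, design_spec)
if uncovered:
    print("verification gap, uncovered requirements:", sorted(uncovered))
else:
    print("every requirement is traced to at least one design element")
```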
Software validation
Software validation checks that the software product satisfies the intended use and the needs of its stakeholders, usually by exercising the running software. However, it is also possible to perform internal static tests to find out whether the software meets the requirements specification; strictly speaking, that falls into the scope of static verification, because the software is not running.
Artifact or specification validation
Requirements should be validated before the software product as a whole is ready.
Examples of artifact validation:
- User Requirements Specification validation: User requirements, as stated in a document called the User Requirements Specification, are validated by checking whether they indeed represent the will and goals of the stakeholders. This can be done by interviewing the stakeholders and asking them directly, or by releasing prototypes and having the users and stakeholders assess them.
- User input validation: User input is validated by checking whether the input provided by the operators or users of the software meets the domain rules and constraints.
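For user input validation, the domain rules and constraints can be encoded directly as checks that run before the input is accepted. A minimal sketch, assuming a hypothetical order form whose domain rules require a positive integer quantity and a known country code:

```python
# User input validation: reject input that violates the domain rules.
# The field names and rules below are hypothetical illustrations.

KNOWN_COUNTRIES = {"US", "DE", "JP", "BR"}

def validate_order_input(quantity_text, country):
    """Return a list of rule violations; an empty list means the input is valid."""
    errors = []
    try:
        if int(quantity_text) <= 0:
            errors.append("quantity must be a positive integer")
    except ValueError:
        errors.append("quantity must be an integer")
    if country.upper() not in KNOWN_COUNTRIES:
        errors.append("unknown country code: " + repr(country))
    return errors

print(validate_order_input("3", "DE"))   # [] -- valid input
print(validate_order_input("-1", "XX"))  # two violations reported
```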
Validation vs. verification
- Software Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.
- Software Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
In other words, software verification ensures that the output of each phase of the software development process effectively carries out what its corresponding input artifact specifies, while software validation ensures that the software product meets the needs of all the stakeholders. Software verification ensures that "you built it right" and confirms that the product, as provided, fulfills the plans of the developers. Software validation ensures that "you built the right thing" and confirms that the product, as provided, fulfills the intended use and goals of the stakeholders.
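The contrast can be made concrete with a toy example. Suppose the written specification for a pricing function says "apply a 10% discount", while the stakeholders actually wanted 15% off. The sketch below (all names and numbers are invented) passes verification against the specification yet fails validation against the stakeholders' intent:

```python
def discounted_price(price):
    """Specified behaviour: apply a 10% discount."""
    return round(price * 0.90, 2)

# Verification ("built it right"): does the product match its specification?
spec_expected = 90.0
print("verification:", "pass" if discounted_price(100.0) == spec_expected else "fail")

# Validation ("built the right thing"): does it meet the stakeholders' need?
# The stakeholders wanted a 15% discount, so the same code fails validation
# even though it passed verification; the specification itself was wrong.
stakeholder_expected = 85.0
print("validation:", "pass" if discounted_price(100.0) == stakeholder_expected else "fail")
```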
This article has used the strict or narrow definition of verification.
From a testing perspective:
- Fault – wrong or missing function in the code.
- Failure – the manifestation of a fault during execution. The software was not effective. It does not do "what" it is supposed to do.
- Malfunction – the system does not meet its specified functionality: the software uses excessive resources, has side effects (heat/voltage/vibration risks to systems, heat/current/force risks to users), is not usable, is not reliable, etc. It does not do something "how" it is supposed to do it.
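The fault/failure distinction can be shown in a few lines of code: the fault is the defective statement in the source, while the failure is its observable manifestation when execution reaches that statement with input that exposes it. The buggy `average` function below is invented purely for illustration:

```python
def average(values):
    """Intended behaviour: return the arithmetic mean of `values`."""
    # FAULT: defective code -- divides by a constant instead of len(values).
    return sum(values) / 2

# The fault manifests as a FAILURE only when execution reaches it with
# input that exposes the defect:
result, expected = average([2, 4, 6]), 4.0
if result != expected:
    print("failure observed: expected", expected, "got", result)  # got 6.0
```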
Related concepts
Within the modeling and simulation community, the definitions of verification, validation and accreditation are similar:
- M&S Verification is the process of determining that a computer model, simulation, or federation of models and simulations, together with their associated data, accurately represents the developer's conceptual description and specifications.
- M&S Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data are accurate representations of the real world from the perspective of the intended use.
- Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
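In code, the distinction can be illustrated by checking a simulation first against the analytical solution of its own conceptual model (verification) and then against real-world measurements (validation). A minimal sketch for simulated free fall; the tolerance and the "measured" value are invented stand-ins for real experimental data:

```python
import math

def simulate_fall(t_end, dt=0.001, g=9.81):
    """Euler integration of free fall: distance fallen after t_end seconds."""
    velocity, distance = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        velocity += g * dt
        distance += velocity * dt
    return distance

t = 2.0
simulated = simulate_fall(t)

# M&S verification: does the implementation match the conceptual model?
# The conceptual model has the analytical solution d = g * t^2 / 2.
analytical = 9.81 * t ** 2 / 2
print("verification:", "pass" if math.isclose(simulated, analytical, rel_tol=1e-2) else "fail")

# M&S validation: does the model match the real world for the intended use?
# The "measurement" below is a hypothetical stand-in for observed data.
measured = 19.4  # metres; real data would include air resistance etc.
print("validation gap (m):", round(abs(simulated - measured), 3))
```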
V&V methods
Formal
In mission-critical software systems, formal methods may be used to ensure the correct operation of a system. These formal methods can prove costly, however, representing as much as 80 percent of total software design cost.
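To give a flavour of what such methods do, the sketch below exhaustively explores the state space of a tiny two-process mutual-exclusion model and checks a safety property in every reachable state, in the spirit of an explicit-state model checker. Real formal methods use dedicated tools such as theorem provers and model checkers; this toy model and its encoding are assumptions made for illustration only:

```python
from collections import deque

# Toy model: two processes, each "idle", "waiting", or "critical".
# A process may enter "critical" only if the other is not already there.
# We enumerate every reachable state and check the safety property
# "never both processes in the critical section at once".

def set_at(state, i, value):
    s = list(state)
    s[i] = value
    return tuple(s)

def successors(state):
    for i in (0, 1):
        mine, other = state[i], state[1 - i]
        if mine == "idle":
            yield set_at(state, i, "waiting")
        elif mine == "waiting" and other != "critical":  # acquire the lock
            yield set_at(state, i, "critical")
        elif mine == "critical":
            yield set_at(state, i, "idle")  # release the lock

initial = ("idle", "idle")
seen, frontier = {initial}, deque([initial])
while frontier:
    state = frontier.popleft()
    assert state != ("critical", "critical"), "safety violated in " + str(state)
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
print("checked", len(seen), "reachable states; mutual exclusion holds")
```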
Independent
Independent Software Verification and Validation (ISVV) is targeted at safety-critical software systems and aims to increase the quality of software products, thereby reducing risks and costs throughout the operational life of the software. The goal of ISVV is to provide assurance that the software performs to the specified level of confidence and within its designed parameters and defined requirements.
ISVV activities are performed by independent engineering teams, not involved in the software development process, to assess the processes and the resulting products. ISVV team independence is established at three levels: financial, managerial and technical.
ISVV goes beyond the "traditional" verification and validation techniques applied by development teams. While traditional V&V aims to ensure that the software performs well against the nominal requirements, ISVV focuses on non-functional requirements such as robustness and reliability, and on conditions that can lead the software to fail.
ISVV results and findings are fed back to the development teams for correction and improvement.
History
ISVV derives from the application of IV&V to software. Early ISVV application dates back to the early 1970s, when the U.S. Army sponsored the first significant program related to IV&V, for the Safeguard Anti-Ballistic Missile System. Another example is NASA's IV&V Program, which was established in 1993.
By the end of the 1970s, IV&V was rapidly becoming popular. The constant increase in the complexity, size and importance of software led to increasing demand for IV&V applied to software.
Since then, IV&V has consolidated and is now widely used by organizations such as the DoD, FAA, NASA and ESA. IV&V is mentioned in DO-178B and ISO/IEC 12207, and is formalized in IEEE 1012.