Functional verification
Functional verification is the task of verifying that the logic design conforms to its specification. It attempts to answer the question "Does this proposed design do what is intended?" This is a complex task that consumes the majority of time and effort in most large electronic system design projects. Functional verification is a part of the more encompassing task of design verification, which, besides functional verification, also considers non-functional aspects like timing, layout and power.
Background
Although the number of transistors on a chip has grown exponentially according to Moore's law, the number of engineers and the time available to produce designs have grown only linearly. As designs grow more complex, the number of coding errors also increases. Most errors in logic coding come from careless coding, miscommunication, and microarchitecture challenges. Electronic design automation (EDA) tools were therefore developed to keep pace with the growing complexity of transistor-level design, and hardware description languages such as Verilog and VHDL were introduced alongside these tools.

Functional verification is very difficult because of the sheer volume of possible test cases that exist in even a simple design. Frequently there are more than 10^80 possible tests needed to comprehensively verify a design – a number that is impossible to cover in a lifetime. The effort is equivalent to program verification, is NP-hard or even worse, and no solution has been found that works well in all cases.
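For a sense of how such numbers arise, consider an illustrative calculation (an assumed, representative figure rather than one from a specific design): a block whose inputs and internal state together amount to only 266 independent bits already admits 2^266 ≈ 1.2 × 10^80 distinct combinations, so exhaustive simulation is out of reach even for modest designs.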
The verification process and strategy
The verification plan
A functional verification project is guided by a verification plan. This is a foundational document that serves as a blueprint for the entire effort. It is a living document created early in the design cycle and is critical for defining scope and tracking progress. The plan typically defines:
- Verification scope: A list of the design features and functions that need to be verified.
- Methodology: The techniques and standardized methodologies that will be used.
- Resources: The engineering team, EDA tools, and computational infrastructure required.
- Success criteria: Specific coverage goals that must be met to consider the verification complete.
Coverage metrics
- Code coverage: This measures how thoroughly the hardware description language source code has been exercised during testing. It includes metrics like statement coverage, branch coverage, and toggle coverage.
- Functional coverage: This measures whether the intended functionality, as described in the verification plan, has been tested. Engineers define specific scenarios or data values of interest, and the verification tool tracks whether these cases have been exercised.
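As a minimal sketch of how functional coverage might be captured in SystemVerilog, the example below defines a covergroup for a hypothetical packet interface; the signal names, bin ranges, and clocking are illustrative assumptions rather than part of any particular design:

```systemverilog
// Functional coverage model for a hypothetical packet interface:
// did the stimulus exercise the scenarios the verification plan cares about?
module pkt_coverage (
  input logic        clk,
  input logic [10:0] pkt_len,
  input logic        pkt_type
);
  covergroup pkt_cg @(posedge clk);
    cp_len : coverpoint pkt_len {
      bins short_pkt  = {[1:8]};      // minimum-size packets
      bins medium_pkt = {[9:64]};
      bins long_pkt   = {[65:1500]};  // near-maximum packets
    }
    cp_type : coverpoint pkt_type;    // automatic bins for both packet types
    // Cross coverage: every length category combined with every type.
    cx_len_type : cross cp_len, cp_type;
  endgroup

  pkt_cg cg_inst = new();
endmodule
```

During simulation the tool records which bins and crosses have been hit; unhit bins identify scenarios from the verification plan that the stimulus has not yet exercised.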
Levels of abstraction in verification
- Unit/Block-Level Verification: This is the most granular level, where individual design modules or "units" are tested in isolation. The goal is to thoroughly verify the functionality of a small piece of the design before it is integrated into a larger system.
- Subsystem/IP-Level Verification: At this stage, multiple units are integrated to form a larger, functional block, often referred to as a subsystem or an intellectual property (IP) core. Verification at this level focuses on the combined functionality and the interactions between the integrated units. A common strategy at this stage is the use of behavioral models, which are high-level, functional representations of a block. These models simulate faster than detailed RTL code and allow verification to begin before the final design is complete, helping to formalize interface specifications and find bugs early (see the sketch after this list).
- SoC/Chip-Level Verification: Once all subsystems and IP blocks are available, they are integrated to form the full System-on-a-Chip. Functional verification at the chip level is focused on verifying the correct connectivity and interactions between all these major blocks. System-level simulations are run to prove the interfaces between ASICs and check complex protocol error conditions.
- System-Level Verification: This is the highest level of abstraction, where the verified chip's functionality is tested in the context of a full system, often including other chips, peripherals, and software. Hardware emulation is a critical technique at this stage, as its high speed allows for the execution of real software, such as device drivers or even booting a full operating system on the design. This provides a "richness of stimulus" that is very difficult to replicate in simulation and is highly effective at finding system-level bugs.
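As a sketch of the behavioral-model idea mentioned in the subsystem/IP-level item above, the following simplified, assumed FIFO is modeled with a SystemVerilog queue instead of the pointer-and-memory logic an RTL implementation would contain; such a model simulates quickly and can stand in for the real block before the RTL is complete:

```systemverilog
// Behavioral (simulation-only) model of a FIFO: a SystemVerilog queue
// replaces the memory array and read/write pointers of the RTL version.
module fifo_beh #(parameter int WIDTH = 8, parameter int DEPTH = 16) (
  input  logic             clk,
  input  logic             rst_n,
  input  logic             push,
  input  logic [WIDTH-1:0] din,
  input  logic             pop,
  output logic [WIDTH-1:0] dout,
  output logic             full,
  output logic             empty
);
  logic [WIDTH-1:0] q [$];   // queue holds the FIFO contents

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      q.delete();            // reset empties the FIFO
      dout <= '0;
    end else begin
      if (push && q.size() < DEPTH) q.push_back(din);
      if (pop  && q.size() > 0)     dout <= q.pop_front();
    end
  end

  // Status flags derived directly from the queue occupancy.
  assign full  = (q.size() == DEPTH);
  assign empty = (q.size() == 0);
endmodule
```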
Verification methodologies
Dynamic verification (simulation-based)
Dynamic verification involves executing the design model with a given set of input stimuli and checking its output for correct behavior. This is the most widely used approach.
- Logic simulation: This is the workhorse of functional verification, in which a software model of the design is simulated. A testbench is created to generate stimuli, drive them into the design, monitor the outputs, and check for correctness (a sketch follows this list).
- Emulation and FPGA Prototyping: These hardware-assisted techniques map the design onto a reconfigurable hardware platform. They run orders of magnitude faster than simulation, allowing for more extensive testing with real-world software, such as booting an operating system.
- Simulation Acceleration: This uses special-purpose hardware to speed up parts of the logic simulation.
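The sketch below shows a minimal self-checking simulation testbench, assuming a hypothetical combinational module named adder with inputs a, b and output sum (all names and widths are illustrative):

```systemverilog
// Self-checking testbench sketch: generate stimulus, drive the design,
// monitor the output, and compare it against a reference calculation.
module adder_tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  // Design under verification (hypothetical module and port names).
  adder dut (.a(a), .b(b), .sum(sum));

  initial begin
    repeat (1000) begin
      // Stimulus: random operands.
      a = $urandom();
      b = $urandom();
      #1;                          // allow combinational logic to settle
      // Check: compare against the expected result.
      if (sum !== a + b)
        $error("Mismatch: %0d + %0d gave %0d", a, b, sum);
    end
    $display("Test completed.");
    $finish;
  end
endmodule
```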
Static verification
Static verification analyzes the design without executing it with test vectors:
- Formal verification: This uses mathematical methods to prove or disprove that the design meets certain formal requirements without the need for test vectors. It can prove the absence of certain bugs but is limited by the state-space explosion problem (a sketch follows this list).
- Linting: This involves using HDL-specific versions of lint tools to check for common coding style violations, syntax errors, and potentially problematic structures in the code.
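The following is a small sketch of the kind of property a formal tool (or a simulator) can check, written as a SystemVerilog assertion; the request/grant protocol and the signal names are illustrative assumptions:

```systemverilog
// Property: every request must be granted within four clock cycles.
// A formal tool attempts to prove this for all reachable states; a
// simulator checks it only along the stimulus it happens to execute.
module arb_props (input logic clk, rst_n, req, gnt);
  property p_grant_within_4;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  assert_grant: assert property (p_grant_within_4)
    else $error("Request was not granted within 4 cycles");
endmodule
```

The same property can serve double duty: as a proof target for a formal tool and as a runtime checker during simulation.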
Hybrid techniques
Hybrid techniques combine static and dynamic methods, for example using formal analysis to reach deep, hard-to-hit states from which constrained-random simulation then continues, or reusing assertions written for formal proof as checkers during simulation.
Components of simulated environments
A simulation environment is typically composed of several types of components:
- The generator generates input vectors that are used to search for discrepancies between the design intent and the implementation. Some generators rely on SAT solvers, which address an NP-complete problem and can therefore be computationally expensive; other types of generators include manually created vectors, graph-based generators, and proprietary generators. Modern generators create directed-random and random stimuli that are statistically driven to verify random parts of the design. The randomness is important to achieve a high distribution over the huge space of the available input stimuli. To this end, users of these generators intentionally under-specify the requirements for the generated tests, and it is the role of the generator to randomly fill this gap. This mechanism allows the generator to create inputs that reveal bugs the user was not directly searching for. Generators also bias the stimuli toward design corner cases to further stress the logic. Biasing and randomness serve different goals and there are tradeoffs between them, hence different generators have a different mix of these characteristics. Since the inputs to the design must be valid and many generation targets must be satisfied simultaneously, many generators treat stimulus generation as a constraint satisfaction problem: the legality rules for the design's inputs and the biasing heuristics are captured in a model, and model-based generators solve this model to produce correct stimuli for the target design (a sketch follows this list).
- The drivers translate the stimuli produced by the generator into the actual inputs of the design under verification. Generators create inputs at a high level of abstraction, for example as transactions or assembly language; the drivers convert this input into the actual signal-level inputs defined in the specification of the design's interface.
- The simulator produces the outputs of the design, based on the design's current state and the injected inputs. The simulator operates on a description of the design, such as a netlist created by synthesizing the HDL down to a gate-level netlist.
- The monitor converts the state of the design and its outputs to the transaction abstraction level, so that they can be stored in a scoreboard database to be checked later on.
- The checker validates that the contents of the scoreboard are legal. In some cases the generator also creates expected results in addition to the inputs; in these cases, the checker must validate that the actual results match the expected ones.
- The arbitration manager coordinates all of the above components.
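The sketch referenced above illustrates the generator, driver, and checker roles in SystemVerilog for a hypothetical bus write interface; the class, signal, constraint, and scoreboard names are all illustrative assumptions, and the clock is assumed to be driven elsewhere in the testbench:

```systemverilog
// Generator side: a transaction class whose fields are randomized under
// constraints that encode input legality and corner-case biasing.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit [3:0]  byte_en;

  // Legality: only word-aligned addresses inside the valid range.
  constraint c_legal { addr[1:0] == 2'b00; addr < 32'h0001_0000; }
  // Biasing: weight the stimulus toward corner-case byte enables.
  constraint c_bias  { byte_en dist {4'hF := 6, 4'h0 := 1, [4'h1:4'hE] :/ 3}; }
endclass

module tb_env (
  input  logic        clk,
  output logic        valid,
  output logic [31:0] addr,
  output logic [31:0] data,
  output logic [3:0]  byte_en
);
  // A minimal scoreboard: expected data indexed by address.
  bit [31:0] expected [bit [31:0]];

  // Driver: turn the abstract transaction into pin-level activity.
  task drive(bus_txn t);
    @(posedge clk);
    valid   <= 1'b1;
    addr    <= t.addr;
    data    <= t.data;
    byte_en <= t.byte_en;
    @(posedge clk);
    valid   <= 1'b0;
  endtask

  // Checker: called by a monitor when the design returns read data.
  function void check_read(bit [31:0] rd_addr, bit [31:0] rd_data);
    if (expected.exists(rd_addr) && rd_data !== expected[rd_addr])
      $error("Data mismatch at 0x%08h: got 0x%08h, expected 0x%08h",
             rd_addr, rd_data, expected[rd_addr]);
  endfunction

  initial begin
    bus_txn txn;
    valid = 1'b0;
    repeat (100) begin
      txn = new();
      if (!txn.randomize()) $fatal(1, "Randomization failed");
      expected[txn.addr] = txn.data;   // record for the checker
      drive(txn);
    end
  end
endmodule
```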