In today's complex digital design development process, functional verification is critically important. While hardware complexity continues to grow with Moore's law, verification is the harder problem: as hardware complexity increases exponentially over time, verification complexity in theory grows even faster, because the state space that must be checked grows exponentially with the size of the design. Verification has long been recognized as a major bottleneck in the design process: up to 70% of design development time and resources are spent on functional verification. Yet according to Collett International, functional flaws remain the number one cause of chip respins, despite all the effort and resources devoted to verification.
Functional verification challenges
Corner-case defects are the weak point of simulation: because simulation-based verification is inherently non-exhaustive, corner cases can escape detection. No matter how much simulation time is spent, and no matter how intelligent the testbench and stimulus generators are, simulation cannot completely verify the design intent for anything but the smallest circuits. The fundamental limitations of simulation fall into three categories: exhaustiveness, controllability, and observability.
Formal verification is a systematic process that uses mathematical reasoning to verify that the design intent (the specification) is preserved in the implementation (the RTL). Formal verification overcomes all three of these simulation challenges, because its algorithms can exhaustively check all input values that may vary over time. In other words, there is no need to work out how to stimulate the design, and no need to create extra conditions to achieve high observability.
Although a simulation testbench in theory has full controllability over the input ports of the design under verification (DUV), its controllability over internal points of the design is generally poor. To identify a design error using a simulation-based approach, both of the following conditions must hold (see the sketch after this list):
* The correct input stimulus must be generated to activate (that is, sensitize) the defect at some point in the design
* The correct input stimulus must be generated to propagate all effects of the defect to an output port
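As a concrete, hypothetical illustration (the adder, its injected bug, and all names below are assumptions, not taken from this article), the sketch shows why both conditions matter: the faulty carry logic is activated only by operands that actually produce a carry, and the failure is observed only because the checker compares the carry output as well as the sum.

```systemverilog
// Hypothetical design with an injected defect in its carry logic.
module buggy_adder (
  input  logic [7:0] a, b,
  output logic [7:0] sum,
  output logic       carry
);
  assign sum   = a + b;
  assign carry = a[7] & b[7];   // BUG: not the true carry-out of a + b
endmodule

module tb_activate_propagate;
  logic [7:0] a, b, sum;
  logic       carry;

  buggy_adder dut (.a(a), .b(b), .sum(sum), .carry(carry));

  initial begin
    // Activation: pick operands where the buggy and correct carry disagree
    // (8'hFF + 8'h01 carries out, but a[7] & b[7] is 0).
    a = 8'hFF; b = 8'h01;
    #1;
    // Propagation/observation: the checker must compare the carry bit too;
    // checking only 'sum' would let the defect slip through unnoticed.
    if ({carry, sum} !== ({1'b0, a} + {1'b0, b}))
      $error("defect exposed: carry=%b sum=%h", carry, sum);
    else
      $display("defect not observed for a=%h b=%h", a, b);
  end
endmodule
```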
When using simulation-based verification, you must plan out which aspects of the design are to be verified (two of these items are sketched in code after the list):
* Define the various input conditions that need to be tested
* Create a functional coverage model (to determine whether enough simulation has been run)
* Build the testbench (checkers, stubs, stimulus generators, etc.)
* Write directed tests for specific scenarios
* Run the simulations
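For example, here is a minimal sketch, with all class, signal, and bin names assumed for illustration, of what the constrained-random stimulus generator and the functional coverage model from this list often look like in a SystemVerilog testbench:

```systemverilog
module tb_plan_sketch;

  // Constrained-random stimulus item: the constraint keeps the generator
  // within the legal input conditions defined in the test plan.
  class pkt_item;
    rand bit [7:0] payload;
    rand bit [1:0] channel;
    constraint legal_c { channel inside {[0:2]}; }
  endclass

  // Functional coverage model: tells us whether enough simulation was run.
  covergroup pkt_cg with function sample (bit [7:0] payload, bit [1:0] channel);
    cp_channel : coverpoint channel { bins ch[] = {[0:2]}; }
    cp_payload : coverpoint payload { bins low  = {[0:127]};
                                      bins high = {[128:255]}; }
    ch_x_pl    : cross cp_channel, cp_payload;  // goal: hit every combination
  endgroup

  pkt_item item;
  pkt_cg   cg;

  initial begin
    item = new();
    cg   = new();
    repeat (1000) begin
      void'(item.randomize());
      cg.sample(item.payload, item.channel);
      // Here the item would be driven into the DUT and checked; the driver
      // and checker are omitted for brevity.
    end
    $display("functional coverage = %0.2f%%", cg.get_coverage());
  end
endmodule
```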
In practice, engineers repeat these activities over and over: running tests, debugging failures, rerunning regression suites, reviewing various coverage metrics, and adjusting the stimulus (for example, steering the input generators) to reach parts of the design that were not previously covered.
Consider the example of an elastic buffer (see Figure 1). Data crosses between two clock domains, and the buffer absorbs the phase and frequency offsets between the two clocks. Data must pass through the elastic buffer without corruption, even though the design allows the clocks to be imperfectly synchronized. An example of a functional defect in this case is a buffer overflow that occurs when data arrives while the clocks happen to be aligned in a particular way. Modeling and hitting this error behavior in simulation can require an enormous number of runs covering all possible input conditions.
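By contrast, the overflow requirement itself is simple to state as an assertion. The following is a minimal sketch, assuming signal names (wr_clk, wr_en, buf_full) that do not come from Figure 1; a formal tool could prove it for every clock alignment the design permits, while simulation would have to enumerate those alignments:

```systemverilog
module elastic_buf_props (
  input logic wr_clk,
  input logic rst_n,
  input logic wr_en,     // new data word arriving in the write clock domain
  input logic buf_full   // buffer has no free entry left
);
  // The overflow failure described above: a write accepted while full.
  property p_no_overflow;
    @(posedge wr_clk) disable iff (!rst_n)
      buf_full |-> !wr_en;   // never write into a full buffer
  endproperty

  a_no_overflow : assert property (p_no_overflow)
    else $error("elastic buffer overflow: write accepted while full");

  // Typically attached to the RTL with a bind so the property can observe
  // internal signals, e.g.:
  //   bind elastic_buffer elastic_buf_props u_props (.*);
endmodule
```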
Figure 1: Elastic buffer block diagram
High-level requirements
Many companies have adopted assertion-based verification (ABV) to shorten verification time while improving the overall quality of the verification effort. However, ABV as commonly practiced focuses on local, RTL-implementation-specific assertions used in simulation. The sum of all these internal assertions neither characterizes nor completely captures the end-to-end behavior of a module as defined by the microarchitecture. In addition, such local assertions cannot be reused when the design implementation changes. High-level assertions, in contrast, express end-to-end properties such as data integrity and packet ordering, together with the black-box behavior the specification requires of the module; they therefore provide much higher coverage of the design's functionality and can be reused across different design implementations and across multiple projects. More importantly, formally verifying a module against its set of high-level requirements significantly improves verification completeness and productivity. High-level formal verification can therefore eliminate the need for module-level simulation, which greatly reduces system-level verification time. Let's take a closer look at the high-level requirements shown in Figure 2.
Figure 2: High-level requirements compared with RTL assertions
In Figure 2, the y-axis represents the level of abstraction, and the x-axis represents how much of the design is covered by a particular assertion or requirement. The higher you move along the y-axis, the larger the design space covered by a high-level requirement. These high-level requirements are valuable for several reasons:
1. High-level requirements map directly onto the requirements of the microarchitecture specification
2. High-level requirements correspond to the set of output checkers in the testbench
3. A single high-level requirement covers the same design space as the hundreds of lower-level assertions an engineer would otherwise have to write
4. A high-level requirement also covers design space that would otherwise be left exposed by lower-level assertions the engineer forgets to write
Finally, consider an example: suppose the design contains a FIFO, and the engineer forgets to write an assertion checking that the FIFO never overflows. The violation will still be caught by the high-level requirements. Moreover, when the high-level requirements are verified formally, the violation can be traced back to its root cause: if the FIFO lies in the cone of influence of a high-level requirement, an overflow condition that causes that requirement to fail will be detected.
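As a sketch of what such a high-level requirement might look like (the port names and the free variable "watched" are assumptions, not from this article), the end-to-end property below states that any data word accepted at a module's input must eventually be delivered at its output. A FIFO overflow that silently drops a word violates this property even though nobody wrote a FIFO-level assertion, and the formal counterexample points back at the offending FIFO inside the requirement's cone of influence.

```systemverilog
module e2e_requirements #(parameter W = 8) (
  input logic         clk, rst_n,
  input logic         in_valid,
  input logic [W-1:0] in_data,
  input logic         out_valid,
  input logic [W-1:0] out_data
);
  // Intentionally undriven: a formal tool treats this as an arbitrary value,
  // so the single property below covers every possible data word.
  logic [W-1:0] watched;

  // Constrain the free variable to hold one arbitrary value during the proof.
  am_watched_stable : assume property (@(posedge clk) $stable(watched));

  // Black-box, end-to-end requirement: the watched word, once accepted at the
  // input, must eventually be presented at the output.
  a_data_delivered : assert property (
    @(posedge clk) disable iff (!rst_n)
      (in_valid && in_data == watched) |->
        s_eventually (out_valid && out_data == watched)
  );
endmodule
```

Real data-integrity checks also track ordering and duplicates, but even this single property constrains the module's black-box behavior far more than a collection of implementation-level assertions would.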
The ideal formal verification tool must have enough capacity to exhaustively check all possible input conditions, with controllability and observability of any point in the design (see Table 1). Our flagship products, such as JasperGold, use high-performance, high-capacity formal verification technology to exhaustively verify that a module meets the high-level requirements derived from the microarchitecture. Because the product relies on mathematical algorithms, no simulation testbench or stimulus is needed.
Table 1: Comparison of simulation and formal verification
Summary of this article
Formal verification requires thinking in a different way. Simulation, for example, is a purely empirical approach: you try to pinpoint defects through repeated experiment, which takes a prohibitive amount of time to try every possible combination and can therefore never be complete. In addition, because engineers must define and generate a large number of input conditions, their attention ends up focused on how to break the design rather than on the design intent itself. Formal verification, on the other hand, is an exhaustive mathematical technique that lets engineers concentrate purely on design intent, that is, on the question "What is the correct behavior of the design?"
A simulation-based verification flow involves defining many input conditions as part of a test plan, creating a functional coverage model, developing a testbench, building an input stimulus generator, writing directed tests, running the tests, analyzing coverage metrics, adjusting the stimulus generator to target unverified parts of the design, and then repeating the process. Pure formal verification, in contrast, focuses on verifying a module end to end, corresponding directly to the high-level requirements of the microarchitecture specification, and helps users greatly increase a project's design and verification capacity while ensuring correctness.
What is formal verification?