Guidelines For Creating A Formal Verification Test
Harry Foster, Mentor Graphics
ABSTRACT
In this paper, we propose a systematic set of guidelines for creating an effective formal verification testplan, which consists of an English list of comprehensive requirements that capture the desired functionality of the blocks we intend to formally verify. We demonstrate our formal verification testplanning techniques on a real example that involves an AMBA™ AHB parallel to Inter IC (or I2C) serial bus bridge.

Keywords
Assertion, Formal verification, High-Level Requirement, Specification, Verification testplan.

1. INTRODUCTION
Successful verification is not ad hoc in nature. On the contrary, experience repeatedly demonstrates that success depends on methodical verification planning combined with systematic verification processes. The key to success is the verification testplan.

With the emergence of assertion and property language standards such as the IEEE Property Specification Language (PSL) [4] and SystemVerilog Assertions (SVA) [5], design teams are investigating formal verification and finding that it should be a key component of their verification flow. Yet there is a huge disconnect between attempting to prove an ad hoc set of assertions and implementing an effective verification flow that includes formal. The greatest return-on-investment (ROI) for integrating formal into the flow is not achieved by proving only an ad hoc set of assertions; it also involves proving blocks. For success, this approach requires you to create a comprehensive formal verification testplan. Most design teams, however, lack expertise and guidelines on how to methodically and systematically create an effective testplan. Furthermore, the industry lacks literature on effective formal verification testplanning techniques.

In this paper, we propose an integrated verification process that includes formal verification as a key component. We begin by introducing a systematic set of guidelines for creating an effective formal verification testplan, which consists of an English list of comprehensive requirements that capture the desired functionality of the blocks you intend to formally verify. One benefit the formal verification testplan approach provides is a direct means to measure progress throughout the verification process by tracking the English list of proved requirements. Finally, we demonstrate formal verification testplanning techniques on a real example that involves an AMBA AHB parallel to Inter IC (I2C) serial bus bridge. We discuss techniques such as hierarchical property partitioning considerations and constraint specification in the context of this real example. We chose this real example to illustrate the key point that verification completeness for this bridge involves more than proving a set of simple assertions (for example, the bridge's FIFO will not overflow). In addition, verification completeness involves more than proving the bridge's correct interface behavior (for example, the bridge interface is AHB compliant). Completeness requires a systematic process that ensures all key features described in the architectural and micro-architectural specification are identified and covered in the verification testplan prior to writing any assertions.

2. TESTPLAN GUIDELINES
In this section, we discuss the strategies and techniques that will help you create effective formal verification testplans.

2.1 Where to apply formal
Formal verification can often be a resource-intensive endeavor. The first step in developing a formal testplan is to identify which blocks will get a higher ROI from the use of formal verification, and which blocks can be more reliably tested with simulation (directed and random). The discussion in this section will build your background and help you make those decisions.

Complexity of formal verification. Formal verification of properties (that is, assertions or requirements) on RTL designs is a known hard problem: the complexity of all known algorithms for formal verification (a.k.a. model checking) is exponential in the size of the designs [1, 2]. Thus, any naïve application of formal verification is likely to cause state-space explosion and impractical computer run-times. One coarse measure of prediction of the tractability of formal verification is the number of state-holding elements (often flip-flops) in the cone of influence of the property (see Figure 1). However, as we will see later in the paper, for some classes of designs, this number can sometimes be misleading because reduction techniques (based on the requirements and the design) can dramatically reduce this number.

It is imperative that the user prioritize the application of formal verification by choosing design blocks that fall in the sweet spot of formal verification and are amenable to all possible reduction techniques, such as design reduction, abstraction, and compositional reasoning (as discussed further in Section 2.2).

Figure 1. Cone of influence (the property's cone of influence within the design block; logic outside the cone is irrelevant to the proof)
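The cone-of-influence measure above can be sketched as a simple backward traversal over the design's fan-in graph. The following Python model is purely illustrative (the toy netlist, the signal names, and the cone_of_influence helper are our own hypothetical constructions, not from any tool):

```python
from collections import deque

def cone_of_influence(fanin, prop_signals):
    """Backward reachability from the signals a property references."""
    seen, work = set(prop_signals), deque(prop_signals)
    while work:
        sig = work.popleft()
        for src in fanin.get(sig, ()):
            if src not in seen:
                seen.add(src)
                work.append(src)
    return seen

# Toy netlist: 'gnt' depends on a small arbiter state machine,
# while 'dbg' is irrelevant logic that never feeds the property.
fanin = {
    "gnt":     ["state_q"],
    "state_q": ["state_d"],
    "state_d": ["req", "state_q"],
    "dbg":     ["cnt_q"],
    "cnt_q":   ["cnt_d"],
    "cnt_d":   ["cnt_q"],
}
flops = {"state_q", "cnt_q"}

coi = cone_of_influence(fanin, ["gnt"])
print(sorted(coi & flops))  # flops in the cone: ['state_q']
```

Only the flops inside the cone contribute to proof complexity; the counter feeding `dbg` drops out entirely, which is exactly the reduction Figure 1 depicts.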
Sequential vs. concurrent designs. A key determining factor for choosing designs suitable for formal is whether a design or block is mostly sequential (that is, non-concurrent) or mostly concurrent.

Sequential blocks typically operate on a single stream of input data, even though there might be multiple packets at various stages of the design pipeline at any instant. An example of such sequential behavior is an instruction decode unit that decodes a processor instruction over many stages. Another example is an MPEG encoder block that encodes a stream of data, possibly over many pipeline stages. A floating point arithmetic unit is yet another example. Often, you can describe the behavior of a sequential hardware block in pseudo-code in a software language, such as C or SystemC. In the absence of any additional concurrent events that can interfere with the sequential computation, you can adequately test blocks such as these with simulation, often validating against a C reference model. Formal verification, on the other hand, usually encounters state-explosion for sequential designs because most interesting end-to-end properties typically involve most flops of these flop-intensive designs.

Concurrent designs deal with multiple streams of input data that collide with each other. An example of such a block is a token generator that is serving multiple requesting agents and concurrently handling returns of tokens from other returning agents. Another example is an arbiter, especially when it deals with complex priority schemes. Both of the previous examples have mostly control flops in the cone-of-influence. An example of a concurrent design that is more datapath-intensive is a switch core that negotiates traffic of packets going from multiple ingress ports to multiple egress ports. While the cone-of-influence of such a design can have a large number of flops, especially if the datapath is very wide, a clever use of decomposition can verify correctness of one datapath bit at a time. This process of decomposition (covered more in Section 2.3) effectively reduces the mostly datapath problem to a mostly control problem.

Control vs. data transport vs. data transform blocks. Given the discussion above, the following coarse characterization can often help you determine whether formal is suitable. You can usually characterize design blocks as control or datapath oriented. You can further characterize datapath design blocks as either data transport or data transform. Data transport blocks essentially transport packets that are generally unchanged from multiple input sources to multiple output sources, for example, a PCI Express Data Link Layer block. Data transform blocks perform a mathematical computation (an algorithm) over different inputs, for example, an IFFT convolution block (see Figure 2).

Figure 2. Data verification flow (design verification divides into control and datapath; datapath divides into data transport and data transform)

What makes data transport blocks amenable to formal is the independence of the bits in the datapath, often making the formal verification independent of the width of the datapath. Unfortunately, this kind of decomposition is usually not possible in data transform blocks. The next section lists examples of blocks that are more suited for formal than others.

Blocks suitable for formal verification. As discussed, formal verification is particularly effective for control logic and data transport blocks containing high concurrency (illustrated in Figure 3).

Figure 3. Concurrent paths

The following list includes examples of blocks ideally suited for formal verification:

• Arbiters of many different kinds
• On-chip bus bridge
• Power management unit
• DMA controller
• Host bus interface unit
• Scheduler, implementing multiple virtual channels for QoS
• Clock disable unit (for mobile applications)
• Interrupt controller
• Memory controller
• Token generator
• Credit manager block
• Standard interface (for example, PCI Express)
• Proprietary interfaces

An example of a bug identified using formal verification on a block involving concurrent paths is as follows:

During the first three cycles of a "transaction start" from one side of the interface, a second "transaction start" unexpectedly came in on the other side of the interface and changed the configuration register. The processing of the first transaction was confused by sampling of different configuration values and resulted in a serious violation of the PCI protocol and caused the bus to hang.

Concurrent blocks have many of these obscure, timing-based scenarios for which formal is well suited.

Blocks not suitable for formal verification. In contrast, design blocks that generally do not lend themselves to formal verification tend to be sequential in nature (that is, a single stream of data) and potentially involve some type of data transformation (see Figure 4).
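The payoff of the bit-independence argument can be illustrated with a back-of-the-envelope state count (an illustrative sketch of our own; the width and depth model a hypothetical w-bit, d-deep transport queue, not any specific design):

```python
def full_width_states(width, depth):
    # Every storage bit in the queue is free: 2**(width*depth) data states
    # must in principle be explored for an end-to-end proof.
    return 2 ** (width * depth)

def bit_sliced_states(width, depth):
    # One 1-bit lane has only 2**depth data states; because the lanes are
    # independent, a single slice proof covers each of the 'width' lanes.
    return width * 2 ** depth

states_full = full_width_states(32, 8)
states_slice = bit_sliced_states(32, 8)
print(states_full.bit_length())  # 257-bit number: astronomically many states
print(states_slice)              # 8192 states in total
```

For data transform blocks, where output bits mix many input bits arithmetically, no such per-lane factoring exists, which is why the decomposition fails there.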
properties provide substantial benefits in terms of defect localization and might require relatively little effort to define and verify. Local assertions have been the traditional application for functional formal verification, and we do not discuss them in detail in this paper [10,11].²

¹ While there is strictly no difference between these terms, "assertions" is often used specifically to refer to highly localized, implementation-specific properties. For this reason, we favor the term "requirements" for more general use.

² While the completeness of the formal requirement set, as with any set of checks, cannot be guaranteed analytically, a method has been proposed for tools to provide quantifiable guidance (see [5]).

assert always !(A & B);

Figure 5. Assume-guarantee

Assumptions
The second component of the formal testplan for a design block is a set of input assumptions. These are formal properties that are generally defined using the same language and semantics as formal requirements. This similarity is essential to the assume-guarantee methodology. Assumptions are necessary to prevent illegal input stimuli from causing spurious property violations. Conversely, incorrect assumptions over-restrict the input stimuli and hide real property violations. Conceptually, over-constraining a proof is similar to running simulation checks with poor functional coverage. In practice, however, the situation is different in that it is difficult to measure the effects of over-constrained inputs and nearly impossible to predict them. Tracking and validating assumptions is possibly the most important and subtle part of creating an effective formal testplan. It is often easier to manage assumptions when you use a hierarchical approach to testplan development.

You must explicitly state all formal assumptions. The best option is to use assume-guarantee, that is, formally verify each assumption as a requirement on a neighboring design block. Though this option is ideal, in some cases it is not practical for formal verification. As an alternative, you can sometimes validate assumptions from well-specified interface rules, as is the case for a standard interface. If neither of these approaches is practical, you should use assumptions as assertions in higher-level simulations. Most importantly, all assumptions must be treated explicitly. It is a reasonable expectation for formal tools to provide bookkeeping mechanisms to help track the validity of assumptions. In addition, tools may provide methods for visually sanity testing assumptions.

Assumptions have applications other than constraining block inputs. One example is mode setting through mode-related input signals or configuration registers. You will not validate these assumptions in the same sense as interface assumptions. Yet another use for assumptions is to deliberately over-constrain design behavior in preliminary verification.

Coverage targets
The third component of the formal testplan relates to coverage, specifically, formal coverage targets. Section 2.3 discusses coverage concepts further. In particular, formal coverage properties are a useful test for over-constraining input assumptions.

2.2.2 Verification strategy
A complete set of formal properties is one part of a formal testplan; a staged implementation plan is the other part. In particular, when formally verifying a design under active development, organize properties into functional categories and develop a set of increasingly over-constrained assumptions that represent different levels of functional completeness. Verification begins with those properties representing the most basic functionality of the block under the greatest restriction and proceeds to full functionality with no over-constraint. Section 3.1.6 details an example of this approach.

A graduated strategy for proving requirements under different levels of restrictions can also be valuable for tracking the progress of formal verification. This is another area in which formal tools can offer useful bookkeeping features.

2.2.3 Hierarchical testplanning
In reality, a formal testplan requires something more complicated than a flat list of formal properties for a design block. In general, the ideal block size for formal analysis is not known during the planning stage. In addition, you might target portions of a block or cluster for formal verification even when the block as a whole is not optimal for formal verification. In this case, selecting properties is best viewed from the level of the larger block.

You will create formal testplans for large blocks hierarchically, regardless of whether you intend to verify them with formal alone or with a mix of formal and simulation. Initially, you will define the upper-level testplan, which consists of requirements, assumptions, and coverage targets, as if you intend to run the formal analysis at the top level. Then define testplans for each subblock in reference to the top-level testplan and map each top-level requirement to one or more subblock requirements. Finally, derive subblock assumptions from top-level assumptions and assume-guarantee relationships between subblocks.

Within this two-tiered testplan, you will target certain properties for formal verification. If you formally prove the entire block, no simulation is required at this level. In many cases this approach will not be practical, particularly for design organizations that are relatively new to formal verification. If you use simulation for the higher-level block, a clearly organized hierarchical formal verification strategy provides valuable guidance about what simulation checkers to create and what portions of the design you should target with input vectors and monitor with functional coverage points.

2.3 Coverage
To conclude our testplan guideline discussion, we must address the concept of coverage. In a traditional simulation verification environment, there are two aspects of coverage you must assess throughout the project to determine the quality of the verification process: input space coverage and requirement coverage. In this section, we describe how these aspects of coverage relate to formal verification.

Input space coverage. Input space coverage is a measure of the quality of the input vectors to activate (or exercise) portions of a design. Typically, you can achieve high input space coverage (which is evaluated by metrics such as line coverage or functional coverage) by enumerating various scenarios and creating directed simulation tests to exercise these scenarios. Since it is impossible to enumerate all possible corner-case scenarios for simulation, we generally apply constraint-driven random input stimulus generation techniques to boost simulation coverage.
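The contrast between exhaustive state exploration and stimulus-driven coverage can be sketched with a toy state machine (a purely illustrative Python model of our own; the counter design and the reachable helper are hypothetical):

```python
from collections import deque
import random

# Toy machine: an 8-bit counter that only reaches the corner state 255
# after 255 consecutive asserted inputs; any deasserted input resets it.
def step(state, inp):
    return (state + 1) % 256 if inp else 0

def reachable(init=0):
    """Formal-style exhaustive exploration: breadth-first over all inputs."""
    seen, work = {init}, deque([init])
    while work:
        s = work.popleft()
        for inp in (0, 1):
            nxt = step(s, inp)
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

# Random simulation: 10,000 cycles of coin-flip stimulus.
random.seed(0)
sim_states, state = {0}, 0
for _ in range(10_000):
    state = step(state, random.random() < 0.5)
    sim_states.add(state)

print(len(reachable()), len(sim_states))
# exhaustive exploration reaches all 256 states; the random walk finds far fewer
```

The corner state is trivially reachable to the breadth-first search, yet the random run is astronomically unlikely to ever hit it, which is the intuition behind the claims about formal's exhaustive input-space exploration below.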
Formal verification, unlike simulation, does not depend on enumerating corner-case scenarios and then generating input stimulus. In fact, formal verification does not depend on any input stimulus since we explore the entire input space of the design using mathematical techniques without the need for input vectors or simulation. This means that if a property is proven true using formal verification, then there is no sequence of input vectors you can simulate that would expose a corner-case bug. Hence, you do not need traditional coverage techniques (such as line coverage or functional coverage) since the quality of exploring the input space in formal is complete and exhaustive.

The risk with formal verification is that a proof might have completed with a set of formal constraints that restricts the input space to a subset of possible behaviors. For formal verification, the coverage you should perform ensures that the design is not over-constrained while performing a proof. Therefore, the extent of coverage is very different from what coverage-driven simulation does. Coverage in a formal verification environment ensures that we do not miss major operations. We demonstrate this process on our example in Section 3.

Requirement coverage. The other key aspect of coverage you must consider during verification is requirement coverage (often referred to as property coverage in formal verification). In a traditional simulation environment, you cannot automatically apply any metrics to determine the completeness of the testbench output checkers with respect to the requirements defined in the specification (that is, line coverage and functional coverage metrics do not measure the completeness of testbench output checkers). Hence, when you create a simulation-based testplan, it is critical for the design and verification team to carefully review the requirements identified in the design specification to ensure that an output checker is created to check the set of requirements.

In formal verification, you must apply the same process to ensure that the created property set covers all requirements defined in the specification. During this process, there are two questions about the final property set that you must answer:

1. Have we written enough properties (completeness)?
2. Are our properties connected (when partitioning complex properties)?

For your design, it is critical for you to review your specification (and your simulation testplan) to ensure that your formal property set covers everything you intend.

Concerning the question, "Are our properties connected," take care when constructing your property set to take advantage of the concept of assume-guarantee (as previously discussed). This approach ensures that any properties used as assumptions on one block will be proved on its neighboring block(s), thus ensuring the property set is connected and that you can trace a property associated with the output of the memory controller all the way through the design back to its inputs.

Achieving high requirement coverage. To ensure comprehensiveness in developing your English requirements checklist, we recommend the following steps:

1. Review the architectural and micro-architectural specifications and create a checklist of requirements that must be verified.

2. Review all block output ports in terms of functionality and determine if you need to add items to your requirements checklist.

3. Review all block input ports in terms of functionality and determine if you need to add items to your requirements checklist.

4. Review all data input ports and understand the life of the data from the point it enters the block until it exits the block (considering various end-to-end scenarios) and determine if you need to add items to the requirements checklist.

5. Conduct a final requirements checklist review with appropriate stakeholders (for example, architects, designers, verification engineers).

Measuring verification progress. The formal verification testplan approach provides a direct means to measure progress throughout the verification process. This benefit is easily measured by tracking the English checklist of proved requirements contained within the formal testplan.

3. APPLICATION EXAMPLE
In this example, we demonstrate the concepts introduced in Section 2 on a real bridge example.

3.1 Overview AHB-Lite to I2C Bridge
"Bridge" is actually a rather broad term that refers to a design where the transport of data (often between different protocols) occurs. In general, data is transferred in one of three forms:

• Direct transfer of data, either as a single-cycle transfer or as a burst
• A fixed-size cell where there is a header, followed by payload, and finally, some frame-checking sequence
• A packet, which is similar to a cell in terms of the structure but different in terms of size

There are several key components in this bridge; however, not all components apply to all bridges. The first key component consists of the interfaces on the two ends of the bridge. The second is the datapath flow through the bridge. The third is an arbiter component (when applicable). Finally, bridges often have some decoding and arithmetic computation, such as CRC calculations and checking, ALU, and so forth.

Figure 6 shows an example of a bridge, which is a simple AMBA AHB-Lite [2] to I2C [3] bridge. In our example, the commands flow one direction from AHB to I2C, but the data flows both ways. For the write direction, data are written into a FIFO. When the FIFO is full, the AHB signal HREADYout is deasserted until there is room in the FIFO again. Upon receiving the write data, as long as there is room in the FIFO, the AHB bus is free for other devices sharing the AHB to proceed to their transactions. A read-cycle, however, will hold up the bus until the data is ready (because a SPLIT transaction is not supported by the bridge). Therefore, the read transaction has priority over the write transaction except when there is a coherency issue. For example, if a read address matches the write address of one of the entries in the FIFO, the read transaction must wait until that location is sent before proceeding to the I2C bus. Also, the read transaction does not interrupt an I2C write transaction that has already started.
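The write-path backpressure just described can be mocked up as a tiny Python model and explored exhaustively in the spirit of a formal proof that the FIFO never overflows (purely illustrative; DEPTH and the one-entry-per-cycle drain are our own simplifying assumptions, not the bridge's actual microarchitecture):

```python
from collections import deque

DEPTH = 4  # assumed FIFO depth for the sketch

def bridge_step(occ, write, drain):
    """One cycle of a toy write path: HREADYout is high only when the
    FIFO is not full; a write is accepted only while HREADYout is high;
    the I2C side may drain at most one entry per cycle."""
    hready = occ < DEPTH
    if write and hready:
        occ += 1
    if drain and occ > 0:
        occ -= 1
    return occ, hready

# Exhaustive exploration of every reachable occupancy under all
# write/drain input combinations: the occupancy can never exceed DEPTH.
seen, work = {0}, deque([0])
while work:
    occ = work.popleft()
    for write in (0, 1):
        for drain in (0, 1):
            nxt, _ = bridge_step(occ, write, drain)
            assert nxt <= DEPTH, "FIFO overflow!"
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
print(sorted(seen))  # reachable occupancies: [0, 1, 2, 3, 4]
```

Because the reachable state set is enumerated rather than sampled, the no-overflow assertion holds for every input sequence, not just the ones a testbench happens to drive.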
Figure 6. AMBA AHB-Lite to I2C Bridge

3.1.1 Challenges in this class of designs
Although the gate count for this example bridge is not particularly high, it represents two main formal verification challenges. First, as with many datapaths involving queues, there are storage elements that can cause a large state-space. Second, data-transport paths with queues, and especially involving a serial bus, have a very high sequential depth. Consequently, it is going to take a large number of cycles to complete the proof. Note that although creating a simulation testbench for this example is fairly trivial, simulation suffers the same challenges of dealing with the high sequential depth (that is, a very high number of simulation cycles is required to achieve reasonable coverage).

3.1.2 Example formal testplan process
As we previously stated, it is important to create a formal testplan prior to attempting to comprehensively prove a block. For our AMBA AHB-Lite to I2C bridge example, we followed a systematic set of steps to create our formal testplan. In this section, we generalize these steps into what we refer to as the seven steps of formal testplanning, which apply to a broad class of today's designs.

1. Identify good formal candidates. First, determine if the block you are considering is a good candidate for formal. (Use the procedure previously described in Section 2.1.)

2. Create an overview description. Briefly describe the key characteristics of the bridge (as we did in Section 3.1). The introduction does not have to be in great detail but should highlight the major functions of the bridge.

3. Define interface. Create a table that describes the details for the block's interface (internal) signals that must be referenced (monitored) when creating the set of formal properties. You will use this list to determine completeness of the requirement checklist during the review process.

4. Create the requirements checklist. List, in a natural language, all high-level requirements for this block. (Use the guidelines previously described in Section 2.3, Achieving high requirement coverage.) For our example, this list can be as high-level as separating the requirements into the following functionality: AMBA AHB interface, I2C interface, end-to-end requirements, and miscellaneous requirements, or as detailed as identifying each of the AHB-Lite requirements, I2C requirements, and so forth.

5. Convert checklist requirements into formal properties. In this step, convert each of the natural language high-level requirements into a set of formal properties, using PSL, SVA, or OVL, and whatever additional modeling is required to enable you to describe the intended behavior.

6. Define verification strategy. This section of the formal testplan is important for listing the strategy used to verify the block. For example, it is important to verify interface requirements before end-to-end requirements. In addition, it might be beneficial to first verify some requirements with restrictions before running with all possible inputs. For example, you might decide to set HWRITE to 1 first. Then proceed to checking the read path by setting HWRITE to 0. Finally, remove the restrictions to allow both read and write.

7. Define coverage goals. This section is important especially after obtaining a proof. List the coverage points such that if those points are covered, you will be sure the true proof is not a false positive due to over-constraining. Some of the examples of coverage points for this design include FIFO full, completion of read and write on different HSIZE and HBURST, and a read with some occupied FIFO locations.

3.1.3 Interface description
The following table lists the signals defined in the AMBA AHB-Lite to I2C bridge specification that we chose to monitor as part of our high-level requirements model.

Signal Name     Description                  Size     Direction
HCLK            AHB Clock                    1-bit    In
HRESETn         Master Reset (active low)    1-bit    In
HADDR           AHB Address                  7-bit    In
HBURST          AHB Burst length             3-bit    In
HTRANS          AHB Transaction Type         2-bit    In
HSIZE           AHB Transfer Size            3-bit    In
HWRITE          AHB Write                    1-bit    In
HSEL            AHB Select                   1-bit    In
HREADYin        AHB HReady                   1-bit    In
HWDATA          AHB Write Data               32-bit   In
HRDATA          AHB Read Data                32-bit   Out
HRESP           AHB Response                 2-bit    Out
HREADYout       AHB HREADYOUT                1-bit    Out
SDA             I2C Data                     1-bit    In/Out
SCL             I2C Clock                    1-bit    Out
i2c_clk_ratio   HCLK to I2C Clock ratio      2-bit    In

We find that creating this interface table is a useful part of our formal testplanning process because it provides a clear focus of what needs to be checked from a black-box perspective. Thus it is useful for identifying missing requirements during a formal testplan review (see Section 2.3, Achieving high requirement coverage).

3.1.4 Requirements checklist
For our example, there are three main sections of high-level requirements: the two interfaces and the end-to-end requirements. (Listing the full set of requirements is beyond the
scope of this paper.) Our point is to demonstrate the process of next(~i2c_start until i2c_end))
creating a comprehensive natural language list of requirements abort (~RESETn);
derived from the architectural or micro-architectural
specification. Figure 7. PSL I2C assertion
AMBA AHB-Lite interface requirements. In general, we can Figure 8 illustrates the SVA coding for our natural language
partition AMBA AHB-Lite requirements into two categories: requirements.
master requirements and slave requirements. For our example,
we will focus on the subset of slave requirements. property P_no_start;
@(posedge HCLK) disable iff (~HRESETn)
1. Slave must assert HREADYOUT after reset i2c_start |=> ~i2c_start[*0:$] ##1 i2c_end;
2. Slave must provide zero wait-state HREADYOUT=1 endproperty
response to IDLE transaction
3. Slave must provide zero wait-state HREADYOUT=1 A_no_start: assert property (P_no_start);
response to BUSY transaction
4. When not selected, Slave must assert HREADYOUT Figure 8. SVA I2C assertion
5. Slave must drive HREADY low on first cycle of two-cycle The process of converting the natural language list of
ERROR/SPLIT/RETRY response requirements into a formal description is generally
6. ... straightforward. Hence, we have only illustrated one example of
I2C interface requirements. Because of space limitations, we this translation process. At times, in addition to using the
will not list the comprehensive set of I2C interface requirements. temporal constructs of today’s assertion languages, you will
However, we list a few I2C requirements below to demonstrate need additional modeling (possibly as auxiliary state-machines
the process of creating a natural language list of requirements: to model conceptual states of the environment or for capturing
data in a scoreboard fashion).

1. SDA should remain stable when SCL is high.
2. There should not be another start after a start until an end occurs in the I2C bus.
3. The data between a start and an end should be divisible by 9 (8 bits per transfer + 1 acknowledge bit).
4. ...

End-to-end requirements. There are two classes of end-to-end requirements associated with our bridge example. One class comprises data integrity requirements. The second class comprises consistency requirements, which use data as the golden reference between the formal property and the RTL design to verify that all controls are consistent with the referenced data. For data integrity verification, two separate paths must also be considered: one for read and the other for write.

Miscellaneous requirements. Miscellaneous requirements are the checks for read/write dependency; they are not included in this paper due to space limitations.

3.1.5 Formal properties
Using the interface signals identified in Section 3.1.4 and the set of natural language requirements identified in Section 3.1.3, create your set of formal properties. We recommend that you encapsulate your set of formalized requirements into a high-level requirements model, or verification unit, that will monitor the block's interface signals.

To demonstrate the formal specification process, we convert the following I2C requirement into both PSL and SVA:

    There should not be another start after a start until an end occurs in the I2C bus.

Figure 7 illustrates the PSL coding for our natural language requirement. In this example, i2c_start and i2c_end represent modeling code associated with the assertion, composed from SCL and SDA.

    default clock = HCLK;

    // The property body below is a plausible completion: the original
    // figure text is truncated at this point in the source.
    A_no_start: assert (always i2c_start ->
                        next (!i2c_start until i2c_end));

3.1.6 Verification strategy
For our example of a formal testplan, the verification strategy section contains two main areas. The first is to plan proper partitioning so that we can overcome any verification bottleneck. The second is to provide a set of restriction definitions, together with recommended verification steps that systematically loosen these restrictions over the course of the proof. The combination of restrictions and steps forms the methodology used to complete the formal proof on the bridge example.

Functional partitioning. It is important to recognize potential bottlenecks in the verification process. You can often manage these problems by applying the compositional reasoning approaches previously described in Section 2.2. For example, the write data from the input goes through an internal interface to the bridge before being sent out through the I2C interface. There is a potential partition point around this internal bridge interface. While it might not be necessary to partition the datapath, it is still important to keep this option in mind in case performance becomes an issue.

For the reverse (read) path, the read request goes directly to the I2C interface, except when there is a conflict with a pending write. Therefore, there must be a conflict detection function that compares the read address against all the write addresses in the FIFO. Including this detection function as part of a datapath requirement not only makes the property overly complex to code, it also adds complexity to the datapath verification, because the conflict-checking is a form of decoding logic that is difficult for formal verification. However, it is straightforward to validate the conflict detection functionality as a standalone requirement, independent of the datapath property. Therefore, it is a good idea to separate the datapath verification from the conflict detection verification (by black-boxing the conflict-checking logic). The requirement for the datapath then uses the output of the black-boxed conflict-checking logic as an input during the analysis (assuming all possible combinations of error detection during the proof).
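The start and end events referenced by the assertion can also be modeled executably. The following Python sketch (not from the paper; the per-clock sampling scheme and function names are illustrative assumptions) derives start and end events from sampled (SCL, SDA) pairs, mirroring what the i2c_start/i2c_end modeling code computes, and then checks the "no repeated start" requirement over a trace:

```python
# Illustrative I2C trace monitor (assumption: one (scl, sda) sample per clock).
# An I2C start is SDA falling while SCL is high; an end (stop) is SDA rising
# while SCL is high.

def events(samples):
    """Yield 'start' / 'end' events from an iterable of (scl, sda) samples."""
    prev_sda = 1  # the bus idles with SDA high
    for scl, sda in samples:
        if scl == 1 and prev_sda == 1 and sda == 0:
            yield "start"
        elif scl == 1 and prev_sda == 0 and sda == 1:
            yield "end"
        prev_sda = sda

def check_no_repeated_start(samples):
    """Return True iff no second start occurs before an end (requirement 2)."""
    in_transfer = False
    for event in events(samples):
        if event == "start":
            if in_transfer:
                return False  # a second start arrived before an end
            in_transfer = True
        else:  # "end"
            in_transfer = False
    return True
```

For example, the trace [(1, 1), (1, 0), (1, 1)] (one start followed by an end) passes, while [(1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0)] (SDA returns high while SCL is low, producing no end event, then a second start) fails.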
Finally, we verify the conflict-checking logic itself, independent of the datapath property. In this way, we partition a difficult problem (data transport plus complex decoding) into two relatively simple problems.

Restriction definition. Formal verification allows you to uncover corner cases within the design relative to all valid sequences of input values. However, there are occasions when you might want to verify a particular implemented functionality on a partially completed design by restricting the input sequences to a specified mode of operation (for example, exploring correct behavior for read transactions only, versus read and write transactions). Even when the RTL is complete but the code has not gone through any verification, it is often more efficient to start the verification process by independently verifying the main functionality under restrictions (that is, special assumptions that restrict the input space to a subset of possible behaviors).

For our example, we divide the requirements into three sets:

Set 1: AMBA AHB-Lite and I2C interface requirements.
Set 2: End-to-end datapath, read and write.
Set 3: Miscellaneous.

And we define the restriction definitions as follows:

1. Only unidirectional access (read or write), single-cycle access, no flow control or errors.
2. Only unidirectional access, all burst lengths, no flow control or errors.
3. Bi-directional access, all burst lengths, no flow control or errors.

Verification steps. The following lists the recommended steps for proving the AMBA AHB-Lite to I2C bridge example:

1. Prove Requirements Set 1 under Restriction Definition 1, then Restriction Definition 2, then Restriction Definition 3.
2. Prove Requirements Set 2 under Restriction Definition 1, then Restriction Definition 2, then Restriction Definition 3.
3. Prove Requirements Set 1 with no restrictions.
4. Prove Requirements Set 2 with no restrictions.
5. Prove Requirements Set 3 with no restrictions.

If the design is mature, such as legacy code with minor changes or a design that has already gone through some simulation, you might decide to skip Restriction Definitions 2 and 3. It is still important to go through Restriction Definition 1, simply to set up the proper environment and constraints, but it is not necessary to go through the other restriction definitions.

3.1.7 Coverage
As mentioned previously, coverage for formal verification serves a different purpose than it does in simulation. The coverage points should focus on ensuring that the inputs are not over-constrained. Therefore, there are three sets of coverage points:

Set 1: Input coverage – read/write access with different burst types, sizes, and lengths, and with HREADYOUT asserted and deasserted.
Set 2: Output coverage – read/write with acknowledgment and with no acknowledgment.
Set 3: Internal main state machines – the I2C and AHB state machines, checking that they can enter and exit each state.

Beyond confirming that each coverage point is reachable (so that there is no over-constraint), it is also important to ensure that the requirements themselves are complete. We go through the steps in Section 2.3 to ensure that there is no obvious hole in the coverage provided by all the requirements. After all the requirements are proven, we also ensure that all of the RTL is exercised by at least one requirement. Otherwise, any code that is not covered is either dead code or a sign that additional requirements are needed.

4. CONCLUSION
In this paper, we proposed a formal-based testplanning process, which includes a systematic set of seven steps. By applying our process to a real AMBA AHB parallel to Inter IC (I2C) serial bus bridge example, we demonstrated that the process is relevant to today's ASIC and SoC designs.