Coverage UVM Cookbook
Online Methodology Documentation from the Mentor Graphics Verification Methodology Team. Contact: [email protected]
http://verificationacademy.com
Table of Contents

Articles
  Introduction
  Coverage

Appendices
Requirements Writing Guidelines
Datestamp:
- This document is a snapshot of dynamic content from the Online Methodology Cookbook
Introduction
Coverage Cookbook
As the saying goes, "What doesn't get measured likely won't get done." And that is certainly true when trying to determine a design project's verification progress, or trying to answer the important question, "Are we done?" Whether your simulation methodology is based on a directed testing approach or constrained-random verification, to understand your verification progress you need to answer the following questions:

Were all the design features and requirements identified in the testplan verified?
Were some lines of code or structures in the design model never exercised?

Coverage is the metric we use during simulation to help us answer these questions. Yet, once coverage metrics become an integral part of our verification process, it opens up the possibility for more accurate project schedule predictions, as well as providing a means for optimizing our overall verification process. At this stage of maturity we can ask questions such as:

When we tested feature X, did we ever test feature Y at the exact same time?
Has our verification progress stalled for some unexpected reason?
Are there tests that we could eliminate to speed up our regression suite and still achieve our coverage goals?

The book you are holding contains excerpts from the online Coverage Cookbook resource, which is evolving to address all aspects of a coverage-driven verification methodology, such as: coverage planning, coverage modeling, coverage implementation, coverage analysis, and coverage closure. Check out the Coverage Cookbook website for a set of downloadable examples contained in this book, and join a community of engineers interested in learning how to leverage coverage on their projects. Find us online at https://verificationacademy.com/cookbook
Coverage
The Coverage Cookbook describes the different types of coverage that are available to keep track of the progress of the verification process, explains how to create a functional coverage model from a specification, and provides examples of how to implement functional coverage for different types of designs.
Coverage Cookbook contents:

Introduction - Introduction to the Coverage Cookbook
Coverage - This overview page

Coverage Metrics and process (Theory)
  What is coverage? - What coverage is about and why you should use it
  Kinds of coverage - An explanation of the different types of coverage available
  Code Coverage - An explanation of the different types of code coverage
  Functional Coverage - Describes the various alternative forms of functional coverage
  Specification to testplan - Outlines different approaches for creating a testplan based on specifications
  Executable Testplan Format - Describes the format of an executable testplan spreadsheet
  Testplan to functional coverage - Explains how to go from a testplan to a coverage model
  Coding for analysis - How to ensure that your functional coverage code gives results that are easy to interpret

Coverage Examples (Practice)
  Bus protocol coverage - Illustrates how to use assertions to check a bus protocol and yield functional coverage data
  APB3 Protocol test plan - A test plan for the APB3 protocol
  APB3 Protocol Monitor - A set of code fragments from the implemented APB3 Protocol monitor
  Block Level coverage - A block level UART design, where functional coverage is based mainly on register based configuration
  UART test plan - The test plan for the UART
  UART example covergroups - A set of code fragments illustrating how to implement the block level covergroups
  Datapath Coverage - Illustrates how coverage is collected on the settings of a datapath block
  BiQuad IIR Filter test plan - A test plan for the BiQuad IIR Filter
  BiQuad IIR Filter example covergroups - Code fragments to illustrate the implementation of the BiQuad IIR functional coverage model
  SoC coverage example - Explains the process for creating a SoC functional coverage model based on use cases

Appendices
  Requirements Writing Guidelines - Guidelines for thinking about and writing requirements

Please note that it may not always be possible or appropriate to supply source code for all of the examples in the Coverage Cookbook.
What is coverage?
In general, coverage is a metric we use to measure the controllability quality of a testbench. For example, code coverage can directly identify lines of code that were never activated due to poor controllability of the simulation input stimulus. Similarly, functional coverage can identify expected behaviors that were never activated during a simulation run due to poor controllability. Although our discussion in this section is focused on coverage, it's important to note that we can address observability concerns by embedding assertions in the design model to facilitate low-level observability, and by creating monitors within, and on the output ports of, our testbench to facilitate high-level observability.
Summary
So what is coverage? Simply put, coverage is a metric we use to measure verification progress and completeness. Coverage metrics tell us what portion of the design has been activated during simulation (that is, the controllability quality of a testbench). Or, more importantly, coverage metrics identify portions of the design that were never activated during simulation, which allows us to adjust our input stimulus to improve verification. There are different kinds of coverage metrics available to you, and the process of how to use them is discussed in the Coverage Cookbook examples.
Kinds of coverage
No single metric is sufficient to completely characterize the verification process. For example, we might achieve 100% code coverage during our simulation regressions. However, this would not mean that 100% of the functionality was verified. The reason for this is that code coverage does not measure the concurrent interaction of behavior within, or between, multiple design blocks, nor does it measure the temporal sequences of functional events that occur within a design. Similarly, we might achieve 100% functional coverage, yet only achieve 90% code coverage. This might indicate that there is either a problem with the fidelity of our functional coverage model (that is, an important behavior of the design was missing from the coverage model), or possibly that some functionality was implemented that was never specified (for example, perhaps the specification and testplan need to be updated with some late stage change in the requirements). Hence, to get a complete picture of a project's verification progress we often need multiple metrics.
Coverage Classification
To begin our discussion on the kinds of coverage metrics, it is helpful to first identify various classifications of coverage. In general, there are multiple ways in which we might classify coverage, but the two most common are to classify it by its method of creation (such as explicit versus implicit) or by its origin (such as specification versus implementation).
For instance, functional coverage is one example of an explicit coverage metric, which has been manually defined and then implemented by the engineer. In contrast, line coverage and expression coverage are two examples of implicit coverage metrics, since their definition and implementation are automatically derived and extracted from the RTL representation.
Coverage Metrics

There are two primary forms of coverage metrics in production use in industry today:

Code Coverage Metrics (Implicit coverage)
Functional Coverage/Assertion Coverage Metrics (Explicit coverage)
Code Coverage
In this section, we introduce various coverage metrics associated with a design model's implicit implementation coverage space. In general, these metrics are referred to as code coverage or structural coverage metrics.
Benefits:
Code coverage, whose origins can be traced back to the 1960's, is one of the first methods invented for systematic software testing.[1] One of the advantages of code coverage is that it automatically describes the degree to which the source code of a program has been activated during testing, thus identifying structures in the source code that have not been activated during testing. One of the key benefits of code coverage, unlike functional coverage, is that creating the structural coverage model is an automatic process. Hence, integrating code coverage into your existing simulation flow is easy and does not require a change to either your current design or verification approach.
Limitations:
In our section titled What is coverage, we discussed three important conditions that must occur during simulation to achieve successful testing:

1. The testbench must generate proper input stimulus to activate a design error.
2. The testbench must generate proper input stimulus to propagate all effects resulting from the design error to an output port.
3. The testbench must contain a monitor that can detect the design error that was first activated then propagated to a point for detection.

Code coverage is a measurement of structures within the source code that have been activated during simulation. One limitation of code coverage metrics is that you might achieve 100% code coverage during your regression run, which means that your testbench provided stimulus that activated all structures within your RTL source code, yet there are still bugs in your design. For example, the input stimulus might have activated a line of code that contained a bug, yet the testbench did not generate the additional stimulus required to propagate the effects of the bug to some point in the testbench where it could be detected. In fact, researchers have studied this problem and found cases where a testbench achieved 90% code coverage, yet only 54% of the covered code would be observable during a simulation run.[2] That means that a bug could exist on a line of code that had been marked as covered, yet the bug was never detected due to insufficient input stimulus to propagate the bug to an observability point.

Another limitation of code coverage is that it does not provide an indication of exactly what functionality defined in the specification was actually tested. For example, you could run into a situation where you achieved 100% code coverage, and then assume you are done. Yet, there could be functionality defined in the specification that was never tested, or even functionality that was never implemented! Code coverage metrics will not help you find these situations.

Even with these limitations, the automatic aspect of code coverage makes it a relatively simple way to identify input stimulus deficiencies in your testbench, and it is a great first choice for coverage metrics as you start to evolve your advanced verification process capabilities.
Expression Coverage

Expression coverage (sometimes referred to as condition coverage) is a code coverage metric used to determine if each condition evaluated both to true and false. A condition is a Boolean operand that does not contain logical operators. Hence, expression coverage measures the Boolean conditions independently of each other.

Focused Expression Coverage

Focused Expression Coverage (FEC), which is also referred to as Modified Condition/Decision Coverage (MC/DC), is a code coverage metric often used by the DO-178B safety critical software certification standard, as well as the DO-254 formal airborne electronic hardware certification standard. This metric is stronger than condition and decision coverage. The formal definition of MC/DC as defined by DO-178B is:

Every point of entry and exit in the program has been invoked at least once, every condition in a decision has taken all possible outcomes at least once, every decision in the program has taken all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision's outcome. A condition is shown to independently affect a decision's outcome by varying just that condition while holding fixed all other possible conditions. [3]

It is worth noting that completely closing Focused Expression Coverage can be non-trivial (a worked example of the MC/DC criterion follows at the end of this section).

Finite-State Machine Coverage

Today's code coverage tools are able to identify finite state machines within the RTL source code. Hence, this makes it possible to automatically extract FSM code coverage metrics that measure conditions such as: the number of times each state of the state machine was entered, the number of times the FSM transitioned from one state to each of its neighboring states, and even sequential arc coverage to identify state visitation transitions.
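To make the MC/DC definition above concrete, here is a small worked example; the decision, signal names, and test vectors are illustrative only and are not taken from the standard.

// A decision with three conditions: a, b and c.
module mcdc_example(input logic a, b, c, output logic y);
  assign y = (a && b) || c;
endmodule

// A minimal MC/DC test set needs N+1 = 4 vectors for N = 3 conditions:
//
//   a b c | y
//   ------+--
//   1 1 0 | 1   pairs with (0,1,0): only 'a' changes, y flips -> 'a' independent
//   0 1 0 | 0
//   1 0 0 | 0   pairs with (1,1,0): only 'b' changes, y flips -> 'b' independent
//   0 1 1 | 1   pairs with (0,1,0): only 'c' changes, y flips -> 'c' independent
//
// Plain condition coverage could be satisfied by just (0,0,0) and (1,1,1),
// which never demonstrates the independent effect of any single condition.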
Unused or unreachable code can be handled by directing the coverage tool to exclude the unused or unreachable code during the coverage recording and reporting steps. Formal tools can be used to automate the identification of unreachable code, and then automatically generate the exclusion files.
References
[1] J. Miller, C. Maloney, "Systematic mistake analysis of digital computer programs," Communications of the ACM 6 (2): 58-63, February 1963.
[2] F. Fallah, S. Devadas, K. Keutzer, "OCCOM: Efficient Computation of Observability-Based Code Coverage Metrics for Functional Verification," Proceedings of the Design Automation Conference, 1998: 152-157.
[3] DO-178B, "Software Considerations in Airborne Systems and Equipment Certification," RTCA, December 1992, pp. 31, 74.
[4] M. Stuart, D. Dempster, Verification Methodology Manual for Code Coverage in HDL Designs, TransEDA, August 2000.
Functional Coverage
The objective of functional verification is to determine if the design requirements, as defined in our specification, are functioning as intended. But how do you know if all the specified functionality was actually implemented? Furthermore, how do we know if all the specified functionality was really tested? Code coverage metrics will not help us answer these questions. In this section, we introduce an explicit coverage metric referred to as functional coverage, which can be associated with either the design's specification or implementation coverage space. The objective of measuring functional coverage is to measure verification progress with respect to the functional requirements of the design. That is, functional coverage helps us answer the question: Have all specified functional requirements been implemented, and then exercised during simulation? The details on how to create a functional coverage model are discussed separately in the Testplan to functional coverage chapter.
Benefits:
The origin of functional coverage can be traced back to the 1990's with the emergence of constrained-random simulation. Obviously, one of the value propositions of constrained-random stimulus generation is that the simulation environment can automatically generate thousands of tests that would have normally required a significant amount of manual effort to create as directed tests. However, one of the problems with constrained-random stimulus generation is that you never know exactly what functionality has been tested without the tedious effort of examining waveforms after a simulation run. Hence, functional coverage was invented as a measurement to help determine exactly what functionality a simulation regression tested without the need for visual inspection of waveforms.

Today, the adoption of functional coverage is not limited to constrained-random simulation environments. In fact, functional coverage provides an automatic means for performing requirements tracing during simulation, which is often a critical step required for DO-254 compliance checking. For example, functional coverage can be implemented with a mechanism that links to specific requirements defined in a specification. Then, after a simulation run, it is possible to automatically measure which requirements were checked by a specific directed or constrained-random test, as well as automatically determine which requirements were never tested.
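One lightweight way to attach a requirement label to a coverage item is the covergroup comment option. The sketch below shows one possible convention; the requirement tag, and the clk and burst_len signals assumed to exist in the enclosing scope, are hypothetical and are not a DO-254 prescription.

// Hypothetical requirement-tagging convention: the covergroup comment
// carries a requirement identifier that coverage reports can display.
covergroup req042_burst_cg @(posedge clk);
  option.comment = "REQ-042: all supported burst lengths observed"; // hypothetical tag
  BURST_LEN: coverpoint burst_len {
    bins single  = {1};
    bins burst[] = {2, 4, 8, 16};
  }
endgroup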
Limitations:
Since functional coverage is not an implicit coverage metric, it cannot be automatically extracted. Hence, the user must manually create the coverage model. From a high level, there are two steps involved in creating a functional coverage model that need to be considered:

1. Identifying the functionality or design intent that you want to measure
2. Implementing the machinery to measure the functionality or design intent

The first step is addressed through verification planning, and the details are addressed in the section on getting from a testplan to functional coverage. The second step involves coding the machinery for each of the coverage items identified in the verification planning step (for example, coding a set of SystemVerilog covergroups for each verification objective identified in the verification plan). During the coverage model implementation phase, there are also many details that need to be considered, such as: identifying the appropriate point to trigger a measurement, and defining controllability (disable/enable) aspects for the measurement. These and many other details are addressed in the detailed coverage examples.
Since the functional coverage model must be manually created, there is always a risk that some functionality that was specified is missing from the coverage model.
Single write and read bus sequences for our non-pipelined bus protocol are illustrated in Figure 2.
Figure 2. Write and read cycles for a simple nonpipelined bus protocol
To verify our bus example, it's important to test the boundary conditions for the address bus for both the write sequence and read sequence (that is, the bits within addr at some point contained all zeros and all ones). In addition, it's also important that we have covered a sufficient number of non-boundary conditions on the address bus during our regression. We are only interested in sampling the address bus when the slave is selected and the enable strobe is active (that is, sel==1'b1 && en==1'b1). Finally, we will want to keep track of separate write and read events for these coverage items to ensure that we have tested both these operations sufficiently. This is one example of using cover groups to model functional coverage (e.g., the SystemVerilog covergroup construct). In addition, we could apply the same data coverage approach to measuring the read and write data busses.
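The following is a minimal sketch of such a covergroup; the 16-bit addr width and the write control signal alongside clk, sel and en are assumptions for illustration, not part of the protocol description above.

// Address data coverage, sampled only when the slave is selected
// and the enable strobe is active.
covergroup bus_addr_cg @(posedge clk iff (sel && en));
  ADDR: coverpoint addr {
    bins all_zeros    = {16'h0000};              // boundary: all zeros
    bins all_ones     = {16'hFFFF};              // boundary: all ones
    bins non_bound[4] = {[16'h0001:16'hFFFE]};   // non-boundary values, 4 range bins
  }
  KIND: coverpoint write {
    bins writes = {1'b1};
    bins reads  = {1'b0};
  }
  ADDR_BY_KIND: cross ADDR, KIND; // track write and read coverage separately
endgroup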
Now, let's look at cover properties with respect to this example. There is a standard sequence that is followed for both the write and read cycle. For example, let's examine a write cycle. At clock one, since both the slave select (sel) and bus enable (en) signals are de-asserted, our bus is in an INACTIVE state. The first clock of the write sequence is called the bus START state, which the master initiates by asserting one of the slave select lines (sel==1'b1). During the START state, the master places a valid address and valid data on the bus. The data transfer (referred to as the bus ACTIVE state) actually occurs when the master asserts the bus enable strobe signal (en). In our case, it is detected on the rising edge of clock three. The address, data, and control signals all remain valid throughout the ACTIVE state.

When the ACTIVE state completes, the bus enable strobe signal (en) is de-asserted by the bus master, and thus completes the current single write operation. If the master has finished transferring all data to the slave (that is, there are no more write operations), then the master de-asserts the slave select signal (sel). Otherwise, the slave select signal remains asserted, and the bus returns to the bus START state to initiate a new write operation. Multiple back-to-back write operations (without returning to the bus INACTIVE state) are known as burst writes.

From a temporal coverage perspective, a set of assertions could be written to ensure proper sequencing of states on the bus. For example, the only legal bus state transitions are illustrated in Figure 3. Furthermore, it's important to test a single write and read cycle, as well as the burst write and read operations. In fact, we might want to measure the various burst write and read cycles.
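As a minimal sketch of this kind of temporal coverage (assuming the clk, sel and en signals described above; this is not the cookbook's reference implementation), a property can encode the legal START-to-ACTIVE transition and be both asserted and covered:

// START state (sel asserted, enable strobe low) must be followed on the
// next clock by the ACTIVE state (enable strobe asserted).
property p_start_to_active;
  @(posedge clk) (sel && !en) |=> (sel && en);
endproperty

a_start_to_active: assert property (p_start_to_active); // protocol check
c_start_to_active: cover  property (p_start_to_active); // temporal coverage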
By combining cover groups and cover properties, we are able to achieve a higher fidelity coverage model that more accurately allows us to measure key features of the design. Details on how to code temporal coverage are covered in the APB3 Bus protocol monitor example.
It is a good idea to run some simulations that capture coverage metrics early in the project cycle (that is, prior to seriously gathering coverage metrics) to work out any potential issues in your coverage flow. From a high-level perspective, there are generally four main steps involved in a functional coverage flow:

1. Create a functional coverage model
2. If using assertions, instrument the RTL model to gather coverage
3. Run simulation to capture and record coverage metrics
4. Report and analyze the coverage results
Part of the analysis step is to identify coverage holes, and determine if each coverage hole is due to one of three conditions:

1. Missing input stimulus required to activate the uncovered functionality
2. A bug in the design (or testbench) that is preventing the input stimulus from activating the uncovered functionality
3. Unused functionality for certain IP configurations, or functionality that is expected to be unreachable during normal operating conditions

The first condition requires you to either write additional directed tests or adjust random constraints to generate the required input stimulus that targets the uncovered functionality. The second condition obviously requires the engineer to fix the bug that is preventing the uncovered functionality from being exercised. The third condition can be addressed by directing the coverage tool to exclude the unused or unreachable functionality during the coverage recording and reporting steps.
Specification to testplan
Testplan Creation Approaches
The goal in creating a coverage model spreadsheet or testplan is to capture a subset of the design intent and behavior that is targeted for functional coverage. It is a time consuming, manual process that involves combing over various design specification documents and extracting the necessary requirements one at a time. It is best if this is done by a cross-functional team staffed by architects, designers, firmware and verification engineers to get multiple points of view and different inputs. Without a cross-functional aspect, various subsets of the design intent are easily missed. Creating the testplan is best done by holding multiple meetings, each of which targets a particular design area (the xyz block), lasts for a fixed length of time (1 hour, every morning next week at 9am), and has a goal (50 requirements). Generally, there are two approaches that can be taken:

1. Bottom Up: Go over block by block or interface by interface
2. Top Down: Follow the use model(s) or data flow of the chip
Two Approaches

Bottom Up

Definition: Extract requirements from available low level, detailed design and implementation specifications. This approach is more design oriented.

Pros: Low hanging fruit: easiest to find, extract and prioritize. Easier to link to coverage. Easier to close on coverage goals. Because you comb over every block and interface, key, highly specific and important coverage is picked up that might be glossed over by the top down method.

Cons: Needs well developed specs with implementation details. Can lead to an explosion of requirements: too many to implement in a reasonable amount of time; needs prioritization. Tends to produce low level, uninteresting coverage: lots of data, little useful information to explore tradeoffs.

Approach: Have a series of meetings, each focused on a subset of the design, such as a block or interface, and gather the appropriate specifications and engineering personnel to extract out the requirements, refine them, prioritize them, and link them to some coverage group, coverage point or cross in a spreadsheet.

Best suited for: General (multiple) application, used by many customers.

Top Down

Definition: Extract requirements from high level architecture and use model specifications. This approach is customer/verification/user oriented.

Pros: Can give more useful, high level, interesting coverage information, such as utilization, to explore tradeoffs. Can be done before design specs are completed, without implementation details. Goes towards intelligent testbench automation (ITA - inFact) using flow chart graphs. Forces a customer centric look at the design.

Cons: Needs access to high level specifications or architects with clear use model definitions. Use model(s) can sometimes grow exponentially and result in a huge coverage space with too many iterations. Coverage tends to be more upstream, generation oriented coverage, not downstream DUT or Scoreboard oriented; this can be misleading.

Approach: Have a series of meetings with the architect and come up with a single high level use model first, then create a use model(s) document that goes into further detail using lots of diagrams (tables, graphs, etc.) and minimal words. Then take this document and rework it into spreadsheet format.

Best suited for: Single application, used specifically by one or a few customers.
Often a combination of top down and bottom up can be used. You can start with top down and map out the main flow, which naturally brings out categories, and then do bottom up on each of the categories. It is wise to do this at the beginning of the project, as soon as some form of design specifications are ready. Get started by extracting a few hundred requirements, put them into a spreadsheet, and then add more later as the project progresses. Some teams link each requirement to a coverage element right away as each requirement is extracted and refined. Others enter all the requirements into the spreadsheet, and then take a second pass to add the coverage linking later on. Neither way is better than the other; the important thing is to get the coverage linking done while the particular requirement's details are still fresh in your mind. To leave the links till later in the project will mean that you have to revisit each requirement and its associated documentation all over again, which will take longer.
Bottom Up Example
Below is a block diagram of an Ethernet chip with a TX and RX path. Each path has a pipeline of blocks that the Ethernet frames pass through. Some of these blocks can be muxed in or muxed out for various configurations. Also, there are various clocking configurations, and each block has its own configuration setup details. With a bottom up approach, we would go through each block's design specification and extract out the requirements for that block. We would also go through the global block and clock mux settings and extract out the requirements for each of those. The key is to divide up the work into small, digestible blocks or sub-blocks, so that the detailed requirements and behaviors can be easily extracted in a reasonable amount of time.
The first thing you need to do to start the bottom up approach is to gather as many people who know the design as possible: architects, designers, verification team, experts on various interfaces, etc. Next, this team needs to sub-divide the work into some logical, manageable size. This can be done by making a brainstorming diagram, also called a mindmap. Microsoft Visio and similar software enable easy capture of these types of diagrams on-the-fly, as the team brainstorms together. Each topic or sub-block can be broken down further and further as needed, and they all are correlated in the brainstorming diagram. A simple example for the Ethernet chip is shown in the brainstorming diagram below. For more complicated designs, the brainstorming diagram would have many more sub categories branching off of each block to divide up the requirement extraction work into manageable amounts. Each branch in the brainstorming diagram might end up being a corresponding category or subcategory in the Ethernet testplan, or, if large, might be its own hierarchical spreadsheet. Some of the mindmapping software can take these brainstorming diagrams and export the information into a spreadsheet with section numbers for each category and subcategory. This gives a great starting point and a ready framework for your testplan.
The brainstorming diagram is a great first start. Each grouping or branch can then be broken out and a testplan creation meeting(s) held to flesh out the requirements for that particular topic. At each meeting, gather all available design and implementation specifications, as well as any industry specification for that block or topic, so they can be consulted. Once you have a topic you can use the yellow sticky method [1], where you give post-it notes to a team who take 20 min to extract out requirements onto yellow stickies and then stick them all up on a white board for grouping into further categories.

Rules and features are extracted out into detailed requirements, and each is then entered as a row into a spreadsheet with a title and a brief description that captures the essence of that requirement. See the section on the do's and don'ts of requirements writing below. Adding some sort of unique, alphanumeric requirements tag number to each requirement is a good idea, especially if you do have requirements written at multiple levels. The tags can then be used to link higher level requirements to lower level requirements and vice versa. Requirements tracing tools, like ReqTracer, can be used to further regiment the requirement tag naming and help by automating the tracking of all your requirements. Another good idea is to add other useful information that would help guide further work with each requirement. This extra information might be the location in the spec that the requirement came from, the author, notes, priority, estimated effort, questions to answer later, etc. Finally, each requirement needs to be linked to some specific closure element, like a covergroup, coverpoint, cross, assertion, test, etc. A second pass on each requirement, where each is refined and prioritized, is a good idea. See the testplan format page for a description and example of the recommended format. The apb monitor, uart and datapath examples in the coverage cookbook use a bottom up planning approach.

[1] The Yellow Sticky Method is described in more detail in the book Verification Plans: The Five-Day Verification Strategy for Modern Hardware Verification Languages by Peet James, Springer 2003.

Guidelines for writing requirements are available in the Requirements Writing Guidelines article. It is a good idea for the verification team to compile a list such as this before starting the planning process, and to divide them up into rules (must be followed) and suggestions (good ideas). In effect, this is defining the requirements for writing requirements.
When you look at the two parts of the above diagram, the left, exponential one looks like one huge uncloseable covergroup, while in the one on the right you can see covergroups and coverpoints naturally fall out from each table or diagram. So you take each part of the high level use model flow and expand each one using whatever table or diagram is useful to contain that particular section's exponential nature. For instance, in the above block muxing section of the Setup/Configuration you might develop a table of the potential useful setups and name each one. In other cases a Y-tree, Sequence, Bubble diagram or some other chart would be more useful. Often it is a good idea to gather the high level use model flow and all these diagrams into a new use model document, intermixed with minimal words.
Use a table, chart or diagram that best holds the exponential nature of each area of the use model:

Tables are good for small spaces, like a few bits of a register field, or a list of behaviors.
Bubble diagrams are good to show relationships between tasks or items, like the power areas and their settings.
Y-tree diagrams are good for showing choices and decisions, ANDs & ORs, priorities.
Sequence diagrams show progression, cause & effect, handshaking.

You can always combine diagrams together, like the group of tables above, connected by lines.
See the WB SOC design example for use models of how these diagrams are used in a coverage context. Once you have broken out your use model(s) into a progressive collection of useful diagrams and tables, it is a good idea to put them all in one document for easy viewing and dissemination. Some teams combine them into one big diagram; others put them together in a presentation with descriptive informational slides between the diagrams. Other formats for these diagrams include documents (separate or added as a chapter in the design architecture or implementation specifications) or html files for a project website. The presentation format is the most common, and most useful. The collection document can go by many names, for example:

UMD: Use model document
DITL: Day in the life document
CAD: Coverage architecture document

Whatever you call it, this document typically is very useful for introducing a new team member to the design, giving them a clear overview. The team often will revert back to this document and these diagrams to flesh out more details as the verification project progresses.

Once you have a UMD, your verification team can take it and use it as a guide to write a testplan. They can comb through it, extract out the requirements, and put them in the testplan. They can take each diagram, chart, and table and make it a section or sub-section in the spreadsheet, or, if large, break it out into its own hierarchical spreadsheet. The key is to divide up the categories and sub-categories so that each spreadsheet row is for a single requirement and can usefully be linked to some coverage element. Another key is to write each requirement at about the same level. Each bubble in a bubble diagram might be a single requirement or an entire subsection of requirements. Each choice on a Y-tree diagram might be a single requirement or more. Each table can be a coverage group; each row or column, a coverpoint. The extraction of the requirements from the UMD often follows the same bottom up extraction process described above. The UMD usually makes it easier, because of the inherent flow of the UMD and its diagrams. With practice, the verification team will start to visualize cover groups and coverpoints more readily, simply by looking at all the diagrams in their UMD. Just as with the bottom up approach, adding the link and type to a coverage group, coverpoint, cross, assertion or test is best done as you write the requirement. See the Wishbone SOC example section for more details on how to take the UMD content and create a testplan spreadsheet.
Testplan Review
The verification process has many important aspects that demand the time and effort of the verification team. The building of the testbench, the running of tests, the schedule, etc., all too often take precedence over the coverage model testplan spreadsheet, and its development is deferred. Often, a preliminary testplan is created, but the links to actual functional coverage elements are left out. The results are poor coverage implementation and minimal coverage results. The team ends up verifying in the dark, letting random generation occur, but not using coverage as feedback to guide the testing to any conclusion or closure. They tape out with a "good enough" approach to coverage that is not based on any real coverage metric data. Having a good testplan with well defined requirements that are each linked to real coverage elements is key. Taking the time to make this testplan will pay off in the long run. Adding the links as the requirements are written is the best approach. It also ensures that the team does not have to revisit all the documentation that inspired each requirement.

To avoid this problem, mature verification teams implement a testplan review process modeled after good document or code review processes. A three stage process generally works well:

1. PRELIMINARY REVIEW: A testplan is made early on, and the first review is done early as well. It is a quick review, to make sure the testplan was created, has coverage linking and type, and is on the right track. It does not need to be perfect, but should be the best that can be done at the time. It will evolve over the course of the project.

2. MAIN REVIEW: About two-thirds of the way through a project, the real review occurs. The testplan is the coverage model which defines a prioritized subset of design behavior and intent. The goal here is to make sure the priorities and the chosen subset are correct. You can't cover everything. You can't verify everything. The team must choose their subset and do the most verification and coverage in the allotted time. This review will take some time, often 2-5 days. The testplan is reviewed in detail, making sure each row's requirement is clear and is being met with the coverage linking. All issues are addressed and entered into a bug tracking tool. Often some form of reorganization of requirements is needed to bring the testplan up to date. It might need additions to accommodate missing content or design changes, but often it must be reduced so it can be realistically accomplished in the remaining time scheduled. Often reprioritizations occur, and some work is moved to a future tape out. The goal of the review is to find and fix any major problems or missing parts in the coverage model testplan spreadsheet.

3. FINAL REVIEW: This review is done in the final weeks of the project and, if the other two reviews were done well, is a final confirmation that the plan is valid. All big issues should have already been found and dealt with. In the final review, exception details are added and any final concerns addressed before the testplan is closed.

This testplan review process is often combined with a similar three step code review process in which the RTL and testbench code are reviewed.
Creating a Testplan
In many cases, the features will be verified in simulation and recorded as verified using either code coverage or functional coverage. The testplan can also include information about lab validation and firmware/hardware integration testing. For testplans which include code coverage and functional coverage, the connection between the testplan and simulations can be automated. To make the testplan executable a certain document format must be followed. The format which Questa's Verification Management solution uses is described below.
The rest of this article will describe both the required information needed in the spreadsheet and how to flexibly add additional information for usage throughout the testplan's life cycle.
Plan Structure
Each row in the spreadsheet corresponds to a requirement captured during the testplan creation process. Each column has a specific meaning in Questa's Verification Management solution.

Section and Title

The Section and Title columns work in conjunction to create the naming and hierarchy within the testplan.

Section: The Section column, usually a number, is used to create hierarchy within the testplan and group related testplan items together in a parent/child relationship. In spreadsheets, the user is responsible for entering this information. Typically you will start numbering sections with the number '1' and continue sequentially. A sub-section beneath that section would then be numbered "1.1" and so on, where each additional level of hierarchy is represented by the addition of a "." between section numbers.

Title: The Title column captures the name of the requirement or design feature to be verified. This is the name of the testplan section that will appear within Questa. The name chosen here should have meaning, as it will be visible through the tool flow.

When the testplan is extracted by Questa, it uses the information in the Section and Title columns to create a hierarchical name and unique tag for each row of the testplan. For example, in the testplan sketch shown below, Section 1.1 would have the hierarchical name /testplan/Parent_1/Child_1.
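The following minimal sketch illustrates the spreadsheet layout; the titles, descriptions and links are hypothetical, and the remaining columns are described in the rest of this section.

Section  Title     Description                     Link        Type        Weight  Goal
1        Parent_1  Top level feature group                                 1       100
1.1      Child_1   A specific feature requirement  my_cg.cp_a  Coverpoint  1       100
1.2      Child_2   A protocol rule                 a_rule_1    Assertion   1       100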
Description

The Description column allows more detail to be added to the spreadsheet. This could include references to other documentation to allow engineers to gather more information, or it could be a simple explanation as to why the requirement exists. Any text can be captured in this column. It is technically optional, but in practice a requirement captured in a testplan should have an entry in the description column.

Link and Type

The Link and Type columns are used to specify the code or functional coverage items that will be linked to the requirement. A requirement can be linked to multiple different coverage metrics, including metrics of different types. Questa supports linking to Covergroups, Coverpoints, Crosses, Assertions, Cover Directives, Directed tests and code coverage metric types. These columns also allow other testplans to be imported, as described in the Re-Using Existing Testplans section.

Link: This column is where you specify the name of the actual coverage object(s) in the coverage database that is linked to this respective testplan item. This could include a specific covergroup instance, an assertion, etc.

Type: This column is where you specify the type of the coverage object you specified in the Link column.

Together, the Link and Type column information is used to efficiently find the corresponding coverage objects in the coverage database and create the links between the testplan and the specified coverage objects. These columns enable the testplan to become executable.

Weight

The Weight column captures an integer number that reflects the relative importance of the current testplan item amongst its siblings, to its parent testplan section. The default is 1 if not specified. When coverage for the testplan is being calculated by Questa, which uses a "weighted-average" calculation algorithm, these weights are taken into account. For more information about how Questa calculates testplan coverage, please see the Questa documentation on Verification Management. Additionally, the Weight column can be used to exclude portions of a testplan by specifying a value of 0 for the testplan section/item rows that need to be excluded.

Goal

This column specifies the verification objective for a particular testplan section. Legal values range from 1 to 100, with the default being 100 if not specified. Questa uses this information to determine the point at which a testplan section/item is deemed to be covered. It does not alter how coverage is calculated.
Path

For cases where the linked coverage objects do not reside at the top of the design hierarchy, the Path column allows for the specification of the design path which will be prepended to the entry in the Link column to create a fully qualified reference.

Unimplemented

As testplans are being defined, it is common for requirements to be captured where corresponding coverage items don't yet exist in a testbench or design. To handle this situation, a requirement can be marked as unimplemented by adding a value of 'yes' or a number greater than zero to the Unimplemented column. This will cause testplan coverage calculations to accurately reflect that a requirement exists which is not yet covered, by showing zero coverage for that requirement. By default, it is assumed that coverage for a requirement is implemented unless this column is specified.
Testplan to functional coverage
Deriving a functional coverage model is not an automatic process; it requires interpretation of the available specifications, and the implementation of the model requires careful thought.
The Process
The process that results in a functional coverage model is usually iterative, and the model is built up over time as each part of the testbench and stimulus is constructed. Each iteration starts with the relevant and available functional specification documents, which are analyzed in order to identify features that need to be checked by some combination of configuration and stimulus generation within the testbench.

In general terms, a testbench has two sides to it: a control path used to stimulate the design under test to get it into different states to allow its features to be checked, and an analysis side which is used to observe what the design does in response to the stimulus. A self-checking mechanism should be implemented in the testbench to ensure that the design is behaving correctly; this is usually referred to as the scoreboard. The role of the functional coverage model is to ensure that the tests that the DUT passes have checked the design features for all of the relevant conditions. The functional coverage model should be based on observations of how the design behaves rather than how it has been asked to behave, and should therefore be implemented in the analysis path. The easiest way to think about this is that the testbench, the stimulus that runs on it, and the scoreboard(s) have to be designed to test all the features of a design, and the functional coverage model is used to ensure that all the desired variations of those tests have been seen to complete successfully.

Verification is an incomplete process: even for "simple" designs it can be difficult to verify everything in the time available. For reasonable sized designs there is a trade-off between what could be verified and the time available to implement, run, and debug test cases; this leads to prioritization based on the technical and commercial background to the project. A wise verification strategy is to start with the highest priority items and work down the priority order, whilst being prepared to re-prioritize the list as the project progresses. The functional coverage model should evolve as each design feature is tested, and each additional part of the functional coverage model should be put in place before the stimulus.
Process Guidelines
The functional coverage model is based on functional requirements
The testbench is designed to test the features of the design. The role of the functional coverage model is to check that the different variants of those features have been observed to work correctly. Features may also be referred to as requirements, or in some situations as stories. For instance, say a DUT generates a data packet with a CRC field. The CRC is based on the contents of the packet, which has, say, 10 variants. The testbench generates stimulus that makes the DUT produce the data packets, and the scoreboard checks the CRC field to make sure that the DUT has calculated it correctly. The role of the functional coverage monitor in this case is to ensure that all 10 packet variants are checked out.
Covergroup functional coverage relies on sampling the value of one or more data fields to count how many times different permutations of those values occur. Cover property, or temporal based, coverage is based on counting how many times a particular sequence of states and/or conditions occurred during a test. Temporal coverage is usually used to get coverage on control paths or protocols where timing relationships may vary. Examples include:

Whether a FIFO has been driven into an overflow or underflow condition
Whether a particular type of bus cycle has been observed to complete

The first step in developing a functional coverage model is deciding which of these two approaches should be taken for each of the areas of concern.
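As a minimal sketch of the two styles (the clk, wr_en, rd_en, full, empty and level signals are assumptions for a hypothetical 16-deep FIFO, not from any example in this book):

// Data coverage: which occupancy levels have been seen at each write?
covergroup fifo_level_cg @(posedge clk iff wr_en);
  LEVEL: coverpoint level {
    bins empty_lvl = {0};
    bins mid_lvl   = {[1:14]};
    bins full_lvl  = {15};
  }
endgroup

// Temporal coverage: has a write ever been attempted while the FIFO was full,
// or a read attempted while it was empty?
c_overflow:  cover property (@(posedge clk) wr_en && full);
c_underflow: cover property (@(posedge clk) rd_en && empty);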
Which values are important? Analysis of the specification should identify the important values, or ranges of values, to hit, which are then coded into the coverpoint as a set of bins.

What are the dependencies between data variables? When analyzing the different ways in which a design might be configured or a packet might be constructed, are there relationships between different variables? If such a relationship does exist, then a cross product between the variables should be specified in the covergroup.

Are there boundary conditions that should be checked? Are there particular values or combinations of values that should be checked because they are at the limits of operation or are at a known inflection point in the design? This will invariably require some "reading between the lines" of the specification and some design or implementation knowledge. Any boundary conditions identified should be added to the coverage model to ensure that they are tested.

Are there illegal conditions? If there are conditions which should not occur, then the covergroup can have a term to trap those conditions. The term does not contribute to the functional coverage, but it can help detect either a design or a testbench error.

Are there conditions that are not important? Even the simplest of designs may have more ways of configuring it than are ever realistically going to be used. If there is a way to determine which modes are most likely to be used, then it is also likely that there will be some that are known to be either useless or very unlikely to be used; these can be omitted from the permutations of coverage values collected. There may also be a degree of prioritization here, with certain configurations that have to be tested first; later on, if there is time, lower priority configurations can be checked.

When is the right time to sample the coverage? The data coverage collection code needs to sample the data values it is referencing. The sample point needs to:

Only occur when the associated check has passed
Occur when the data values are valid
Occur when the data values are stable

If the sampling is based on receiving a UVM analysis transaction, then if it comes directly from a monitor it may need to have a means to discriminate between valid and invalid analysis traffic. If the functional coverage collector is fed analysis transactions from a scoreboard, then the scoreboard should qualify that a check has passed before sending the analysis transaction.

Are there times when the data coverage sample is not valid? If there are, then guards will have to be coded into the functional coverage implementation code.

What information is required in the analysis transaction? For testbenches based on a TLM methodology, such as OVM or UVM, the information required by the functional coverage needs to come from the analysis transaction. This implies that the analysis transaction has to have all the information that is going to be sampled, and this may well affect the transaction and the design of the monitor or scoreboard that is generating the transaction.
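A minimal covergroup sketch pulling several of these answers together; the mode and len fields, their encodings, and the sampling convention are hypothetical:

// Hypothetical configuration coverage: cross products, boundary bins, an
// illegal-value trap, sampled from an analysis transaction after the
// scoreboard check has passed.
covergroup config_cg with function sample(bit [1:0] mode, bit [3:0] len);
  MODE: coverpoint mode {
    bins normal = {2'b00};
    bins fast   = {2'b01};
    illegal_bins reserved = {2'b10, 2'b11}; // should never occur - traps errors
  }
  LEN: coverpoint len {
    bins min_len = {0};       // boundary condition
    bins max_len = {15};      // boundary condition
    bins mid_len = {[1:14]};
  }
  MODE_X_LEN: cross MODE, LEN; // dependency between the two fields
endgroup

// In a scoreboard or coverage subscriber, sample only after the check passes:
//   if (check_passed) config_cg_inst.sample(txn.mode, txn.len);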
Summary

When considering how a design feature is to be tested, and what the covergroup based functional coverage model for that feature should be, remember to answer these questions for each feature on which data coverage is to be collected:

Which values are important? - Identify the important values to hit.
What are the dependencies between the values? - Identify the important cross products between data values.
Are there illegal conditions? - Identify values, or combinations of values, that should not occur.
When is the right time to sample? - Specify a valid sampling point.
When is the data invalid? - Identify conditions when the data should not be sampled.
Hybrid Coverage
There may be times when a hybrid of data coverage and temporal coverage techniques is required to collect specific types of functional coverage. For example, checking that all modes of protocol transfer have occurred is best done by writing a property or sequence that identifies when the transfer has completed successfully, and then sampling a covergroup based on the interesting signal fields of the protocol to check that all relevant conditions are seen to have occurred. The APB bus protocol monitor contains an example implementation of using hybrid functional coverage.
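A minimal sketch of the hybrid technique; the pclk, psel, penable, pready and pwrite signals are assumptions for an APB-style interface, and this is not the cookbook's reference monitor code:

module hybrid_coverage_sketch(input logic pclk, psel, penable, pready, pwrite);

  // Data part: coverage of the transfer direction
  covergroup transfer_cg;
    DIR: coverpoint pwrite {
      bins read  = {1'b0};
      bins write = {1'b1};
    }
  endgroup

  transfer_cg cg = new();

  // Temporal part: sample the covergroup only when a transfer is seen
  // to complete successfully
  cover property (@(posedge pclk) (psel && penable && pready)) cg.sample();

endmodule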
The choice between the two approaches for the cookbook examples can be summarized as follows:

APB3 Bus protocol monitor example
  Covergroup (data) coverage: Maybe
  Cover property (temporal) coverage: Yes
  In this style of design there are timing relationships between different signals which need to be checked and seen to work.
  (For source code example download, visit us online at http://verificationacademy.com.)

UART Coverage Example
  Covergroup (data) coverage: Yes
  Cover property (temporal) coverage: Maybe
  Most of the functional coverage can be derived from the content of the registers which are used to control and monitor the behaviour of the device. The register interface may also serve the data path. There may be scope for using assertions on signal interfaces.
  (For source code example download, visit us online at http://verificationacademy.com.)

Biquad Filter Example
  Covergroup (data) coverage: Yes
  Cover property (temporal) coverage: No
  In this class of design, the stimulus pumps data through the design datapath and compares the output against a reference model. The functional coverage is primarily about ensuring that the algorithm 'knobs' have been tested sufficiently.
  (For source code example download, visit us online at http://verificationacademy.com.)

DDR example (to be released)
  Covergroup (data) coverage: Yes
  Cover property (temporal) coverage: Yes
  Coverage of combinations of abstract stimulus on multiple ports, coverage of config registers, coverage of features of the target DDR specification.

SoC Coverage Example
  Covergroup (data) coverage: Yes
  Cover property (temporal) coverage: Maybe
  At the SoC level, functional coverage is use case driven, and only some interface or block level coverage can be reused.
  Source code: Not applicable.
Coding for analysis
Covergroup Labeling
The way in which you use labeling when coding a covergroup can have a huge impact on understanding the coverage results. A covergroup can be assigned an option.name string, which helps with identification of which particular part of a testbench the coverage is associated with. Inside a covergroup, coverpoints can be labeled and bins can be named. Using all of these techniques makes it much easier to understand the coverage results during analysis.
Covergroup naming
If multiple instances of the same covergroup are used within a testbench, then the option.name parameter can be used to assign an identity string to each instance. The name string can be passed in as an argument when the covergroup is constructed. In a UVM environment, the name could be passed in using the get_full_name() method. See the following code examples.
// Class containing a covergroup
class my_cov_mon;

  covergroup my_cg(string instance_name);
    option.per_instance = 1;
    option.name = instance_name; // identity string for this instance
    // coverpoints omitted ...
  endgroup

  function new(string instance_name);
    my_cg = new(instance_name); // name passed in at construction
  endfunction

endclass: my_cov_mon
// UVM component containing a covergroup, named via get_full_name()
class my_cov_mon extends uvm_subscriber #(my_txn);

  covergroup my_cg(string instance_name);
    option.per_instance = 1;
    option.name = instance_name;
    // coverpoints omitted ...
  endgroup

  function new(string name = "my_cov_mon", uvm_component parent = null);
    super.new(name, parent);
    my_cg = new(get_full_name()); // UVM hierarchical path used as the name
  endfunction

  function void write(my_txn t);
    my_cg.sample(); // sample on each incoming transaction
  endfunction

endclass: my_cov_mon
A covergroup can also be named programmatically using the covergroup set_inst_name() built-in method.
// UVM covergroup-based component
class my_cov_mon extends uvm_subscriber #(my_txn);

  covergroup my_cg;
    // coverpoints omitted ...
  endgroup

  function new(string name = "my_cov_mon", uvm_component parent = null);
    super.new(name, parent);
    my_cg = new();
  endfunction

  function void build_phase(uvm_phase phase);
    my_cg.set_inst_name("TLB_coverage"); // Sets the instance name
    //...
  endfunction: build_phase

  function void write(my_txn t);
    my_cg.sample(); // sample on each incoming transaction
  endfunction

endclass: my_cov_mon
The UART Line Control Register (LCR) parity field encodings, as used in the covergroup below:

LCR[5:3]  Parity
3'b??0    No parity
3'b001    Odd parity
3'b011    Even parity
3'b101    Stick 1 parity
3'b111    Stick 0 parity
// 'Before': a single coverpoint with auto-generated bins
covergroup tx_word_format_cg;

  coverpoint LCR[5:0];

endgroup: tx_word_format_cg
// 'After': labeled coverpoints with named bins for each register field
covergroup tx_word_format_cg;

  WORD_LENGTH: coverpoint lcr[1:0] {
    bins bits_5 = {0};
    bins bits_6 = {1};
    bins bits_7 = {2};
    bins bits_8 = {3};
  }

  PARITY: coverpoint lcr[5:3] {
    bins no_parity = {3'b000, 3'b010, 3'b100, 3'b110};
    bins even_parity = {3'b011};
    bins odd_parity = {3'b001};
    bins stick1_parity = {3'b101};
    bins stick0_parity = {3'b111};
  }

  WORD_FORMAT: cross WORD_LENGTH, PARITY; // cross products reflect the bin labels

endgroup: tx_word_format_cg
In order to check that all possible word formats have been transmitted, we could implement a covergroup by creating a coverpoint for LCR[5:0] and not specifying any bins. This would create a set of default bins, one for each possible value of the register, as shown in the first code example. If the functional coverage collected samples these bits at least once, then there is no problem; but if not, then it is reasonably difficult to figure out which bin corresponds to which condition - see the 'before' screen shot from the Questa covergroup browser. Here, not using labels has caused the simulator to use auto-bins, which means that the missing bin values need to be converted to binary and then mapped to the register fields to identify the missing configurations.

A better way to implement the covergroup is to use a labeled coverpoint for each register field, and then use the bins syntax for each of the values in the register truth table. When this is simulated, the cross products created reflect the different bin labels, which makes it much easier to determine which functional coverage conditions have not been sampled. It also makes it easier to see whether there are any gross coverage conditions that have been missed. See the 'after' screen shot from the Questa covergroup GUI for the refactored covergroup.
get_inst_coverage option

To help with the scenario where the merge_instances option has been enabled, the option.get_inst_coverage variable can be set to 1 to enable the SystemVerilog $get_inst_coverage() system call to return the coverage for an individual instance of a covergroup, therefore allowing the coverage for all individual instances to be checked. If the merge_instances option is set to 0, then the get_inst_coverage variable has no effect.

Summary

The interaction between the per_instance and merge_instances settings is as follows:
option.per_instance | type_option.merge_instances | Coverage reporting behaviour
0 | 0 | Overall coverage reported as a weighted average of the coverage for all instances of the covergroup
1 | 0 | Overall coverage reported as a weighted average of the coverage for all instances of the covergroup, and broken out for each instance of the covergroup
0 | 1 | Overall coverage reported as a merge of the coverage for the individual instances of the covergroup
1 | 1 | Overall coverage reported as a merge of individual coverage results, get_inst_coverage() enabled, coverage reporting broken out for each instance of the covergroup
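As an illustrative sketch of these settings in combination (all names here are hypothetical, not from the cookbook examples):

module cov_options_example;

  bit [1:0] mode;

  covergroup settings_cg(string inst_name);
    option.name = inst_name;
    option.per_instance = 1;         // Break results out for each instance
    option.get_inst_coverage = 1;    // Enable per-instance coverage queries
    type_option.merge_instances = 1; // Overall coverage is a merge of the instances
    MODE: coverpoint mode;
  endgroup: settings_cg

  settings_cg cg_a = new("cg_a");
  settings_cg cg_b = new("cg_b");

  initial begin
    mode = 2'b01; cg_a.sample();
    mode = 2'b10; cg_b.sample();
    // Query the coverage of one instance rather than the merged type coverage:
    $display("cg_a coverage = %0.2f", cg_a.get_inst_coverage());
  end

endmodule: cov_options_example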
With the APB3 protocol, a single master can interface to several slave peripheral devices. The master generates a set of control fields for address, write, and write data which are common to all the slaves. Each slave is selected by an individual peripheral select line (PSEL) and then enabled by a common PENABLE signal. Each slave generates response signals - ready, read data and status - which are multiplexed back to the master. The block diagram shows a typical APB3 peripheral block. The timing relationship between the APB3 signals is shown in the timing diagram below.
Another way to specify this is to require that all of the bus signals must be in a known state all of the time; however, this may not always be practical, and the properties defined here are the minimum for the protocol to work correctly in all conditions. See the unknown signal properties section of the example for an example implementation.
Timing Relationships
The timing relationships between the signals in the protocol can be described using sequences and properties. If a covered sequence completes, or an asserted and covered property passes, then functional coverage can be assumed for the function in question. For the APB3 protocol, the following temporal relationships can be defined:

- Once PREADY is sampled at logic 1, PENABLE shall go low by the next clock edge
- When a PSEL line goes to a logic 1, then the following signals shall be stable until the end of the cycle in which PREADY is sampled at a logic 1:
  - PSEL
  - PWRITE
  - PADDR
  - PWDATA (iff PWRITE is at logic 1)
- There shall be at least one clock cycle where PENABLE is at logic 0, between bus transfers
- When a PSEL line goes to a logic 1, then PENABLE shall go to a logic 1 on the following clock edge

See the Timing Relationships section on the example page for an implementation of these properties.
Other Properties
There may be other protocol rules which are not strictly temporal in nature. For the APB3 protocol, the following property holds: only one PSEL line shall be active at a logic 1 at any time. See the Other Properties section of the examples page for an implementation.
Functional Coverage
In addition to the functional coverage represented by the protocol assertions which check for valid transfers, we need to check that all possible types of transfer have occurred. This is best done by using data coverage on the various bus fields to check that we have seen transfers complete for each of the valid values. The fields that are relevant to bus protocol functional coverage are:

- PSEL - that all PSEL lines on the bus have been seen to be active, i.e. transfers occurred to all peripherals on the bus
- PWRITE - that we have seen reads and writes take place
- PSLVERR - that we have seen normal and error responses occur

Creating a cross product between these fields checks that all types of transfer have occurred between the master and each slave on the APB3 bus. See the Functional Coverage section on the examples page for an implementation. Other types of functional coverage that could be collected would be:

- Peripheral delay - checking that a range of peripheral delays have been observed
- Peripheral address ranges - checking that specific address ranges have been accessed

However, these are likely to be design specific and should be collected using a separate monitor.
Section | Description | Coverage Type | Priority

Timing Relationship checks
PENABLE de-assertion | PENABLE is de-asserted once PREADY becomes active | Assertion, Cover directive | 1
PSEL to PENABLE | There is only one clock delay between PSEL and PENABLE | Assertion, Cover directive | 1
Signal Stability | When PSEL becomes active, the PWRITE, PADDR, and PWDATA signals should be stable to the end of the cycle | Assertion, Cover directive | 1

Other Checks
PSEL Unique | Only one PSEL line is active at a time | Assertion | 1

Functional Coverage
APB3 Protocol | All types of APB3 protocol transfers have taken place with all types of response for all active PSEL lines | Covergroup | 2
// Reusable property to check that a signal is in a known state
property SIGNAL_VALID(signal);
  @(posedge PCLK)
    !$isunknown(signal);
endproperty: SIGNAL_VALID
// Reusable property to check that if a PSEL is active, then
// the signal is in a known state (property body reconstructed):
property PSEL_SIGNAL_VALID(signal);
  @(posedge PCLK)
    $onehot(PSEL) |-> !$isunknown(signal);
endproperty: PSEL_SIGNAL_VALID
// Check that write data is in a known state if a write
property PWDATA_SIGNAL_VALID;
  @(posedge PCLK)
    ($onehot(PSEL) && PWRITE) |-> !$isunknown(PWDATA);
endproperty: PWDATA_SIGNAL_VALID
// Check that if PENABLE is active, then the signal is in a known state
property PENABLE_SIGNAL_VALID(signal);
  @(posedge PCLK)
    $rose(PENABLE) |-> !$isunknown(signal)[*1:$] ##1 $fell(PENABLE);
endproperty: PENABLE_SIGNAL_VALID
// Check that read data is in a known state if a read
property PRDATA_SIGNAL_VALID;
  @(posedge PCLK)
    ($rose(PENABLE && !PWRITE && PREADY)) |-> !$isunknown(PRDATA)[*1:$] ##1 $fell(PENABLE);
endproperty: PRDATA_SIGNAL_VALID
Timing Relationships
The monitor implements the timing relationships described in English on the previous page. The functional coverage strategy is to assume that if these assertions do not fail, but are seen to complete via a cover directive, then they contribute valid functional coverage:
// PENABLE goes low once PREADY is sampled
property PENABLE_DEASSERTED;
  @(posedge PCLK)
    $rose(PENABLE && PREADY) |=> !PENABLE;
endproperty: PENABLE_DEASSERTED
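Following the pattern used for the other properties in the monitor, the property is then both asserted and covered (the directive labels here are assumed):

PENABLE_DEASSERT:     assert property(PENABLE_DEASSERTED);
COV_PENABLE_DEASSERT: cover property(PENABLE_DEASSERTED);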
// From PSEL active to PENABLE active is 1 cycle
property PSEL_TO_PENABLE_ACTIVE;
  @(posedge PCLK)
    (!$stable(PSEL) && $onehot(PSEL)) |=> PENABLE;
endproperty: PSEL_TO_PENABLE_ACTIVE
// From PSEL being active, the signal must be stable until end of cycle
property PSEL_ASSERT_SIGNAL_STABLE(signal);
  @(posedge PCLK)
    (!$stable(PSEL) && $onehot(PSEL)) |-> $stable(signal)[*1:$] ##1 $fell(PENABLE);
endproperty: PSEL_ASSERT_SIGNAL_STABLE
PSEL_STABLE:   assert property(PSEL_ASSERT_SIGNAL_STABLE(PSEL));
PWRITE_STABLE: assert property(PSEL_ASSERT_SIGNAL_STABLE(PWRITE));
PADDR_STABLE:  assert property(PSEL_ASSERT_SIGNAL_STABLE(PADDR));
PWDATA_STABLE: assert property(PSEL_ASSERT_SIGNAL_STABLE(PWDATA & PWRITE));

COV_PSEL_STABLE:   cover property(PSEL_ASSERT_SIGNAL_STABLE(PSEL));
COV_PWRITE_STABLE: cover property(PSEL_ASSERT_SIGNAL_STABLE(PWRITE));
COV_PADDR_STABLE:  cover property(PSEL_ASSERT_SIGNAL_STABLE(PADDR));
COV_PWDATA_STABLE: cover property(PSEL_ASSERT_SIGNAL_STABLE(PWDATA & PWRITE));
Other Properties
The monitor checks that only one PSEL line is active at a logic 1 at any point in time. Since this property is checked on every clock cycle, if there are no failures then it implies functional coverage.
// Check that only one PSEL line is valid at a time:
property PSEL_ONEHOT0;
  @(posedge PCLK)
    $onehot0(PSEL);
endproperty: PSEL_ONEHOT0
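As with the timing properties, the check is asserted, and a cover directive can be added so that a pass contributes functional coverage (directive labels assumed):

PSEL_ONEHOT:     assert property(PSEL_ONEHOT0);
COV_PSEL_ONEHOT: cover property(PSEL_ONEHOT0);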
Functional Coverage
To check that we have seen transfers complete correctly for each of the possible protocol conditions for each of the peripherals on the bus, we implement an array of covergroups, one per peripheral, each of which collects the protocol coverage specific to that peripheral. The covergroups are sampled when a simple sequence holds. Note that, to improve performance, each covergroup is only sampled when the relevant PSEL line is true.
// Functional coverage for the APB transfers:
//
// Have we seen all possible PSELs activated?
// Have we seen reads/writes to all slaves?
// Have we seen good and bad PSLVERR results from all slaves?
covergroup APB_accesses_cg(int id); // 'id' identifies the PSEL line (argument assumed from the construction loop below)
  // Since we check this for each PSEL we need to set the
  // per_instance flag
  option.per_instance = 1;
  RW: coverpoint PWRITE {
    bins read  = {0};
    bins write = {1};
  }

  ERR: coverpoint PSLVERR {
    bins err = {1};
    bins ok  = {0};
  }
  // Cross of read/write against the response status (cross assumed
  // from the description of the coverage above):
  APB_TRANSFERS: cross RW, ERR;

endgroup: APB_accesses_cg
// Declaration of the covergroup array, one per PSEL line
// (declaration assumed - NUM_SLAVES is the number of peripherals):
APB_accesses_cg APB_protocol_cg[NUM_SLAVES];

// Construction of the covergroups
initial begin
  foreach(APB_protocol_cg[i]) begin
    APB_protocol_cg[i] = new(i);
  end
end
// Sampling of the covergroups
sequence END_OF_APB_TRANSFER;
  @(posedge PCLK)
    $rose(PENABLE & PREADY);
endsequence: END_OF_APB_TRANSFER
cover property(END_OF_APB_TRANSFER)
  begin
    foreach(PSEL[i]) begin
      if(PSEL[i] == 1) begin
        APB_protocol_cg[i].sample();
      end
    end
  end
UART Overview
The function of a Universal Asynchronous Receiver Transmitter (UART) is to transmit and receive characters of differing formats over a pair of serial lines asynchronously. With an asynchronous serial link, there is no shared sampling clock; instead, the receive channel samples the incoming serial data stream with a clock that is 16x the data rate. When there is no data to transmit, the data lines are held high, and transmission of a data character commences by taking the data line low for one bit period - the start bit. The receiving end detects the start bit and then samples and unpacks the serial data stream, which can consist of between 5 and 8 bits of data, optional parity, and then a stop bit which is always a 1.
Register Map
The UART design in this example is based on the industry standard 16550a UART. It has 10 registers which control its operation; in a system, these are used by software to control the device and to send and receive characters. The transmit and receive paths are buffered with 16 word deep FIFOs. The register map is summarised here:
Name | Address | Width | Description
Receive Buffer (RX) | 0x0 | 8 | Receive data FIFO output
Transmit Buffer (TX) | 0x0 | 8 | Transmit data FIFO input
Interrupt Enable (IER) | 0x4 | 8 | Enables for UART interrupts
Interrupt Identification (IIR) | 0x8 | 8 | Interrupt status
FIFO Control (FCR) | 0x8 | 8 | Set receive data FIFO thresholds
Line Control (LCR) | 0xC | 8 | Sets the format of the UART data word
Modem Control (MCR) | 0x10 | 8 | Used to control the modem interface outputs
Line Status (LSR) | 0x14 | 8 | Transmit and receive channel status
Modem Status (MSR) | 0x18 | 8 | Used to monitor the modem interface inputs
Divisor 1 | 0x1C | 8 | LSB of the 16 bit divider
Divisor 2 | 0x20 | 8 | MSB of the 16 bit divider
For the UVM testbench, a UVM register model will be written to abstract the stimulus for configuring and controlling the operation of the UART. One benefit of using this register model is that we can reference it for the functional coverage model. For more details on the UART functionality and the detailed register map, please refer to the datasheet.
External Interfaces
The UART block has a number of discrete interfaces which need to be driven or monitored. The UART example testbench is implemented using UVM, therefore the driving and monitoring of these interfaces will be done by Universal Verification Components (UVCs), or agents. If the testbench were implemented using another methodology, then BFM or BFM-like models would be used; however, the principles of how you model and collect coverage are essentially the same. The UART has the following external interfaces which will need to be driven and monitored in the testbench:

- APB Host interface - requires an APB agent
- TX Serial line - requires a passive UART agent
- RX Serial line - requires an active UART agent
- Modem interface - requires a simple parallel I/O agent
- Interrupt line - requires a monitor
Testbench Architecture
The UVM testbench architecture used for this example is shown in the block diagram.
An outline functional test plan for the UART has been created as part of the process of mapping its features to test cases and functional coverage.
In order to check that the transmit channel is working correctly, we can compare the content of the analysis transaction written by the passive UART monitor when a character is received with the character originally written to the transmit buffer of the UART. This implies scoreboard analysis connections to the UART agent and the APB agent. The UART transmit buffer writes will have to be buffered in a FIFO structure in the scoreboard so that they can be compared with the characters received by the UART, as sketched below. The transmit channel has two buffer status bits (TX empty and TX FIFO empty) which are read back from the Line Status Register; these need to be tested by the stimulus generation path. There is also a TX FIFO empty status interrupt, which is discussed in the section on interrupts.
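A minimal sketch of this buffering arrangement is shown below; the class and transaction type names are hypothetical and not taken from the example testbench:

// Transmit channel scoreboard: buffers TX buffer writes seen by the
// APB monitor and compares them with characters received by the
// passive UART monitor
class uart_tx_scoreboard extends uvm_component;

  `uvm_component_utils(uart_tx_scoreboard)

  // Connected to the APB agent's analysis port; only writes to the
  // UART TX buffer should be forwarded to this FIFO
  uvm_tlm_analysis_fifo #(apb_seq_item) apb_fifo;

  // Written by the passive UART agent's monitor
  uvm_analysis_imp #(uart_seq_item, uart_tx_scoreboard) uart_export;

  function new(string name = "uart_tx_scoreboard", uvm_component parent = null);
    super.new(name, parent);
    apb_fifo = new("apb_fifo", this);
    uart_export = new("uart_export", this);
  endfunction

  // Each received character is compared against the oldest buffered
  // write to the transmit buffer
  function void write(uart_seq_item t);
    apb_seq_item expected;
    if (apb_fifo.try_get(expected)) begin
      if (expected.data[7:0] != t.data)
        `uvm_error("TX_SB", "Transmitted character does not match TX buffer write")
    end
    else begin
      `uvm_error("TX_SB", "Character received with no outstanding TX buffer write")
    end
  endfunction

endclass: uart_tx_scoreboard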
We need to see all possible permutations of these configuration settings (the LCR word format fields) in order to say that we have achieved functional coverage for the transmit channel. An example implementation of the SystemVerilog covergroup used to collect this functional coverage can be found in the example UART testbench.

Transmit channel coverage summary
Coverage Criterion | Transmit Channel Coverage
Which values are important? | LCR[5:0] - defining all permutations of the UART serial word format
What are the dependencies between the values? | No dependencies
Are there illegal conditions? | No, all permutations are valid
When is the right time to sample? | When a character has been transmitted
When is the data invalid? | N/A
The checking mechanism used by the receive scoreboard is to compare the data sent by the UART agent with the data read from the receive buffer of the UART device. Any errors inserted by the UART agent need to be seen to be detected by the design, either as bits set in the Line Status Register (LSR) or by the generation of a line status interrupt. The checks that need to be made by the testbench for the receive channel include:

- That a start bit is detected correctly
- That parity has been received correctly - if not, a parity error is generated
- That at least one stop bit has been received - if not, a framing error is generated
- That a data overrun condition is detected correctly
- That the data received flag works correctly
- That a break condition is detected correctly
There are a number of receive channel interrupt conditions that are considered in the section on interrupts.
What are the dependencies between the values? | For error free RX conditions, DR and all word formats; for injected error RX, the cross product of the LCR & LSR bits
Are there illegal conditions? | Cannot have OE with no DR valid
When is the right time to sample? | When an RX character has been received and DR is valid
When is the data invalid? | N/A
The modem interface outputs are controlled by writes to the Modem Control Register (MCR), and the modem inputs are read back through the Modem Status Register (MSR). There is a loop-back mode which also needs to be checked: in this mode, writes to the MCR are reflected back in the MSR, and none of the external signals change or cause any changes. The modem scoreboard checks this functionality separately from the normal mode of operation. The testbench contains a modem agent that has a driver for the modem inputs and a monitor which sends transactions to the modem scoreboard when any of the modem signals (inputs or outputs) change. The scoreboard also receives transactions from the APB agent's monitor so that it can keep track of UART register accesses; this allows it to know when a modem output should have changed due to a write to the MCR, and when a modem input change should be reflected in a read from the MSR.
What are the dependencies between the values? | Each of the modem signals is orthogonal, but the loopback mode creates a dependency between the MCR bits and the MSR bits; for coverage, all permutations are relevant
Are there illegal conditions? | No
When is the right time to sample? | When a change occurs on the modem interface, or there is a write to the MCR, as determined by the modem scoreboard
When is the data invalid? | Immediately after a change in the loopback mode, handled by the scoreboard
UART Interrupts

Testing UART interrupts
The testbench contains a monitor for the UART interrupt line, and some of the test cases have stimulus which enables the various interrupts and then handles the interrupt conditions as they occur. The scoreboarding within the testbench checks the validity of each interrupt condition dependent on its source. Interrupts can be generated by the UART for the following conditions:

- Transmit FIFO empty
- Receive data FIFO threshold reached (1, 4, 8, 14 characters)
- Receiver line status - parity error, framing error or break condition
- Receiver timeout - at least one character in the FIFO, but no receive channel activity for at least 4 character times
- Modem status change
What are the dependencies between the values? | Interrupts should only occur if they are enabled; need to see all valid permutations of interrupt enables and interrupt sources
Are there illegal conditions? | Invalid conditions are interrupt sources reported when an interrupt type is not enabled
When is the right time to sample? | For the interrupt enables, when an interrupt occurs; for interrupt IDs, when an interrupt occurs, followed by a read from the IIR register
When is the data invalid? | N/A
Receiver FIFO Threshold Interrupt

Coverage Criterion | UART interrupt coverage summary
Which values are important? | LCR[5:0] - defining the different word formats; FCR[7:6] - defining the different FIFO threshold values
What are the dependencies between the values? | Need a cross between the LCR and FCR bits to ensure that FIFO threshold interrupts have occurred for all possible permutations
Are there illegal conditions? | None
When is the right time to sample? | When an RX FIFO threshold interrupt occurs
When is the data invalid? | N/A
Receiver Line Status Interrupt

Coverage Criterion | UART interrupt coverage summary
Which values are important? | LSR[4:1] - defining the different types of RX line status
What are the dependencies between the values? | None, each status bit has a distinct source
Are there illegal conditions? | When the break condition occurs, PE and FE are not valid
When is the right time to sample? | When a line status interrupt occurs, followed by a read from the LSR
When is the data invalid? | N/A
Transmitter Empty Interrupt

Coverage Criterion | UART interrupt coverage summary
Which values are important? | LCR[5:0] - defining the UART serial format
What are the dependencies between the values? | Cross product defining all permutations of the word format
Are there illegal conditions? | None
When is the right time to sample? | When a TX empty interrupt occurs, followed by a read from the LSR
When is the data invalid? | N/A
Modem Status Interrupt

Coverage Criterion | UART interrupt coverage summary
Which values are important? | MSR[3:0] - the modem input signal change flags
What are the dependencies between the values? | None, each signal is orthogonal
Are there illegal conditions? | None
When is the right time to sample? | When a modem status interrupt occurs, followed by a read from the MSR
When is the data invalid? | The MSR flags are reset on read, so a second read will return invalid status
What are the dependencies between the values? | DIV1 & DIV2 are concatenated; otherwise no dependencies
Are there illegal conditions? | The divider cannot have a value of 0
When is the right time to sample? | On the rising edge of the BAUD_O signal
When is the data invalid? | If the divider registers are being programmed, or have just been programmed, in which case the divide ratio will not match the register content (this is not an error)
Register Interface
Testing the register interface
The register interface is implicitly tested by the functional stimulus for each of the various test cases. There is a specific test case to check that the register reset values are correct.
What are the dependencies between the values? | Need to cross the valid addresses with the read/write bit to get the register access space
Are there illegal conditions? | The MSR and LSR registers are read only, so writes to these registers are invalid
When is the right time to sample? | When an APB bus transaction completes
When is the data invalid? | N/A
UART Test Plan
Section | Description | Coverage Type | Priority

Registers
Reset Values | All registers return the specified reset values | Test result | 1
Register Accesses | All registers have been accessed for all possible access modes | Covergroup, cross | 1
Bit level register accesses | All read-write bits in the registers toggle correctly | Test result | 1
APB Protocol | The APB protocol has been tested in all modes | APB Monitor | 1

Transmitter
Character formats | All possible character formats are transmitted correctly | Covergroup, cross | 1
TX FIFO Empty flag | The FIFO empty flag is set when the FIFO is empty and is read back correctly | Design Assertion, Covergroup | 1
TX empty flag | The transmit empty flag is set correctly and is read back correctly | Design Assertion, Covergroup | 1

Receiver
Character formats | All possible character formats are received correctly | Covergroup, cross | 1
Data Received Flag | The data received flag is set when data is available and is read back correctly | Design Assertion, Covergroup | 1

RX Line Status
Framing Error | Framing errors are detected for one or two stop bits | Design Assertion, Covergroup | 2
Parity Error | Parity errors are detected for all parity modes | Design Assertion, Covergroup | 2
Break Indication | A break condition is detected correctly for all character formats | Covergroup, cross | 2
Overrun Error | RX overrun is detected for all character formats | Covergroup, cross | 2
FIFOE Status | The FIFO error condition is valid for all error/indication types | Covergroup, cross | 2
Error combinations | Any valid combination of error/indicator has been observed | Covergroup, cross | 2

Modem Interface
Modem Outputs | Modem output bits are routed to the right modem status bits | Covergroup, Cross | 3
Modem Inputs | All combinations of modem input values have been seen | Covergroup, Cross | 3
Modem input change flags | The modem input status change signals work correctly | Design Assertion, Covergroup | 3

Interrupts
Interrupt enables | All combinations of the interrupt enable bits have been used | Covergroup, cross | 1
Interrupt IDs | All valid interrupt IDs have been detected | Covergroup, cross | 1
Transmit FIFO empty interrupt | Seen for all possible character formats | Covergroup, cross | 1
Receive FIFO threshold interrupt | All possible RX FIFO threshold values checked | Covergroup, cross | 2
Receive Line Status Interrupt | Interrupts generated for all possible combinations of errors and indicators for all character formats | Covergroup, cross | 1
Transmit empty interrupt | Generated for all character formats | Covergroup, cross | 1
Modem Status interrupt | Generated for all combinations of the signal change bits | Covergroup | 3
Receive timeout interrupt | Has been checked for the shortest and longest character format and 4 other formats | Covergroup | 4

Baud Rate Divider values
Divider operation | Check UART operation for a range of baud rate divider values | Covergroup | 1
Divider ratio | Check the baud rate divider ratio for a selection of values via the baud rate divider output | Covergroup | 2

Code Coverage
Statement coverage | Check each executable line of the RTL has been covered | Code coverage | 1
Branch coverage | Check each branch in the RTL has been taken | Code coverage | 1
FSM coverage | Each arc in the RTL FSMs has been taken | Code coverage | 1
Notes:
1. The priority column indicates the relative importance of each feature. Items marked priority 1 will be verified first, followed by priority 2 items, down to priority 4.
2. The APB interface behaviour is checked by inserting the APB protocol monitor into the testbench, connected to the APB port on the UART; its functional coverage will be merged with the other UART functional coverage.
3. Several checks are performed using assertions which the designer has implemented in the design; these are included in the table as Design Assertions.
4. Code coverage is included as a category in the test plan so that it can be tracked.
// UVM subscriber component collecting transmit word format coverage
// (class declaration and covergroup header reconstructed; the
// transaction type uart_txn is an assumed name):
class uart_tx_coverage_monitor extends uvm_subscriber #(uart_txn);

  `uvm_component_utils(uart_tx_coverage_monitor)

  covergroup tx_word_format_cg() with function sample(bit[5:0] lcr);
    WORD_LENGTH: coverpoint lcr[1:0] {
      bins bits_5 = {0};
      bins bits_6 = {1};
      bins bits_7 = {2};
      bins bits_8 = {3};
    }
    PARITY: coverpoint lcr[5:3] {
      bins no_parity     = {3'b000, 3'b010, 3'b100, 3'b110};
      bins even_parity   = {3'b011};
      bins odd_parity    = {3'b001};
      bins stick1_parity = {3'b101};
      bins stick0_parity = {3'b111};
    }

    WORD_FORMAT: cross WORD_LENGTH, PARITY; // (cross assumed from the test plan)

  endgroup: tx_word_format_cg
  function new(string name = "uart_tx_coverage_monitor", uvm_component parent = null);
    super.new(name, parent);
    tx_word_format_cg = new();
  endfunction

  // Sample the covergroup for each transmitted character
  // (write method assumed; t.lcr is the LCR value at transmission):
  function void write(T t);
    tx_word_format_cg.sample(t.lcr);
  endfunction
endclass: uart_tx_coverage_monitor
// UVM subscriber component collecting modem interface coverage
// (class declaration and covergroup headers reconstructed; the
// transaction type apb_txn is an assumed name):
class uart_modem_coverage_monitor extends uvm_subscriber #(apb_txn);

  `uvm_component_utils(uart_modem_coverage_monitor)

  // Covergroup for the modem output control bits in the MCR:
  covergroup mcr_settings_cg() with function sample(bit[4:0] mcr);
    DTR:      coverpoint mcr[0];
    RTS:      coverpoint mcr[1];
    OUT1:     coverpoint mcr[2];
    OUT2:     coverpoint mcr[3];
    LOOPBACK: coverpoint mcr[4];

    MCR_SETTINGS: cross DTR, RTS, OUT1, OUT2, LOOPBACK; // (cross assumed)

  endgroup: mcr_settings_cg

  // Covergroup for the modem status inputs in the MSR, crossed with
  // the loopback setting (header assumed):
  covergroup msr_inputs_cg() with function sample(bit[7:0] msr, bit loopback);
    DCTS: coverpoint msr[0];
    DDSR: coverpoint msr[1];
    TERI: coverpoint msr[2];
    DDCD: coverpoint msr[3];
    CTS:  coverpoint msr[4];
    DSR:  coverpoint msr[5];
    RI:   coverpoint msr[6];
    DCD:  coverpoint msr[7];
    LOOPBACK: coverpoint loopback;
MSR_INPUTS: cross DCTS, DDSR, TERI, DDCD, CTS, DSR, RI, DCD, LOOPBACK;
endgroup: msr_inputs_cg
uart_reg_block rm;
  function new(string name = "uart_modem_coverage_monitor", uvm_component parent = null);
    super.new(name, parent);
    mcr_settings_cg = new();
    msr_inputs_cg = new();
  endfunction
  // Analysis write method: decode APB accesses to the MSR (read) and
  // MCR (write) and sample the appropriate covergroup
  // (function declaration and local variable reconstructed):
  function void write(T t);
    uvm_reg_data_t data;

    if((t.addr[7:0] == 8'h18) && (t.we == 0)) begin
      data = rm.MCR.get_mirrored_value();
      msr_inputs_cg.sample(t.data[7:0], data[4]);
    end
    else if((t.addr[7:0] == 8'h10) && (t.we == 1)) begin
      mcr_settings_cg.sample(t.data[4:0]);
    end
endfunction: write
endclass: uart_modem_coverage_monitor
UART Interrupt Coverage
There are a number of covergroups required to check the UART interrupt functional coverage.

Interrupt enable coverage

This covergroup is sampled every time an interrupt occurs. It checks the state of the IER register, decoding the bit patterns in order to determine which interrupt combinations have not been enabled.
covergroup int_enable_cg() with function sample(bit[3:0] en);
  INT_SOURCE: coverpoint en {
    bins rx_data_only                      = {4'b0001};
    bins tx_data_only                      = {4'b0010};
    bins rx_status_only                    = {4'b0100};
    bins modem_status_only                 = {4'b1000};
    bins rx_tx_data                        = {4'b0011};
    bins rx_status_rx_data                 = {4'b0101};
    bins rx_status_tx_data                 = {4'b0110};
    bins rx_status_rx_tx_data              = {4'b0111};
    bins modem_status_rx_data              = {4'b1001};
    bins modem_status_tx_data              = {4'b1010};
    bins modem_status_rx_tx_data           = {4'b1011};
    bins modem_status_rx_status            = {4'b1100};
    bins modem_status_rx_status_rx_data    = {4'b1101};
    bins modem_status_rx_status_tx_data    = {4'b1110};
    bins modem_status_rx_status_rx_tx_data = {4'b1111};
    illegal_bins no_enables = {0}; // If we get an interrupt with no enables it's an error
  }
endgroup: int_enable_cg
Interrupt source coverage

This covergroup checks that all possible interrupt status conditions have been sampled. It crosses the content of the IER with the IIR, and also filters out conditions which cannot occur. The ignore_bins in the cross ensure that if an interrupt is disabled, then interrupt IDs of that type are ignored in the cross. It is sampled when there is an interrupt followed by a read from the IIR register.
covergroup int_enable_src_cg() with function sample(bit[3:0] en, bit[3:0] src);
  IEN: coverpoint en {
    bins rx_data_only                      = {4'b0001};
    bins tx_data_only                      = {4'b0010};
    bins rx_status_only                    = {4'b0100};
    bins modem_status_only                 = {4'b1000};
    bins rx_tx_data                        = {4'b0011};
    bins rx_status_rx_data                 = {4'b0101};
    bins rx_status_tx_data                 = {4'b0110};
    bins rx_status_rx_tx_data              = {4'b0111};
    bins modem_status_rx_data              = {4'b1001};
    bins modem_status_tx_data              = {4'b1010};
    bins modem_status_rx_tx_data           = {4'b1011};
    bins modem_status_rx_status            = {4'b1100};
    bins modem_status_rx_status_rx_data    = {4'b1101};
    bins modem_status_rx_status_tx_data    = {4'b1110};
    bins modem_status_rx_status_rx_tx_data = {4'b1111};
    illegal_bins no_enables = {0}; // If we get an interrupt with no enables it's an error
  }
  // Interrupt identification values read from the IIR
  // (coverpoint reconstructed from the ignore_bins below):
  IIR: coverpoint src {
    bins modem_status   = {0};
    bins tx_data        = {2};
    bins rx_data        = {4};
    bins rx_line_status = {6};
    bins rx_timeout     = {4'hc};
  }

  ID_IEN: cross IIR, IEN {
    ignore_bins rx_not_enabled = binsof(IEN) intersect{4'b0010, 4'b0100, 4'b0110, 4'b1000, 4'b1010, 4'b1100, 4'b1110} &&
                                 binsof(IIR) intersect{4};
    ignore_bins tx_not_enabled = binsof(IEN) intersect{4'b0001, 4'b0100, 4'b0101, 4'b1000, 4'b1001, 4'b1100, 4'b1101} &&
                                 binsof(IIR) intersect{2};
    ignore_bins rx_line_status_not_enabled = binsof(IEN) intersect{4'b0001, 4'b0010, 4'b0011, 4'b1000, 4'b1001, 4'b1010, 4'b1011} &&
                                             binsof(IIR) intersect{4'hc, 6};
    ignore_bins modem_status_not_enabled = binsof(IEN) intersect{4'b0001, 4'b0010, 4'b0011, 4'b0100, 4'b0101, 4'b0110, 4'b0111} &&
                                           binsof(IIR) intersect{0};
  }
endgroup: int_enable_src_cg
Receive FIFO threshold interrupt coverage

The receive FIFO threshold level is determined by bits [7:6] of the FCR register. When a receive threshold interrupt occurs, the word format is crossed with the FCR bits to ensure that all possible combinations occur. It is sampled when an interrupt occurs, followed by a read from the IIR register that indicates a FIFO threshold interrupt.
covergroup rx_word_format_int_cg() with function sample(bit[5:0] lcr, bit[1:0] fcr);
  WORD_LENGTH: coverpoint lcr[1:0] {
    bins bits_5 = {0};
    bins bits_6 = {1};
    bins bits_7 = {2};
    bins bits_8 = {3};
  }
  PARITY: coverpoint lcr[5:3] {
    bins no_parity     = {3'b000, 3'b010, 3'b100, 3'b110};
    bins even_parity   = {3'b011};
    bins odd_parity    = {3'b001};
    bins stick1_parity = {3'b101};
    bins stick0_parity = {3'b111};
  }
  FCR: coverpoint fcr {
    bins one      = {0};
    bins four     = {1};
    bins eight    = {2};
    bins fourteen = {3};
  }

  // Word format crossed with the FIFO threshold setting
  // (cross assumed from the description above):
  FORMAT_THRESHOLD: cross WORD_LENGTH, PARITY, FCR;
endgroup: rx_word_format_int_cg
Receive Line Status interrupt coverage

This covergroup is sampled when a line status interrupt occurs, followed by a read from the LSR register.
covergroup lsr_int_src_cg() with function sample(bit[7:0] lsr);
  LINE_STATUS_SRC: coverpoint lsr[4:1] {
    bins oe_only = {4'b0001};
    bins pe_only = {4'b0010};
    bins fe_only = {4'b0100};
    bins bi_only = {4'b1000, 4'b1100, 4'b1010, 4'b1110}; // BI active discounts pe & fe
    bins bi_oe   = {4'b1001, 4'b1101, 4'b1011, 4'b1111}; // BI active discounts pe & fe
    bins oe_pe   = {4'b0011};
    bins oe_fe   = {4'b0101};
    bins fe_pe   = {4'b0110};
    bins no_ints = {0};
  }
endgroup: lsr_int_src_cg
There are a few things to note about the bins in this covergroup:

- If a break occurs, then it is also likely to create framing and parity errors
- The receive line status interrupt enable also enables the RX timeout, which will not be detected by this covergroup - this is why there is a no_ints bin

Modem Status interrupt coverage

The modem status interrupt can be caused by one of four status bits becoming true. This covergroup checks for all four bits being active, and also for the error condition where none are active but a modem status interrupt has occurred. The MSR conditions are crossed with the MCR loopback bit, since they can be generated in normal and loopback mode. This covergroup is sampled when a modem status interrupt occurs, followed by a read from the MSR.
covergroup modem_int_src_cg() with function sample(bit[4:0] src);
  MODEM_INT_SRC: coverpoint src[3:0] {
    wildcard bins dcts = {4'b???1};
    wildcard bins ddsr = {4'b??1?};
    wildcard bins teri = {4'b?1??};
    wildcard bins ddcd = {4'b1???};
    illegal_bins error = {0};
  }

  // Loopback mode flag and its cross with the interrupt sources
  // (reconstructed from the description above - src[4] is assumed
  // to carry the MCR loopback bit):
  LOOPBACK: coverpoint src[4];

  MODEM_INT_LOOPBACK: cross MODEM_INT_SRC, LOOPBACK;
endgroup: modem_int_src_cg
Note that the fidelity of this covergroup is deliberately reduced: wildcard bins are used to check that each of the MSR interrupt source bits has been seen to be active, rather than checking all combinations. The reasoning is that each bit is orthogonal to the others, so there is no functional relationship between them.
Baud rate divider coverage

This covergroup checks that the 16 bit baud rate divider has been exercised over a representative set of values (the covergroup header is reconstructed; the bins are from the source):

covergroup baud_rate_cg() with function sample(bit[15:0] div);

  coverpoint div {
    bins div_ratio[] = {16'h1, 16'h2, 16'h4, 16'h8,
                        16'h10, 16'h20, 16'h40, 16'h80,
                        16'h100, 16'h200, 16'h400, 16'h800,
                        16'h1000, 16'h2000, 16'h4000, 16'h8000,
                        16'hfffe, 16'hfffd, 16'hfffb, 16'hfff7,
                        16'hffef, 16'hffdf, 16'hffbf, 16'hff7f,
                        16'hfeff, 16'hfdff, 16'hfbff, 16'hf7ff,
                        16'hefff, 16'hdfff, 16'hbfff, 16'h7fff,
                        16'h00ff, 16'hff00, 16'hffff};
  }

endgroup: baud_rate_cg
Register access coverage

This covergroup checks that all of the UART registers have been accessed for both reads and writes (the covergroup header and the RW bins are reconstructed; the address bins and cross below are from the source):

covergroup reg_access_cg() with function sample(bit we, bit[7:0] addr);

  RW: coverpoint we {
    bins read  = {0};
    bins write = {1};
  }
  ADDR: coverpoint addr {
    bins data    = {0};
    bins ier     = {8'h4};
    bins iir_fcr = {8'h8};
    bins lcr     = {8'hC};
    bins mcr     = {8'h10};
    bins lsr     = {8'h14};
    bins msr     = {8'h18};
    bins div1    = {8'h1c};
    bins div2    = {8'h20};
  }
  REG_ACCESS: cross RW, ADDR {
    ignore_bins read_only = binsof(ADDR) intersect {8'h14, 8'h18} &&
                            binsof(RW) intersect {1};
  }
endgroup: reg_access_cg
Datapath Coverage
What is a datapath block?
A datapath block takes an input data stream and implements a transform function that generates the output data. The transform function may have settings which change its characteristics, or it may be a fixed implementation. In its path from the input to the output, the data does not interact with other blocks, hence the term datapath. Examples of datapath blocks include custom DSP functions, modems, encoders and decoders, and error correction hardware. A datapath block is generally tested with meaningful, rather than random, data; the output is related to the input by the transform function and is therefore meaningful as well. The input to a datapath block is most likely generated from a software (C) based model of the system in which the function was originally modelled, and the output of the block is usually compared against the output from a golden reference model. In some cases the output data may require subjective testing. For instance, a video encoding block would require a video format signal as its input, and the encoded output would have to be checked visually to confirm that the result of the encoding was of an acceptable quality.
Functional coverage for a datapath block is usually focussed on its settings (sometimes referred to as the "knobs"), or the parameters which affect its transform function. The role of the functional coverage model is to check that the block has been tested with all desired combinations of parameter settings. The value of the data that is fed into the datapath block may also be relevant to the coverage since it could be used to prove that a combination of input values has been processed against each valid set of parameters.
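As a generic sketch (the block and its parameter names are hypothetical), a settings-oriented covergroup simply enumerates the 'knobs' and crosses them:

module dp_settings_cov;

  // Hypothetical datapath settings:
  bit [1:0] gain;   // four gain settings
  bit       bypass; // transform bypassed or active

  covergroup dp_settings_cg;
    GAIN:   coverpoint gain;
    BYPASS: coverpoint bypass;
    KNOBS:  cross GAIN, BYPASS; // all combinations of parameter settings
  endgroup: dp_settings_cg

  dp_settings_cg cg = new();

  // cg.sample() would be called whenever a new set of settings
  // takes effect on the block's input data

endmodule: dp_settings_cov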
In theory, the BiQuad filter design can handle a continuous, or infinite, range of possible input and co-efficient values, so the verification problem needs to be constrained to something practical. In this case, the IIR filter is going to be used as a programmable filter for audio data with frequencies between 50 Hz and 20 kHz, and it will be tested for correct operation as a Low Pass, High Pass and Band Pass filter over the frequency range, varying the co-efficient values to set the corner frequencies. The co-efficients are stored in registers which can be programmed using an APB interface. The input data will be a frequency swept sine wave, and the resultant output sine wave will be checked to make sure that the right level of attenuation has been achieved according to the intended characteristics of the filter. The diagram below illustrates the filter testbench architecture.
For each filter type, the filter parameters for corner frequencies will be tested at 200 Hz intervals in the 0 - 4 kHz range, and then at 1 kHz intervals in the 4 - 20 kHz range. This equates to 36 possible sets of co-efficient values (20 settings in the lower range plus 16 in the upper), each of which is valid for a particular corner frequency.
The input frequency sweep waveform will be sampled to ensure that it covers all the frequencies of interest, and this information should be crossed with the set of co-efficient values to ensure that all possible combinations have been observed. This strategy is summarised in the BiQuad IIR Filter Test Plan. In terms of sampling, the covergroups for a particular filter type should only be sampled when the filter has been configured in that mode, and they should be sampled when the input frequency crosses a frequency increment boundary.
Coverage Criterion | BiQuad Filter Coverage
Which values are important? | The calculated discrete values for the filter co-efficients, ordered by filter type; several discrete frequencies
What are the dependencies between the values? | The co-efficients should be crossed with the input frequency to check that all options have been tested
Are there illegal conditions? | No, since we are representing a sub-set of a continuous range of values, but some filter/frequency values are out of range
When is the right time to sample? | When the frequency sweep waveform is sampled at one of the frequencies of interest; the right covergroup needs to be sampled for the right type of filter (LP, HP, BP)
See the example implementation of the BiQuad functional coverage model for more details.
Covergroup Design
Each filter configuration is represented by a set of co-efficient values. These are effectively unique and can be separated out into groups of values that apply to each of the three filter types; these values then need to be crossed with the filter input frequency to check that coverage has been obtained for all possible combinations. One way to do this would be to create a single covergroup with separate coverpoints for each filter type, with bins for each combination of filter co-efficient values. However, at any particular time the BiQuad filter can only be configured to operate in one mode, so instead there is a covergroup for each of the filter types, and only one of the covergroups will be sampled at each particular frequency change. The code example shown is for the Low Pass filter type; the other covergroups differ only in terms of the co-efficient values.
class LP_FILTER_cg_wrapper extends uvm_object;
`uvm_object_utils(LP_FILTER_cg_wrapper)
  // Co-efficient values
  bit[23:0] b10;
  bit[23:0] b11;
  bit[23:0] b12;
  bit[23:0] a10;
  bit[23:0] a11;
  // Input frequency currently being applied (variable assumed):
  int frequency;

  // Covergroup (header reconstructed):
  covergroup LP_FILTER_cg;

    // Bins for frequency intervals:
    IP_FREQ: coverpoint frequency {
      bins HZ_100 = { 100 };
      bins HZ_200 = { 200 };
      bins HZ_400 = { 400 };
      bins HZ_800 = { 800 };
      bins HZ_1k  = { 1000 };
      // ... (bins for the intermediate frequencies elided)
      bins HZ_17k = { 17000 };
      bins HZ_18k = { 18000 };
      bins HZ_19k = { 19000 };
      bins HZ_20k = { 20000 };
    }
    // Co-efficient bins for different low pass knee frequencies:
    CO_EFFICIENTS: coverpoint {b10, b11, b12, a10, a11} {
      bins CE_200  = {120'h0002C10005830002C1FDA1603DAC66};
      bins CE_400  = {120'h000AD30015A6000AD3FB43263B6E74};
      bins CE_800  = {120'h0029C90053930029C9F68940373067};
      bins CE_1000 = {120'h004029008052004029F42E2B352ED0};
      bins CE_1200 = {120'h005ACF00B59E005ACFF1D4AA333FE8};
      bins CE_1400 = {120'h00798300F307007983EF7CF4316303};
      bins CE_1600 = {120'h009C11013822009C11ED27392F977E};
      bins CE_1800 = {120'h00C24501848B00C245EAD3A32DDCBA};
      // ....
      bins CE_16k  = {120'h1DC4A13B89431DC4A127B0D70F61AE};
      bins CE_17k  = {120'h20FA4441F48920FA4431EA5811FEBA};
      bins CE_18k  = {120'h246A9C48D538246A9C3C563415543B};
      bins CE_19k  = {120'h281DC4503B88281DC446FCC7197A49};
      bins CE_20k  = {120'h2C1D25583A4B2C1D2551E4AB1E8FEA};
    }

    // Cross of input frequency against co-efficient settings:
    LP_X: cross IP_FREQ, CO_EFFICIENTS;

  endgroup: LP_FILTER_cg

  // Construct the covergroup inside the wrapper's constructor and
  // chain the sample call (constructor and sample method reconstructed;
  // the transaction type and its frequency field are assumed):
  function new(string name = "LP_FILTER_cg_wrapper");
    super.new(name);
    LP_FILTER_cg = new();
  endfunction

  function void sample(biquad_txn t);
    frequency = t.frequency;
    LP_FILTER_cg.sample();
  endfunction: sample

endclass: LP_FILTER_cg_wrapper
A SystemVerilog covergroup instantiated inside a class has to be constructed in the class constructor method (new()). The Low Pass filter covergroup is instantiated inside a wrapper class; this allows it to be created when required by constructing the wrapper object. The covergroup's sample() method is then chained into the wrapper object's sample() method. This is the recommended way to implement covergroups in a class based environment. Inside the covergroup itself, there is a coverpoint for the frequency which has a set of bins corresponding to each of the input frequencies of interest. The coverpoint for the co-efficients is based on the concatenated value of all of the co-efficients (a 120 bit value), and the bins correspond to the co-efficient values for different configurations, from a 200 Hz knee frequency up to 20 kHz. The cross product of the two coverpoints is LP_X.
// UVM functional coverage component for the BiQuad filter
// (class declaration reconstructed; the analysis transaction type is assumed):
class biquad_functional_coverage extends uvm_subscriber #(biquad_txn);

  `uvm_component_utils(biquad_functional_coverage)
  // Filter mode is defined in the env configuration object,
  // together with the register model:
  biquad_env_config cfg;
  // Covergroups - one for each type of filter:
  LP_FILTER_cg_wrapper lp_cg;
  HP_FILTER_cg_wrapper hp_cg;
  BP_FILTER_cg_wrapper bp_cg;
  extern function new(string name = "biquad_functional_coverage", uvm_component parent = null);
  extern function void build_phase(uvm_phase phase);
  extern function void write(T t);
endclass: biquad_functional_coverage
function biquad_functional_coverage::new(string name = "biquad_functional_coverage", uvm_component parent = null);
  super.new(name, parent);
endfunction

// Construct the covergroup wrappers (build_phase implementation
// reconstructed - the original was lost at a page break):
function void biquad_functional_coverage::build_phase(uvm_phase phase);
  lp_cg = LP_FILTER_cg_wrapper::type_id::create("lp_cg");
  hp_cg = HP_FILTER_cg_wrapper::type_id::create("hp_cg");
  bp_cg = BP_FILTER_cg_wrapper::type_id::create("bp_cg");
endfunction: build_phase
function void biquad_functional_coverage::write(T t);
  // Update the filter co-efficients and then sample
  // according to the filter mode:
  case(cfg.mode)
    LP: begin
      lp_cg.b10 = cfg.RM.B10.f.value[23:0]; // full 24-bit co-efficient values (width assumed to match the bit[23:0] fields)
      lp_cg.b11 = cfg.RM.B11.f.value[23:0];
      lp_cg.b12 = cfg.RM.B12.f.value[23:0];
      lp_cg.a10 = cfg.RM.A10.f.value[23:0];
      lp_cg.a11 = cfg.RM.A11.f.value[23:0];
      lp_cg.sample(t);
    end
    HP: begin
      hp_cg.b10 = cfg.RM.B10.f.value[23:0];
      hp_cg.b11 = cfg.RM.B11.f.value[23:0];
      hp_cg.b12 = cfg.RM.B12.f.value[23:0];
      hp_cg.a10 = cfg.RM.A10.f.value[23:0];
      hp_cg.a11 = cfg.RM.A11.f.value[23:0];
      hp_cg.sample(t);
    end
    BP: begin
      bp_cg.b10 = cfg.RM.B10.f.value[23:0];
      bp_cg.b11 = cfg.RM.B11.f.value[23:0];
      bp_cg.b12 = cfg.RM.B12.f.value[23:0];
      bp_cg.a10 = cfg.RM.A10.f.value[23:0];
      bp_cg.a11 = cfg.RM.A11.f.value[23:0];
      bp_cg.sample(t);
    end
  endcase
endfunction: write
Although the functional coverage model has been implemented as a UVM class, the same principles could be applied to a module or interface based implementation.
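For instance, the frequency and mode coverage could equally well live in an interface bound into the testbench and sampled on a clock edge (a hypothetical sketch with assumed signal names, not part of the example code):

interface biquad_cov_if(input bit clk, input int frequency, input bit [1:0] mode);

  covergroup freq_mode_cg @(posedge clk);
    IP_FREQ: coverpoint frequency; // bins as in the class-based covergroup
    MODE:    coverpoint mode;      // LP, HP or BP
  endgroup: freq_mode_cg

  freq_mode_cg cg = new();

endinterface: biquad_cov_if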
SoC coverage example

Notice that the blocks are interconnected using an arbitrated Wishbone bus fabric. A RISC processor assembly with DDR memory, used for both firmware and traffic storage, is attached to the fabric, as well as four interface cores. There is also side logic that takes care of power and clock control. Here are some facts about this design (these facts are arbitrary, for the purpose of the example):

- Trusted, reused IP: Ethernet, USB, I2C, VGA
- New IP: RISC processor & memory - one rev back, so should be stable, but new to us
- New project specific design: Wishbone Fabric, plus Clock & Power control

The testbench will use an available Wishbone agent in place of the RISC processor to drive the testbench. So in reality the Wishbone SoC, from a verification stimulus perspective, is a series of Wishbone single or block operations (reads, writes, read-modify-writes) going across the fabric. The DDR located firmware has late availability and will be folded in when ready, but is not available for most of the verification effort. The interconnect, configuration and throughput are the main concerns, especially the power and clock control to minimize power consumption. There is a Wishbone SoC architecture document with some basic register, power and clock implementation information. There are IP block level documents for the 5 IP cores (Processor, USB, Ethernet, I2C, VGA), but they are register and interface oriented, with minimal design detail. The I2C core, however, has a testplan spreadsheet that we can fold into the SoC testplan hierarchy.
The result was this flow diagram and steps:

1. Setup/Configuration
   - Pin and default register values control some initialization
   - Firmware sets registers via single write Wishbone operations
   - Arbiter set up
   - Clock and Power set up
   - Interface set up (USB, Ethernet, I2C, VGA cores)
2. Traffic via memory Wishbone operations (single or block moves)
   - DDR to Interface
   - Interface to DDR
3. Unexpected events
   - Interrupt, error, etc.

This flow diagram is useful to show the overall high level use of the chip, and even at this level it starts to show coverage, both in its flow and its diverging choices. To go further, each block in the flow needs to be expanded out using whatever table or diagram best represents the setup or data flow within that area. The diagrams typically used for this process are described in the top down section of the coverage model testplan creation. The following sections illustrate the use of the various diagram types in the context of the WB SoC example.
Take each of the sub blocks in the high level use model flow and expand it out using the table, diagram or chart best suited to describe that sub block's information. For instance, the first "Pin and Register defaults" sub block describes how the Wishbone SoC is initialized upon power on, and is best expanded into a table. A simple table is used here because of the small space of this simple power up and default configuration. There are just two pins that select where the firmware will come from: inside the ROM of the processor, preloaded in the DDR, or read into the DDR from the I2C or the USB. The startup power state is hard coded as defaults in the power register, with only 4 choices. Likewise, the start up clock register has 6 bits, one for each region (4 interfaces, fabric and the processor subsystem), which can be on or off, each with a separate default speed register. This table then leads to a section in the testplan and individual requirements. See sections 1.1, 1.2 and 1.3 in the testplan spreadsheet picture below.
Here the arbiter configuration choices are shown in a Y tree diagram. Notice the legend for mandatory and optional nodes and OR or AND choices. In practice, the Wishbone bus is arbitrated with a round robin scheme, but the diagram shows more possibilities, with 5 classic arbitration schemes. All 5 arbitration schemes could be further Y'd into many other sub choices. The Y tree diagram is good for showing choices and priority. This Y tree diagram then leads to a section in the testplan and to individual requirements. See sections 2.1 to 2.2.5 in the testplan spreadsheet picture below.
Most SoCs address various clocking domains and power consumption, especially static power issues. The power and clock management logic for SoCs is growing in complexity and thus needs a prioritized place in the overall verification strategy and the testplan. The power and clock configurations are best described using bubble diagrams that mimic the block diagram of the chip. Each of the six regions of the WB SoC is shown with its corresponding power and clock configuration information. Bubble diagrams work well in this situation, showing the relationship of each area to the others, as well as each particular region's power and clock settings. These bubble diagrams then lead to sections in the testplan and individual requirements. See sections 3.1-2 for clocking and 4.1-2 for power in the testplan spreadsheet picture below.
The I2C block's configuration is described using a combination of Y tree and bubble diagrams. The Y tree at the top shows the choices between the regular and special CBUS modes of the I2C. It also shows the choices between the number of allowed masters and their speed choices. Then a bubble diagram is used to show the other various configuration areas and their relationships (the lines). Because the information in the bubble diagram is the same for all four choices, a * is used to indicate that this information is repeated for the other 3 choices. This illustrates how you can mix and match various tables and diagram styles to best convey the necessary information. This combination diagram then leads to a section in the testplan and individual requirements. See section 5.3 in the testplan spreadsheet picture below; it is hierarchically referenced to the testplan that came with the I2C IP.
After all the configurations are done, traffic is initiated. This is an example sequence diagram for an I2C read or write. The diagram shows the handshake within the testbench, starting with the testbench's test controller:

1. The test controller initiates by telling the I2C VIP to start a transaction sequence, read or write.
2. The I2C IP then requests the bus from the fabric, and the arbiter grants the request when ready.
3. Next, the I2C IP sends a single write operation to the processor, declaring the direction (read or write) and size, etc.
4. The processor runs some firmware and tells the DDR via the CSR to initiate the transfer.
5. The DDR requests the bus from the fabric, and the arbiter grants the request when ready.
6. The I2C then sends or receives the data via single or block operations (depending on the size, and read or write) and releases the bus when done.
The sequence diagram (borrowed from UML) is a great way to show data movement and handshaking. If there are hundreds and hundreds of data sequences, you do not need a sequence diagram for each one; instead, divide them up into categories of similar sequences and make a "family" sequence diagram for each one. You can also often show both directions (read and write) on the same diagram, as we did above for the WB SoC. In the I2C example above we might have other sequence diagrams with throttled data speeds, stalls, retries, errors, etc. The firmware might also direct other types of operations, and each could have its own sequence diagram.
SoC Firmware
The Wishbone SoC, like all SoCs, will ultimately be driven by software running on the processor. This firmware is not available until late in the project, and has its own development and testing process. There are several sound approaches for integrating firmware into the verification process:

- If the processor is not trusted, a processor and memory subsystem testbench can be created, where firmware can be brought in in stages as it is made available. Firmware can be divided out by low level and high level functionality, prioritized into what will be done on the subset testbench and what will be done using other means (C model testing, prototypes, first pass chips, etc.). This layering of the firmware testing can be represented as diagrams and included in the coverage model testplan spreadsheet.
- If the processor is trusted, it can be left out of the main SoC testbench. This is the approach that was used for the Wishbone SoC. A Wishbone VIP agent is placed where the processor would be, and a Control Status Register (CSR) agent was created to drive the CSR interface to the DDR. The two agents work in concert, as directed by the top level test controller/virtual sequence, to mimic the processor firmware activity on both the Wishbone bus and the DDR memory. In this way the goal of focusing on the overall interconnect, configuration and throughput traffic across the fabric is addressed.
- Another approach is to have the actual processor RTL in place, and to put pseudo firmware into the DDR. This preliminary pseudo firmware code is made up of the necessary low level functions to do basic firmware operations, like register reads and writes across the fabric to do configuration, or basic data moves between the DDR and one of the four interfaces. The testbench then controls the running of these functions via back door access. Questa's inFact has a software driven verification package for addressing this type of problem.

Whichever method is used, solely or in various combinations, it is important that these strategies are fleshed out early and incorporated into both the overall verification architecture and implementation documentation and the coverage model testplan spreadsheet. Several columns can be added to any testplan spreadsheet that spell out how and where various firmware features will be used, tested and covered.
This spreadsheet shows the basic necessary content for a testplan. A real testplan for a large SoC would be larger (at least 500 rows), but this example has been reduced to fit here. See the Coverage Plan Format article for a general description of what the various columns are used for and what the legal entries are. Things to notice in the WB SoC testplan spreadsheet:

- Other documents are referenced in many of the descriptions. There is no need to re-enter redundant information here; just reference the document and section.
- The descriptions are short and informal. Some verification teams have a prioritized language with specific definitions for specific words; a description then has to start with one of these key words, for example "The WB SoC shall...." or "The i2c interface will only use ....".
- Some sections (2.2.5, 3.1, 3.2, etc.) are not as detailed for now and are left for future expansion. These will probably be broken out into more rows (2.2.5.1, etc.) where the specific coverpoints are defined. They have been started here so that they are not forgotten.
- Note the naming conventions of the links: dt for a directed test, assert_ for an assertion, and cov_ to start a functional coverage group or point, with _cg or _cp at the end. Distinctive acronyms like "sfp" for static fixed priority are used for clarity. These nomenclatures make it easier to write scripts to manipulate coverage information. The conventions should be decided upon at the start of a project, written down, and used uniformly throughout the project.
- On rows 5.1-4, separate lower level spreadsheets are linked in hierarchically. The I2C one came with its VIP; the other 3 will be new, but will each be in their own separate testplan spreadsheet. The link ensures that they will be folded in as if they were in this top level testplan spreadsheet, and the section numbers will correlate.
- On some links (1.3, 2.14) there is more than one link/type per requirement. This is because many requirements might take a combination of a directed test, coverpoints and assertions to fully cover all of that requirement's details. Not shown: it is possible that a single item, such as a coverage point, might cover several requirements; in this case the same link name and type will be used in each of the requirements' rows.
- The last two columns (Owner, Priority) are added for clarity and to record useful information associated with each requirement. They can be read into a tool, such as Questa's Verification Management tools, where they can be sorted on and viewed, and are stored in a UCIS compliant database, but they are not used inside the simulator. Here each requirement is given an owner, so that each engineer can sort by name and see just the requirements that they are responsible for. The priority can then guide the order in which they work on their requirements.
- The last four rows are TBD (to be determined), as there was not enough time to flesh out these rows during this week's meeting, so they were left for next week. This is common, but it is important to fill in your spreadsheet as you go along.

Click here for a copy of the WB SoC testplan, and then click on file-save as in your browser to save this WBsoc.xml file. You should be able to open the downloaded file in Microsoft Excel.
Appendices
Requirements Writing Guidelines
When creating a testplan, the requirements for a successful chip need to be recorded in a useful, easy to digest manner. The following rules and guidelines will help to ensure this happens. It is a good idea for the verification team to compile a list such as this before starting the planning process, and to divide the items up into rules (must be followed) and suggestions (good ideas). In effect, this is defining the requirements for writing requirements.

- Don't rewrite anything that is already detailed in the source specifications; just reference the original document.
- Divide up the categories and subcategories so that each row is a single requirement. Don't write five requirements on one row; write one requirement per row. Each requirement should be unique. Do not use ten requirements when one will do. The gauging criterion is often whether or not it will easily link to a coverage element.
- Each requirement must be linked to some coverage element (test, covergroup, coverpoint, cross, assertion, code coverage, etc.).
- Write each requirement at about the same level. Don't write one at a subsystem level and the next at the AND-gate level. If you do have multi-level requirements, come up with a natural three to five level scale and define it clearly. Maybe use alphanumeric tags to distinguish levels, or put each level in its own hierarchical testplan spreadsheet.
- A requirement is typically written in the positive: a description of what the design shall do. However, some requirements which place bounds on behavior are easier to write in the negative; in other words, a description of what the design shall not do.
- It is alright to add a requirement that is not going to be addressed by the verification process. It might be addressed by C-modeling, by FPGA validation in a lab, or by some other means. Include it anyway, and add a column that states which process is being used for that requirement.
- Identify each requirement with both a unique name and a unique number.
- Requirements might be sub-divided into major categories, like design requirements (about the design), verification requirements (about the run management), testbench requirements (about the testbench), software requirements (about the firmware), tool requirements (about Questa), library requirements (about which part of UVM you will use), etc.
- Requirements should be ranked or prioritized. This may be a scale of 1-3, or could be a complex risk equation that takes in other parameters.
- Requirements should be ordered. Have categories and sub-categories; do not just enter them sporadically - have some logical order.
- Each design requirement needs to be thought through from all three verification perspectives: generation, checking and coverage. How will a situation be generated to exercise this requirement? What will check that it is right - an assertion, a scoreboard, or both? What sort of permutations will need to be covered, and how many? Some testplans will have three columns with a brief description of each of these.
- If a requirement is connected to some reused verification entity, it should be specified. A column for current or future reusability can be added and filled in.
- It is alright to have a requirement that is earmarked for a special directed test, but these should not be widespread.
- Testbenches often have levels of abstraction, often labeled with some layering (L1-3) or naming (configuration layer, traffic layer, etc.). A column that specifies each requirement's abstraction layer can be added.
- Normal function and error handling requirements should be separated, but do not leave out the error requirements.
- Some requirements might need to be ported across several environments: block, sub-system, system, lab, etc. This should be noted; a designated column can delineate it.
- Some requirements might be constraints in disguise. This is fine; just note it.
- Some requirements are assertions in disguise; they have a cause-and-effect nature, such as "after this, this will always happen". This is fine; just note it (see the sketch after this list). It is wise to categorize assertions in some logical fashion, such as interface, internal, etc.
- Some requirements are configuration oriented. You may not need to specify each and every configuration; just point to where they are described in other documents, or describe each unique family of configurations. Divide them by how covergroups and coverpoints will capture them.
- Some requirements are sequence oriented, meaning they are configurations or traffic that need to be generated to stimulate the design. When you define sequence requirements, it is best to start by defining each unique family of sequences by categories and subcategories, with higher categories like configurations, traffic, interrupts, errors, etc., broken down into subcategories as needed. You do not need to specify each and every sequence, especially if they are already described in other documents, but make categories of them, each of which will lead to an interesting covergroup.
- Some requirements might just be assumptions made, or required, that lead to easier implementation. This is fine.
- Scoreboard or assertion checking limitations should be included. Often the transfer function of a scoreboard or assertion is too complex to be fully addressed. Specify what will be addressed and what will not. For a scoreboard, state which transaction-level elements will actually be checked.
- Another, more advanced, approach is to think about covergroups and coverpoints up front and then work backwards, reverse-engineering and writing the requirements.
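The cause-and-effect requirement mentioned in the list above can be sketched as follows (a minimal example with invented module and signal names, not a definitive implementation): the "after this, this will always happen" phrasing translates directly into an SVA implication, and the same property can be both asserted and covered, so it also serves as a coverage link for the testplan row.

// Hypothetical requirement: "After a request, a grant shall always
// follow within 1 to 4 clock cycles."
module handshake_checks(input logic clk, req, gnt);
  property p_req_then_gnt;
    @(posedge clk) req |-> ##[1:4] gnt;    // cause |-> effect
  endproperty

  a_req_then_gnt : assert property (p_req_then_gnt);
  c_req_then_gnt : cover  property (p_req_then_gnt);  // doubles as a coverage link
endmodule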
Verification Academy
Cookbook
www.mentor.com
2012-13 Mentor Graphics Corporation, all rights reserved. This document contains information that is proprietary to Mentor Graphics Corporation and may be duplicated in whole or in part by the original recipient for internal business purposes only, provided that this entire notice appears in all copies. In accepting this document, the recipient agrees to make every reasonable effort to prevent unauthorized use of this information. All trademarks mentioned in this document are trademarks of their respective owners.
Corporate Headquarters
Mentor Graphics Corporation, 8005 SW Boeckman Road, Wilsonville, OR 97070-7777
Phone: 503.685.7000  Fax: 503.685.1204
Sales and Product Information Phone: 800.547.3000

Silicon Valley
Mentor Graphics Corporation, 46871 Bayside Parkway, Fremont, California 94538 USA
Phone: 510.354.7400  Fax: 510.354.1501
North American Support Center Phone: 800.547.4303

Europe
Mentor Graphics Deutschland GmbH, Arnulfstrasse 201, 80634 Munich, Germany
Phone: +49.89.57096.0  Fax: +49.89.57096.400

Pacific Rim
Mentor Graphics (Taiwan), Room 1001, 10F, International Trade Building, No. 333, Section 1, Keelung Road, Taipei, Taiwan, ROC
Phone: 886.2.87252000  Fax: 886.2.27576027

Japan
Mentor Graphics Japan Co., Ltd., Gotenyama Garden, 7-35, Kita-Shinagawa 4-chome, Shinagawa-Ku, Tokyo 140-0001, Japan
Phone: +81.3.5488.3030  Fax: +81.3.5488.3021