VLSI Testing and Verification Challenges

Testing and Verification (Date: 08-05-2025)

Q.1

(a) Challenges in VLSI Testing

 Circuit complexity (billions of transistors).
 High fault-coverage requirements (very low acceptable defect level).
 Very expensive ATE and test-time cost pressure.
 Limited controllability and observability of internal nodes.
 Power and heat issues during test (IDDQ, peak current).
 Timing faults and new fault mechanisms at deep submicron (delay, crosstalk).
 Variability, soft errors, and manufacturing process defects.

(b) Testing vs Verification — comparison

 Verification: pre-fabrication; checks design correctness (simulation, formal methods).
 Testing: post-fabrication; finds physical manufacturing defects on silicon (ATE, BIST, scan).

(c) Definitions

1. Equivalent Faults: Faults that produce identical faulty behavior for every possible test; they can be collapsed into one representative fault.
2. Fault: A modeled deviation from intended behavior caused by a defect (stuck-at, bridging, delay, etc.).
3. Reject Rate: The fraction of test-passing (shipped) chips that are actually defective; synonymous with defect level and often quoted in defective parts per million (DPM).
4. Rule of Ten: Detecting a defect costs roughly 10× more at each later stage (chip test → board test → system test → field).
5. Fault Coverage: The percentage of modeled faults that a test set detects.
6. Defect Level: The probability that a defective device escapes the test and is shipped as good.
7. Fault Detection Efficiency: The ratio of faults detected to detectable faults (total faults minus provably undetectable ones) for a given test set.
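For reference, defect level, process yield Y, and fault coverage T are related by the widely used Williams-Brown model:

```latex
DL = 1 - Y^{(1-T)}
```

For example, Y = 0.9 and T = 0.99 give DL = 1 - 0.9^{0.01} ≈ 0.1%, i.e. about 1000 defective parts per million shipped.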

Q.2
(a) Explain bridging fault models. (3 marks)
Bridging fault: When two or more signal nets that should be isolated become electrically
connected (shorted) due to a manufacturing defect such as metal bridging, conductive particles,
or via bridging. Bridging faults are common in VLSI.

Types / behavior:

1. Wired-AND bridging: The shorted node behaves as the AND of the driven values — the 0-driver wins, pulling the node toward logic 0. Example: if nets X and Y are bridged, both behave as X AND Y.
2. Wired-OR bridging: The shorted node behaves as the OR of the driven values — the 1-driver wins, pulling the node toward logic 1.
3. Resistive bridging (R-bridging): The short has finite resistance; behavior depends on relative driver strengths and can produce intermediate voltages or delay/crosstalk-like effects that a pure stuck-at model cannot capture.
4. Dominant-driver bridging: One net's driver is stronger and imposes its value on the other net; analysis must determine which driver (pull-up or pull-down) wins to predict the effect of the bridge.

Modeling & testing implications:

 Bridging faults are not always detected by single stuck-at testing, because a stuck-at
model may not mimic the effect of shorting two active nets.
 ATPG must include bridging fault models (dominant-zero, dominant-one, resistive) or
use structural/transition tests that can differentiate combined behavior.
 Test patterns often require toggling drivers and isolating the effect (e.g., set one net to
logic that will reveal the bridge and the other to the opposite to create a conflict).

Detection technique examples:

 Use patterns that force one net to 1 and the other to 0 (and vice versa) to cause observable
mismatches.
 Add test points or use scan to improve control and observation of the suspect nets.
 Use IDDQ testing for resistive bridges, which typically cause elevated quiescent leakage current.

(b) Obtain Controllability and Observability for various signals of 5-input OR Gate using
SCOAP and Probability-based testability analysis. (4 marks)

Problem setup: A single 5-input OR gate with inputs A, B, C, D, E and output Y = A + B + C + D + E (logical OR). We compute SCOAP controllability (CC0, CC1) and observability (CO) using the standard SCOAP numeric rules (primary inputs have CC0 = CC1 = 1).

SCOAP rules used (standard numeric model):

 Primary input: CC0 = CC1 = 1.
 For an n-input OR gate:
o CC1(output) = 1 + min_i CC1(input_i)
(to make the OR output 1, set any one input to 1; the +1 is the gate's own cost)
o CC0(output) = 1 + sum_i CC0(input_i)
(to make the OR output 0, all inputs must be 0, so the efforts add)
 Observability for input x_i of the OR gate:
o CO(x_i) = CO(output) + sum over j≠i of CC0(x_j) + 1
(to propagate a change at x_i to the output, all other inputs must be held at the non-controlling value 0, and the result must then be observed at the output).

Apply rules (numeric):

 For each primary input A..E: CC0 = 1, CC1 = 1 (by definition).
 Compute for output Y:
o CC1(Y) = 1 + min(1,1,1,1,1) = 1 + 1 = 2.
o CC0(Y) = 1 + (1 + 1 + 1 + 1 + 1) = 1 + 5 = 6.
 Observabilities:
o CO(Y) = 0 (the output is a primary, directly observable node).
o For input A:
 sum(CC0 of others) = CC0(B)+CC0(C)+CC0(D)+CC0(E) = 1+1+1+1 = 4.
 CO(A) = CO(Y) + 4 + 1 = 0 + 4 + 1 = 5.
o By symmetry, CO(B) = CO(C) = CO(D) = CO(E) = 5.

Interpretation (SCOAP):

 Inputs are easy to control to 0 or 1 (CC0 = CC1 = 1).
 Setting the output to 1 is easy (CC1 = 2): any single input set to 1 suffices.
 Setting the output to 0 is comparatively hard (CC0 = 6) because all 5 inputs must be forced to 0.
 Each input's observability is 5: even though the output itself is directly observable (CO(Y) = 0), a change at one input reaches Y only after all four other inputs are forced to the non-controlling value 0, and those four CC0 costs (plus the gate's own cost) dominate the measure.
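The computation above can be reproduced with a short script (a minimal sketch, assuming every gate input is a primary input with CC0 = CC1 = 1; the function name is mine):

```python
# SCOAP measures for a single n-input OR gate whose inputs are primary inputs.
def scoap_or(n):
    cc0_in = [1] * n                      # primary-input 0-controllability
    cc1_in = [1] * n                      # primary-input 1-controllability
    cc1_y = 1 + min(cc1_in)               # any single input at 1 suffices
    cc0_y = 1 + sum(cc0_in)               # all inputs must be driven to 0
    co_y = 0                              # output is a primary output
    # To observe input i: observe Y, hold all other inputs at 0, plus gate cost
    co_in = [co_y + (sum(cc0_in) - cc0_in[i]) + 1 for i in range(n)]
    return cc0_y, cc1_y, co_y, co_in

print(scoap_or(5))  # (6, 2, 0, [5, 5, 5, 5, 5])
```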

Probability-based testability analysis (basic):

Assume random, independent primary inputs, each taking logic-1 or logic-0 with probability 0.5.

 Probability that output Y = 0:
o P(Y=0) = P(A=0 ∧ B=0 ∧ C=0 ∧ D=0 ∧ E=0) = (1/2)^5 = 1/32 ≈ 0.03125.
 Probability that output Y = 1:
o P(Y=1) = 1 − 1/32 = 31/32 ≈ 0.96875.
 Observability (probabilistic): the probability that a change at input A changes Y equals the probability that all other inputs are 0 (only then does A decide Y):
o P(all other inputs = 0) = (1/2)^4 = 1/16 = 0.0625.
o So under random vectors, toggling one input affects Y only about 6.25% of the time.
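The same figures, computed directly (assuming independent inputs with P(1) = 0.5):

```python
n = 5
p_y0 = 0.5 ** n            # all five inputs must be 0
p_y1 = 1 - p_y0
p_obs = 0.5 ** (n - 1)     # all four OTHER inputs must be 0
print(p_y0, p_y1, p_obs)   # 0.03125 0.96875 0.0625
```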

Summary (SCOAP + Probability):

 SCOAP highlights that forcing output=0 is expensive; making output=1 is much cheaper.
 Probability-based measure shows output usually equals 1, so detecting faults that require
output=0 will be rare with random stimuli (hence need for directed patterns).
 For effective ATPG, tests that force all other inputs to the right polarity are necessary
(SCOAP guided) since random vectors will rarely activate certain faults.

(c) Explain testing methodology for transistor faults in two-input CMOS NAND Gate. (7
marks)

Circuit reminder: 2-input CMOS NAND consists of:

 Pull-up (pMOS): two pMOS transistors in parallel from VDD to output Y (connected to
inputs A and B).
 Pull-down (nMOS): two nMOS transistors in series from Y to GND (A in series with
B).
 Output Y = NOT(A AND B).

Transistor-level fault models:

1. Stuck-short (stuck-on): the transistor conducts permanently (e.g., a drain-to-source or gate-oxide short).
2. Stuck-open: the transistor never conducts (an open circuit).
3. Gate-oxide leakage or partial (resistive) opens.
4. Bridging between transistor nodes (local shorts).
5. Threshold-voltage shifts causing weak or slow switching (delay-like behavior).

Testing objectives at transistor level:

 Detect stuck-open and stuck-short behavior for each MOS device.
 Detect resistive defects causing timing or IDD anomalies.
 Detect bridging between nodes.

Methodologies & detection strategies:

1. Logical tests (vector-based) using functional/scan access:


o Convert sequential circuit to scan if needed to control inputs and capture outputs.
o For transistor stuck-open detection, use two-vector tests: the first vector pre-charges the output node to the opposite value, and the second vector requires the suspect transistor to conduct; if it is open, the output floats at the stale value.
o Example: nMOS stuck-open in the series path (say the nMOS controlled by A
stuck-open):
 Apply vector A=1, B=1 → expected Y=0. If nMOS for A stuck-open, the
pull-down path is broken; output might remain high (or float) — visible on
output when captured.
 Also apply A=0, B=1 then A=1, B=1 sequence: some stuck-open faults
manifest only after certain transitions due to charge storage; hence two-
vector sequences are needed.
o For pMOS stuck-short (permanently ON) causing contention, apply A=0, B=0 then
A=1, B=1 and check for abnormal high current or wrong output.
2. IDDQ testing (quiescent supply current test):
o Measure quiescent current in static conditions (when inputs are steady and no
clock switching).
o A stuck-short (pMOS to nMOS short or bridging) often produces elevated static
current. IDDQ is a powerful test for resistive shorts and bridging at transistor
granularity.
o Typical flow: apply a vector that should produce a low switching/no switching
condition and measure current. If IDDQ > threshold ⇒ defect.
3. Delay and dynamic tests:
o Resistive opens or threshold variation may not be visible in static logic levels but
will slow transitions → path delay tests (launch-on-capture, launch-on-shift)
detect such issues.
o For example, if a pMOS has increased threshold, rising edge of Y may be slow and
could fail timing tests at target frequency.
4. Two-vector and transition tests:
o Stuck-open faults often require a preceding vector to create charge or discharge a
node, then the second vector exposes the faulty behavior. So tests use sequences:
 Pre-condition vector (to set internal node in certain state) + test vector (to
observe behavior).
o Example sequences for NAND:
 Pre: A=1, B=0 (forces partial conduction), Test: A=1, B=1. Capture output
and see if expected transition occurs.
5. Structural and layout-aware tests:
o If layout indicates likely bridging between adjacent transistors, generate patterns
that drive those nets oppositely to expose defects.
o Add test points or DFT structures if transistor-level access is required.
6. Test generation & diagnosis:
o Fault model: define stuck-open/stuck-short at device terminals; use transistor-level
simulation (SPICE or fast modeling) to generate tests.
o For large chips, use IDDQ and logic-level ATPG in combination — ATPG to
catch logical failures and IDDQ/analog tests for transistor-level fault coverage.
Example specific test vectors to detect typical transistor faults in 2-input NAND:

 To test the nMOS series path and detect an open in either nMOS:
o Vector A=1, B=1, expected Y=0. If Y=1, a possible nMOS open exists.
o To isolate which nMOS: apply A=1, B=0, then change B→1 and observe whether Y transitions; vary A similarly to isolate the other device.
 To test the pMOS parallel path and detect a stuck-short:
o Vector A=0, B=0, expected Y=1. If Y=0 or IDDQ is high, a pMOS short is indicated.
 Add two-vector transition sequences to detect dynamic stuck-open faults.
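The two-vector stuck-open behavior can be illustrated with a tiny switch-level model (a hypothetical sketch; the function name and the charge-retention abstraction are mine, not from the text):

```python
# Switch-level sketch of a 2-input CMOS NAND. When neither the pull-up nor the
# pull-down network conducts, the output node floats and retains its previous
# charge -- the memory effect that makes two-vector tests necessary.

def nand_eval(a, b, prev_y, nmos_a_open=False):
    pull_up = (a == 0) or (b == 0)                          # parallel pMOS
    pull_down = (a == 1) and (b == 1) and not nmos_a_open   # series nMOS
    if pull_up and not pull_down:
        return 1
    if pull_down and not pull_up:
        return 0
    return prev_y                           # floating: keep stored charge

# Two-vector test for "nMOS of A stuck-open":
y = nand_eval(0, 1, prev_y=0)               # precondition: charge Y to 1
good = nand_eval(1, 1, prev_y=y)            # fault-free: Y discharges to 0
bad = nand_eval(1, 1, prev_y=y, nmos_a_open=True)  # faulty: Y floats, stays 1
print(good, bad)  # 0 1
```

A single static vector A=1, B=1 could miss the fault if Y happened to be pre-discharged, which is why the preconditioning vector matters.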

Summary:
Transistor testing of CMOS NAND uses a mix of directed logic vectors (including two-vector
sequences), IDDQ checks for leakage/shorts, and timing/transition tests for
resistive/threshold issues. Good test plans combine gate-level ATPG to detect functional
manifestations and silicon-level current/timing tests for transistor defects.

Q.2 — OR alternative (same question number, other part)
(c) What is meant by scan design rules? Explain scan design rules for following design
styles:

1. Derived Clocks
2. Combinational feedback loops (7 marks)

Background:

 Scan design (Design for Testability — DFT) converts flip-flops (FFs) into scan cells.
Scan chains allow shifting in test vectors and shifting out captured responses to make
sequential circuits testable like combinational circuits.
 Scan design rules ensure that scan path and scan mode work correctly, do not introduce
test escapes, and avoid hazards during test.

General scan design rules (high-level):

1. Single clock domain per scan chain or controlled crossing: Avoid mixing derived
clocks into a scan chain; if needed, provide gating or safe clocks.
2. Avoid asynchronous set/reset during scan shifting: Asynchronous signals should be
disabled while shifting to avoid corruption.
3. Control scan-enable (SE) net carefully: SE should be synchronous or glitch-free; drive
SE from a dedicated, well-buffered net.
4. No combinational feedback loops without breakpoints: Combinational loops make
ATPG and scan shifting ambiguous (may require additional test points or special mode).
5. Safe handling of clocks derived from scan signals: Avoid clocks that change when SE
toggles; derived clocks must be static or disabled during shift.
6. Test mode isolation: Isolate scan signals from normal functional signals when needed;
ensure reliable timing.

1. Scan rules for Derived Clocks:

 Derived clocks are clocks generated inside the module using logic (e.g., gated clocks,
XOR/AND-based derived clocks).
 Rule: Never use derived clocks to directly clock scan flip-flops unless the derived clock
is stable and well-defined in both functional and test modes. Prefer a single global test
clock for all scan FFs.
 Problems if not followed:
o When in scan mode, SE may change the structure of toggle activity causing
derived clocks to glitch, corrupting shifted data.
o Derived clocks can create timing uncertainty during shift and capture.
 Mitigations / design guidelines:
o Use clock gating cells that are test-aware (with test-mode passthrough) so during
scan shifting the clock is predictable.
o Ensure derived clocks are disabled in scan-shift mode (use tester clock gating) and
only enabled during capture cycle if required.
o Ensure scan FFs are clocked from a stable, dedicated test clock (TCK) during scan
operations. If multiple clock domains exist, create separate scan chains per domain.
o Avoid gating signals that depend on scan enable or test signals.

2. Scan rules for Combinational Feedback Loops:

 Combinational feedback loops (a path of combinational logic that forms a loop without
storage elements) create a problem for ATPG because logic becomes state dependent;
combinational loops can cause oscillations or ambiguous values during scan shift and
capture.
 Rules:
o Break loops: Insert scan FFs or test points to break the loop so the combinational
portion becomes acyclic for ATPG.
o Make feedback through registered element: Ensure any feedback path has a
register (scan cell) on the path so the loop is broken during shift/capture and
behavior is well-defined.
o If loop is intentional (e.g., latch-based state-holding), ensure it’s covered by
sequential ATPG or by designing additional control signals to isolate during
test.
 Testing implications:
o If a combinational loop remains, ATPG may not be able to generate tests (or tests
may be expensive). Emulate or constrain loop behavior for ATPG or manually
provide control signals that break the loop during test.
o Breakpoints and pseudo-primary inputs might be added to allow test vectors to
propagate.

Practical recommendations:

 During design, identify all loops and decide where to place scan FFs so the combinational
logic appears acyclic for ATPG.
 Use the Design Rule Checker (DRC) and DFT tools to enforce these rules before tape-
out.
 Coordinate clock gating and derived clock design with DFT team to ensure scan
reliability.

Q.3
(a) Calculate number of collapsed faults for two-input CMOS NOR Gate. (3 marks)

Gate: 2-input NOR: Y = NOT(A OR B).

Signal list: A, B (inputs), Y (output). Naive stuck-at faults: each signal has stuck-at-0 and
stuck-at-1 → 3 signals × 2 = 6 faults.

Find equivalences / collapsed faults:

 If A is stuck-at-1, then A = 1 ⇒ A OR B = 1 regardless of B ⇒ Y = 0 for every input. So:
o A s-a-1 produces the same faulty behavior as Y s-a-0 for all input combinations — they are equivalent.
 Similarly, B s-a-1 is equivalent to Y s-a-0.
o Hence A s-a-1, B s-a-1, and Y s-a-0 collapse into one equivalence class.
 Check other equivalences:
o Y s-a-1 is not equivalent to any single input stuck condition (forcing Y=1 cannot
be obtained by a single input stuck value).
o A s-a-0 and B s-a-0 are distinct and not equivalent to each other or to Y s-a-1.

Therefore collapsed fault set:

 Group 1 (collapsed): { A s-a-1, B s-a-1, Y s-a-0 } → 1 collapsed fault.
 A s-a-0 → distinct.
 B s-a-0 → distinct.
 Y s-a-1 → distinct.

Total number of collapsed faults = 4.

Answer: 4 collapsed faults for a 2-input NOR gate.
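The collapse can be verified exhaustively by comparing faulty truth tables (a minimal sketch; two faults with identical output signatures over all inputs are equivalent):

```python
from itertools import product

def nor(a, b):
    return int(not (a or b))

def faulty_output(fault, a, b):
    # fault = (signal, value): signal in {"A", "B", "Y"} stuck at value
    sig, v = fault
    if sig == "A":
        a = v
    if sig == "B":
        b = v
    y = nor(a, b)
    if sig == "Y":
        y = v
    return y

faults = [(s, v) for s in "ABY" for v in (0, 1)]
# Signature = faulty output under every input combination
signature = {f: tuple(faulty_output(f, a, b)
                      for a, b in product((0, 1), repeat=2))
             for f in faults}
classes = set(signature.values())
print(len(classes))  # 4 distinct faulty behaviors -> 4 collapsed faults
```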

(b) Explain input scanning method for logic element evaluation. (4 marks)

Input scanning (also called scan-in testing) applies test vectors to the internal nodes of a combinational or sequential logic element by shifting them in through serial scan chains, and evaluates the responses by scanning them out.

Objective: Convert sequential circuit testing into an effectively combinational testing problem
by providing full control/observation of flip-flops (scan cells).

Key ideas and steps:

1. Scan insertion: Convert each storage element (flip-flop) to a scan cell (a multiplexer-fed
DFF with a scan-in pin).
o The scan cell has two modes:
 Functional mode: SE = 0; D is fed from the combinational logic (normal operation).
 Scan mode: SE = 1; the flip-flops are connected as a shift-register chain for serial shifting.
2. Creating scan chains: Connect scan-in of FF1 to external Scan-In (SI), scan-out of FF1
to scan-in of FF2, etc. Chains can be single or multiple (per clock domain).
3. Test sequence using input scanning method:
o Shift in a test vector (with SE=1) serially into the scan chain (using SHIFT clock).
This forces the internal flip-flop values to desired input pattern for the
combinational logic.
o Switch to capture mode (SE=0) and apply one functional clock tick (CAPT) so
the combinational logic processes the applied test inputs and the new responses are
captured in the FFs.
o Shift out the captured results by returning SE=1 and shifting the scan chain to the
tester or comparator for evaluation.
4. Advantages:
o Converts sequential testing problem into combinational ATPG problem: ATPG
tools can generate vectors assuming direct control of FFs.
o Reduces complexity of sequential ATPG (shift/launch-capture methodology).
o High controllability and observability on internal nodes (FF Q outputs).
5. Considerations:
o Scan shift overhead — test time increases because of serial shifting; mitigated by
multiple scan chains and compression techniques.
o Proper handling of clock domains, asynchronous resets, and clock gating.
o Careful management of power during shift (due to many toggling nodes).
o Need to secure SE signal and handle timing (scan clock skew, hold/setup margins).
Example (for single shift-capture-shift cycle):

 Suppose 8 flip-flops in a chain:
o Shift in T = t7 t6 ... t0 with SE=1.
o Switch: SE=0, apply one capture clock — the combinational logic evaluates and the new internal states are latched into the FFs.
o Switch: SE=1, shift out the captured response R = r7 r6 ... r0 and compare it with the expected result produced by ATPG.
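The shift-capture-shift sequence can be mimicked bit-for-bit in software (a hypothetical sketch; the 8-bit test vector and the toy capture logic are invented for illustration):

```python
# Bit-level model of one shift-capture-shift cycle for an 8-FF scan chain.

def shift_in(chain, bits):
    for b in bits:                   # SE=1: one shift clock per bit
        chain = [b] + chain[:-1]
    return chain

def capture(chain):
    # SE=0, one capture clock: toy combinational logic (an assumption for
    # illustration) -- each FF captures the inverse of its left neighbour.
    n = len(chain)
    return [1 - chain[(i - 1) % n] for i in range(n)]

def shift_out(chain):
    out = []
    for _ in range(len(chain)):      # SE=1 again: read response serially
        out.append(chain[-1])
        chain = [0] + chain[:-1]
    return out

state = shift_in([0] * 8, [1, 0, 1, 1, 0, 0, 1, 0])  # load test vector T
state = capture(state)                               # one functional clock
response = shift_out(state)                          # unload response R
print(response)
```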

Conclusion: Input scanning is a practical and widely used DFT technique that provides
controllability/observability for logic element evaluation enabling efficient ATPG and high
fault coverage for sequential circuits.

(c) Draw and explain Clocked scan cell design with necessary waveforms. (7 marks)

Textual description of Clocked Scan Cell (common design):

A typical clocked scan cell is a multiplexer-fed D-type flip-flop (M-DFF) that supports two
modes:

 Functional mode (SE=0): The D input of the FF receives data from the functional
combinational logic (normal operation). The clock (CLK) causes normal operation.
 Scan mode (SE=1): The D input of the FF receives the serial scan-in value (SI) through a
scan multiplexer; the flip-flops form a chain to shift data in/out using the scan clock (may
be the same as functional clock but often separate).

Schematic components (textual):

1. Multiplexer: Selects between functional data input (D_func) and scan-in (SI) depending
on SE.
2. D-FF: Edge triggered flip-flop storing selected data on rising (or falling) edge of CLK.
3. Scan-out (SO): Q of the flip-flop routed to next scan cell’s SI (or to external SO if last
cell).
4. Test control (SE): Scan enable input controlling multiplexer select.

Simplified block diagram (text):

            +---------+          +---------+
D_func ---->|         |          |         |
            | 2:1 MUX |----D---->|   DFF   |----Q----> SO (to next cell's SI)
SI -------->|         |          |         |
            +----^----+          +----^----+
                 |                    |
                 SE                  CLK

(A 2-to-1 MUX feeds the D input; SE = 0 selects D_func, SE = 1 selects SI.)

Operation modes:

 Shift mode (SE = 1):


o MUX selects SI.
o On each shift clock (CLK shifts), the chain shifts one bit right: each FF captures
the SI or the value from previous stage. This allows serial scan of test patterns into
each FF (and scanning out captured results).
 Capture mode (SE = 0):
o MUX selects D_func so normal functional data is sampled by the DFF on the next
clock edge (used for normal operation or capture cycles in test where functional
combinational logic is exercised).
 Example timing sequence (waveforms):
1. T0 - Shift-in: SE=1, apply SHIFT clock pulses. Waveform: each active clock edge
causes new SI bit to be loaded; SO of last cell shifts out previous data.
2. T1 - Switch to Capture: SE → 0 (synchronised), then apply one CAPTURE clock
edge. Functional logic computes and the result is captured by the flip-flops.
3. T2 - Shift-out: SE=1 again, SHIFT pulses shift out captured values for comparison.

Waveform sketch description:

 CLK: pulses; active edge triggers DFF sampling.


 SE (Scan Enable): high during shift phases (enables scan path), low during
capture/functional operation.
 SI: during shift-in, a serial bitstream (b7 b6 …) present at SI; those bits become latched
into the chain on successive CLK edges.
 SO: during shift-out, the bits present in chain get shifted out serially to external tester.

Timing/Design notes:

 SE must be stable and glitch-free around clock edge to avoid corrupting shift/capture.
 When using separate clock domains, ensure the clock for scan shifting is controlled and
that capture clocks are aligned to avoid metastability.
 Optional features: scan clocks (shift vs capture), hold latch to prevent propagation of
glitches, test mode disabling of asynchronous resets.

Why this design?

 It provides controllability (we can set any FF to 0/1 via shifting) and observability (we
can read FF states by shifting out). It’s the backbone of scan-based ATPG and widely
used DFT practice.
Q.3 — OR alternative parts (if answering the other branch: parallel fault simulation, toggle coverage and fault sampling, scan design flow)

OR (Q.3 alternative answers)

(a) Advantages of parallel fault simulation & approaches (3 marks)

Parallel fault simulation advantages:

 High speed: Evaluate many faults in parallel using bit-parallel operations on machine
words (e.g., 32/64/128 faults simultaneously).
 Efficient memory usage: One simulation of test vector per multiple faults reduces
repeated logic evaluation.
 Scalable: Useful for large circuits and large fault lists.

Approaches (major):

1. Bit-parallel (bitwise) simulation: Each bit in a machine word represents the effect of a
different fault. Logic operations are bitwise on machine words — simulating multiple
faults at once.
2. Fault-parallel (pattern-parallel) simulation: Many test patterns are simulated in
parallel (less common historically).
3. Event-driven parallel fault simulation: Combines event-driven simulation with parallel
fault encoding to avoid evaluating whole circuit each time.
4. Dominant/Masking-aware parallelization: Use masking to avoid false interactions
among faults in same simulation word.
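Bit-parallel encoding can be sketched as follows (a hypothetical example: bit 0 is the fault-free machine, and the two injected faults are invented for illustration):

```python
# Each bit position of an integer word is one "machine": bit 0 is fault-free,
# bit 1 simulates "a stuck-at-1", bit 2 simulates "b stuck-at-0".
# One bitwise operation then evaluates all machines at once.

WIDTH = 3                       # good machine + 2 faulty machines
ALL = (1 << WIDTH) - 1

def value(bit, stuck):
    """Replicate a signal value across all machines, injecting stuck bits."""
    word = ALL if bit else 0
    for pos, v in stuck.items():            # force the faulty machines' bits
        word = (word | (1 << pos)) if v else (word & ~(1 << pos))
    return word

a = value(0, {1: 1})            # apply a=0; machine 1 sees a stuck-at-1
b = value(1, {2: 0})            # apply b=1; machine 2 sees b stuck-at-0
y = a & b                       # evaluate y = a AND b for all machines

good = y & 1
detected = [pos for pos in (1, 2) if ((y >> pos) & 1) != good]
print(detected)  # [1]  -> this vector detects "a s-a-1" but not "b s-a-0"
```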

(b) Explain toggle coverage and fault sampling (4 marks)

Toggle coverage:

 Measures how many signals (nets or registers) in the design have toggled (changed value) at least once during a simulation run.
 Important for power validation (estimating switching activity) and for verification completeness (signals that never toggle may hide unexercised logic).
 Typically collected as a percentage: Toggle Coverage (%) = (Number of toggled
items / Total items) × 100.

Fault sampling:

 Technique to limit the set of faults considered by ATPG or fault simulation by selecting a representative subset (a sample) rather than the full exhaustive fault list.
 Used to reduce compute time while preserving an estimate of fault-coverage trends.
 Methods include random sampling, stratified sampling (per module), and importance-based sampling (faults more likely to represent real defects).

(c) Draw and explain scan design flow (7 marks)

Scan Design Flow (high-level steps):

1. Design RTL / Synthesis: Create functional design and synthesize to gate-level with DFT
constraints.
2. Scan insertion: Insert scan cells (convert FFs to scan FFs) and create scan chains; ensure
clock gating and asynchronous resets handled.
3. Design rule checks (DFT DRC): Verify no illegal scan connections (e.g., asynchronous
set/reset during shift); ensure SE is properly buffered.
4. ATPG (Test generation): Generate test vectors (stuck-at or other faults) using
combinational ATPG leveraging the scan chain controllability.
5. Test simulation & fault simulation: Simulate tests on gate-level netlist for fault
coverage and debug failures.
6. Compression / pattern optimization: Reduce number of patterns (test compaction),
apply compression if supported by ATE.
7. Scan chain integration & layout: Connect scan chains in layout, check routing, timing.
8. Scan-based manufacturing tests & BIST (if any).
9. Sign-off: Ensure test coverage targets met, finalize test set for production.

(Flowchart is typically drawn showing sequential boxes for the above steps — include in your
answer diagram if drawing.)

Q.4
(a) Explain logic optimization process for logic simulation. (3 marks)

Logic optimization for simulation focuses on simplifying the netlist to reduce simulation time
while preserving functional behavior under constraints (like not disturbing observability of
faults used for testing). Typical logic optimization steps:

1. Constant propagation and folding: Evaluate constant drives and replace complex
expressions with constants (e.g., simplify A AND 1 → A).
2. Dead-logic elimination: Remove logic that does not affect primary outputs (unreachable
or masked nodes).
3. Redundancy removal / Boolean minimization: Combine/merge gates where Boolean
algebra permits (e.g., A & A → A).
4. Structural simplification: Replace complex gate networks with simpler equivalents
(reduce gate depth, fanout).
5. Retiming / register transfer optimization: Adjust register positions to reduce
combinational depth (helps timing simulation).
6. Preserving testability vs. optimization trade-offs: For test simulation, avoid
optimizations that break fault equivalences or that remove nodes needed for test insertion;
DFT-aware optimization required.

Purpose: Reduce simulation cycles, memory footprint, and accelerate event-driven simulation
while ensuring correctness.

(b) Draw and explain two-pass nominal event driven strategy. (4 marks)

Event-driven simulation overview: Rather than repeatedly evaluating all gates every clock
cycle (inefficient), event-driven simulation updates only those gates whose inputs have changed
(events), leading to efficient performance for large circuits.

Two-pass nominal event-driven strategy (conceptual):

 The two-pass approach reduces glitches and properly orders signal updates. The passes
are:

1. First Pass — Event Evaluation / Scheduling:
o When an input net changes value, generate an event and schedule the fanout gates
for evaluation.
o Evaluate gates in the event queue but do not immediately update outputs that could
cause cascaded updates within the same simulation time step.
o Instead, compute the next value (or tentative output) of each scheduled gate and
store it in a temporary place (delta-list).
2. Second Pass — Commit / Propagation:
o After all scheduled gates are evaluated for this simulation time, commit the
computed outputs (update net values).
o Any change in the committed outputs will generate events for subsequent time
steps (or for the next iteration if within the same time frame due to delays).
o This ensures that glitches (multiple transitions at a gate input within same time
point) are handled correctly according to timing models.

Why two passes?

 Prevents intra-time-step races and ensures deterministic ordering when multiple events
cause cascading changes.
 Enables correct accounting for inertial or transport delays, and avoids double counting an
input change.

Detailed steps:
 Maintain an event queue sorted by simulation time.
 At time t: pop all events scheduled for t. For each event:
o Evaluate affected gates and compute their new outputs (store in temporary buffer).
 After evaluating all events at time t (first pass), apply all buffered outputs to nets (second
pass). If committing these outputs causes outputs to change at the same time instant due
to zero delay elements, they may be queued for time t (careful management necessary).
 Continue with next earliest time from the event queue.
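The steps above can be sketched as a minimal two-pass loop (the tiny AND/NOT netlist is an assumed example, not from the text; zero-delay gates, one delta per iteration):

```python
from collections import deque

# Tiny netlist: c = a AND b, d = NOT c
gates = {"c": ("AND", ["a", "b"]), "d": ("NOT", ["c"])}
fanout = {"a": ["c"], "b": ["c"], "c": ["d"]}
nets = {"a": 0, "b": 1, "c": 0, "d": 1}

def evaluate(gate):
    op, ins = gates[gate]
    vals = [nets[i] for i in ins]
    return int(all(vals)) if op == "AND" else int(not vals[0])

def simulate(changes):
    queue = deque()
    for net, v in changes.items():          # primary-input events
        if nets[net] != v:
            nets[net] = v
            queue.extend(fanout.get(net, []))
    while queue:
        # Pass 1: evaluate every scheduled gate against committed values only
        delta = {g: evaluate(g) for g in set(queue)}
        queue.clear()
        # Pass 2: commit buffered outputs; changes spawn next-delta events
        for g, v in delta.items():
            if nets[g] != v:
                nets[g] = v
                queue.extend(fanout.get(g, []))

simulate({"a": 1})
print(nets["c"], nets["d"])  # 1 0
```

Separating evaluation (pass 1) from commit (pass 2) is what prevents intra-step races: a gate never sees a half-updated mixture of old and new values within the same delta.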

Benefits:

 Accurate modeling of zero-delay combinational behavior and inertial delays.
 Scalability for large designs, as only gates influenced by events are re-evaluated.

(c) What is the need of timing models in testing? List down various timing models and
explain any one in detail. (7 marks)

Need of timing models in testing:

 As process nodes shrink and operating frequencies increase, functional correctness at the logic level is not sufficient — circuits must also meet timing constraints. Some defects manifest only as timing failures (signals arriving too late), not as logic faults.
 Tests must detect delay faults (e.g., open resistive connections, marginal transistor
strengths, load changes, crosstalk) that cause path delays to exceed clock period.
 Timing models allow accurate simulation of timing behavior to generate tests for path
delay faults, transition faults, and hold/setup violations.
 Without timing models, ATPG cannot generate meaningful test vectors for temporal
faults.

Common timing models used:

1. Unit-delay model: assigns a uniform (unit) delay to every gate; simple but unrealistic.
2. Nominal gate-delay model: each gate has its own rise/fall propagation delay.
3. Transport delay model: propagates all pulses, however short (no filtering).
4. Inertial delay model: pulses shorter than a threshold are suppressed (modeling the physical inertia of gates).
5. Path-delay model: considers the cumulative delay (gate plus interconnect) along a path — used for path-delay-fault testing.
6. Timing-annotated HDL / SDF: Standard Delay Format (SDF) carries detailed timing annotations (rise/fall delays, load-dependent delays) into simulation.
7. Transition-fault (gross-delay) model: models the launch and capture timing for dynamic tests (launch-on-shift / launch-on-capture semantics).
Explain one in detail — Inertial delay model (commonly used):

Inertial Delay Model:

 Definition: Each gate has an associated propagation delay for rising/falling transitions
and an associated inertial threshold — i.e., the gate rejects (filters) input pulses that are
shorter than the inertial threshold, as real physical gates cannot reproduce very short
pulses due to internal capacitances and transistor inertia.
 Key properties:
o A pulse at gate input shorter than the inertial threshold will not appear at the
output.
o Only pulses longer than or equal to the threshold will be propagated, delayed by
the gate’s propagation delay.
 Why important: Inertial behavior captures glitch suppression — helps model how short-
duration hazards are filtered and determines whether transient pulses will cause
timing/fault coverage problems. Useful for detecting hazards and metastability issues.
 Use in testing:
o When generating delay tests, inertial delay can change how faults are
excited/propagated; tests must account for pulse widths and gate filtering to ensure
a transition persists long enough to propagate to outputs.
o Transition fault tests (detect rising/falling transition faults) rely on ensuring launch
event produces a pulse long enough to pass through multiple gates — inertial
threshold matters.

Example:

 Suppose a gate has propagation delay = 2 ns and inertial threshold = 1 ns.
 An input glitch of 0.6 ns will be suppressed (it never appears at the output).
 An input pulse of 1.2 ns will be transmitted to the output after the gate's propagation delay, possibly reshaped.
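The filtering rule with the 2 ns / 1 ns numbers above can be sketched in a few lines. This is a minimal illustration, not a real simulator's algorithm: the event-list representation, the function name, and the simple cancellation rule (a change arriving within the threshold of the previous one cancels it) are all assumptions.

```python
# Sketch of inertial-delay filtering: a waveform is a list of (time_ns, value)
# change events; pulses shorter than the inertial threshold are suppressed,
# and surviving events are shifted by the gate's propagation delay.

def inertial_filter(events, prop_delay=2.0, threshold=1.0):
    filtered = []
    for t, v in events:
        # a new event within `threshold` of the previous one means the
        # previous pulse was too short: cancel it instead of propagating it
        if filtered and t - filtered[-1][0] < threshold:
            filtered.pop()
            if filtered and filtered[-1][1] == v:
                continue  # cancellation restored the prior value: no new event
        filtered.append((t, v))
    return [(t + prop_delay, v) for t, v in filtered]

# 0.6 ns glitch (10.0 -> 10.6) is suppressed; 1.2 ns pulse (20.0 -> 21.2) passes
print(inertial_filter([(0.0, 0), (10.0, 1), (10.6, 0), (20.0, 1), (21.2, 0)]))
```

A transport-delay model would instead keep every event and only add the propagation delay.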

Summary: Timing models (especially inertial/transport and path-delay models) are essential
for realistic test generation and to find defects that only show up as timing violations in silicon.
SDF annotations are typically used to feed timing-accurate simulation for ATPG and sign-off.

Q.4 — OR (alternative subquestions)

If answering the OR alternative from Q.4:

(a) Importance and role of verification plan (3 marks)


 Verification plan defines scope, objectives, strategy, metrics (coverage goals), resources,
schedules, responsibilities for verifying a design.
 Importance:
o Ensures all functional aspects are verified comprehensively.
o Provides traceability from requirements to tests and coverage closure.
o Prevents missed features and late surprises; helps allocate effort efficiently.
 Key elements:
o List of features/blocks and how they will be verified (unit tests, directed tests,
random tests, formal proofs).
o Coverage metrics (functional coverage, code/line/branch coverage).
o Toolchain and simulation environments, testbench architecture.
o Pass/fail criteria and risk mitigation.

(b) Write a VHDL/Verilog code and test bench for 1 X 4 demux. (4 marks)

I’ll provide Verilog (more commonly used in GTU labs) — module and testbench.

Verilog: 1-to-4 demultiplexer (behavioral)


// 1x4 Demux (Verilog)
// Inputs: data (d), select (sel[1:0]), enable (en)
// Outputs: y[3:0]

module demux1x4 (
input wire d,
input wire [1:0] sel,
input wire en,
output reg [3:0] y
);
always @(*) begin
if (!en) begin
y = 4'b0000;
end else begin
case (sel)
2'b00: y = 4'b0001; // route d to y0
2'b01: y = 4'b0010; // route d to y1
2'b10: y = 4'b0100; // route d to y2
2'b11: y = 4'b1000; // route d to y3
default: y = 4'b0000;
endcase
// mask with d: if d = 0, all outputs are forced to 0
y = y & {4{d}};
end
end
endmodule

Simple testbench for the demux:


`timescale 1ns/1ps
module tb_demux1x4;
reg d;
reg [1:0] sel;
reg en;
wire [3:0] y;

demux1x4 DUT (.d(d), .sel(sel), .en(en), .y(y));

initial begin
$display("time\ten\tsel\td\ty");
$monitor("%0t\t%b\t%b\t%b\t%b",$time,en,sel,d,y);
// Initialize
en = 0; d = 0; sel = 2'b00;
#5 en = 1; d = 1; sel = 2'b00; // expect y = 0001
#10 sel = 2'b01; // expect y = 0010
#10 sel = 2'b10; // expect y = 0100
#10 sel = 2'b11; // expect y = 1000
#10 d = 0; sel = 2'b10; // expect y = 0000 (d=0)
#10 en = 0; // expect y = 0000 (disabled)
#10 $finish;
end
endmodule

(You can run this in any Verilog simulator. For VHDL, I can translate if needed.)

(c) Draw and explain flowchart indicating steps to do concurrent fault simulation. (7
marks)

Concurrent fault simulation simulates the good circuit and many faulty circuits together in a single event-driven pass over the test vectors: every gate carries a "concurrent fault list" holding only those faults whose local values differ from the good circuit, so effort is spent only where faulty behavior actually diverges. It is widely used for fault grading.

High-level flowchart steps (explained):

1. Input: Gate-level netlist, (collapsed) list of faults to simulate, and the test vectors.
2. Initialization: Simulate the good circuit for the applied vector; attach an initially empty concurrent fault list to each gate.
3. Fault insertion: For each fault, create a "bad gate" entry at its fault site; the entry records the gate's input/output values in the presence of that fault.
4. Event-driven propagation:
o When a good-circuit event changes a gate's inputs, re-evaluate the gate together with every entry in its fault list.
o An entry whose faulty output becomes equal to the good output converges and is deleted from the list; an entry whose output differs diverges and creates/updates entries in the fault lists of the fanout gates.
5. Detection: After the vector settles, any fault whose entry reaches a primary output (or scan cell) with a value different from the good value is marked detected and removed from the active fault list.
6. Repeat for the next vector until all vectors are processed or all faults are detected.
7. Results & reporting: Fault coverage, list of undetected faults, diagnostics if needed.

Notes & improvements:

 Memory is the main cost: fault lists grow when many faults are simultaneously active, so efficient divergence/convergence list management is essential.
 Use fault collapsing and equivalence classes to reduce the total number of faults before simulation.
 Contrast with parallel fault simulation, which packs one fault per bit of a machine word and evaluates logic with bitwise operations; concurrent simulation is more general (it handles arbitrary delays and multi-valued logic) because each faulty machine is simulated only where it differs from the good machine.
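The divergence idea can be sketched in toy form. The three-gate circuit (y = (a AND b) OR (NOT c)), net names, and the single-pass evaluation are illustrative assumptions; a production concurrent simulator is event-driven and stores divergences as per-gate fault lists, but the principle is the same: a faulty machine is evaluated only where it differs from the good machine.

```python
# Toy sketch of concurrent-style fault simulation on a hypothetical circuit
# y = (a AND b) OR (NOT c). Faults are (net, stuck_value) pairs.

GATES = [  # topologically ordered: (output net, function, input nets)
    ("n1", lambda v: v["a"] & v["b"],   ("a", "b")),
    ("n2", lambda v: 1 - v["c"],        ("c",)),
    ("y",  lambda v: v["n1"] | v["n2"], ("n1", "n2")),
]

def good_sim(pi):
    v = dict(pi)
    for net, fn, _ in GATES:
        v[net] = fn(v)
    return v

def concurrent_sim(pi, faults):
    """faults: list of (net, stuck_value); returns the set of detected faults."""
    good = good_sim(pi)
    detected = set()
    for net_f, sv in faults:
        diff = {}                          # nets where this machine diverges
        if net_f in pi and pi[net_f] != sv:
            diff[net_f] = sv               # stuck fault on a primary input
        for net, fn, ins in GATES:
            # evaluate this gate for the faulty machine only if it is the
            # fault site or at least one of its inputs has diverged
            if net == net_f or any(i in diff for i in ins):
                val = sv if net == net_f else fn({**good, **diff})
                if val != good[net]:
                    diff[net] = val        # divergence: store and propagate
                # else: the faulty value converges back to the good value
        if "y" in diff:                    # effect observed at primary output
            detected.add((net_f, sv))
    return detected

# n1 stuck-at-0 is detected; n2 stuck-at-1 and c stuck-at-0 are masked by n1=1
print(concurrent_sim({"a": 1, "b": 1, "c": 1}, [("n1", 0), ("n2", 1), ("c", 0)]))
# → {('n1', 0)}
```

The masking of the c stuck-at-0 fault shows why a single vector rarely detects all faults and further vectors are needed.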

Q.5
(a) What is meant by functional coverage? How is it useful in verification flow? (3 marks)

Functional coverage measures whether the functional behaviors and scenarios specified in
the design requirements have been exercised by verification tests (directed or random). Unlike
code coverage that measures exercised lines of source code, functional coverage measures
exercised features and requirements (e.g., states visited, combinations of control signals,
corner-case conditions).

Components:

 Coverage items: Variables, cross coverage, bins for ranges, state transitions.
 Coverpoints: Specify which functional aspects to monitor.
 Cross coverage: Combines multiple coverpoints to ensure combinations are tested.

Usefulness:

 Requirement traceability: Maps verification tests to design requirements to ensure all features are verified.
 Guides test development: Reveals unexercised scenarios so additional tests (directed or
randomized) can be created to fill coverage holes.
 Quality metric: Helps quantify verification completeness and risk assessment.
 Feedback loop: Coverage-directed random testing (constrained-random) can target holes
automatically by biasing stimulus toward uncovered bins.

Example: For an ALU, coverpoints might include operation types (add, sub, shift), operand
sign combinations, overflow conditions; cross coverage could ensure operations combined with
operand ranges are tested.
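The ALU example can be mimicked with a small coverage collector. This is a Python sketch of the concept only; in practice SystemVerilog covergroups/coverpoints do this inside the testbench, and all names and bins below are illustrative.

```python
# Toy functional-coverage collector: coverpoints for ALU operation and operand
# sign, plus their cross coverage (which combinations were exercised).

from itertools import product

OPS = ["add", "sub", "shift"]
SIGNS = ["pos", "neg"]

class Coverage:
    def __init__(self):
        self.op_bins = {op: 0 for op in OPS}          # coverpoint: operation
        self.sign_bins = {s: 0 for s in SIGNS}        # coverpoint: operand sign
        self.cross = {c: 0 for c in product(OPS, SIGNS)}  # cross coverage

    def sample(self, op, a):
        sign = "neg" if a < 0 else "pos"
        self.op_bins[op] += 1
        self.sign_bins[sign] += 1
        self.cross[(op, sign)] += 1

    def report(self):
        hit = sum(1 for v in self.cross.values() if v)
        return 100.0 * hit / len(self.cross)

cov = Coverage()
for op, a in [("add", 5), ("add", -3), ("sub", 7), ("shift", -1)]:
    cov.sample(op, a)
print(f"cross coverage: {cov.report():.1f}%")  # 4 of 6 cross bins hit
```

The uncovered bins ((sub, neg) and (shift, pos) here) are exactly the "coverage holes" that drive new directed or constrained-random tests.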

(b) What is code coverage? Explain its role in verification. (4 marks)

Code coverage measures which portions of the HDL code are executed during simulation.
Typical metrics:
1. Line coverage / Statement coverage: Percentage of lines/statements executed.
2. Toggle coverage: Whether each signal/net has toggled both 0→1 and 1→0; expression coverage checks that expressions have evaluated to both true and false.
3. Branch coverage / Decision coverage: Whether each branch of if/else, case items has
been executed.
4. Condition coverage: Whether each atomic condition in a compound expression
independently affects outcome.
5. FSM coverage / State coverage: Whether each state and transition in state machines was
exercised.

Role in verification:

 Detect dead code / uncovered logic: Unused or unreachable code can indicate bugs or
design problems.
 Help identify missing tests: Areas with low code coverage indicate tests that should be
added.
 Quality metric for regression: Track improvement or degradation in test suites.
 Complement functional coverage: While functional coverage ensures requirement
coverage, code coverage ensures source-level code paths have been executed.

Limitations:

 High code coverage does not guarantee functional correctness (you can execute lines
without checking correct outputs).
 Must be used with assertions and functional coverage for a complete verification strategy.
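What a branch-coverage tool does, and the limitation just noted, can be imitated in a few lines. The `dut` function and the hand-written counters are purely illustrative; real HDL and software coverage tools instrument the code automatically.

```python
# Toy illustration of branch coverage: instrument a function by hand with
# branch counters (coverage tools do this automatically via instrumentation).

branch_hits = {"a_gt_b": 0, "a_le_b": 0}

def dut(a, b):
    if a > b:
        branch_hits["a_gt_b"] += 1
        return a - b
    else:
        branch_hits["a_le_b"] += 1
        return b - a

dut(3, 1)  # exercises only the a > b branch
covered = sum(1 for v in branch_hits.values() if v)
print(f"branch coverage: {covered}/{len(branch_hits)}")  # → branch coverage: 1/2
# Note: even 2/2 here would not prove dut's outputs are *correct* —
# coverage must be paired with checkers/assertions.
```

This is exactly why high code coverage alone does not guarantee functional correctness: the branch was executed, but nothing checked its result.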

(c) Compare White box verification, Black box verification and Grey box verification. (7
marks)

Definitions & comparison

 Knowledge of internals:
o Black-box: Tester only knows inputs/outputs and the spec; internal structure unknown.
o White-box: Full visibility of the internal design (source code, RTL, netlist).
o Grey-box: Partial knowledge (some internal details available).
 Approach:
o Black-box: Stimulus applied only through interfaces; outputs checked against the spec.
o White-box: Internal structure examined; tests target internal states, code paths, and the implementation.
o Grey-box: Mix of functional tests and some structural introspection.
 Focus:
o Black-box: Functional correctness against the specification; end-to-end behavior.
o White-box: Code coverage, path coverage, internal corner cases, structural defects.
o Grey-box: Functional coverage plus targeted tests for critical internal logic.
 Test creation:
o Black-box: Usually functional directed tests or constrained-random stimulus at the interfaces.
o White-box: Structural tests, unit tests, white-box test generation, formal methods.
o Grey-box: Hybrid tests that use introspection to direct stimuli.
 Tools used:
o Black-box: Behavioral testbenches, functional coverage, constrained-random testbenches.
o White-box: Static analysis, code-coverage tools, ATPG, formal verification.
o Grey-box: Combination (simulator, coverage, selective formal checks).
 When used:
o Black-box: System-level validation, acceptance tests, modules available only as black boxes.
o White-box: Unit testing, design verification, safety-critical modules, DFT.
o Grey-box: Integration testing; system verification with white-box optimizations.
 Pros:
o Black-box: Mimics how the end user interacts; independent of implementation.
o White-box: High defect detection (finds implementation bugs); helps optimize tests.
o Grey-box: Balanced effort and effectiveness.
 Cons:
o Black-box: Misses internal defects; limited structural coverage.
o White-box: Requires implementation details; more effort.
o Grey-box: Requires coordination of both approaches.

When to use which:

 Black-box: When only a spec is available or for final integration/regression tests.
 White-box: During design/verification phases to find implementation bugs early and to
achieve code/branch coverage.
 Grey-box: Practical compromise — testers know some internals (timing, states) which
guides more effective system-level testing.

Examples:

 Black-box: Exercising a module only through its external interface/API (e.g., GUI- or API-level testing).
 White-box: Verifying RTL by inspecting state machine transitions, achieving 100%
branch coverage.
 Grey-box: Simulation at subsystem level using logs of internal registers to guide
stimulus.

Q.5 — OR alternative
If answering the OR set for Q.5:
(a) Differentiate between static hazard and dynamic hazard. (3 marks)

Static hazard:

 Occurs in combinational circuits when an output momentarily changes (glitches) even though it should remain at the same logical value (0→0 or 1→1) during an input change, due to different propagation delays on different paths.
 Example: A static-1 hazard occurs when output should remain 1 but momentarily
becomes 0 during input transitions.
 Cause: Unequal delays in different paths that briefly create a condition where the
function evaluates to the opposite value.

Dynamic hazard:

 Involves multiple glitches: output changes multiple times before settling to final value
(e.g., 1→0→1→0 before finally 1). Often due to more than two paths and more complex
delay differences.
 Dynamic hazard can be seen as repeated static hazards causing multiple transitions.
 Mitigation: Add consensus terms (for combinational logic) or ensure hazard-free logic
design, or careful timing balancing.

Key difference: Static hazard is a single glitch for an input change that should not change
output; dynamic hazard is multiple glitches causing several transitions before final value.
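A static-1 hazard can be reproduced with a tiny unit-delay simulation (delays, signal names, and initial values are assumptions): for F = A·B + A'·C with B = C = 1, the inverter path is one gate longer, so when A falls both AND terms are momentarily 0 and F glitches low.

```python
# Unit-delay simulation of F = A.B + A'.C with B = C = 1 while A falls 1 -> 0.
# Each list holds a signal's value at integer time steps; every gate adds one
# unit of delay (output at t depends on inputs at t-1).

def simulate(T=8, fall_time=2):
    A  = [1 if t < fall_time else 0 for t in range(T)]
    An = [0] * T  # NOT A (delay 1); steady-state init for A=1
    P  = [1] * T  # A AND B, B=1 (delay 1); init 1
    Q  = [0] * T  # An AND C, C=1 (delay 1); init 0
    F  = [1] * T  # P OR Q (delay 1); init 1
    for t in range(1, T):
        An[t] = 1 - A[t - 1]
        P[t]  = A[t - 1]          # B = 1
        Q[t]  = An[t - 1]         # C = 1
        F[t]  = P[t - 1] | Q[t - 1]
    return F

print(simulate())  # → [1, 1, 1, 1, 0, 1, 1, 1]  (static-1 glitch at t = 4)
```

Adding the consensus term B·C to F makes the cover overlap and removes this glitch, which is exactly the mitigation mentioned above.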

(b) Compare Code-driven simulation and event-driven simulation. (4 marks)

Code-driven simulation (cycle-based or interpreter-driven):

 Approach: Evaluates the design according to a sequential algorithm, often re-evaluating all statements or all modules each cycle.
 When used: Useful for high-level models, hardware/software co-simulation, or
simulation where cycle accuracy is enough (emulation).
 Pros:
o Simple to implement.
o Can be very fast for cycle-accurate operations (when combinational details are
abstracted).
 Cons:
o Inefficient if many parts do not change; may repeatedly compute unchanged
blocks.

Event-driven simulation (activity-driven):

 Approach: Only re-evaluates components whose inputs have changed (events). Maintains an event queue; efficient for large designs where only small parts toggle.
 When used: Accurate timing-level simulation (gate-level or RTL-level with timing).
 Pros:
o Efficient for sparse switching activity.
o Can accurately model delay and transient behavior (glitches).
 Cons:
o More complex to implement; event queue overhead.
o In worst-case (heavy switching), performance can degrade.

Comparison table:

 Evaluates only changed items? Code-driven: no (usually); event-driven: yes.
 Suited for timing-accurate simulation? Code-driven: not ideal; event-driven: yes.
 Complexity: Code-driven: low; event-driven: higher.
 Performance with sparse switching: Code-driven: moderate; event-driven: high.
 Use case: Code-driven: high-level, cycle-based sims; event-driven: gate-level, transient/functional verification.
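The event-queue mechanism can be sketched as a minimal simulator over a hypothetical two-gate circuit (y = (NOT a) AND b, with assumed gate delays): only gates in the fanout of a changed net are re-evaluated, and the evaluation counter shows how little work is done compared with re-evaluating everything each cycle.

```python
# Minimal event-driven simulation sketch: an event queue of (time, net, value)
# changes; a gate is re-evaluated only when one of its inputs changes.

import heapq

# gate output net -> (function, input nets, delay)
GATES = {
    "n1": (lambda v: 1 - v["a"],       ("a",),      1),  # NOT a
    "y":  (lambda v: v["n1"] & v["b"], ("n1", "b"), 2),  # n1 AND b
}
FANOUT = {"a": ["n1"], "b": ["y"], "n1": ["y"]}

def simulate(stimulus, t_end=20):
    values = {"a": 0, "b": 0, "n1": 1, "y": 0}  # consistent initial state
    events = list(stimulus)                      # (time, net, value) tuples
    heapq.heapify(events)
    evaluations = 0
    while events and events[0][0] <= t_end:
        t, net, val = heapq.heappop(events)
        if values[net] == val:
            continue                    # no change: nothing to schedule
        values[net] = val
        for g in FANOUT.get(net, []):   # re-evaluate only affected gates
            fn, ins, delay = GATES[g]
            evaluations += 1
            heapq.heappush(events, (t + delay, g, fn(values)))
    return values, evaluations

vals, n = simulate([(0, "b", 1), (5, "a", 1)])
print(vals, n)  # → {'a': 1, 'b': 1, 'n1': 0, 'y': 0} 3
```

With two input events only three gate evaluations occur; a code-driven simulator would have re-evaluated both gates on every cycle regardless of activity.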

(c) Draw and explain verification flow. (7 marks)

Verification Flow — key stages (detailed):

1. Requirement / Spec capture: Formalize feature list and constraints; derive verification
goals & acceptance criteria.
2. Verification Plan: Define coverage metrics (functional & code), test strategy (directed,
constrained random), tools, schedule, responsibilities.
3. Testbench Architecture: Create modular, reusable testbench (UVM or self-built) with
stimulus generators, drivers, monitors, scoreboards, and checkers.
4. Environment Setup: Compile RTL, create harness, integrate testbench components,
enable automation (regression scripts).
5. Create Tests:
o Directed tests for critical scenarios.
o Constrained-random tests to explore large state space.
o Formal properties for exhaustive checks on small modules.
6. Run simulation & collect results: Functional outputs, logs, waveform captures.
7. Coverage Collection:
o Functional coverage to see if requirements/scenarios are exercised.
o Code coverage (line/branch/statement/condition).
o Toggle / Path coverage for power and timing insights.
8. Analyze coverage & debug failures: For coverage holes, write new tests or update
constraints; for mismatches, debug and fix RTL or testbench.
9. Regression & Sign-off: Automate nightly/continuous runs; reach coverage goals and
sign-off.
10. Emulation / FPGA prototyping: For system-level or performance tests, move to hardware emulators or FPGA prototypes.
11. Final sign-off & tape-out: After verification closure and sign-off.

Explanation:

 Verification is iterative: tests reveal bugs → fix RTL/testbench → re-run regressions and
re-check coverage.
 The scoreboard is used to compare expected vs actual results.
 Assertions (SVA/PSL) are used for run-time checking of protocol/temporal properties.
 Regression is essential for tracking stability across code changes.

Flowchart hint: If drawing, show boxes in sequence with arrows: Requirements → Verification Plan → Testbench → Tests → Simulation → Coverage → Analyze → Debug → Regression → Sign-off.
