Low Power VLSI Design & Analysis

The document discusses the necessity of low power VLSI chips due to increased power dissipation from higher transistor densities and operating frequencies, emphasizing the importance of power efficiency in modern electronics. It covers various sources of power dissipation, including dynamic and static power, and highlights the role of SPICE circuit simulation in analyzing and optimizing power consumption in VLSI designs. Additionally, it details different simulation techniques and their applications in ensuring efficient circuit performance and reliability.
UNIT 1: INTRODUCTION & POWER ANALYSIS / SIMULATION

Need for low power VLSI chips, sources of power dissipation, device & technology impact on low power, impact of technology scaling, technology & device innovation, power estimation, SPICE circuit simulators, gate level logic simulation, capacitive power estimation, static power, gate level capacitance estimation, architecture level analysis, Monte-Carlo simulation.

Need for low power VLSI chips
Power Dissipation Became a Problem:
 In the past: VLSI (Very Large Scale Integration) chips had lower device density and slower speeds, so power consumption wasn’t a major concern.
 Now: As technology has advanced, more transistors are packed into chips, and they operate at higher frequencies, leading to much higher power dissipation.
Moore’s Law and Its Impact:
•Moore’s Law suggests that the number of transistors on
a chip doubles every 18 months—or increases tenfold
every seven years.
•This rapid growth boosts performance but also
increases heat and power demands, making power
efficiency a crucial design issue.
Battery-Powered Devices & Consumer Demand:
•Portable electronics (phones, laptops, etc.) are driving
demand for low power chips.
•Battery technology isn’t improving as fast as chip
performance.
•High energy density in batteries can be dangerous—
approaching explosive levels—so we can't rely on better
batteries alone.
•This makes efficient power usage in chips even more
critical.
High-Performance Systems Also Push Low Power Needs:
•High-end microprocessors now consume tens of watts or more, so packaging and cooling costs also make low power a key design goal.
2. SOURCES OF POWER DISSIPATION:
2.1. POWER AND ENERGY:
Power: Instantaneous Consumption
Think of power as the rate at which energy is used at a
moment.
Measured in Watts (W), it shows how intense the energy
draw is from the battery or supply.
In the diagram:
•Approach 1 has a taller, narrower curve → high power but short duration.
•Approach 2 has a flatter, wider curve → low power but extended duration.
•Even if Approach 1 uses high power briefly and Approach 2 uses low power over a longer duration, the total energy (the area under the curve) remains equal.
•This is crucial in systems design, especially in embedded systems and VLSI, where choosing between power optimization and processing time can affect thermal load, battery life, and overall system reliability.
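To make the "equal area under the curve" point concrete, here is a minimal Python sketch with two made-up power profiles; the shapes and values are purely illustrative and not taken from the text.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)     # time axis in seconds
dt = t[1] - t[0]

# Approach 1: high power (2 W) for a short 1 s burst, then idle
p1 = np.where(t <= 1.0, 2.0, 0.0)
# Approach 2: low power (0.2 W) sustained over the full 10 s
p2 = np.full_like(t, 0.2)

# Energy is the area under the power curve: E = integral of P(t) dt
e1 = np.sum(p1) * dt
e2 = np.sum(p2) * dt

print(f"Approach 1: peak {p1.max():.1f} W, energy {e1:.2f} J")
print(f"Approach 2: peak {p2.max():.1f} W, energy {e2:.2f} J")
# Both energies come out near 2 J: same total energy, very different peak power.
```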
Power dissipation is measured commonly in
terms of two types of metrics:
1. Peak Power – The Instantaneous Surge
Definition: The maximum power a device draws at any given
moment.
Analogy: Like a sudden power spike when a processor jumps into
turbo mode.
Melting risks : Excessive peak power can overheat tiny
interconnections in VLSI, leading to physical damage.
Power-line glitches: Sudden spikes may disrupt the power
network, causing transients and affecting other devices.
Design Strategies:
Use clock gating or dynamic voltage scaling to control peak
activity.
Ensure proper decoupling capacitors and robust power rails to
absorb transients.
2. Average Power – The Long-Term Load
Definition: The mean power consumption over a span of time.
Impact on VLSI Design:
Packaging : High average power means constant heat →
packaging materials must withstand thermal stress.
Cooling : Requires more efficient heat sinks, fans, or even
active cooling mechanisms for chips.
Real-World Concern: sustained average power determines battery life in portable devices and sets the thermal budget for the package and cooling solution.
TYPES OF POWER DISSIPATION
1) Dynamic power:
Dynamic power dissipation refers to the power
consumed during signal transitions, that is, when logic
gates in a circuit switch between high and low states. It
happens only when the device is actively processing,
not in idle states. It is generally categorized into three
types:
✓ Switching Power ✓ Short-Circuit Power ✓ Glitching
Power
1.a) Switching Power Dissipation :
Switching power dissipation (or dynamic power) arises
whenever a digital gate changes state—from 0 to 1 or vice
versa. This occurs due to the charging and discharging of
load and internal capacitances in CMOS circuits.
• From fig 1.4: The circuit shows a capacitor CL connected between output and ground.
• During transitions, CL charges from VDD and discharges to 0V.
• Each transition involves current flow, which dissipates energy.
Energy per Transition:
Energy/Transition = (1/2)·CL·VDD²
• Every time the output switches from 0 → 1, the capacitor charges, storing energy.
• This energy is dissipated as heat, either in the charging or the discharging phase.
Total Switching Power:
Pswitching = α·CL·VDD²·f for the output load; including internal nodes this is commonly written as Pswitching ≈ f·Σi αi·Ci·Vi·VDD, where the internal swing Vi may be reduced (e.g., to VDD − Vth).
Including Internal Nodes
•Adds dissipation from internal node capacitances Ci and their activity αi.
•Vth: Threshold voltage of the transistors.
•This sum captures power lost due to switching inside complex gates or logic blocks.

1.b) Short-Circuit Power Dissipation
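As a quick numerical sketch of the switching-power expression above (all component values, activity factors, and the reduced internal swing are illustrative assumptions, not data from the text):

```python
# Switching power for a single gate output plus its internal nodes.
V_DD = 1.2      # supply voltage (V), assumed
f_clk = 500e6   # clock frequency (Hz), assumed

# (activity factor alpha_i, capacitance C_i in farads, voltage swing V_i in volts)
nodes = [
    (0.25, 15e-15, 1.2),   # output load: full rail-to-rail swing
    (0.25,  4e-15, 1.2),   # internal node with full swing
    (0.10,  3e-15, 0.8),   # internal node with reduced swing (~VDD - Vth)
]

# P = f * sum_i( alpha_i * C_i * V_i * V_DD )
p_switching = f_clk * sum(a * c * v * V_DD for a, c, v in nodes)
print(f"Switching power: {p_switching * 1e6:.2f} uW")
```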
During the signal transition phase, there is a brief time when both the NMOS and PMOS transistors conduct simultaneously in a CMOS inverter. This allows a direct path from VDD to GND, causing a current spike called the short-circuit current.
The expression for short-circuit power is commonly approximated (Veendrick's formula) as:
Psc ≈ (β/12)·(VDD − 2Vth)³·(τ/T)
where β is the transistor gain factor, τ is the input rise/fall time, and T is the switching period.
1.c) Glitching Power Dissipation:
•Glitching power is the dynamic power
dissipated due to unintended logic transitions
(glitches) caused by timing mismatches in a
digital circuit.
•These glitches happen when:
•Different input signals arrive at a gate at
different times.
•This creates temporary and unnecessary
output transitions before the final, correct
output stabilizes.
•Since dynamic power is tied to the number of
transitions, even these brief, unwanted ones
consume energy.
Unequal Propagation Delays
•Imagine a simple combinational block where Input A arrives earlier than Input B.
•The gate output may briefly switch to an incorrect value before B arrives, producing a glitch that is corrected only when both inputs have settled.
2) Static Power Dissipation
Even when the circuit is idle (no logic
transitions), the transistors continue to draw
leakage currents due to imperfections in the
device physics. These small currents add up—
especially in deep submicron technologies—and
lead to static power dissipation.
I₁ – Reverse-Bias Diode Leakage (Junction
Leakage)
Occurs at the source/drain junctions of a
MOSFET.
Reverse-biased p-n junctions allow minority
carrier diffusion.
The leakage grows with temperature and
junction area.
I₂ – Tunneling-Induced Junction Leakage
Arises from quantum tunneling when reverse
bias is strong enough.
I₃ – Sub-Threshold Leakage
•Occurs when VGS < Vth but the transistor still conducts
weakly.
•Electrons move from source to drain via diffusion, even
in OFF state.
•Increases with:
•Lower Vth
•Smaller channel lengths
•Higher temperature
•Major contributor to standby power in nanometer CMOS.
I₄ – Oxide Tunneling Leakage
Due to ultra-thin gate oxides → electrons tunnel through
the oxide.
Exponential relation with oxide thickness → very
sensitive.
Becomes serious below 2 nm oxide thickness.
I₅ – Hot Carrier Injection (HCI)
High-energy carriers near the drain inject into gate oxide,
causing gate leakage.
Over time, leads to reliability issues.
Often tackled with drain engineering (e.g., LDD – lightly doped drain structures).
I₆ – Gate-Induced Drain Leakage (GIDL)
At high gate voltage, strong electric field causes band-
to-band tunneling near the drain junction.
Especially problematic during deep sleep modes when
gate is high and drain is low.
I₇ – Channel Punch-Through
In short-channel devices, drain and source regions are
too close, weakening control of the gate.
Leads to current leaking directly under the gate—even
when it's off.
Mitigated by careful channel length control and halo
implant techniques.
•Reverse-biased P–N junction leakage → same as I₁, but
can appear in I/O cells and analog blocks.
•Hot Carrier Injection gate current → I₅, worsens with
high-frequency switching.
•Channel punch-through current → I₇, intensified in sub-
65nm nodes.
SPICE circuit simulation
 SPICE stands for Simulation Program with Integrated Circuit Emphasis.
 It is mainly used for circuit simulation purposes.
 When designing low-power VLSI circuits,
SPICE simulations are essential for
verifying the functionality, performance,
and power consumption of the designed
circuits.
SPICE Basics
SPICE operates using Kirchhoff's Current Law
(KCL), which involves solving a large system of
equations (or matrix) representing the current
at each node in a circuit. Its foundation lies in
basic circuit theory elements like:
Resistors
Capacitors
Inductors
Current sources
Voltage sources
These fundamental elements are combined to
create complex device models like diodes and
transistors, which are essential for simulating
real circuits. With these device models, SPICE
constructs a circuit to simulate its behavior and
computes precise values for parameters such as
voltage, current, and charge. Using these values, the power dissipation of the circuit can be derived.
Analysis Modes in SPICE:
SPICE provides several ways to analyze
circuits, but for digital IC power
analysis, the most important mode is
transient analysis. This mode:
1. Solves the circuit's DC behavior at
time zero.
2. Simulates the circuit's dynamic
behavior over time by advancing in
small time steps.
3. Produces precise waveforms of
circuit parameters (e.g., voltage or
current) that can be plotted over the
simulation time.
This makes transient analysis crucial for power analysis of digital ICs.
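The transient waveforms themselves are not power numbers; a common post-processing step is to integrate v(t)·i(t). Below is a minimal Python sketch of that step, assuming the supply voltage and supply current samples from a transient run have already been exported as arrays; the example waveform is invented.

```python
import numpy as np

def average_power(time, v_dd, i_dd):
    """Average power from sampled transient waveforms.

    time : array of time points (s), not necessarily uniformly spaced
    v_dd : supply voltage at each time point (V)
    i_dd : current drawn from the supply at each time point (A)
    """
    p = v_dd * i_dd                        # instantaneous power p(t) = v(t) * i(t)
    dt = np.diff(time)
    energy = np.sum(0.5 * (p[:-1] + p[1:]) * dt)   # trapezoidal integration
    return energy / (time[-1] - time[0])   # average power = energy / duration

# Made-up waveform: 1.2 V supply with brief current spikes on switching events
t = np.linspace(0.0, 10e-9, 1001)                         # 10 ns window
i = 1e-3 + 4e-3 * (np.sin(2 * np.pi * 500e6 * t) > 0.99)  # baseline + spikes
v = np.full_like(t, 1.2)
print(f"Average power: {average_power(t, v, i) * 1e3:.3f} mW")
```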
Device Models:
SPICE device models are highly
accurate and derived through a
characterization process, where:
Dozens of parameters describe the
behavior of each device.
These models are calibrated using
physical measurements from test
chips to ensure accuracy.
For even higher precision, finite
element methods or other physical
simulation techniques are
sometimes used to determine the model parameters.
SPICE Power Analysis
Strengths of SPICE
Accuracy: SPICE's strongest advantage
lies in its exceptional precision. It
provides highly reliable estimates for:
• Dynamic power dissipation
• Static power dissipation
• Leakage power dissipation
Versatility: SPICE supports a wide range
of circuit components, including both:
• Common components: Diodes, resistors,
capacitors, inductors.
• Specialized components: These can be
modeled using SPICE's flexible modeling
capabilities.
Advanced Device Models: It
supports MOSFETs and bipolar
transistors, while capturing low-
level phenomena such as:
Charge sharing
Crosstalk
Transistor body effect
Precision in Results: With the
correct device models, SPICE
simulations can achieve accuracy
within a few percent of physical
measurements.
Limitations of SPICE
Computational Intensity: SPICE requires
significant computational resources, making it
unsuitable for simulating large circuits.
Standard SPICE-based simulators handle
several hundred to a few thousand devices,
while advanced ones may support up to ten
thousand devices.
Full-chip simulations are infeasible due to
memory and computation limits.
Process Variations: Semiconductor fabrication
introduces inherent variations that can affect
SPICE's accuracy.
These fluctuations make modeling physical
components and devices challenging.
Power and timing analyses are affected by
process variations, exacerbated by SPICE's
high accuracy goals.
Coping with Process Variation
To address the challenges caused by chip
production variability, extreme case analysis is
employed:
Multiple sets of device models are created to
simulate:
Typical conditions
Best-case conditions (fastest operating
speeds)
Worst-case conditions (slowest operating
speeds)
For CMOS circuits, faster devices usually
correspond to higher power dissipation, but
exceptions exist. For instance:
Low-speed worst-case models may cause slow
signal slopes, leading to high short-circuit
power.
Design Considerations
Key aspects of using SPICE for low-power VLSI circuit simulation include:
Modeling Components
Power Analysis
Transistor Sizing and Optimization
Voltage Scaling
Technology Node Considerations
Transient and DC Analysis
Monte Carlo Analysis
Simulating Low-Power Techniques
Gate-level Logic
Simulation
It’s a method used in designing complex
microchips (VLSI circuits) to check how
fast signals travel through digital circuits.
This helps make sure the chip works
correctly and on time.
 It helps engineers predict problems like
delays or signal glitches and fix them
early—saving time and money in the
chip development process.
Gate-level Analysis
 The popular gate-level analysis is based on the so-called event-driven logic simulation.
Event-Driven Simulation
 Events are changes in signal values (like switching
from 0 to 1).
 When a signal changes at the input of a gate, it may
cause a change at the output after a short delay.
 The computer tracks these changes to see how
the whole circuit behaves.
 It’s very accurate and helps catch problems early.
 Simulators also use extra signal types like:
 Unknown (X): when the value isn’t clear
 Don’t care (-): when the value doesn’t matter
 High-impedance (Z): when the signal isn’t being driven
Cycle-Based Simulation
 A newer method that assumes the circuit
runs with a clock.
 Instead of tracking every tiny change, it
only checks signals once per clock
cycle.
 This makes the simulation faster and
more efficient, though sometimes less
detailed.
Hardware Acceleration
 Some simulators use special hardware
to speed things up.
 Like how a graphics card helps with video games, this hardware helps speed up logic simulation.
Hardware Emulation
 This is the fastest and most powerful method.
 Instead of simulating with software, the circuit is
built using real hardware like FPGAs
(programmable chips).
 The circuit is split into smaller parts, and each
part is mapped to hardware.
 These parts are connected to act like the full
circuit.
 It’s very fast—almost like running the real chip—
but also very expensive.
Capacitive Power Dissipation
1. A logic simulation is run to track how often each wire (or "net") in the circuit switches.
2. Each time a wire switches, it uses power to charge or discharge a tiny capacitor.
3. The total power used can be calculated using the formula:
P = C·V²·f
•P = power
•C = capacitance (how much charge the wire holds)
•V = voltage
•f = frequency (how often the wire switches)
In analog simulations (like SPICE), signals are smooth and not clearly digital, so it is hard to define a switching frequency. In logic simulation, signals are just 0s and 1s, so it is easy to count how often they switch and calculate power.
Frequency is calculated per wire: each wire has a counter that increases every time it switches. At the end of the simulation, the frequency of wire i is:
fi = ti / (2T)
tᵢ = number of switches of wire i
T = total simulation time
Then, the power for each wire is Pi = Ci·V²·fi, and the total capacitive power is the sum over all wires.
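A minimal Python sketch of this counting scheme, using a made-up set of nets, capacitances, and toggle counts (the names and numbers are illustrative, not from the text):

```python
# Capacitive power from logic-simulation toggle counts.
V_DD = 1.2          # supply voltage (V), assumed
T_SIM = 1e-6        # total simulated time (s), assumed

nets = {
    # name: (capacitance in farads, number of switches observed)
    "clk":   (20e-15, 2000),
    "data0": (10e-15,  400),
    "data1": (12e-15,  350),
    "carry": (15e-15,  120),
}

total_power = 0.0
for name, (cap, toggles) in nets.items():
    freq = toggles / (2.0 * T_SIM)        # f_i = t_i / (2T): two toggles per cycle
    power = cap * V_DD ** 2 * freq        # P_i = C_i * V^2 * f_i
    total_power += power
    print(f"{name:>6}: f = {freq/1e6:8.2f} MHz, P = {power*1e6:7.3f} uW")

print(f"Total capacitive power: {total_power*1e6:.3f} uW")
```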
Internal Switching Energy:
Def: Power dissipated inside the logic cell.
Components:
Short-circuit power — momentary current path from VDD to GND during switching.
Internal node charging/discharging — happens inside the cell, not just at its output.
Measuring Internal Power:
Use characterization (similar to timing characterization).
Simulate the gate at the transistor level (with SPICE or similar).
Identify dynamic energy dissipation events — these are the input transitions that cause internal switching — and record the energy of each.
Example: NAND Gate
A NAND gate with inputs A and B and output Y:
If A = 1 and B changes from 0 → 1, then:
Output Y changes from 1 → 0.
Inside the gate, this transition draws a certain
amount of energy.
That energy comes from:
Short-circuit current
Charging/discharging of internal nodes.
Key idea: Each distinct type of input change = one logic
event.
Each event has a pre-measured energy cost (from SPICE).
Event-Based Power Computation
For each gate:
Predefine all possible events.
Assign an energy value to each (from characterization).
From logic simulation:
Measure how often each event happens → f(g, e).
The internal power is then the sum over all gates and events of E(g, e)·f(g, e).
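A small Python sketch of this event-based bookkeeping; the gate name, event names, per-event energies, and counts below are hypothetical placeholders, not characterization data from the text.

```python
# Event-based internal (cell-internal) power computation.
T_SIM = 1e-6   # total simulated time in seconds (assumed)

# E[gate][event] = energy per occurrence in joules (from characterization, here made up)
event_energy = {
    "nand2_U1": {"A1_rise_out_fall": 3.0e-15, "A2_rise_out_fall": 3.5e-15,
                 "A1_fall_out_rise": 2.8e-15, "A2_fall_out_rise": 3.2e-15},
}

# counts[gate][event] = how many times the event occurred during logic simulation
event_count = {
    "nand2_U1": {"A1_rise_out_fall": 120, "A2_rise_out_fall": 90,
                 "A1_fall_out_rise": 118, "A2_fall_out_rise": 92},
}

internal_power = 0.0
for gate, energies in event_energy.items():
    for event, energy in energies.items():
        n = event_count[gate].get(event, 0)
        internal_power += energy * n / T_SIM   # sum of E(g, e) * f(g, e)

print(f"Internal switching power: {internal_power * 1e6:.3f} uW")
```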
•Even if two gates have the same logic function
(e.g., NAND), their transistor designs can differ.
•Different designs = different internal nodes and
thus different events.
•Example: One NAND implementation may have
4 events, another may have 6 events (extra
ones from extra internal switching).
•This means event list creation must be careful
— missing events → wrong power estimation.
STATIC STATE POWER
•About static power — the power a gate consumes even
when it’s not switching.
•This happens because real MOS transistors leak current
even when “off”.
Static (leakage) power mainly comes from:
•Subthreshold leakage — a small current flows between
source and drain when the transistor is off.
•Reverse-biased diode leakage — leakage from the drain-
bulk and source-bulk junctions.
•These leakages depend on:
•Process technology
•Voltage
•Temperature
•A gate can have multiple input combinations, and each
combination causes a different transistor configuration.
•Example: A 2-input NAND has 4 possible input states: 00, 01, 10, 11.
•In each state:
•Some transistors are ON, some OFF.
•Leakage paths change → static power changes.
•So, static power isn’t the same for all input states.
Measuring Static Power per State:
Characterization step (SPICE):
Measure P(g, s) → static power consumed by gate g
in state s.
Logic simulation step:
Simulate the whole circuit to see how long each
gate stays in each state.
Let:
T = total simulation time.
T(g, s) = total time gate g is in state s.
For each gate and each state:
•Multiply the static power for that state by the fraction of
time spent in that state.
•Sum this over all states and gates to get total static
power.
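A minimal Python sketch of this state-weighted static power calculation; the gate name, leakage values, and state-residency times are invented for illustration.

```python
# State-dependent static (leakage) power, weighted by residency time.
T_SIM = 1e-6   # total simulated time in seconds (assumed)

# P[gate][state] = leakage power in watts for that input state (illustrative values)
leakage_power = {
    "nand2_U1": {"00": 5e-9, "01": 12e-9, "10": 11e-9, "11": 30e-9},
}

# T[gate][state] = time the gate spent in each state during logic simulation (s)
state_time = {
    "nand2_U1": {"00": 0.25e-6, "01": 0.30e-6, "10": 0.20e-6, "11": 0.25e-6},
}

static_power = 0.0
for gate, powers in leakage_power.items():
    for state, p_leak in powers.items():
        frac = state_time[gate].get(state, 0.0) / T_SIM   # fraction of time in state
        static_power += p_leak * frac                     # sum of P(g, s) * T(g, s) / T

print(f"Average static power: {static_power * 1e9:.2f} nW")
```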
GATE-LEVEL CAPACITANCE ESTIMATION
Dynamic power in CMOS circuits is directly proportional to capacitance:
P ∝ C·V²·f
•Capacitance affects:
1. Power — More capacitance → more power to charge/discharge it.
2. Timing — More capacitance → slower gates → different signal slopes → changes in short-circuit current and switching behavior.
Types of Parasitic Capacitance
Two main sources in CMOS circuits:
1. Device parasitic capacitance
• Comes from the MOSFET’s physical structure:
• Gate capacitance → depends on oxide thickness
(process factor) and transistor width/length (design
factor).
• Usually rectangular shape.
• For odd shapes (e.g., L-shaped), an equivalent
rectangle is calculated.
• Source/drain capacitance → depends on diffusion
area and shape.
• Larger transistors = larger capacitance on all
terminals.
2. Wiring capacitance
• Comes from interconnect wires on the chip.
• Depends on:
• Layer used (metal layer height).
• Length of wire.
• Width of wire.
• Distance from the substrate (affects coupling).
Measuring Device (Pin) Capacitance
In cell-based design, cell layouts are known
before the full chip is designed.
Each pin’s capacitance can be measured during
cell characterization.
SPICE simulation method:
Vary the pin’s voltage V over time T.
Measure the resulting current i.
Use the capacitor equation i = C·(dV/dt), so C = i / (dV/dt) (equivalently, C = ∫ i dt / ΔV).
Store the measured pin capacitance in the cell library for later use.
Estimating Wiring Capacitance
Post-layout (after routing):
Wire lengths are known → wiring capacitance can be
calculated exactly.
Pre-layout (before routing):
Physical design is not done yet → can’t measure exact
length.
Solution: Use a wire-load model.
Predict wire length from the number of pins
connected to the net.
Mapping: pin-count → estimated wiring
capacitance.
Pre-Layout vs Post-Layout Use
Pre-layout:
Use pin capacitance (from the cell library) + a wire-load model (estimated wiring capacitance).
Wire-load models are built using historical data from similar chip designs, sometimes adjusted based on circuit size (e.g., 1,000-cell vs 10,000-cell designs).
Purpose: vs 10,000-cell designs).
early power estimation without
waiting for full layout.
Post-layout:
Use actual wire lengths from routing to calculate true
wiring capacitance.
Purpose: verify and refine pre-layout estimates.
GATE-LEVEL POWER ANALYSIS
Gate-level low-power techniques covered: technology mapping, phase assignment, pin swapping, glitching power handling, pre-computation, clock gating, and input gating.
Data Correlation Analysis
• Sample correlation in digital signal processing (DSP) systems arises from the process of sampling a band-limited analog signal at a rate higher than its Nyquist rate, which is twice the signal's bandwidth. When this occurs, successive samples become strongly related to one another because of:
1. Sampling at a Higher Rate
2. Band-Limited Nature of the Signal
3. Slow Variation in the Signal
• Overall, sample correlation is a
natural consequence of sampling a
band-limited analog signal at a rate
higher than its Nyquist rate, and
understanding this phenomenon is
crucial for designing and analyzing
digital signal processing systems
effectively.
Developing a high-level power model for a
DSP system based on the correlation
properties of the data stream is a valuable
approach for estimating power dissipation
without the need for a detailed sample-by-
sample analysis.
1. Understanding the Relationship Between
Correlation and Power Dissipation
2. Correlation Measures
3. Frequency of the Data Stream
4. Developing the Power Model
5. Validation and Calibration
6. Application
Dual Bit Type Signal Model
• The Dual Bit Type (DBT) signal model represents each data word as containing two types of bits: low-order bits that toggle like uniform white noise, and high-order (sign) bits whose activity follows the sign statistics of the data. This type of signal model is useful in various applications, particularly in digital signal processing and datapath power estimation.
1. Binary Representation
2. Signal Encoding
3. Data Transmission
4. Error Detection and Correction
In digital signal processing (DSP) systems, the
choice of numerical representation, such as two's
complement or signed magnitude, can significantly
impact the power dissipation characteristics,
especially concerning data correlation. Let's delve
into the toggle characteristics of data signals
under the influence of data correlation, assuming
the use of two's complement representation.
1. Positively Correlated Data Stream:
- Least Significant Bits (LSB): In a positively correlated data stream, successive data samples are very close in their binary representation. Consequently, the LSBs toggle frequently, since small variations in the data samples result in changes in these bits.
- Most Significant Bits (MSB): On the other hand, the MSBs remain relatively quiet because small variations in the data samples do not affect them.
- Toggle Characteristics: When plotting the bit-toggle frequencies of the signals, we observe specific characteristics:
  - Uniform White Noise Region: LSBs toggle frequently, approximating half the maximum frequency. This region is termed the uniform white noise region because the bits toggle in a seemingly random fashion.
  - Sign Bit Region: MSBs have a very low toggle rate and mainly toggle when there is a sign change in the data samples.
  - Grey Area: Between the LSB and MSB regions, there is a transition zone where the toggle frequency changes from white-noise behaviour to sign-bit behaviour. In this region, the toggle rate increases gradually from near zero up to roughly half the maximum frequency.
2. Negatively Correlated Data Stream:
- For a negatively correlated data stream, the characteristics are the converse:
  - Sign Bit Region: The MSBs exhibit a very high switching frequency, as sign changes in the data samples occur frequently.
  - Noise Bit Region: The LSBs remain at random toggling levels.

3. No Correlation:
- In a data stream with no correlation, all bit-switching characteristics resemble uniform white noise, where all bits toggle randomly.
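The bit-level behaviour described above can be reproduced with a short Python experiment. The sketch below counts per-bit toggle rates for a positively correlated stream versus an uncorrelated one, using 8-bit two's complement words; the AR(1) correlation coefficient, word width, and sample counts are arbitrary choices for illustration.

```python
import numpy as np

def bit_toggle_rates(samples, width=8):
    """Fraction of consecutive sample pairs in which each bit toggles.
    samples: integers interpreted as `width`-bit two's complement words."""
    words = np.asarray(samples, dtype=np.int64) & ((1 << width) - 1)
    toggles = np.bitwise_xor(words[1:], words[:-1])
    return [float(np.mean((toggles >> b) & 1)) for b in range(width)]

rng = np.random.default_rng(0)
n = 20000

# Positively correlated stream: slowly varying AR(1) process (an oversampled signal)
x = np.zeros(n)
noise = rng.normal(0.0, 5.0, size=n)
for i in range(1, n):
    x[i] = 0.99 * x[i - 1] + noise[i]
correlated = np.clip(np.round(x), -127, 127).astype(np.int64)

# Uncorrelated stream: independent uniform samples over a similar range
uncorrelated = rng.integers(-127, 128, size=n)

print("bit (LSB..MSB) toggle rates")
print("correlated:  ", [f"{r:.2f}" for r in bit_toggle_rates(correlated)])
print("uncorrelated:", [f"{r:.2f}" for r in bit_toggle_rates(uncorrelated)])
# Expected pattern: the correlated stream shows LSBs toggling like white noise,
# a 'grey area' in the middle bits, and a quiet sign-bit region; the
# uncorrelated stream toggles like white noise in nearly every bit position.
```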
1. Sample Frequency: This parameter refers to the rate at which data samples are taken from the analog signal during the sampling process. It determines how frequently new data points are captured and represented in the digital domain. The sample frequency is typically measured in samples per second or hertz (Hz).
2. Data Correlation Factor: The data correlation factor is a measure of the correlation between successive data samples in the digital signal. It ranges from -1.0 to +1.0, where:
- A correlation factor of +1.0 indicates perfect positive correlation, meaning that successive samples are identical.
- A correlation factor of -1.0 indicates perfect negative correlation, where successive samples exhibit an inverse relationship.
- A correlation factor of 0.0 indicates no correlation, implying that successive samples are independent of each other.
3. Sign Bit and Uniform White Noise Regions: These regions are defined based on the characteristics of the data signal's binary representation:
- Sign Bit Region: This region corresponds to the most significant bits (MSBs) of the data signal. It experiences relatively low toggling activity, primarily when there is a change in the sign of the data samples.
- Uniform White Noise Region: This region corresponds to the least significant bits (LSBs) of the data signal. It exhibits frequent toggling, akin to white noise, due to small variations in the data samples.
Data path Module Characterization
and Power Analysis :
• Data path module
characterization and power analysis involve
assessing the behavior and power
consumption of the datapath components
within a digital system.
1. Data path Module Identification
2. Functional Characterization
3. Timing Analysis
4. Power Modeling
6. Correlation Analysis
7. Power Estimation
8. Optimization
9. Monte Carlo Simulation
Monte Carlo Simulation is a computational
technique used to understand the impact of
uncertainty and variability in a system by
repeatedly sampling random inputs and
observing the resulting outcomes. It is named
after the famous Monte Carlo Casino in Monaco,
known for its games of chance, as the method
involves randomness and statistical sampling.
1. Define the Problem
2. Generate Random Inputs
3. Perform Simulation
4. Collect Output Data
5. Analyze Results
6. Draw Conclusions
• Selecting simulation vectors for power analysis
and highlights the importance of accurately
representing the typical conditions of circuit
operation.
• The choice of vectors significantly impacts the
power estimation, particularly in digital circuits
where power dissipation heavily depends on
switching activities.
• The selection of simulation vectors lies with the chip designer, who must ensure that the vectors adequately represent the intended application. While randomly generated vectors may suffice for some scenarios, others require a specific sequence of vectors to accurately capture the circuit's behavior.
•Discusses the stopping criteria for simulation-
based power analysis, drawing from statistical
principles to determine the appropriate sample
size for accurate estimation.
2.6.1 STATISTICAL ESTIMATION OF MEAN:
•Basic Sample Period and Power Samples:
•Mean Estimation Problem
•Trade-off Between Sample Size and Accuracy
There's a trade-off between sample size
(related to computational efficiency) and
accuracy. A smaller sample size may yield less
accurate results, while a larger sample size
increases computational cost without
necessarily gaining meaningful accuracy.
•Central Limit Theorem
According to the central limit theorem, the
sample mean (P) approaches a normal
distribution for large sample sizes (N),
irrespective of the distribution of individual
power samples (Pi).
Accuracy Quantification:
The accuracy of the sample mean (P) in estimating the
true mean (μ) is quantified using a maximum error
tolerance (ε), typically set to values less than 10%. The
goal is to determine the probability that P is within ε of
μ.

Confidence Level:
The probability that P falls within ε of μ is expressed
using a confidence variable (α), which is the
complement of the confidence level (1 - α). A
confidence level of 100% implies absolute certainty,
while higher confidence levels (>90%) are typically
desired for meaningful results.
Relationships Among ε, α, and N
By defining a variable zα/2 based on the confidence level, the minimum sample size N required to achieve the desired confidence can be determined: N ≥ (zα/2 · σ / (ε · μ))².
Z-Distribution:
The value of zα/2 is obtained from a
mathematical table known as the z-distribution
function. It corresponds to critical values for
different confidence levels (α), facilitating the
calculation of sample size.
The above equation requires knowledge of the mean μ and variance σ² of the power samples Pi. However, this knowledge might not always be practical to obtain, because these parameters depend on various factors like the circuit, the simulation vectors, and the sample interval.
The suggestion is to use the sample average and sample variance as estimators. If you have taken N samples P1, P2, ..., PN, you can substitute μ with the sample average p = (P1 + P2 + ... + PN) / N.
• When using these approximations, it's
important to consider the confidence level
and error tolerance. To achieve a desired
confidence level (1−α) and error tolerance E,
you may need to adjust the number of
samples taken. Typically, more samples are
required to achieve higher confidence levels
and smaller error tolerances.
• The distribution that governs their behavior
changes from a normal distribution to a
Student's t-distribution when the sample size
is small or when the population variance is
unknown. Using the t-distribution allows for
more accurate estimation of confidence
intervals, especially when dealing with
smaller sample sizes.
Such a table provides critical values of the t-distribution for various degrees of freedom and significance levels (α). These critical values are used to determine confidence intervals or conduct hypothesis tests.
2.6.2 Monte Carlo Power
Simulation
Monte Carlo power simulation
procedure allows you to iteratively run
simulations until you achieve results
with the desired level of confidence
and error tolerance.
1. Simulate to collect one sample Pi​
2. Evaluate sample mean and
variance
3. Check if inequality is satisfied
4. Repeat from Step 1 if criteria not
met
The Monte Carlo power simulation process can be simplified by using a constant value instead of looking up critical values from the t-distribution table, and by setting a minimum number of samples N for efficiency:
1. Replacing critical values with a constant
2. Setting a minimum number of samples N
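Below is a minimal Python sketch of such a Monte Carlo stopping loop. The `simulate_power_sample` stub is a stand-in for running the logic simulation over one sample period; the error tolerance, confidence level, and minimum sample count are illustrative choices, not values from the text, and SciPy is used only to fetch the t-distribution critical value.

```python
import random
import statistics
from scipy.stats import t as t_dist

def simulate_power_sample():
    """Stand-in for one logic-simulation run over a sample period (returns mW)."""
    return random.gauss(mu=5.0, sigma=0.8)   # hypothetical power distribution

def monte_carlo_power(eps=0.05, alpha=0.10, n_min=30, n_max=100000):
    """Collect samples until the (1 - alpha) confidence half-width is within
    eps of the sample mean, starting the check only after n_min samples."""
    samples = []
    while len(samples) < n_max:
        samples.append(simulate_power_sample())          # Step 1: collect a sample
        n = len(samples)
        if n < n_min:
            continue
        mean = statistics.mean(samples)                   # Step 2: sample mean
        std = statistics.stdev(samples)                   #         and std deviation
        t_crit = t_dist.ppf(1 - alpha / 2, df=n - 1)      # critical value of t-dist
        half_width = t_crit * std / (n ** 0.5)
        if half_width <= eps * mean:                      # Step 3: check inequality
            return mean, half_width, n
    return statistics.mean(samples), None, len(samples)

mean, hw, n = monte_carlo_power()
print(f"Estimated mean power: {mean:.3f} mW (+/- {hw:.3f}) after {n} samples")
```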
Probabilistic Power Analysis Techniques
•Probabilistic power analysis techniques are statistical methods used to estimate the power dissipation of a circuit without exhaustive simulation.
•These techniques take into account various factors such as sample size, effect size, significance level, and variability in the data.
There are several probabilistic methods to estimate power:
1. Analytical Power Calculations
2. Simulation-Based Power Analysis
3. Bootstrap Power Analysis
4. Monte Carlo Power Analysis
5. Bayesian Power Analysis
The application of probability propagation
analysis to transistor networks for power
analysis
•Transistor Networks
•Probability Propagation
Analysis
•Basic Propagation Algorithms
•Gate Signal Probability
•Signal Propagation
•Switching Frequencies
•Power Dissipation
Propagation of Static Probability in
Logic Circuits
The propagation of static probability in logic circuits
involves analyzing how probabilities of logic levels (0 or
1) change as signals traverse through the circuit. This
approach, often used in probabilistic digital circuit
design, accounts for uncertainties in the circuit due to
variations in manufacturing processes, environmental
conditions, and other factors.
1. Initial Probability Assignment: Each input of the circuit
is assigned a static probability representing the
likelihood of it being at logic level 1 (or 0). These
probabilities may be based on statistical analysis,
manufacturing data, or other considerations.
2. Probabilistic Gate Modeling: The behavior of each logic gate in the circuit is modeled probabilistically. Instead of assuming deterministic behavior (i.e., outputs are always 0 or 1), the gate's output probability is calculated based on the probabilities of its inputs.
3. Probability Propagation: The calculated
output probabilities of each gate are
propagated through the circuit. At each stage,
the output probabilities of one set of gates
become the input probabilities for the next set
of gates. This process continues until the
output probabilities for all nodes in the circuit
are determined.
4. Analysis of Probability Distribution: Once the output probabilities of the circuit are obtained, the probability distribution of the final output(s) is analyzed. This may involve calculating metrics such as the mean, variance, or probabilities of particular output values.
5. Impact on Circuit Performance: The
propagated probabilities can be used to assess
the impact of variations on circuit performance
metrics such as power consumption, delay,
and reliability. For example, circuits with higher
probabilities of certain nodes being at logic
level 1 may consume more power due to
increased switching activity.
6. Design Optimization: Designers can use probabilistic analysis to optimize circuit
designs for improved performance under
uncertain conditions. This may involve
adjusting gate sizing, placement, or routing to
minimize the effects of variations on circuit
behavior.
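As a concrete sketch of steps 1–3 above, here is a small Python example (the gate types, netlist, and input probabilities are all made up) that propagates static probabilities through a few gates, assuming statistically independent inputs.

```python
# Static (signal) probability propagation under the independence assumption.
def p_and(*ps):               # P(output=1) of an AND gate
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):                # P(output=1) of an OR gate
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_not(p):                 # P(output=1) of an inverter
    return 1.0 - p

# Hypothetical inputs: probability of each signal being at logic 1
P = {"a": 0.5, "b": 0.3, "c": 0.8}

# Tiny netlist: n1 = a AND b, n2 = NOT c, y = n1 OR n2
P["n1"] = p_and(P["a"], P["b"])
P["n2"] = p_not(P["c"])
P["y"]  = p_or(P["n1"], P["n2"])

for net, prob in P.items():
    print(f"P({net} = 1) = {prob:.3f}")
```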
1. Static Probability and Frequency
Relationship: In the context of the
memoryless random signal model,
static probability refers to the
probability that a signal is at logic
level 1 at any given time. Frequency,
on the other hand, represents the rate
at which a signal transitions between
logic levels over time. There exists a relationship between static probability and frequency: for a memoryless random signal, switching activity is highest when the static probability is near 0.5 and falls off as the probability approaches 0 or 1.
2. Importance of Static Probability
in Power Analysis: Static
probability is crucial in power
analysis because it directly
influences the switching activity
and, consequently, the power
consumption of a circuit.
Understanding the static
probabilities of signals in a circuit
allows for more accurate
estimation of power dissipation.
3. Propagation Model for Static
Probability: The passage presents a
propagation model for static probability
through logic gates, using a two-input
AND gate as an example. In this model,
if the static probabilities of the inputs are P1 and P2 respectively, and the two signals are statistically uncorrelated, the output static probability is P1 × P2. This is because the AND gate outputs a logic 1 only when both inputs are at logic 1. The assumption of uncorrelated input signals is crucial for the correctness of this result.
4. Consideration of Signal Correlation:
While the uncorrelated input signal
assumption holds for certain logic
gates like the AND gate, it may not
apply universally to all gates. Some
gates may exhibit signal correlation,
where the inputs are not statistically
independent. To address this, more
sophisticated propagation models that
consider signal correlation have been
studied. These models offer better
accuracy but require higher
computational resources due to the
increased complexity.
5. Trade-off Between Accuracy and
Complexity: Propagation models that
consider signal correlation provide more
accurate results by capturing the
dependencies between input signals.
However, they come at the cost of
increased computational complexity.
Designers must balance the need for
accuracy with the available
computational resources when choosing
a propagation model for power analysis.
Let us consider an arbitrary Boolean function f(x1, ..., xn) with n input variables. We apply Shannon's decomposition, which breaks down the function with respect to one of its input variables xi. Shannon's decomposition states that any Boolean function can be expressed as the logical OR of two terms: one in which xi multiplies the cofactor fxi (f with xi set to 1), and another in which xi' multiplies the cofactor fxi' (f with xi set to 0).

The decomposition formula is given as:

f(x1, ..., xn) = xi·fxi(x1, ..., xn) + xi'·fxi'(x1, ..., xn)

Where:
- fxi(x1, ..., xn) is the Boolean function obtained by setting xi = 1 in f(x1, ..., xn).
- fxi'(x1, ..., xn) is the Boolean function obtained by setting xi = 0 in f(x1, ..., xn).
Now, let's consider the static probabilities
of the input variables, denoted as P(x1), ...,
P(xn) . We want to find the static
probability of the output of the function
f(x1, ..., xn) .

Since the two sum terms in the decomposition are mutually exclusive (one requires xi = 1 and the other xi = 0, so only one of them can be 1 at a time), we can simply add their probabilities. The static probability of the output being 1 is the probability of it being 1 when xi is 1, plus the probability of it being 1 when xi is 0.

Mathematically, assuming xi is independent of the cofactors, we can express this as:
P(f(x1, ..., xn)) = P(xi)·P(fxi(x1, ..., xn)) + P(xi')·P(fxi'(x1, ..., xn))
Where:
- P(f(x1, ..., xn)) is the static probability of the output of the function f(x1, ..., xn).
- P(fxi(x1, ..., xn)) is the static probability of the output when xi is 1.
- P(fxi'(x1, ..., xn)) is the static probability of the output when xi is 0.

This formula allows us to compute the static probability of the output of an arbitrary Boolean function by considering the static probabilities of its input variables and applying Shannon's decomposition.
In the context of the derivation
described, after applying Shannon's
decomposition to the Boolean function
f(x1, ..., xn) with respect to xi , we
obtain two new Boolean functions: fxi
and fxi'. These new functions do not
contain the variable xi because
Shannon's decomposition isolates the
effect of xi on the original function.
Now, let's consider the static probabilities P(xi) and P(xi'). These probabilities represent the likelihood that the input variable xi is at logic level 1 and at logic level 0, respectively.
Since P(xi) and P(xi') represent the probabilities of mutually exclusive events (either xi is 1 or xi is 0), we have:

P(xi) + P(xi') = 1

This equation reflects the fact that the sum of probabilities of all possible outcomes (either xi is 1 or 0) equals 1, which is the total probability.

So, rearranging the equation, we can express P(xi') in terms of P(xi) as:

P(xi') = 1 - P(xi)

This equation states that the probability of xi being at logic level 0, P(xi'), is equal to 1 minus the probability of xi being at logic level 1, P(xi).
This relationship holds true because
in a binary scenario where xi can
only be either 1 or 0, if the
probability of xi being 1 increases,
the probability of xi being 0
decreases, and vice versa.
Thus, in the context of the recursive application of Shannon's decomposition, P(xi') is computed from P(xi) by subtracting P(xi) from 1, reflecting the complementary relationship between the two probabilities.

Transition Density Signal Model
•The Transition Density Signal Model (TDSM) is a mathematical framework used in digital circuit analysis, particularly in the context of power and timing analysis. It is designed to capture the statistical behavior of signal transitions within a digital circuit.
•The transition density formulation was introduced by Najm for probabilistic analysis of gate-level circuits. It defines a logic signal as a zero-one stochastic process characterized by two parameters: static probability and transition density.
1. Static Probability: This parameter represents the probability that a signal is at logic 1 at any given time, as defined by Equation (3.3). It quantifies the likelihood of the signal being in the high state.
2. Transition Density: Transition density refers to the number of signal toggles (transitions between logic states) per unit time. It captures the rate at which the signal changes its logic level. A higher transition density means more switching activity.
The random signal model, which considers both
static probability and transition density, is a
generalization of the static probability model
discussed earlier. This model allows for a more
comprehensive characterization of logic signals.
In the example provided with two periodic logic
signals, the top and middle signals have
identical static probabilities but different
transition densities, illustrating how these
parameters can vary independently. Similarly,
the middle and bottom signals have both
identical static probabilities and transition
densities, making them indistinguishable under
this formulation.
While the original transition density formulation
assumes a continuous signal model where
transitions can occur at any point in time, it can
Equation (3.16) is referenced for deriving additional parameters such as p10 and p01 from the specified static probability p and transition density T. For instance, given p1 = 0.4 and T = 0.1, p10 = T/(2·p1) = 0.125 and p01 = T/(2·p0) ≈ 0.083 can be derived. These parameters provide further insights into the behavior of the signal and are essential for probabilistic analysis and modeling of gate-level circuits.
Propagation of Transition Density
The focus is on deriving the transition
density of the output signal y based on
the transition density of the input
variables in a Boolean function f(x1, ...,
xn) . The approach assumes a zero gate-
delay model, meaning the output
changes instantaneously with the input.
1. Transition Density Propagation: The transition density D(xi) of each input variable is given. The goal is to find the transition density D(y) of the output signal y.

2. Zero Gate-Delay Model: This model assumes that the output changes instantly with any change in the input. So, when an input xi changes its state, it directly affects the output y if certain conditions are met.

3. Shannon's Decomposition Equation: The equation y = xi'·fxi' + xi·fxi is referenced, where fxi and fxi' are the Boolean functions obtained by setting xi = 1 and xi = 0, respectively.
4. Analysis of Input Transitions: Assuming an input transition from 1 to 0 (xi = 1 to xi = 0), it is examined how this transition affects the output y. Similarly, the effect of a transition from 0 to 1 (xi = 0 to xi = 1) is analyzed.

5. Conditions for Output Transition: To trigger a logic change in y, the Boolean functions fxi' and fxi must have different values when xi = 1 and xi = 0.

6. Two Scenarios: There are two scenarios where a logic change in y can occur due to an input transition:
- xi = 1 to xi = 0 transition
- xi = 0 to xi = 1 transition

These points set the stage for further analysis to determine the conditions under which an input transition triggers an output transition in the Boolean function f(x1, ..., xn). The focus is primarily on identifying the scenarios where the output actually changes.
An input transition at xi affects the output y if and only if the Boolean difference between the two functions fxi' and fxi is equal to 1. This Boolean difference is denoted as ∂y/∂xi, which is a Boolean function derived from f(x1, ..., xn) and does not include the variable xi.
Now, let P(∂y/∂xi) represent the static probability that the Boolean function ∂y/∂xi evaluates to logic 1, and let D(xi) be the transition density of xi. Due to the assumption of uncorrelated inputs, the output's transition density is contributed by the transition density of each input xi. By summing the contributions of all input signals xi, we obtain:
D(y) = Σi P(∂y/∂xi)·D(xi)
Equation (3.25) is referenced for propagating static probability through the Boolean function, while Equation (3.30) is used to propagate the transition density. These equations serve as fundamental theorems for propagating static probability and transition density through any Boolean function f(x1, ..., xn) under the zero gate-delay model. They allow for the computation of P(∂y/∂xj) given P(xj).
Gate Level Power Analysis Using
Transition Density
Probabilistic gate-level power analysis offers an alternative to simulation-based approaches. This method involves propagating transition densities through Boolean functions to estimate power dissipation in combinational circuits:
1. For each internal node y of the circuit, find the Boolean function of the node with respect to the primary inputs.
2. Find the transition density D(y) of each node y = f(x1, ..., xn) using Equation (3.30).
3. Compute the total power from the node capacitances and transition densities.
Basic Idea: The primary inputs' transition densities are propagated to internal nodes. These transition density values are then treated as node frequencies, and the P = C·V²·f equation is applied for power dissipation estimation.
Algorithm for Combinational Circuits:
• The algorithm involves steps to
propagate transition densities
through the circuit.
• Transition density propagation has
been validated using statistical
theory, eliminating the need for
costly time domain logic simulation.
• The method is experimentally observed
to be reasonably accurate for
combinational circuits.
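As an illustration of steps 1–3, here is a small Python sketch for a single 2-input AND gate y = x1·x2 with independent inputs; the probabilities, densities, capacitance, and supply voltage are made-up values. For an AND gate, ∂y/∂x1 = x2 and ∂y/∂x2 = x1, so D(y) = P(x2)·D(x1) + P(x1)·D(x2).

```python
# Transition-density propagation for y = x1 AND x2 (independent inputs).
P = {"x1": 0.5, "x2": 0.3}          # static probabilities (assumed)
D = {"x1": 2e6, "x2": 1e6}          # transition densities in transitions/s (assumed)

# Boolean difference of AND: dy/dx1 = x2, dy/dx2 = x1
D_y = P["x2"] * D["x1"] + P["x1"] * D["x2"]

# Treat D(y) as the node switching rate and apply the capacitive power formula.
C_y = 10e-15                        # node capacitance in farads (assumed)
V_DD = 1.2                          # supply voltage in volts (assumed)
P_y = 0.5 * C_y * V_DD ** 2 * D_y   # each transition dissipates (1/2) C V^2

print(f"D(y) = {D_y/1e6:.2f} M transitions/s, P(y) = {P_y*1e6:.4f} uW")
```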
Advantages over Static Probability
Methods:
• Transition density method is more
accurate than pure static probability
methods as it considers additional
parameters to characterize random
logic signals.
• It is more computation-efficient
compared to event-driven logic
simulation.
Commercial CAD Software: Commercial
software based on this analysis
technique has been developed and
introduced.
Disadvantages:
• Accuracy: The method assumes zero-
delay logic gates and does not
properly model signal glitches or
spurious transitions.
• Ignored Signal Correlations: Signal
correlations at primary inputs are not
considered.
• Limited to Combinational Circuits:
Transition density analysis is not
suitable for sequential circuits.
Extensions to Sequential Circuits:
• Several approaches have been
proposed to extend the analysis
technique to sequential circuits.
• These approaches involve predicting
probabilities and transition densities
of sequential elements' outputs and
propagating them to combinational
logic.
Sequential Signal Probability
Computation:
• Analytical Approach: Approximates
signal statistical quantities using
numerical methods.
• Simulation Approach: Performs high-level simulation to collect the signal statistics.
Signal Entropy
In VLSI (Very Large Scale Integration)
power analysis, entropy theory offers a
unique perspective by treating the
signals within a logic circuit as random
variables
1. Entropy Theory: Entropy is a concept borrowed from information theory, which quantifies the uncertainty or randomness of a system. In the context of VLSI power analysis, entropy theory is applied to understand the randomness or unpredictability of the signals in the circuit.
2. Random Signals: In this approach,
signals within the logic circuit are
treated as random variables. This
means their values are not
deterministic but rather probabilistic,
and they can take on different values
with certain probabilities.

3. Switching Activity: The entropy or


randomness of the signals is directly
related to the average switching
activity of the circuit. Switching activity
refers to how frequently signals change
their values within the circuit.
4. Power Dissipation: Empirical observations
suggest a relationship between the entropy
measures of the system and the power
dissipation of the VLSI circuit. Specifically,
circuits with higher entropy or greater
randomness tend to dissipate more power.
5. High-Level Power Models: Building on these
observations, researchers can develop high-
level power models that relate power
dissipation to entropy measures of the system.
These models provide a way to estimate power
consumption based on the degree of
randomness or unpredictability within the
circuit's signals.
Overall, the application of entropy theory to VLSI power analysis offers a novel approach to understanding and predicting power consumption.
Basics of Entropy
The information content, often denoted as Ci, of an event Ei occurring with probability Pi can be quantified using the logarithm base 2 of the inverse of its probability:

Ci = log2(1 / Pi)

This formula ensures that rare events (those with low probability) contribute more to the overall information content, while common events (those with high probability) contribute less.

To get the total information entropy of the system, you take the probability-weighted sum of the information content of all possible events:

H = Σi Pi·log2(1 / Pi)

This formula gives the entropy of the system in bits.
The entropy of a random variable X, denoted
as H(X), in the context of information theory.
When n is large, computing H(X) directly can
be impractical due to the large number of sum
terms involved.
To simplify the computation, especially when
the probabilities Pj​ are skewed, you can use
the independence of the individual signal bits
Sj​ and approximate H(X) by summing the
entropy of each individual bit. This
approximation assumes that each bit behaves
independently and has its own probability Pj​.
The entropy of a single bit Sj can be calculated using the binary entropy formula:
H(Sj) = Pj·log2(1 / Pj) + (1 − Pj)·log2(1 / (1 − Pj))
where Pj is the probability of bit Sj being 1, and (1 − Pj) is the probability of it being 0.
When all bit patterns are equally likely,
meaning Pj​=0.5 for all j, the maximum
entropy H(X) is achieved, which equals log2​
(m)=n bits. This scenario represents
maximum uncertainty because all outcomes
are equally probable.
However, in general, when the probabilities Pj​
are skewed, H(X) will be lower, indicating less
uncertainty. This is because skewed
probabilities imply that certain outcomes are
more likely than others, reducing the overall
randomness or uncertainty associated with
the random variable X.
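A short Python sketch of this bitwise entropy approximation; the per-bit probabilities are invented for illustration, with LSBs near 0.5 and MSBs skewed.

```python
import math

def bit_entropy(p):
    """Binary entropy H(p) in bits; 0 when p is 0 or 1."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

# Assumed per-bit probabilities of being 1 for an 8-bit signal (LSB first):
bit_probs = [0.50, 0.50, 0.49, 0.47, 0.40, 0.20, 0.08, 0.05]

h_bits = [bit_entropy(p) for p in bit_probs]
H_X = sum(h_bits)   # approximation: H(X) ~= sum of per-bit entropies (independence assumed)

print("Per-bit entropy:", [f"{h:.3f}" for h in h_bits])
print(f"Approximate word entropy H(X) = {H_X:.3f} bits (maximum = {len(bit_probs)} bits)")
```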
Power Estimation Using Entropy
•The concept of entropy in the context of logic
signals is crucial for understanding their
"randomness" and dynamic behavior. Entropy
provides a quantitative measure of
uncertainty or randomness within a set of
signals.

•For n-bit signals, the entropy value reflects


how frequently the signals change states,
which directly influences power consumption
in digital circuits.
•When signals toggle infrequently, the
word-level values they represent tend to
remain unchanged for longer periods,
resulting in a lower occurrence of
different values. This reduced variability
leads to a lower entropy measure.

•On the other hand, when signals switch


states frequently, the word-level values
are more varied and tend to appear with
roughly equal probabilities, resulting in
higher entropy.
•This relationship between entropy and
signal activity forms the basis for using
entropy as a metric for power
estimation.

•High switching activity, indicative of


high entropy, generally corresponds to
increased power consumption due to
the frequent charging and discharging
of capacitances in the circuit.

• Conversely, low switching activity,


indicating low entropy, suggests lower
power consumption.
•Nemani and Najm proposed an entropy-based
power estimation method, leveraging the
correlation between signal entropy and
switching activity to predict power usage.

•This approach involves calculating the entropy


of the signals and using it as a predictor for the
average switching frequency, which in turn is
used to estimate power consumption.

• Another similar formulation was developed,


reinforcing the idea that entropy can serve as a
reliable indicator for power estimation in
digital circuits.

• By quantifying the randomness and switching


behavior of signals through entropy, designers
can make more informed predictions about
•To estimate the power dissipation in a combinational logic circuit with m-bit input X and n-bit output Y, assuming a constant Vdd, we start by considering the switching frequency fi of a node capacitance Ci in the circuit.
•Let N be the total number of nodes in the circuit. The power dissipation P can be expressed using the formula:
P = (1/2)·Vdd²·Σi Ci·fi
Assuming fi = F is constant for all nodes i, and estimating the average switching frequency F from the signal entropies, the equation simplifies to an expression involving H(X) and H(Y):
•This equation shows that power
dissipation is related to the input and
output bit sizes m, n, and the entropy
measures H(X) and H(Y).
•Typically, H(Y) is dependent on the
Boolean functions and the input signals
of the circuit.
•In general, H(Y) ≤ H(X), because the outputs of a logic gate make fewer transitions than its inputs.
•The entropy power analysis method
indicates that circuits with more inputs
and outputs or higher switching
activities (expressed by entropy) result
in greater power dissipation.
•While this method provides relative
accuracy suitable for trade-offs in early
design stages, it is less precise for
absolute power estimation due to
varying implementation details such as
voltage, frequency, process technology,
devices, and cell libraries.
• It is most applicable to large combinational logic circuits with a high degree of signal randomness.
•For a practical application, the entropy
measures H(X) and H(Y) are typically
obtained by monitoring the signals during a
high-level simulation of the circuit or derived
from individual bit probabilities.

•These principles can be applied for early-


stage design trade-offs, such as comparing
different state machine implementations by
their input and output entropies and power
estimates.
