
Chen Et Al. - 2024 - The Dawn of AI-Native EDA Opportunities and Challenges of Large Circuit Models

The document discusses the emergence of AI-native Electronic Design Automation (EDA) through the development of large circuit models (LCMs) that integrate multimodal representation learning for circuit design. It critiques existing AI4EDA approaches for merely augmenting traditional methods without addressing the complexities of circuit data, advocating for a paradigm shift that fully incorporates AI into the design process. The authors propose that LCMs can revolutionize EDA by enhancing design productivity and enabling significant advancements in circuit performance, power, and area optimization.



The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models
Tsung-Yi Ho1 , Sadaf Khan1 , Jinwei Liu1 , Yu Li1 , Yi Liu1 , Zhengyuan Shi1 , Ziyi Wang1 , Qiang Xu1 § ,
Evangeline F.Y. Young1 , Bei Yu1 , Ziyang Zheng1 , Binwu Zhu1 , Keren Zhu1
1 The Chinese University of Hong Kong

Yiqi Chen2 , Ru Huang2, 3 , Yun Liang2 , Yibo Lin2 , Guojie Luo2 § , Guangyu Sun2 , Runsheng Wang2 , Xinming
Wei2 , Chenhao Xue2 , Jun Yang3 , Haoyi Zhang2 , Zuodong Zhang2 , Yuxiang Zhao2 , Sunan Zou2
2 Peking University 3 Southeast University
arXiv:2403.07257v2 [cs.AR] 1 May 2024

Lei Chen4 , Yu Huang5 , Min Li4 , Dimitrios Tsaras4 , Mingxuan Yuan4 § , Hui-Ling Zhen4
4 Huawei Noah’s Ark Lab 5 Huawei HiSilicon

Zhufei Chu6 , Wenji Fang7 , Xingquan Li8 , Junchi Yan9 , Zhiyao Xie7 , Xuan Zeng10
6 Ningbo University 7 Hong Kong University of Science and Technology
8 Peng Cheng Laboratory 9 Shanghai Jiao Tong University 10 Fudan University

[Fig. 1 graphic: each design stage of the EDA flow (Specification, Architecture, HLS, HDL, Netlist, Floorplan, Layout, spanning front-end and back-end design) feeds a modality-specific encoder; the aligned representations form the Large Circuit Model. Illustrated uses include enhancing existing EDA tools (logic synthesis recipe exploration, IR drop prediction, model checking, verification case prioritization, cross-stage QoR estimation) and novel EDA applications (early-stage PPA prediction, spec-guided design generation, multi-stage alignment), built on capturing logic correlation, structure-aware circuit learning, and multiview layout learning.]

Fig. 1: Large Circuit Models: we call upon the creation of dedicated foundation models for circuits, which intricately intertwine computation with structure, unlike other types of data (e.g., texts and images). Specifically, each design stage of the EDA flow is considered a separate modality and requires a specific representation learning strategy to embed the available circuit characteristics. The higher the design level, the more semantics there are to represent; the lower the design level, the more details there are to represent. Central to the appeal of LCMs is their ability to fuse and align disparate representations throughout the design continuum, creating a unified narrative that spans from high-level functional specifications to detailed physical layouts. This unified approach promises to streamline the EDA process, reduce time-to-market, and improve design PPA.


§ The authors in each institution are ordered alphabetically. Contact: [email protected]; [email protected]; [email protected].
Abstract—Within the Electronic Design Automation (EDA) domain, AI-driven solutions have emerged as formidable tools, yet they typically augment
rather than redefine existing methodologies. These solutions often repurpose deep learning models from other domains such as vision, text, and
graph analytics, applying them to circuit design without tailoring them to the unique complexities of electronic circuits. Such an “AI4EDA” approach falls
short of achieving a holistic design synthesis and understanding, overlooking the intricate interplay of electrical, logical, and physical facets of circuit
data. This paper argues for a paradigm shift from AI4EDA towards AI-native EDA, integrating AI at the core of the design process. Pivotal to this
vision is the development of a multimodal circuit representation learning technique, poised to provide a comprehensive understanding by harmonizing
and extracting insights from varied data sources, such as functional specifications, RTL designs, circuit netlists, and physical layouts.

We champion the creation of large circuit models (LCMs) that are inherently multimodal, crafted to decode and express the rich semantics and
structures of circuit data, thus fostering more resilient, efficient, and inventive design methodologies. Embracing this AI-native philosophy, we foresee
a trajectory that transcends the current innovation plateau in EDA, igniting a profound “shift-left” in electronic design methodology. The envisioned
advancements herald not just an evolution of existing EDA tools but a revolution, giving rise to novel instruments of design tools that promise to
radically enhance design productivity and inaugurate a new epoch where the optimization of circuit performance, power, and area (PPA) is achieved
not incrementally, but through leaps that redefine the benchmarks of electronic systems’ capabilities.

Index Terms—AI-native EDA, large circuit models (LCMs), multimodal circuit representation learning, circuit optimization.

1 THE FOUNDATION MODEL PARADIGM

The landscape of artificial intelligence (AI) has been profoundly transformed in recent years by the advent of large foundation models. These models, characterized by their vast scale and general applicability, have demonstrated an uncanny ability to understand, predict, and generate content with a level of sophistication that was previously the exclusive domain of human intelligence.

1.1 The Rise of Foundation Models

Large foundation models represent a significant leap in AI. These models, typically pre-trained on web-scale datasets using self-supervision techniques [1], have been adapted to excel in a wide array of downstream tasks. In the fields of natural language processing (NLP) and computer vision (CV), these models have not only set new benchmarks but have fundamentally redefined the realms of possibility.

In NLP, models like BERT [2] and its derivatives, including RoBERTa [3] and T5 [4], have revolutionized language understanding, especially in the contextual interpretation of text, thereby enhancing complex language-based tasks. Concurrently, the decoder-only GPT series [5] has shown remarkable versatility, excelling in diverse tasks from creative writing to code generation and pointing towards the burgeoning potential of artificial general intelligence (AGI). In the CV area, self-supervised foundation models [6], [7], [8] have achieved competitive performance in image understanding tasks, rivaling fully supervised approaches.

The recent advent of multimodal foundation models has ushered in a new era of possibilities, integrating diverse data types such as text, images, and audio. A pioneering example is the CLIP model [9], which effectively bridges linguistic and visual data through contrastive learning. This innovation has set the stage for generative models like DALL-E [10] and Stable Diffusion [11], which demonstrate the capability to generate intricate images from textual descriptions, seamlessly blending visual and linguistic understanding. Additionally, recently introduced promptable CV systems (e.g., SAM [12]) have exhibited exceptional zero-shot generalization in image segmentation, enabling precise object identification and extraction. The emergence of GPT-4V [13] and Gemini [14] further exemplifies the evolution of AI, seamlessly navigating and synthesizing multimodal information, thereby opening new avenues for innovation across various fields, from creative content generation to complex problem-solving in engineering and design.

Despite these advancements, the field of circuit design has only begun to scratch the surface of what foundation models can offer. This hesitant engagement contrasts starkly with the transformative potential these models hold for this important field.

1.2 The Unique Challenge of Circuit Data

In the realm of circuit design, a notable phenomenon is the inherent similarity of many new designs to past iterations. Despite these similarities, designers frequently face the challenge of recreating or redesigning circuits from scratch, driven by the subtle yet critical nuances required to meet ambitious performance, power, and area (PPA) objectives. This repetitive process highlights the need for a learning solution that can effectively draw from historical successes and failures.

The emergence of AI for electronic design automation (AI4EDA) solutions [15] marks an attempt to integrate machine learning (ML) techniques into circuit design. These advancements represent significant progress but often only augment, rather than redefine, existing methodologies. Typically, AI4EDA repurposes deep learning models from other domains for EDA tasks such as PPA estimation and optimization, verification, or fault detection. However, within the confines of traditional design frameworks, these models act more as individual analytical tools than as integral components of the design process, often failing to fully address the unique complexities of circuit data.

Specifically, the distinctive nature of circuit data poses unique challenges for machine learning. Unlike text, images, or regular graph data, circuit design intricately intertwines computation with structure. Minor structural changes can lead to significant functional impacts, and vice versa. This interdependency renders the task of modeling circuits highly nuanced and complex. Without considering the above, existing AI4EDA solutions frequently fall short of achieving a comprehensive synthesis and understanding of the multifaceted interplay between electrical, logical, and physical aspects of circuit data, which is essential for truly innovative design synthesis.

Recent advancements in AI-native circuit representation learning, such as those presented in [16], [17], have begun to address these unique challenges. The integration of multimodal learning presents a significant opportunity to further enhance their effectiveness. By adopting the principles and capabilities demonstrated by existing foundation models on various types of data, we conceptualize a paradigm shift from AI4EDA to AI-native EDA.
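The contrastive objective that lets CLIP bridge two modalities reduces to a simple symmetric loss, and the same mechanism is what multimodal alignment across circuit representations would build on. Below is a minimal numpy sketch; the function name, toy embeddings, and temperature value are illustrative, not taken from the CLIP implementation:

```python
import numpy as np

def clip_style_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of each matrix embeds the same underlying item in a
    different modality; matched pairs sit on the diagonal of the
    similarity matrix and are pulled together, mismatches apart."""
    # L2-normalize so the dot product is cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature              # (batch, batch)

    def cross_entropy(lg):                        # targets = diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the text->image and image->text directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
aligned = clip_style_loss(a, a + 0.01 * rng.normal(size=(8, 16)))
random_ = clip_style_loss(a, rng.normal(size=(8, 16)))
assert aligned < random_   # aligned pairs score a lower loss
```

Training pushes each matched pair's similarity above every mismatched pairing in the batch, which is all that "bridging" two modalities amounts to at the objective level.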

Pivotal to this vision is the development of sophisticated large circuit models (LCMs). Envisioned as models adept at integrating and interpreting diverse data types specific to circuit design, LCMs could potentially revolutionize the design, optimization, and verification processes of electronic circuits.

1.3 The Feasibility and Promises of AI-Native LCMs

In the world of semiconductor design, the potential for leveraging large circuit models is not just aspirational; it is rooted in a rich heritage of technological evolution.

Decades of research and development have yielded a vast repository of circuit data. Though proprietary barriers exist, there is enough in the public domain [18], [19], [20] to fuel the development of robust, intelligent models. The industry's long history provides data that is richly annotated with domain expertise, offering deep insights into the intricacies of circuit design.

Moreover, the landscape of circuit types, though vast, is marked by commonalities that transcend individual designs. Processors, domain accelerators (e.g., digital signal processors (DSPs) and AI accelerators), communication modules, and other core components display a pattern of design module reuse. Examples of these reusable modules include arithmetic units, various decoders, and cryptographic cores. This consistency provides a predictable pattern—akin to an inductive bias—that is conducive to the application of machine learning models.

Advances in neural network architectures, particularly Transformers [21] and graph neural networks (GNNs) [22], are well-suited to capturing the complex, graph-like structure of circuit schematics. They present an opportunity to transform the intricate web of design elements into actionable insights, a feat previously unattainable. AI advancements from other domains, e.g., the CLIP model with multimodal machine learning capabilities [23] and large language models for code generation [24], further underscore the potential for transformative applications in LCMs. These capabilities could be adapted to address the unique challenges in circuit designs of various forms, enabling more nuanced and comprehensive modeling than ever before.

In summary, while the challenges are nontrivial, the development of LCMs is poised on a solid foundation of historical data, pattern prevalence, and cutting-edge computational techniques. The potential for LCMs to revolutionize the field of EDA is not just a theoretical possibility but a tangible goal, driven by the convergence of historical knowledge and modern AI advancements. By processing and interpreting a diverse array of data sources and formats, including schematic diagrams, textual specifications, register-transfer level (RTL) designs, circuit netlists, physical layouts, and performance metrics, LCMs can facilitate a ‘shift-left’ in the design methodology. This proactive AI-native approach enables the early identification of potential performance issues and design bottlenecks, streamlining the testing and redesign processes, and leading to more informed and efficient development cycles.

1.4 Overview of This Perspective Paper

This paper embarks on a comprehensive exploration into the dawn of AI-native EDA, focusing on the development and application of large circuit models that inherently incorporate multimodal data. Spanning nine sections, the paper delves into the historical evolution of EDA, the current state of AI in this field, and the promising future shaped by LCMs.

Section 2 provides a historical overview of EDA, tracing its evolution alongside the semiconductor industry. It emphasizes how the field has navigated challenges of complexity through abstractions, setting a foundation for understanding the significance of LCMs in this evolving landscape. Next, we discuss the current integration of AI in EDA in Section 3, highlighting how deep learning has been utilized to improve EDA processes.

In Section 4, we introduce AI-native LCMs, illustrating their departure from traditional AI4EDA approaches. It delves into how these models encapsulate the intricacies of circuit design, offering a more comprehensive approach to circuit analysis and even creation. Focusing on the development of unimodal circuit representation learning, Section 5 discusses its critical role in building the foundation for multimodal LCMs. It explores the nuances of this approach in achieving a thorough understanding of circuit data. Then, Section 6 navigates the transition to multimodal integration in LCMs. It discusses the development of techniques to align and integrate representations from different design stages, emphasizing the importance of preserving the original design intent. Section 7 illustrates the potential applications of LCMs through case studies and envisioned scenarios, bridging the gap between theoretical concepts and practical implementations. In Section 8, we explore the application of LCMs in specialized circuit domains, discussing how these models can be adapted to cater to the unique needs of diverse circuit types other than standard digital circuits, including standard cell designs, datapath units, and analog circuits.

Next, we discuss the challenges and opportunities presented by the adoption of LCMs in EDA in Section 9. It highlights issues such as data scarcity and scalability, as well as the potential advancements these challenges can foster. Finally, the paper concludes with a summary of the key insights and a forward-looking perspective in Section 10. It calls for continued collaboration between the AI and EDA communities and suggests future research avenues to further advance the field.

2 HISTORICAL ODYSSEY OF EDA

As we stand on the precipice of this new frontier of AI-native EDA, it is vital to appreciate the historical EDA journey. Understanding the evolution of cutting-edge EDA tools, methodologies, and philosophies will provide invaluable context for the challenges and opportunities that lie ahead.

2.1 Core Objectives and Complexities in EDA

The odyssey of EDA is a chronicle of human ingenuity and technological advancement. It is a story that mirrors the exponential growth of the semiconductor industry, fueled by Moore's Law, and characterized by the ceaseless push for smaller, faster, and more efficient electronic devices. The journey from simple logic circuits to today's billion-transistor integrated circuits (ICs) has necessitated a layered hierarchical design methodology with the help of sophisticated EDA toolsets. This hierarchy, marked by stages such as specification, architecture design, high-level algorithm design, RTL design, logic synthesis, and physical design, allows for incremental refinement of the circuit design, each stage adding a layer of detail, ensuring functionality while striving for optimization.

The journey of EDA is not just marked by the sophistication of its tools but also by the fundamental goals that drive its evolution. Two core objectives have consistently shaped the development of EDA solutions:

• Equivalence and Consistency across Transformations: Ensuring that each transformation—from behavioral descriptions to gate-level implementation and from logical to physical representation—maintains the original design intent is essential. C-RTL equivalence checking, assertion-based verification (ABV), logic equivalence checking (LEC), sequential equivalence checking (SEC), and various types of simulation tools have been indispensable in this regard, providing designers with the assurance that despite the myriad of transformations a design undergoes, the end result is functionally equivalent to the original specifications. This integrity across various stages, including architecture design, logic synthesis, technology mapping, and place-and-route, is the bedrock upon which reliable electronic design is built.

• Optimization of PPA and Other Design Factors: The relentless pursuit of optimizing performance, power, and area is central to EDA. As designs scale and complexities increase, the balance between these three aspects becomes more challenging to achieve. Tools dedicated to PPA optimization employ a variety of techniques, including predictive modeling, heuristic algorithms, and iterative refinement, to squeeze out efficiencies at every level of design. Meanwhile, the traditional PPA triad is no longer the sole focus. With the advent of ultra-deep submicron technologies, new concerns have emerged. Circuit reliability has taken center stage, with issues such as electromigration and thermal effects becoming critical. Manufacturability is another growing concern, as variability in fabrication processes can significantly impact yield and performance.

In the fiercely competitive realm of electronic product development, reducing time-to-market (TTM) is paramount. The rapid evolution of consumer electronics, exemplified by the yearly refresh cycles of smartphones and wearables, underscores the urgency to expedite product launches to capture market share and meet consumer expectations. This pressure significantly impacts the EDA process, where the need for TTM can sometimes compromise design thoroughness, leading to potential flaws. For instance, under the gun to release the next generation of microprocessors, teams may bypass exhaustive verification in favor of meeting launch windows, risking the introduction of bugs into the final product. When such issues are not amendable through engineering change orders (ECOs) [25], they necessitate a costly and time-consuming redesign, further exacerbating time-to-market pressures. Therefore, this cycle highlights the crucial need for EDA solutions that not only streamline design and verification processes but also ensure design accuracy from the outset.

2.2 EDA for Front-End Design

In the 1980s, the growth of the semiconductor sector was hindered by the manual creation of large schematics, significantly limiting design productivity [26]. The narrative of front-end EDA tools is a testament to the field's evolution from the era of hand-drawn schematics to the sophistication of automated logic synthesis. This evolution has been underpinned by the introduction of hardware description languages (HDLs) like Verilog and VHDL, which have become the bedrock for digital design representation, simulation, and verification.

A typical front-end design flow, also known as logic design, is shown in Fig. 2 a), in which the design specification is transformed into a logic netlist. The front-end design flow begins with a design specification, followed by architecture exploration. Subsequently, HDLs are created to translate the design into a form suitable for implementation, typically at the RTL abstraction level. The introduction of hardware construction languages (like Chisel [27]) and C/C++ high-level synthesis (HLS) adds a new dimension to front-end design and offers more flexibility and efficiency in addressing the complexities of modern front-end design.

After RTLs are created or generated from HLS tools, designers first use static analysis tools such as Lint [28] to identify potential errors and then apply various verification techniques, including logic simulation, emulation, and various formal methods (e.g., model checking). These techniques collectively contribute to validating that the RTL design faithfully follows the design specification. The verification and testing processes, spanning various transformations and stages, are integral components of design flows, typically consuming between 60% and 70% of the total engineering effort allocated. This substantial investment underscores their critical role in ensuring the functionality and reliability of circuit designs. Across diverse abstraction levels of circuit designs, a plethora of verification techniques are employed, reflecting the nuanced requirements and challenges encountered at each stage of development. For example, C-RTL equivalence checking rigorously compares RTL implementations against C-based specification models. This verification method, as evidenced by studies such as [29], [30], is frequently applied, particularly in the context of datapath-intensive designs, as highlighted by Hector from Synopsys [31]. Given that circuit designs at this abstraction level primarily encapsulate hardware behavior while abstracting away concrete physical details, theorem provers and SMT solvers emerge as pivotal tools for enhancing verification efficacy [32], [33].

Next, the RTL implementation undergoes the next stage in the design flow, wherein logic synthesis tools have revolutionized the way HDL code is transformed into gate-level representations. Logic synthesis typically involves three main steps: elaboration, logic optimization, and technology mapping. The primary objective of logic synthesis is to transform RTL code into a gate-level netlist that meets specific design constraints while optimizing for power efficiency, maximizing performance, and minimizing the required silicon area, all within an acceptable timeframe. An indispensable aspect of logic synthesis involves conducting logic and sequential equivalence checks between optimized netlists and their initial counterparts, as underscored by studies such as [34], [35], [36]. Furthermore, custom equivalence checking techniques have been tailored to cater to specific circuit design requirements, such as those pertaining to clock-gating [37].

The collective progression of these front-end design and verification tools has not only streamlined the design process but also expanded the realm of what is possible in digital circuit design. As we navigate increasingly complex design landscapes, these tools have become indispensable in the relentless pursuit of innovation and optimization in digital systems.

2.3 EDA for Back-End Design

For modern chip design, the back-end design flow, also referred to as layout design, is depicted in Fig. 2 b), transitioning from a gate-level or generic technology (GTech) netlist to a finalized layout [38].

[Fig. 2 graphic: a) the front-end design flow runs from a gate netlist through technology mapping/DFT, with implementation verification (formal methods, simulation); b) the back-end design flow runs through physical design and parasitic extraction, with physical analysis (sign-off timing and noise, power and IR drop) and physical verification (design rule check, electrical rule check, layout vs. schematic), followed by layout synthesis and tape-out.]

Fig. 2: Typical front-end and back-end design flows.
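Among the sign-off analyses shown in Fig. 2, timing analysis is at heart a longest-path computation over the gate-level DAG: the arrival time at each node is the maximum over its fan-in arrivals plus its own delay. A minimal sketch, with hypothetical gate names and delay values:

```python
# Static timing analysis as a longest-path recursion over a tiny
# gate-level DAG. The netlist, gate names, and delays are an
# illustrative toy, not drawn from any real design.
delay = {"in1": 0.0, "in2": 0.0, "and1": 1.2, "or1": 1.0, "out": 0.3}
fanin = {"and1": ["in1", "in2"], "or1": ["in2"], "out": ["and1", "or1"]}

def arrival(node):
    """Latest signal arrival time at `node`."""
    drivers = fanin.get(node, [])
    at = max((arrival(d) for d in drivers), default=0.0)
    return at + delay[node]

critical = arrival("out")   # delay of the critical path: in -> and1 -> out
assert abs(critical - 1.5) < 1e-9
```

Production sign-off adds interconnect delays, clock constraints, and slack computation on top, but the DAG traversal above is the core operation.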

This intricate process initiates with technology mapping, where a process library is applied to adapt the synthesized gate-level netlist to a specified technology library, with a keen focus on optimizing PPA constraints. To enhance testability for mass production, testability features such as scan chains, built-in self-test (BIST) circuits, and boundary scan are incorporated into the design. The subsequent phase, physical design, is tasked with establishing the chip's physical layout, entailing floorplanning, power delivery network (PDN) design, placement, clock tree synthesis (CTS), and routing.

• Floorplanning: Floorplanning establishes the chip's physical layout by optimizing the placement of major blocks to minimize interconnect lengths and ensure efficient silicon area utilization. It involves strategic arrangement considering timing, power, and thermal constraints to set a foundation for the design.

• Power Delivery Network (PDN) Design: PDN design ensures stable power supply across the chip, aiming to minimize voltage drop and maintain power integrity. The design of power and ground networks is crucial for delivering power efficiently, with considerations for IR drop, current density, and electromigration.

• Placement: Placement optimizes the arrangement of standard cells or IP blocks within the floorplan to enhance performance, power, and area. It strategically positions components to reduce wire length and congestion, and considers timing and thermal impacts, employing algorithms to find an optimal configuration.

• Clock Tree Synthesis (CTS): CTS distributes the clock signal to synchronize the circuit's operations with minimal skew and jitter. Designing a balanced clock distribution network ensures reliable and synchronized performance across the chip.

• Routing: Routing connects the components based on the established placement and netlist, aiming to complete interconnections without design rule violations or signal integrity issues. It optimizes for shortest paths, minimizes crosstalk and delay, and manages layer assignment and congestion.

As chip designs escalate in complexity, the functionalities of back-end EDA tools extend beyond mere layout creation and routing, embracing a multi-faceted optimization challenge. For example, thermal analysis tools empower designers to forecast and address thermal hotspots, guaranteeing the chip's dependable performance across diverse environmental scenarios. Also, various design for yield (DfY) strategies are required to maximize the manufacturing yield by identifying and mitigating potential yield detractors, performing layout adjustments to address process variations, defect probabilities, and other manufacturing imperfections. Advanced DfY tools and methodologies analyze critical areas, apply lithography-friendly design principles, and optimize the layout to enhance robustness against variations in the fabrication process, ensuring higher yields and reliability of the final product [39].

Physical verification stands as a critical final step in the back-end design phase, ensuring that the chip layout adheres to all necessary specifications and standards before proceeding to manufacturing. This process involves an array of checks, including design rule checking (DRC), electrical rule checking (ERC), and layout versus schematic (LVS) verification. DRC is essential for validating the layout against a set of predefined rules to ensure manufacturability, focusing on physical dimensions and spacing between circuit elements to prevent fabrication errors. ERC goes a step further by examining the electrical integrity of the design, identifying issues such as signal integrity and power distribution problems, and ensuring the circuit meets its functional requirements. Lastly, LVS verification confirms that the layout accurately reflects the original schematic design, guaranteeing that the physical representation matches the intended circuit behavior. Together, these verification steps identify and rectify potential layout issues, safeguarding the correctness of the final chip.
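The grid routing described above traces back to Lee's classic maze-routing algorithm: breadth-first wave expansion from the source pin guarantees a shortest path around obstacles whenever any path exists. A minimal sketch on a toy grid (the grid and pin locations are illustrative):

```python
from collections import deque

def maze_route(grid, src, dst):
    """grid: list of strings, '#' = blocked cell.
    Returns the shortest routed wirelength, or None if unroutable."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    queue = deque([src])
    while queue:
        r, c = queue.popleft()
        if (r, c) == dst:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # destination unreachable: routing fails

grid = ["....#",
        ".##.#",
        "....#",
        ".#...",
        "....."]
assert maze_route(grid, (0, 0), (4, 4)) == 8
assert maze_route(grid, (0, 0), (0, 4)) is None  # target cell is blocked
```

A-star routing is the same search biased by a distance estimate to the target, trading Lee's exhaustive wavefront for speed while keeping path optimality under an admissible heuristic.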

[Fig. 3 graphic: standard cell placement (via dynamic programming, reinforcement learning, or satisfiability modulo theories) yields a placement result; standard cell in-cell routing (via A-star and maze routing, integer linear programming, or satisfiability modulo theories) yields a routing result; the legend marks gate, pin, via, metal 1, and OD shapes.]

Fig. 3: A typical standard cell design flow.

In summary, the back-end EDA tools have fundamentally transformed the landscape of chip design, empowering designers to craft complex integrated circuits that house billions of transistors operating in unison on a single chip. As semiconductor technology progresses, the significance of EDA tools in the back-end design phase is poised to grow, continuing to fuel innovation and enhance efficiency in chip design research and engineering practices.

2.4 EDA for Specialized Circuits

Beyond EDA tools for regular digital circuit designs, the field has witnessed a notable specialization in toolsets designed to meet the unique requirements of standard cells, datapath units, and analog circuits. This evolution underscores the maturation of EDA, providing designers with tailored solutions to optimize these fundamental components efficiently. Specialized EDA tools have become indispensable in addressing the nuanced challenges presented by each component type, enhancing the precision and performance of chip designs.

2.4.1 EDA for Standard Cells

Standard cells, the building blocks of digital ICs, follow predefined structures that align with a library's specifications, enabling their [...] provided targeted approaches to in-cell routing, addressing the unique challenges of standard cell design.

2.4.2 EDA for Datapath Circuits

The evolution of datapath circuits, from individual components such as adders, multipliers, and multiply-accumulate (MAC) units to the entire datapath, is a testament to the continuous advancements in EDA technologies. Over the years, EDA tools have evolved to address the increasing complexity and performance demands of these critical components.

Adders: Adders serve as the cornerstone of arithmetic operations in digital circuits. The design of adders, from simple ripple-carry to more advanced carry-lookahead and prefix adders, has significantly benefited from EDA tools. These tools employ optimization algorithms to reduce latency, conserve area, and minimize power consumption, crucial for enhancing the overall performance of digital systems. The capability of EDA tools to simulate various adder configurations allows designers to select the most suitable architecture for specific applications, balancing speed with resource utilization.

Specifically, prefix-tree adders, recognized for their efficiency in parallel carry computation, have seen significant development and optimization through EDA solutions. Early designs focused on basic parallel prefix adders like the Kogge-Stone and Brent-Kung adders, which provided a foundation for understanding the balance between speed and area [44]. Recent advancements have introduced more sophisticated designs such as the Sparse Kogge-Stone and Spanning Tree adders, optimizing for both power efficiency and silicon area [45]. Datapath compilers have become instrumental in navigating the trade-offs between different prefix-tree configurations, employing algorithmic and heuristic methods to select the optimal structure for a given application scenario.

Multipliers: Multipliers are pivotal in performing fast arithmetic computations, crucial for applications ranging from general computing to specialized tasks in signal processing and machine learning. EDA technologies have facilitated the design of high-performance multipliers by exploring innovative architectures like Booth encoding and Wallace tree multiplication.

The Wallace tree technique involves grouping the partial products generated from the multiplication process and then summing these groups in stages, which reduces the overall
reuse across diverse designs. The focus of EDA tools in standard height of the addition tree and, consequently, the propagation
cell design is primarily on automating the layout generation process, delay. This architecture is particularly favored in digital signal
encompassing crucial steps like placement and in-cell routing, as processing (DSP) and graphics processing units (GPUs) where
shown in Fig. 3. rapid mathematical computations are critical. Over the years,
The placement process is dedicated to determining the optimal enhancements to the Wallace Tree architecture have aimed at
transistor locations within a cell to maximize space utilization optimizing its layout to minimize area and power consumption
while maintaining functionality and performance integrity. The while maximizing speed, reflecting the ongoing advancements
common solution algorithms for the placement includes dynamic in EDA tools to meet the evolving demands of semiconductor
program, reinforcement learning, and satisfiability modulo theories. technology.
Innovations in placement strategies, as highlighted in [40], [41], MAC Units: The design of MAC units, essential for digital
have introduced methods to expedite this intricate procedure while signal processing and deep learning applications, has similarly
ensuring routability and design efficiency. In contrast, in-cell benefited from the innovations in EDA tools. The integration of
routing tackles the intricate task of establishing connections within optimized adder designs with efficient multipliers within MAC
the cell, a process complicated by the rigorous area constraints units is critical for achieving high throughput and low latency.
of standard cells. The in-cell routing are usually solved by A-star, EDA tools now utilize analytical models and simulation-based
interger linear programming, and satisfiability modulo theories. methods to explore various MAC unit architectures, including fused
This stage demands specialized routing solutions, distinct from multiply-add (FMA) configurations that perform multiplication and
those applied to broader digital circuits, to navigate the tight addition in a single operation and pipelined designs, to meet specific
confines of cell layouts. Contributions from [42], [43] have performance goals [46].
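The staged partial-product reduction behind the Wallace tree technique described above is easy to model in software. The minimal Python sketch below (an illustration of the idea, not production EDA code) generates one shifted partial product per set bit of the multiplier and compresses the rows with 3:2 carry-save adders until two rows remain:

```python
def wallace_multiply(a: int, b: int) -> int:
    """Multiply two unsigned integers with a Wallace-style carry-save reduction."""
    # 1) Partial products: one shifted copy of `a` for every set bit of `b`.
    rows = [a << i for i in range(b.bit_length()) if (b >> i) & 1]
    # 2) Reduction stages: each 3:2 compressor turns three rows into a
    #    bitwise-sum row and a shifted carry row, until two rows remain.
    while len(rows) > 2:
        nxt = []
        for j in range(0, len(rows), 3):
            group = rows[j:j + 3]
            if len(group) == 3:
                x, y, z = group
                nxt.append(x ^ y ^ z)                           # sum row
                nxt.append(((x & y) | (x & z) | (y & z)) << 1)  # carry row
            else:
                nxt.extend(group)  # 1 or 2 leftover rows pass through
        rows = nxt
    # 3) Final carry-propagate addition of the remaining rows (0, 1, or 2).
    return sum(rows)
```

Each pass of the `while` loop corresponds to one hardware reduction stage; because every stage turns three rows into two, the stage count grows roughly logarithmically with the number of partial products, which is exactly why the tree's height, rather than the operand width, bounds the delay.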
Floating-Point Units (FPUs): Floating-point units are essential for executing arithmetic operations on floating-point numbers, a necessity in applications requiring a wide dynamic range, such as scientific computing, graphics, and machine learning. The evolution of FPUs under the guidance of EDA tools highlights the industry's commitment to addressing the precision, performance, and power efficiency challenges inherent in floating-point operations. Techniques such as pipelining and parallel processing have been integral in enhancing FPU throughput, allowing simultaneous execution of multiple floating-point operations. Advances in EDA methodologies have also facilitated the exploration of novel FPU designs, such as the adoption of FMA units, as in MAC unit designs.

Datapath circuits: Beyond individual components, the design of entire datapath circuits, which combine adders, multipliers, MAC units, and other logic elements, represents a complex challenge addressed by EDA tools. These tools adopt a comprehensive strategy for refining datapath circuits, ensuring seamless integration and peak efficiency among components. As depicted in Fig. 4, the design journey begins by pinpointing a target application and its corresponding architecture, thereby defining a broad and intricate design space. This space might include, for example, CPU workloads such as the SPEC2017 benchmarks or GPU workloads such as matrix multiplication, targeting CPU or GPU architectures accordingly. The design space divides into two principal domains: the application space, outlining application-specific parameters such as dataflow patterns or neural network mapping strategies, and the architecture space, detailing structural and resource parameters, such as CPU issue width or the number of MACs in a neural processing unit (NPU). The intersection of parameters from these domains establishes a "design point", which, after compilation and application mapping, is subjected to thorough evaluation and validation via cutting-edge EDA tools. This rigorous process iteratively explores and assesses new design points until the exploration goals are achieved.

The progression to new design points is typically steered by optimization algorithms, which have advanced significantly. These optimizations fall into two categories. Black-box optimization proceeds without presuppositions about the design space, often employing Bayesian optimization (BO) for its efficacy in exploring datapath design spaces, alongside other black-box methods such as simulated annealing (SA), genetic algorithms (GA), and hill climbing. Conversely, optimizations that incorporate domain knowledge demand an in-depth understanding of the architecture, aiming for enhancements through precise, targeted adjustments to the datapath. Techniques such as bottleneck analysis have been shown to outperform conventional black-box approaches by focusing on specific areas for improvement within the datapath architecture.

2.4.3 EDA for Analog Circuits

The design process for analog and mixed-signal ICs differs significantly from digital design, reflecting the unique challenges and complexities of analog circuits.

Fig. 5 illustrates a typical analog IC design flow, starting from a detailed set of circuit specifications covering area, power, and performance requirements. The front-end design phase is crucial, establishing the pre-layout circuit netlist that defines the circuit's functionality via meticulous topology design and device sizing. This phase sets the foundation for the circuit's operational features and optimization criteria.

Moving to the back-end, attention turns to physical layout implementation. Analog physical design, while including placement and routing stages similar to digital methods, requires a more detailed approach due to analog circuits' sensitivity to parasitics. This phase incorporates considerations for parasitic effects, component matching, and other layout-dependent factors essential for preserving the circuit's integrity and performance [47].

A major challenge in analog design is performance optimization, marked by its nonlinearity and the lack of closed-form functional expressions. Despite these obstacles, the EDA community has significantly advanced the automation of analog IC design over the years. These efforts span areas such as topology selection and exploration [48], analog sizing [49], and analog placement and routing [50].

In summary, the journey of specialized circuit design encapsulates a dynamic interplay of art and science. As technologies advance and design requirements become more stringent, the role of EDA tools in facilitating efficient, accurate, and innovative design solutions continues to be of paramount importance.

3 AI for EDA: State-of-the-Art

The prowess of deep learning, particularly its capability to discern patterns from historical design data, offers promising enhancements to EDA processes. This modern thrust is propelled by an ambition to harness the extensive repository of design knowledge accumulated across decades to drive superior and more efficient design methodologies.

3.1 Supervised Learning in EDA

The utilization of supervised learning in EDA represents a significant stride towards integrating AI into the optimization and estimation of design objectives. This subsection categorizes supervised AI4EDA solutions by their application stage within the standard design flow, highlighting seminal works in each category for a focused overview. For an exhaustive review, references such as [15], [51] offer comprehensive surveys on the subject.

3.1.1 Pre-RTL ML Methods

At the architecture level, supervised ML methods diverge into two primary categories: ML for rapid system modeling and ML as a design methodology.

• ML for Fast System Modeling: This approach employs ML to quickly estimate performance and power metrics of circuits and systems. Notable examples include the work by Joseph et al. [52] and Ithemal [53], which apply linear and recurrent neural network (RNN) models, respectively, for CPU performance modeling. McPAT-Calib [54] enhances CPU power modeling by integrating ML models with the analytical tool McPAT for calibration. PANDA [55] advances this approach by reducing training data requirements and eliminating the dependency on McPAT for power modeling. Boom-Explorer [56] automates design space exploration for the RISC-V BOOM microarchitecture. Beyond CPUs, XAPP [57] predicts GPU performance by analyzing dynamic and static properties of single-thread CPU code, while Wu et al. [58] model GPU power by examining kernel scaling behaviors. SVR-NoC [59] focuses on predicting latency and waiting times in mesh-based network-on-chips (NoCs).
• ML as a Design Method: In microarchitecture design, ML techniques facilitate innovative solutions. Shi et al. [60] employ an LSTM model to derive insights from historical program counters for cache replacement using an SVM-based predictor. Pythia [61] reimagines prefetching as a reinforcement learning challenge, while Hermes [62] leverages ML to predict off-chip load request outcomes. Additional applications include task allocation [62], power management [63], and resource management for CPUs [64] and AI accelerators [65].

Fig. 4: A typical datapath circuit design flow. (A target application and target architecture define an application space and an architecture space; design points drawn from these spaces are compiled and mapped, quickly evaluated, and iteratively explored until the exploration goal is satisfied, after which the chosen design undergoes detailed validation.)

Fig. 5: A typical analog IC design flow. (From the circuit specification, the front-end performs topology design and circuit sizing; the back-end then performs placement and routing to produce the routed layout.)

In high-level synthesis (HLS), the application of ML models for rapidly estimating design metrics has become increasingly prevalent. For instance, Dai et al. [66] focus on timing and resource usage, Pyramid [67] estimates throughput, Ustun et al. [68] look at operation delay, Zhao et al. [69] consider routing congestion, and Lin et al. [70] dedicate their efforts to power consumption analysis. These studies underscore the versatility of ML in covering a broad spectrum of design metrics, highlighting its capacity to provide comprehensive insights early in the design process.

Moreover, ML's role extends to facilitating design space exploration (DSE) in HLS, exemplified by the work of Ustun et al. [68], Liu et al. [71], and Meng et al. [72], who implement active learning strategies to navigate the DSE, using predictive ML models as stand-ins for actual synthesis runs. This approach allows a more efficient evaluation of design alternatives without the need for exhaustive synthesis. Additionally, contributions by Kim et al. [73], Mahapatra et al. [74], and Wang et al. [75] demonstrate the integration of ML with traditional optimization algorithms, enhancing their efficacy in navigating complex design spaces. Sun et al. [76] introduced a novel approach using correlated multivariate Gaussian process models to capture the intricate interdependencies among multiple objectives across various design fidelities. Yu et al. [77] proposed the IT-DSE framework, which leverages a surrogate model pre-trained on historical design data to refine the search process, illustrating how accumulated design knowledge can be effectively reused to optimize new projects.

In the realm of tensor computations, HASCO [78] employs ML for DSE. This methodology optimizes both software programs and hardware accelerators, showcasing ML's capacity to bridge the gap between software and hardware domains to achieve optimized system performance.

3.1.2 RTL-Stage ML Methods

At the RTL stage, innovative ML solutions have emerged to predict PPA without conducting logic synthesis. Initial attempts, such as SNS by Xu et al. [79] and the work by Sengupta et al. [80], employ a methodology where the RTL code is converted into an abstract syntax tree (AST) format, from which features are extracted to forecast the design's PPA. Subsequent advancements, including SNS-v2 [81] and MasterRTL [82], claim enhanced accuracy compared to earlier efforts, showcasing the rapid progress of ML applications in RTL analysis. Additionally, there has been a focused effort on applying ML to precise timing or logic estimation [83], [84].

Power modeling at the RTL stage has also attracted considerable attention, in two primary categories: design-time power estimation and runtime on-chip power modeling. For design-time estimation, PRIMAL [85] stands out for offering per-cycle power evaluations tailored to each target design, alongside other notable ML-based approaches [86], [87]. For runtime power modeling, DEEP [88] introduces an efficient on-chip model that incorporates low-overhead hardware design, utilizing ML to identify power-correlated RTL signals, or "power proxies". This method, along with other ML-based on-chip power modeling solutions such as [89], [90], demonstrates the potential of ML in creating dynamic power models that adapt to real-time conditions. Moreover, APOLLO [91] presents a versatile solution applicable to both design-time and runtime scenarios. Simmani [92] and earlier power modeling work [93] focus on fast power emulation on FPGAs and other platforms, highlighting the broader applicability of ML methods in facilitating efficient power analysis during the design phase.
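The "power proxy" idea above can be illustrated with a toy model: rank candidate RTL signals by how strongly their per-cycle toggle counts correlate with measured power, then fit a small linear model over the selected signals. The sketch below is a generic illustration with fabricated signal names and data; it is not the actual methodology of DEEP or APOLLO.

```python
import statistics

def select_power_proxies(toggles, power, k=2):
    """Rank candidate RTL signals by absolute Pearson correlation between their
    per-cycle toggle counts and measured power; return the top-k 'power proxies'."""
    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0  # constant signals get correlation 0
    ranked = sorted(toggles, key=lambda s: abs(corr(toggles[s], power)), reverse=True)
    return ranked[:k]

def fit_linear_power_model(toggles, power, proxies):
    """Least-squares fit of power ~ w . toggle_counts + bias over the chosen
    proxies (normal equations + Gaussian elimination; fine for a few proxies)."""
    rows = [[float(toggles[s][t]) for s in proxies] + [1.0] for t in range(len(power))]
    n = len(proxies) + 1
    # Normal equations: A w = c, with A = R^T R and c = R^T p.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * p for r, p in zip(rows, power)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    w = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        w[i] = (c[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return dict(zip(proxies + ["bias"], w))
```

A runtime on-chip model would then tap only the selected proxy signals in hardware and evaluate the tiny linear model each sampling window, which is what keeps the monitoring overhead low.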
In the realm of RTL testing and verification, Bayesian networks, as explored by Fine et al. [94], offer a probabilistic model-based approach for coverage-based test generation, underscoring the potential of ML in optimizing test planning. Design2Vec [95] advances this further by learning semantic abstractions of RTL designs, facilitating functionality prediction and efficient test generation that notably shortens verification cycles. Katz et al.'s [96] decision-tree-based method for learning microarchitectural behaviors exemplifies ML's utility in enhancing test stimuli quality.

3.1.3 Netlist-Stage ML Methods

Within the netlist stage, supervised learning methods have been leveraged to address a spectrum of challenges including logic synthesis, quality-of-results (QoR) prediction, verification support, and security concerns.

ML-based models have been particularly effective in assessing synthesis quality and influencing the optimization process. For instance, LSOracle [97] utilizes ML to determine the most appropriate optimizers for various logic networks, thereby enhancing overall synthesis outcomes. Yu et al. [98] propose to classify and select among multiple random synthesis flows by their quality, subsequently focusing on the most efficacious ones. Their further research [99] extends to evaluating the expected delay and area of synthesis flow candidates, offering a data-driven approach to guide synthesis decisions.

More recent advancements, such as AlphaSyn [100], integrate Monte Carlo tree search with tailored learning strategies for area reduction, showcasing the potential of combining ML with heuristic search techniques for synthesis optimization. Additionally, SLAP [101] targets the enhancement of design timing by identifying and utilizing candidate cuts that lead to improved synthesis results during technology mapping. Their subsequent work [102] further demonstrates the ability of ML models to pinpoint post-routing timing-critical paths, focusing technology mapping efforts on these areas to minimize delays. DeepGate2 [17] develops a pre-trained model that predicts the behavioral correlation of logic gates in netlists and prioritizes the SAT-sweeping process to accelerate the fraig optimization operation.

Following logic synthesis, innovative machine learning solutions are being developed to foresee the post-physical-design quality of previously unseen circuit netlists. Tools like Net2 [103] pave the way by predicting wirelength and timing information, effectively capturing the implications of placement on the netlist. GRANNITE [104] advances this further by propagating RTL toggle rates down to the gate-level netlist, aiming for rapid and accurate average power estimation. Similarly, GRAPSE [105] evaluates average power based on unoptimized and unmapped netlists, showcasing improvements in both the speed and precision of power estimation. Recently, DeepSeq [106] learns a generic sequential netlist representation that accurately embeds switching activity behavior and predicts dynamic power.

Moreover, ML methods have shown exceptional prowess in deriving high-level abstractions from bit-blasted netlists, unlocking new potential across various domains within EDA. These high-level abstractions are instrumental in enhancing functional verification, logic minimization, datapath synthesis, and the detection of malicious logic within circuits. For instance, tools like ReIGNN [107] and GNN-RE [108] utilize ML for reverse engineering tasks, such as identifying state registers and deciphering the functionality of subcircuits. Additionally, ABGNN [109] leverages graph neural networks to delineate the boundaries of arithmetic blocks in flattened gate-level netlists, while GAMORA [110] employs GNNs to infer high-level functional blocks from gate-level data. The success of these methodologies is largely attributed to the capacity of GNNs to discern intricate structural patterns and relationships within netlists, underscoring the transformative impact of ML in enhancing the efficiency and intelligence of EDA processes.

3.1.4 Layout-Stage ML Methods

The layout stage is a crucial phase where ML methods have been increasingly applied to predict or optimize design metrics such as wirelength, routability, timing, and IR drop.

ML for Placement-Stage Enhancements: The placement stage, which determines the optimal locations of macros and standard cells in the layout, is pivotal for achieving the desired design metrics. Early applications of ML aimed to augment traditional placement strategies. PADE [111] incorporates support vector machines (SVMs) and neural networks for datapath extraction and evaluation, facilitating datapath-aware placement strategies. DREAMPlace, developed by Lin et al. [112], casts the placement problem as one analogous to training a neural network, accelerating global placement by harnessing GPU computing. Building on DREAMPlace, Agnesina et al. [113] apply multi-objective Bayesian optimization to macro placement design space exploration, demonstrating the potential of ML in enhancing macro-placement outcomes.

ML also assists in predicting design metrics of the later routing phase, benefiting both iterative refinement and early-stage optimization. Many studies have explored early-stage routability prediction. RouteNet [114] uses a CNN to forecast post-routing design rule violations (DRVs), thus avoiding difficult-to-route placements. Another study [115] guides macro placement based on predicted routability. Chang et al. [116] introduce neural architecture search (NAS) for the autonomous development of routability prediction models, eliminating the need for manually designed ML models. Pan et al. [117] propose a federated-learning-based approach to routability evaluation, addressing data privacy concerns. To achieve better routability prediction performance, Zheng et al. propose the multimodal neural network Lay-Net [118], which aggregates both layout and netlist information. The ultimate purpose of routability prediction is to assist routability optimization. Liu et al. [119] incorporate a fully convolutional network (FCN)-based routability prediction model into the DREAMPlace framework, using it as a penalty factor to explicitly optimize for routability. PROS [120] introduces a routing congestion predictor as a plug-in for commercial placers, effectively adjusting cost parameters to mitigate congestion. Moreover, Zheng et al. [121] develop LACO, a look-ahead mechanism designed to address the distribution-shift problem in congestion modeling.

Timing is another important placement metric. The field of pre-routing timing prediction at the placement stage has witnessed a range of modeling approaches leveraging various features and machine learning techniques. Studies by Barboza et al. [122] and He et al. [123] implement tree-based methods with careful manual feature extraction. TF-Predictor [124] employs Transformers to treat timing paths as sequences, while Guo et al. [125] devise a customized GNN inspired by static timing analysis mechanisms. Additionally, recent work by Wang et al. [126] addresses the restructuring of netlists during timing optimization, integrating graph data from netlists with layout image information through multimodal fusion. Moreover, Liang et al. [127] focus on crosstalk prediction, exploring various machine learning models for this purpose. To reduce turn-around time at the pre-routing stage, Liu et al. [128] propose a concurrent learning-assisted early-stage timing optimization framework called TSteiner, which refines Steiner points based on gradients obtained from a GNN-driven timing evaluator.
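Most of the pre-routing predictors above start from cheap geometric features computed on the placement alone. The snippet below sketches two classic examples of such features, half-perimeter wirelength (HPWL) and a first-order distributed-RC (Elmore-style) wire delay; it is a generic illustration of the kind of input these models consume, not the feature set of any specific paper cited here.

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net: the semi-perimeter of the
    bounding box of its pin coordinates (x, y)."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets):
    """Sum of HPWL over all nets of a placement: a standard cheap proxy
    (and lower bound) for routed wirelength."""
    return sum(hpwl(pins) for pins in nets.values())

def elmore_wire_delay(length, r_per_um, c_per_um):
    """First-order Elmore delay of a uniform distributed RC wire,
    0.5 * R_total * C_total, which grows quadratically with length."""
    return 0.5 * (r_per_um * length) * (c_per_um * length)
```

The quadratic growth of the Elmore term is one reason purely geometric proxies degrade on long nets, and part of why learned models that also see netlist structure or layout images tend to predict post-routing timing more accurately.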
ML for Sign-Off Enhancements: During the routing and sign-off stages, the precision of sign-off timing, especially under path-based static timing analysis (PBA), becomes crucial. However, the PBA process is time-consuming, which has motivated machine learning models that predict path-based timing from quicker graph-based analysis (GBA) results. The pioneering work by Kahng et al. [129] was instrumental in predicting PBA from GBA using carefully engineered features and a tree-based model. Subsequent studies, such as [124], [130], have delved into various machine learning models, including Transformers and GNNs, to enhance the accuracy of GBA-to-PBA prediction.

Additionally, IR-drop analysis is a critical component of the sign-off stage. Several studies have investigated rapid IR-drop estimation using machine learning, targeting either static or dynamic analysis to cater to different requirements. For instance, works like IncPIRD [131] and XGBIR [132] concentrate on static IR-drop analysis, whereas studies such as [133] target dynamic IR-drop analysis.

ML for Manufacturability Enhancements: In the field of design for manufacturing (DFM), ML has become pivotal for bolstering the reliability of lithography and manufacturing processes, with layout patterns often analyzed as images. Studies like GAN-SRAF [134], GAN-OPC [135], Develset [136], and L2O-ILT [137] use various ML methods to improve mask synthesis printability. Other works, such as those by Watanabe et al. [138], Ye et al. [139], Lin et al. [140], and Chen et al. [141], focus on lithography modeling to simulate printed patterns from mask clips. For identifying layout patterns prone to printing failures such as shorts or opens, ML-enhanced lithography hotspot detection has been explored in various studies. For example, Yang et al. [142] propose extracting layout features with the discrete cosine transform and using a CNN architecture for hotspot detection; performance is further improved with a proposed bias learning algorithm that counters the imbalanced dataset. Inspired by the object detection problem in computer vision, Chen et al. [143] propose detecting multiple hotspots within large layouts simultaneously. In [144], a binarized neural network is utilized to speed up the hotspot detection flow, and a new network architecture based on residual networks achieves higher detection accuracy and performance. Additionally, ML contributes to yield estimation and analysis, as seen in the works of Ciccazzo et al. [145], Nakata et al. [146], and Alawieh et al. [147].

3.1.5 Cross-Stage ML Methods

In addition to stage-specific applications, ML4EDA has significantly impacted the broader task of design flow tuning, garnering substantial interest.

Kwon et al. [148] introduce a novel approach that blends tensor decomposition with regression analysis to recommend parameters for both the logic synthesis and physical design stages, demonstrating ML's capability to streamline design parameterization. FIST [149] utilizes a clustering strategy to automate the adjustment of flow parameters, aiming for enhanced design quality. Furthermore, PTPT [150] presents a multi-objective Bayesian optimization framework equipped with a multi-task Gaussian model, significantly improving the efficiency of design flow tuning.

Verification, a critical component throughout the design process, has also seen the integration of ML to validate circuit design correctness. Cho et al. [151] propose an efficient lithography-aware router, which moves lithography verification to the routing stage, effectively enhancing the quality of the printed layout.

3.2 Reinforcement Learning in EDA

Reinforcement learning (RL) in EDA has emerged as a powerful method for navigating the expansive solution spaces inherent in logic synthesis and physical design, often uncovering innovative solutions that surpass traditional, intuition-based approaches. Innovations like Synopsys.ai [152] underscore this trend, showcasing AI-driven methodologies that enhance PPA metrics across the design spectrum.

In logic synthesis, Liu et al.'s PIMap framework [153] exemplifies the application of RL by optimizing LUT-based FPGAs through graph partitioning and iterative selection of synthesis operations, leveraging parallelization for efficiency gains. FlowTune, introduced by Yu et al. [154], employs a multi-stage multi-armed bandit (MAB) strategy to constrain the search space and streamline the synthesis process. Pei et al.'s AlphaSyn [100], utilizing a domain-specific Monte Carlo tree search (MCTS), and Zhu et al.'s approach [155], framing logic synthesis as a Markov decision process (MDP) with a graph convolutional network (GCN), both illustrate the capacity of RL to thoroughly explore synthesis strategies. DRiLLS by Hosny et al. [156] and subsequent works such as that of Peruvemba et al. [157] further extend this exploration, introducing constraints and optimization targets into the RL models to fine-tune synthesis outcomes. RL has also been applied to logic optimization challenges. For instance, Haaswijk et al. [158] and Timoneda et al. [159] leverage policy gradient methods and GCNs to optimize majority-inverter graphs (MIGs), showcasing RL's adaptability to various logic structures.

In physical design, the application of RL ranges from automating chip floorplanning, as demonstrated by Mirhoseini et al. [160], to minimizing area and wirelength in floorplanning processes such as GoodFloorplan [161]. Agnesina et al.'s [162] use of RL to tune physical design flows for improved PPA metrics, and RL-Sizer by Lu et al. [163] for gate sizing, highlight RL's potential to refine physical design processes, including timing optimization [164] and mask optimization in the RL-OPC process [165]. For clock tree synthesis (CTS), research efforts are directed toward predicting the quality of the clock network and enhancing timing optimization by leveraging clock skew. GAN-CTS [166] employs a conditional generative adversarial network (GAN) combined with reinforcement learning for predicting and optimizing CTS outcomes.

3.3 Leveraging Large Language Models in EDA

The integration of generative AI, particularly large language models (LLMs), into IC design is emerging as a transformative trend. By utilizing proprietary datasets, IC design companies can develop AI assistants to enhance and expedite the design process. These tools, capable of providing in-depth insights, automate and refine traditionally manual tasks such as design conceptualization and verification. Consequently, a growing body of research explores the application of LLMs in EDA, tackling a broad spectrum of tasks including RTL code generation, task planning, script generation, and bug fixing. While still in the early stages, these studies underscore the profound potential of LLMs to improve the efficiency and efficacy of EDA tools.
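As a concrete flavor of how such LLM-based flows are wired together, the sketch below shows two mundane but near-universal steps: wrapping a design specification in a prompt and extracting the Verilog from the model's markdown-style reply before it is handed to a simulator or synthesis tool. The prompt wording and helper names are hypothetical illustrations, not the harness of any benchmark or tool cited in this section.

```python
import re

# Markdown code fence, built programmatically so this example stays readable.
FENCE = "`" * 3

# Hypothetical prompt template; the wording is illustrative only.
PROMPT_TEMPLATE = (
    "You are a hardware design assistant.\n"
    "Write synthesizable Verilog-2001 for the module described below.\n"
    f"Reply with a single {FENCE}verilog code block and no other text.\n"
    "\n"
    "Specification:\n"
    "{spec}\n"
)

def build_prompt(spec: str) -> str:
    """Wrap a natural-language design specification in the prompt template."""
    return PROMPT_TEMPLATE.format(spec=spec.strip())

def extract_verilog(reply: str) -> str:
    """Pull the first fenced code block out of an LLM reply; fall back to the
    raw reply if the model ignored the formatting instruction."""
    pattern = FENCE + r"(?:\w+)?[ \t]*\n(.*?)" + FENCE
    match = re.search(pattern, reply, re.DOTALL)
    return (match.group(1) if match else reply).strip()
```

A real flow would follow `extract_verilog` with compilation and simulation against a golden testbench, which is essentially how benchmarks in this space score syntax and functional correctness.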
This section delves into the use of LLMs for RTL code generation, a key area of focus. It categorizes the research into LLM-aided RTL design generation and verification. Additionally, we explore LLM applications in generating EDA scripts and high-level architecture design.

3.3.1 RTL Generation through LLMs

The advent of large language models has ushered in a new era for RTL code generation, offering solutions with the potential to redefine traditional approaches.

Early explorations in this domain primarily focused on evaluating models against simple design tasks, hindered by the absence of standardized benchmarks. This challenge has recently been addressed with the introduction of comprehensive benchmarks such as RTLLM [167] and VerilogEval [168], which facilitate a more robust comparison of LLM capabilities across complex design tasks. RTLLM stands out by providing an open-source benchmark with thirty detailed design tasks, accompanied by ground-truth RTL code for functionality verification. It emphasizes three core objectives, syntax correctness, functional accuracy, and design quality, and showcases a significant leap in performance through innovative prompt engineering techniques such as self-planning. Similarly, VerilogEval expands the evaluation framework by gathering Verilog code from diverse sources to construct over 100 test cases. Its approach of collecting additional RTL code for model training demonstrates performance comparable to advanced models such as GPT-3.5, yet its training data and model remain unreleased to the public.

Commercial LLMs are utilized for RTL generation, with initial

specifications with golden RTL implementations, offering a robust framework for evaluating assertion generation. Furthermore, LLMs have achieved success in solving the Boolean satisfiability (SAT) problem [179], which can be applied to verify arithmetic circuits.

Security Verification Leveraging LLMs: Security validation, critical in identifying and mitigating common weakness enumerations (CWEs), has also benefited from LLM integration. Ahmad et al. [180] demonstrate the capacity of LLMs to repair hardware security bugs, provided the bug's location is known. Further research includes leveraging ChatGPT to recommend secure RTL code [181] and employing LLMs in hardware security assertion generation [182]. The latter develops an evaluation framework and benchmark suite encompassing real-world hardware designs, illustrating LLMs' potential to contribute significantly to security validation efforts.

3.3.3 EDA Script Generation and Architecture Design

The versatility of LLM-based solutions in EDA also extends to tasks such as EDA script generation and high-level architectural design.

EDA Script Generation: ChatEDA [183] introduces an LLM-based agent designed to facilitate EDA tool control using natural language, offering an alternative to traditional TCL scripts. The agent supports a range of operations from RTL code to the graphic data system version II (GDSII) format, encompassing automated task planning, script generation, and task execution, making EDA tools more accessible and efficient.

Architectural Design: GPT4AIGChip [184] leverages LLMs to generate C code for AI accelerator high-level synthesis. Similarly,
attempts applying GPT-2 for code completion showing promising Yan et al. [185] examine the use of LLMs in optimizing compute-in-
results [169]. Subsequent developments have introduced tools like memory (CiM) DNN accelerators, showcasing the model’s potential
ChipGPT [170] and AutoChip [171], which leverage GPT-3.5 to in enhancing computational efficiency. Further extending the scope,
refine code generation through prompt engineering and feedback Liang et al. [186] delve into quantum architecture design, exploring
loops, further reducing the need for human intervention. Chip- the frontiers of quantum computing. SpecLLM [187] contributes
Chat’s [172] achievement in designing a microprocessor with GPT- to this growing body of work by providing a dataset of architecture
4 underscores LLMs’ potential to autonomously generate hardware specifications at various abstraction levels, investigating LLMs’
description languages. capabilities in both generating and reviewing these specifications.
Recently, the shift towards fine-tuning open-source LLMs
presents a viable alternative for customized model development,
addressing privacy concerns in VLSI design. Projects like Chip- 3.4 AI for Specialized Circuits
NeMo [173], RTLCoder [174], and BetterV [175] have demon- The advent of AI4EDA also presents a unique opportunity to
strated significant advancements, employing domain adaptation redefine the design and optimization of specialized circuits,
techniques and automated training dataset generation to enhance including standard cells, datapath components, and analog circuits.
LLM efficiency and performance for RTL code generation.

3.3.2 Enhancing Verification with LLMs 3.4.1 AI for Standard Cells


The application of LLMs extends beyond RTL code generation to The application of AI in standard cell design, particularly in
the verification processes. These models assist in both functional placement and routing, presents a unique set of challenges due
correctness and security analysis, showcasing their versatility and to their high density and strict routability requirements. An AI-
depth in enhancing design validation. assisted approach, utilizing reinforcement learning, has been shown
Functional Verification through LLMs: LLMs have made to improve placement sequences and routability, offering better wire
significant strides in functional verification by translating natural length performance [188]. Additionally, RL methods have been
language specifications into SystemVerilog assertions (SVAs). used to address DRC violations post-routing [189], simplifying the
This process ensures that RTL implementations adhere to their routing process and enabling the use of A-star or maze routing for
intended specifications. Notably, [176], [177] leverage human- optimal solutions. Machine learning techniques have also facilitated
written specification sentences alongside RTL designs to generate the adaptation of DRC rules, easing the migration of standard cell
precise SVAs. AssertLLM [178] takes a proactive approach by layouts across technology nodes [190]. A notable area for AI
generating assertions directly from comprehensive specification application is in the evaluation of standard cell layouts, where
documents, even before the RTL design phase. This method is machine learning models can rapidly assess performance without
complemented by a benchmark set that pairs natural language the need for detailed simulations.
12
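Tools such as ChipGPT [170] and AutoChip [171] couple the model to a compile-and-repair feedback loop of the kind described above. A minimal sketch of that loop follows; `query_llm` and `compile_verilog` are hypothetical caller-supplied stand-ins for a model API and a compiler/simulator invocation, not APIs of the cited tools.

```python
def generate_rtl_with_feedback(spec, query_llm, compile_verilog, max_iters=5):
    """Iteratively refine LLM-generated RTL using tool feedback.

    `query_llm` and `compile_verilog` are caller-supplied stand-ins for a
    model API and a compiler invocation; neither name comes from the
    cited tools.
    """
    prompt = f"Write synthesizable Verilog for: {spec}"
    for _ in range(max_iters):
        rtl = query_llm(prompt)
        errors = compile_verilog(rtl)  # empty list means it compiled cleanly
        if not errors:
            return rtl
        # Feed the tool's error messages back into the next prompt.
        prompt = ("The following Verilog failed to compile:\n" + rtl +
                  "\nErrors:\n" + "\n".join(errors) + "\nPlease fix the code.")
    raise RuntimeError("no passing RTL within the iteration budget")
```

The same skeleton extends naturally to functional feedback, where simulation mismatches rather than compile errors are appended to the prompt.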

3.4.2 AI for Datapath Circuits

Machine learning-based methods are emerging as a powerful tool for optimizing the design of datapath circuits, enabling enhanced efficiency and performance. By leveraging the distinct functionalities and structures of datapath circuits, AI can facilitate a more effective design optimization process.

Roy et al. [191] employ machine learning to predict the Pareto frontier for adders within the physical design domain. This work exemplifies how machine learning can be leveraged for design space exploration, providing insights into optimal design configurations. Utilizing an integrated framework that combines variational graph autoencoders with graph neural processes, [192] develops a novel approach for automatic feature learning of prefix adder structures. This method facilitates sequential optimization, enabling the exploration of Pareto-optimal structures alongside quality metrics. Another study [193] employs multi-perception neural networks to analyze and learn from existing designs and performance data of adders and multipliers. This approach not only achieves high prediction accuracy but also outpaces traditional optimization methods in speed. Moreover, the RL-MUL framework [194] introduces a novel RL strategy for enhancing multiplier designs. By adopting matrix and tensor representations for the compressor tree and leveraging a CNN as the agent, this method allows for dynamic adjustments to the multiplier structure, showcasing the adaptability of AI in complex design optimization.

3.4.3 AI for Analog Circuits

AI's integration into analog IC design automation marks a pivotal advancement, enhancing both the efficiency and effectiveness of algorithms. This integration capitalizes on graph and image data representations, mirroring circuit topologies and layouts [195], to address the challenges inherent to analog design—namely, slow performance evaluation and high search complexity.

AI in Analog Topology Generation: The integration of AI into the generation of analog topologies is revolutionizing the field by speeding up evaluation processes, honing in on more efficient search spaces, and improving optimization techniques. Among the diverse approaches, variational graph autoencoders (VGAEs) have been employed for circuit topologies as showcased by Lu et al. [196], while RL-based methods have been applied to power converters, as demonstrated by Fan et al. [197]. More broadly, Zhao et al. have utilized RL alongside predefined libraries to address a wider array of problems [198]. Poddar et al. have introduced a data-driven strategy for selecting topologies and sizing devices, employing a variational autoencoder (VAE) to synthesize data and thereby reduce simulation expenses [199]. To tackle the complexities of large circuit design, hierarchical methods are being investigated. Lu et al. have put forward a bi-level Bayesian optimization technique for ΔΣ modulators [200], while Fayazi et al. and Hakhamaneshi have delved into intermediate topology representations and GNN models for voltage node prediction, respectively [201], [202]. These developments suggest that AI holds significant promise in streamlining the generation of complex topologies, including those of larger circuits comprising multiple sub-circuits.

AI in Analog Sizing: AI is playing a pivotal role in advancing optimization within the realm of analog sizing, notably through the use of ML as surrogate models and RL for direct optimization efforts. ML models, particularly feed-forward neural networks, have been adeptly trained to closely approximate circuit performance metrics. These models, when operated in inference mode, enable the prediction of new, unseen design points, thereby enhancing the efficiency of the search process [203]. On another front, RL, especially via the GCL-RL algorithm, marries RL techniques with graph neural networks to adeptly optimize analog sizing across varying technological domains. This synergy leverages GNNs' robust capability to encapsulate circuit topologies within the optimization framework [204]. Such methodologies, along with other RL-centric approaches, aim squarely at the intricate balance between global exploration and local exploitation, a balance that is essential for achieving sample efficiency in analog sizing tasks. Innovative strategies, including the use of Voronoi trees for the decomposition of the design space and Monte Carlo tree search (MCTS) for honing in on local search areas, highlight the complex tactics employed to navigate the vast, high-dimensional optimization landscapes with greater efficiency [205]. The field's progress and the diverse methodologies employed are thoroughly reviewed in a dedicated book chapter, offering a deep dive into the significant advancements and techniques in ML applications for analog sizing [206].

AI in Analog Layout Automation: The application of AI in analog layout automation significantly enhances processes such as constraint extraction, placement, and routing, as extensively reviewed in [207].

For constraint extraction in analog layouts, graph-based methodologies are pivotal for identifying symmetry in netlists. These methods encompass graph similarity analysis, edit distance computation, and unsupervised learning for device matching, alongside convolutional graph neural networks for the prediction of layout constraints [208]. A detailed survey on these techniques is provided in [209].

ML's role in analog layout extends to automating the imitation of expert designs, modeling circuit performance, and optimizing the layout process. GeniusRoute [210] leverages variational autoencoders for making routing predictions that mimic human expertise, impacting various aspects of layout design including well generation [211], placement strategies [212], and cell generation processes [213]. CNNs and GNNs are utilized for predicting the performance of designs, thereby optimizing placement and minimizing the dependency on extensive simulations [214], [215]. The significant impact of ML on performance-driven placement and optimization in analog layout is thoroughly examined in [216].

Finally, addressing the pre-layout and post-layout simulation gap in analog IC design is vital. ML predicts post-layout parasitics directly from schematics to enhance simulation accuracy and speed up design. For example, ParaGraph [217] employs GNNs for accurate parasitic predictions, using ensemble models for specific value ranges. Early performance assertions using CNNs [218] and layout-aware optimization with BagNet [219], utilizing deep neural networks and evolutionary algorithms, streamline the design process. TAG combines text, self-attention networks, and GNNs for a comprehensive circuit representation, aiding in various predictions [195].

4 Large Circuit Models: A New Horizon

As discussed in the previous section, AI4EDA solutions have shown remarkable potential, yielding promising outcomes across a spectrum of tasks. However, these solutions predominantly exhibit a task-specific orientation, which, while effective in narrow applications, often limits their scalability and adaptability to the broad spectrum of design challenges.
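The surrogate-model pattern used in analog sizing can be sketched compactly: a cheap learned proxy screens many candidate sizings so that the expensive simulator is called sparingly. In the sketch below, a 1-nearest-neighbor lookup and a toy objective are illustrative stand-ins for the trained feed-forward networks and SPICE evaluations the text refers to; all names are assumptions, not from any cited tool.

```python
import random

def surrogate_sizing(simulate, bounds, n_init=5, n_iters=10, n_cand=50, seed=0):
    """Surrogate-assisted sizing loop: screen many candidate sizings with a
    cheap proxy, then spend a real simulation only on the most promising one.

    `simulate` is the expensive performance evaluation (lower is better);
    `bounds` gives (low, high) ranges for each device parameter.
    """
    rng = random.Random(seed)

    def sample():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    # Seed the surrogate's training set with a few real simulations.
    data = [(x, simulate(x)) for x in [sample() for _ in range(n_init)]]

    def proxy(x):  # 1-NN surrogate: metric of the closest simulated point
        return min(data, key=lambda d: sum((a - b) ** 2
                                           for a, b in zip(d[0], x)))[1]

    for _ in range(n_iters):
        candidates = [sample() for _ in range(n_cand)]
        best = min(candidates, key=proxy)      # cheap screening
        data.append((best, simulate(best)))    # one expensive evaluation
    return min(data, key=lambda d: d[1])       # best (sizing, metric) found
```

A neural surrogate would replace `proxy`, and an RL or Bayesian acquisition rule would replace the uniform candidate sampling, but the budget structure stays the same: many proxy queries per real simulation.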

Venturing into the domain of large circuit models (refer to Fig. 1) marks a bold departure from the previous AI4EDA solutions, moving towards a more integrated and AI-native design process. The term 'large' in LCMs signifies both the substantial model size and the vast array of circuit data collected from various EDA stages for circuit pre-training. Such a foundational model concept promises a unified framework that transcends task-oriented limitations, ensuring that LCMs are robust, versatile, and capable of handling the diverse tasks of modern circuit design with limited fine-tuning.

4.1 Motivation

The realm of AI4EDA, despite its advancements, faces inherent limitations by primarily repurposing machine learning models from disparate domains to tackle EDA challenges. This approach necessitates the development of distinct models for each specific EDA task. While these models have demonstrated efficacy on benchmark datasets, their ability to generalize to novel designs remains a subject of concern. The unique blend of computation and structure inherent to circuit data requires a nuanced understanding that transcends the capabilities of generic AI solutions. For instance, adapting LLMs for RTL generation without a deep comprehension of circuit design nuances often falls short of achieving optimal PPA results.

The emergence of large foundational models, such as BERT [2], GPT [5], and MAE [8], has redefined AI's landscape, offering a bifurcated approach of extensive pre-training on diverse data followed by targeted fine-tuning for specific tasks. This methodology has been instrumental in achieving breakthroughs across various data types, heralding a new era of AI applications. The introduction of multimodal foundation models like GPT-4V [13] and Gemini [14] further exemplifies this trend, facilitating previously unimaginable applications by harmonizing disparate types of data.

Drawing inspiration from these developments, we propose a paradigm shift towards AI-native EDA through the adoption of large circuit models. LCMs, with their focus on learning comprehensive circuit representations, are designed to encapsulate the intricate details and unique characteristics of circuits at every design stage. Echoing the CLIP model's success in bridging text and vision, LCMs aim to forge a similar convergence within EDA, weaving together high-level functional specifications with the minutiae of physical layouts. This holistic approach not only promises to refine the EDA workflow but also aims to significantly reduce time-to-market and enhance the overall design quality such as PPA and circuit reliability.

By championing LCMs, we stand on the cusp of revolutionizing EDA, transcending task-specific limitations, and embracing a future where AI-native solutions drive innovation, efficiency, and excellence in circuit design.

4.2 Overview of LCMs

The EDA workflow, extending from initial specification to the detailed final layout, encompasses a variety of circuit design formats, each demanding distinct encoders within the LCMs. These encoders, designed to handle specific modalities – specification, architecture design, high-level algorithms, RTL design, circuit netlists, and physical layouts – are the core components of LCMs.

To effectively leverage the diverse data inherent to each design modality, LCMs must be pre-trained with a focus on general yet comprehensive design knowledge. This involves not just a superficial understanding but a deep encoding of the nuances present in each modality. For instance, in the circuit netlist modality, the encoded representations must encapsulate both the functional intent and the physical structure of the circuits. This depth of understanding facilitates a more accurate and cohesive foundation for subsequent design tasks. Please refer to Section 5 for details.

The next step in harnessing the power of LCMs involves the fusion and alignment of these unimodal representations to form a cohesive multimodal representation [23]. This process is critical in bridging the gaps between disparate stages of the design process, employing advanced techniques such as shared representation spaces, cross-modal pre-training, and innovative fusing strategies. These methodologies aim to synthesize the information captured in individual modalities into a unified, actionable framework that can guide the design process from conception to completion.

Since the specifications, RTL codes, netlists, and layout designs are representative formats in front-end and back-end flows, this perspective paper outlines three primary alignment challenges:

• Spec-HLS-RTL Representation Alignment: Utilizing the transformative self-attention mechanism inherent to Transformers, this approach seeks to harmonize the representations of architecture design, high-level C/C++ prototypes, and RTL designs. This unified space enables the coexistence and interaction among these modalities, facilitating a seamless transition across design stages.

• RTL-Netlist Representation Alignment: Inspired by the groundbreaking CLIP model, this challenge leverages contrastive learning and mask-and-prediction training strategies. The goal is to map the embeddings of RTL designs and circuit netlists into a shared latent space, ensuring a coherent progression from logical design to physical implementation.

• Netlist-Layout Representation Alignment: The final alignment challenge focuses on the crucial step of ensuring that the physical layout accurately mirrors the detailed design captured in the netlist. This alignment is vital for the physical realization of the design, embodying the transition from theoretical models to tangible, manufacturable circuits.

By confronting these alignment challenges head-on, LCMs promise to revolutionize the EDA workflow, enabling novel applications and methodologies that were previously unattainable. This detailed exploration (please refer to Section 6) sets the stage for a comprehensive discussion on multimodal alignment techniques, further elaborated in subsequent sections, heralding a new era of AI-native circuit design.

4.3 Opportunities and Potentials

By accumulating knowledge learned from diverse circuit types and applying cross-stage learning on various design modalities, the potentials of LCMs extend across various aspects of design and verification:

• Enhanced Verification: LCMs promise to revolutionize verification by harnessing a deep, cross-stage understanding of circuit designs. This enables more streamlined verification processes, significantly reducing iterations and enhancing the detection of design flaws early in the design cycle.
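The CLIP-inspired RTL-netlist alignment can be made concrete with a symmetric contrastive (InfoNCE-style) objective: each RTL embedding should score highest against its own netlist embedding and lower against every other one in the batch. Below is a dependency-free sketch on toy vectors; the encoders that would produce real RTL and netlist embeddings are assumed to exist elsewhere, and the function names are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two (non-zero) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def clip_style_loss(rtl_embs, netlist_embs, temperature=0.1):
    """Symmetric InfoNCE objective: rtl_embs[i] and netlist_embs[i] are a
    matched pair; every other combination in the batch is a negative."""
    n = len(rtl_embs)
    logits = [[cosine(r, m) / temperature for m in netlist_embs]
              for r in rtl_embs]

    def mean_xent(rows):  # cross-entropy with the diagonal as the target
        loss = 0.0
        for i, row in enumerate(rows):
            log_z = math.log(sum(math.exp(s) for s in row))
            loss += log_z - row[i]
        return loss / n

    cols = [list(c) for c in zip(*logits)]   # netlist -> RTL direction
    return 0.5 * (mean_xent(logits) + mean_xent(cols))
```

Minimizing this loss pulls matched RTL/netlist pairs together in the shared latent space while pushing mismatched designs apart, which is exactly the coherence the alignment bullet above asks for.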

• Early and Precise PPA Estimation: The comprehensive insights LCMs offer into design data empower them to provide early and accurate PPA predictions. This capability ensures that critical design decisions are informed and strategic from the outset, aligning with optimal design objectives.

• Streamlined Optimization: By pinpointing the true bottlenecks affecting PPA, LCMs can facilitate targeted optimizations. This not only accelerates the design optimization process but also ensures that improvements are effectively implemented across different design stages, enhancing overall design quality.

• Innovative Design Space Exploration: The intelligence imbued within LCMs opens the door to expansive design space exploration. Designers are equipped to discover novel architectures that ingeniously balance PPA trade-offs, fostering creativity and innovation in circuit design.

• Generative Design Solutions: Perhaps the most revolutionary aspect of LCMs is their potential to underpin generative models capable of autonomously crafting efficient and innovative circuits. This could drastically reduce the time-to-market for new chip designs, offering a competitive edge in the rapidly evolving semiconductor industry.

In essence, LCMs represent not just a technological advancement but a paradigm shift in how circuit design and verification are approached. The full realization of LCMs' potential, however, hinges on the development of sophisticated AI-native techniques for circuit representation learning, challenging the EDA community to explore and harness these untapped capabilities.

5 Unimodal Circuit Representation Learning

The journey toward an AI-native EDA paradigm embarks with the essential development of robust unimodal circuit representation learning. These foundational representations are the building blocks for the envisioned multimodal LCMs. This section delves into the nuances of unimodal circuit representation learning, underscoring its indispensable role in establishing a comprehensive and nuanced foundation for sophisticated LCMs. The insights garnered here are paramount for achieving a holistic comprehension of circuit data, which is crucial for the realization of advanced LCMs.

5.1 Representation Learning for Front-End Design

Circuit design commences with the specification and architecture design phase, where the high-level functional intents are formulated. At this juncture, techniques derived from natural language processing are invaluable, transforming specifications into structured, machine-interpretable representations.

As we descend the design hierarchy, representation learning must adeptly adapt to the increasing granularity of detail. At the SystemC and RTL stages, the representation's focus shifts to encompassing the logical and behavioral intricacies of the circuit. In this domain, machine learning paradigms such as LLMs for code, graph neural networks, and hybrid models become instrumental, skillfully capturing the complex logic structures and their interrelations.

5.1.1 Representation Learning for Architecture Design

The performance and power consumption of architectures exhibit an intrinsic dependence on specific application contexts. In pursuit of optimizing the trade-off among PPA for targeted applications, architectural designers traditionally employ detailed simulation tools complemented by extensive domain-specific expertise. This conventional methodology, while comprehensive, tends to be both time-intensive and prone to human errors. The advent of LCMs presents a novel paradigm, facilitating rapid exploration of architectural design spaces by leveraging insights into the nuanced interactions between application workloads and architectural configurations. Thus, it is imperative for LCMs to encapsulate application workload representations adaptable to various architectural designs.

Several endeavors have been undertaken in tasks related to architectural design. For instance, NPS [220] utilizes a specialized GNN called AssemblyNet for workload representation learning, leveraging both the application's code structure and its runtime states. Trained with a data prefetch task, AssemblyNet identifies the characteristics of typical program slices and minimizes the inaccuracy of sampling-based simulation. Perfvec [221] proposes to learn independent program and architecture representations for generalizable performance modeling. Supervised with an instruction incremental latency prediction task, the yielded model demonstrates applicability to performance modeling across different microarchitectures. On the other hand, several studies have explored the representation of architecture in depth. For instance, GRL-DSE [222] leverages graph representation learning to establish a compact and continuous embedding space for microarchitectures. This approach, utilizing self-supervised learning, enhances the efficiency of identifying optimal microarchitecture parameters. Meanwhile, daBO [223] presents an architecture representation for accelerators enriched with domain-specific knowledge. It involves the manual identification of critical factors that significantly influence the architecture's PPA, and seeks the optimal parameter combinations within this newly defined representation space.

However, existing studies still face challenges in workload and architecture characterization:

• Many models struggle to account for performance-critical factors like branch mispredictions and cache misses, which relate to broader historical states and resist capture through static execution snapshots. An effective LCM must grasp these long-term and complex relationships to accurately represent application workloads.

• The intricate relationship between application workloads and power consumption has been underexplored. An ideal LCM would not only integrate power-related factors tailored to varied application workloads, such as flip rates and dynamic voltage fluctuations, but also encapsulate the complex interplay between power consumption and performance, ensuring a cohesive modeling of both aspects.

• Current methods primarily concentrate on direct analysis of source code or simulation traces, which overlooks the incorporation of substantial domain knowledge accumulated by experienced architecture designers over the years. An LCM should aim to blend these disparate strands of knowledge, facilitating enhanced representation learning in terms of accuracy and interpretability.

At the architectural exploration stage, the focus should be on developing representations that accurately mirror the multidimensional nature of hardware design, capturing not just the static features but also the dynamic interactions within the system. To achieve this, we should employ advanced ML techniques that can process and integrate information from various data sources, including code structure, runtime behavior, and architectural parameters. This process involves constructing multi-layered embeddings that reflect the hierarchical nature of hardware systems, from individual components to the entire architecture. These representations should be learned through a combination of supervised and unsupervised learning tasks, designed to highlight different aspects of the hardware's performance and operational characteristics. By doing so, LCMs can provide a rich, nuanced understanding of the design space, guiding designers towards solutions that optimize performance, power, and area in concert.

5.1.2 Representation Learning for HLS/RTL

HLS and RTL represent two pivotal stages in the digital circuit design process. HLS provides a higher-level abstraction, utilizing high-level programming languages such as C, C++, or SystemC to articulate the functionality and behavior of the hardware system. Conversely, RTL offers a more granular view, detailing the data flow between registers and the operations on that data in Verilog or VHDL. Transitioning from HLS to RTL, designers typically employ HLS tools to synthesize the higher-level representation into its detailed RTL counterpart.

To incorporate deep learning in understanding and optimizing HLS/RTL representations, we can explore two innovative methodologies. One method interprets code as a series of tokens, analogous to words in natural language, making it possible to apply NLP techniques to HLS/RTL codes. A particularly effective strategy in this domain is masked language modeling (MLM), where certain tokens are obscured during model training, prompting a Transformer-based encoder (such as BERT) to infer the missing tokens. This self-supervised learning approach yields representations rich in the semantic essence of the hardware design, capturing the functional nuances at both the HLS and RTL levels. Another method represents HLS/RTL designs as control data flow graphs (CDFGs), offering a graphical perspective that maps out the control and data dependencies within the design. Here, advanced GNNs come into play, learning from the complex web of interactions and dependencies depicted in the CDFGs. This method allows for the extraction of comprehensive representations that embody the intricate structure and operational logic of the design, providing a solid foundation for subsequent optimization and synthesis tasks.

The former token view is more aligned with the high-level specifications and contains more syntax information. With the application of language models that excel in capturing global relationships, we can obtain representations that encompass the overall behavior and functionality of the design. In addition, the learned representations will benefit the generalizability and scalability of attention-based models. On the other hand, the graph view is more aligned with the lower-level gate-level representations and contains more structural and semantic information. Compared to language models, GNNs focus more on extracting local information.

To enhance the effectiveness of the learned representations, we may consider combining these two views by employing multi-view learning techniques. There are different strategies for integrating these views. The simplest approach involves concatenating the representations obtained from each view and passing them through a multi-layer perceptron (MLP). This allows for the fusion of information from both views, leveraging their individual strengths. Alternatively, a more sophisticated approach is cross-modal prediction, which facilitates deeper interaction between the two views. Through cross-modal prediction, the model is trained to predict one view based on the other view, encouraging the exploration of shared information and dependencies between the representations. By employing multi-view learning techniques, we can maximize the potential of the learned representations and create a more unified and enriched representation of HLS/RTL.

The learned HLS/RTL representations would offer a wide range of applications for various downstream tasks. For instance, they can be leveraged to predict PPA directly from the HLS/RTL, enabling efficient estimation of these crucial design metrics. Additionally, the learned representations can be employed for formal verification to verify the correctness and functional behavior of the design.

5.1.3 Representation Learning for Circuit Netlist

At the netlist level, the design serves as a pivotal junction bridging the front-end design phase with the subsequent back-end processes. Integrating machine learning into logic synthesis, physical design, or verification necessitates a nuanced understanding of the netlist's graph topology alongside gate functionality. This dual focus ensures the netlist encapsulates both the high-level behaviors critical in front-end designs and the intricate structures that profoundly influence PPA in back-end designs.

Initiatives like the DeepGate Family [16], [17] stand at the forefront of crafting generalized gate-level representations. The first version [16] targets circuits in the and-inverter graph (AIG) format and innovatively employs random simulation outcomes to pre-train circuit netlists, with logic-1 probabilities as labels encapsulating crucial functional and structural insights. This pre-training strategy equips DeepGate to capture the core attributes of gate-level circuit designs, allowing for subsequent fine-tuning across a range of front-end applications, such as logic verification [224] and design for testability [225].

DeepGate2 [17] advances this approach by disentangling functional and structural representations within a netlist, learning distinct embeddings for each through specialized labels. Functional embeddings leverage pairwise truth table similarities for supervision, aligning netlists of similar functionalities in close proximity within the functional embedding space. This alignment aids in discerning behavioral similarities and discrepancies. Concurrently, structural embeddings predict pairwise reconvergence, mirroring topological nuances and the complex interconnectivity among logic cells in netlists. Beyond the DeepGate Family, FGNN [226] introduces a novel contrastive learning task focused on differentiating functionally equivalent from inequivalent circuits, enriching the dataset through strategic perturbations to generate logically equivalent circuit variants.

After technology mapping, netlists are transformed into a form optimized for the target technology, presenting new challenges and opportunities for representation learning. This stage is critical, as it directly influences the final PPA outcomes of the design. While we can still formulate the post-mapping netlist as a directed graph and utilize a GNN-based model similar to DeepGate to learn general representations, the complexity of post-mapping netlists, characterized by their technology-specific primitives and configurations, necessitates sophisticated representation learning techniques that can accurately capture the nuances of these transformations.

While the primary focus of logic synthesis has been on optimizing combinational logic, the sequential behavior of circuits

is also a critical facet to represent. DeepSeq [106] expands upon the DeepGate technique by elucidating the temporal correlations within sequential netlists. This advancement is facilitated by leveraging both transition and logic-1 probabilities for supervision across each logic gate and memory element, where transition probabilities unveil insights into the circuit's state transition behaviors and logic-1 probabilities illuminate functional and topological characteristics. Such a nuanced approach allows DeepSeq to adeptly encode the complex dynamics and behaviors of sequential circuits, proving instrumental for downstream applications such as netlist-level power estimation and reliability analysis.

Fig. 6: DeepGate2: structural and functional disentangled netlist representation learning. [Figure: truth tables of three circuits, mapped into separate structural and functional embeddings in the embedding space.]

5.2 Representation Learning for Back-End Design

Advancing to the physical design stage, representation learning confronts the geometric and spatial intricacies of the circuit layout. Here, convolutional neural networks (CNNs) and vision Transformers (ViT) are particularly adept at capturing the spatial relationships and critical topology in this phase. The objective is to distill the physical design's essence into a representation that not only mirrors the layout's complexities but also yields actionable insights for further optimization and refinement.

The meticulous development of unimodal representations across each design stage knits a rich tapestry of circuit knowledge. Existing studies have explored unimodal learning for the prediction of various factors such as routability, IR drop, and lithography hotspots [114], [135], [227]. Although back-end design consists of many design stages with different levels of geometric abstraction, as shown in Fig. 2 b), existing studies mostly focus on individual stages. There are a few key problems yet to be solved before making back-end representation learning practical in real design applications. For easier understanding, we interpret the layout representation learning task by comparing it with computer vision tasks on images.

Modern layouts consist of rectilinear shapes with a layer property to represent placement and routing information. These shapes need to follow design rules like minimum width, spacing, area, and so on. Details of shapes matter: a layout representation encoder needs to capture the detailed changes in layouts. Besides, each shape in a layout is located at a layer. A layer is like an RGB channel of an image, so a straightforward way is to encode shapes at each layer into a channel. However, modern layouts often have more than 20 layers, including metal and via layers, which goes far beyond the typical cases of images.

Unlike images in computer vision, which can be resized without losing major information, the dimensions of layouts change with design scales, e.g., 256 × 256, 1024 × 1024, 4096 × 4096, and beyond. Simply resizing a layout like an image can lose a lot of information, because individual pixels from layouts of different design scales can correspond to the same geometric resolution defined by manufacturing technologies. A layout representation encoder needs to handle various layout dimensions in a universal way for training on different designs.

A layout of a chip design contains both geometric and topological (i.e., interconnect) information, so its representation needs to align with its circuit graph as well. For instance, if two geometric shapes of adjacent layers (e.g., a metal layer and a via layer) are located at the same positions, they are regarded as connected. A layout representation encoder should be able to identify such topological correlation between shapes. Meanwhile, since back-end design has many stages and the geometric information in a layout evolves from abstract to concrete with more and more details, the representations at each stage should align with each other as well. These problems raise challenges in learning general representations for back-end design and also call for the multidimensional alignment that is emblematic of LCMs, which will be detailed in the subsequent section.

6 Harmonizing Representations: A Multimodal Symphony

In the realm of circuit design, moving away from unimodal representation learning towards a multimodal integration approach offers a fertile ground for innovation. This strategy seeks to merge the distinct representations from each design phase into a cohesive and unified narrative, ensuring a seamless transition across the design stages. Such integration not only maintains a consistent flow of information but also enriches the design process with enhanced coherence.

6.1 Implementing Multimodal Circuit Alignment

Central to the concept of multimodal circuit learning is the understanding that all design stages, although distinct in form, share a common functional objective. By applying sophisticated feature extraction and alignment techniques, it becomes possible to overcome the semantic disconnects that typically arise in representation learning. This ensures that the original design intent is not only preserved but also accentuated throughout the entire design lifecycle. The adoption of machine learning models, particularly those leveraging scalable self-attention mechanisms and joint embedding spaces, promises to lead the charge towards a more integrated and holistic approach to circuit design.

A potential solution to achieve this alignment involves the use of masked modeling across different modalities. This technique, inspired by successful applications [228] in natural language processing, involves selectively hiding parts of the input data across modalities and then training the model to predict these masked portions. By applying this method across circuit design representations—ranging from natural language specifications, high-level algorithms, and RTL implementations to detailed physical layouts—the model can learn a joint representation that captures the essence of the design process at various abstraction levels. This joint representation is crucial for the model to understand the transition from high-level specifications to detailed implementations, enabling

it to navigate the complexities of circuit design with greater precision and efficiency.

However, addressing the variability in how a high-level design can be mapped to multiple lower-level implementations, each with distinct PPA characteristics, poses a significant challenge. To tackle this, models need to be equipped with the ability to recognize and evaluate the trade-offs associated with different design choices. Integrating reinforcement learning techniques with the multimodal learning framework can provide a solution. By setting the optimization of PPA metrics as the reward function, the model can learn to navigate the space of possible implementations, identifying solutions that best meet the specified criteria. Furthermore, incorporating attention mechanisms can enhance the model's ability to focus on relevant features across modalities, thereby improving its capacity to predict implementations that not only meet functional requirements but also optimize for PPA objectives. Through these methods, the implementation of multimodal alignment in circuit design can become not just a theoretical concept but a practical tool for advancing the field.

Considering the vast differences between design modalities, aligning them in a single step is a formidable challenge. To address this, we propose a phased approach to multimodal alignment, partitioning the process into three distinct phases: “Spec-HLS-RTL Representation Alignment,” “RTL-Netlist Representation Alignment,” and “Netlist-Layout Representation Alignment.” Since the RTL design contains high-level semantics and the netlist is more suitable for aligning with the subsequent back-end designs, this strategy employs these two designs as intermediaries, facilitating a more manageable and focused alignment process. By breaking down the alignment into these stages, we can concentrate on specific transitions within the design flow, allowing for a more tailored application of machine learning techniques to each phase. This phased approach not only makes the task of alignment more feasible but also ensures that each stage of the design process is optimally aligned, leading to more coherent and efficient design outcomes. Through careful implementation of this strategy, we aim to bridge the gap between the various design modalities, ultimately fostering a more integrated and seamless circuit design environment.

6.2 Spec-HLS-RTL Representation Alignment

The transition from conceptual specifications to RTL implementation involves a complex journey through natural language specifications, architectural exploration, high-level languages such as SystemC, and hardware description languages like Verilog and VHDL. Leveraging LCMs within a multimodal framework would significantly refine this transformation across different stages, boosting the quality, efficiency, and pace of the design process.

LCMs orchestrate a unified representation space that ensures the harmonious integration of front-end design elements across various formats. This unified approach not only streamlines the capture of intricate relationships among circuit components but also accelerates design generation, enhances optimization efforts, and streamlines verification, embodying a leap forward in circuit design methodology.

One of the paramount applications of aligning representations at this stage is the potential substantial improvement in RTL generation. As discussed earlier, existing RTL generation techniques merely fine-tune large language models on HDL code, a process that lacks circuit-specific understanding. With the aligned representations, we could devise a more sophisticated tokenization strategy for HDL code, paving the way for a deeper understanding and representation of hardware design intricacies. This method transcends the capabilities of existing approaches by generating RTL code that is not only syntactically accurate but also semantically rich, closely aligned with the initial specifications and high-level design intentions. Such advancements promise to elevate the precision and applicability of automatically generated RTL, ensuring designs are both optimized and verifiable from the outset.

Furthermore, the C2RTL verification process benefits significantly from the aligned representation facilitated by LCMs, addressing a pivotal challenge in the transformation from high-level specifications to RTL. This verification phase necessitates a thorough comparison of functional behaviors across natural language specifications, high-level programming languages like C/C++, and RTL implementations. Traditionally, within the EDA framework, this comparison has been both labor-intensive and prone to errors, largely due to the disconnect between the abstract, functional descriptions at the high level and the detailed, hardware-specific implementations at the low level. Bridging this gap between high-level and low-level circuit representations has been a long-standing challenge for the EDA community.

The adoption of LCMs with multimodal alignment into this process introduces a transformative approach to C2RTL verification. By harmonizing the representations of the circuit's functionality across different stages, these models significantly streamline the verification process. LCMs can identify and resolve discrepancies by meticulously comparing the generated RTL representation against its high-level counterparts. This capability is enhanced by the transformer technology, renowned for its ability to attend selectively to various parts of the input based on their relevance. Such focused attention allows the models to concentrate on areas where discrepancies between the intended functionality and its RTL implementation are most pronounced, offering precise insights and resolutions to designers. This method not only reduces the time and effort traditionally associated with C2RTL verification but also increases the accuracy and reliability of the verification process, marking a significant advancement in ensuring circuit design integrity and performance [182], [229].

6.3 RTL-Netlist Representation Alignment

The RTL-Netlist Representation Alignment stage is crucial for bridging the gap between RTL, AIG netlists, and post-mapping netlists. This alignment paves the way for numerous applications, significantly impacting early PPA estimation, design optimization, and verification processes.

One of the primary benefits of RTL-Netlist alignment is the enhancement of early PPA estimation. By aligning representations from the RTL design phase through to the netlist level, designers can gain insights into the potential power, performance, and area characteristics of their designs much earlier in the development cycle. This early insight allows for more informed decision-making, enabling adjustments to the design that can lead to optimal PPA outcomes. Such proactive adjustments can significantly reduce the need for time-consuming and costly revisions at later stages, streamlining the design process and accelerating time-to-market.

Beyond early PPA estimation, RTL-Netlist alignment also opens the door to more sophisticated design optimization strategies. By having a clear view of how RTL designs translate into netlist implementations, designers can identify and address inefficiencies

at a much deeper level. This insight enables the application of targeted optimizations that can improve the overall quality and efficiency of the design. Moreover, leveraging machine learning models trained on aligned data sets allows for the automation of some optimization tasks, further enhancing the design efficiency and effectiveness.

Finally, the alignment between RTL and netlist representations significantly benefits the verification process. With a comprehensive understanding of how design intentions are manifested in the netlist, verification teams can develop more accurate and efficient testing strategies. This alignment ensures that the verification process is not only faster but also more thorough, reducing the likelihood of errors slipping through to later stages. The ability to detect and address potential issues early on, based on a deep understanding of the aligned representations, is invaluable in maintaining design integrity and reliability.

6.4 Netlist-Layout Representation Alignment

The aspiration to align the circuit netlist with its physical layout is not merely an ambition but a transformative step in EDA. In traditional EDA workflows, the netlist, which represents the logical abstraction of a circuit, and the physical layout, which represents the concrete geometries of the circuit, have been treated as separate entities. However, the increasing complexity of modern integrated circuits has highlighted the need for a tighter integration between the logical and physical domains.

By aligning the netlist with the physical layout, designers can gain a deeper understanding of the relationship between the logical function and physical form of a circuit. This alignment enables a unified perspective of the design, where the logical and physical aspects are considered together, rather than in isolation. It allows designers to analyze and optimize the design from a holistic standpoint, taking into account the impact of physical constraints on logical functionality, and vice versa. Another key benefit of achieving this alignment is the ability to revolutionize the verification process. Traditionally, verification has been performed separately for the logical and physical domains, leading to potential mismatches and design errors. With a multimodal approach that considers both domains simultaneously, designers can detect and resolve issues that arise due to the interaction between the logical and physical aspects of the design. This comprehensive view of the design across stages ensures that the final product meets the desired specifications and performs as expected.

Furthermore, the integration of logical and physical information opens up new possibilities for design optimization. By presenting an integrated picture of the entire design space, designers can explore a wider range of possibilities and make more informed decisions. This comprehensive perspective allows designers to identify and address potential bottlenecks or issues early in the design process, leading to improved quality and efficiency. A specific example of the significance of integrating netlist-layout information is in pre-routing timing prediction. Pre-routing timing prediction aims to accurately evaluate potential sign-off timing violations in the early stages of the design process, reducing design cycles and avoiding costly iterations. Traditionally, pre-routing timing evaluation methods, such as static timing analysis, have primarily focused on netlist information, which represents the interconnections between cells in a design. However, these methods often overlook the crucial role that layout information plays in timing prediction. As most timing optimization techniques require space to insert or resize gates, the circuit layout that reflects spatial information has a large impact on sign-off timing performance. Neglecting layout information can lead to inaccurate timing predictions and sub-optimal design decisions. Through netlist-layout representation alignment, LCMs can provide more accurate estimates of sign-off timing performance. This enables designers to identify and address timing issues early in the design process, reducing the likelihood of sign-off violations and the need for time-consuming iterations.

In summary, the evolution towards a multimodal symphony in circuit design represents not just a technical advancement, but a reimagining of how design processes can be optimized for efficiency, innovation, and coherence. The potential for such an approach to revolutionize the field lies in its ability to harmonize disparate data types and design stages into a single, unified framework, paving the way for breakthroughs in design methodology and implementation.

7 Pioneering LCM Applications

While extensive empirical data are yet to be available, the potential applications of LCMs can be vividly illustrated through hypothetical scenarios and conceptual frameworks. The narrative examples presented in this section serve to bridge the gap between abstract concepts and tangible applications, offering a glimpse into the transformative impact LCMs could have on the EDA field.

7.1 Circuit Learning for SAT

The Boolean Satisfiability (SAT) problem asks whether there exists at least one assignment that makes a given Boolean formula evaluate to True. SAT is a fundamental problem in many areas, especially in EDA fields such as logic equivalence checking, model checking, and testing. Over the past few decades, the SAT community has advocated adopting the conjunctive normal form (CNF) as the de facto standard format for problem instances and developed numerous advanced CNF-based SAT solvers [231], [232]. However, the efficacy of CNF-based solvers has recently encountered bottlenecks in solving hard SAT problems, prompting recent research to explore circuit-based solvers or strategies as a potential breakthrough. In this section, we aim to demonstrate the impact of the large circuit model on SAT solving.

First, the circuit netlist serves as a natural representation of SAT problems within the field of EDA and can also be efficiently derived from various combinatorial optimization problems. Inspired by an early endeavor [230], a circuit-based universally efficient reformulation mechanism could significantly reduce the complexity before solving these problems. The LCMs, especially the uni-modal netlist encoders, are capable of capturing the structural features across various netlist distributions. Exploiting this knowledge allows for the exploration of a global transformation flow based on reinforcement learning, ultimately minimizing the overall complexity of the solving process.

Table 1 shows our preliminary results when applying the netlist encoder to accelerate SAT solving for industrial logic equivalence checking cases I1-I5. In the Baseline setting, the instances are solved directly using the Kissat solver [232]. We denote the RL agent runtime, transformation time, and solving time as Tagent, Ttrans, and Tsolve, respectively, measured in seconds. The overall runtime, which sums up all three components, is denoted as Tall in seconds. Additionally, we list the number of variables (# Vars) and

TABLE 1: Solving time comparison between Ours and [230] on LEC cases (times in seconds).

Case |       Baseline            |                 [230]                            |                        Ours
     | # Vars | # Clas  | Tsolve | # Vars | # Clas | Ttrans | Tsolve | Tall   | Red.   | # Vars | # Clas | Tagent | Ttrans | Tsolve | Tall  | Red.   | Red.*
I1   | 42,069 | 105,711 | 322.46 | 5,616  | 54,529 | 5.31   | 51.49  | 56.80  | 82.39% | 3,160  | 31,281 | 9.27   | 5.62   | 4.43   | 19.26 | 94.03% | 66.08%
I2   | 44,949 | 112,954 | 708.97 | 6,052  | 60,573 | 5.61   | 147.85 | 153.46 | 78.35% | 4,112  | 41,873 | 9.81   | 6.12   | 4.41   | 20.81 | 97.07% | 86.44%
I3   | 42,038 | 105,629 | 531.94 | 5,612  | 54,825 | 5.21   | 109.89 | 115.10 | 78.36% | 3,849  | 37,329 | 8.37   | 5.61   | 2.91   | 17.56 | 96.70% | 84.74%
I4   | 37,275 | 93,678  | 289.89 | 5,038  | 49,805 | 4.61   | 90.05  | 94.66  | 67.35% | 3,478  | 34,013 | 7.32   | 5.11   | 2.50   | 15.01 | 94.82% | 84.14%
I5   | 30,087 | 75,537  | 172.79 | 4,006  | 38,069 | 3.91   | 38.77  | 42.67  | 75.30% | 2,311  | 22,473 | 4.78   | 4.31   | 1.10   | 10.50 | 93.92% | 75.39%
Avg. |        |         | 405.21 |        |        |        |        | 92.54  | 77.16% |        |        |        |        |        | 16.63 | 95.90% | 82.03%

clauses (# Clas), the reduction in Tall compared to Baseline (Red.) and compared to [230] (Red.*). The solving time is reduced by 96.14% and 82.03% on average, respectively.

Second, gate-level embeddings proficiently encapsulate the logical correlations among gates within a circuit netlist, ensuring that gates sharing functional similarities are closely aligned within the embedding space. This alignment allows for a precise representation of logical connections between variables in the SAT formulation. By integrating these gate-level embeddings, we can highlight and utilize the discerned correlations to expedite the SAT-solving process. This is achieved by embedding these correlations as additional constraints in the initial SAT problem instances, thereby enhancing the solver's efficiency.

Third, traditional heuristic strategies (e.g., branching heuristics) predominantly depend on the correlation between variables in CNF representations, which cannot preserve the circuit's topological structure. Recent advancements, such as [17], showcase the effectiveness of a unimodal netlist encoder in capturing the intricate gate-level logic correlations within circuit netlists.

Building upon the above, the LCMs excel in identifying gate-level functional relationships within circuit netlists based on the unimodal netlist encoders. By harnessing the power of LCMs, new and efficient circuit-based SAT-solving strategies can be developed, ultimately improving the overall performance and effectiveness of heuristic designs.

7.2 LCM for Logic Synthesis

Logic synthesis stands at the crossroads of multiple representations and sophisticated algorithms, such as truth tables, sum-of-products, binary decision diagrams (BDDs), and directed acyclic graphs (DAGs), with none asserting complete dominance. This diversity underscores a fundamental challenge: selecting and optimizing the most effective representation for a logic function. Herein lies the transformative potential of LCMs. By learning and internally representing the same logic function across diverse formats, LCMs exhibit unparalleled adaptability. Their deep understanding of intricate relationships and optimization pathways within logic synthesis allows for a flexible approach to representing complex logic functions. This adaptability becomes instrumental in handling multifaceted inputs and expressions of logic, showcasing LCMs' capability to revolutionize the representation and optimization of logic functions in a way previously unattainable.

As we venture into the realm of nanometer-scale technologies, the significance of technology-independent optimizations becomes increasingly pronounced, focusing on metrics like the number of literals and logic depth in DAGs for area and delay evaluation. Marrying these optimization strategies with the physical realities of the technology landscape introduces a new layer of complexity. LCMs are poised to address this challenge head-on by predicting physical characteristics such as timing, area, and power more accurately. Integrating physical awareness, LCMs offer a groundbreaking tool in logic optimization, enabling designers to base their decisions on a nuanced understanding of circuit behavior. This foresight not only refines optimization strategies but also facilitates superior PPA trade-offs, marking a leap forward in logic synthesis.

The crux of technology mapping, especially in FPGA and ASIC design, lies in navigating the constraints of heterogeneous logic blocks, interconnect resources, and optimal cell selection while balancing the PPA trade-offs. Addressing structural bias during technology mapping demands meticulous algorithmic strategies. LCMs herald a new era in technology mapping by leveraging their ability to learn from diverse logic representations and adapt mapping strategies to the nuanced requirements of the input data and technological constraints. Their versatility in overcoming structural biases through contextually aware mappings, coupled with the iterative feedback loop and physical information integration, offers tailored insights for refining mapping strategies. LCMs' scalability further underscores their effectiveness in managing complex circuit designs, presenting a compelling case for overcoming longstanding challenges in technology mapping.

In essence, the conceptual application of LCMs in logic synthesis promises a shift towards more efficient, accurate, and adaptable design processes, positioning them as the cornerstone for the next generation of circuit design methodologies.

7.3 LCM for Equivalence Checking

Equivalence checking stands as a critical verification step in digital circuit design, ensuring that functionality is preserved through synthesis or manual modifications. Traditional methods, while reliable, struggle with scalability in the face of increasingly dense designs and the complex optimizations required to meet PPA goals. Here, LCMs emerge as a transformative solution, offering a paradigm shift towards interactive equivalence checking that enhances the efficiency and effectiveness of the process.

LCMs have the unique potential to revolutionize this domain by enabling an end-to-end interactive equivalence checking process. This approach is particularly beneficial for ECO optimizations and custom design styles, where the goal extends beyond functional equivalence to include high-quality design modifications. Leveraging their deep understanding of circuit semantics, LCMs can offer insightful recommendations for design adjustments and patches during the interactive ECO phase. Drawing from extensive training on diverse circuit data, LCMs can identify underlying patterns and rules of successful designs, suggesting targeted modifications to resolve detected discrepancies. These suggestions are not only based on historical success but are also ranked according to their anticipated impact on PPA, empowering designers with informed choices that align with their specific objectives.

Furthermore, the iterative nature of LCMs means that these recommendations can be refined based on designer feedback, creating an efficient feedback loop that streamlines the equivalence checking and modification process. This iterative engagement not only accelerates the identification of viable design solutions but also enhances the overall quality of the final design.

In addition to transforming equivalence checking into an interactive dialogue, LCMs hold promise for augmenting existing equivalence checking systems. Traditional algorithms have exploited the empirical distribution of circuit designs, wherein current practices include: 1) partitioning and selecting fine-grained proof strategies [233], 2) adapting various encodings from a problem instance to a canonical solver instance [234], and 3) employing design-specific equivalence checking strategies (e.g., for multipliers [235]). These solutions remain limited by the need for hand-crafted heuristics and specialized strategies. LCMs, with their ability to automatically understand design intent and manage the distribution of design data, can act as a neural backbone for these systems. They can manage various heuristics in formal solvers or function as a neural scheduler for task distribution, significantly enhancing the performance and efficiency of equivalence checking processes.

This dual approach—transforming equivalence checking into an interactive process and augmenting existing systems—highlights the pioneering potential of LCMs. By leveraging the power of LCMs, designers can navigate the complexities of modern circuit verification with greater ease and precision, promising to elevate the verification process to new heights of efficiency and effectiveness.

7.4 LCM for Physical Design

Physical design is the stage that converts the logical representations of a circuit into physical representations. In this stage, a physical layout is generated by partitioning, floorplanning, placement, and routing. This process requires solving many NP-hard combinatorial optimization problems and is extremely complex and time-consuming. As the scale of an electronic design keeps increasing and the feature size keeps shrinking, traditional approaches to physical design face serious challenges. LCMs, on the other hand, could provide new perspectives on processing the physical and spatial relationships. By aligning with additional modalities, designers gain the unprecedented ability to pinpoint layout hotspots at earlier design stages and implement preemptive countermeasures. This proactive approach facilitated by LCMs allows for the early identification of potential issues related to timing, power, and thermal management, enabling adjustments before they escalate into more significant challenges.

To sum up, LCMs could learn the underlying characteristics of a physical representation and reveal new directions for design and optimization. More excitingly, they have the potential to serve as the foundation of new learning-based heuristics and revolutionize the traditional way of physical design, eliminating the burden of constantly designing new algorithms.

8 Tailoring LCMs for Specialized Circuits

Exploring specialized circuit domains reveals a diverse array of unique designs that extend beyond the standard digital circuits typically encountered in EDA workflows. Standard cell designs, datapath circuits, memory macros, and analog circuits possess distinct characteristics that necessitate custom approaches. The expansion of LCMs into these specialized arenas heralds a promising enhancement for design efficiency and optimization.

8.1 Large Circuit Models for Standard Cells

Standard cells form the fundamental building blocks of digital designs, comprising basic logic gates and complex combinational functions. Their design is critical for the overall performance and power efficiency of the chip. LCMs in this domain could leverage generative models to propose new cell architectures that optimize for a variety of constraints, including power, performance, area, and even novel objectives like robustness to process variations. Furthermore, these models could predict the impact of cell design changes on the higher levels of the design hierarchy, enabling a holistic approach to optimization.

For the front-end design of standard cells, LCMs can be employed for library pruning and cell characterization. The requirements for standard cell libraries differ between high-performance circuit design and low-power circuit design [238], [239]. Historically, designers have often relied on experience and
representations of an electronic design and even new methodologies
extensive simulations to select a subset for a new cell library.
in dealing with these tricky combinatorial optimization problems.
LCM can leverage existing selection experiences to better choose
A trained LCM could offer guidance to placement and routing suitable cells for the specific design scenario. Additionally, it can
for wirelength, routability, and timing optimization. Consider leverage generative models to continuously explore new topological
the placement optimization process, where traditional methods architectures, subsequently refining the generation process based on
have leveraged unimodal information for guidance, from gradient SPICE simulation results as feedback for continual improvement.
prediction [236] to routing congestion forecasting [114]. Common Characterization is the most time-consuming step in standard cell
practices involve transforming layout features into image-like design, requiring extensive SPICE simulations to generate liberty
data for machine learning model predictions, often employing libraries. However, the significance varies across different PVT
vision-based models like CNNs and Vision Transformers. Yet, this corners and standard cells. Therefore, accuracy-aware supervised
approach may overlook crucial interconnect information, given the learning can enhance the overall precision of libraries while
challenges vision-based methods face in preserving topological reducing runtime by prioritizing the importance of different corners
details alongside spatial relationships. Recent explorations into and cells [240], [241].
multimodal representations for physical design, however, illuminate
a promising path forward. Studies like LHNN [237] introduce
dual GNNs to capture both topological (circuit interconnections) 8.2 Large Circuit Models for Datapath Circuits
and spatial relationships, merging these insights in latent space. Datapath circuits, essential for performing arithmetic and log-
Similarly, Lay-Net [118] proposes substituting the GNN with CNN ical operations within microarchitectures, stand at the core of
for spatial analysis, capitalizing on the superior spatial awareness performance-critical computing. These components notably benefit
of vision-based methods. Despite these advancements, LCMs from bit-level optimization, necessitating a detailed focus on timing
have the capacity to move beyond merely integrating topological and power constraints.
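To ground the multimodal placement guidance discussed in Section 7.4, the sketch below fuses a topological view of a netlist (one round of neighbor averaging over the adjacency matrix, a stand-in for the GNN branch of models like LHNN [237]) with a spatial view (a pixel sampled from an image-like density map, a stand-in for a CNN branch), and scores each cell with a linear head. This is an illustrative NumPy-only toy with hypothetical names, not code from the cited works:

```python
import numpy as np

def message_pass(adj, feats):
    """One round of mean-aggregation message passing (topological view):
    each cell averages its graph neighbors' features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated cells
    return adj @ feats / deg

def spatial_feature(density_map, positions):
    """Sample one pixel per cell from an image-like density map (spatial view)."""
    return np.array([[density_map[y, x]] for x, y in positions])

def fused_congestion_score(adj, feats, density_map, positions, w):
    """Concatenate graph and image features, then apply a linear head."""
    topo = message_pass(adj, feats)                 # (n_cells, d) topological
    spat = spatial_feature(density_map, positions)  # (n_cells, 1) spatial
    return np.hstack([topo, spat]) @ w

# Toy netlist: cell 0 is connected to cells 1 and 2.
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
feats = np.array([[1.0], [2.0], [4.0]])   # one topological feature per cell
density = np.array([[0.1, 0.9],
                    [0.5, 0.3]])          # 2x2 image-like congestion map
positions = [(0, 0), (1, 0), (0, 1)]      # (x, y) pixel of each cell
w = np.array([1.0, 1.0])                  # equal-weight linear head

scores = fused_congestion_score(adj, feats, density, positions, w)
```

A production model would learn the message passing, the image encoder, and the head jointly; the point here is only the shape of the fusion: topological and spatial features meet in one latent vector per cell before prediction.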
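The accuracy-aware characterization strategy sketched in Section 8.1 can be reduced to a budgeted prioritization: spend the limited SPICE-simulation budget on the (cell, PVT corner) pairs whose machine-learning predictions are least trustworthy, and keep the cheap model predictions everywhere else. The error estimates, cell names, and corner labels below are hypothetical placeholders, not the specific method of [240], [241]:

```python
import heapq

def prioritize_simulations(error_estimates, budget):
    """Select the (cell, corner) pairs with the largest estimated model
    error, up to a fixed SPICE-simulation budget; worst pairs first.

    error_estimates: dict mapping (cell, corner) -> estimated error,
    e.g., the disagreement within an ensemble of characterization models.
    """
    ranked = heapq.nlargest(
        budget, ((err, pair) for pair, err in error_estimates.items()))
    return [pair for _, pair in ranked]

# Illustrative error estimates for two cells at three PVT corners.
errors = {
    ("NAND2", "ss_0.72V_125C"): 0.08,  # slow corner predicted poorly
    ("NAND2", "tt_0.80V_25C"):  0.01,
    ("NAND2", "ff_0.88V_-40C"): 0.03,
    ("DFF",   "ss_0.72V_125C"): 0.12,
    ("DFF",   "tt_0.80V_25C"):  0.02,
    ("DFF",   "ff_0.88V_-40C"): 0.05,
}

# With a budget of 3, both slow corners and the flip-flop's fast corner
# are re-simulated in SPICE; well-predicted corners keep the ML values.
selected = prioritize_simulations(errors, budget=3)
```

In a real flow the error estimate would come from the characterization model itself (e.g., predictive variance), and the loop would alternate between simulation, retraining, and re-prioritization.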

[Fig. 7: Overview of large circuit models for analog EDA. Data sources such as ADCs, PLLs, and amplifiers are captured in several data structures (schematic graph, natural language, routing image), mapped by the Analog LCM into an aligned representation, and applied to downstream tasks including topology generation, analog sizing, analog placement, and analog routing.]

In datapath design, the optimization of a myriad of parameters is crucial for balancing PPA effectively. Acting as surrogate models during design space exploration, LCMs can significantly streamline the optimization process. LCMs specifically tailored for datapath circuits offer a promising approach by employing specialized architectures adept at understanding the complexities of arithmetic operations. This enables them to enhance logical efficiency while optimizing the physical layout. Through training on diverse datasets encompassing both synthetic and real-world datapath designs, LCMs pave the way for exploring innovative datapath configurations that extend beyond traditional design methodologies.

Specifically, LCMs bring a new dimension to datapath design evaluation, offering a comprehensive and accurate analysis that transcends traditional limitations. Traditional evaluation methods often rely on a constrained set of benchmarks, limiting the scope of assessment. In contrast, LCMs draw upon a broad and profound understanding of target applications, facilitating a more extensive evaluation of datapath designs. This broad perspective enables a "shift-left" in the evaluation process, providing early and insightful assessments that encompass not only architectural considerations but also subsequent stages like placement and routing. Such a shift enhances the overall efficiency and effectiveness of the evaluation process.

Moreover, LCMs' deep domain knowledge in both the logical functions and physical implementations of datapath circuits allows them to automatically pinpoint optimization bottlenecks. This capability not only accelerates the design optimization cycle but also furnishes designers with critical insights for further enhancements. By integrating LCMs into the datapath design process, engineers can achieve a level of optimization and efficiency previously unattainable, heralding a new era in the evolution of datapath circuits and their implementation in modern microarchitectures.

In summary, LCMs' detailed grasp of datapath complexities allows them to offer strategic recommendations that go beyond parameter adjustments, influencing the architectural framework of the circuit's RTL design. The ultimate goal is to utilize LCMs for the automated generation of circuit datapaths, tailored to specific Process Design Kits (PDKs) and targeted software applications, thereby revolutionizing the design process.

8.3 Large Circuit Models for Analog Circuits

Analog EDA shares similarities and differences with digital EDA. Like digital workflows, analog EDA encompasses front-end netlist design and back-end layout design, and analog LCMs likewise demand holistic solutions that span different design-flow stages. Conversely, analog circuits exhibit distinct data structures and performance evaluations compared to their digital counterparts, which are primarily logic-driven. In analog circuits, device-level topology and physical implementation are crucial: the sub-structure of transistors, capacitors, and resistors determines circuit functionality, making detailed graph structures (such as network motifs) and device parameters essential for analog LCMs to capture. Moreover, analog circuits span various circuit types and evaluation metrics, with different performance evaluations requiring specific circuit implementations.

Analog circuit design is an art that melds intricate knowledge of device physics with the subtleties of the intended application. LCMs for analog circuits must capture this depth of knowledge, translating it into models that can navigate the analog design space with its continuous variables and stringent performance metrics. These models could predict analog behavior from device-level up to system-level specifications, assist in layout generation, and automate the tedious tuning of analog parameters. By doing so, LCMs could drastically reduce design time and enhance the performance of analog circuits, which remain a bottleneck in mixed-signal chip design.

TAG [195] represents an early effort to develop a circuit representation model for analog EDA. It introduces a netlist embedding mechanism and a "pretrain-then-finetune" strategy to apply embedding vectors across various applications. However, it lacks a unified, aligned representation across all design stages, and its effectiveness is constrained by the initial pretraining target, layout distance. Its potential applications are therefore somewhat limited compared to a comprehensive LCM for analog circuits.

Fig. 7 presents our vision for future analog large circuit models. These models are inherently multimodal, capable of processing various data structures from different design stages. Text and graph structures can represent a netlist, while images may be used for layout designs. An analog LCM converts these inputs into vectors, mapping the designs to a unified embedding space. The generated circuit embedding vectors can then support various downstream tasks across different circuit types (such as amplifiers, PLLs, and ADCs), catering to a range of applications from topology design to routing.

9 CHALLENGES AND OPPORTUNITIES: THE DUAL EDGES

Embarking on the quest for LCMs unveils a realm filled with both challenges and opportunities. The journey is strewn with hurdles like data scarcity, scalability issues, and interoperability with existing EDA tools, yet each challenge surmounted paves the way for uncharted opportunities.

9.1 Data Issues

Data scarcity stands out as a critical hurdle, given the dependency of LCMs on extensive, high-quality datasets for training. The realm of circuit design, particularly at the granularity required for effective LCM operation, suffers from a lack of publicly available data, posing risks of overfitting and undermining the models' generalization capabilities. Tackling this issue head-on, we introduce three possible solutions.

First, innovative data augmentation techniques emerge as a key solution. For instance, equivalent circuits can be generated through systematic circuit transformations, effectively expanding the dataset without the need for additional real-world data. This approach not only enhances the diversity of training material but

also deepens the model's understanding of circuit variability and design principles.

Second, on the synthetic data front, leveraging LLMs to generate realistic RTL code presents an exciting opportunity. This strategy uses LLMs' advanced generative capabilities to create new RTL designs, which can then serve both as additional training data and as benchmarks for further refining the RTL generative models themselves. This creates a self-reinforcing loop in which LCMs continually improve through iterative training on both real and synthetically generated data. Such a mechanism not only addresses the issue of data scarcity but also contributes to the evolution of more sophisticated and capable generative models, marking a significant step towards fully realizing the transformative potential of LCMs in the EDA landscape.

Third, the development of community-driven platforms for data sharing and collaboration could significantly alleviate the scarcity issue. By fostering an ecosystem where academia and industry share circuit data and design challenges, the field can collectively advance the state of LCM research, ensuring a diverse and comprehensive dataset that mirrors the multifaceted nature of electronic design automation.

In essence, while data scarcity presents a formidable challenge, it also opens the door to a range of inventive strategies that not only address the immediate issue but also enrich the EDA domain. Through collaborative efforts, technological advancements, and a commitment to innovation, the potential of LCMs to revolutionize circuit design remains within reach.

9.2 Scalability and Interoperability

Scalability emerges as a pivotal challenge in the realm of LCMs, especially as we delve into the complex, vast-scale circuit designs that define the next generation of electronic devices. The quest for scalability is not just about accommodating larger designs but also about enhancing computational efficiency and sophistication in model architecture. This involves pioneering hierarchical modeling techniques that can intuitively decompose complex designs into manageable submodules, algorithmic optimizations that streamline model training and inference, and parallel processing strategies that distribute the computational workload effectively. Each of these advancements contributes to a robust foundation, equipping LCMs to tackle increasingly ambitious design projects while maintaining precision and efficiency.

Moreover, as LCMs grow in complexity and capability, ensuring their interoperability with the existing mosaic of EDA tools becomes paramount. The modern circuit design ecosystem is a tapestry of specialized design flows, tools, scripts, libraries, and technologies, each contributing to various stages of the design process. Bridging the gap between the innovative potential of LCMs and the established practices of current EDA workflows necessitates deeper collaboration between the AI research community and EDA professionals. Such collaboration aims to weave AI-driven methodologies seamlessly into the fabric of EDA, enhancing tool compatibility, data exchange protocols, and user interfaces. This symbiotic relationship stands not only to streamline the integration of LCMs into existing design pipelines but also to catalyze the mutual evolution of AI technologies and EDA tools and methodologies, heralding a new era of design automation that is both more intelligent and more intuitive.

9.3 New Opportunities

Beyond merely enhancing existing EDA tools, LCMs present the exciting prospect of birthing entirely new categories of EDA tools, ones that could fundamentally alter how design, verification, and optimization are approached.

One of the most promising opportunities presented by LCMs is the ability to conduct early-stage, precise PPA estimation. Traditionally, accurate PPA metrics could only be determined after substantial design progress, often at the post-synthesis or post-layout stages. LCMs, however, can predict these critical metrics much earlier in the design process by leveraging aligned representations among modalities. This capability allows for more informed decision-making at the outset of a project, guiding design choices in alignment with PPA objectives and significantly accelerating the optimization cycle. Early-stage PPA estimation not only enhances design efficiency but also enables a more agile response to evolving design requirements and constraints.

LCMs also enable a paradigm shift towards cross-stage verification, a holistic approach that transcends conventional, compartmentalized verification processes. Traditional EDA methodologies often treat verification as a stage-specific task, siloed within the design flow. LCMs, with their comprehensive understanding of circuit knowledge across various stages, instead facilitate a unified verification framework. Cross-stage verification can detect inconsistencies and errors early in the design process, reducing the iterative cycles typically required to rectify such issues. By leveraging the predictive power of LCMs, designers can ensure coherence and fidelity from the initial specifications to the final physical layouts, significantly streamlining the verification process.

Moreover, LCMs unlock the potential for generative design, particularly for well-structured circuits such as datapath units. Datapath units, with their regular structures and predictable performance metrics, are ideal candidates for LCM-driven generative design approaches. LCMs can generate optimal circuit configurations that meet specified criteria, exploring a vast design space that might be infeasible for human designers to cover comprehensively. This generative capability can lead to innovative circuit designs that optimize PPA metrics, potentially discovering novel architectural solutions that traditional design methodologies might overlook. Furthermore, generative design facilitated by LCMs can automate aspects of the design process for these structured circuits, reducing manual effort and enabling a focus on higher-level design challenges.

Finally, the synergy between large language models and LCMs presents a particularly promising area of exploration. LLMs, with their advanced natural language processing capabilities, can serve as intuitive, conversational interfaces for designers, translating high-level design specifications into actionable insights and suggestions, while LCMs, with their deep understanding of circuitry and design principles, can analyze and optimize the granular details of the netlist, ensuring that the final design aligns with the desired performance, power, and area constraints. This collaborative interaction between LLMs and LCMs allows for a seamless transition from abstract design concepts to concrete, optimized circuit representations. By bridging the gap between high-level design intent and detailed technical execution, this synergy enables a more holistic and integrated approach to circuit design.

In summary, the development of LCMs is fraught with challenges, yet each obstacle surmounted brings the EDA community one step closer to realizing the full potential of these innovative

models. The promise of LCMs to significantly streamline the design process, elevate design quality, and accelerate the development of cutting-edge electronic systems highlights the critical importance of addressing these challenges.

10 CONCLUSION

As we navigate the evolving landscape of AI-driven EDA, the potential of large circuit models emerges as a beacon of innovation, promising to redefine the paradigms of circuit design and analysis. Specifically, we advocate for a paradigm shift from task-oriented AI4EDA methodologies to more integrated, AI-native foundation models. LCMs stand at the crossroads of this transition, offering a holistic representation that encapsulates the multifaceted aspects of circuit design—from logical structuring to physical realization. The promise of LCMs lies in their ability to harness deep learning for capturing the intricate dependencies and characteristics of large-scale circuit netlists, thereby facilitating more efficient, accurate, and innovative design strategies.

Looking ahead, the journey toward fully realizing the potential of LCMs is laden with a vast array of research problems waiting to be addressed. From the refinement of representation learning techniques to accommodate the unique circuit characteristics at various design stages, to the development of scalable, effective alignment models capable of interpreting and optimizing complex netlists, the field is ripe for exploration.

In conclusion, the dawn of AI-native EDA heralded by LCMs presents a transformative vision for the future of circuit design and analysis. By embracing this new frontier, we stand to unlock unprecedented levels of efficiency, creativity, and precision in the creation of the next generation of electronic devices.

[49] W. Lyu, P. Xue, F. Yang, C. Yan, Z. Hong, X. Zeng, and D. Zhou, “An pp. 1–9.
efficient Bayesian optimization approach for automated optimization of [69] J. Zhao, T. Liang, S. Sinha, and W. Zhang, “Machine learning based
analog circuits,” IEEE Transactions on Circuits and Systems I, vol. 65, routing congestion prediction in fpga high-level synthesis,” in 2019
no. 6, pp. 1954–1967, 2018. Design, Automation & Test in Europe Conference & Exhibition (DATE).
[50] K. Zhu, H. Chen, M. Liu, X. Tang, N. Sun, and D. Z. Pan, “Effective IEEE, 2019, pp. 1130–1135.
analog/mixed-signal circuit placement considering system signal flow,” [70] Z. Lin, Z. Yuan, J. Zhao, W. Zhang, H. Wang, and Y. Tian, “PowerGear:
in IEEE/ACM International Conference on Computer-Aided Design Early-stage power estimation in FPGA HLS via heterogeneous edge-
(ICCAD), 2020. centric GNNs,” in IEEE/ACM Proceedings Design, Automation and Test
[51] H. Ren and J. Hu, Machine Learning Applications in Electronic Design in Eurpoe (DATE), 2022, pp. 1341–1346.
Automation. Springer, 2022. [71] H.-Y. Liu and L. P. Carloni, “On learning-based methods for design-space
[52] P. Joseph, K. Vaswani, and M. J. Thazhuthaveetil, “Construction and use exploration with high-level synthesis,” in Design automation conference
of linear regression models for processor performance analysis,” in IEEE (DAC), 2013.
International Symposium on High Performance Computer Architecture [72] P. Meng, A. Althoff, Q. Gautier, and R. Kastner, “Adaptive threshold
(HPCA), 2006. non-Pareto elimination: Re-thinking machine learning for system level
[53] C. Mendis, A. Renda, S. Amarasinghe, and M. Carbin, “Ithemal: design space exploration on FPGAs,” in IEEE/ACM Proceedings Design,
Accurate, portable and fast basic block throughput estimation using deep Automation and Test in Eurpoe (DATE), 2016, pp. 918–923.
neural networks,” in International Conference on Machine Learning [73] R. G. Kim, J. R. Doppa, and P. P. Pande, “Machine learning for design
(ICML), 2019. space exploration and optimization of manycore systems,” in IEEE/ACM
[54] J. Zhai, C. Bai, B. Zhu, Y. Cai, Q. Zhou, and B. Yu, “McPAT-Calib: International Conference on Computer-Aided Design (ICCAD), 2018,
A microarchitecture power modeling framework for modern CPUs,” pp. 1–6.
25

[74] A. Mahapatra and B. C. Schafer, “Machine-learning based simulated [96] Y. Katz, M. Rimon, A. Ziv, and G. Shaked, “Learning microarchitectural
annealer method for high level synthesis design space exploration,” in behaviors to improve stimuli generation quality,” in ACM/IEEE Design
Electronic System Level Synthesis Conference (ESLsyn), 2014, pp. 1–6. Automation Conference (DAC), 2011.
[75] Z. Wang and B. C. Schafer, “Machine leaming to set meta-heuristic [97] W. L. Neto, M. Austin, S. Temple, L. Amaru, X. Tang, and P.-E.
specific parameters for high-level synthesis design space exploration,” in Gaillardon, “Lsoracle: A logic synthesis framework driven by artificial
ACM/IEEE Design Automation Conference (DAC), 2020, pp. 1–6. intelligence,” in IEEE/ACM International Conference on Computer-
[76] Q. Sun, T. Chen, S. Liu, J. Chen, H. Yu, and B. Yu, “Correlated multi- Aided Design (ICCAD). IEEE, 2019, pp. 1–6.
objective multi-fidelity optimization for HLS directives design,” ACM [98] C. Yu, H. Xiao, and G. De Micheli, “Developing synthesis flows without
Transactions on Design Automation of Electronic Systems (TODAES), human knowledge,” in ACM/IEEE Design Automation Conference (DAC),
vol. 27, no. 4, pp. 1–27, 2022. 2018.
[77] Z. Yu, C. Bail, S. Hu, R. Chen, T. He, M. Yuan, B. Yu, and M. Wong, [99] C. Yu and W. Zhou, “Decision making in synthesis cross technologies
“IT-DSE: Invariance risk minimized transfer microarchitecture design using LSTMs and transfer learning,” in ACM/IEEE Workshop on Machine
space exploration,” in IEEE/ACM International Conference on Computer Learning for CAD (MLCAD), 2020, pp. 55–60.
Aided Design (ICCAD), 2023, pp. 1–9. [100] Z. Pei, F. Liu, Z. He, G. Chen, H. Zheng, K. Zhu, and B. Yu, “AlphaSyn:
[78] Q. Xiao, S. Zheng, B. Wu, P. Xu, X. Qian, and Y. Liang, “HASCO: Logic synthesis optimization with efficient monte carlo tree search,”
Towards agile hardware and software co-design for tensor computation,” in IEEE/ACM International Conference on Computer Aided Design
in IEEE/ACM International Symposium on Computer Architecture (ICCAD). IEEE, 2023, pp. 1–9.
(ISCA), 2021, pp. 1055–1068. [101] W. L. Neto, M. T. Moreira, Y. Li, L. Amarù, C. Yu, and P.-E. Gaillardon,
[79] C. Xu, C. Kjellqvist, and L. W. Wills, “SNS’s not a synthesizer: a “SLAP: A supervised learning approach for priority cuts technology
deep-learning-based synthesis predictor,” in International Symposium on mapping,” in ACM/IEEE Design Automation Conference (DAC), 2021,
Computer Architecture (ISCA), 2022. pp. 859–864.
[80] P. Sengupta, A. Tyagi, Y. Chen, and J. Hu, “How good is your Verilog [102] W. L. Neto, M. T. Moreira, L. Amaru, C. Yu, and P.-E. Gaillardon, “Read
RTL code? a quick answer from machine learning,” in Proceedings of the your circuit: leveraging word embedding to guide logic optimization,”
41st IEEE/ACM International Conference on Computer-Aided Design, in IEEE/ACM Asia and South Pacific Design Automation Conference
2022. (ASPDAC), 2021, pp. 530–535.
[81] C. Xu, P. Sharma, T. Wang, and L. W. Wills, “Fast, robust and transferable [103] Z. Xie, R. Liang, X. Xu, J. Hu, C.-C. Chang, J. Pan, and Y. Chen,
prediction for hardware logic synthesis,” in IEEE/ACM International “Preplacement net length and timing estimation by customized graph
Symposium on Microarchitecture, 2023, pp. 167–179. neural network,” IEEE Transactions on Computer-Aided Design of
[82] W. Fang, Y. Lu, S. Liu, Q. Zhang, C. Xu, L. W. Wills, H. Zhang, and Integrated Circuits and Systems (TCAD), vol. 41, no. 11, pp. 4667–4680,
Z. Xie, “MasterRTL: A pre-synthesis PPA estimation framework for 2022.
any RTL design,” in IEEE/ACM International Conference on Computer- [104] Y. Zhang, H. Ren, and B. Khailany, “GRANNITE: Graph neural network
Aided Design (ICCAD), 2023. inference for transferable power estimation,” in Design Automation
[83] D. S. Lopera and W. Ecker, “Applying GNNs to timing estimation Conference (DAC), 2020.
at RTL,” in IEEE/ACM International Conference on Computer-Aided [105] M. Rakesh, P. Das, A. Terkar, and A. Acharyya, “GRASPE: Accurate
Design (ICCAD), 2022. post-synthesis power estimation from RTL using graph representation
[84] N. Wu, J. Lee, Y. Xie, and C. Hao, “Lostin: Logic optimization via learning,” in IEEE International Symposium on Circuits and Systems
spatio-temporal information with hybrid graph models,” in International (ISCAS), 2023, pp. 1–5.
Conference on Application-specific Systems, Architectures and Proces- [106] S. Khan, Z. Shi, M. Li, and Q. Xu, “DeepSeq: Deep sequential circuit
sors (ASAP), 2022. learning,” arXiv preprint arXiv:2302.13608, 2023.
[85] Y. Zhou, H. Ren, Y. Zhang, B. Keller, B. Khailany, and Z. Zhang, “PRI- [107] S. D. Chowdhury, K. Yang, and P. Nuzzo, “ReIGNN: State register
MAL: Power inference using machine learning,” in Design Automation identification using graph neural networks for circuit reverse engineering,”
Conference (DAC), 2019. in IEEE/ACM International Conference on Computer-Aided Design
[86] D. Lee, L. K. John, and A. Gerstlauer, “Dynamic power and performance (ICCAD), 2021, pp. 1–9.
back-annotation for fast and accurate functional hardware simulation,” in [108] L. Alrahis, A. Sengupta, J. Knechtel, S. Patnaik, H. Saleh, B. Mohammad,
IEEE/ACM Proceedings Design, Automation and Test in Eurpoe (DATE), M. Al-Qutayri, and O. Sinanoglu, “GNN-RE: Graph neural networks
2015. for reverse engineering of gate-level netlists,” IEEE Transactions on
[87] A. K. A. Kumar and A. Gerstlauer, “Learning-based CPU power Computer-Aided Design of Integrated Circuits and Systems (TCAD),
modeling,” in ACM/IEEE Workshop on Machine Learning for CAD vol. 41, no. 8, pp. 2435–2448, 2021.
(MLCAD), 2019. [109] Z. He, Z. Wang, C. Bail, H. Yang, and B. Yu, “Graph learning-based
[88] Z. Xie, S. Li, M. Ma, C.-C. Chang, J. Pan, Y. Chen, and J. Hu, arithmetic block identification,” in IEEE/ACM International Conference
“DEEP: Developing extremely efficient runtime on-chip power meters,” On Computer Aided Design (ICCAD). IEEE, 2021, pp. 1–8.
in IEEE/ACM International Conference on Computer-Aided Design [110] N. Wu, Y. Li, C. Hao, S. Dai, C. Yu, and Y. Xie, “Gamora: Graph
(ICCAD), 2022. learning based symbolic reasoning for large-scale Boolean networks,” in
[89] D. Zoni, L. Cremona, and W. Fornaciari, “PowerProbe: Run-time ACM/IEEE Design Automation Conference (DAC), 2023.
power modeling through automatic RTL instrumentation,” in IEEE/ACM [111] S. Ward, D. Ding, and D. Z. Pan, “PADE: A high-performance
Proceedings Design, Automation and Test in Eurpoe (DATE), 2018. placer with automatic datapath extraction and evaluation through high
[90] D. J. Pagliari, V. Peluso, Y. Chen, A. Calimera, E. Macii, and M. Poncino, dimensional data learning,” in ACM/IEEE Design Automation Conference
“All-digital embedded meters for on-line power estimation,” in IEEE/ACM (DAC), 2012, pp. 756–761.
Proceedings Design, Automation and Test in Eurpoe (DATE), 2018. [112] Y. Lin, S. Dhar, W. Li, H. Ren, B. Khailany, and D. Z. Pan, “DREAM-
[91] Z. Xie, X. Xu, M. Walker, J. Knebel, K. Palaniswamy, N. Hebert, Place: Deep learning toolkit-enabled GPU acceleration for modern VLSI
J. Hu, H. Yang, Y. Chen, and S. Das, “APOLLO: An automated power placement,” in ACM/IEEE Design Automation Conference (DAC), 2019,
modeling framework for runtime power introspection in high-volume pp. 1–6.
commercial microprocessors,” in IEEE/ACM International Symposium [113] A. Agnesina, P. Rajvanshi, T. Yang, G. Pradipta, A. Jiao, B. Keller,
on Microarchitecture (MICRO), 2021. B. Khailany, and H. Ren, “AutoDMP: Automated DREAMPlace-based
[92] D. Kim, J. Zhao, J. Bachrach, and K. Asanović, “Simmani: Runtime macro placement,” in ACM International Symposium on Physical Design
power modeling for arbitrary RTL with automatic signal selection,” in (ISPD), 2023.
IEEE/ACM International Symposium on Microarchitecture (MICRO), [114] Z. Xie, Y.-H. Huang, G.-Q. Fang, H. Ren, S.-Y. Fang, Y. Chen, and
2019. J. Hu, “RouteNet: Routability prediction for mixed-size designs using
[93] J. Yang, L. Ma, K. Zhao, Y. Cai, and T.-F. Ngai, “Early stage real-time convolutional neural network,” in IEEE/ACM International Conference
SoC power estimation using RTL instrumentation,” in IEEE/ACM Asia on Computer-Aided Design (ICCAD), 2018.
and South Pacific Design Automation Conference (ASPDAC), 2015. [115] Y.-H. Huang, Z. Xie, G.-Q. Fang, T.-C. Yu, H. Ren, S.-Y. Fang, Y. Chen,
[94] S. Fine and A. Ziv, “Coverage directed test generation for functional and J. Hu, “Routability-driven macro placement with embedded CNN-
verification using bayesian networks,” in ACM/IEEE Design Automation based prediction model,” in IEEE/ACM Proceedings Design, Automation
Conference (DAC), 2003. and Test in Eurpoe (DATE), 2019.
[95] S. Vasudevan, W. J. Jiang, D. Bieber, R. Singh, C. R. Ho, C. Sutton [116] C.-C. Chang, J. Pan, T. Zhang, Z. Xie, J. Hu, W. Qi, C. Lin, R. Liang,
et al., “Learning semantic representations to verify hardware designs,” J. Mitra, E. Fallon, and Y. Chen, “Automatic routability predictor devel-
Advances in Neural Information Processing Systems (NeurIPS), vol. 34, opment using neural architecture search,” in IEEE/ACM International
pp. 23 491–23 504, 2021. Conference on Computer-Aided Design (ICCAD), 2021.
26

[117] J. Pan, C.-C. Chang, Z. Xie, A. Li, M. Tang, T. Zhang, J. Hu, and [138] Y. Watanabe, T. Kimura, T. Matsunawa, and S. Nojima, “Accurate
Y. Chen, “Towards collaborative intelligence: Routability estimation lithography simulation model based on convolutional neural networks,”
based on decentralized private data,” in ACM/IEEE Design Automation in Optical Microlithography XXX, vol. 10147. SPIE, 2017, pp. 137–145.
Conference (DAC), 2022. [139] W. Ye, M. B. Alawieh, Y. Lin, and D. Z. Pan, “LithoGAN: End-to-
[118] S. Zheng, L. Zou, P. Xu, S. Liu, B. Yu, and M. Wong, “Lay-Net: Grafting end lithography modeling with generative adversarial networks,” in
netlist knowledge on layout-based congestion prediction,” in IEEE/ACM ACM/IEEE Design Automation Conference (DAC), 2019.
International Conference on Computer Aided Design (ICCAD), 2023, [140] Y. Lin, M. Li, Y. Watanabe, T. Kimura, T. Matsunawa, S. Nojima, and
pp. 1–9. D. Z. Pan, “Data efficient lithography modeling with transfer learning and
[119] S. Liu, Q. Sun, P. Liao, Y. Lin, and B. Yu, “Global placement with active data selection,” IEEE Transactions on Computer-Aided Design of
deep learning-enabled explicit routability optimization,” in IEEE/ACM Integrated Circuits and Systems (TCAD), vol. 38, no. 10, pp. 1900–1913,
Proceedings Design, Automation and Test in Eurpoe (DATE), 2021, pp. 2018.
1821–1824. [141] G. Chen, Z. Pei, H. Yang, Y. Ma, B. Yu, and M. Wong, “Physics-
[120] J. Chen, J. Kuang, G. Zhao, D. J.-H. Huang, and E. F. Young, “PROS: informed optical kernel regression using complex-valued neural fields,”
A plug-in for routability optimization applied in the state-of-the-art in ACM/IEEE Design Automation Conference (DAC), 2023, pp. 1–6.
commercial EDA tool using deep learning,” in IEEE/ACM International [142] H. Yang, L. Luo, J. Su, C. Lin, and B. Yu, “Imbalance aware
Conference on Computer-Aided Design (ICCAD), 2020. lithography hotspot detection: a deep learning approach,” Journal of
[121] S. Zheng, L. Zou, S. Liu, Y. Lin, B. Yu, and M. Wong, “Mitigating Micro/Nanolithography, MEMS, and MOEMS, vol. 16, no. 3, pp. 033 504–
distribution shift for congestion optimization in global placement,” in 033 504, 2017.
ACM/IEEE Design Automation Conference (DAC), 2023, pp. 1–6. [143] J. Chen, Y. Lin, Y. Guo, M. Zhang, M. B. Alawieh, and D. Z.
[122] E. C. Barboza, N. Shukla, Y. Chen, and J. Hu, “Machine learning-based Pan, “Lithography hotspot detection using a double inception module
pre-routing timing prediction with reduced pessimism,” in ACM/IEEE architecture,” Journal of Micro/Nanolithography, MEMS, and MOEMS,
Design Automation Conference (DAC), 2019. vol. 18, no. 1, pp. 013 507–013 507, 2019.
[123] X. He, Z. Fu, Y. Wang, C. Liu, and Y. Guo, “Accurate timing prediction [144] Y. Jiang, F. Yang, B. Yu, D. Zhou, and X. Zeng, “Efficient layout
at placement stage with look-ahead RC network,” in ACM/IEEE Design hotspot detection via binarized residual neural network ensemble,” IEEE
Automation Conference (DAC), 2022, pp. 1213–1218. Transactions on Computer-Aided Design of Integrated Circuits and
[124] P. Cao, G. He, and T. Yang, “TF-Predictor: Transformer-based pre- Systems (TCAD), vol. 40, no. 7, pp. 1476–1488, 2020.
routing path delay prediction framework,” IEEE Transactions on [145] A. Ciccazzo, G. Di Pillo, and V. Latorre, “A SVM surrogate model-
Computer-Aided Design of Integrated Circuits and Systems (TCAD), based method for parametric yield optimization,” IEEE Transactions
no. 99, pp. 1–1, 2022. on Computer-Aided Design of Integrated Circuits and Systems (TCAD),
[125] Z. Guo, M. Liu, J. Gu, S. Zhang, D. Z. Pan, and Y. Lin, “A timing engine vol. 35, no. 7, pp. 1224–1228, 2015.
inspired graph neural network model for pre-routing slack prediction,” in [146] K. Nakata, R. Orihara, Y. Mizuoka, and K. Takagi, “A comprehensive big-
ACM/IEEE Design Automation Conference (DAC), 2022, pp. 1207–1212. data-based monitoring system for yield enhancement in semiconductor
[126] Z. Wang, S. Liu, Y. Pu, S. Chen, T.-Y. Ho, and B. Yu, “Restructure- manufacturing,” IEEE Transactions on Semiconductor Manufacturing
tolerant timing prediction via multimodal fusion,” in ACM/IEEE Design (TSM), vol. 30, no. 4, pp. 339–344, 2017.
Automation Conference (DAC), 2023, pp. 1–6. [147] M. B. Alawieh, D. Boning, and D. Z. Pan, “Wafer map defect patterns
[127] R. Liang, Z. Xie, J. Jung, V. Chauha, Y. Chen, J. Hu, H. Xiang, and G.-J. classification using deep selective learning,” in ACM/IEEE Design
Nam, “Routing-free crosstalk prediction,” in IEEE/ACM International Automation Conference (DAC), 2020, pp. 1–6.
Conference on Computer-Aided Design (ICCAD), 2020. [148] J. Kwon, M. M. Ziegler, and L. P. Carloni, “A learning-based rec-
[128] S. Liu, Z. Wang, F. Liu, Y. Lin, B. Yu, and M. Wong, “Concurrent sign- ommender system for autotuning design fiows of industrial high-
off timing optimization via deep steiner points refinement,” in ACM/IEEE performance processors,” in ACM/IEEE Design Automation Conference
Design Automation Conference (DAC), 2023, pp. 1–6. (DAC), 2019.
[129] A. B. Kahng, U. Mallappa, and L. Saul, “Using machine learning to [149] Z. Xie, G.-Q. Fang, Y.-H. Huang, H. Ren, Y. Zhang, B. Khailany, S.-Y.
predict path-based slack from graph-based timing analysis,” in IEEE Fang, J. Hu, Y. Chen, and E. C. Barboza, “FIST: A feature-importance
International Conference on Computer Design (ICCD), 2018, pp. 603– sampling and tree-based method for automatic design flow parameter
612. tuning,” in IEEE/ACM Asia and South Pacific Design Automation
[130] Y. Ye, T. Chen, Y. Gao, H. Yan, B. Yu, and L. Shi, “Graph-learning-driven Conference (ASPDAC), 2020.
path-based timing analysis results predictor from graph-based timing [150] H. Geng, T. Chen, Y. Ma, B. Zhu, and B. Yu, “PTPT: physical design
analysis,” in IEEE/ACM Asia and South Pacific Design Automation tool parameter tuning via multi-objective bayesian optimization,” IEEE
Conference (ASPDAC), 2023, pp. 547–552. Transactions on Computer-Aided Design of Integrated Circuits and
[131] C.-T. Ho and A. B. Kahng, “IncPIRD: Fast learning-based prediction Systems (TCAD), vol. 42, no. 1, pp. 178–189, 2022.
of incremental IR drop,” in IEEE/ACM International Conference on [151] M. Cho, K. Yuan, Y. Ban, and D. Z. Pan, “Eliad: Efficient lithography
Computer-Aided Design (ICCAD), 2019. aware detailed routing algorithm with compact and macro post-opc
[132] C.-H. Pao, A.-Y. Su, and Y.-M. Lee, “XGBIR: An XGBoost-based IR printability prediction,” IEEE Transactions on Computer-Aided Design
drop predictor for power delivery network,” in IEEE/ACM Proceedings of Integrated Circuits and Systems, vol. 28, no. 7, pp. 1006–1016, 2009.
Design, Automation and Test in Eurpoe (DATE), 2020, pp. 1307–1310. [152] Synopsys, “Synopsys. ai unveiled as industry’s
[133] Y.-C. Fang, H.-Y. Lin, M.-Y. Sui, C.-M. Li, and E. J.-W. Fang, “Machine- first full-stack, ai-driven eda suite for chipmakers,”
learning-based dynamic IR drop prediction for ECO,” in IEEE/ACM 2023. [Online]. Available: https://news.synopsys.com/
International Conference on Computer-Aided Design (ICCAD), 2018, 2023-03-29-Synopsys-ai-Unveiled-as-Industrys-First-Full-Stack,
pp. 1–7. -AI-Driven-EDA-Suite-for-Chipmakers
[134] M. B. Alawieh, Y. Lin, Z. Zhang, M. Li, Q. Huang, and D. Z. Pan, [153] G. Liu and Z. Zhang, “PIMap: A flexible framework for improving
“GAN-SRAF: subresolution assist feature generation using generative LUT-based technology mapping via parallelized iterative optimization,”
adversarial networks,” IEEE Transactions on Computer-Aided Design ACM Transactions on Reconfigurable Technology and Systems (TRETS),
of Integrated Circuits and Systems (TCAD), vol. 40, no. 2, pp. 373–385, vol. 11, no. 4, pp. 1–23, 2019.
2020. [154] C. Yu, “FlowTune: Practical multi-armed bandits in Boolean optimiza-
[135] H. Yang, S. Li, Z. Deng, Y. Ma, B. Yu, and E. F. Young, “GAN-OPC: tion,” in IEEE/ACM International Conference on Computer-Aided Design
Mask optimization with lithography-guided generative adversarial nets,” (ICCAD), 2020, pp. 1–9.
IEEE Transactions on Computer-Aided Design of Integrated Circuits [155] K. Zhu, M. Liu, H. Chen, Z. Zhao, and D. Z. Pan, “Exploring logic
and Systems (TCAD), vol. 39, no. 10, pp. 2822–2834, 2020. optimizations with reinforcement learning and graph convolutional
[136] G. Chen, Z. Yu, H. Liu, Y. Ma, and B. Yu, “DevelSet: Deep neural level network,” in ACM/IEEE Workshop on Machine Learning for CAD
set for instant mask optimization,” IEEE Transactions on Computer- (MLCAD), 2020, pp. 145–150.
Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 12, [156] A. Hosny, S. Hashemi, M. Shalan, and S. Reda, “DRiLLS: Deep
pp. 5020–5033, 2023. reinforcement learning for logic synthesis,” in IEEE/ACM Asia and
[137] B. Zhu, S. Zheng, Z. Yu, G. Chen, Y. Ma, F. Yang, B. Yu, and M. D. South Pacific Design Automation Conference (ASP-DAC), 2020, pp.
Wong, “L2O-ILT: Learning to optimize inverse lithography techniques,” 581–586.
IEEE Transactions on Computer-Aided Design of Integrated Circuits [157] Y. V. Peruvemba, S. Rai, K. Ahuja, and A. Kumar, “RL-guided runtime-
and Systems (TCAD), 2023. constrained heuristic exploration for logic synthesis,” in IEEE/ACM
International Conference On Computer Aided Design (ICCAD), 2021,
pp. 1–9.
27

[158] W. Haaswijk, E. Collins, B. Seguin, M. Soeken, F. Kaplan, S. Süsstrunk, [179] Y. Zhang, H.-L. Zhen, Z. Pei, Y. Lian, L. Yin, M. Yuan, and B. Yu, “Sola:
and G. De Micheli, “Deep learning for logic optimization algorithms,” in Solver-layer adaption of llm for better logic reasoning,” arXiv preprint
IEEE International Symposium on Circuits and Systems (ISCAS), 2018, arXiv:2402.11903, 2024.
pp. 1–4. [180] B. Ahmad, S. Thakur, B. Tan, R. Karri, and H. Pearce, “Fixing
[159] X. Timoneda and L. Cavigelli, “Late breaking results: Reinforcement hardware security bugs with large language models,” arXiv preprint
learning for scalable logic optimization with graph neural networks,” in arXiv:2302.01215, 2023.
ACM/IEEE Design Automation Conference (DAC), 2021, pp. 1378–1379. [181] M. Nair, R. Sadhukhan, and D. Mukhopadhyay, “Generating
[160] A. Mirhoseini, A. Goldie, M. Yazgan, J. W. Jiang, E. Songhori, S. Wang, secure hardware using ChatGPT resistant to CWEs,” Cryptology
Y.-J. Lee, E. Johnson, O. Pathak, A. Nazi et al., “A graph placement ePrint Archive, Paper 2023/212, 2023. [Online]. Available: https:
methodology for fast chip design,” Nature, vol. 594, pp. 207–212, 2021. //eprint.iacr.org/2023/212
[161] Q. Xu, H. Geng, S. Chen, B. Yuan, C. Zhuo, Y. Kang, and X. Wen, “Good- [182] R. Kande, H. Pearce, B. Tan, B. Dolan-Gavitt, S. Thakur, R. Karri, and
Floorplan: Graph convolutional network and reinforcement learning- J. Rajendran, “LLM-assisted generation of hardware assertions,” 2023.
based floorplanning,” IEEE Transactions on Computer-Aided Design of [183] Z. He, H. Wu, X. Zhang, X. Yao, S. Zheng, H. Zheng, and B. Yu,
Integrated Circuits and Systems (TCAD), vol. 41, no. 10, pp. 3492–3502, “ChatEDA: A large language model powered autonomous agent for
2021. EDA,” 2023.
[162] A. Agnesina, K. Chang, and S. K. Lim, “VLSI placement parameter opti- [184] Y. Fu, Y. Zhang, Z. Yu, S. Li, Z. Ye, C. Li, C. Wan, and Y. C.
mization using deep reinforcement learning,” in IEEE/ACM International Lin, “GPT4AIGChip: Towards next-generation AI accelerator design
Conference on Computer-Aided Design, 2020, pp. 1–9. automation via large language models,” in IEEE/ACM International
[163] Y.-C. Lu, S. Nath, V. Khandelwal, and S. K. Lim, “RL-Sizer: VLSI gate Conference on Computer Aided Design (ICCAD), 2023, pp. 1–9.
sizing for timing optimization using deep reinforcement learning,” in [185] Z. Yan, Y. Qin, X. S. Hu, and Y. Shi, “On the viability of using LLMs for
ACM/IEEE Design Automation Conference (DAC), 2021, pp. 733–738. SW/HW co-design: An example in designing CiM DNN accelerators,”
[164] Y.-C. Lu, W.-T. Chan, D. Guo, S. Kundu, V. Khandelwal, and S. K. arXiv preprint arXiv:2306.06923, 2023.
Lim, “RL-CCD: Concurrent clock and data optimization using attention- [186] Z. Liang, J. Cheng, R. Yang, H. Ren, Z. Song, D. Wu, X. Qian, T. Li, and
based self-supervised reinforcement learning,” in ACM/IEEE Design Y. Shi, “Unleashing the potential of LLMs for quantum computing: A
Automation Conference (DAC), 2023, pp. 1–6. study in quantum architecture design,” arXiv preprint arXiv:2307.08191,
[165] X. Liang, Y. Ouyang, H. Yang, B. Yu, and Y. Ma, “RL-OPC: Mask 2023.
optimization with deep reinforcement learning,” IEEE Transactions on [187] M. Li, W. Fang, Q. Zhang, and Z. Xie, “SpecLLM: Exploring generation
Computer-Aided Design of Integrated Circuits and Systems (TCAD), and review of VLSI design specification with large language model,”
vol. 43, no. 1, pp. 340–351, 2024. arXiv preprint arXiv:2401.13266, 2024.
[166] Y.-C. Lu, J. Lee, A. Agnesina, K. Samadi, and S. K. Lim, “GAN- [188] H. Ren and M. Fojtik, “Invited- NVCell: Standard cell layout in advanced
CTS: A generative adversarial framework for clock tree prediction and technology nodes with reinforcement learning,” in ACM/IEEE Design
optimization,” in IEEE/ACM International Conference on Computer- Automation Conference (DAC), 2021, pp. 1291–1294.
Aided Design (ICCAD), 2019. [189] ——, “Standard cell routing with reinforcement learning and genetic
[167] Y. Lu, S. Liu, Q. Zhang, and Z. Xie, “RTLLM: An open-source algorithm in advanced technology nodes,” in IEEE/ACM Asia and South
benchmark for design RTL generation with large language model,” arXiv Pacific Design Automation Conference (ASPDAC), 2021, pp. 684–689.
preprint arXiv:2308.05345, 2023. [190] A. C.-W. Liang, C. H.-P. Wen, and H.-M. Huang, “A general and
[168] M. Liu, N. Pinckney, B. Khailany, and H. Ren, “VerilogEval: evaluating automatic cell layout generation framework with implicit learning on
large language models for Verilog code generation,” in IEEE/ACM design rules,” IEEE Transactions on Very Large Scale Integration Systems
International Conference on Computer-Aided Design (ICCAD), 2023. (TVLSI), vol. 30, no. 9, pp. 1341–1354, 2022.
[169] X. Liang, “Hardware descriptions code completion based on a pre- [191] S. Roy, Y. Ma, J. Miao, and B. Yu, “A learning bridge from architec-
training model,” in IEEE Conference on Telecommunications, Optics tural synthesis to physical design for exploring power efficient high-
and Computer Science (TOCS), 2021, pp. 228–232. performance adders,” in IEEE/ACM International Symposium on Low
[170] K. Chang, Y. Wang, H. Ren, M. Wang, S. Liang, Y. Han, H. Li, and Power Electronics and Design (ISLPED), Jul. 2017, pp. 1–6.
X. Li, “ChipGPT: How far are we from natural language hardware [192] H. Geng, Y. Ma, Q. Xu, J. Miao, S. Roy, and B. Yu, “High-speed adder
design,” arXiv preprint arXiv:2305.14019, 2023. design space exploration via graph neural processes,” IEEE Transactions
[171] S. Thakur, J. Blocklove, H. Pearce, B. Tan, S. Garg, and R. Karri, on Computer-Aided Design of Integrated Circuits and Systems (TCAD),
“AutoChip: Automating HDL generation using LLM feedback,” arXiv vol. 41, no. 8, pp. 2657–2670, Aug. 2022.
preprint arXiv:2311.04887, 2023. [193] J. Cheng, Y. Xiao, Y. Shao, G. Dong, S. Lyu, and W. Yu, “Machine-
[172] J. Blocklove, S. Garg, R. Karri, and H. Pearce, “Chip-Chat: Challenges learning-driven architectural selection of adders and multipliers in logic
and opportunities in conversational hardware design,” in ACM/IEEE 5th synthesis,” ACM Transactions on Design Automation of Electronic
Workshop on Machine Learning for CAD (MLCAD), Sep. 2023. Systems (TODAES), vol. 28, no. 2, pp. 20:1–20:16, Mar. 2023.
[173] M. Liu, T.-D. Ene, R. Kirby, C. Cheng, N. Pinckney, R. Liang, J. Alben, [194] D. Zuo, Y. Ouyang, and Y. Ma, “RL-MUL: Multiplier design optimiza-
H. Anand, S. Banerjee, I. Bayraktaroglu, B. Bhaskaran, B. Catanzaro, tion with deep reinforcement learning,” in ACM/IEEE Design Automation
A. Chaudhuri, S. Clay, B. Dally, L. Dang, P. Deshpande, S. Dhodhi, Conference (DAC), 2023, pp. 1–6.
S. Halepete, E. Hill, J. Hu, S. Jain, B. Khailany, G. Kokai, K. Kunal, [195] K. Zhu, H. Chen, W. J. Turner, G. F. Kokai, P.-H. Wei, D. Z. Pan,
X. Li, C. Lind, H. Liu, S. Oberman, S. Omar, S. Pratty, J. Raiman, and H. Ren, “TAG: Learning circuit spatial embedding from layouts,”
A. Sarkar, Z. Shao, H. Sun, P. P. Suthar, V. Tej, W. Turner, K. Xu, and H. Ren, "ChipNeMo: Domain-adapted LLMs for chip design," arXiv preprint arXiv:2311.00176, 2023.
[174] S. Liu, W. Fang, Y. Lu, Q. Zhang, H. Zhang, and Z. Xie, "RTLCoder: Outperforming GPT-3.5 in design RTL generation with our open-source dataset and lightweight solution," arXiv preprint arXiv:2312.08617, 2023.
[175] Z. Pei, H.-L. Zhen, M. Yuan, Y. Huang, and B. Yu, "BetterV: Controlled Verilog generation with discriminative guidance," arXiv preprint arXiv:2402.03375, 2024.
[176] M. Orenes-Vera, M. Martonosi, and D. Wentzlaff, "Using LLMs to facilitate formal verification of RTL," arXiv preprint arXiv:2309.09437, 2023.
[177] C. Sun, C. Hahn, and C. Trippel, "Towards improving verification productivity with circuit-aware translation of natural language to SystemVerilog assertions," in First International Workshop on Deep Learning-aided Verification (DAV), 2023.
[178] W. Fang, M. Li, M. Li, Z. Yan, S. Liu, H. Zhang, and Z. Xie, "AssertLLM: Generating and evaluating hardware verification assertions from design specifications via multi-LLMs," arXiv preprint arXiv:2402.00386, 2024.
in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2022.
[196] J. Lu, L. Lei, F. Yang, L. Shang, and X. Zeng, "Topology optimization of operational amplifier in continuous space via graph embedding," in IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), 2022, pp. 142–147.
[197] S. Fan, N. Cao, S. Zhang, J. Li, X. Guo, and X. Zhang, "From specification to topology: Automatic power converter design via reinforcement learning," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2021.
[198] Z. Zhao, J. Luo, J. Liu, and L. Zhang, "Signal-division-aware analog circuit topology synthesis aided by transfer learning," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 11, pp. 3481–3490, 2023.
[199] S. Poddar, A. Budak, L. Zhao, C.-H. Hsu, S. Maji, K. Zhu, Y. Jia, and D. Z. Pan, "A data-driven analog circuit synthesizer with automatic topology selection and sizing," in IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), 2024.
[200] J. Lu, Y. Li, F. Yang, L. Shang, and X. Zeng, "High-level topology synthesis method for ∆-Σ modulators via bi-level Bayesian optimization," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 12, pp. 4389–4393, 2023.
[201] M. Fayazi, M. T. Taba, E. Afshari, and R. Dreslinski, "Angel: Fully-automated analog circuit generator using a neural network assisted semi-supervised learning approach," IEEE Transactions on Circuits and Systems I, vol. 70, no. 11, pp. 4516–4529, 2023.
[202] K. Hakhamaneshi, M. Nassar, M. Phielipp, P. Abbeel, and V. Stojanovic, "Pretraining graph neural networks for few-shot analog circuit modeling and design," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 42, no. 7, pp. 2163–2173, 2023.
[203] A. Budak, M. Gandara, W. Shi, D. Pan, N. Sun, and B. Liu, "An efficient analog circuit sizing method based on machine learning assisted global optimization," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 41, no. 5, pp. 1209–1221, 2022.
[204] H. Wang, K. Wang, J. Yang, L. Shen, N. Sun, H.-S. Lee, and S. Han, "GCN-RL circuit designer: Transferable transistor sizing with graph neural networks and reinforcement learning," in ACM/IEEE Design Automation Conference (DAC), 2020.
[205] A. Zhao, X. Wang, Z. Lin, Z. Bi, X. Li, C. Yan, F. Yang, L. Shang, D. Zhou, and X. Zeng, "cVTS: A constrained Voronoi tree search method for high dimensional analog circuit synthesis," in ACM/IEEE Design Automation Conference (DAC), 2023, pp. 1–6.
[206] A. F. Budak, S. Zhang, M. Liu, W. Shi, K. Zhu, and D. Z. Pan, Machine Learning for Analog Circuit Sizing. Cham: Springer International Publishing, 2022, pp. 307–335.
[207] S. M. Burns, H. Chen, T. Dhar, R. Harjani, J. Hu, N. Karmokar, K. Kunal, Y. Li, Y. Lin, M. Liu, M. Madhusudan, P. Mukherjee, D. Z. Pan, J. Poojary, S. Ramprasath, S. S. Sapatnekar, A. K. Sharma, W. Xu, S. Yaldiz, and K. Zhu, Machine Learning for Analog Layout. Cham: Springer International Publishing, 2022, pp. 505–544.
[208] K. Kunal, P. Poojary, T. Dhar, M. Madhusudan, R. Harjani, and S. Sapatnekar, "A general approach for identifying hierarchical symmetry constraints for analog circuit layout," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2020.
[209] K. Zhu, H. Chen, M. Liu, and D. Z. Pan, "Automating analog constraint extraction: From heuristics to learning (invited paper)," in IEEE/ACM Asia and South Pacific Design Automation Conference (ASPDAC), 2022, pp. 108–113.
[210] K. Zhu, M. Liu, Y. Lin, B. Xu, S. Li, X. Tang, N. Sun, and D. Z. Pan, "GeniusRoute: A new analog routing paradigm using generative neural network guidance," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019.
[211] B. Xu, Y. Lin, X. Tang, S. Li, L. Shen, N. Sun, and D. Z. Pan, "WellGAN: Generative-adversarial-network-guided well generation for analog/mixed-signal circuit layout," in ACM/IEEE Design Automation Conference (DAC), 2019, pp. 1–6.
[212] A. Gusmão, N. Horta, N. Lourenço, and R. Martins, "Late breaking results: Attention in Graph2Seq neural networks towards push-button analog IC placement," in ACM/IEEE Design Automation Conference (DAC), 2021, pp. 1360–1361.
[213] P.-C. Wang, M. P.-H. Lin, C.-N. J. Liu, and H.-M. Chen, "Layout synthesis of analog primitive cells with variational autoencoder," in International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design (SMACD), 2023.
[214] M. Liu, K. Zhu, J. Gu, L. Shen, X. Tang, N. Sun, and D. Z. Pan, "Towards decrypting the art of analog layout: Placement quality prediction via transfer learning," in IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), 2020, pp. 496–501.
[215] Y. Lin, Y. Li, D. Fang, M. Madhusudan, S. S. Sapatnekar, R. Harjani, and J. Hu, "Are analytical techniques worthwhile for analog IC placement?" in IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), 2022, pp. 154–159.
[216] P. Xu, J. Li, T.-Y. Ho, B. Yu, and K. Zhu, "Performance-driven analog layout automation: Current status and future directions," in IEEE/ACM Asia and South Pacific Design Automation Conference (ASPDAC), 2024.
[217] H. Ren, G. F. Kokai, W. J. Turner, and T.-S. Ku, "ParaGraph: Layout parasitics and device parameter prediction using graph neural networks," in ACM/IEEE Design Automation Conference (DAC), 2020.
[218] Q. Zhang, S. Su, J. Liu, and M. S.-W. Chen, "CEPA: CNN-based early performance assertion scheme for analog and mixed-signal circuit simulation," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2020.
[219] K. Hakhamaneshi, N. Werblun, P. Abbeel, and V. Stojanović, "BagNet: Berkeley analog generator with layout optimizer boosted with deep neural networks," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019.
[220] Y. Fang, Z. Liu, Y. Lu, J. Liu, J. Li, Y. Jin, J. Chen, Y. Chen, H. Zheng, and Y. Xie, "NPS: A framework for accurate program sampling using graph neural network," arXiv preprint arXiv:2304.08880, 2023.
[221] L. Li, T. Flynn, and A. Hoisie, "Learning independent program and architecture representations for generalizable performance modeling," arXiv preprint arXiv:2310.16792, 2023.
[222] X. Yi, J. Lu, X. Xiong, D. Xu, L. Shang, and F. Yang, "Graph representation learning for microarchitecture design space exploration," in ACM/IEEE Design Automation Conference (DAC), 2023, pp. 1–6.
[223] C. Sakhuja, Z. Shi, and C. Lin, "Leveraging domain information for the efficient automated design of deep learning accelerators," in IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023, pp. 287–301.
[224] M. Li, Z. Shi, Q. Lai, S. Khan, S. Cai, and Q. Xu, "On EDA-driven learning for SAT solving," in ACM/IEEE Design Automation Conference (DAC), 2023, pp. 1–6.
[225] Z. Shi, M. Li, S. Khan, L. Wang, N. Wang, Y. Huang, and Q. Xu, "DeepTPI: Test point insertion with deep reinforcement learning," in IEEE International Test Conference (ITC), 2022, pp. 194–203.
[226] Z. Wang, C. Bai, Z. He, G. Zhang, Q. Xu, T.-Y. Ho, B. Yu, and Y. Huang, "Functionality matters in netlist representation learning," in ACM/IEEE Design Automation Conference (DAC), 2022, pp. 61–66.
[227] Z. Xie, H. Ren, B. Khailany, Y. Sheng, S. Santosh, J. Hu, and Y. Chen, "PowerNet: Transferable dynamic IR drop estimation via maximum convolutional neural network," in IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC), 2020.
[228] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "CodeBERT: A pre-trained model for programming and natural languages," arXiv preprint arXiv:2002.08155, 2020.
[229] M. Orenes-Vera, M. Martonosi, and D. Wentzlaff, "From RTL to SVA: LLM-assisted generation of formal verification testbenches," arXiv preprint arXiv:2309.09437, 2023.
[230] N. Eén, A. Mishchenko, and N. Sörensson, "Applying logic synthesis for speeding up SAT," in Theory and Applications of Satisfiability Testing. Springer, 2007, pp. 272–286.
[231] N. Sorensson and N. Een, "MiniSat v1.13 - a SAT solver with conflict-clause minimization," SAT, vol. 2005, no. 53, pp. 1–2, 2005.
[232] A. Fleury and M. Heisinger, "CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling entering the SAT Competition 2020," SAT Competition, vol. 2020, p. 50, 2020.
[233] Cadence, "Conformal Smart LEC," 2022. [Online]. Available: https://www.cadence.com/en_US/home/resources/datasheets/conformal-smart-lec-ds.html
[234] S. Zou, J. Zhang, B. Shi, and G. Luo, "BESWAC: Boosting exact synthesis via wiser SAT solver call," in IEEE/ACM Proceedings Design, Automation and Test in Europe (DATE), 2024.
[235] Z. Chen, X. Zhang, Y. Qian, Q. Xu, and S. Cai, "Integrating exact simulation into sweeping for datapath combinational equivalence checking," in IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2023, pp. 1–9.
[236] L. Liu, B. Fu, M. D. F. Wong, and E. F. Y. Young, "Xplace: An extremely fast and extensible global placement framework," in ACM/IEEE Design Automation Conference (DAC), 2022.
[237] B. Wang, G. Shen, D. Li, J. Hao, W. Liu, Y. Huang, H. Wu, Y. Lin, G. Chen, and P. A. Heng, "LHNN: Lattice hypergraph neural network for VLSI congestion prediction," in ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, July 2022.
[238] Y. Pu, C. Shi, G. Samson, D. Park, K. Easton, R. Beraha, A. Newham, M. Lin, V. Rangan, K. Chatha, D. Butterfield, and R. Attar, "A 9-mm² ultra-low-power highly integrated 28-nm CMOS SoC for Internet of Things," IEEE Journal of Solid-State Circuits, vol. 53, no. 3, pp. 936–948, 2018.
[239] S. Jain, S. Khare, S. Yada, V. Ambili, P. Salihundam, S. Ramani, S. Muthukumar, M. Srinivasan, A. Kumar, S. K. Gb, R. Ramanarayanan, V. Erraguntla, J. Howard, S. Vangal, S. Dighe, G. Ruhl, P. Aseron, H. Wilson, N. Borkar, V. De, and S. Borkar, "A 280mV-to-1.2V wide-operating-range IA-32 processor in 32nm CMOS," in IEEE International Solid-State Circuits Conference (ISSCC), 2012, pp. 66–68.
[240] F. Klemme and H. Amrouch, "Efficient learning strategies for machine learning-based characterization of aging-aware cell libraries," IEEE Transactions on Circuits and Systems I, vol. 69, no. 12, pp. 5233–5246, 2022.
[241] Mentor, a Siemens Business, Solido Characterization Suite, 2023. [Online]. Available: https://eda.sw.siemens.com/en-US/ic/solido/
