Machine Learning for CAD/EDA: The Road Ahead
Andrew B. Kahng
Computer Science and Engineering Department
Electrical and Computer Engineering Department
University of California at San Diego
La Jolla, CA 92093 USA
Digital Object Identifier 10.1109/MDAT.2022.3161593
Date of publication: 13 July 2022; date of current version: 20 January 2023.

Editor’s notes:
This keynote article sketches out a vision for the development of machine learning (ML) for computer-aided design (CAD): from today’s ML-assisted CAD (MLCAD) to tomorrow’s ML-based design automation (MLDA).
—Ulf Schlichtmann, Technical University of Munich

The road ahead for machine learning (ML) and computer-aided design (CAD)/EDA is built from three elements—learning, optimization, and CAD itself. Learning is the improvement of a computer agent’s perception, knowledge, or actions based on experience or data [1]. Optimization is the universal quest to do better: it is a centuries-old discipline that is at the heart of leading-edge IC design. CAD is our world: a high-stakes use domain for learning and optimization that brings staggering scale and complexity along with multiple abstractions and objectives. Learning, optimization, and CAD are united in service of scaling, which is the realization of more value while consuming fewer resources (energy, time, area, and cost). Scaling makes the impossible possible: it propels IC CAD/EDA, IC design, and the broader semiconductor ecosystem forward into the future.

The IC design optimization problem has essentially unbounded complexity. As illustrated in Figure 1, there is a starting point for design, with inputs such as register-transfer level Verilog and constraints. Then, there are many steps and decision points (e.g., test insertion, retiming, and leakage optimization effort) on the way to a complete outcome (e.g., postroute layout and signoff reports) at a flow end leaf. From start to end is expensive, requiring weeks to months of effort. The goal is to achieve the best possible outcome, but there is an enormous space of trajectories, and the design optimization must stay within a given “box” of resources: 1) servers; 2) licenses; 3) people; and 4) schedule. There are never enough resources for optimization.

Figure 1. A huge search space of options is implicit in design optimization. The optimization itself must live in a “box” of resources.

With the slowdown of device and process scaling, more burden is placed on design technology-based “equivalent scaling” to improve IC product quality, development schedule, and cost. Here, ML offers important boosters to CAD/EDA. First, ML enables prediction: seeing what lies ahead in the design process. ML models provide predictions that can be leveraged in design exploration, while also serving as objectives for higher-level optimizations. What we cannot predict, we must guardband—and what we do not have time to explore, we leave on the table. Second, ML enables optimization, not only helping to solve difficult optimizations, but also giving new perspectives on classic optimization formulations. Frameworks such as learning to optimize [2], graph neural network (GNN)-based embedding [3], and reinforcement learning (RL) [4] lie on the road from today’s ML-assisted CAD (MLCAD) to
tomorrow’s ML-based design automation (MLDA):
intelligent, self-driving CAD/EDA tools and flows.
Third, ML is effectively deployed on modern com-
pute resources such as cloud and GPU. This provides
new paths to scalable CAD/EDA optimizations that
can exploit massive data and numbers of threads.
This is a crucial aspect of what we might think of as
a future “EDA 2.0.” Here, relevant frameworks include
federated and evolutionary methods, stochastic gradi-
ent descent, sampling, and gradient-free optimization.
ML and prediction
Within the design process, models and predictors are essential for cost and schedule reduction, as well as for quality of results (QoR) improvement. ML-based predictions of delay, slew, coupling, and other parameters can shift the cost-accuracy tradeoff curve for electrical analysis, as illustrated in Figure 2 [5]. Reducing uncertainty means less guardbanding and increased QoR. Analogously, prediction of eventual quality metrics or tool failures enables early pruning of “doomed runs,” allowing optimization resources to be better spent. Predicting farther ahead, with greater accuracy, affords more leverage. (The flip side of this is the development of more predictable and modelable heuristics and tools.)

Figure 2. ML shifts the accuracy-versus-cost tradeoff curve in electrical analyses. ML-based predictions can similarly shift the tradeoff of accuracy versus available information in design implementation.

Quality metrics such as clock skew or pin access failures in detailed routing become easier to predict as a design progresses from RTL to the netlist, floorplan, global placement, and onward, that is, as more information becomes available. Eventually, such metrics become frozen (e.g., after detailed routing) and trivial to “predict.” In this light, ML-based predictions during design implementation also shift a tradeoff curve, namely, one of accuracy versus available information. The choice of tradeoff point (i.e., of error or risk versus wall time) is an important hyperparameter in sampling and distributed learning for optimization.

ML for modeling and prediction in CAD/EDA must surmount several basic challenges. First, the predictability of today’s metaheuristic optimization outcomes decreases as solution quality increases. In other words, our heuristics and tools become noisier and “chaotic” when pushed to their limits, which unfortunately is the regime in which IC design needs the most help. Future mitigations may include more predictable heuristics, predictors of distributions and order statistics of ensembles of runs, and improved sampling methods. Better predictors will also explain more of the variability within the design process, just as manufacturing variations have moved from “random” to “systematic” with improved understanding.

Second, IC design optimizations often bring “max” (as opposed to “sum”) objectives and cost landscapes in which useful gradient information is difficult to discern. It is the longest timing path or the worst routing hotspot that must be estimated and optimized. Structurally dissimilar solutions can have similar quality metrics such as worst timing slack or wirelength, but which paths are critical or which layout regions are congested can differ greatly between these solutions.

Third, what is predicted—and how predictive models will be used—needs careful consideration. For example, predicting achievable solution quality
(e.g., maximum clock frequency) is relatively useless without a “certificate,” namely, a flow setup and runscript that will actually achieve such a solution. Even perfect predictions can be difficult or even harmful when we try to use them, as noted in [6]. This is because when we predict an outcome, such as the total area of buffers that will be inserted in a netlist or the final placed location of a macro-block, acting on the prediction will change the predicted outcome. Hence, “Be careful what you ask for from ML (and how you will use it).”
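To make the “doomed run” idea above concrete, the following Python sketch trains a classifier on per-run features logged early in the flow and uses its predicted failure probability to decide which runs to prune. The feature set, the synthetic training data, and the 0.8 pruning threshold are illustrative assumptions for exposition; they are not taken from any particular tool or from the case studies in this article.

# Illustrative sketch: predict "doomed" P&R runs from early-flow metrics.
# Synthetic data stands in for per-run features logged after placement.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_runs = 2000
# Hypothetical early-flow features: utilization, early congestion overflow,
# post-placement worst negative slack (ns), clock uncertainty (ns).
X = np.column_stack([
    rng.uniform(0.5, 0.9, n_runs),        # placement utilization
    rng.exponential(0.05, n_runs),        # early congestion overflow
    -rng.exponential(0.10, n_runs),       # post-place WNS
    rng.uniform(0.02, 0.10, n_runs),      # clock uncertainty
])
# Synthetic ground truth: crowded, congested, slow runs tend to miss signoff QoR.
score = 3.0 * X[:, 0] + 8.0 * X[:, 1] - 5.0 * X[:, 2] + rng.normal(0, 0.3, n_runs)
y = (score > np.quantile(score, 0.7)).astype(int)    # 1 = run is "doomed"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Prune runs whose predicted failure probability exceeds a chosen threshold,
# freeing licenses and servers for more promising trajectories.
p_fail = clf.predict_proba(X_te)[:, 1]
pruned = p_fail > 0.8
precision = y_te[pruned].mean() if pruned.any() else float("nan")
print(f"would prune {int(pruned.sum())} of {len(p_fail)} runs; "
      f"fraction of pruned runs that were truly doomed = {precision:.2f}")

In practice, the same pattern applies with features parsed from real tool reports and labels taken from signoff outcomes; the key design choice is the threshold, which trades early savings against the risk of pruning a run that would have converged.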
ML and optimization
Recent years have shown that ML can help to solve difficult CAD/EDA optimizations via predictive modeling, efficient sampling, hyperparameter tuning, RL, and many other means. Conversely, optimization is a fundamental component of modern ML, whether in stochastic gradient descent, sampling, or distributed contexts. (At a meta-level, CAD/EDA optimizations improve the hardware systems on which learning takes place.) The future scaling of CAD/EDA technology will build on the close interplay between learning and optimization, as well as the representations that unite them via domain knowledge and mathematical structure. Example research foci include: 1) the interaction between discrete-combinatorial and continuous methods; 2) optimization and sampling on manifolds; 3) sequential decision-making; 4) nonconvex optimization and deep learning; and 5) distributed and federated learning and optimization [7]. Observations regarding several near-term prospects are as follows.
Embedding and scaling
First, CAD/EDA has always contended with scale through problem size reductions, notably by partitioning and clustering of (embedded) graph or hypergraph representations. In recent years, large-scale (message-passing) GNNs have shown promising applications to combinatorial optimization and algorithm alignment [2], [3], [8]. GNN-based methods, with attention and transformers, offer the promise of new optimizations and solution quality beyond what human experts can achieve [4], as well as clustering and embedding that more naturally discovers important problem structures without hand-crafting of representations and hyperparameters. Orthogonally (see the upcoming section), several challenges posed by sparse and/or confidential data have been met by the use of layered model architectures along with transfer learning and few-shot learning. There are also longer-range challenges, particularly for combinatorial optimizations. These include solution quality, generalization, and mechanisms for diagnosis and debugging of models.
hypergraph representations. In recent years, large- as the double descent risk curve of Belkin et al. [10]
scale (message-passing) GNNs have shown promis- explain the success of large, overparameterized neu-
ing applications to combinatorial optimization and ral-network models that operate beyond an “inter-
algorithm alignment [2], [3], [8]. GNN-based meth- polation threshold.” Recently, Mirhoseini et al. [4]
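A skeletal version of such an autotuning loop is sketched below: candidate knob settings are drawn either at random (exploration) or by perturbing the incumbent best configuration (exploitation), within a fixed run budget. The knob names, the evaluate() stand-in for a real tool run, and the epsilon-greedy schedule are illustrative assumptions; production autotuners typically use Bayesian or bandit-style samplers and parse QoR from actual tool reports.

# Illustrative black-box autotuning loop over hypothetical tool "knobs,"
# balancing exploration (random samples) and exploitation (perturbing the
# best configuration found so far) within a fixed run budget.
import random

KNOBS = {
    "place_density": [0.60, 0.65, 0.70, 0.75, 0.80],
    "clock_uncertainty_ps": [20, 40, 60, 80],
    "routing_layers": [5, 6, 7, 8],
}

def evaluate(cfg):
    """Stand-in for launching a P&R run and reading back a QoR score
    (higher is better). A real flow would parse tool reports here."""
    score = -abs(cfg["place_density"] - 0.70) * 10.0
    score -= cfg["clock_uncertainty_ps"] * 0.01
    score += cfg["routing_layers"] * 0.2
    return score + random.gauss(0, 0.05)     # model tool noise

def random_cfg():
    return {k: random.choice(v) for k, v in KNOBS.items()}

def perturb(cfg):
    k = random.choice(list(KNOBS))
    return {**cfg, k: random.choice(KNOBS[k])}

random.seed(0)
budget, epsilon = 40, 0.3
best_cfg, best_score = random_cfg(), float("-inf")
for trial in range(budget):
    cand = random_cfg() if random.random() < epsilon else perturb(best_cfg)
    s = evaluate(cand)
    if s > best_score:
        best_cfg, best_score = cand, s
print(best_cfg, round(best_score, 3))

The point of the sketch is the structure of the loop: a budget, an exploration/exploitation balance, and an incumbent, rather than the particular sampler.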
Lessons of “The Bitter Lesson”
Third, it is timely to revisit “the bitter lesson” [9], which is that “general methods that leverage computation are ultimately the most effective, and by a large margin.” As witnessed by the histories of chess, Go, and speech recognition, the general methods that have scaled well with available computation are search and learning. Theoretical advances such as the double descent risk curve of Belkin et al. [10] explain the success of large, overparameterized neural-network models that operate beyond an “interpolation threshold.” Recently, Mirhoseini et al. [4] describe a deep RL approach to chip floorplanning that yields solutions “superior or comparable to those produced by humans in all key metrics”; this is potentially another instance of the bitter lesson, which CAD/EDA and IC design teams are seeking to reproduce and translate across many contexts.

Corollaries of the bitter lesson, while discomfiting, might also give us encouragement. In our lifetimes,
available computation via process, circuit, and
system innovations has scaled by only several tens
of orders of magnitude, while the state or solution
spaces for design optimization have grown by much
larger factors. We might ask whether “the bitter les-
son” means that expert humans are not that hard to
beat, and/or whether (and why) CAD/EDA problems
are not so difficult, after all.
Mind the (suboptimality) gap
As noted in [11], the reality of optimization is “bet-
ter, faster, cheaper—pick any two.” Somewhat curi-
ously, in our field, we often insist (as a customer to
an EDA supplier, or as an academic peer reviewer to
an author with a new heuristic) on “I want all three.”
PhD students are trained to formulate and attack dif-
ficult optimizations using integer-linear programs,
minimum-cost flows, primal-dual methods, dynamic
programming, satisfiability, and so on. But in prac-
tice, “Our customers need an answer overnight,”
“The approach is impractical due to its runtime,” or
“While the method improves wirelength by 1%, this
is not a fair comparison because runtimes are three
times longer.” Over the past decades, such messages
have driven the CAD/EDA field to cut corners and
add more heuristics on top of existing stacks of heu-
ristics. This has come at a cost. Arguably, we are as
ill-informed about suboptimality gaps for classical
EDA optimizations and about the potential benefits
of the longer running and/or distributed CAD opti-
mizations, as we were 20+ years ago. However, in
today’s era of optimization, 1% matters.
In Figure 3a, heuristic B clearly dominates heuristic A. Heuristic B also seems to dominate heuristic C (e.g., with effort = t1) until the computational resource is expanded (effort = t2), and C pulls ahead. Thus, B and C together define the quality-effort Pareto. It is unfortunate when heuristic C never sees the light of day, particularly since we have little idea how close any of these heuristics are to reaching optimality.

Figure 3. (a) Multiple approaches together define the quality-versus-effort Pareto frontier. For many optimizations, the suboptimality gap is unknown. In the figure, quality = “1.0” denotes optimality (in optimization) or accuracy (in analysis). (b) At the limits of achievable QoR, heuristic behavior becomes less smooth, and the density of feasible solutions can decrease even while remaining nonzero. (c) ML can improve QoR achieved with a given effort (resource budget), and/or reduce the effort needed to achieve a given QoR. See the “Inset” for two case studies.

Note, too, that quality and effort cannot be separated from each other in the Pareto frontier of optimization. Figure 3b and c illustrates aspects of how EDA optimization must live with this inseparability. Figure 3b illustrates the “noise” seen in today’s CAD/EDA tools. This highlights the importance of sampling to find the good tail of distributions. A key aspect of this is that even sampling needs to become much smarter, for example, to decide how much sampling is needed before giving up on a QoR target as “not possible.”
The figure illustrates how aiming for a given design QoR target can result in a mix of red fails and green successes within the distribution of QoR outcomes. Figure 3c shows how rearchitecting tools and ML-based optimization methods will “shift left and up,” moving the state of CAD/EDA technology from B to B′. See the accompanying “Inset” for two case studies.
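One small, concrete instance of the “how much sampling” question: if a single (noisy) run meets the QoR target with probability p, then at least ln(1 − c)/ln(1 − p) independent runs are needed to see one success with confidence c. The sketch below estimates p from a batch of prior outcomes; the synthetic QoR distribution, the target, and the 95% confidence level are illustrative assumptions.

# Estimate how many independent tool runs are needed to land in the "good
# tail" of a noisy QoR distribution with a given confidence. Synthetic data
# stands in for logged QoR outcomes (e.g., worst negative slack in ns).
import math
import numpy as np

rng = np.random.default_rng(7)
qor = rng.normal(loc=-0.05, scale=0.03, size=200)   # prior run outcomes
target = 0.0                                        # want WNS >= 0

p = float(np.mean(qor >= target))                   # empirical success rate
confidence = 0.95
if 0.0 < p < 1.0:
    n_runs = math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))
    print(f"p(single run meets target) ~= {p:.3f}; "
          f"need ~{n_runs} runs for {confidence:.0%} confidence")
else:
    print("target appears trivially easy or (empirically) impossible")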
A road starts with a roadbed
At the nexus of learning, optimization, and CAD, several foundational elements provide a “roadbed” for the road ahead. These include: 1) benchmarking and roadmapping of CAD/EDA optimizations; 2) data to enable data-driven methods; and 3) “EDA 2.0” that broadly reinvents core optimization algorithms and tool architectures for scalability on modern compute substrates.

Benchmarking and roadmapping
First, CAD/EDA optimization is a technology, and as with any technology, its progress must be measured, benchmarked, and roadmapped. Indeed, DA has always been included as a key supplier of technology in the semiconductor industry’s technology roadmap. At the same time, benchmarking has long been a fraught topic in CAD/EDA. Public benchmarks, even when “real,” are obfuscated (e.g., module hierarchy stripped), incomplete (no clock), nonvertical (usable only for a specific academic contest), and past any competitive relevance (old). Moreover, no independent, trusted entity exists to assess EDA capability. An “Underwriters Laboratories for EDA” that can measure and benchmark IC design tools and flows would be a welcome development.

Benchmarking is naturally complemented by the roadmapping of CAD/EDA optimizations according to metrics such as the solution quality achieved within a given resource bound, or the resource needed to achieve a given solution quality. A strong culture of benchmarking, calibration, and “measure to improve” goes hand in hand with instrumentation and data collection, transparency, and reproducibility. This is reflected in recent (robust design flow, calibrations, and Metrics4ML) activities of the IEEE CEDA DA Technical Committee [12].

With real benchmarks, optimal solution costs and remaining suboptimality gaps are unknown. Thus, artificial benchmarks are needed that can balance realism, known optimal solution quality, scalability, and other desiderata. Real designs can be perturbed into artificial test cases with known achievable solution costs or scaled to yield bootstrapped lower bounds on heuristic suboptimality. Alternatively, optimal solutions can be “planted” in artificial test cases, enabling suboptimality to be exactly measured. Several approaches produce artificial netlists that try to match prescribed degree distributions, I/O counts, path depths, sequential/combinational instance counts, and other criteria, which is in itself a difficult optimization.
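The “planted solution” idea can be made concrete with a toy construction such as the one below: two dense clusters joined by exactly K_CUT cross edges, so the planted bisection’s cut cost is known by construction (and, for sufficiently dense clusters, is essentially the optimum). Any partitioning heuristic can then be scored against that planted cost. The sizes, densities, and the deliberately weak random-bisection “heuristic” are illustrative assumptions, not a proposal for a benchmark suite.

# Construct an artificial graph with a planted, known-cost bisection:
# two dense clusters of equal size joined by exactly K_CUT cross edges.
# Any bisection heuristic can then be scored against the planted cost.
import itertools
import random

random.seed(3)
N_HALF, P_IN, K_CUT = 50, 0.30, 8
left = list(range(N_HALF))
right = list(range(N_HALF, 2 * N_HALF))

edges = set()
for side in (left, right):                       # dense intra-cluster edges
    for u, v in itertools.combinations(side, 2):
        if random.random() < P_IN:
            edges.add((u, v))
# exactly K_CUT distinct planted cut edges between the two clusters
edges.update(random.sample([(u, v) for u in left for v in right], K_CUT))

def cut_size(part_a):
    a = set(part_a)
    return sum((u in a) != (v in a) for u, v in edges)

# A deliberately weak "heuristic": a random balanced bisection.
nodes = left + right
random.shuffle(nodes)
heuristic_cut = cut_size(nodes[:N_HALF])

print(f"planted bisection cut = {K_CUT}, heuristic cut = {heuristic_cut}, "
      f"suboptimality ratio >= {heuristic_cut / K_CUT:.1f}x")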
Data
Second, data is essential to ML for CAD/EDA. That is, ML enables optimization methods to be conceived from a data-driven, rather than a CS-theoretical, perspective [2]. Unfortunately, in our field data is sparse, closely guarded, and in constant flux with the evolution of architectures, standards, and technologies. When paths to real data are infeasible, research enablement requires “data virtual reality”: data that is artificial yet scalable, free from biases, and indistinguishable from real data from the perspective of optimization. This brings numerous research challenges such as matching the behavior of real circuits through multistage optimizations (e.g., logic transforms, physical embedding, timing closure, and sizing) that span multiple abstractions. Jumping-off points include (graphical) generative adversarial network models and (federated) differential privacy techniques, in conjunction with obfuscation and noising—as well as the finding of relevant topological and other motifs that characterize real designs. Data augmentation, along with transfer learning and low-shot learning, can further mitigate the unavailability of proprietary data. New techniques will also be needed to project data—from process design kits (PDKs) and libraries to system designs—forward into future technologies. This is because optimizations that meet the leading edge of practice cannot be driven only by the rearview mirror.
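As one small illustration of the noising direction mentioned above, the sketch below applies the standard Laplace mechanism to per-design summary metrics before they are shared, so that only perturbed values leave the design house. The metric names, the assumed per-metric sensitivities, and the privacy budget epsilon are hypothetical; a real deployment would need a carefully argued privacy model, which this sketch does not provide.

# Laplace-mechanism sketch: add calibrated noise to proprietary per-design
# summary metrics before sharing them for cross-company model building.
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical raw metrics for one design (never shared directly).
raw = {"instance_count": 1_250_000, "utilization": 0.72, "num_macros": 48}

# Assumed per-metric sensitivity and privacy budget epsilon;
# smaller epsilon means stronger privacy and more noise.
sensitivity = {"instance_count": 10_000, "utilization": 0.01, "num_macros": 2}
epsilon = 1.0

released = {
    k: v + rng.laplace(loc=0.0, scale=sensitivity[k] / epsilon)
    for k, v in raw.items()
}
print(released)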
Computing platforms
Third, reinventing CAD/EDA to leverage modern computational resources is also a mandatory aspect of closing the suboptimality gap. Since the 1980s, CAD/EDA research and development has focused on single-threaded or single-server turnaround times. However, future solution quality and turnaround time gains must leverage the hyperscaling
of available compute on platforms that span cloud,
GPU, and accelerator hardware. Core EDA algo-
rithms and architectures will need to be reinvented
accordingly.
As researchers make foundational advances in
(sum-of-squares, manifold, nonconvex, min–max,
etc.) optimization, distributable algorithm realiza-
tions and distributed-federated learning and opti-
mization will also be needed. Basic open questions
surround the tension between exploration and
exploitation, and the need to deliver “anytime”
solution quality that improves monotonically with
the given footprint (threads × runtime) of the com-
putational resource. For example, what new princi-
ples will improve our understanding of how much
additional compute is needed to achieve a pre-
scribed wall time reduction, iso-QoR? Or, how much
additional compute and/or wall time is needed to
achieve a prescribed increase in QoR? From particle
swarm optimization to differential evolution and the
class of estimation of distribution algorithms (EDAs for EDA!), there is a rich body of work
on metaheuristics and “parallel problem-solving
from nature” that can potentially contribute to
the development of future cloud-scalable (and
learning-enabled) CAD/EDA.
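To ground the “EDAs for EDA!” remark, here is a minimal univariate estimation-of-distribution algorithm with the “anytime” property discussed above: the incumbent objective value can only improve as more samples (threads × runtime) are spent. The toy quadratic objective, population size, and elite fraction are illustrative assumptions; in a real setting each candidate evaluation would be a tool run dispatched to a distributed pool.

# Minimal univariate estimation-of-distribution algorithm (an "EDA for EDA"):
# repeatedly sample candidates, keep an elite fraction, refit a Gaussian per
# variable, and track an anytime, monotonically improving incumbent.
import numpy as np

def objective(x):
    """Toy stand-in for a QoR score to MINIMIZE (e.g., weighted WNS + power)."""
    return np.sum((x - np.array([0.7, 60.0, 6.0])) ** 2, axis=1)

rng = np.random.default_rng(5)
dim, pop, elite_frac, generations = 3, 40, 0.25, 25
mu = np.array([0.5, 50.0, 5.0])          # initial knob means
sigma = np.array([0.2, 20.0, 2.0])       # initial knob spreads

best_x, best_f = None, np.inf
for gen in range(generations):
    cand = rng.normal(mu, sigma, size=(pop, dim))     # sample a population
    f = objective(cand)
    order = np.argsort(f)
    if f[order[0]] < best_f:                          # anytime incumbent
        best_x, best_f = cand[order[0]], f[order[0]]
    elite = cand[order[: int(pop * elite_frac)]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit model
    if gen % 5 == 0 or gen == generations - 1:
        print(f"gen {gen:2d}: best-so-far objective = {best_f:.4f}")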
We build roads as infrastructure so that we can go farther and go faster. How far and how fast we can travel along the road ahead for ML and CAD/EDA is yet to be seen. There are difficult technical challenges, such as debugging of models, marrying of data- and knowledge-driven AI, end-to-end direct inference of design solutions, and comprehension of security as an optimization objective. Progress will also depend on business models, investments in basic research, minimization of redundant efforts, and other nontechnical factors. And, without a roadbed, there can be no road: benchmarking, roadmapping, data, and the use of modern compute platforms are all essential to the scaling of ML-enabled EDA optimization. Last, if there are too many toll booths and speed bumps (closed ecosystems, absence of standards, assertions of IP, etc.), then there will be fewer travelers on the road ahead.

Figure 4. Road ahead for ML and CAD/EDA is a highway with many lanes and many travelers.

At the same time, as shown in Figure 4, the road ahead for ML and CAD/EDA is a highway with many lanes [13]. It leads to self-driving IC design tools and flows that make design innovation and solution space exploration more effective, efficient, and accessible. It also takes us from today’s ML-assisted CAD (MLCAD) to tomorrow’s intelligent MLDA. At every step of the way, ML will help EDA tools and flows extract more from optimization resources, enabling more iterations and exploration within the design schedule. Building this road and driving along it will unite learning, optimization, and CAD—and stakeholders across the design and EDA ecosystem as well—to bring future scaling benefits to the industry and society at large.

Acknowledgments
Many thanks are due to the Guest Editors for their invitation to write this article, which builds on [14]. Thanks are also due to Leon Stok, Gi-Joon Nam, and IBM colleagues, as well as to many collaborators in the TILOS AI institute [7], for inspiring discussions and shared visions of the road ahead.
Inset
The challenges illustrated in Figure 3b can be seen in the following case study using a leading commercial place-and-route tool. Figure 5 contrasts what is requested from the tool versus what is delivered. The x-axis gives the target clock period (CP) of the design, while the y-axis gives the effective, that is, actual, CP of the place-and-route outcome.

The x-axis has 23 target CP values, spaced 50 ps apart. For each, 101 distinct tool runs are made, all aiming within half a picosecond of that target; this is achieved by stepping the CP target by 0.01 ps within a 1 ps range. How to obtain the best-possible (e.g., minimum-CP) outcome within a given budget of samples is an open challenge for MLCAD. Methods are needed to characterize and sample from the transition between easy CP targets on the left and impossible targets on the right—comprehending the probability of functional failures (red points) and the mapping between target and effective CP values.
Figure 5. Mapping between desired (target) CP and actual (effective) CP achieved
by a commercial place-and-route tool. At each x-axis value, results are obtained
for 101 target CP values within a 1 ps range of the given target CP. The highest
blue data point in the plot corresponds to the minimum CP achieved, that is, the
maximum-frequency result.
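One way to attack the budget question raised in this case study is to treat each target CP as an “arm” and apply a successive-halving style allocation: spend a few runs per target, drop the targets whose best effective CP looks worst, and concentrate the remaining budget on the survivors. The run() model below (noise plus an assumed infeasibility “wall” at 680 ps) is a synthetic stand-in for the tool behavior shown in Figure 5, not a fit to its data.

# Successive-halving sketch for the minimum-CP search: spend a fixed budget
# of runs across target CPs, repeatedly halving the set of surviving targets.
# The run() model (noise + failure past an assumed "wall") is synthetic.
import numpy as np

rng = np.random.default_rng(2)
targets_ps = np.arange(500, 500 + 23 * 50, 50)     # 23 targets, 50 ps apart
WALL_PS = 680.0                                    # assumed infeasibility wall

def run(target_ps):
    """Synthetic effective CP for one P&R run (np.inf = failed run)."""
    if rng.random() < 1.0 / (1.0 + np.exp((target_ps - WALL_PS) / 15.0)):
        return np.inf
    return max(float(target_ps), WALL_PS) + rng.normal(0.0, 5.0)

surviving = list(targets_ps)
runs_per_target = 2
while len(surviving) > 1:
    best_eff = {t: min(run(t) for _ in range(runs_per_target)) for t in surviving}
    surviving = sorted(surviving, key=lambda t: best_eff[t])[: max(1, len(surviving) // 2)]
    runs_per_target *= 2                           # more runs for fewer targets
print("most promising target CP (ps):", int(surviving[0]),
      "best effective CP seen:", round(best_eff[surviving[0]], 1))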
Next, note that the “shift left and up” illustrated in Figure 3c has significant potential value in, for example, the detailed routing stage of physical design. The challenge is to identify “doomed runs” and to intelligently reassign or repurpose their CPU resources, such that overall success is more likely and overall turnaround time is reduced.

Here, a second illuminating case study involves detailed routing. Modern detailed routing tools partition the routing task into disjoint “tiles” or “switchboxes,” assigning a worker thread to each tile. After all tiles have been processed, those that do not yet have a clean routing solution are reprocessed—for example, using a variant costing or connection ordering strategy—in the next iteration of the router. Ideally, the number of unresolved design rule check violations (DRCs) and remaining tiles will decrease with each iteration of the router.

Figure 6. Locations of detailed routing tiles (workers) with DRC violations seen at two iterations (7 and 54) of a leading academic routing tool (left). Distribution of runtimes, shown in seconds on a logarithmic scale, for the workers (right). Stubborn routing tiles could receive more attention (more assigned workers, executing variant solution strategies) if identified earlier in the routing process.

Figure 6 shows the location of tiles that have remaining DRC violations, as well as runtimes per worker, at two iterations of a leading academic routing tool. (Note the logarithmic scale used for runtimes.) In early iterations, average runtimes per worker are a fraction of a second, but in later iterations some workers can run for hundreds of seconds. This offers high-value opportunities for learning (e.g., predicting “doomed” runs or where stubborn DRC violations will occur) as well as optimization (e.g., adaptive strategies to resolve DRCs in a given tile, and budgeting of available worker threads to remaining tiles).
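A hedged sketch of the learning opportunity just described: from features observed at an early routing iteration, rank tiles by predicted risk of remaining “stubborn,” so that extra workers or variant strategies can be budgeted to them first. The three features, the synthetic labels, and the logistic-regression model are illustrative assumptions and do not describe any particular router.

# Rank detailed-routing tiles by predicted risk of remaining "stubborn"
# (still having DRCs at a late iteration), using early-iteration features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_tiles = 1500
# Hypothetical early-iteration features per tile:
# [DRC count at iteration 7, pin density, worker runtime (s) at iteration 7]
X = np.column_stack([
    rng.poisson(2.0, n_tiles),
    rng.uniform(0.1, 0.9, n_tiles),
    rng.exponential(0.5, n_tiles),
])
# Synthetic label: tile still has DRCs at iteration 54.
logit = 0.8 * X[:, 0] + 3.0 * X[:, 1] + 1.5 * X[:, 2] - 4.5
y = (rng.random(n_tiles) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])
risk = model.predict_proba(X[1000:])[:, 1]

# Budget extra workers / variant strategies to the riskiest tiles first.
worst_first = np.argsort(-risk)[:20] + 1000
print("top-20 predicted stubborn tiles:", worst_first.tolist())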
References
[1] C. Manning. (Sep. 2020). Artificial Intelligence Definitions. [Online]. Available: https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf
[2] T. Chen et al., “Learning to optimize: A primer and a benchmark,” 2021, arXiv:2103.12828.
[3] Q. Cappart et al., “Combinatorial optimization and reasoning with graph neural networks,” 2021, arXiv:2102.09544.
[4] A. Mirhoseini et al., “A graph placement methodology for fast chip design,” Nature, vol. 594, no. 10, p. 207, Jun. 2021.
[5] A. B. Kahng, “Machine learning applications in physical design: Recent results and directions,” in Proc. ISPD, 2018, pp. 68–73.
[6] T.-B. Chan, A. B. Kahng, and M. Woo, “Revisiting inherent noise floors for interconnect prediction,” in Proc. ACM/IEEE Int. Workshop Syst.-Level Interconnect Problems Pathfinding, Nov. 2020, pp. 1–7.
[7] The Institute for Learning-Enabled Optimization at Scale. Accessed: Jul. 10, 2021. [Online]. Available: https://tilos.ai/
[8] Y. Bengio, A. Lodi, and A. Prouvost, “Machine learning for combinatorial optimization: A methodological tour d’horizon,” Eur. J. Oper. Res., vol. 290, no. 2, pp. 405–421, Apr. 2021.
[9] R. Sutton. (Mar. 2019). The Bitter Lesson. [Online]. Available: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[10] M. Belkin et al., “Reconciling modern machine learning practice and the bias-variance trade-off,” 2018, arXiv:1812.11118.
[11] A. B. Kahng, “Advancing placement,” in Proc. ISPD, 2021, pp. 15–22.
[12] IEEE CEDA Design Automation Technical Committee. GitHub Repository. Accessed: Jul. 10, 2021. [Online]. Available: https://github.com/ieee-ceda-datc
[13] G. Huang et al., “Machine learning for electronic design automation: A survey,” 2021, arXiv:2102.03357.
[14] A. B. Kahng, “MLCAD today and tomorrow: Learning, optimization and scaling,” in Proc. MLCAD Workshop, Nov. 2020, p. 1. [Online]. Available: https://www.youtube.com/watch?v=oVF3yUhyc

Andrew B. Kahng is a Distinguished Professor of computer science and engineering and electrical and computer engineering at the University of California San Diego (UC San Diego), La Jolla, CA 92093 USA. His research interests include IC physical design, design for manufacturability (DFM), technology roadmapping, and machine learning (ML) for EDA. Kahng has a PhD in computer science from UC San Diego. He is a Fellow of IEEE and ACM.

Direct questions and comments about this article to Andrew B. Kahng, Computer Science and Engineering Department and Electrical and Computer Engineering Department, University of California San Diego (UC San Diego), La Jolla, CA 92093 USA; [email protected].