
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2023

Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey

Haixin Wang, Yadi Cao, Zijie Huang, Yuxuan Liu, Peiyan Hu, Xiao Luo, Zezheng Song, Wanjia Zhao, Jilin Liu, Jinan Sun†, Shikun Zhang, Long Wei, Yue Wang, Tailin Wu, Zhi-Ming Ma, Yizhou Sun

arXiv:2408.12171v1 [cs.LG] 22 Aug 2024

Abstract—This paper explores recent advancements in enhancing Computational Fluid Dynamics (CFD) tasks through Machine Learning (ML) techniques. We begin by introducing fundamental concepts, traditional methods, and benchmark datasets, then examine the various roles ML plays in improving CFD. We systematically review the literature of the past five years and introduce a novel classification for forward modeling: Data-driven Surrogates, Physics-Informed Surrogates, and ML-assisted Numerical Solutions. Furthermore, we review the latest ML methods in inverse design and control, offering a novel classification and an in-depth discussion. We then highlight real-world applications of ML for CFD in critical scientific and engineering disciplines, including aerodynamics, combustion, atmosphere & ocean science, biological fluids, plasma, symbolic regression, and reduced-order modeling. Finally, we identify key challenges and advocate for future research directions to address them, such as multi-scale representation, physical knowledge encoding, scientific foundation models, and automatic scientific discovery. This review serves as a guide for the rapidly expanding ML for CFD community, aiming to inspire insights for future advancements. We conclude that ML is poised to significantly transform CFD research by enhancing simulation accuracy, reducing computational time, and enabling more complex analyses of fluid dynamics. The paper resources can be viewed at https://github.com/WillDreamer/Awesome-AI4CFD.

Index Terms—Machine Learning, Computational Fluid Dynamics, AI for PDE, Physics Simulation, Inverse Problem.

1 Introduction

Fluid dynamics is a fundamental discipline that studies the motion and behavior of fluid flow. It serves as a foundation across a wide range of scientific and engineering fields, including aerodynamics [1], [2], [3], chemical engineering [4], [5], [6], biology [7], [8], [9], and environmental science [10], [11], [12], [13], [14], [15]. CFD employs mathematical models to simulate fluid dynamics through partial differential equations (PDEs) [16]. The primary goal of CFD is to obtain simulated results under various working conditions, thereby reducing the need for costly real-world experiments and accelerating engineering design and control processes.

Despite decades of advancement in research and engineering practice, CFD techniques continue to face significant challenges. These include high computational costs due to demanding restrictions on spatial or temporal resolutions, difficulties in capturing subscale dynamics such as in turbulence [17], and stability issues with numerical algorithms [16], among others. On the other hand, ML, known for its ability to learn patterns and dynamics from observed data, has recently emerged as a trend that can reshape or enhance virtually any scientific discipline [18]. The integration of ML techniques with the extensive fluid dynamics data accumulated over recent decades offers a transformative approach to augment CFD practices (see Fig. 1). As the field of ML continues to expand rapidly, it becomes increasingly challenging for researchers to stay updated. In response, this review aims to shed light on the multifaceted roles ML plays in enhancing CFD.

Fig. 1: The approximate annual number of papers on ML for CFD presented at top-tier ML publications and leading journals in fluid dynamics, as listed in Tables 1 and 2.

• H. Wang, J. Liu, J. Sun and S. Zhang are with Peking University. E-mail: [email protected], [email protected].
• Y. Cao, Z. Huang, X. Luo, Y. Liu and Y. Sun are with the University of California, Los Angeles.
• P. Hu and Z. Ma are with the Academy of Mathematics and Systems Science, Chinese Academy of Sciences.
• Z. Song is with the University of Maryland, College Park.
• W. Zhao is with Stanford University.
• L. Wei and T. Wu are with Westlake University.
• Y. Wang is with Microsoft AI4Science.
• † Corresponding author.

There have already been some surveys on the application of ML methods in the CFD field. However, most of them share two limitations. 1) Only earlier attempts. For instance, Wang et al. [19] and Huang et al. [20] both provide a detailed discussion on incorporating physics-based modeling into ML, emphasizing dynamic systems and hybrid approaches. Similarly, Vinuesa et al. [21] explore promising ML directions from the perspective of the CFD domain, such as direct numerical simulations, a

large-eddy simulation (LES), a schematic of the turbulence spectrum, Reynolds-averaged Navier–Stokes (RANS) simulations, and a dimensionality-reduction method. However, they only review early ML applications to PDEs, with a focus on works prior to 2021. 2) Incomplete overview. The current body of surveys on ML applications in CFD primarily focuses on integrating physical knowledge and common model architectures for PDEs. Zhang et al. [22] examine ML for both forward and inverse modeling of PDEs, highlighting four key challenges but omitting a systematic classification and the potential applications in this area. Meanwhile, Lino et al. [23] roughly differentiate between physics-driven and data-driven approaches and address several methodological limitations, but similarly overlook a systematic classification of the motivation behind each method.

Despite these contributions, there remains a gap in comprehensive, cutting-edge, and profound systematization of ML methods for CFD. Our work represents the first survey that consolidates these fragmented insights into a cohesive framework. We systematically review the fundamental knowledge, data, methodologies, applications, challenges, and future directions in the field. The structure of this paper is shown in Fig. 2 and organized as follows.

In Section 2, we introduce the fundamental concepts and knowledge of CFD, accompanied by an annotated list of all the types of PDEs addressed by the literature reviewed. We then systematically review the literature from the recent five years, categorizing the selected studies into three primary disciplines, with a demonstration in Fig. 4: Data-driven Surrogates (Section 3), which rely exclusively on observed data for training; Physics-Informed Surrogates (Section 4), which integrate selected physics-informed priors into ML modeling; and ML-assisted Numerical Solutions (Section 5), which partially replace traditional numerical solvers to achieve a balance between efficiency, accuracy, and generalization. In addition, we introduce the setup of inverse design and control problems (Section 6), two fundamental problems when applying CFD to real-world applications: the former optimizes design parameters, e.g., initial and boundary conditions, for certain design objectives, while the latter controls a physical system to achieve a specific objective by applying time-varying external forces.

Following this, Section 7 discusses the application of these methods across key scientific and engineering disciplines, showcasing their impact and potential. Finally, Section 8 addresses the key challenges and limitations within the current state of the art and outlines prospective research directions. We aim to draw the attention of the broader ML community to this review, enriching their research with fundamental CFD knowledge and advanced developments, thus inspiring future work in the field.

Differences from Existing Surveys. Compared with existing surveys, our survey has four distinctive features. (1) Up-to-date Summarization. This survey focuses on the newest papers, from 2020 to 2024, based on the current state of development; in contrast, existing related surveys were published before 2022. (2) Innovative Taxonomy. This survey systematically reviews ML methods in the CFD field and, for the first time, introduces a novel classification based on the motivations behind methods designed for both forward modeling and inverse problems. (3) Comprehensive Discussion. This survey provides a comprehensive discussion covering background, data, forward modeling/inverse design methods, and applications, which helps researchers quickly and thoroughly understand this field. (4) Future Guidance. Our work summarizes the most recent advancements in CFD and highlights the challenges in current CFD research, which can provide guidance and direction for future work in the field, e.g., scientific foundation models.

Broader Impact. The impact of our survey lies in two points. (1) To the science-related community. Our survey summarizes effective ML approaches for CFD, which can help researchers in physics and mechanics find solutions and benefit from ML. (2) To the ML community. Our survey can also provide guidance for ML researchers and help them apply their knowledge to real-world scientific applications in CFD.

2 Preliminary

2.1 Fundamental Theories of Fluid

Studying fluid problems often involves analyzing the Navier-Stokes (N-S) equations, a set of PDEs that describe the motion of viscous fluid substances. These equations are fundamental in the field of fluid dynamics and are key to understanding and predicting the behavior of fluids in various scientific and engineering contexts. Taking incompressible flow as an example, the N-S equations can be simplified as follows:

    ∇ · (ρu) = 0,
    ∂(ρu)/∂t + ∇ · (ρuu) = −∇p + ∇ · (ν[∇u + (∇u)⊤]),        (1)

where ρ denotes the fluid density, ν signifies the dynamic viscosity, u = (ux, uy, uz)⊤ is the velocity field, and p is the pressure. The first line is the continuity equation, which encapsulates the principle of mass conservation: it asserts that, for a fixed volume, the product of density ρ and the velocity field u exhibits no net change over time. The second line is the momentum equation, which articulates the fluid's momentum variation in reaction to the combined effect of internal and external forces. Sequentially, its terms correspond to the unsteady term, the convective term, the pressure gradient, and the viscous forces, each playing a distinct role in the momentum balance.

In CFD, the governing equations describing fluid motion are often solved numerically due to their inherent complexity and the challenges associated with finding analytical solutions, except in the simplest scenarios. Numerical methods for solving these equations fall into two primary categories: Eulerian and Lagrangian approaches. Eulerian methods focus on analyzing fluid behavior at fixed locations within the space through which the fluid flows. This approach is particularly effective in fixed grid systems where the grid does not move with the fluid. Eulerian methods are advantageous for problems involving complex boundary interactions and steady-state or quasi-steady flows. They effectively handle multi-phase flow scenarios because they can easily accommodate changes in flow properties at fixed

Fig. 2 (taxonomy tree, rooted at "Machine Learning for Computational Fluid Dynamics"):

- Forward Modeling
  - §3 Data-driven Surrogates
    - §3.1 Dependent on Discretization
      - §3.1.1 Structured Grids: DPUF [24], TF-Net [25], EquNet [26], RSteer [27]
      - §3.1.2 Unstructured Mesh: GNS [28], MGN [29], MP-PDE [30], Han et al. [31], TIE [32], MAgNet [33], GNODE [34], FCN [35], Zhao et al. [36], DINo [37], LAMP [38], CARE [39], BENO [40], HAMLET [41]
      - §3.1.3 Lagrangian Particles: CC [42], Wessels et al. [43], FGN [44], MCC [45], LFlows [46], Li et al. [47]
    - §3.2 Independent of Discretization
      - §3.2.1 Deep Operator Network: DeepONet [48], PI-DeepONet [49], MIONet [50], B-DeepONet [51], NOMAD [52], Fourier-MIONet [13], Shift-DeepONet [53], HyperDeepONet [54], L-DeepONet [55], SVD-DeepONet [56]
      - §3.2.2 In Physical Space: MGNO [57], Geneva et al. [58], GNOT [59], CNO [60], LNO [61], KNO [62], ICON [63]
      - §3.2.3 In Fourier Space: FNO [64], PINO [65], GEO-FNO [66], U-NO [67], SSNO [68], F-FNO [69], CFNO [70], CMWNO [71], MCNP [72], G-FNO [73], GINO [74], DAFNO [75], FCG-NO [76]
  - §4 Physics-driven Surrogates
    - §4.1 PINNs: PINN [77], Bai et al. [78], PINN-SR [79], NSFnet [80], Phygeonet [81], PINN-LS [82], ResPINN [83], CPINN [84], stan [85], Meta-Auto-Decoder [86], BINN [87], Hodd-PINN [88], DATS [89], PINNsFormer [90], PeRCNN [91], NASPINN [92]
    - §4.2 Constraint-Informed: DGM [93], AmorFEA [94], EDNN [95], Liu et al. [96], Gao et al. [97], ADLGM [98], INSR [99], Neu-Gal [100], PPNN [101], FEX [102]
  - §5 ML-Assisted Numerical Solutions
    - §5.1 Coarser Scales: Zhang et al. [6], Kochkov et al. [103], Despres et al. [104], Bar-Sinai et al. [105], List et al. [106], Sun et al. [107]
    - §5.2 Preconditioning: Greenfeld et al. [108], Luz et al. [109], Sappl et al. [110]
    - §5.3 Miscellaneous: Pathak et al. [111], Obiols et al. [112], Um et al. [113], CFD-GCN [114]
- §6 Inverse Design & Control
  - §6.1 Inverse Design
    - §6.1.1 PDE-constrained: hPINN [115], gPINN [116], Bi-PINN [117], Pokkunuru et al. [118]
    - §6.1.2 Data-driven: Allen et al. [119], Wu et al. [120], Ardizzone et al. [121], INN [122], Invertible AE [123], cINN [124], Ren et al. [125], Kumar et al. [126]
  - §6.2 Control
    - §6.2.1 Supervised Learning: Holl et al. [127], Hwang et al. [128]
    - §6.2.2 Reinforcement Learning: Viquerat et al. [129], Garnier et al. [130], Garnier et al. [131]
    - §6.2.3 PDE-constrained: Mowlavi et al. [132], Barrystraume et al. [133]
- §7 Applications
  - §7.1 Aerodynamics: Mao et al. [134], Huang et al. [135], Sharma et al. [136], Auddy et al. [137], Shan et al. [138], Deng et al. [2], Mufti et al. [3]
  - §7.2 Combustion & Reacting Flow: Ji et al. [139], Zhang et al. [6]
  - §7.3 Atmosphere & Ocean Science: Pathak et al. [11], Bi et al. [12], Lam et al. [14], Rajagopal et al. [15], Jiang et al. [13]
  - §7.4 Biology Fluid: Yin et al. [7], Voorter et al. [8], Shen et al. [9]
  - §7.5 Plasma: Zhong et al. [140], Gopakumar et al. [141], Kim et al. [142]
  - §7.6 Symbolic Regression: FEX [143], Becker et al. [144]
  - §7.7 Reduced Order Modeling: Leask et al. [145], Geneva et al. [58], Kneer [146], Arnold et al. [147], Wentland et al. [148]

Fig. 2: Taxonomy of CFD methods based on ML techniques. We first investigate forward modeling approaches, including data-driven surrogates, physics-driven surrogates, and ML-assisted methods. We then conduct an in-depth analysis of inverse problems. Moreover, we review the practical applications of these methods across various domains.
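As a concrete check of the continuity constraint in Eq. (1), the sketch below (our own illustration; the Taylor-Green-style field and the 128×128 grid are arbitrary choices, not taken from the survey) discretizes ∇ · u with periodic central differences, the simplest fixed-grid Eulerian treatment:

```python
import numpy as np

# Uniform periodic grid (sizes are illustrative).
n = 128
length = 2 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = length / n

# A Taylor-Green-style velocity field, analytically divergence-free:
# d/dx[cos(x)sin(y)] + d/dy[-sin(x)cos(y)] = 0.
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)

def ddx(f):
    """Second-order central difference along axis 0, periodic wrap-around."""
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * h)

def ddy(f):
    """Second-order central difference along axis 1, periodic wrap-around."""
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * h)

# Discrete continuity residual; for this symmetric field the truncation
# errors of the two terms cancel, so the maximum is zero up to
# floating-point round-off.
div = ddx(u) + ddy(v)
print(np.abs(div).max())
```

The same residual, evaluated on predicted fields, is the kind of physically motivated loss term that grid-based surrogates such as DPUF [24] are reported to add to their training objective.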

spatial points. Lagrangian methods, in contrast, track the motion of individual fluid particles as they traverse the domain. This particle-tracking approach is often more computationally demanding but provides a detailed depiction of the fluid dynamics, making it suitable for capturing unsteady, complex fluid structures and interactions, such as in turbulent flows or when dealing with discrete phases.

2.2 Traditional Numerical Methods

In the traditional realm of CFD, numerical methods are employed to discretize the fluid domain into a mesh framework, allowing the transformation of PDEs into solvable algebraic equations. These methods include the finite difference method (FDM) [149], which utilizes finite differences to approximate derivatives, effectively transforming the continuous mathematical expressions into discrete counterparts. Another popular approach, the finite volume method (FVM) [16], involves dividing the domain into control volumes, within which fluxes are calculated to conserve the underlying physical quantities. The finite element method (FEM) [150] applies variational principles to minimize error across interlinked elements, offering flexibility in handling complex geometries and boundary conditions. Additionally, the spectral method [151] employs Fourier series to linearize nonlinear problems, proving especially potent under periodic boundary conditions. Lastly, the lattice Boltzmann method (LBM) [152] adopts a mesoscale perspective, accelerating computations significantly, though sometimes at the expense of precision.

Fig. 3: Demonstration of various datasets of ML for CFD. Row 1, from left to right: (1) 1D Diffusion, (2)-(3) 1D Advection, (4) 1D compressible Navier-Stokes, (5) 1D Reaction-Diffusion, (6) 2D Darcy flow solution, (7) 2D Darcy flow coefficient, (8) Cavity flow. Row 2, from left to right: (1)-(4) 2D Shallow Water at t = 0.25, 0.5, 0.75, 1, (5)-(8) 2D Reaction-Diffusion at t = 1.25, 2.5, 3.75, 5. Row 3, from left to right: (1) Cylinder flow, (2) Airfoil flow, (3)-(6) 2D Compressible Navier-Stokes at t = 0, 0.5, 1, 2, (7)-(8) 3D compressible Navier-Stokes at t = 1, 2.

2.3 Benchmark & Dataset

Owing to the rapid development of the field, existing benchmarks often cannot comprehensively cover all advanced ML methods or conduct detailed categorical analyses. We therefore summarize existing benchmarks in order to better promote the development of this field. From the famous DeepXDE [153] to PDEBench [154], and PINNacle [155] to BLASTNet [156], an expanding array of CFD simulation scenarios is being incorporated for comparison. In addition to the widely utilized governing equations that cover a broad spectrum of mathematical properties, the data are also simulated under a variety of settings. These settings include complex geometries, multi-scale phenomena, nonlinear behaviors, and high dimensionality, enriching the diversity and complexity of the simulation scenarios. Furthermore, contributions such as AirFRANS [157] have made significant strides in addressing specific scenarios, like airfoil simulation in aerodynamics, more concretely.

In this context, we provide a thorough review of the prevalent datasets employed to evaluate the performance of ML models in CFD simulations, covering 21 PDEs and 13 specific fluid flow problems. The PDEs are Advection, Allen-Cahn, Anti-Derivative, Bateman–Burgers, Burgers, Diffusion, Duffing, Eikonal, Elastodynamic, Euler, Gray-Scott, Heat, Korteweg-de Vries, Kuramoto-Sivashinsky, Laplace, Poisson, Reynolds-Averaged Navier-Stokes, Reaction–Diffusion, Schrödinger, Shallow Water, and Wave equations. The flow problems are Ahmed-Body, Airfoil, Beltrami flow, Cavity flow, Cylinder flow, Dam flow, Darcy flow, Kovasznay flow, Kolmogorov flow, Navier-Stokes flow, Rayleigh-Bénard flow, and Transonic flow. Partial demonstration examples are shown in Fig. 3.

3 Data-driven Surrogates

Data-driven Surrogates are models that rely solely on observed data to train algorithms capable of simulating complex fluid dynamics, and they have experienced swift advancements. These impactful models can be broadly classified by their approach to spatial discretization into methods that are: 1) dependent on discretization, or 2) independent of discretization. The former requires dividing the data domain into a specific grid, mesh, or particle structure and designing the model architecture accordingly, while the latter does not rely on discretization techniques but instead directly learns the solution in continuous space.

3.1 Dependent on Discretization

We categorize these methods based on the type of discretization into three categories: 1) on regular grids, 2) on irregular meshes, and 3) on Lagrangian particles.

3.1.1 Regular Grids

This section provides a comprehensive review of pivotal contributions in regular grid approaches. These innovative methods have established a solid groundwork for harmonizing neural network architectures, particularly CNNs, with CFD for enhanced predictive capabilities, marking them as trailblazers in the field. DPUF [24] is the first to implement CNNs on regular grids for learning transient fluid dynamics. Their approach is notable for incorporating physical loss functions, which explicitly enforce the conservation of mass and momentum within the networks. This integration of physical principles and ML marks a significant step in the field. TF-Net [25] then aims to predict turbulent flow by learning from the highly nonlinear dynamics of spatiotemporal velocity fields. These fields are derived from large-scale fluid flow simulations, pivotal in turbulence and climate modeling. This work represents a significant effort in understanding complex fluid behaviors through ML. Incorporating theoretical principles into ML, EquNet [26] seeks to improve accuracy and generalization in fluid dynamics modeling. It achieves this by incorporating various symmetries into the learning process, thereby enhancing the model's robustness and theoretical grounding. RSteer [27] presents a novel class of approximately equivariant networks. These networks are specifically designed for modeling dynamics that are imperfectly symmetric, relaxing equivariance constraints. This approach offered

[Fig. 4 schematic: input data (trajectories; physical fields such as pressure, velocity, and density; initial and boundary conditions; physical laws and governing equations) feed three model families. Data-driven Surrogates operate on regular grids, irregular meshes, or Lagrangian particles (discretization-dependent), or learn operators in physical or Fourier space (discretization-independent, e.g., branch/trunk operator networks, convolutional layers, Fourier layers). Physics-driven Surrogates comprise Physics-Informed Neural Networks (PINNs) and discretized constraint-informed networks. ML-assisted Numerical Solutions improve coarser simulations (e.g., advection, interpolation, filtering) and preconditioning. Applications span turbulence, combustion, reacting flow, weather, oceans, plasma, and biological fluids.]

Fig. 4: Overview of ML for computational fluid dynamics simulation. The left column encompasses various types of input data used in the models, including physical laws. The middle columns consist of three common frameworks used in constructing models with ML. The right column pertains to applications in various scenarios.
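The regular-grid setting can be made concrete with a minimal sketch (ours, not the architecture of any surveyed model): a single periodic 3×3 convolution whose fixed weights realize an explicit diffusion step, illustrating how a CNN layer on a uniform grid can express the kind of local physical update these surrogates learn from data:

```python
import numpy as np

def conv2d_periodic(field, kernel):
    """3x3 cross-correlation with periodic (wrap-around) boundaries."""
    out = np.zeros_like(field)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += kernel[di + 1, dj + 1] * np.roll(field, (-di, -dj), axis=(0, 1))
    return out

# Identity plus a scaled discrete Laplacian: one conv application equals one
# explicit diffusion update u <- u + alpha * Lap(u). A trained CNN surrogate
# would learn such stencil weights from data instead of fixing them.
alpha = 0.1
kernel = alpha * np.array([[0.0,  1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0,  1.0, 0.0]])
kernel[1, 1] += 1.0

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
u_next = conv2d_periodic(u, kernel)

# The kernel weights sum to 1, so the update conserves the spatial mean,
# a discrete analogue of mass conservation on a periodic domain.
print(abs(u.mean() - u_next.mean()) < 1e-12)  # → True
```

Stacking many such (learned, nonlinear) layers is what lets grid-based surrogates approximate richer transient dynamics than this single fixed stencil.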
a new perspective on handling complex dynamic systems in fluid dynamics. Collectively, these studies demonstrate the burgeoning potential of ML in fluid dynamics. They underscore the effectiveness of combining traditional fluid dynamics principles with data-driven techniques, particularly in the realms of regular grids and discretization-dependent methods.

3.1.2 Irregular Mesh

Irregular meshes challenge regular grid-based surrogates, prompting the adoption of adaptable solutions like Graph Neural Networks (GNNs) for diverse structures and sizes. The GNS model [28] and MeshGraphNets [29] both employ GNNs, with the former representing physical systems as particles in a graph using learned message passing for dynamics computation, and the latter simulating complex phenomena on irregular meshes, demonstrating the networks' proficiency in handling irregular topology. Further enhancing these approaches, MP-PDE [30] replaces heuristic components with backprop-optimized neural function mapping, refining PDE solutions, while LAMP [38] optimizes spatial resolutions to focus resources on dynamic regions. Additionally, the GNODE model [34] introduces a graph-based neural ordinary differential equation to learn the time evolution of dynamical systems, and the Flow Completion Network (FCN) [35] uses GNNs to infer fluid dynamics from incomplete data. Recognizing the limitations of CNNs, Zhao et al. [36] propose a novel model that integrates CNNs with GNNs to better capture connectivity and flow paths within porous media. Addressing the challenge of training GNNs on high-resolution meshes, MS-GNN-Grid [160], MS-MGN [161], and BSMS-GNN [162] explore strategies to manage mesh resolution and connectivity, with the latter providing a systematic solution without the need for manual mesh drawing. Complementing these developments, CARE [39] utilizes a context variable from trajectories to model dy-

TABLE 1: Overview of Data-driven Surrogates methods for computational fluid dynamics.


Methods Source Backbone Scenarios
3 Data-driven Surrogates
3.1.1 On Regular Grids
DPUF [24] J. Fluid Mech. CNN CylinderFlow
TF-Net [25] KDD 2020 CNN Turbulent flow
EquNet [26] ICLR 2021 CNN Rayleigh-Bénard, Oceans, Heat
RSteer [27] ICML 2022 CNN smoke
3.1.2 On Irregular Mesh
GNS [28] ICML 2020 GNN Various materials
MGN [29] ICLR 2021 GNN CylinderFlow, Transonic
MP-PDE [30] ICLR 2022 GNN Burgers, Wave, Navier-Stokes
Han et al. [31] ICLR 2022 GNN/Transformer CylinderFlow, Transonic
TIE [32] ECCV 2022 Transformer FluidFall, FluidShake
MAgNet [33] NeurIPS 2022 GNN/CNN Burgers
GNODE [34] ICLR 2023 GNN N-pendulums, Spring
FCN [35] Phys. Fluids GNN Navier-Stokes
Zhao et al. [36] Arxiv 2023 GNN Navier-Stokes
DINo [37] ICLR 2023 FourierNet Wave, Navier-Stokes, Shallow Water
LAMP [38] ICLR 2023 GNN Nonlinear PDE, Mesh
CARE [39] NeurIPS 2023 GNN CylinderFlow, Particle
BENO [40] ICLR 2024 GNN/Transformer Poisson
HAMLET [41] Arxiv 2024 GNN/Transformer Shallow Water, Darcy, Diffusion, Airfoil
3.1.3 On Lagrangian Particles
CC [42] ICLR 2020 CNN Particles, DamBreak
Wessels et al. [43] Comput. Methods Appl. Mech. Eng. MLP DamBreak
FGN [44] Comput Graph GNN Particles
MCC [45] AAAI 2023 CNN DamBreak
LFlows [46] ICLR 2024 Bijective layer Bird migration
Li et al. [47] Nat. Mach. Intell. Diffusion 3D Navier-Stokes
3.2.1 Deep Operator Network
DeepONet [48] Nat. Mach. Intell. MLP Reaction–Diffusion, other ODEs
PI-DeepONet [49] Sci. Adv. MLP Anti-Derivative, Reaction–Diffusion, Burgers, Eikonal, Transonic
MIONet [50] SIAM J. Sci. Comput. MLP ODE, Anti-Derivative, Reaction–Diffusion
Fourier-MIONet [13] Arxiv 2023 MLP Gas saturation
NOMAD [52] NeurIPS 2022 MLP Advection, Shallow Water, Anti-Derivative
Shift-DeepONet [53] ICLR 2023 MLP Advection, Burgers, Euler
HyperDeepONet [54] ICLR 2023 MLP Advection, Burgers, Shallow Water
B-DeepONet [51] J. Comput. Phys. MLP Anti-Derivative, Reaction–Diffusion, Advection
SVD-DeepONet [56] Comput. Methods Appl. Mech. Eng. MLP ODEs
L-DeepONet [55] Arxiv 2023 MLP Rayleigh-Bénard, Shallow Water
3.2.2 In Physical Space
MGNO [57] NeurIPS 2020 GNN Darcy, Burgers
Geneva et al. [58] Neural Networks Transformer Navier-Stokes, Lorenz
GNOT [59] ICML 2023 Transformer Transonic, Elastodynamic, Navier-Stokes, Darcy
CNO [60] NeurIPS 2023 CNN Poisson, Allen-Cahn, Navier-Stokes, Darcy
FactFormer [158] NeurIPS 2023 Transformer Kolmogorov flow, Smoke buoyancy
LNO [61] Arxiv 2023 Laplace Diffusion, Duffing, Reaction–Diffusion
KNO [62] Arxiv 2023 Koopman Bateman–Burgers, Navier-Stokes
ICON [63] PNAS Transformer Poisson, Reaction–Diffusion, ODEs
Transolver [159] ICML 2024 Transformer CarDesign, Airfoil
3.2.3 In Fourier Space
FNO [64] ICLR 2021 Fourier Burgers, Darcy, Navier-Stokes
PINO [65] ACM/IMS Trans. Data Sci. Fourier Burgers, Darcy, Navier-Stokes, Kolmogorov
GEO-FNO [66] JMLR Fourier Advection, Elastodynamic, Airfoil, Navier-Stokes
U-NO [67] TMLR Fourier Darcy, Navier-Stokes
SSNO [68] IEEE Access Fourier Burgers, Darcy, Navier-Stokes
F-FNO [69] ICLR 2023 Fourier Kolmogorov, Transonic
CFNO [70] ICLR 2023 Fourier Navier-Stokes, Shallow Water, Maxwell
CMWNO [71] ICLR 2023 Fourier Gray-Scott
MCNP [72] Arxiv 2023 Fourier Diffusion, Allen-Cahn, Navier-Stokes
G-FNO [73] ICML 2023 Fourier Navier-Stokes, Shallow Water
GINO [74] NeurIPS 2023 Fourier Ahmed-Body, Reynolds-Averaged Navier-Stokes
DAFNO [75] NeurIPS 2023 Fourier Hyperelasticity, Airfoil
FCG-NO [76] ICML 2024 Fourier Poisson, Diffusion
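As a schematic complement to the mesh-based methods in §3.1.2, the following sketch (our own simplification; the fixed random linear maps stand in for the learned edge and node MLPs of encode-process-decode simulators such as GNS [28] or MeshGraphNets [29]) performs one message-passing step on a tiny chain-shaped graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat = 5, 8

# A tiny chain mesh 0-1-2-3-4 as directed (sender, receiver) pairs, both directions.
edges = np.array([(0, 1), (1, 0), (1, 2), (2, 1),
                  (2, 3), (3, 2), (3, 4), (4, 3)])

h = rng.standard_normal((n_nodes, n_feat))                   # encoded node states
W_edge = rng.standard_normal((2 * n_feat, n_feat)) / n_feat  # stand-in edge "MLP"
W_node = rng.standard_normal((2 * n_feat, n_feat)) / n_feat  # stand-in node "MLP"

# 1) Message: each edge transforms the concatenated sender/receiver features.
msg_in = np.concatenate([h[edges[:, 0]], h[edges[:, 1]]], axis=1)
messages = np.tanh(msg_in @ W_edge)

# 2) Aggregate: sum incoming messages at each receiver (permutation-invariant).
agg = np.zeros_like(h)
np.add.at(agg, edges[:, 1], messages)

# 3) Update: combine each node's state with its aggregated messages.
h_new = np.tanh(np.concatenate([h, agg], axis=1) @ W_node)
print(h_new.shape)  # (5, 8): one step of local information exchange
```

Stacking several such steps widens each node's receptive field over the mesh; a decoder would then map the final latents to physical quantities such as accelerations.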

namic environments, enhancing adaptability and precision. BENO [40] introduces a boundary-embedded neural operator based on graph message passing to incorporate complex boundary shapes into PDE solving. A similar idea is employed in HAMLET [41], which utilizes graph transformers alongside modular input encoders to seamlessly integrate PDE information into the solution process.

GNNs are naturally suitable for irregular meshes; however, they are either steady-state or next-step prediction models, which often experience drift and accumulate errors over time. In contrast, sequence models leverage their extended temporal range to identify and correct drifts. Han et al. [31] first use a GNN to aggregate local data, then coarsen the graph to pivotal nodes in a low-dimensional latent space. A transformer model predicts the next latent state by attending over the sequence, and another GNN restores the full graph. Additionally, TIE [32] effectively captures the complex semantics of particle interactions without using edges, by modifying the self-attention module to mimic the update mechanism of graph edges in GNNs.

Another technical approach is based on coordinate-based Implicit Neural Representation (INR) networks,

Another technical approach is based on coordinate-based Implicit Neural Representation (INR) networks, which allow the model to operate without the need to interpolate or regularize the input grid. For example, MAgNet [33] employs INRs to facilitate zero-shot generalization to new non-uniform meshes and to extend predictions over longer time spans. Besides, DINo [37] models the flow of a PDE using continuous-time dynamics for spatially continuous functions, enabling flexible extrapolation at arbitrary spatio-temporal locations.

3.1.3 Lagrangian Particles

Fluid representations vary, and particle-based Lagrangian representations are also popular due to their wide usage. Yet fluid samples typically comprise tens of thousands of particles, especially in complex scenes. CC [42] utilizes a special type of convolutional network as its main differentiable operation, effectively linking particles to their neighbors and simulating Lagrangian fluid dynamics with enhanced precision. Wessels et al. [43] combine PINNs with the updated Lagrangian method to solve incompressible free-surface flow problems. FGN [44] conceptualizes fluid particles as nodes within a graph, with their interactions depicted as edges, and Prantl et al. [163] design an architecture that conserves momentum. Furthermore, Micelle-CConv [45] introduces a dynamic multi-scale gridding method that minimizes the number of elements to be processed by identifying and leveraging repeated particle motion patterns within consistent regions. More recently, LFlows [46] models fluid densities and velocities continuously in space and time by expressing solutions to the continuity equation as time-dependent density transformations through differentiable and invertible maps, and Li et al. [47] use a diffusion model to accurately reproduce the statistical and topological properties of particle trajectories in Lagrangian turbulence.

3.2 Independent of Discretization

The neural operator is a powerful means of achieving independence from discretization. It learns the solution mapping between two infinite-dimensional function spaces for PDE solving. Let N be a neural operator that aims to approximate the solution operator S of the fluid governing equations (i.e., the NS equations). For given fluid dynamics parameters µ (e.g., the Reynolds number), we seek the velocity field v and pressure field p. This is formalized as:

N : M → H,  (2)

µ ↦ (v, p) ≈ S[µ],  (3)

where M is the parameter space, H is the function space composed of velocity and pressure fields, S is the exact solution operator of the NS equations, and N[µ] is the approximate solution given by the neural operator. We categorize existing methods by the means of realizing the integral function approximation. 1) Deep Operator Network: it approximates operators by dividing the input into two branches, one learning representations of the input function and the other encoding the specific points at which the output function is evaluated. 2) In Physical Space: it leverages the flexibility of neural networks on physical spaces (e.g., GNNs, CNNs) to model complex relationships in data. 3) In Fourier Space: it utilizes the Fourier transform to model the integral operators in a spectral space and capture global dependencies.

3.2.1 Deep Operator Network

DeepONet [48] represents a significant advance in neural operator theory, marking the transition towards learning mappings between function spaces. Following this pioneering work, PI-DeepONet [49] uses PDE residuals for unsupervised training, enhancing the ability to learn without explicit supervision. MIONet [50] and Fourier-MIONet [13] expand on this by introducing a neural operator with branch nets for input-function encoding and a trunk net for output-domain encoding, with the latter incorporating the FNO to model multi-phase flow dynamics under varied conditions. NOMAD [52] further innovates with a nonlinear decoder map that can model nonlinear submanifolds within function spaces. HyperDeepONet [54] and Shift-DeepONet [53] respectively utilize a hyper-network to reduce parameter count while enhancing learning efficiency, and a sophisticated nonlinear reconstruction mechanism to approximate discontinuous PDE solutions. Additionally, B-DeepONet [51] incorporates a Bayesian framework using replica-exchange Langevin diffusion to improve training convergence and uncertainty estimation. SVD-DeepONet [56] and L-DeepONet [55] employ methods derived from proper orthogonal decomposition and latent representations identified by auto-encoders to improve model design and to handle high-dimensional PDE functions, respectively.

3.2.2 In Physical Space

Implementing functional mappings in physical spaces using diverse network architectures has led to the development of novel neural operators. MGNO [57] utilizes a class of integral operators whose kernel integration is computed through graph-based message passing on different GNNs. Besides, LNO [61] enhances interpretability and generalization ability, setting it apart from Fourier-based approaches by leveraging a more intrinsic mathematical relationship in function mappings. GNOT [59] introduces a scalable and efficient transformer-based framework featuring heterogeneous normalized attention layers, which offers exceptional flexibility to accommodate multiple input functions and irregular meshes. Recently, CNO [60] adapted CNNs to handle functions as inputs and outputs; it maintains continuity even in a discretized computational environment, diverging from FNO's emphasis on Fourier space to focus on convolutional processing. Furthermore, Geneva et al. [58] and KNO [62] center on approximating the Koopman operator, which acts on the flow mapping of dynamical systems and enables the solution of an entire family of non-linear PDEs through simpler linear prediction problems.

Additionally, the Transformer architecture has also been applied. FactFormer [158] introduces a low-rank structure that enhances model efficiency through multidimensional factorized attention. Besides, In-Context Operator Networks (ICON) [63] revolutionize operator learning by training a neural network capable of adapting to different problems without retraining, contrasting with methods that require specific solutions or retraining for new problems.
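The branch-trunk factorization underlying the DeepONet family (Section 3.2.1) can be sketched as follows; the layer sizes, random (untrained) weights, and toy input function are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Randomly initialized MLP weights (illustrative, untrained)."""
    return [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes, sizes[1:])]

def forward(ws, x):
    for w in ws[:-1]:
        x = np.tanh(x @ w)
    return x @ ws[-1]

m, p = 50, 32                    # number of input-function sensors, latent width
branch = mlp([m, 64, p])         # encodes the sampled input function u(x_1..x_m)
trunk = mlp([1, 64, p])          # encodes a query coordinate y

u_sensors = np.sin(np.linspace(0, np.pi, m))[None, :]   # one sampled input function
y = np.array([[0.3], [0.7]])                            # two query points
# DeepONet-style output: inner product of branch and trunk embeddings
G = forward(branch, u_sensors) @ forward(trunk, y).T    # shape (1, 2)
print(G.shape)
```

The operator output G(u)(y) is the inner product of the two embeddings; training would fit both networks to pairs of input and output functions.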
More recently, Transolver [159] has introduced a novel physics-based attention mechanism, which adaptively divides the discretized domain to effectively capture complex physical correlations.

3.2.3 In Fourier Space

The Fourier Neural Operator (FNO) [64] marks a significant development in neural operators by parameterizing the integral kernel directly in Fourier space, creating an expressive and efficient architecture. Expanding on this concept, GEO-FNO [66] and GINO [74] address the complexities of solving PDEs on arbitrary geometries and on large-scale variable geometries, respectively. U-NO [67] enhances the structure with a U-shaped, memory-enhanced design for deeper neural operators, while PINO [65] combines training data with physics constraints, enabling it to learn solution operators even when training data are unavailable. Further advancements include F-FNO [69], which incorporates separable spectral layers, augments residual connections, and applies sophisticated training strategies. Similarly, Rafiq et al. [164] utilize spectral feature aggregation within a deep Fourier neural network. SSNO [68] integrates spectral and spatial feature learning, and MCNP [72] advances unsupervised neural solvers with probabilistic representations of PDEs. DAFNO [75] and RecFNO [165] respectively address surrogate models for irregular geometries and evolving domains, and improve accuracy and mesh transferability. CoNO [166] goes further by parameterizing the integral kernel within the complex fractional Fourier domain. Choubineh et al. [167] and CMWNO [71] respectively use FEM-calculated outputs as benchmarks and decouple integral kernels during multiwavelet decomposition in wavelet space, enhancing analytical capability. CFNO [70] integrates multi-vector fields with Clifford convolutions and Fourier transforms, addressing time evolution in correlated fields. G-FNO [73] extends group convolutions into the Fourier domain, crafting layers that remain equivariant to spatial transformations such as rotations and reflections, thus enhancing model versatility.

4 Physics-driven Surrogates

Although data-driven models have demonstrated potential in CFD simulations, they are not without challenges, such as the significant expense of data collection and concerns over generalization and robustness. Consequently, integrating physics-based priors is crucial, as it leverages physical laws to enhance model reliability and applicability. We categorize these methods by the type of embedded knowledge into: 1) Physics-Informed and 2) Constraint-Informed. The former transforms physical knowledge into constraints for the neural network, ensuring that predictions adhere to known physical principles; the latter draws inspiration from traditional PDE solvers, integrating those approaches into the neural network's training process.

4.1 Physics-Informed Neural Network (PINN)

The development of PINNs marks a significant evolution, blending deep learning with physical laws to solve complex fluid governing differential equations. Formally, a PINN uses a neural network to approximate a function u(x) subject to a differential equation D(u) = 0 over a domain Ω and boundary conditions B(u) = 0 on the boundary ∂Ω. The essence of PINN training is captured by the loss function:

L(θ) = Ldata(θ) + Lphysics(θ),  (4)

where Ldata is the data-driven term ensuring fidelity to known solutions or measurements, and Lphysics encodes the physical laws, typically differential equations, governing the system. These terms are:

Ldata(θ) = (1/N) Σ_{i=1}^{N} ∥uθ(xi) − u(xi)∥²,  (5)

Lphysics(θ) = (1/M) Σ_{j=1}^{M} ∥D(uθ)(xj)∥² + (1/P) Σ_{k=1}^{P} ∥B(uθ)(xk)∥²,  (6)

with θ denoting the parameters of the neural network, uθ the network's approximation of u, and xi, xj, xk sampled points from the domain and boundary.

Building upon this foundational concept, the seminal work [77] lays the foundation for PINNs, demonstrating their capability in solving forward and inverse problems governed by differential equations. Subsequently, PINN-SR [79] further enhances PINNs by integrating deep neural networks for enriched representation learning with sparse regression, thereby refining the approximation of system variables. NSFnet [80] introduces Navier-Stokes flow nets, specializing PINNs for simulating incompressible laminar and turbulent flows by directly encoding the governing equations, thus reducing the reliance on labeled data. Advancing into more sophisticated domains, Meta-Auto-Decoder [86] leverages a mesh-free and unsupervised approach, utilizing meta-learning to encode PDE parameters as latent vectors; this allows quick adaptation of pre-trained models to specific equation instances, enhancing flexibility and efficiency. Bai et al. [78] expand PINNs' applications in fluid dynamics, particularly in simulating flow past cylinders without labeled data, by transforming the equations into continuum and constitutive formulations, showcasing PINNs' potential in flow data assimilation. Similarly, PhyGeoNet [81] develops a unique approach by morphing between complex meshes and a square domain and proposing a novel physics-constrained CNN architecture that enables learning on irregular domains without relying on labeled data.

Technological enhancements in the field include PINN-LS [82], which optimizes the learning process by treating the objective function as a regularization term with adaptive weights. The integration of ResNet blocks into PINNs, termed ResPINN [83], has proved crucial for solving fluid flows governed by partial differential equations and enables precise predictions of velocity and pressure fields across spatio-temporal domains. Moreover, competitive PINNs such as CPINN [84] enhance model accuracy by training a discriminator to identify and correct errors.
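As a concrete toy instance of the loss in Eqs. (4)-(6), the sketch below evaluates the data, PDE-residual, and boundary terms for the hypothetical equation u'' + π²u = 0 with zero boundary values, using a closed-form stand-in for the network and finite differences in place of automatic differentiation:

```python
import numpy as np

def u_theta(x):
    # stand-in "network" prediction; a real PINN would evaluate an NN here
    return np.sin(np.pi * x)

def pinn_loss(x_data, u_data, x_res, x_bnd, h=1e-4):
    # L_data: misfit against known measurements, Eq. (5)
    l_data = np.mean((u_theta(x_data) - u_data) ** 2)
    # L_physics: residual of D(u) = u'' + pi^2 u via central differences, Eq. (6)
    upp = (u_theta(x_res + h) - 2 * u_theta(x_res) + u_theta(x_res - h)) / h**2
    l_pde = np.mean((upp + np.pi**2 * u_theta(x_res)) ** 2)
    l_bnd = np.mean(u_theta(x_bnd) ** 2)   # boundary condition u = 0 on dOmega
    return l_data + l_pde + l_bnd          # composite loss, Eq. (4)

x = np.linspace(0, 1, 11)
loss = pinn_loss(x, np.sin(np.pi * x), x[1:-1], np.array([0.0, 1.0]))
print(loss)
```

Because the stand-in solves the toy equation exactly, all three terms are near zero; replacing u_theta with a trainable network and minimizing this loss over θ recovers the usual PINN training setup.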
TABLE 2: Overview of physics-driven surrogate methods for computational fluid dynamics.

Methods | Source | Backbone | Scenarios

4.1 Physics-Informed Neural Networks
PINN [77] | J. Comput. Phys. | MLP | Allen-Cahn, Navier-Stokes, Schrödinger, Korteweg-de Vries, Burgers
Bai et al. [78] | J. Hydrodyn. | MLP | Propagating wave, Lid-driven cavity flow, Turbulent flows
PINN-SR [79] | Nat. Commun. | MLP | Kuramoto-Sivashinsky, Navier-Stokes, Schrödinger, Reaction-Diffusion, Burgers
NSFnet [80] | J. Comput. Phys. | MLP | Kovasznay, CylinderFlow, 3D Beltrami
PhyGeoNet [81] | J. Comput. Phys. | CNN | Heat, Poisson, Navier-Stokes
PINN-LS [82] | APS DFD Meet. Abstr. | MLP | Laplace, Burgers, Kuramoto-Sivashinsky, Navier-Stokes
ResPINN [83] | Water | MLP | Burgers, Navier-Stokes
CPINN [84] | arXiv 2022 | MLP | Poisson, Schrödinger, Burgers, Allen-Cahn
Stan [85] | arXiv 2022 | MLP | Dirichlet, Neumann, Klein-Gordon, Heat
Meta-Auto-Decoder [86] | NeurIPS 2022 | MLP | Burgers, Laplace, Maxwell
BINN [87] | Comput. Methods Appl. Mech. Eng. | MLP | Poisson, Navier-Stokes
Hodd-PINN [88] | arXiv 2023 | ResNet | Convection, Heat
DATS [89] | ICLR 2024 | MLP | Burgers, Reaction-Diffusion, Helmholtz, Navier-Stokes
PINNsFormer [90] | ICLR 2024 | MLP/Transformer | Reaction-Diffusion, Wave, Navier-Stokes
PeRCNN [91] | Nat. Mach. Intell. | CNN | Reaction-Diffusion, Burgers, Kolmogorov
NAS-PINN [92] | J. Comput. Phys. | MLP | Burgers, Advection, Poisson

4.2 Discretized Constraint-Informed Neural Networks
DGM [93] | J. Comput. Phys. | RNN | Free-boundary PDE
AmorFEA [94] | ICML 2020 | MLP | Poisson
EDNN [95] | Phys. Rev. E | MLP | Advection, Burgers, Navier-Stokes, Kuramoto-Sivashinsky
Liu et al. [96] | Commun. Phys. | CNN | Reaction-Diffusion, Burgers, Navier-Stokes
Gao et al. [97] | Comput. Methods Appl. Mech. Eng. | GNN | Poisson, Elastodynamics, Navier-Stokes
ADLGM [98] | J. Comput. Phys. | MLP | Poisson, Burgers, Allen-Cahn
INSR [99] | ICML 2023 | MLP | Advection, Euler, Elastodynamics
Neu-Gal [100] | J. Comput. Phys. | MLP | Korteweg-de Vries, Allen-Cahn, Advection
PPNN [101] | Commun. Phys. | CNN | Reaction-Diffusion, Burgers, Navier-Stokes
FEX [102] | arXiv 2024 | Tree | Hindmarsh-Rose, FHN

In addition, Stan et al. [85] introduce the smooth Stan activation function to streamline the gradient flow needed for computing derivatives and to scale the input-output mapping, improving learning efficiency. There have also been notable integrations and applications of PINNs. For instance, BINN [87] merges PINNs with the boundary integral method, facilitating their use in complex geometrical scenarios. Additionally, the integration of high-order numerical schemes into PINNs is exemplified by Hodd-PINN [88], which combines high-order finite-difference methods, Weighted Essentially Non-Oscillatory (WENO) discontinuity detection, and traditional PINNs to bolster their capability in modeling complex fluid dynamics.

However, traditional meta-learning approaches often treat all PINN tasks uniformly. To address this, DATS [89] derives an optimal analytical solution that tailors the sampling probability of individual PINN tasks to minimize their validation loss across various scenarios.

Furthermore, PeRCNN [91] introduces a physically encoded architecture that embeds prior physical knowledge into its structure, leveraging a spatio-temporal learning paradigm to aim for a robust, universal model with enhanced interpretability and adherence to physical principles. PINNsFormer [90] brings architectural innovation with transformers, accurately approximating PDE solutions by leveraging multi-head attention to capture temporal dependencies. More recently, NAS-PINN [92] introduces a neural-architecture-search-guided method for PINNs, automating the search for optimal neural architectures tailored to specific PDEs and pushing the boundaries of what can be achieved with ML in the physical sciences.

4.2 Discretized Constraint-Informed Neural Network

Recent studies have explored merging the core principles of PDE equations with neural network architectures to address complex fluid dynamics problems, an approach we call discretized constraint-informed neural networks. Representative of this approach is the Deep Galerkin Method (DGM) [93], which approximates high-dimensional PDE solutions using deep neural networks trained to satisfy the differential operators, initial conditions, and boundary conditions. Similarly, EDNN [95] numerically updates neural networks to predict extensive state-space trajectories, enhancing parameter-space navigation. AmorFEA [94] combines the accuracy of PDE solutions with the advantages of traditional finite element methods. Additionally, Liu et al. [96] incorporate partially known PDE operators into the CNN kernel, improving stability for extended roll-outs, while Gao et al. [97] integrate PINNs with adaptive meshes using the Galerkin method to reduce training complexity on general geometries. Further developments include INSR [99], which integrates classical time integrators into neural networks to effectively address non-linearity. ADLGM [98] and Neu-Gal [100] refine this integration through adaptive sampling and Neural Galerkin schemes, respectively, both enhancing the learning of PDE solutions through active-learning strategies. Moreover, PPNN [101] innovatively embeds physics-based priors into its architecture by mapping discretized governing equations to network structures, highlighting the deep connections between PDE operators and network design. FEX [102] represents dynamics on complex networks using binary trees composed of finite mathematical operators, requiring only minor prior knowledge of the networks.

5 ML-assisted Numerical Solutions

Despite advancements in end-to-end surrogate modeling, such models have yet to match the accuracy of numerical solvers, especially for long-term rollouts, where error accumulation becomes significant, and in scenarios involving working conditions unseen during training. Consequently, researchers are exploring blends of ML and numerical solvers, carefully replacing only parts of the numerical solver to balance
speed, accuracy, and generalization. We categorize these methods into three primary classes: 1) enabling accurate simulations at coarser resolutions or with fewer degrees of freedom, ranging from learning discretization schemes and fluxes to closure modeling and reduced modeling; 2) employing learned preconditioners to accelerate linear-system solutions; and 3) a range of miscellaneous techniques, from super-resolution to correcting iterative steps.

5.1 Assist Simulation at Coarser Scales

In numerical methods such as finite-difference (FD) and finite-volume (FV) schemes, discretization errors arise as the mesh resolution coarsens. The fundamental idea behind learnable discretization schemes or fluxes is to generate space- and time-varying FD or FV coefficients, or corrections to standard coefficients, that maintain high accuracy even at coarse mesh resolutions. In the study by [105], FV coefficients are learned and applied to solve 1D equations such as Burgers, Kuramoto-Sivashinsky, and Korteweg-de Vries. Building on this, Kochkov et al. [103] and List et al. [106] use CNNs to learn fluxes, extending the approach to 2D turbulent flows. [104] introduces learnable fluxes for bi-material 2D compressible flows with complex boundary shapes at coarse resolutions; this method learns fluxes for various geometric primitives and approximates general shapes by decomposing them into these primitives.

Regarding simulations with larger time steps, Zhang et al. [6] focus on learning to decouple the stiff chemical-reaction component from the coupled flow-combustion system, allowing the flow to be simulated with arbitrary time steps. More recently, Sun et al. [107] tackle the substantial information loss in down-sampled temporal resolutions by combining time-series sequencing and accounting for more than just the most recent states to predict temporal terms with enriched information.

5.2 Preconditioning

In the simulation of incompressible flows, the fractional-step method (also known as the advection-projection method) has proven highly effective and is attracting increasing attention. It involves two primary stages: first, the prediction step advects the current velocity u_n to an intermediate state u*_{n+1}, ignoring the influence of the pressure gradient; second, the pressure-projection step enforces the incompressibility constraint, ∇ · u_{n+1} = 0, by projecting u*_{n+1} into a divergence-free space. The updated velocity is determined according to Eq. (7):

u_{n+1} = u*_{n+1} − (∆t/ρ)∇p_{n+1},  (7)

where the new pressure field is governed by Eq. (8):

∇²p_{n+1} = (ρ/∆t) ∇ · u*_{n+1}.  (8)

Solving Eq. (8) often involves the preconditioned conjugate gradient (PCG) method, where traditionally the design of preconditioners necessitates expert domain knowledge. However, advancements in ML have begun to provide a data-driven approach for developing preconditioners that surpass traditional methods in performance. Sappl et al. [110] employ CNNs to learn preconditioners in regular domains, demonstrating superior performance over traditional methods. In the context of improving MultiGrid preconditioners, Greenfeld et al. [108] develop a model that maps local discretization information to local prolongation matrices, using the spectral radius of the error-propagation matrix as a self-supervised loss; this approach requires no labeled data and outperforms previous supervised, black-box models. Subsequently, Luz et al. [109] adopt a similar unsupervised fashion but extend learned prolongation operators to complex meshes, utilizing GNNs as the backbone.

5.3 Miscellaneous

Besides the methods above, there exist other ML-assisted approaches that are harder to categorize. Pathak et al. [111] and Um et al. [113] both time-step PDEs at a coarser scale and then append ML modules to upscale the results, aiming to align them with high-resolution simulations; their work primarily focuses on regular domains. In a similar vein, CFD-GCN [114] utilizes Graph Convolutional Networks (GCNs) to conduct the upscaling, extending the application to general meshes. CFDNet [112] focuses on accelerating convergence with fewer iterative steps by learning to correct the temporary solutions during iterations.

Fig. 5: Demonstration of inverse design to optimize the design parameters. We review existing methods with a novel classification including PDE-constrained Methods and Data-driven Methods.

6 Inverse Design & Control

6.1 Inverse Design

The problem of inverse design aims at finding a set of high-dimensional design parameters (e.g., boundary and initial conditions) for a physical system so as to optimize a set of specified objectives and constraints. It occurs across many engineering domains such as mechanical, materials, and aerospace engineering [168]. In the context of CFD, this problem can be formulated as:

(u*, p*, γ*) = arg min_{u,p,γ} J(u, p, γ),  s.t. C(u, p, γ) = 0,  (9)
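A minimal sketch of the data-driven route to problem (9): a differentiable surrogate stands in for the PDE constraint, and the design parameter γ is refined by gradient descent on the objective J. The quadratic surrogate and scalar design variable are illustrative assumptions only:

```python
import numpy as np

def surrogate(gamma):
    """Hypothetical differentiable surrogate mapping a design parameter
    to a flow quantity of interest (stand-in for a trained simulator)."""
    return (gamma - 1.5) ** 2 + 0.1       # e.g., predicted drag

def grad(f, x, h=1e-6):
    # numeric gradient for the sketch; a neural surrogate would use autodiff
    return (f(x + h) - f(x - h)) / (2 * h)

gamma = 0.0                               # initial design guess
for _ in range(200):                      # gradient descent on J(gamma)
    gamma -= 0.05 * grad(surrogate, gamma)

print(round(gamma, 3))                    # approaches the optimal design 1.5
```

PDE-constrained methods differ mainly in that the constraint C = 0 enters the optimization explicitly (e.g., as a penalty or Lagrangian term) rather than being absorbed into a pre-trained surrogate.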
where γ represents the design parameters, (u, p) contains the velocity and pressure in Eq. (1), J is the design objective, and C(u, p, γ) = 0 comprises the PDE constraints in Eq. (1), with γ additionally involved as initial and boundary conditions. Inverse design is closely related to inverse problems in CFD, which take a similar formulation to problem (9); the main difference is that inverse problems adopt J as a measure of PDE state-estimation error, whereas inverse design problems use J to evaluate a particular design property [168]. Nevertheless, models and algorithms for both kinds of problems are similar.

The inverse design problem is challenging due to the following factors: (1) it requires accurate and efficient simulation of the PDE systems; (2) the design space is typically high-dimensional with complex constraints; and (3) data-driven inverse design must generalize to scenarios more complex than the observed samples. Along with the rapid progress of ML for PDE simulations, ML for inverse design has gained increasing attention in recent years. Existing work can generally be classified into the three categories shown in Fig. 5.

6.1.1 PDE-constrained Methods

Given the explicit formulation of PDE dynamics, a straightforward approach to inverse design is to optimize the design parameters by minimizing the design objective through the PDE dynamics. This approach is closely related to PINNs. Since all the constraints in PINNs are soft, hPINN imposes hard constraints by using the penalty method and the augmented Lagrangian method [115]. Later, to improve the accuracy and training efficiency of PINNs [116], gPINN utilizes the gradient information from the PDE residual and incorporates it into the loss function. Bi-PINN [117] further presents a novel bi-level optimization framework that decouples the optimization of the objective and constraints, avoiding the subtle hyperparameter tuning required in the unconstrained version of problem (9). For inverse problems where the training of PINNs may be sensitive, a recent study on electrical impedance tomography [118] introduces a data-driven energy-based model (EBM) as a prior within a Bayesian approach, improving the overall accuracy and quality of tomographic reconstruction.

6.1.2 Data-driven Methods

For scenarios where explicit PDE dynamics are not attainable, inverse design can be performed in a data-driven way. A representative paradigm is proposed in [119]: first train a surrogate model to approximate the PDE dynamics, then optimize the design parameters through backpropagation of the surrogate model by minimizing the design objective. The effectiveness of this paradigm in CFD inverse design has been verified by manipulating fluid flows and optimizing the shape of an airfoil to minimize drag. Regarding the architectures of the surrogate models, GNNs are natural choices, as they effectively characterize the dependency of the design objective on design parameters over the evolution of PDE dynamics on mesh data structures [169], [119]. In addition, by employing a latent space, computational efficiency can be significantly improved, since backpropagation is performed in a much smaller latent dimension and evolution model [120].

Another line of data-driven methods is not limited to the inverse design of PDEs; these methods are proposed for general inverse design tasks. By including them in this survey, we aim to provide a broader view of inverse design problems, which may inspire future research on the inverse design of CFD. The main idea of these methods is to directly learn an inverse mapping from the design objective to the design parameters [121], [124]. Model variants include the Invertible Neural Network (INN) [121], the Conditional Invertible Neural Network (cINN) [124], the Invertible Residual Network [122], and the Invertible Auto-encoder [123], among others. They are compared in [124], and a more recent empirical study is conducted in [125]. A more general setting of inverse design is considered in MINs [126], where the objective function is unknown and only a dataset of pairs of design parameters and objective values is available; MINs learn an inverse mapping from the objective value to the design parameters such that the design parameters maximize the unknown objective function.

Fig. 6: Demonstration of controlling a physical system to achieve a specific objective with Supervised Learning Methods, Reinforcement Learning Methods, and PDE-constrained Methods.

6.2 Control

The control problem of PDE systems is also fundamental and has wide applications. Its primary goal is to control a physical system to achieve a specific objective by applying time-varying external forces. The time-varying nature of the external force terms adds complexity, making control more challenging than inverse design. For physical systems constrained by PDEs, the control problem typically involves finding a control function f to steer the system:

f* = arg min_f J(f, u),  (10)

s.t. ∂u/∂t = F(u, ∂u/∂x, ∂²u/∂x², ...) + f(t, x),  (11)

where u is the solution, f is the external force term, F is a function, and J represents the control objective.

Over the last few decades, classical methods have been widely used for solving PDE control problems, including adjoint methods [170], [171], Proportional-Integral-Derivative control [172], and Model Predictive Control [173]. However, these approaches possess certain drawbacks, including high computational costs and limited applicability.

Consequently, ML techniques have emerged as a popular means of addressing these issues.
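To make the formulation in Eqs. (10)-(11) concrete, the sketch below rolls out a forced 1-D heat equation (taking F = ν ∂²u/∂x²) under a candidate control f and evaluates a terminal-misfit objective J; the grid, target profile, and candidate controls are illustrative assumptions:

```python
import numpy as np

def rollout(f, u0, nu=0.1, dt=1e-3, steps=500):
    """Explicit finite-difference rollout of u_t = nu * u_xx + f(t, x)."""
    u, n = u0.copy(), u0.size
    x = np.linspace(0.0, 1.0, n)
    for k in range(steps):
        uxx = np.zeros(n)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) * (n - 1) ** 2
        u = u + dt * (nu * uxx + f(k * dt, x))
        u[0] = u[-1] = 0.0                 # homogeneous Dirichlet boundaries
    return u

def J(f, u0, target):
    """Control objective: terminal misfit to a target profile."""
    return np.mean((rollout(f, u0) - target) ** 2)

x = np.linspace(0.0, 1.0, 41)
u0 = np.zeros(41)
target = np.sin(np.pi * x)
forced = J(lambda t, x: 2.5 * np.sin(np.pi * x), u0, target)
unforced = J(lambda t, x: 0.0 * x, u0, target)
print(forced < unforced)  # forcing toward the target lowers J
```

A supervised or PDE-constrained controller (Section 6.2) would differentiate J with respect to the parameters of f, rather than merely comparing fixed candidates, while a reinforcement-learning controller would treat the rollout as an environment returning rewards.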
dynamics [129], various specific problems such as drag re- high-dimensional distributions, which inspires the genera-
duction [174], [175], conjugate heat transfer [176], [177], and tive inverse design paradigm. The recent approach SMDP
swimming [178] have been addressed using ML techniques. optimizes the inital state of a physical system by moving
We categorize the ML-based methods into three types for the system’s current state backward in time step by step
the following discussion, which is also shown in Fig 6. via an approximate inverse physics simulator and a learned
correction function [182]. In contrast, CinDM [183], which
6.2.1 Supervised Learning Methods
A significant category of deep learning-based control methods involves training a surrogate model for forward prediction and then obtaining the optimal control from gradients derived via backpropagation. In [127], researchers introduce a hierarchical predictor-corrector approach for controlling complex nonlinear physical systems over extended horizons. In a subsequent study, researchers develop a more direct two-stage method: first learn the solution operator, then employ gradient descent to determine the optimal control [128].

6.2.2 Reinforcement Learning Methods
Deep reinforcement learning algorithms are also widely applied to control fluid systems [129], [130], [131]. Existing works have utilized a wide range of reinforcement learning algorithms, including Deep Q-Learning, Deep Deterministic Policy Gradient, Trust Region Policy Optimization, Soft Actor-Critic, and Proximal Policy Optimization [179]. There is also an open-source Python platform, DRLinFluids [180], that implements numerous reinforcement learning algorithms for controlling fluid systems; the embedded algorithms include well-known model-free and model-based RL methods. These methods typically teach an agent to make decisions sequentially so as to maximize a reward, and they do not consider physics information directly.

6.2.3 PDE-constrained Methods
In addition to the two types of methods above, there is a category of algorithms that can find control signals meeting the requirements solely through the form of the PDEs, without data. These algorithms are all based on PINNs [77]. [132] propose a concise two-stage method: they first train the PINN's parameters by solving a forward problem, and then employ a simple yet effective line search strategy that evaluates the control objective with a separate PINN forward computation taking the PINN optimal control as input. Control Physics-Informed Neural Networks (Control PINNs) [133], by contrast, are a one-stage approach that learns the system states, the adjoint system states, and the optimal control signals simultaneously. The first method may be computationally intensive and can produce non-physical results due to the indirect relationships in the model; the second offers direct computation of variables and more efficient handling of complex systems, but it may result in large systems of equations.

6.3 Discussion
The rapid development of generative models, especially the recent diffusion models [181], opens a new horizon for the inverse design of CFD. The significant advantage of diffusion models is their ability to effectively sample from complex high-dimensional distributions. For example, CinDM optimizes the energy function captured by diffusion models of observed trajectories, denoises whole trajectories from random noise, and does not involve a simulator. A notable feature of CinDM is that it enables flexible compositional inverse design during inference. Due to the promising performance of diffusion models in various design tasks [184], [185], we believe they will become one of the mainstream methods for inverse design in the future.

As for control problems, several new methodologies employing novel architectural frameworks have emerged recently. [186] introduces the Decision Transformer, an architecture that frames decision making as a problem of conditional sequence modeling, using a causally masked Transformer to directly output optimal actions. The model conditions on desired returns, past states, and actions, enabling it to generate future actions that achieve specified goals. [187] employs a diffusion model to simultaneously learn the entire trajectories of both states and control signals; during inference, guidance related to the objective J, along with a prior reweighting technique, helps identify control signals closer to the optimum.

7 APPLICATIONS

7.1 Aerodynamics
In the realm of aerodynamics, CFD is used to simulate and analyze the flow of air over aircraft surfaces, optimizing designs for improved performance and efficiency. ML methods have emerged as a transformative force, enabling more precise simulations and innovative design methodologies. Mao et al. [134] utilize PINNs to approximate the Euler equations, which are crucial for modeling high-speed aerodynamic phenomena. Similarly, Huang et al. [135] explore the integration of PINNs with the direct-forcing immersed boundary method, pioneering a novel approach within computational fluid dynamics that enhances the simulation of boundary interactions in fluid flows. Sharma et al. [136] develop a physics-informed ML method that integrates neural networks with physical laws to predict melt pool dynamics, such as temperature, velocity, and pressure, without relying on any training data for velocity; this optimization of the PINN architecture significantly enhances the efficiency of model training. Auddy et al. [137] introduce the Gravity-Informed Neural Network (GRINN), a PINN-based framework designed to simulate 3D self-gravitating hydrodynamic systems, showing great potential for modeling astrophysical flows. On the other hand, Shan et al. [138] apply these networks to turbulent flows involving both attached and separated conditions, significantly improving prediction accuracy for new flow conditions and varying airfoil shapes. Furthermore, DAIML [1] illustrates how data-driven optimization can lead to highly efficient airfoil shapes for aerial robots, breaking new ground in the optimization of aerodynamic performance. And Deng et al.
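As a minimal illustration of the sequential decision-making loop (a hypothetical toy problem, not taken from the cited works, and far simpler than the deep RL methods listed above, which replace the table below with a neural network), the sketch runs tabular Q-learning on a five-state chain where the agent must learn to move right:

```python
import numpy as np

# Toy control problem: states 0..4 on a chain, actions 0 = left, 1 = right;
# reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.3        # step size, discount, exploration rate

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(1000):                            # training episodes
    s = int(rng.integers(0, n_states - 1))       # random non-terminal start
    for _ in range(50):                          # cap on episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)                        # greedy policy after training
```

The learned greedy policy moves right from every non-terminal state, i.e. the agent has discovered the reward-maximizing behavior purely from interaction, with no physics information.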
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2023 13

[2] employs a transformer-based encoder-decoder network for the prediction of transonic flow over supercritical airfoils, ingeniously encoding the geometric input with diverse information points. More recently, DIP [3] showcases ML models' capability to predict aerodynamic flows around complex shapes, enhancing simulation accuracy while reducing computational demands. These studies collectively underscore the pivotal role of ML in pushing the boundaries of traditional aerodynamics, offering novel solutions that range from refining flow dynamics to achieving superior design efficacy.

7.2 Combustion & Reacting Flow
Applications in turbulent combustion are extensive, encompassing areas such as chemical reactions, combustion modeling, engine performance, and the prediction of combustion instabilities. CFD is utilized to model and study the complex interactions of chemical reactions and fluid dynamics, aiding the design of efficient and cleaner combustion systems. Ji et al. [139] initially explore the efficacy of PINNs in addressing stiff chemical kinetic problems governed by stiff ordinary differential equations; their findings demonstrate the effectiveness of mitigating stiffness through Quasi-Steady-State Assumptions. Besides, Zhang et al. [6] suggest employing the Box-Cox transformation and multi-scale sampling to preprocess combustion data, indicating that while a DNN trained on manifold data successfully captures chemical kinetics in limited configurations, it lacks robustness against perturbations.

7.3 Atmosphere & Ocean Science
CFD is employed in atmosphere and ocean science to simulate and predict weather patterns, ocean currents, and climate dynamics, enhancing our understanding of environmental systems. Starting with atmospheric applications, FourCastNet [11] enhances forecasts of dynamic atmospheric conditions, such as wind speed and precipitation. Then, Pangu-Weather [12] demonstrates how deep networks with Earth-specific priors effectively capture complex weather patterns. Furthermore, GraphCast [14] predicts weather variables globally at high resolution in under a minute, showcasing ML's speed and accuracy. Transitioning to oceanic science, Rajagopal et al. [15] excel in forecasting ocean current time series, which significantly enhances marine navigation and climate research. Moreover, in addressing the pivotal challenge of CO2 trajectory prediction, U-FNO [10] leverages advanced ML architectures to tackle complex multiphase problems, representing a significant step forward in environmental modeling. Complementing this, Fourier-MIONet [13] offers innovative solutions to computational challenges in 4D numerical simulations. In essence, this line of work marks a significant leap in atmospheric modeling, enhancing the accuracy and resolution of environmental models. This evolution not only enriches our understanding of complex natural phenomena but also opens new avenues for addressing global environmental challenges.

7.4 Biology Fluid
CFD is also applied in biological fluid dynamics to model and analyze the behavior of fluids within biological systems, such as blood flow in arteries, contributing to advancements in medical research and healthcare. Yin et al. [7] utilize DeepONet to simulate the delamination process in aortic dissection; this approach allows accurate prediction of differential strut distributions, marking a significant advancement in modeling the biomechanics of aortic dissection. Besides, Voorter et al. [8] employ PINNs to enhance the analysis of multi-b-value diffusion MRI and extract detailed biofluid information from MRI data, focusing on the microstructural integrity of interstitial fluid and microvascular images. This innovation not only improves the quality of biomedical imaging but also contributes to a broader understanding of microvascular flows and tissue integrity. Then, Shen et al. [9] explore the capabilities of multi-case PINNs in simulating biomedical tube flows, particularly in scenarios involving varied geometries; by parameterizing and pre-training the network on different geometric cases, the model can predict flows in unseen geometries in real time. Collectively, these advancements in surrogate modeling and physics-informed analysis represent critical steps forward in bioengineering, medical diagnostics, and treatment planning, showcasing the potential of ML to revolutionize intervention strategies in biofluid dynamics.

7.5 Plasma
CFD is used in plasma physics to simulate and analyze the behavior of ionized gases, aiding the development of applications such as nuclear fusion, space propulsion, and advanced materials processing. Simulations involving high-dimensional and high-frequency plasma dynamics are particularly well suited to ML methods. Zhong et al. [140] introduce two innovative networks to address specific challenges: CS-PINN, which approximates solution-dependent coefficients within plasma equations, and RK-PINN, which integrates the implicit Runge–Kutta formalism to facilitate large-time-step predictions of transient plasma behavior. Gopakumar et al. [141] illustrate that the FNO can predict the magnetohydrodynamic models governing plasma dynamics six orders of magnitude faster than traditional numerical solvers while still achieving notable accuracy. Furthermore, Kim et al. [142] present an innovative 3D field optimization approach that leverages ML and real-time adaptability to overcome the instability of the transient energy burst at the boundary of plasmas.

7.6 System Identification and Symbolic Regression
Identifying and recovering the governing equations of dynamical systems are important modeling problems in which one approximates the underlying equation of motion using data-driven methods. Bongard and Lipson [188] introduce symbolic regression for modeling variables from time series. The SINDy algorithm [189] identifies the governing equations by learning a sparse representation of the dynamical system from a dictionary of candidate functions, usually consisting of trigonometric or polynomial functions. Sparse optimization methods for identifying the model from spatio-temporal data are developed in [190], where differential operators are included in the dictionary; the governing equation is obtained using the LASSO method,
which offers explicit error bounds and recovery rates. These sparse optimization techniques are later integrated with PINNs [79] to obtain models from scarce data.

More recently, symbolic regression techniques have been combined with modern architectures to balance performance and explainability. In [144], a sequence-to-sequence transformer model is used to directly output the underlying ODEs in prefix notation. The FEX method [143] uses deep reinforcement learning to approximate the governing PDEs with symbolic trees. Although these methods have not been directly applied to CFD, the governing equations they recover can be used for future simulation when the underlying model is unknown.

7.7 Reduced Order Modeling
Reduced-order modeling (ROM) involves identifying a space that captures the dynamics of a full model with significantly fewer degrees of freedom (DOFs); classical examples include Dynamic Mode Decomposition (DMD) and the Koopman method. Traditionally, these methods rely heavily on domain knowledge and are typically applied to simple examples. The development of ML enables the learning of more complex and realistic systems. For instance, combining ML architectures such as CNNs [145] and transformers [58] with the Koopman method has shown positive results. Incorporating symmetries (or physical invariants) into the model architecture [146] is a solid technical improvement, reducing the amount of training data required and allowing the learned ROM to obey the governing rules explicitly.

Besides serving as a surrogate model, a learned ROM can also be combined with numerical solvers via Galerkin projection to form a hybrid approach. Specifically, the governing equations can be projected onto a reduced space and solved at lower computational cost. For example, Arnold et al. [147] apply adaptive projection-based reduced modeling to combustion-flow problems in gas turbine engines; using only the first 1% of the full model simulation, they identify a reduced space that allows the simulation to achieve a 100x speedup with the ROM. Wentland et al. [148] subsequently extend this approach to general multi-phase flow simulations.

8 CHALLENGES & FUTURE WORKS

8.1 Multi-Scale Dynamics
The challenge of multi-scale modeling lies in accurately capturing interactions across vastly different scales, from microscopic molecular motions to macroscopic flow behaviors, within the constraints of limited high-fidelity data and computational resources. Fortunately, ML has been pivotal in bridging the gap caused by limited high-fidelity data availability. The challenge is compounded by the intrinsic complexity of multi-scale systems, where phenomena at different scales can influence each other in non-linear and often unpredictable ways. For instance, microscopic molecular dynamics can significantly affect macroscopic properties such as viscosity and turbulence in fluid flows.

Representative works. Vlachas et al. [191] leverage auto-encoders to establish a connection between fine- and coarse-grained representations, subsequently evolving the dynamics within the latent space through RNNs. Furthermore, Lyu et al. [192] develop a multi-fidelity learning approach using the FNO that synergizes abundant low-fidelity data with scarce high-fidelity data within a transfer learning framework. This multi-level data integration is similarly echoed in Cascade-Net [193], which hierarchically predicts velocity fluctuations across different scales, ensuring that the energy cascade in turbulence is accurately represented. Innovations also extend to GNNs, where the BSMS-GNN model [162] introduces a novel pooling strategy to incorporate multi-scale structures. Concurrently, MS-MGN [161] adapts GNNs to multi-scale challenges by manually drawing meshes. More recently, Fang et al. [194] utilize the Gaussian process conditional mean for efficient predictions at massive numbers of collocation points. Additionally, SineNet [195] employs a sequence of U-shaped network blocks to refine high-resolution features progressively across multiple stages.

Another roadmap focuses on altering the solution space's degrees of freedom and the mesh topology during refinement to enhance simulation accuracy and computational efficiency. For example, LAMP [38] utilizes a reinforcement learning algorithm to dictate the h-adaptation policy, facilitating dynamic changes in mesh topology. Furthermore, CORAL [196] revolutionizes mesh adaptation by removing constraints on the input mesh.

Promising future. One promising direction is the development of hybrid models that seamlessly combine data-driven approaches with traditional physics-based simulations, enhancing their ability to generalize across different scales and scenarios. The continuous improvement of transfer learning techniques will also play a crucial role, enabling models to leverage knowledge from related problems and datasets to improve performance with limited high-fidelity data. Moreover, the exploration of novel architectures will further advance the capability to capture complex interactions across scales; these architectures can be enhanced with more sophisticated pooling and aggregation strategies, as well as improved interpretability to ensure that the learned models adhere to known physical laws. Additionally, advancements in computational hardware, such as specialized processors and distributed computing frameworks, will enable the execution of more complex and large-scale simulations.

8.2 Explicit Physical Knowledge Encoding
Another primary challenge is effectively incorporating the fundamental physical laws governing fluid dynamics from diverse sources explicitly into a coherent high-dimensional and nonlinear framework. Explicitly integrating physical knowledge differs from PINNs in that the former directly incorporates physical laws and constraints into the model, while PINNs embed these laws within the neural network's loss function to guide the learning process.

Representative works. Raissi et al. [197] leverage the NS equations to inform the learning process, ensuring that the dynamics of fluid flow are accurately captured in scientifically relevant scenarios. Further advancements are seen in approaches like FINN [198], which merges the learning capabilities of ANNs with the physical and structural insights derived from numerical simulations. More recently,
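The core of SINDy, sequentially thresholded least squares over a dictionary of candidate functions, is compact enough to sketch. Below is an illustrative NumPy version that recovers a hypothetical damped-oscillator system from exact derivative data; the full algorithm [189] additionally copes with noisy, numerically differentiated measurements.

```python
import numpy as np

# Illustrative target system (not from the cited works):
#   dx1/dt = -0.1 x1 + 2 x2,    dx2/dt = -2 x1 - 0.1 x2.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))           # sampled states
true_A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
dX = X @ true_A.T                                    # exact dx/dt at each sample

# Dictionary of candidate functions: [1, x1, x2, x1^2, x1*x2, x2^2].
x1, x2 = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def stlsq(Theta, dX, threshold=0.05, iters=10):
    """Least squares, zero out small coefficients, refit on the rest, repeat."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):                 # refit each state equation
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dX[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, dX)    # coefficient matrix, rows follow the dictionary order
```

With clean data the procedure returns exactly four nonzero coefficients, i.e. the sparse representation of the governing equations; the threshold plays the role of the sparsity-promoting penalty in the LASSO variant.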
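For reference, exact DMD reduces to an SVD plus a small eigenvalue problem. The sketch below is standard DMD on synthetic data, not tied to any specific cited work: it recovers the spectrum of a known decaying-rotation map from snapshots observed through a random lifting.

```python
import numpy as np

# Ground-truth linear dynamics x_{k+1} = A x_k: rotation by 0.3 rad, decay 0.95.
rng = np.random.default_rng(1)
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
P = rng.standard_normal((10, 2))        # lift into a 10-D observation space
x = np.array([1.0, 0.5])
snaps = []
for _ in range(30):
    snaps.append(P @ x)
    x = A @ x
Y = np.array(snaps).T                   # 10 x 30 snapshot matrix
X1, X2 = Y[:, :-1], Y[:, 1:]            # time-shifted snapshot pairs

# DMD: truncated SVD of X1, project the one-step map, take its eigenvalues.
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                   # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(Atilde)     # DMD eigenvalues
```

The two DMD eigenvalues come out as the conjugate pair 0.95 e^{±0.3i}, i.e. the decay rate and oscillation frequency of the underlying map, which is exactly the kind of modal information these ROM techniques extract from flow data.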
Sun et al. [91] have introduced a framework that integrates a specific physics structure into a recurrent CNN, aiming to enhance the learning of spatio-temporal dynamics in scenarios with sparse data.

As for encoding boundary and initial conditions, Sun et al. [199] introduce a physics-constrained DNN that incorporates boundary conditions directly into the learning process. BOON [200] modifies the operator kernel to ensure boundary condition satisfaction. Besides, Rao et al. [91] take a different approach by encoding the physical structure directly into a recurrent CNN. BENO [40] presents a boundary-embedded neural operator that integrates complex boundary shapes and inhomogeneous boundary values into the solution of elliptic PDEs. Neural IVP [201] offers a solver for initial value problems based on ODEs; by preventing the network from becoming ill-conditioned, it enables the evolution of complex PDE dynamics within neural networks.

Promising future. Meaningful research directions include the development of novel implicit network architectures designed to embed physical knowledge seamlessly. Furthermore, combining manifold learning and graph relationship learning techniques can help extract underlying physical relationships and laws. This approach aims to enhance the ability of ML models to understand and incorporate complex physical systems, leading to more accurate predictions.

8.3 Multi-physics Learning & Scientific Foundation Model
A primary objective for scientific ML is to develop methods that can generalize and extrapolate beyond the training data. Surrogate models typically perform well only under the working conditions or geometries for which they are trained. Specifically, PINNs often solve only a single instance of a PDE, while neural operators generalize only to a specific family of parametric PDEs. Similarly, ML-assisted approaches, such as those for closure modeling, tend to be limited by the working conditions or wall shapes seen during training.

Representative works. Lozano et al. [202] train multiple ML wall models as candidates for different flow regimes, followed by a classifier that selects the most suitable candidate for the current conditions. The In-Context Operator Network (ICON) [63], [203], [204] leverages pairs of physics fields before and after numerical time steps as contexts; the model learns to solve the time steps based on these contexts, showcasing an ability to handle different time scales and to generalize across different types of PDE operators. PROSE [205], [206] combines data and equation information through a multimodal fusion approach for simultaneous prediction and system identification, and demonstrates zero-shot extrapolation to different data distributions, unseen physical features, and unseen equations. FMint [207] combines the precision of human-designed algorithms with the adaptability of data-driven methods, and is specifically engineered to achieve high-accuracy simulation of dynamical systems. Unisolver [208] integrates all available PDE components, including equation symbols, boundary information, and PDE coefficients; by separately processing domain-wise information (e.g. equations) and point-wise information (e.g. boundaries), Unisolver shows generalization to out-of-distribution parameters.

Another line of work focuses on transfer learning techniques, where a pretrained model is finetuned to align with downstream tasks. Subramanian et al. [209] use a transfer learning approach, showing that models pre-trained on multiple physics can be adapted to various downstream tasks. The multi-physics pre-training (MPP) [210] approach, akin to many brute-force LLMs, seeks to establish a foundational model applicable across diverse physics systems. OmniArch [211] pre-trains on 1D/2D/3D PDE data using auto-regressive tasks and fine-tunes with physics-informed reinforcement learning to align with physical laws, excelling in few-shot and zero-shot learning for new physics. DPOT [212] introduces a novel auto-regressive denoising pre-training approach that enhances the stability and efficiency of pre-training.

Promising future. One promising direction is to design networks that simultaneously handle different complex geometries. This requires a network capable of processing heterogeneous data, as well as a large collection of high-quality (real & synthetic) training data. Besides, while pretrained LLMs are not directly suitable for scientific computing tasks, incorporating their huge pre-trained knowledge base would be beneficial, especially in the data-scarce regime. Additionally, small language models equipped with scalable training strategies [213] can provide an effective and efficient approach.

8.4 Automatic Data Generation & Scientific Discovery
The success of all the applications discussed in this review depends heavily on the size and coverage of the training dataset. This is especially true for multi-physics models as in Sec. 8.3, as demonstrated by emergent effects in LLMs [214]. Unlike textual or visual data readily available online, CFD data, like data in many scientific domains, requires a large number of samples due to complex combinations of system parameters, spans a wide variety of models, and is typically very costly to obtain. This combination presents a significant challenge in generating a sufficiently large and diverse dataset for the above purposes. In Sec. 8.1 and Sec. 8.2, we mention incorporating symmetries and physics knowledge to decrease the dependence on training dataset size. However, automatically and efficiently guiding an ML model to generate data still poses challenges.

Representative works. Generative modeling, notably diffusion models [181], has recently emerged as a promising direction for high-quality generation in computer vision. Conditioned diffusion models [215] provide more control during the generation of designed dataset samples and are a promising candidate for automatic data generation. Diffusion models have been naturally extended to CT, medical imaging, and MRI [216], turbulence [217], molecular dynamics [218], and more.

Promising future. Automated experimentation (auto-lab) has become a promising pipeline for automatic data generation and scientific discovery. By leveraging trained surrogate models (the verifier), auto-lab trains an additional
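A common way to encode Dirichlet boundary conditions directly into the model, in the spirit of the physics-constrained approaches above (the exact ansatz of [199] may differ), is to multiply the learnable part by a function that vanishes on the boundary, so the constraints hold by construction rather than through a loss term:

```python
import numpy as np

# Output ansatz with hard-encoded Dirichlet conditions u(0) = u0, u(1) = u1:
#   u(x) = (1 - x) u0 + x u1 + x (1 - x) N(x)
# The boundary values hold exactly for ANY network N. The tiny MLP below is
# an untrained, illustrative stand-in for the learnable part.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal(16), rng.standard_normal(16)
W2, b2 = rng.standard_normal(16), rng.standard_normal()

def network(x):
    h = np.tanh(np.outer(x, W1) + b1)   # (m, 16) hidden features
    return h @ W2 + b2                  # (m,) raw network outputs

def u(x, u0=1.0, u1=-2.0):
    x = np.asarray(x, dtype=float)
    return (1 - x) * u0 + x * u1 + x * (1 - x) * network(x)
```

Because the factor x(1 - x) is exactly zero at both endpoints, training can focus entirely on the interior residual; no boundary penalty weight needs tuning.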
ML model to propose trials (the proposer), which can then be efficiently filtered by the verifier to retain only those with a high success rate. Real experiments or high-fidelity simulations are carried out only on the filtered trials. The obtained results progressively enrich the dataset and retrain the ML models to highlight trial directions with higher success rates. This pipeline automates and scales up traditional experiments and has been applied in materials science [219], [220], metamaterials [221], protein structure [222], robotics [223], and more. Similar work can be promising in the field of CFD and related areas.

9 CONCLUSION
In conclusion, this paper has systematically explored the significant advancements in leveraging ML for CFD. We have proposed a novel classification approach for forward modeling and inverse problems, and provided a detailed introduction to the latest methodologies developed in the past five years. We also highlighted the promising applications of ML in critical scientific and engineering domains. Additionally, we discussed the challenges and future research directions in this rapidly evolving domain. Overall, it is evident that ML has the potential to significantly transform CFD research.

REFERENCES
[1] M. O'Connell, G. Shi, X. Shi et al., "Neural-fly enables rapid learning for agile flight in strong winds," Science Robotics, vol. 7, no. 66, 2022.
[2] Z. Deng, J. Wang, H. Liu, H. Xie, B. Li, M. Zhang, T. Jia, Y. Zhang, Z. Wang, and B. Dong, "Prediction of transonic flow over supercritical airfoils using geometric-encoding and deep-learning strategies," Arxiv, 2023.
[3] B. Mufti, A. Bhaduri, S. Ghosh, L. Wang, and D. N. Mavris, "Shock wave prediction in transonic flow fields using domain-informed probabilistic deep learning," Physics of Fluids, vol. 36, no. 1, 2024.
[4] Y. Cao and R. Li, "A liquid plug moving in an annular pipe—flow analysis," Physics of Fluids, vol. 30, no. 9, 2018.
[5] Y. Cao, X. Gao, and R. Li, "A liquid plug moving in an annular pipe—heat transfer analysis," International Journal of Heat and Mass Transfer, vol. 139, pp. 1065–1076, 2019.
[6] T. Zhang, Y. Yi, Y. Xu, Z. X. Chen, Y. Zhang, E. Weinan, and Z.-Q. J. Xu, "A multi-scale sampling method for accurate and robust deep neural network to predict combustion chemical kinetics," Combustion and Flame, vol. 245, p. 112319, 2022.
[7] M. Yin, E. Ban, B. V. Rego, E. Zhang, C. Cavinato, J. D. Humphrey, and G. Em Karniadakis, "Simulating progressive intramural damage leading to aortic dissection using deeponet: an operator–regression neural network," Journal of the Royal Society Interface, vol. 19, no. 187, p. 20210670, 2022.
[8] P. H. Voorter, W. H. Backes, O. J. Gurney-Champion, S.-M. Wong, J. Staals, R. J. van Oostenbrugge, M. M. van der Thiel, J. F. Jansen, and G. S. Drenthen, "Improving microstructural integrity, interstitial fluid, and blood microcirculation images from multi-b-value diffusion mri using physics-informed neural networks in cerebrovascular disease," Magnetic Resonance in Medicine, 2023.
[9] H. Shen Wong, W. X. Chan, B. Huan Li, and C. Hwai Yap, "Multiple case physics-informed neural network for biomedical tube flows," Arxiv, 2023.
[10] G. Wen, Z. Li, K. Azizzadenesheli, A. Anandkumar, and S. M. Benson, "U-fno—an enhanced fourier neural operator-based deep-learning model for multiphase flow," Advances in Water Resources, vol. 163, p. 104180, 2022.
[11] J. Pathak, S. Subramanian, P. Harrington, S. Raja, A. Chattopadhyay, M. Mardani, T. Kurth, D. Hall, Z. Li, K. Azizzadenesheli et al., "Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators," Arxiv, 2022.
[12] K. Bi, L. Xie, H. Zhang, X. Chen, X. Gu, and Q. Tian, "Accurate medium-range global weather forecasting with 3d neural networks," Nature, vol. 619, no. 7970, pp. 533–538, 2023.
[13] Z. Jiang, M. Zhu, D. Li et al., "Fourier-mionet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration," Arxiv, 2023.
[14] R. Lam, A. Sanchez-Gonzalez, M. Willson, P. Wirnsberger, M. Fortunato, F. Alet, S. Ravuri, T. Ewalds, Z. Eaton-Rosen, W. Hu et al., "Graphcast: Learning skillful medium-range global weather forecasting," Arxiv, 2022.
[15] E. Rajagopal, A. N. Babu, T. Ryu, P. J. Haley, C. Mirabito, and P. F. Lermusiaux, "Evaluation of deep neural operator models toward ocean forecasting," in OCEANS. IEEE, 2023, pp. 1–9.
[16] H. K. Versteeg et al., An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Pearson Education, 2007.
[17] Y. Zhang, D. Zhang, and H. Jiang, "Review of challenges and opportunities in turbulence modeling: A comparative analysis of data-driven machine learning approaches," Journal of Marine Science and Engineering, vol. 11, no. 7, p. 1440, 2023.
[18] J. Thiyagalingam, M. Shankar, G. Fox, and T. Hey, "Scientific machine learning benchmarks," Nature Reviews Physics, vol. 4, no. 6, pp. 413–420, 2022.
[19] R. Wang and R. Yu, "Physics-guided deep learning for dynamical systems: A survey," Arxiv, 2021.
[20] S. Huang, W. Feng, C. Tang, and J. Lv, "Partial differential equations meet deep neural networks: A survey," Arxiv, 2022.
[21] R. Vinuesa and S. L. Brunton, "Enhancing computational fluid dynamics with machine learning," Nature Computational Science, vol. 2, no. 6, pp. 358–366, 2022.
[22] X. Zhang, L. Wang, J. Helwig, Y. Luo, C. Fu, Y. Xie, M. Liu, Y. Lin, Z. Xu, K. Yan et al., "Artificial intelligence for science in quantum, atomistic, and continuum systems," Arxiv, 2023.
[23] M. Lino, S. Fotiadis, A. A. Bharath, and C. D. Cantwell, "Current and emerging deep-learning methods for the simulation of fluid dynamics," Proceedings of the Royal Society, vol. 479, no. 2275, 2023.
[24] S. Lee and D. You, "Data-driven prediction of unsteady flow over a circular cylinder using deep learning," Journal of Fluid Mechanics, vol. 879, pp. 217–254, 2019.
[25] R. Wang, K. Kashinath, M. Mustafa, A. Albert, and R. Yu, "Towards physics-informed deep learning for turbulent flow prediction," in KDD, 2020, pp. 1457–1466.
[26] R. Wang, R. Walters, and R. Yu, "Incorporating symmetry into deep dynamics models for improved generalization," Arxiv, 2020.
[27] R. Wang, R. Walters, and R. Yu, "Approximately equivariant networks for imperfectly symmetric dynamics," in International Conference on Machine Learning, 2022.
[28] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. Battaglia, "Learning to simulate complex physics with graph networks," in International Conference on Machine Learning. PMLR, 2020, pp. 8459–8468.
[29] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez et al., "Learning mesh-based simulation with graph networks," Arxiv, 2020.
[30] J. Brandstetter, D. Worrall, and M. Welling, "Message passing neural pde solvers," Arxiv, 2022.
[31] X. Han, H. Gao, T. Pfaff et al., "Predicting physics in mesh-reduced space with temporal attention," Arxiv, 2022.
[32] Y. Shao, C. C. Loy, and B. Dai, "Transformer with implicit edges for particle-based physics simulation," in European Conference on Computer Vision. Springer, 2022, pp. 549–564.
[33] O. Boussif, Y. Bengio, L. Benabbou, and D. Assouline, "Magnet: Mesh agnostic neural pde solver," Advances in Neural Information Processing Systems, vol. 35, pp. 31972–31985, 2022.
[34] S. Bishnoi, R. Bhattoo, S. Ranu, and N. Krishnan, "Enhancing the inductive biases of graph neural ode for modeling dynamical systems," Arxiv, 2022.
[35] X. He, Y. Wang, and J. Li, "Flow completion network: Inferring the fluid dynamics from incomplete flow information using graph neural networks," Physics of Fluids, vol. 34, no. 8, 2022.
[36] Q. Zhao, X. Han, R. Guo, and C. Chen, "A computationally efficient hybrid neural network architecture for porous media: Integrating cnns and gnns for improved permeability prediction," Arxiv, 2023.
[37] Y. Yin, M. Kirchmeyer, J.-Y. Franceschi, A. Rakotomamonjy, and P. Gallinari, "Continuous pde dynamics forecasting with implicit neural representations," Arxiv, 2022.
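The propose-filter-evaluate loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `true_quality` plays the role of a costly experiment or high-fidelity simulation, `verifier` that of a cheap trained surrogate, and the proposer is plain random search rather than a learned model.

```python
import random

random.seed(0)

def true_quality(design):                 # expensive ground-truth evaluation
    return -(design - 0.7) ** 2           # peak quality at design = 0.7

def verifier(design):                     # cheap, noisy surrogate estimate
    return true_quality(design) + random.gauss(0.0, 0.05)

def proposer():                           # trial generator (random search here;
    return random.random()                # an ML proposer would be trained)

dataset = []
for _ in range(200):
    candidate = proposer()
    # Pay for the real evaluation only when the surrogate is optimistic.
    if verifier(candidate) > -0.05:
        dataset.append((candidate, true_quality(candidate)))
```

The filtered trials concentrate near the quality peak, so the expensive evaluations are spent where success is likely; retraining the proposer and verifier on the growing `dataset` closes the loop.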
[38] T. Wu, T. Maruyama, Q. Zhao, G. Wetzstein, and J. Leskovec, “Learning controllable adaptive simulation for multi-resolution physics,” arXiv, 2023.
[39] X. Luo, H. Wang, Z. Huang, H. Jiang, A. S. Gangan, S. Jiang, and Y. Sun, “Care: Modeling interacting dynamics under temporal environmental variation,” in Thirty-seventh Conference on Neural Information Processing Systems, 2023.
[40] H. Wang, L. Jiaxin, A. Dwivedi, K. Hara, and T. Wu, “BENO: Boundary-embedded neural operators for elliptic PDEs,” in The Twelfth International Conference on Learning Representations, 2024.
[41] A. Bryutkin, J. Huang, Z. Deng, G. Yang, C.-B. Schönlieb, and A. Aviles-Rivero, “Hamlet: Graph transformer neural operator for partial differential equations,” arXiv, 2024.
[42] B. Ummenhofer, L. Prantl, N. Thuerey, and V. Koltun, “Lagrangian fluid simulation with continuous convolutions,” in International Conference on Learning Representations, 2019.
[43] H. Wessels, C. Weißenfels, and P. Wriggers, “The neural particle method–an updated lagrangian physics informed neural network for computational fluid dynamics,” Computer Methods in Applied Mechanics and Engineering, vol. 368, p. 113127, 2020.
[44] Z. Li and A. B. Farimani, “Graph neural network-accelerated lagrangian fluid simulation,” Computers & Graphics, vol. 103, pp. 201–211, 2022.
[45] J. Liu, Y. Chen, B. Ni, W. Ren, Z. Yu, and X. Huang, “Fast fluid simulation via dynamic multi-scale gridding,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 2, 2023, pp. 1675–1682.
[46] F. A. Torres, M. M. Negri, M. Inversi, J. Aellen, and V. Roth, “Lagrangian flow networks for conservation laws,” arXiv, 2023.
[47] T. Li, L. Biferale, F. Bonaccorso, M. A. Scarpolini, and M. Buzzicotti, “Synthetic lagrangian turbulence by generative diffusion models,” Nature Machine Intelligence, pp. 1–11, 2024.
[48] L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis, “Learning nonlinear operators via deeponet based on the universal approximation theorem of operators,” Nature Machine Intelligence, vol. 3, no. 3, pp. 218–229, 2021.
[49] S. Wang, H. Wang, and P. Perdikaris, “Learning the solution operator of parametric partial differential equations with physics-informed deeponets,” Science Advances, vol. 7, no. 40, 2021.
[50] P. Jin, S. Meng, and L. Lu, “Mionet: Learning multiple-input operators via tensor product,” SIAM Journal on Scientific Computing, vol. 44, no. 6, pp. A3490–A3514, 2022.
[51] G. Lin, C. Moya, and Z. Zhang, “B-deeponet: An enhanced bayesian deeponet for solving noisy parametric pdes using accelerated replica exchange sgld,” Journal of Computational Physics, vol. 473, p. 111713, 2023.
[52] J. Seidman, G. Kissas, P. Perdikaris, and G. J. Pappas, “Nomad: Nonlinear manifold decoders for operator learning,” Advances in Neural Information Processing Systems, vol. 35, pp. 5601–5613, 2022.
[53] S. Lanthaler, R. Molinaro, P. Hadorn, and S. Mishra, “Nonlinear reconstruction for operator learning of pdes with discontinuities,” arXiv, 2022.
[54] J. Y. Lee, S. W. Cho, and H. J. Hwang, “Hyperdeeponet: learning operator with complex target function space using the limited resources via hypernetwork,” arXiv, 2023.
[55] K. Kontolati, S. Goswami, G. E. Karniadakis, and M. D. Shields, “Learning in latent spaces improves the predictive accuracy of deep neural operators,” arXiv, 2023.
[56] S. Venturi and T. Casey, “Svd perspectives for augmenting deeponet flexibility and interpretability,” Computer Methods in Applied Mechanics and Engineering, vol. 403, p. 115718, 2023.
[57] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, A. Stuart, K. Bhattacharya, and A. Anandkumar, “Multipole graph neural operator for parametric partial differential equations,” Advances in Neural Information Processing Systems, vol. 33, pp. 6755–6766, 2020.
[58] N. Geneva and N. Zabaras, “Transformers for modeling physical systems,” Neural Networks, vol. 146, pp. 272–289, 2022.
[59] Z. Hao, Z. Wang, H. Su, C. Ying, Y. Dong, S. Liu, Z. Cheng, J. Song, and J. Zhu, “Gnot: A general neural operator transformer for operator learning,” in International Conference on Machine Learning. PMLR, 2023, pp. 12556–12569.
[60] B. Raonic, R. Molinaro, T. Rohner, S. Mishra, and E. de Bezenac, “Convolutional neural operators,” in ICLR 2023 Workshop on Physics for Machine Learning, 2023.
[61] Q. Cao, S. Goswami, and G. E. Karniadakis, “Lno: Laplace neural operator for solving differential equations,” arXiv, 2023.
[62] W. Xiong, X. Huang, Z. Zhang, R. Deng, P. Sun, and Y. Tian, “Koopman neural operator as a mesh-free solver of non-linear partial differential equations,” arXiv, 2023.
[63] L. Yang, S. Liu, T. Meng, and S. J. Osher, “In-context operator learning for differential equation problems,” arXiv, 2023.
[64] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar, “Fourier neural operator for parametric partial differential equations,” arXiv, 2020.
[65] Z. Li, H. Zheng, N. Kovachki, D. Jin, H. Chen, B. Liu, K. Azizzadenesheli, and A. Anandkumar, “Physics-informed neural operator for learning partial differential equations,” arXiv, 2021.
[66] Z. Li, D. Z. Huang, B. Liu, and A. Anandkumar, “Fourier neural operator with learned deformations for pdes on general geometries,” arXiv, 2022.
[67] M. A. Rahman, Z. E. Ross, and K. Azizzadenesheli, “U-no: U-shaped neural operators,” arXiv, 2022.
[68] M. Rafiq, G. Rafiq, H.-Y. Jung, and G. S. Choi, “Ssno: Spatio-spectral neural operator for functional space learning of partial differential equations,” IEEE Access, vol. 10, 2022.
[69] A. Tran, A. Mathews et al., “Factorized fourier neural operators,” in ICLR, 2023.
[70] J. Brandstetter, R. v. d. Berg, M. Welling, and J. K. Gupta, “Clifford neural layers for pde modeling,” arXiv, 2022.
[71] X. Xiao, D. Cao, R. Yang, G. Gupta, G. Liu, C. Yin, R. Balan, and P. Bogdan, “Coupled multiwavelet operator learning for coupled differential equations,” in The Eleventh International Conference on Learning Representations, 2022.
[72] R. Zhang, Q. Meng, R. Zhu, Y. Wang, W. Shi, S. Zhang, Z.-M. Ma, and T.-Y. Liu, “Monte carlo neural operator for learning pdes via probabilistic representation,” arXiv, 2023.
[73] J. Helwig, X. Zhang, C. Fu, J. Kurtin, S. Wojtowytsch, and S. Ji, “Group equivariant fourier neural operators for partial differential equations,” arXiv, 2023.
[74] Z. Li, N. B. Kovachki, C. Choy, B. Li, J. Kossaifi, S. P. Otta, M. A. Nabian, M. Stadler, C. Hundt, K. Azizzadenesheli et al., “Geometry-informed neural operator for large-scale 3d pdes,” arXiv, 2023.
[75] N. Liu, S. Jafarzadeh, and Y. Yu, “Domain agnostic fourier neural operators,” arXiv, 2023.
[76] A. Rudikov, V. Fanaskov, E. Muravleva, Y. M. Laevsky, and I. Oseledets, “Neural operators meet conjugate gradients: The fcg-no method for efficient pde solving,” arXiv, 2024.
[77] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, 2019.
[78] X.-d. Bai, Y. Wang, and W. Zhang, “Applying physics informed neural network for flow data assimilation,” Journal of Hydrodynamics, 2020.
[79] Z. Chen, Y. Liu, and H. Sun, “Physics-informed learning of governing equations from scarce data,” Nature Communications, vol. 12, no. 1, p. 6136, 2021.
[80] X. Jin, S. Cai, H. Li, and G. E. Karniadakis, “Nsfnets (navier-stokes flow nets): Physics-informed neural networks for the incompressible navier-stokes equations,” Journal of Computational Physics, vol. 426, p. 109951, 2021.
[81] H. Gao, L. Sun, and J.-X. Wang, “Phygeonet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state pdes on irregular domain,” Journal of Computational Physics, vol. 428, p. 110079, 2021.
[82] S. Mowlavi and S. Nabi, “Optimal control of pdes using physics-informed neural networks (PINNs),” in APS Division of Fluid Dynamics Meeting Abstracts, 2021, pp. H23–005.
[83] C. Cheng and G.-T. Zhang, “Deep learning method based on physics informed neural network with resnet block for solving fluid flow problems,” Water, vol. 13, no. 4, p. 423, 2021.
[84] Q. Zeng, Y. Kothari, S. H. Bryngelson, and F. Schäfer, “Competitive physics informed networks,” arXiv, 2022.
[85] R. Gnanasambandam, B. Shen, J. Chung, X. Yue et al., “Self-scalable tanh (stan): Faster convergence and better generalization in physics-informed neural networks,” arXiv, 2022.
[86] X. Huang, Z. Ye, H. Liu, S. Ji, Z. Wang, K. Yang, Y. Li, M. Wang, H. Chu, F. Yu et al., “Meta-auto-decoder for solving parametric partial differential equations,” Advances in Neural Information Processing Systems, vol. 35, pp. 23426–23438, 2022.
[87] J. Sun, Y. Liu, Y. Wang, Z. Yao, and X. Zheng, “Binn: A deep learning approach for computational mechanics problems based
on boundary integral equations,” Computer Methods in Applied Mechanics and Engineering, vol. 410, p. 116012, 2023.
[88] R. You, S. Zhang, T. Kong, and H. Li, “High-order discontinuity detection physics-informed neural network,” arXiv, 2023.
[89] M. Toloubidokhti, Y. Ye, R. Missel, X. Jiang, N. Kumar, R. Shrestha, and L. Wang, “Dats: Difficulty-aware task sampler for meta-learning physics-informed neural networks,” in The Twelfth International Conference on Learning Representations, 2023.
[90] L. Z. Zhao, X. Ding, and B. A. Prakash, “Pinnsformer: A transformer-based framework for physics-informed neural networks,” arXiv, 2023.
[91] C. Rao, P. Ren, Q. Wang, O. Buyukozturk, H. Sun, and Y. Liu, “Encoding physics to learn reaction–diffusion processes,” Nature Machine Intelligence, vol. 5, no. 7, pp. 765–779, 2023.
[92] Y. Wang and L. Zhong, “Nas-pinn: neural architecture search-guided physics-informed neural network for solving pdes,” Journal of Computational Physics, vol. 496, p. 112603, 2024.
[93] J. Sirignano and K. Spiliopoulos, “Dgm: A deep learning algorithm for solving partial differential equations,” Journal of Computational Physics, vol. 375, pp. 1339–1364, 2018.
[94] T. Xue, A. Beatson, S. Adriaenssens, and R. Adams, “Amortized finite element analysis for fast pde-constrained optimization,” in International Conference on Machine Learning. PMLR, 2020, pp. 10638–10647.
[95] Y. Du and T. A. Zaki, “Evolutional deep neural network,” Physical Review E, vol. 104, no. 4, p. 045303, 2021.
[96] X.-Y. Liu, H. Sun, M. Zhu, L. Lu, and J.-X. Wang, “Predicting parametric spatiotemporal dynamics by multi-resolution pde structure-preserved deep learning,” arXiv, 2022.
[97] H. Gao, M. J. Zahr, and J.-X. Wang, “Physics-informed graph neural galerkin networks: A unified framework for solving pde-governed forward and inverse problems,” Computer Methods in Applied Mechanics and Engineering, vol. 390, p. 114502, 2022.
[98] A. C. Aristotelous, E. C. Mitchell, and V. Maroulas, “ADLGM: An efficient adaptive sampling deep learning galerkin method,” Journal of Computational Physics, vol. 477, p. 111944, 2023.
[99] H. Chen, R. Wu, E. Grinspun, C. Zheng, and P. Y. Chen, “Implicit neural spatial representations for time-dependent pdes,” in International Conference on Machine Learning. PMLR, 2023, pp. 5162–5177.
[100] J. Bruna, B. Peherstorfer, and E. Vanden-Eijnden, “Neural galerkin schemes with active learning for high-dimensional evolution equations,” Journal of Computational Physics, vol. 496, p. 112588, 2024.
[101] X.-Y. Liu, M. Zhu, L. Lu, H. Sun, and J.-X. Wang, “Multi-resolution partial differential equations preserved learning framework for spatiotemporal dynamics,” Communications Physics, vol. 7, no. 1, p. 31, 2024.
[102] Z. Song, C. Wang, and H. Yang, “Finite expression method for learning dynamics on complex networks,” arXiv, 2024.
[103] D. Kochkov, J. A. Smith, A. Alieva, Q. Wang, M. P. Brenner, and S. Hoyer, “Machine learning–accelerated computational fluid dynamics,” Proceedings of the National Academy of Sciences, vol. 118, no. 21, p. e2101784118, 2021.
[104] B. Després and H. Jourdren, “Machine learning design of volume of fluid schemes for compressible flows,” Journal of Computational Physics, vol. 408, p. 109275, 2020.
[105] Y. Bar-Sinai, S. Hoyer, J. Hickey, and M. P. Brenner, “Learning data-driven discretizations for partial differential equations,” Proceedings of the National Academy of Sciences, vol. 116, no. 31, pp. 15344–15349, 2019.
[106] B. List, L.-W. Chen, and N. Thuerey, “Learned turbulence modelling with differentiable fluid solvers: physics-based loss functions and optimisation horizons,” Journal of Fluid Mechanics, vol. 949, p. A25, 2022.
[107] Z. Sun, Y. Yang, and S. Yoo, “A neural pde solver with temporal stencil modeling,” arXiv, 2023.
[108] D. Greenfeld, M. Galun, R. Basri, I. Yavneh, and R. Kimmel, “Learning to optimize multigrid pde solvers,” in International Conference on Machine Learning. PMLR, 2019, pp. 2415–2423.
[109] I. Luz, M. Galun, H. Maron, R. Basri, and I. Yavneh, “Learning algebraic multigrid using graph neural networks,” in International Conference on Machine Learning. PMLR, 2020, pp. 6489–6499.
[110] J. Sappl, L. Seiler, M. Harders, and W. Rauch, “Deep learning of preconditioners for conjugate gradient solvers in urban water related problems,” arXiv, 2019.
[111] J. Pathak, M. Mustafa, K. Kashinath, E. Motheau, T. Kurth, and M. Day, “Using machine learning to augment coarse-grid computational fluid dynamics simulations,” arXiv, 2020.
[112] O. Obiols-Sales, A. Vishnu, N. Malaya, and A. Chandramowliswharan, “Cfdnet: A deep learning-based accelerator for fluid simulations,” in Proceedings of the 34th ACM International Conference on Supercomputing, 2020, pp. 1–12.
[113] K. Um, R. Brand, Y. R. Fei, P. Holl, and N. Thuerey, “Solver-in-the-loop: Learning from differentiable physics to interact with iterative pde-solvers,” Advances in Neural Information Processing Systems, vol. 33, pp. 6111–6122, 2020.
[114] F. D. A. Belbute-Peres, T. Economon, and Z. Kolter, “Combining differentiable pde solvers and graph neural networks for fluid flow prediction,” in International Conference on Machine Learning. PMLR, 2020, pp. 2402–2411.
[115] L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, and S. G. Johnson, “Physics-informed neural networks with hard constraints for inverse design,” SIAM Journal on Scientific Computing, vol. 43, no. 6, pp. B1105–B1132, 2021.
[116] J. Yu, L. Lu, X. Meng, and G. E. Karniadakis, “Gradient-enhanced physics-informed neural networks for forward and inverse pde problems,” Computer Methods in Applied Mechanics and Engineering, vol. 393, p. 114823, 2022.
[117] Z. Hao, C. Ying, H. Su, J. Zhu, J. Song, and Z. Cheng, “Bi-level physics-informed neural networks for pde constrained optimization using broyden’s hypergradients,” arXiv, 2022.
[118] A. Pokkunuru, P. Rooshenas, T. Strauss, A. Abhishek, and T. Khan, “Improved training of physics-informed neural networks using energy-based priors: a study on electrical impedance tomography,” in The Eleventh International Conference on Learning Representations, 2022.
[119] K. Allen, T. Lopez-Guevara, K. L. Stachenfeld, A. Sanchez Gonzalez, P. Battaglia, J. B. Hamrick, and T. Pfaff, “Inverse design for fluid-structure interactions using graph network simulators,” Advances in Neural Information Processing Systems, vol. 35, pp. 13759–13774, 2022.
[120] T. Wu, T. Maruyama, and J. Leskovec, “Learning to accelerate partial differential equations via latent global evolution,” Advances in Neural Information Processing Systems, vol. 35, 2022.
[121] L. Ardizzone, J. Kruse, S. Wirkert, D. Rahner, E. W. Pellegrini, R. S. Klessen, L. Maier-Hein, C. Rother, and U. Köthe, “Analyzing inverse problems with invertible neural networks,” arXiv, 2018.
[122] J. Behrmann, W. Grathwohl, R. T. Chen, D. Duvenaud, and J.-H. Jacobsen, “Invertible residual networks,” in International Conference on Machine Learning. PMLR, 2019, pp. 573–582.
[123] Y. Teng and A. Choromanska, “Invertible autoencoder for domain adaptation,” Computation, vol. 7, no. 2, p. 20, 2019.
[124] J. Kruse, L. Ardizzone, C. Rother, and U. Köthe, “Benchmarking invertible architectures on inverse problems,” arXiv, 2021.
[125] S. Ren, W. Padilla, and J. Malof, “Benchmarking deep inverse models over time, and the neural-adjoint method,” Advances in Neural Information Processing Systems, vol. 33, pp. 38–48, 2020.
[126] A. Kumar and S. Levine, “Model inversion networks for model-based optimization,” Advances in Neural Information Processing Systems, vol. 33, pp. 5126–5137, 2020.
[127] P. Holl, N. Thuerey, and V. Koltun, “Learning to control pdes with differentiable physics,” in International Conference on Learning Representations, 2020.
[128] R. Hwang, J. Y. Lee, J. Y. Shin, and H. J. Hwang, “Solving pde-constrained control problems using operator learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 4, 2022, pp. 4504–4512.
[129] A. Larcher and E. Hachem, “A review on deep reinforcement learning for fluid mechanics: An update,” Physics of Fluids, vol. 34, no. 11, 2022.
[130] P. Garnier, J. Viquerat, J. Rabault, A. Larcher, A. Kuhnle, and E. Hachem, “A review on deep reinforcement learning for fluid mechanics,” Computers & Fluids, vol. 225, p. 104973, 2021.
[131] G. Paul, V. Jonathan, R. Jean, L. Aurélien, K. Alexander, and H. Elie, “A review on deep reinforcement learning for fluid mechanics,” Computers & Fluids, vol. 225, p. 104973, 2021.
[132] S. Mowlavi and S. Nabi, “Optimal control of pdes using physics-informed neural networks,” Journal of Computational Physics, vol. 473, p. 111731, 2023.
[133] J. Barry-Straume, A. Sarshar, A. A. Popov, and A. Sandu, “Physics-informed neural networks for pde-constrained optimization and control,” 2022.
[134] Z. Mao, A. D. Jagtap, and G. E. Karniadakis, “Physics-informed neural networks for high-speed flows,” Computer Methods in Applied Mechanics and Engineering, vol. 360, p. 112789, 2020.
[135] Y. Huang, Z. Zhang, and X. Zhang, “A direct-forcing immersed boundary method for incompressible flows based on physics-informed neural network,” Fluids, vol. 7, no. 2, p. 56, 2022.
[136] R. Sharma, W. G. Guo, M. Raissi, and Y. Guo, “Physics-informed machine learning of argon gas-driven melt pool dynamics,” arXiv, 2023.
[137] S. Auddy, R. Dey, N. J. Turner, and S. Basu, “Grinn: A physics-informed neural network for solving hydrodynamic systems in the presence of self-gravity,” arXiv, 2023.
[138] X. Shan, Y. Liu, W. Cao, X. Sun, and W. Zhang, “Turbulence modeling via data assimilation and machine learning for separated flows over airfoils,” AIAA Journal, vol. 61, no. 9, 2023.
[139] W. Ji, W. Qiu, Z. Shi, S. Pan, and S. Deng, “Stiff-pinn: Physics-informed neural network for stiff chemical kinetics,” The Journal of Physical Chemistry A, vol. 125, no. 36, pp. 8098–8106, 2021.
[140] L. Zhong, B. Wu, and Y. Wang, “Low-temperature plasma simulation based on physics-informed neural networks: frameworks and preliminary applications,” Physics of Fluids, vol. 34, no. 8, 2022.
[141] V. Gopakumar, S. Pamela, L. Zanisi, Z. Li, A. Anandkumar, and M. Team, “Fourier neural operator for plasma modelling,” arXiv, 2023.
[142] S. Kim, R. Shousha, S. Yang, Q. Hu, S. Hahn, A. Jalalvand, J.-K. Park, N. C. Logan, A. O. Nelson, Y.-S. Na et al., “Highest fusion performance without harmful edge energy bursts in tokamak,” Nature Communications, vol. 15, no. 1, p. 3990, 2024.
[143] Z. Jiang, C. Wang, and H. Yang, “Finite expression methods for discovering physical laws from data,” arXiv, 2023.
[144] S. Becker, M. Klein, A. Neitz, G. Parascandolo, and N. Kilbertus, “Predicting ordinary differential equations with transformers,” in International Conference on Machine Learning. PMLR, 2023, pp. 1978–2002.
[145] S. Leask, V. McDonell, and S. Samuelsen, “Modal extraction of spatiotemporal atomization data using a deep convolutional koopman network,” Physics of Fluids, vol. 33, no. 3, 2021.
[146] S. Kneer, T. Sayadi, D. Sipp, P. Schmid, and G. Rigas, “Symmetry-aware autoencoders: s-pca and s-nlpca,” arXiv preprint arXiv:2111.02893, 2021.
[147] N. Arnold-Medabalimi, C. Huang, and K. Duraisamy, “Large-eddy simulation and challenges for projection-based reduced-order modeling of a gas turbine model combustor,” International Journal of Spray and Combustion Dynamics, vol. 14, no. 1-2, pp. 153–175, 2022.
[148] C. R. Wentland, K. Duraisamy, and C. Huang, “Scalable projection-based reduced-order models for large multiscale fluid systems,” AIAA Journal, vol. 61, no. 10, pp. 4499–4523, 2023.
[149] A. Taflove, S. C. Hagness, and M. Piket-May, “Computational electromagnetics: the finite-difference time-domain method,” The Electrical Engineering Handbook, vol. 3, no. 629-670, p. 15, 2005.
[150] O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu, The Finite Element Method: Its Basis and Fundamentals. Elsevier, 2005.
[151] J. de Frutos and J. Novo, “A spectral element method for the navier–stokes equations with improved accuracy,” SIAM Journal on Numerical Analysis, vol. 38, no. 3, pp. 799–819, 2000.
[152] S. Chen and G. D. Doolen, “Lattice boltzmann method for fluid flows,” Annual Review of Fluid Mechanics, vol. 30, no. 1, 1998.
[153] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, “Deepxde: A deep learning library for solving differential equations,” SIAM Review, vol. 63, no. 1, pp. 208–228, 2021.
[154] M. Takamoto, T. Praditia, R. Leiteritz, D. MacKinlay, F. Alesiani, D. Pflüger, and M. Niepert, “Pdebench: An extensive benchmark for scientific machine learning,” Advances in Neural Information Processing Systems, vol. 35, pp. 1596–1611, 2022.
[155] Z. Hao, J. Yao, C. Su, H. Su, Z. Wang, F. Lu, Z. Xia, Y. Zhang, S. Liu, L. Lu et al., “Pinnacle: A comprehensive benchmark of physics-informed neural networks for solving pdes,” arXiv, 2023.
[156] W. T. Chung, B. Akoush, P. Sharma, A. Tamkin, K. S. Jung, J. Chen, J. Guo, D. Brouzet, M. Talei, B. Savard et al., “Turbulence in focus: Benchmarking scaling behavior of 3d volumetric super-resolution with blastnet 2.0 data,” Advances in Neural Information Processing Systems, vol. 36, 2024.
[157] F. Bonnet, J. Mazari, P. Cinnella, and P. Gallinari, “Airfrans: High fidelity computational fluid dynamics dataset for approximating reynolds-averaged navier–stokes solutions,” Advances in Neural Information Processing Systems, vol. 35, pp. 23463–23478, 2022.
[158] Z. Li, D. Shu, and A. Barati Farimani, “Scalable transformer for pde surrogate modeling,” Advances in Neural Information Processing Systems, vol. 36, 2024.
[159] H. Wu, H. Luo, H. Wang, J. Wang, and M. Long, “Transolver: A fast transformer solver for pdes on general geometries,” arXiv preprint arXiv:2402.02366, 2024.
[160] M. Lino, S. Fotiadis, A. A. Bharath, and C. D. Cantwell, “Multi-scale rotation-equivariant graph neural networks for unsteady eulerian fluid dynamics,” Physics of Fluids, vol. 34, no. 8, 2022.
[161] M. Fortunato, T. Pfaff, P. Wirnsberger, A. Pritzel, and P. Battaglia, “Multiscale meshgraphnets,” arXiv, 2022.
[162] Y. Cao, M. Chai, M. Li, and C. Jiang, “Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network,” in International Conference on Machine Learning. PMLR, 2023, pp. 3541–3558.
[163] L. Prantl, B. Ummenhofer, V. Koltun, and N. Thuerey, “Guaranteed conservation of momentum for learning particle-based fluid dynamics,” Advances in Neural Information Processing Systems, vol. 35, pp. 6901–6913, 2022.
[164] M. Rafiq, G. Rafiq, and G. S. Choi, “Dsfa-pinn: Deep spectral feature aggregation physics informed neural network,” IEEE Access, vol. 10, pp. 22247–22259, 2022.
[165] X. Zhao, X. Chen, Z. Gong, W. Zhou, W. Yao, and Y. Zhang, “Recfno: a resolution-invariant flow and heat field reconstruction method from sparse observations via fourier neural operator,” International Journal of Thermal Sciences, vol. 195, p. 108619, 2024.
[166] K. Tiwari, N. Krishnan et al., “Cono: Complex neural operator for continuous dynamical systems,” arXiv, 2023.
[167] A. Choubineh, J. Chen, D. A. Wood, F. Coenen, and F. Ma, “Fourier neural operator for fluid flow in small-shape 2d simulated porous media dataset,” Algorithms, vol. 16, no. 1, p. 24, 2023.
[168] X. Zhang, L. Wang, J. Helwig, Y. Luo, C. Fu, Y. Xie, M. Liu, Y. Lin, Z. Xu, K. Yan et al., “Artificial intelligence for science in quantum, atomistic, and continuum systems,” arXiv, 2023.
[169] Q. Zhao, D. B. Lindell, and G. Wetzstein, “Learning to solve pde-constrained inverse problems with graph networks,” arXiv, 2022.
[170] A. McNamara, A. Treuille, Z. Popović, and J. Stam, “Fluid control using the adjoint method,” ACM Transactions on Graphics (TOG), vol. 23, no. 3, pp. 449–456, 2004.
[171] B. Protas, “Adjoint-based optimization of pde systems with alternative gradients,” Journal of Computational Physics, vol. 227, no. 13, pp. 6490–6510, 2008.
[172] Y. Li, K. H. Ang, and G. Chong, “Pid control system analysis and design,” IEEE Control Systems Magazine, vol. 26, no. 1, 2006.
[173] M. Schwenzer, M. Ay, T. Bergs, and D. Abel, “Review on model predictive control: An engineering perspective,” The International Journal of Advanced Manufacturing Technology, vol. 117, no. 5-6, pp. 1327–1349, 2021.
[174] J. Rabault, M. Kuchta, A. Jensen, U. Réglade, and N. Cerardi, “Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control,” Journal of Fluid Mechanics, vol. 865, pp. 281–302, 2019.
[175] M. Elhawary, “Deep reinforcement learning for active flow control around a circular cylinder using unsteady-mode plasma actuators,” arXiv, 2020.
[176] G. Beintema, A. Corbetta, L. Biferale, and F. Toschi, “Controlling rayleigh–bénard convection via reinforcement learning,” Journal of Turbulence, vol. 21, no. 9-10, pp. 585–605, 2020.
[177] E. Hachem, H. Ghraieb, J. Viquerat, A. Larcher, and P. Meliga, “Deep reinforcement learning for the control of conjugate heat transfer,” Journal of Computational Physics, vol. 436, p. 110317, 2021.
[178] S. Verma, G. Novati, and P. Koumoutsakos, “Efficient collective swimming by harnessing vortices through deep reinforcement learning,” Proceedings of the National Academy of Sciences, vol. 115, no. 23, pp. 5849–5854, 2018.
[179] A. K. Shakya, G. Pillai, and S. Chakrabarty, “Reinforcement learning algorithms: A brief survey,” Expert Systems with Applications, p. 120495, 2023.
[180] Q. Wang, L. Yan, G. Hu, C. Li, Y. Xiao, H. Xiong, J. Rabault, and B. R. Noack, “DRLinFluids: An open-source Python platform of coupling deep reinforcement learning and OpenFOAM,” Physics of Fluids, vol. 34, no. 8, p. 081801, 2022.
[181] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851, 2020.
[182] B. Holzschuh, S. Vegetti, and N. Thuerey, “Solving inverse physics problems with score matching,” Advances in Neural Information Processing Systems, vol. 36, 2023.
[183] T. Wu, T. Maruyama, L. Wei, T. Zhang, Y. Du, G. Iaccarino, and J. Leskovec, “Compositional generative inverse design,” arXiv, 2024.
[184] J. L. Watson, D. Juergens, N. R. Bennett, B. L. Trippe, J. Yim, H. E. Eisenach, W. Ahern, A. J. Borst, R. J. Ragotte, L. F. Milles et al., “De novo design of protein structure and function with rfdiffusion,” Nature, vol. 620, no. 7976, pp. 1089–1100, 2023.
[185] J.-H. Bastek and D. M. Kochmann, “Inverse design of nonlinear mechanical metamaterials via video denoising diffusion models,” Nature Machine Intelligence, vol. 5, no. 12, pp. 1466–1475, 2023.
[186] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch, “Decision transformer: Reinforcement learning via sequence modeling,” arXiv, 2021.
[187] L. Wei, P. Hu, R. Feng, Y. Du, T. Zhang, R. Wang, Y. Wang, Z.-M. Ma, and T. Wu, “Generative PDE control,” in ICLR 2024 Workshop on AI4DifferentialEquations In Science, 2024.
[188] J. Bongard and H. Lipson, “Automated reverse engineering of nonlinear dynamical systems,” Proceedings of the National Academy of Sciences, vol. 104, no. 24, pp. 9943–9948, 2007.
[189] S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proceedings of the National Academy of Sciences, vol. 113, no. 15, pp. 3932–3937, Mar. 2016.
[190] H. Schaeffer, “Learning partial differential equations via data discovery and sparse optimization,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 473, no. 2197, p. 20160446, 2017.
[191] P. R. Vlachas, G. Arampatzis, C. Uhler, and P. Koumoutsakos, “Multiscale simulations of complex systems by learning their effective dynamics,” Nature Machine Intelligence, vol. 4, no. 4, pp. 359–366, 2022.
[192] Y. Lyu, X. Zhao, Z. Gong, X. Kang, and W. Yao, “Multi-fidelity prediction of fluid flow based on transfer learning using fourier neural operator,” Physics of Fluids, vol. 35, no. 7, 2023.
[193] J. Mi, X. Jin, and H. Li, “Cascade-net for predicting cylinder wake at reynolds numbers ranging from subcritical to supercritical regime,” Physics of Fluids, vol. 35, no. 7, 2023.
[194] S. Fang, M. Cooley, D. Long, S. Li, R. Kirby, and S. Zhe, “Solving high frequency and multi-scale pdes with gaussian processes,” arXiv, 2023.
[195] X. Zhang, J. Helwig, Y. Lin, Y. Xie, C. Fu, S. Wojtowytsch, and S. Ji, “Sinenet: Learning temporal dynamics in time-dependent partial differential equations,” arXiv, 2024.
[196] L. Serrano, L. L. Boudec, A. K. Koupaï, T. X. Wang, Y. Yin, J.-N. Vittaut, and P. Gallinari, “Operator learning with neural fields: Tackling pdes on general geometries,” Neural Information Processing Systems, 2023.
[197] M. Raissi, A. Yazdani, and G. E. Karniadakis, “Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations,” Science, vol. 367, no. 6481, pp. 1026–1030, 2020.
[198] M. Karlbauer, T. Praditia, S. Otte, S. Oladyshkin, W. Nowak, and M. V. Butz, “Composing partial differential equations with physics-aware neural networks,” in International Conference on Machine Learning. PMLR, 2022, pp. 10773–10801.
[199] L. Sun, H. Gao, S. Pan, and J.-X. Wang, “Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data,” Computer Methods in Applied Mechanics and Engineering, vol. 361, p. 112732, 2020.
[204] L. Yang and S. J. Osher, “Pde generalization of in-context operator networks: A study on 1d scalar nonlinear conservation laws,” arXiv, 2024.
[205] Y. Liu, Z. Zhang, and H. Schaeffer, “Prose: Predicting operators and symbolic expressions using multimodal transformers,” arXiv, 2023.
[206] J. Sun, Y. Liu, Z. Zhang, and H. Schaeffer, “Towards a foundation model for partial differential equation: Multi-operator learning and extrapolation,” arXiv, 2024.
[207] Z. Song, J. Yuan, and H. Yang, “Fmint: Bridging human designed and data pretrained models for differential equation foundation model,” arXiv, 2024.
[208] Z. Hang, Y. Ma, H. Wu, H. Wang, and M. Long, “Unisolver: Pde-conditional transformers are universal pde solvers,” arXiv, 2024.
[209] S. Subramanian, P. Harrington, K. Keutzer, W. Bhimji, D. Morozov, M. W. Mahoney, and A. Gholami, “Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior,” Advances in Neural Information Processing Systems, vol. 36, 2024.
[210] M. McCabe, B. R.-S. Blancard, L. H. Parker et al., “Multiple physics pretraining for physical surrogate models,” arXiv, 2023.
[211] T. Chen, H. Zhou, Y. Li, H. Wang, C. Gao, S. Zhang, and J. Li, “Building flexible machine learning models for scientific computing at scale,” arXiv, 2024.
[212] Z. Hao, C. Su, S. Liu, J. Berner, C. Ying, H. Su, A. Anandkumar, J. Song, and J. Zhu, “Dpot: Auto-regressive denoising operator transformer for large-scale pde pre-training,” arXiv, 2024.
[213] S. Hu, Y. Tu, X. Han, C. He, G. Cui, X. Long, Z. Zheng, Y. Fang, Y. Huang, W. Zhao et al., “Minicpm: Unveiling the potential of small language models with scalable training strategies,” arXiv preprint arXiv:2404.06395, 2024.
[214] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., “Emergent abilities of large language models,” arXiv preprint arXiv:2206.07682, 2022.
[215] G. Batzolis, J. Stanczuk, C.-B. Schönlieb, and C. Etmann, “Conditional image generation with score-based diffusion models,” arXiv preprint arXiv:2111.13606, 2021.
[216] C. Liu, Z. Liu, J. Holmes, L. Zhang, L. Zhang, Y. Ding, P. Shu, Z. Wu, H. Dai, Y. Li et al., “Artificial general intelligence for radiation oncology,” Meta-Radiology, p. 100045, 2023.
[217] H. Gao, X. Han, X. Fan, L. Sun, L.-P. Liu, L. Duan, and J.-X. Wang, “Bayesian conditional diffusion models for versatile spatiotemporal turbulence generation,” Computer Methods in Applied Mechanics and Engineering, vol. 427, p. 117023, 2024.
[218] M. Arts, V. Garcia Satorras, C.-W. Huang, D. Zugner, M. Federici, C. Clementi, F. Noe, R. Pinsler, and R. van den Berg, “Two for one: Diffusion models and force fields for coarse-grained molecular dynamics,” Journal of Chemical Theory and Computation, vol. 19, no. 18, pp. 6151–6159, 2023.
[219] A. Merchant, S. Batzner, S. S. Schoenholz, M. Aykol, G. Cheon, and E. D. Cubuk, “Scaling deep learning for materials discovery,” Nature, pp. 1–6, 2023.
[220] Y. Du, Y. Wang, Y. Huang, J. C. Li, Y. Zhu, T. Xie, C. Duan, J. Gregoire, and C. P. Gomes, “M2hub: Unlocking the potential of machine learning for materials discovery,” Advances in Neural Information Processing Systems, vol. 36, pp. 77359–77378, 2023.
[221] C. S. Ha, D. Yao, Z. Xu, C. Liu, H. Liu, D. Elkins, M. Kile, V. Deshpande, Z. Kong, M. Bauchy et al., “Rapid inverse design of metamaterials based on prescribed mechanical behavior through machine learning,” Nature Communications, vol. 14, no. 1, p. 5765, 2023.
[222] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko et al., “Highly accurate protein structure prediction with alphafold,” Nature, vol. 596, no. 7873, pp. 583–589, 2021.
[200] N. Saad, G. Gupta, S. Alizadeh, and D. C. Maddix, “Guiding [223] E. O. Pyzer-Knapp, J. W. Pitera, P. W. Staar, S. Takeda, T. Laino,
continuous operator learning through physics-based boundary D. P. Sanders, J. Sexton, J. R. Smith, and A. Curioni, “Acceler-
constraints,” Arxiv, 2022. ating materials discovery using artificial intelligence, high per-
[201] M. Finzi, A. Potapczynski, M. Choptuik, and A. G. Wilson, “A formance computing and robotics,” npj Computational Materials,
stable and scalable method for solving initial value pdes with vol. 8, no. 1, p. 84, 2022.
neural networks,” Arxiv, 2023.
[202] A. Lozano-Durán and H. J. Bae, “Machine learning building-
block-flow wall model for large-eddy simulation,” Journal of Fluid
Mechanics, vol. 963, p. A35, 2023.
[203] L. Yang, T. Meng, S. Liu, and S. J. Osher, “Prompting in-context
operator learning with sensor data, equations, and natural lan-
guage,” Arxiv, 2023.
10 APPENDIX

Here are refined explanations for each PDE, adding more detail to their descriptions. In addition, the discussion of flow problems introduces scenarios such as Ahmed-Body flow, which is crucial for automotive aerodynamics research, emphasizing drag reduction and flow separation.

10.1 Advection Equation

The advection equation models the transport of a scalar quantity ϕ with the fluid flow at a velocity u:

    ∂ϕ/∂t + u ∂ϕ/∂x = 0

10.2 Allen-Cahn Equation

The Allen-Cahn equation describes the phase-field dynamics of multi-phase systems, capturing the evolution of interfaces between different phases, with the parameter ϵ controlling the interface width:

    ∂ϕ/∂t = ϵ∆ϕ − (1/ϵ) ϕ(ϕ² − 1)

10.3 Anti-Derivative Equation

The anti-derivative equation models the mathematical property that the derivative of the integral of a function returns the original function:

    d/dx ∫ ϕ dx = ϕ

10.4 Bateman–Burgers Equation

The Bateman–Burgers equation combines the effects of viscous diffusion and nonlinear convection in fluid dynamics:

    ∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²

10.5 Burgers Equation

The Burgers equation is fundamental in fluid mechanics and nonlinear acoustics, simplifying the dynamics of viscous fluid flows. It is particularly useful in the study of shock-wave formation and turbulence modeling:

    ∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²

10.6 Diffusion Equation

The diffusion equation describes how substances spread through space over time due to diffusion. It is a basic model for heat conduction, mass transport in porous media, and other diffusion processes:

    ∂ϕ/∂t = D∆ϕ

10.7 Duffing Equation

The Duffing equation is a nonlinear differential equation that models the dynamics of oscillators with nonlinear restoring forces, which can exhibit both periodic and chaotic behavior:

    d²x/dt² + δ dx/dt + αx + βx³ = γ cos(ωt)

10.8 Eikonal Equation

The eikonal equation is essential in optics and computational geometry for modeling the propagation of wavefronts:

    ∥∇ϕ∥ = F(x)

10.9 Elastodynamic Equation

The elastodynamic equation models the behavior of elastic materials subjected to external stresses and strains:

    ρ ∂²u/∂t² − µ∆u − (λ + µ)∇(∇·u) = 0

10.10 Euler Equation

The Euler equation governs the motion of an inviscid fluid, emphasizing the conservation of mass and momentum:

    ∂ρ/∂t + ∇·(ρu⃗) = 0

10.11 Gray-Scott Equation

The Gray-Scott equations model the dynamics of reaction and diffusion processes in chemical kinetics, specifically the interaction between two chemical species undergoing reaction and diffusion:

    ∂u/∂t = −uv² + F(1 − u) + Dᵤ∆u

10.12 Heat Equation

The heat equation describes how temperature changes over time within a given region, accounting for thermal diffusion:

    ∂T/∂t = κ∆T

10.13 Korteweg-de Vries Equation

The Korteweg-de Vries equation models the propagation of solitary waves in shallow water and is crucial in the study of nonlinear wave phenomena in various physical contexts:

    ∂u/∂t + u ∂u/∂x + δ ∂³u/∂x³ = 0

10.14 Kuramoto-Sivashinsky Equation

The Kuramoto-Sivashinsky equation captures the behavior of instabilities and chaotic patterns in systems such as laminar flame fronts and fluid interfaces:

    ∂u/∂t + ν ∂⁴u/∂x⁴ + ∂²u/∂x² + u ∂u/∂x = 0

10.15 Laplace Equation

The Laplace equation is fundamental in electrostatics, fluid mechanics, and potential theory, describing potential fields in the absence of free charges or sources:

    ∆ϕ = 0

10.16 Poisson Equation

The Poisson equation is a generalization of the Laplace equation, incorporating a source term f, and describes the potential field ϕ in response to a given source distribution:

    ∆ϕ = f
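Parabolic equations of the diffusion type listed above (Sections 10.6 and 10.12) admit very compact explicit finite-difference solvers. The following is a minimal sketch, not taken from any surveyed work; the grid resolution, diffusivity, time step, and Gaussian initial profile are illustrative choices:

```python
import numpy as np

def solve_heat_1d(T0, kappa, dx, dt, n_steps):
    """March T0 forward with forward-Euler in time, central differences in
    space, and periodic boundaries (FTCS scheme for dT/dt = kappa * T_xx)."""
    # The explicit scheme is stable only if kappa*dt/dx**2 <= 0.5.
    assert kappa * dt / dx**2 <= 0.5, "explicit scheme unstable"
    T = T0.copy()
    for _ in range(n_steps):
        # Periodic discrete Laplacian via circular shifts.
        lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
        T = T + dt * kappa * lap
    return T

x = np.linspace(0.0, 1.0, 64, endpoint=False)
T0 = np.exp(-100.0 * (x - 0.5) ** 2)  # narrow Gaussian temperature pulse
T = solve_heat_1d(T0, kappa=0.01, dx=x[1] - x[0], dt=1e-3, n_steps=500)
# Diffusion smooths the pulse; with periodic boundaries total heat is conserved.
```

The same FTCS template extends directly to the reaction–diffusion and Allen–Cahn equations by adding the local source term to the update.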
10.17 Reynolds-Averaged Navier-Stokes Equation

The Reynolds-averaged Navier-Stokes (RANS) equations model turbulent flows by averaging the NS equations over time. They are a crucial tool in fluid dynamics for designing and analyzing systems where turbulence plays a significant role, simplifying the complex interactions of turbulent flow:

    ρ(∂u⃗/∂t + u⃗·∇u⃗) = −∇p + µ∇²u⃗ + ρf⃗

10.18 Reaction-Diffusion Equation

The reaction-diffusion equation combines chemical reaction dynamics with diffusion processes to describe patterns such as stripes, spirals, and chaos that emerge in chemical and biological systems:

    ∂ϕ/∂t = D∆ϕ + R(ϕ)

10.19 Schrödinger Equation

The Schrödinger equation describes how the quantum state of a physical system changes over time:

    iℏ ∂ψ/∂t = −(ℏ²/2m)∆ψ + Vψ

10.20 Shallow Water Equation

The shallow water equations describe fluid flow in shallow waters, where the horizontal dimensions are much greater than the vertical dimension:

    ∂h/∂t + ∇·(hu⃗) = 0

10.21 Wave Equation

The wave equation models the propagation of various types of waves, such as sound, light, and water waves, through different mediums:

    ∂²ϕ/∂t² = c²∆ϕ

10.22 Airfoil Flow

Airfoil flow, governed by the Navier-Stokes equations, serves to analyze lift and drag in aeronautical engineering, spanning subsonic to supersonic conditions. In more complex geometries, Beltrami flow, characterized by

    ∇ × u⃗ = λu⃗

explores fluid dynamics where the vorticity aligns with the velocity, offering insights into vortex-dominated flows.

10.23 Cavity Flow

Cavity flow, cylinder flow, and dam flow are all driven by the NS equations under specific no-slip boundary conditions. Darcy flow describes the flow of fluid through porous media as:

    ∇·(k∇p) = S

10.24 Kovasznay Flow

Kovasznay flow is a specific analytical solution of the NS equations for low-Reynolds-number flows.

10.25 Kolmogorov Flow

Kolmogorov flow is governed by the NS equations with an external forcing term, typically sinusoidal. Rayleigh-Bénard flow investigates convection patterns in coupled heat-transfer and fluid-flow equations.

10.26 Transonic Flow

Transonic flow is critical in aerospace engineering, focusing on the behavior of airflows that contain both subsonic and supersonic regions around objects.
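For constant permeability k, the Darcy flow equation of Section 10.23 reduces to a Poisson equation (Section 10.16), which classical iterative methods solve directly. Below is a minimal Jacobi-iteration sketch, not from any surveyed work; the grid size, zero Dirichlet boundary, and uniform source term are illustrative assumptions:

```python
import numpy as np

def jacobi_poisson(f, h, n_iter=5000):
    """Approximately solve lap(phi) = f on a square grid with spacing h and
    Dirichlet boundary condition phi = 0, using plain Jacobi iteration."""
    phi = np.zeros_like(f)
    for _ in range(n_iter):
        # Each interior point becomes the average of its four neighbours
        # minus the source contribution (NumPy evaluates the RHS from the
        # previous iterate before assigning, so this is pure Jacobi).
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1]
            + phi[1:-1, 2:] + phi[1:-1, :-2]
            - h**2 * f[1:-1, 1:-1]
        )
    return phi

n = 33
h = 1.0 / (n - 1)
f = -np.ones((n, n))        # uniform sink, e.g. constant recharge in Darcy flow
phi = jacobi_poisson(f, h)
# With f = -1 the solution is positive and peaks at the centre of the square.
```

Jacobi converges slowly (its error-reduction factor approaches 1 as the grid is refined), which is precisely the cost that multigrid methods and the learned elliptic solvers discussed in the survey aim to avoid.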