
METHODS

Spectral dynamic causal modeling: A didactic introduction and its relationship with functional connectivity

Leonardo Novelli¹, Karl Friston², and Adeel Razi¹,²,³

¹ Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Australia
² Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
³ CIFAR Azrieli Global Scholars Program, Toronto, Canada

Keywords: Effective connectivity, Functional connectivity, State-space modeling, fMRI

ABSTRACT

We present a didactic introduction to spectral dynamic causal modeling (DCM), a Bayesian state-space modeling approach used to infer effective connectivity from noninvasive neuroimaging data. Spectral DCM is currently the most widely applied DCM variant for resting-state functional MRI analysis. Our aim is to explain its technical foundations to an audience with limited expertise in state-space modeling and spectral data analysis. Particular attention will be paid to cross-spectral density, which is the most distinctive feature of spectral DCM and is closely related to functional connectivity, as measured by (zero-lag) Pearson correlations. In fact, the model parameters estimated by spectral DCM are those that best reproduce the cross-correlations between all measurements—at all time lags—including the zero-lag correlations that are usually interpreted as functional connectivity. We derive the functional connectivity matrix from the model equations and show how changing a single effective connectivity parameter can affect all pairwise correlations. To complicate matters, the pairs of brain regions showing the largest changes in functional connectivity do not necessarily coincide with those presenting the largest changes in effective connectivity. We discuss the implications and conclude with a comprehensive summary of the assumptions and limitations of spectral DCM.

Citation: Novelli, L., Friston, K., & Razi, A. (2024). Spectral dynamic causal modeling: A didactic introduction and its relationship with functional connectivity. Network Neuroscience, 8(1), 178–202. https://doi.org/10.1162/netn_a_00348

DOI: https://doi.org/10.1162/netn_a_00348

Supporting Information: https://doi.org/10.1162/netn_a_00348

Received: 23 June 2023
Accepted: 23 October 2023

Competing Interests: The authors have declared that no competing interests exist.

Corresponding Author: Leonardo Novelli ([email protected])

Handling Editor: Olaf Sporns

Copyright: © 2024 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

INTRODUCTION

Dynamic causal modeling (DCM) refers to the Bayesian fitting of state-space models to explain observed physiological signals in terms of hidden neuronal activity and connectivity (K. Friston, Harrison, & Penny, 2003; Triantafyllopoulos, 2021). The distinction between observed and hidden variables is particularly relevant in neuroscience because the signals recorded via noninvasive neuroimaging are not a direct measurement of neuronal states or connectivity. In fact, using observed recordings to infer unobserved neural interactions is the main purpose of DCM and the reason for its widespread adoption. It is also a distinctive feature that sets it apart from functional connectivity analysis, which simply characterizes statistical dependencies in observed time series. Over the last 20 years, the versatility offered by state-space models has seen DCM applications in most neuroimaging modalities (Frässle et al., 2021; K. Friston et al., 2003, 2019; K. J. Friston, Kahan, Biswal, & Razi, 2014; Jung, Kang, Chung, & Park, 2019; Kiebel, Garrido, Moran, & Friston, 2008; Moran et al., 2009; Tak, Kempny, Friston, Leff, & Penny, 2015), along with recent applications to epidemiology (K. J. Friston, Flandin, & Razi, 2022) and beyond (Bach, Daunizeau, Friston, & Dolan, 2010). Navigating the vast and technical DCM literature, however, is by no means a trivial task—especially to the novice learner.

Happily, there are excellent introductory resources on individual- and group-level analysis using deterministic versions of DCM, which are designed for neuroimaging experiments involving behavioral tasks (Stephan, 2004; Stephan et al., 2010; Zeidman, Jafarian, Corbin, et al., 2019; Zeidman, Jafarian, Seghier, et al., 2019). A recent primer on variational Laplace explains how Bayesian inference is performed in DCM (Zeidman, Friston, & Parr, 2023) using the SPM software (https://www.fil.ion.ucl.ac.uk/spm). However, there is a lack of introductory material on DCM for resting-state data analysis, despite the remarkable growth of the resting-state paradigm and the widespread uptake of these methods. Here, we fill this gap with a didactic introduction to spectral DCM that aims to explain its technical aspects (K. J. Friston et al., 2014; Razi, Kahan, Rees, & Friston, 2015).

SPM: The Statistical Parametric Mapping open software is widely used for dynamic causal modeling.
What distinguishes spectral DCM from other DCM versions, and when should we choose it? Firstly, spectral DCM employs random differential equations instead of deterministic ones. These are used to model spontaneous endogenous fluctuations in neuronal activity, enabling resting-state analysis in the absence of experimental inputs. But its distinctive feature is the focus on modeling the measured cross-spectral density, which is a second-order summary statistic of the time series data. This is closely related to Pearson’s correlation, another second-order statistic and the most widely used measure of functional connectivity in neuroimaging (see diagram in Figure 1; for mathematical relationships, also see K. J. Friston et al., 2014, Figure 1). In fact, the correlation is obtained by normalizing the covariance such that its values are restricted to the [−1, 1] interval. In turn, the covariance is a special case of the cross-covariance function between two time series, when there is no time lag between them. Finally, the Fourier transform of the cross-covariance function gives the cross-spectral density (under stationarity assumptions). In other words, the cross-spectral density is the equivalent representation of the cross-covariance function in the frequency domain instead of the time domain—an important relationship that we will unpack later in this article.

Endogenous fluctuations: Intrinsic stochastic fluctuations that serve as a proxy for thoughts or mind-wandering-like processes during resting-state brain activity.

Figure 1. The distinctive feature of spectral dynamic causal modeling (DCM) is the focus on
modeling the measured cross-spectral density (top right), which is a second-order summary statistic
of the time series data. This is closely related to Pearson’s correlation (bottom left), another second-
order statistic and the most widely used measure of functional connectivity in neuroimaging. In fact,
both can be derived from the cross-covariance function (top left). The cross-spectral density is
obtained directly via the Fourier transform (horizontal arrow). As such, it is the equivalent represen-
tation of the cross-covariance function in the frequency domain instead of the time domain. The
correlation is obtained in two steps (vertical arrows). First, we compute the covariance as a special
case of the cross-covariance function between two time series, by setting the time lag between them
to zero. Second, we normalize the covariance such that its values are restricted to the [−1, 1] inter-
val to obtain the zero-lag correlation. For the mathematical relationships among these quantities,
we refer readers to K. J. Friston et al. (2014, Figure 1).


Spectral DCM fits the parameters of a linear, continuous-time model to the observed cross-spectral density. The estimated parameters are those that best reproduce the cross-correlations between all variables, at all time lags. In particular, the estimated effective connectivity also reproduces the zero-lag correlations between the observed time series—the most common measure of functional connectivity in the literature. This would be appealing to researchers who are interested in both effective and functional connectivity. The nuanced relationship between effective and functional connectivity is explored in the Effective and Functional Connectivity section.

Effective connectivity: The directed effect of one brain region on another, measured as a rate of change.

Prior to that, we introduce and explain the various components of the generative model, that is, the model that generates the cross-spectral density given a set of parameters. These basic building blocks are used routinely in signal processing and control theory and are often presented only briefly in the DCM literature. Here, we adopt an inclusive and slower pace for those who are not familiar with state-space models and spectral data analysis. That said, we count on the reader to fill in the gaps and look up concepts such as the Fourier transform (Oppenheim, Willsky, & Nawab, 1997; Smith, 2002) or convolution (Smith, 2002), if needed. Moving from theory to practice, a step-by-step guide to running spectral DCM on a real resting-state functional MRI (fMRI) dataset is provided in Chapter 38 of the SPM12 manual (Ashburner et al., 2020).

Forward generative model: A model that generates the data feature of interest using a set of biologically plausible parameters. In spectral DCM, the generative model produces the BOLD cross-spectral density using neuronal and hemodynamic parameters.

A final reason for choosing spectral DCM is its computational advantage compared with stochastic DCM (Li et al., 2011). It is important to note that the lower computational complexity and the resulting increase in speed rely on the assumption that the statistics of endogenous neuronal fluctuations are conserved over the experimental time window, making spectral DCM suitable for resting-state neuroimaging experiments. Experimental inputs can also be included via an additional term in the model, although applications to task experiments are infrequent in the literature. Introducing even stronger assumptions leads to even faster schemes, such as regression DCM, which can analyze hundreds of brain regions in minutes (Frässle et al., 2021). However, this method forgoes the strict separation between hidden and observed variables that is typical of state-space modeling and that we have used to define DCM herein. As the name suggests, regression DCM is more akin to the Bayesian fitting of a multivariate autoregressive model in the frequency domain.

The key assumptions made in spectral DCM are summarized in the Assumptions and Limitations section.

BUILDING THE GENERATIVE MODEL, ONE ELEMENT AT A TIME

The signals recorded via noninvasive neuroimaging are not a direct measurement of neuronal activ-
ity. In the case of fMRI, the observed blood-oxygen-level-dependent (BOLD) signal captures changes
in blood oxygenation that indirectly reflect neuronal activity. For this simple reason, spectral DCM
models the neuronal and the observed variables separately (denoted by x and y, respectively). Such a
distinction represents both the main strength and the challenge of the DCM framework.

Neuronal Model

In spectral DCM, the neuronal model is defined by the linear random differential equation

\dot{x}(t) = A x(t) + v(t),    (1)

where x(t) is the N-dimensional state vector

x(t) = \begin{bmatrix} x_1(t) \\ \vdots \\ x_N(t) \end{bmatrix}

whose N scalar components represent different brain regions. These are called state variables in the state-space modeling literature (Durbin & Koopman, 2012; Williams & Lawrence, 2007). The time derivative of the state vector is denoted as \dot{x}(t), where differentiation with respect to time is performed component-wise, that is, \dot{x}(t) = [\dot{x}_1(t), \ldots, \dot{x}_N(t)]^{\intercal}. The activity of the system is sustained by stochastic, non-Markovian, endogenous fluctuations denoted as v(t), which we will consider in the Endogenous Fluctuations section. Let us first turn our attention to the A matrix, which defines and parameterizes the effective connectivity.

Effective Connectivity

The effective connectivity quantifies the directed effect of one brain region on another. To better understand its meaning, consider a simple deterministic system with two brain regions and no stochastic components. The matrix-vector notation in Equation 1 can be unpacked into two scalar components,

\dot{x}_1(t) = a_{11} x_1(t) + a_{12} x_2(t),    (2a)

\dot{x}_2(t) = a_{21} x_1(t) + a_{22} x_2(t),    (2b)

where ajk corresponds to the element in the j th row and k th column of A. More explicitly, in this example, we have

A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.    (3)

Let’s initially assume that x1 is inactive at time t1 and set x1(t1) = 0 in Equation 2a. We get

\dot{x}_1(t_1) = a_{12} x_2(t_1),    (4)

stating that the instantaneous change in x1 is proportional to the input from x2. The effective connectivity a12 is simply the coefficient that determines the rate of such change. Therefore, in DCM, the effective connectivity ajk quantifies the instantaneous response rate of xj caused by a change in xk, in the ideal case where all other variables were kept fixed or set to zero (readers who are familiar with multivariate calculus would recognize this as a partial derivative and the A matrix as a Jacobian). Being a rate, effective connectivity is always measured in hertz (change per second).
Figure 2 shows the impact of a12 on the response of x1 to a constant input from x2 with duration Δt. Note that the effective connectivity determines the initial slope of the curve, which is steeper for high values of a12. However, once the input from x2 ceases, the magnitude and duration of the response in x1 no longer depend on a12; instead, they only depend on the self-connection a11. That is, after the time interval Δt, we have x2(t) = 0 and Equation 2a becomes

\dot{x}_1(t) = a_{11} x_1(t),    (5)

which has the simple exponential solution

x_1(t) = c\, e^{a_{11} t},    (6)

for each time t > (t1 + Δt), where the constant factor c is the value of x1 when the input from x2 ceases, that is, c = x1(t1 + Δt). It is useful, and biologically plausible, to impose a negativity constraint on the rate constant of the self-connections (i.e., a11) to avoid instability and divergence to infinity. In this example, a negative value of a11 guarantees that x1(t) converges to zero.
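To make Equations 2a and 4–6 concrete, the following Python sketch integrates Equation 2a with a simple forward Euler scheme, treating x2 as a prescribed constant input of duration Δt, as in Figure 2. This is an illustrative toy simulation (not SPM code), and the parameter values a11 = −0.5 Hz and a12 = 1 Hz are arbitrary assumptions rather than values used in the paper.

# Minimal sketch: forward-Euler integration of Equation 2a, with x2 treated as a
# prescribed boxcar input of duration dt_input (as in Figure 2). Parameter values
# are illustrative assumptions.
import numpy as np

a11, a12 = -0.5, 1.0                 # self-connection (Hz) and effective connectivity (Hz)
dt, T, dt_input = 0.01, 20.0, 2.0
t = np.arange(0.0, T, dt)
x2 = (t <= dt_input).astype(float)   # constant input from x2 for dt_input seconds

x1 = np.zeros_like(t)
for k in range(len(t) - 1):
    dx1 = a11 * x1[k] + a12 * x2[k]  # Equation 2a with x1(0) = 0
    x1[k + 1] = x1[k] + dt * dx1

print("initial slope (should be close to a12):", (x1[1] - x1[0]) / dt)   # cf. Equation 4
print("after the input ends, x1 decays exponentially at rate a11 =", a11)

The initial slope of x1 is set by a12 (Equation 4), while the decay after the input ends depends only on the self-connection a11 (Equations 5 and 6), mirroring the behavior described for Figure 2.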


Figure 2. The role of the effective connectivity parameters in a deterministic linear system with
two variables. The parameter a12 determines the instantaneous rate of response of x1 to an input
from x2. In this example, the input is constant with duration Δt. The initial slope of the response is
steeper for higher values of a12. Once the input from x2 ceases, the activity of x1 decays exponen-
tially in the absence of other inputs. The decay rate is determined by the self-connection a11, with
larger negative values resulting in faster decay (shorter memory).

In the multivariate case, the stability of a linear dynamical system is guaranteed when all the
eigenvalues of the effective connectivity matrix A have negative real parts (Izhikevich, 2006).
(Note: The reason why DCM studies often report positive values on the diagonal of the
effective connectivity matrix A is that the self-connections are transformed using the logarith-
mic function log(−2a) by SPM. This convention is technically motivated by the use of
log-normal priors to enforce positivity or negativity on certain parameters; here, to enforce
recurrent or self inhibition. A reported zero value for a self-connection corresponds to
−0.5 Hz, which is the default prior self-connectivity value in SPM12. Negative values corre-
spond to slower decay rates in the (−0.5, 0) range, while positive values correspond to faster
decays, that is, <−0.5 Hz.)
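As a small numerical illustration of the convention just described, and assuming only the mapping stated above (a reported self-connection value s corresponds to a rate of −0.5·exp(s) Hz, since s = log(−2a)), the following lines convert a few reported values into decay rates. This is a sketch of the stated convention, not a call into SPM.

# Illustrative conversion between a log-scaled self-connection parameter s and the
# implied decay rate in Hz, following the convention described above: a = -0.5 * exp(s).
import numpy as np

for s in [-0.5, 0.0, 0.5]:
    rate_hz = -0.5 * np.exp(s)
    print(f"reported self-connection {s:+.1f}  ->  decay rate {rate_hz:.3f} Hz")
# A reported 0.0 maps to -0.5 Hz (the default prior); negative values map to rates in
# (-0.5, 0); positive values map to faster decays (< -0.5 Hz), as stated in the text.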

Power Spectral Density and Cross-Spectral Density

A time-varying signal z(t) is a function of time. However, the same function could be represented as a sum of elementary sine waves, each characterized by a single frequency. This sum is weighted, with some frequencies carrying more weight than others (i.e., the sine waves can have different amplitudes). Every function, z(t), is a unique mix of frequencies, some more pronounced, some less. This unique profile is called the Fourier frequency spectrum. The time- and frequency-domain representations of a function are two sides of the same coin: They are equally informative but reveal different and complementary aspects of the same data. The Fourier transform (F) is the mathematical tool that turns the coin over: It converts a function of time into the corresponding function of frequency (while the inverse Fourier transform does the opposite). If we denote the (angular) frequency by ω, then F turns the time function z(t) into the frequency function Z(ω). Mathematically, this transformation is achieved via the integral

Z(\omega) = \mathcal{F}\{z(t)\} = \int_{-\infty}^{\infty} z(t)\, e^{-i\omega t}\, dt    (7)

(for time signals, we often only consider positive values of t and compute the integral in the [0, ∞] interval; this is equivalent to setting z(t) = 0 for all t < 0). The resulting Z(ω) is a function of the frequency ω and no longer depends on time. Somewhat ambiguously, the term “Fourier transform” is used to denote both the mathematical operation and the resulting function, Z(ω). Note that Z(ω) typically returns complex values, owing to the presence of the imaginary unit i in Equation 7. Yet, the squared magnitude of a complex number (e.g., |Z(ω)|²) is a real number, defined as the square of its real part plus the square of its imaginary part. Therefore, the squared magnitude of the Fourier transform is a function that returns only real values, which makes it easier to understand and visualize. This function is the power spectral density of the signal. The simplicity of interpretation comes with a loss of information. After computing the squared magnitude to obtain the power spectral density, we cannot go back and recover the original complex-valued Fourier transform (similarly to what happens for real numbers, where the square produces a unique result but the square root has two solutions). A similar information loss affects all second-order properties of the time series, including the cross-spectral density and the (cross-)correlation, which are two key concepts that we will discuss and connect later.
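The following minimal numpy sketch illustrates the two steps just described: take the Fourier transform of a time signal and keep only its squared magnitude. The test signal (a sum of two sinusoids) and the sampling parameters are arbitrary illustrative choices.

# From a time signal to its power spectral density: discrete Fourier transform,
# then squared magnitude (the phase information is discarded at this step).
import numpy as np

dt = 0.1                                                    # sampling interval (s)
t = np.arange(0, 100, dt)
z = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)   # arbitrary test signal

Z = np.fft.rfft(z)                        # complex-valued Fourier coefficients
freqs = np.fft.rfftfreq(len(z), d=dt)     # frequencies in Hz
psd = np.abs(Z) ** 2                      # squared magnitude: real-valued

print("dominant frequency: %.2f Hz" % freqs[np.argmax(psd)])   # the stronger sinusoid (0.2 Hz)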
Until now, we have considered only deterministic signals. However, spectral DCM is concerned with stochastic (nondeterministic) processes, such as the endogenous fluctuations that we will examine in the next section, which are a proxy for thoughts or mind-wandering-like processes during resting-state brain activity. A stochastic process is a sequence of random variables. If x is a stochastic process indexed by time, then x(t) is not a number but a random variable with a given probability distribution (see Figure S1 in the Supporting Information for an illustration). The simplest example is the white noise process, which follows a normal distribution at each time point, independent from previous time points. The rest of this paragraph explains why white noise has a flat power spectral density and is meant for the mathematically versed reader. According to stochastic calculus, the Fourier transform of a stochastic process is also a stochastic process; however, it is indexed by frequency instead of time. In the case of white noise, each frequency ω corresponds to a distinct random variable that follows the same complex normal distribution as the other frequencies but is independent of them. In turn, the power spectral density is also a stochastic process indexed by frequency, obtained as the squared magnitude of the Fourier transform. Therefore, in the case of stochastic processes, we will consider the expected value of the power spectral density (E[|Z(ω)|²]), which is a number, that is, a scalar function of frequency. Being the expectation of the squared magnitude, the power spectral density can also be understood as the variance of the Fourier transform of a stochastic process, if the latter has zero mean:

\mathrm{Var}[\mathcal{F}\{z(t)\}] = \mathrm{Var}[Z(\omega)] = E\big[|Z(\omega)|^2\big] - \underbrace{|E[Z(\omega)]|^2}_{=\,0} = E\big[|Z(\omega)|^2\big].    (8)

In conclusion, white noise has a flat power spectral density because the variance of its Fourier transform is the same for all frequencies. (Technical note: for simplicity, we assume that the Fourier transform exists. The general definition of the power spectral density involves a limit over the bounds of the Fourier integral; Miller & Childers, 2012).

Stochastic process: A sequence of random variables, typically indexed by time.
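A quick numerical illustration of this conclusion, assuming unit-variance Gaussian white noise: a single periodogram is itself random, but averaging over many realizations approximates the expected power spectral density, which comes out roughly constant across frequencies.

# Sketch: the expected power spectral density of white noise is flat.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_realizations = 1024, 2000
psds = []
for _ in range(n_realizations):
    z = rng.standard_normal(n_samples)          # one white-noise realization
    Z = np.fft.rfft(z)
    psds.append(np.abs(Z) ** 2 / n_samples)     # periodogram of this realization
mean_psd = np.mean(psds, axis=0)                # approximates E[|Z(w)|^2]

# Away from sampling noise, the averaged spectrum is roughly constant (about 1 here).
print("min and max of the averaged spectrum:", mean_psd.min(), mean_psd.max())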
From here, we can seamlessly transition to multivariate stochastic processes using the same mathematical tools. If x(t) is a vector with one element per brain region, its Fourier transform X(ω) is also a vector. This is important because it applies to the stochastic neuronal variable in Equation 1. The multivariate analogue of the power spectral density is the cross-spectral density, defined as the covariance matrix

G_x(\omega) = \mathrm{cov}[X(\omega), X(\omega)] = E\big[X(\omega)\, X(\omega)^{\dagger}\big],    (9)

where † indicates the conjugate transpose of the vector X(ω). The (outer) product between the column vector X(ω) and its conjugate transpose is a square matrix. Specifically, Gx(ω) is an N × N matrix whose diagonal elements are the power spectral densities (variances) of individual neuronal variables, representing various brain regions. These are real positive numbers. Each off-diagonal element describes the cross-spectral density (covariance) between a different pair of variables. Unlike the diagonal elements, they generally take complex values.
Admittedly, the cross-spectral density definition as a covariance in the frequency domain is
quite abstract. A better intuition will develop after exploring the close relationship between
cross-spectral density and functional connectivity. In the Cross-Spectral Density and Func-
tional Connectivity section, we’ll see how the cross-spectral density is the Fourier transform
of the cross-covariance function, which captures both the correlation matrix and its time-
lagged extensions. For now, the power spectral density definition given above is sufficient
to understand how endogenous fluctuations are modeled in spectral DCM.
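As a hands-on counterpart to Equation 9, the following sketch estimates the cross-spectral density matrix of a toy bivariate process by averaging the outer products of its Fourier coefficients over realizations. The mixing matrix used to correlate the two "regions" is an arbitrary assumption for illustration.

# Estimate Gx(w) = E[X(w) X(w)^H] by averaging over realizations of a toy process.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_realizations, n_regions = 512, 500, 2
mixing = np.array([[1.0, 0.0], [0.8, 0.6]])       # introduces correlation between regions

n_freqs = n_samples // 2 + 1
Gx = np.zeros((n_freqs, n_regions, n_regions), dtype=complex)
for _ in range(n_realizations):
    x = mixing @ rng.standard_normal((n_regions, n_samples))   # correlated time series
    X = np.fft.rfft(x, axis=1)                                  # Fourier transform per region
    # outer product X(w) X(w)^H at every frequency, accumulated across realizations
    Gx += np.einsum('jw,kw->wjk', X, X.conj()) / n_realizations

w = 10                                            # inspect an arbitrary frequency bin
print("diagonal entries (real power):", Gx[w, 0, 0].real, Gx[w, 1, 1].real)
print("off-diagonal entry (complex in general):", Gx[w, 0, 1])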

Endogenous Fluctuations

Stable deterministic linear systems can only converge to a state of permanent equilibrium or produce an infinite sequence of identical oscillations. To overcome these limitations and add variability to the neuronal oscillations, we can introduce endogenous (intrinsic) fluctuations in the system, sometimes referred to as state noise. For example, adding the stochastic term v1(t) to Equation 5 gives

\dot{x}_1(t) = a_{11} x_1(t) + v_1(t).    (10)

At each time t, the random variable v1(t) provides an endogenous input to the neuronal variable so that x1(t) doesn’t converge to zero despite the negative self-decay rate a11. This holds true even in the absence of experimental inputs and inputs from other variables, as is the case in Equation 10. In other words, the neuronal activity is now also modeled as an intrinsically fluctuating signal, that is, a stochastic process. The addition of a stochastic term to a dynamical system is traditionally used to model noise, often assumed to be white (that is, serially uncorrelated and with a flat spectral density). Spectral DCM relaxes this assumption and allows the endogenous fluctuations to be temporally correlated, which makes them non-Markovian and smooth. Specifically, their power spectral density is modeled to follow a power-law decay as a function of the frequency ω:

G_{v_j}(\omega) = \alpha_{v_j}\, \omega^{-\beta_{v_j}}.    (11)

The parameters αvj and βvj determine the amplitude and the decay rate of the power-law and may differ between neuronal regions (j = 1, …, N). Note that the power-law family includes the flat spectrum (white noise) as a special case where βvj = 0.

The endogenous fluctuations driving one neuronal variable are assumed to be independent of those driving the others. The result is that the cross-spectral density of the endogenous fluctuations vector v(t) = [v1(t), …, vN(t)]⊺ is a diagonal matrix with entries Gvj(ω) defined according to Equation 11. More precisely, the Fourier transform of v(t), denoted as the vector V(ω), is a multivariate Gaussian random variable with zero mean and diagonal covariance matrix

G_v(\omega) = E\big[V(\omega)\, V(\omega)^{\dagger}\big] = \mathrm{diag}\big(G_{v_j}(\omega)\big), \quad j = 1, \ldots, N,    (12)

where each diagonal entry is the power spectral density of an endogenous fluctuation variable. We will return to this expression when assembling all the elements of the generative model.
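A short sketch of Equations 11 and 12: build power-law spectra for three regions and stack them into the diagonal matrix Gv(ω) at each frequency. The α and β values below are arbitrary illustrative choices, not the priors or estimates used in SPM.

# Power-law spectra for the endogenous fluctuations (Equation 11) assembled into a
# diagonal cross-spectral density matrix Gv(w) at each frequency (Equation 12).
import numpy as np

omega = np.linspace(0.01, 1.0, 100)   # angular frequencies; w = 0 is avoided because a pure power law diverges there
alpha = np.array([1.0, 0.8, 1.2])     # amplitude alpha_vj for each of N = 3 regions (illustrative)
beta = np.array([0.5, 0.7, 0.4])      # decay exponent beta_vj for each region (illustrative)

Gvj = alpha[None, :] * omega[:, None] ** (-beta[None, :])   # G_vj(w), shape (n_freqs, N)
Gv = np.einsum('wj,jk->wjk', Gvj, np.eye(3))                # diag(G_vj(w)) at each frequency

print(Gv[0])    # 3 x 3 diagonal matrix at the lowest frequency
print(Gv[-1])   # the fluctuations carry less power at higher frequencies (beta > 0)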


Observation Function

We motivated the use of state-space models by their ability to distinguish between hidden and observed variables. The function that relates the two is known as the observation function. Imagine hearing thunder, where the sound (observed variable) is generated by lightning (hidden variable). The role of the observation function is to describe the intensity and delay of the sound based on the distance from the lightning. The specific observation function used in fMRI is the hemodynamic response function (HRF), which links the neuronal activity to the observed BOLD signal. Similarly to the lightning and thunder example, there is a delay between the neuronal activity and the ensuing peak of the BOLD response. The profile of the response depends on several region-specific biophysical parameters and can be modeled mathematically (Stephan, Weiskopf, Drysdale, Robinson, & Friston, 2007). For simplicity, we will denote the HRF of a brain region j as hj(t), without explicitly indicating the biophysical parameters. The BOLD signal yj(t) is obtained via convolution of the HRF with the neuronal activity:

y_j(t) = h_j(t) \ast x_j(t) + e_j(t),    (13)

where j = 1, …, N and ej(t) denotes the observation noise. By analogy with the endogenous fluctuations in Equation 11, spectral DCM assumes that the power spectral density of the observation noise also follows a power-law decay:

G_{e_j}(\omega) = \alpha_{e_j}\, \omega^{-\beta_{e_j}}.    (14)

The equivalent representation in vector notation is

y(t) = h(t) \ast x(t) + e(t), \qquad E(\omega) \sim \mathcal{N}\big(0, G_e(\omega)\big),    (15)

where h(t) is a diagonal matrix with diagonal elements hj(t) for j = 1, …, N (as before, N is the number of regions). The noise terms in the vector e(t) = [e1(t), …, eN(t)]⊺ are assumed to be independent of each other, that is, the noise in each region is independent of the noise in the other regions. Thus, the Fourier transform of e(t), denoted as E(ω), is a multivariate Gaussian random variable with zero mean and diagonal covariance matrix E[E(ω)E(ω)†] = Ge(ω), whose diagonal entries are Gej(ω), for all j = 1, …, N. When working in the frequency domain, we can implement the hemodynamic response function as a filter—usually suppressing high frequencies—and implement delays by operating on the imaginary parts of the Fourier coefficients.
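The following sketch illustrates Equation 13 for a single region: convolve a stand-in neuronal signal with a gamma-shaped kernel and add observation noise. The kernel is a simple placeholder for the region-specific balloon-model HRF used by spectral DCM, and all parameter values are illustrative assumptions.

# y_j(t) = h_j(t) * x_j(t) + e_j(t): convolution of neuronal activity with an HRF-like
# kernel plus observation noise (Equation 13). All choices below are illustrative.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.5                                          # time step (s)
t = np.arange(0, 200, dt)

x = rng.standard_normal(len(t))                   # stand-in neuronal activity x_j(t)
t_h = np.arange(0, 30, dt)
hrf = t_h ** 5 * np.exp(-t_h)                     # gamma-shaped kernel (peak around 5 s)
hrf /= hrf.sum()                                  # normalize the kernel

noise = 0.05 * rng.standard_normal(len(t))        # observation noise e_j(t)
y = np.convolve(x, hrf)[:len(t)] + noise          # BOLD-like signal y_j(t)

# The slow kernel acts as a low-pass filter: y is much smoother than x.
print("std of increments, neuronal vs BOLD-like:", np.diff(x).std(), np.diff(y).std())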

Putting It All Together

The full state-space model used in spectral DCM is

\dot{x}(t) = A x(t) + v(t),
y(t) = h(t) \ast x(t) + e(t),
V(\omega) \sim \mathcal{N}\big(0, G_v(\omega)\big),
E(\omega) \sim \mathcal{N}\big(0, G_e(\omega)\big),    (16)

where each equation has been introduced in its respective section above (for simplicity, we have omitted the additional term describing the sampling error of the cross-spectral density; K. J. Friston et al., 2014). Since DCM uses a Bayesian framework, all the model parameters are equipped with a prior distribution (K. J. Friston et al., 2014). Their posterior distribution is then computed via Bayesian inference using variational Laplace (K. J. Friston, Mattout, Trujillo-Barreto, Ashburner, & Penny, 2007; Zeidman et al., 2023). In spectral DCM, this goal is achieved by fitting the generative model in Equation 16 to the cross-spectral density of the data, which is defined as the matrix

G_y(\omega) = E\big[\mathcal{F}\{y(t)\}\, \mathcal{F}\{y(t)\}^{\dagger}\big].    (17)

In SPM12, the cross-spectral density is estimated in a parametric fashion, by fitting a vector autoregressive model to the observed time series and leveraging its properties (the default model order used for the slow BOLD fluctuations is 8). Going “backwards” from the observed cross-spectral density to the parameter distribution is only possible once we specify the forward model. Specifying the forward model means deriving Gy(ω) as a function of the model parameters. We can start by invoking the convolution theorem, which states that the Fourier transform of a convolution of two functions is the product of their Fourier transforms. In the case of the observed signal y(t), we get

\mathcal{F}\{y(t)\} = \mathcal{F}\{h(t) \ast x(t) + e(t)\} = \mathcal{F}\{h(t)\}\, \mathcal{F}\{x(t)\} + \mathcal{F}\{e(t)\} = H(\omega) X(\omega) + E(\omega),    (18)

where the terms in capital letters are the Fourier transforms of the corresponding lowercase functions. Linear control theory gives us a useful expression for X(ω), obtained as the solution to the linear differential equation in Equation 1 via the Laplace method:

X(\omega) = (i\omega I - A)^{-1} V(\omega),    (19)

where I is the N-dimensional identity matrix (Pipes, 1968). Plugging Equations 18 and 19 into Equation 17 yields

G_y(\omega) = E\big[H(\omega) X(\omega) X(\omega)^{\dagger} H(\omega)^{\dagger}\big] + E\big[E(\omega) E(\omega)^{\dagger}\big] + E\big[H(\omega) X(\omega)\big]\, E\big[E(\omega)\big]^{\dagger} + E\big[E(\omega)\big]\, E\big[H(\omega) X(\omega)\big]^{\dagger}    (20)

= E\big[H(\omega) (i\omega I - A)^{-1} V(\omega) V(\omega)^{\dagger} (-i\omega I - A^{\intercal})^{-1} H(\omega)^{\dagger}\big] + E\big[E(\omega) E(\omega)^{\dagger}\big]    (21)

= H(\omega) (i\omega I - A)^{-1} G_v(\omega) (-i\omega I - A^{\intercal})^{-1} H(\omega)^{\dagger} + G_e(\omega).    (22)

The last two terms in Equation 20 vanish since the noise term E(ω) is a Gaussian random variable with zero mean, that is, E[E(ω)] = 0. In Equation 22, we have substituted E[V(ω)V(ω)†] = Gv(ω) and E[E(ω)E(ω)†] = Ge(ω), as defined in the Endogenous Fluctuations and the Observation Function sections.


We have now fully described the forward model used in spectral DCM. The predicted cross-spectral density can be computed using the following model parameters (a minimal numerical sketch follows this list):

(a) the effective connectivity parameters in the A matrix;
(b) the power-law parameters (i.e., the amplitude and the exponent) describing the spectrum of the endogenous fluctuations and the observation noise (Equations 11 and 14); and
(c) the observation function parameters, such as the biophysical parameters of the BOLD balloon model.

Crucially, neither the neuronal state variables X(ω) nor the endogenous fluctuations V(ω)
appear in Equation 22, only the parameters describing their cross-spectral densities. This
parameterization allows spectral DCM to infer the model parameters listed above without
inferring the hidden neuronal states. Inferring the state variables (neuronal time series) is a
computationally harder problem addressed by stochastic DCM (Li et al., 2011).
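To make the forward model tangible, here is a minimal Python sketch of Equation 22: a function that maps an effective connectivity matrix, noise spectra, and an observation transfer function to a predicted cross-spectral density. It is a bare-bones illustration evaluated on arbitrary toy inputs (a simple first-order low-pass stand-in for the hemodynamic transfer function and flat noise spectra), not SPM's implementation, which additionally handles priors, the balloon model, and the sampling-error term.

# Sketch of Equation 22: Gy(w) = H(w)(iwI - A)^-1 Gv(w)(-iwI - A^T)^-1 H(w)^dagger + Ge(w).
import numpy as np

def predicted_csd(A, Gv, Ge, H, omegas):
    """Evaluate the forward model of Equation 22 at each angular frequency in omegas."""
    n = A.shape[0]
    I = np.eye(n)
    Gy = np.empty((len(omegas), n, n), dtype=complex)
    for idx, w in enumerate(omegas):
        T = np.linalg.inv(1j * w * I - A)          # transfer matrix from Equation 19
        Hw, Gvw, Gew = H(w), Gv(w), Ge(w)
        Gy[idx] = Hw @ T @ Gvw @ np.linalg.inv(-1j * w * I - A.T) @ Hw.conj().T + Gew
    return Gy

# Toy inputs (illustrative assumptions): two regions, one directed connection of 0.4 Hz,
# flat (white) noise spectra, and an arbitrary low-pass stand-in for the HRF spectrum.
A = np.array([[-0.5, 0.0], [0.4, -0.5]])
flat = lambda w: np.eye(2)                         # white noise: alpha = 1, beta = 0
H = lambda w: np.eye(2) / (1 + 1j * w)             # not the DCM HRF, just a smooth low-pass filter
Gy = predicted_csd(A, flat, flat, H, omegas=np.linspace(0.01, 2.0, 50))
print(Gy[0].round(3))                              # complex 2 x 2 matrix at the lowest frequency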

Simulated and Empirical Cross-Spectral Density

The forward model enables Bayesian inversion and also allows us to understand how different parameters affect the observed cross-spectral density. Figure 3 shows how the cross-spectral density [Gy(ω)]21 varies in a system with two hidden neuronal state variables, as a function of their effective connectivity strength. Specifically, the chosen effective connectivity matrix is

A = \begin{bmatrix} -\frac{1}{2} & 0 \\ a_{21} & -\frac{1}{2} \end{bmatrix}.    (23)

The asymmetry in A indicates a directed effect of the first state variable on the second, but not vice versa. The strength of the connection is determined by a21. In this first simple example, increasingly large and positive values of a21 generate increasingly large and positive cross-spectral density amplitudes. Similarly, negative values generate negative amplitudes (but we’ll soon encounter more complex scenarios that violate this monotonic relationship). When a21 = 0, the two neuronal state variables are independent of each other and the cross-spectral density is zero at all frequencies (Figure 3; also see the Supporting Information for an explanation of the real and imaginary parts). Here, the cross-spectral densities of the endogenous fluctuations and of the observation noise are identical to each other and identical for both state variables (so that both diagonal entries are equal):

G_v(\omega) = G_e(\omega) = \begin{bmatrix} \frac{1}{(1+\omega)^2} & 0 \\ 0 & \frac{1}{(1+\omega)^2} \end{bmatrix} = (1+\omega)^{-2} I.    (24)

Figure 3. Cross-spectral density of the BOLD signal as a function of the effective connectivity between two neuronal state variables (a21). In this first example, increasingly large and positive values of a21 generate increasingly large and positive cross-spectral density amplitudes, while negative values generate negative amplitudes. However, this monotonic relationship doesn’t hold in more realistic scenarios. When a21 = 0, the two neuronal state variables are independent and the cross-spectral density is zero at all frequencies.

The observation function is also identical for both state variables. Spectral DCM employs an HRF model with region-specific biophysical parameters (Stephan et al., 2007). However, for simplicity, the examples in this section are based on the canonical HRF gamma-mixture, whose Fourier spectrum is

H(\omega) = \frac{6(i\omega + 1)^{10} - 1}{5(i\omega + 1)^{16}}\, I.    (25)

High frequencies are “cut off” since the slow HRF smooths the faster neuronal activity (a typical feature of the BOLD signal).
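Putting Equations 22–25 together, the following self-contained sketch evaluates the off-diagonal element [Gy(ω)]21 of the two-region system for a few values of a21, mirroring the logic of Figure 3. The frequency and the set of a21 values are arbitrary choices made here for illustration.

# Evaluate the forward model (Equation 22) for the system of Equation 23, with the
# noise spectra of Equation 24 and the canonical HRF spectrum of Equation 25.
import numpy as np

def csd_21(a21, w):
    A = np.array([[-0.5, 0.0], [a21, -0.5]])
    I = np.eye(2)
    G_noise = I / (1 + w) ** 2                                        # Equation 24 (Gv = Ge)
    H = (6 * (1j * w + 1) ** 10 - 1) / (5 * (1j * w + 1) ** 16) * I   # Equation 25
    T = np.linalg.inv(1j * w * I - A)
    Gy = H @ T @ G_noise @ np.linalg.inv(-1j * w * I - A.T) @ H.conj().T + G_noise
    return Gy[1, 0]

w = 0.2                                          # an arbitrary low angular frequency
for a21 in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    val = csd_21(a21, w)
    print(f"a21 = {a21:+.1f}  ->  [Gy]_21 = {val.real:.4f} {val.imag:+.4f}j")
# a21 = 0 gives zero cross-spectral density; in this two-region case the amplitude grows
# with |a21| and the sign of the real part follows the sign of a21, as in Figure 3.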
Let’s now extend the system in Equation 23 by adding a third neuronal state variable (see Figure 4) and a third row and column to the effective connectivity matrix:

A = \begin{bmatrix} -\frac{1}{2} & 0 & 0 \\ a_{21} & -\frac{1}{2} & 0 \\ -\frac{1}{2} & \frac{3}{2} & -\frac{1}{2} \end{bmatrix}.    (26)

As before, the strength of the directed connection from x1 to x2 is set via the parameter a21.
The third neuronal state variable receives an inhibitory influence from x1 and an excitatory
influence from x2. All three state variables have the same negative self-connections repre-
sented by the diagonal elements of A. The resulting cross-spectral density is plotted in
Figure 5, assuming the same canonical HRF defined in Equation 25 and the same power-
law parameters as in Equation 24. The figure shows only the special case where a21 = 0,
but different values of a21 would generate different sets of nine plots.
Figure 4. Network graph of the system described by the effective connectivity matrix A in Equation 26. The strength of the directed connection from x1 to x2 is set via the parameter a21. The third neuronal state variable receives an inhibitory influence from x1 and an excitatory influence from x2. All three variables have the same negative self-connections represented by the diagonal elements of A.

Figure 5. Simulated BOLD cross-spectral density of the system in Figure 4, generated via the forward model in Equation 22. The effective connectivity matrix is presented in Equation 26. This figure shows only the special case where a21 = 0, but using different values of a21 would generate different sets of nine plots. Each plot corresponds to a pair of state variables and, if we chose a specific value for the frequency ω, we would obtain a 3 × 3 matrix (the cross-spectral density matrix at that specific frequency). The three plots on the diagonal represent the power spectral densities, which take real values (zero imaginary part). The same canonical hemodynamic response function defined in Equation 25 is used for all variables in this example, although spectral DCM employs the BOLD balloon model with region-specific biophysical parameters. The power-law parameters of the endogenous fluctuations and of the observation noise are set as in Equation 24. For simplicity, we have normalized the rows and columns of the cross-spectral density so that integrating over all frequencies directly generates the correlation matrix instead of the covariance matrix.

It is instructive to qualitatively compare the simulated cross-spectral density plots in Figure 5 with the empirical plots in Figure 6, obtained by fitting the spectral DCM model to a real resting-state dataset (Razi et al., 2015). Note that the empirical plots correspond to a system with four neuronal variables instead of three. Here, a four-region default mode network is modeled with posterior cingulate cortex (PCC), medial prefrontal cortex (mPFC), and bilateral inferior parietal cortices (IPC) as its nodes. We used a fully connected model where each state variable (or node) is connected to every other state variable. The figure is reproduced from Chapter 38 of the SPM12 manual (Ashburner et al., 2020), which provides the link to the data and a step-by-step tutorial to replicate the DCM specification and estimation results using the SPM graphical interface. Despite the clear differences due to different data and parameter settings, the empirical cross-spectral density plots also feature a single large peak at low frequencies, followed by a decay at larger frequencies (with smaller fluctuations). If the data are too noisy, owing to head motion and various physiological artifacts, additional large peaks may appear at higher frequencies. These empirical plots can be visualized using the review function (spm_dcm_fmri_csd_results()) in SPM. Another DCM diagnostics function (spm_dcm_fmri_check()) in SPM also reports the percentage of variance explained (R²), which is a useful performance metric to judge the quality and success of the model fit to the data (see Figure S2 in the Supporting Information).

Figure 6. BOLD cross-spectral density obtained via spectral DCM analysis of a real resting-state fMRI dataset. The dashed lines represent the predicted cross-spectral density and the solid lines the observed ones. Reproduced with permission from Chapter 38 of the SPM12 manual (Ashburner et al., 2020).


EFFECTIVE AND FUNCTIONAL CONNECTIVITY

The main difference between effective and functional connectivity is that the former characterizes the interaction between neuronal state variables, while the latter describes statistical dependence between the observed variables (e.g., the BOLD signals in fMRI). Given its logical and biological precedence, effective connectivity can be used to derive functional connectivity. Specifically, the functional connectivity matrix of the observed time series can be obtained by integrating the cross-spectral density over all frequencies. The reason will become clear in the following sections, and a mathematical proof will be given in Equation 37. Let’s first return to the simulated example depicted in Figure 4 to understand the relevant implications. The effective connectivity matrix of the system is given in Equation 26. Integrating the cross-spectral density in Figure 5 over all frequencies gives us the 3 × 3 symmetric correlation matrix R, typically used to quantify the functional connectivity:

R = \begin{bmatrix} 1 & \rho_{21} & \rho_{31} \\ \rho_{21} & 1 & \rho_{32} \\ \rho_{31} & \rho_{32} & 1 \end{bmatrix}.    (27)

The nine integrals produce real numbers because all the imaginary parts are “odd functions” so their integrals are equal to zero (see the Supporting Information for a discussion of even and odd functions and their Fourier transforms). Crucially, all three pairwise correlations (ρ21, ρ31, ρ32) explicitly depend on the effective connectivity parameter a21, as illustrated in Figure 7 (for the analytic solutions, see Equation S3 in the Supporting Information).
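The route from effective to functional connectivity can be sketched numerically with the same forward-model expression as in Equation 22: build Gy(ω) from the three-region A matrix of Equation 26, integrate over frequencies (Equation 37), and normalize to a correlation matrix (Equation 27). To keep the numerical integral simple, the sketch below uses a symmetric finite frequency grid and a symmetrized 1/(1 + |ω|)² noise spectrum; these are assumptions made here, so the resulting numbers are illustrative only and will not match the paper's analytic values in Figure 7 or Equation 28.

# From effective connectivity to a correlation (functional connectivity) matrix.
import numpy as np

def correlation_from_effective_connectivity(a21, omegas=np.linspace(-30, 30, 4001)):
    A = np.array([[-0.5, 0.0, 0.0],
                  [a21, -0.5, 0.0],
                  [-0.5, 1.5, -0.5]])                     # Equation 26
    I = np.eye(3)
    cov = np.zeros((3, 3), dtype=complex)
    dw = omegas[1] - omegas[0]
    for w in omegas:
        G_noise = I / (1 + abs(w)) ** 2                   # symmetrized stand-in for Equation 24
        H = (6 * (1j * w + 1) ** 10 - 1) / (5 * (1j * w + 1) ** 16) * I   # Equation 25
        T = np.linalg.inv(1j * w * I - A)
        Gy = H @ T @ G_noise @ np.linalg.inv(-1j * w * I - A.T) @ H.conj().T + G_noise
        cov += Gy * dw                                    # numerical version of Equation 37
    cov = cov.real                                        # imaginary parts integrate to ~0
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                           # normalize to correlations

R0 = correlation_from_effective_connectivity(0.0)
R1 = correlation_from_effective_connectivity(1.0)
print("change in functional connectivity when a21 goes from 0 to 1:")
print(np.round(R1 - R0, 3))
# With this setup, all pairwise correlations typically change, not only the (2, 1) entry.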
This shows that a local variation in a single effective connectivity parameter in the A matrix
can have a global impact across all functional connectivity values in the network. Let’s exam-
ine each pair to unpack some of the many nuances involved. First, the symmetric nature of the
correlation matrix requires that ρkj = ρjk, even if akj ≠ ajk. This is a key difference between
functional and effective connectivity: Only the latter is directed and is able to differentiate
between two bilateral connections.

Figure 7. Simulated BOLD functional connectivity of the system in Figure 4, as a function of the
effective connectivity parameter a21. The three curves corresponding to the correlation values are
obtained analytically by integrating the cross-spectral density shown in Figure 5 over all frequen-
cies. Changing a single effective connectivity parameter, a21, has a global impact across all pairwise
functional connectivity values in the network. Both correlations ρ21 and ρ31 increase monotonically
with a21, but this is not the case for ρ32. Therefore, there is no one-to-one mapping between effec-
tive and functional connectivity that holds in general.


Both correlations ρ21 and ρ31 increase monotonically with a21, but this is not the case for
ρ32. Therefore, there is no one-to-one mapping between effective and functional connectivity
that holds in general (Park & Friston, 2013). This poses a challenge for spectral DCM because
it makes model fitting an ill-posed problem with multiple potential solutions. Technically, this
problem is mitigated by using the cross-spectral density that implicitly contains information
about functional connectivity over all lags (we will unpack this below) and by using priors
on the solutions implicit in the functional form of the DCM. Although model fitting remains
an ill-posed problem, these two additional constraints allow spectral DCM to find better solu-
tions, such that the model can reproduce a larger set of statistical relationships between the
observed time series. One way to appreciate the amount of additional information and constraints
provided by the cross-spectral density over the zero-lag correlation is to inspect Figure 5 again.
Zero-lag correlation measures only the area under the curves in the nine plots, regardless of their
detailed shapes. On the other hand, spectral DCM fits all values taken by the cross-spectral den-
sity curves at different frequencies. This requirement narrows the parameter space in a useful way
(note that the area under the curve is still reproduced as a consequence).
The sign of ρ31 (representing a positive or a negative correlation) either matches or contradicts
the sign of the underlying effective connectivity a31 (representing an excitatory or an inhibitory
connectivity), depending on how we set a21. There is a value of a21 such that ρ31 = 0, even though
the underlying effective connectivity a31 is negative (i.e., inhibitory). Again, this shows that each
pairwise functional connectivity value is a summary statistic (global property) of the system:
Given a pair of variables j and k, the correlation value ρkj depends not only on the corresponding
effective connectivity parameter akj but also, potentially, on all the entries of the A matrix.
While here we have discussed its dependence on only effective connectivity, functional
connectivity also depends on the parameters characterizing the endogenous fluctuations,
the observation function, and the observation noise. This is because all these parameters
appear in the forward model for the cross-spectral density derived in Equation 22 and, in turn,
determine the correlation matrix of the system. In fMRI, specific and reproducible spatial pat-
terns of functional connectivity could simply arise from specific and reproducible variations in
the hemodynamic response across brain regions—even in the absence of interregional effec-
tive connectivity (Rangaprakash, Wu, Marinazzo, Hu, & Deshpande, 2018). When comparing
groups, differences in BOLD functional connectivity may reflect differences in the vasculature
rather than in effective connectivity (or both), for example because of aging or a neurodegen-
erative or psychiatric disease (Tsvetanov, Henson, & Rowe, 2020). This important point is fur-
ther discussed in K. J. Friston (2011), where it is shown that the correlation values are also
influenced by different levels of observation noise:

One can see a change in correlation by simply changing the signal-to-noise ratio of the
data. This can be particularly important when comparing correlations between different
groups of subjects. For example, obsessive compulsive patients may have a heart rate
variability that differs from normal subjects. This may change the noise in observed
hemodynamic responses, even in the absence of neuronal differences or changes in
effective connectivity. (p. 22)

In fact, separating these confounding factors from the effective connectivity was one of the
main motivations for the development of DCM. This is not to say that DCM is without issues:
Fitting a large model with many parameters—using the limited amount of data available in
typical fMRI studies—is not guaranteed to produce optimal estimates. We’ll discuss this and
other limitations in the Assumptions and Limitations section.


Functional connectivity faces yet another challenge. Even though we observed that any variations in effective connectivity can propagate through the network and affect the correlation among several brain areas, one could hope that the pair of variables showing the largest functional connectivity change would coincide with the pair evincing the largest effective connectivity change. Alas, this is also not guaranteed. When a21 increases from zero to one, R shows the following changes:

\Delta R = \begin{bmatrix} 0 & 0.513367 & 0.379316 \\ 0.513367 & 0 & 0.782089 \\ 0.379316 & 0.782089 & 0 \end{bmatrix}.    (28)

The largest functional connectivity change is observed in ρ32, not in ρ21 as we might have
expected. The fact that the variable pairs showing the largest changes in functional connec-
tivity do not necessarily coincide with the pairs with the largest changes in effective connec-
tivity may hinder the use of functional connectivity as a quick and easy way for selecting
regions of interest in DCM studies.
On the other hand, functional connectivity has proven valuable for fingerprinting, that is,
identifying an individual based on their brain activity (Finn et al., 2015). It is also useful in
studies aiming to differentiate between two groups or conditions (by detecting statistically sig-
nificant changes in correlation between observed time series), rather than in identifying which
effective connections between brain regions underlie that change. Moreover, being a global
property of the system, each pairwise functional connectivity value naturally captures higher
order interactions, which is a topic of growing interest in complex systems, network science,
and neuroimaging (Benson, Gleich, & Leskovec, 2016; Rosas et al., 2022).
So far, we have examined only the zero-lag correlation matrix because it is widely used to
quantify the functional connectivity in neuroimaging. However, spectral DCM doesn’t explain
just the zero-lag correlation but also the cross-correlations between all variables at all time
lags. To understand this point, in the next section, we will invoke the elegant Wiener-Khinchin
theorem, which links the cross-covariance function to the cross-spectral density.

Autocovariance and Cross-Covariance Functions

Let’s briefly revisit the role of the self-connections in the presence of the endogenous fluctu-
ations. As discussed in the Effective Connectivity section, a self-connection determines the rate
of decay of a variable. It can also be understood as quantifying the memory of a variable, that
is, whether inputs have a short-lived or long-lasting impact on its activity. Large negative self-
connections reflect short memory: The variable “forgets” its past quickly and responds
promptly to any new inputs. On the contrary, small negative values reflect long memory:
The neuronal variable integrates and smooths out the endogenous fluctuations and other
inputs, resulting in slower oscillations and lower frequencies.
The concept of a system having memory can be quantified via the autocovariance function. For a deterministic signal z(t), there is a simple intuition: The autocovariance function at a time lag Δt measures the similarity (i.e., sample covariance) between the time series z(t) and a shifted version of itself, that is, z(t + Δt). The agreement is perfect when there is no shift (Δt = 0) and it typically decreases with longer time lags, unless the signal is constant or periodic. However, spectral DCM is concerned with stochastic (nondeterministic) processes, as we discussed in the Power Spectral Density and Cross-Spectral Density section and illustrated in Figure S1 in the Supporting Information. Let’s consider the single stochastic neuronal variable x1. At any given time point, x1(t) is not a number but a random variable. If, after a time interval Δt, the random variable x1(t + Δt) is still positively correlated with its previous state x1(t), the autocovariance between the two time points would be positive. If, as the time interval further increases, x1 forgets its past state and becomes independent of it, the autocovariance would become zero at that point (for BOLD signals, time intervals or time lags are always multiples of the repetition time). In summary, the autocovariance function measures the covariance between the states of the same stochastic process at two different points in time. For the stationary processes considered here, the autocovariance only depends on the time interval Δt:

\sigma_{11}(\Delta t) = \mathrm{cov}\big[x_1(t),\, x_1(t + \Delta t)\big].    (29)

Dividing the autocovariance function by σ11(0) would produce the commonly used autocorrelation function, whose values are normalized to the [−1, 1] interval.

We can compute the autocovariance explicitly in the simple scenario of Equation 10, where x1 is driven by endogenous fluctuations only, that is, it doesn’t receive any other inputs. For simplicity, assume that the endogenous fluctuations were modeled as a white noise process with αv1 = 1 and βv1 = 0. The equation would then describe an Ornstein-Uhlenbeck process that has the following autocovariance function (Vatiwutipong & Phewchean, 2019):

\sigma_{11}(\Delta t) = \frac{e^{a_{11} |\Delta t|}}{-2 a_{11}}.    (30)

This confirms the intuition above: Large negative values of a11 correspond to short memory; in this case, the decay is exponential.

There is another reason why the autocovariance function is relevant for spectral DCM and, more generally, for DCM applications to electroencephalography (EEG) and magnetoencephalography (MEG). These models and modalities are concerned with the power spectral density of signals, and the Wiener-Khinchin theorem proves that the power spectral density of a stationary process is the Fourier transform of its autocovariance function. In the case of x1, we get

G_{x_1}(\omega) = \mathcal{F}\{\sigma_{11}(\Delta t)\} = \mathcal{F}\!\left\{ \frac{e^{a_{11} |\Delta t|}}{-2 a_{11}} \right\} = \frac{1}{a_{11}^2 + \omega^2}.    (31)

This offers an alternative way to compute the power spectral density compared with the definition given in the Power Spectral Density and Cross-Spectral Density section. The two equivalent representations are often portrayed with arrow diagrams in spectral DCM papers (K. J. Friston et al., 2014; Razi & Friston, 2016). Here, a similar diagram is provided in Figure 8 using the Ornstein-Uhlenbeck process as an example.

Figure 8. The two equivalent representations of the power spectral density. Starting from the Ornstein-Uhlenbeck process as an example (top left), the power spectral density can be computed either as the variance of the Fourier transform (top right, then bottom right), or as the Fourier transform of the autocovariance function (bottom left, then bottom right). The former uses the definition of the power spectral density; the latter uses the Wiener-Khinchin theorem. In the Ornstein-Uhlenbeck process, the endogenous fluctuations (v1(t)) are modeled as a white noise process. Its Fourier transform, denoted as V1(ω), is a normal random variable for each frequency ω.
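Equation 31 can also be checked directly under the paper's Fourier convention (Equation 7): numerically integrating the autocovariance of Equation 30 against e^{−iωΔt} should return 1/(a11² + ω²). The discretization below is an arbitrary choice made here for illustration.

# Numerical check of Equation 31: Fourier transform of the Ornstein-Uhlenbeck
# autocovariance e^(a11 |dt|) / (-2 a11) versus the closed form 1 / (a11^2 + w^2).
import numpy as np

a11 = -0.5                                   # negative self-connection (Hz)
dt = 0.001
lags = np.arange(-60, 60, dt)                # time lags; the integrand is negligible beyond this range
autocov = np.exp(a11 * np.abs(lags)) / (-2 * a11)   # Equation 30

for w in [0.0, 0.5, 1.0, 2.0]:
    ft = np.sum(autocov * np.exp(-1j * w * lags)) * dt   # Riemann sum for Equation 7
    print(f"w = {w:.1f}:  numerical {ft.real:.4f}  vs  1/(a11^2 + w^2) = {1/(a11**2 + w**2):.4f}")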
In the presence of multiple state variables, the autocovariance function is replaced by the more general cross-covariance function

\Sigma_x(\Delta t) = \mathrm{cov}\big(x(t),\, x(t + \Delta t)\big).    (32)

For any time lag Δt, the cross-covariance function produces an N × N matrix with entries

\big[\Sigma_x(\Delta t)\big]_{jk} = \sigma_{jk}(\Delta t) = \mathrm{cov}\big(x_j(t),\, x_k(t + \Delta t)\big).    (33)

Each diagonal element in this matrix is the autocovariance function of a single neuronal variable, introduced in Equation 29. The off-diagonal elements are the cross-covariance functions between different state variables. For deterministic processes, the visual intuition is analogous to the autocovariance case: For each pair of variables xj and xk, the cross-covariance function σjk(Δt) measures the similarity between the time series xj(t) and a shifted version of xk by a time lag Δt. The analogy with memory can then be used to extend the intuition to stochastic processes. When there is no time lag (Δt = 0), the cross-covariance function produces the covariance matrix Σx(0), which is a symmetric matrix because σjk(0) = σkj(0). After normalization, this yields the correlation matrix that is used to quantify the functional connectivity in neuroimaging.

Cross-Spectral Density and Functional Connectivity

We can again invoke the Wiener-Khinchin theorem and obtain the cross-spectral density as
the Fourier transform of the cross-covariance function:
X
Gx ð ω Þ ¼ F f x ðΔt Þg: (34)

We previously noted that the off-diagonal elements of the cross-spectral density matrix are
complex numbers and that their definition provided in Equation 9 doesn’t lend itself to an
intuitive understanding. However—now armed with the Wiener-Khinchin theorem—we gain
another perspective. Each off-diagonal element is equivalent to the Fourier transform of the
cross-covariance function between two variables. By modeling the cross-spectral density, spec-
tral DCM doesn’t capture just the zero-lag correlations but also the cross-correlations between
all variables at all time lags. This is true for both the hidden neuronal variables and the observed
(e.g., BOLD) variables. The equivalent of Equation 34 for the observed variables is
X
Gy ðω Þ ¼ F y ðΔt Þ : (35)

We opened the Effective and Functional Connectivity section by stating that the functional
connectivity matrix can be obtained by integrating the cross-spectral density over all frequen-
cies. To understand why, we need to invert Equation 35 using the (inverse) Fourier transform,
Z ∞
X
−1
y ðt Þ ¼ F G y ðω Þ ¼ Gy ðω Þe iωt dω; (36)
−∞

which retrieves the cross-covariance function from the cross-spectral density Gx(ω). When
t = 0, we have the desired result:
Z ∞
X
−1
y ð0Þ ¼ F Gy ðω Þ ¼ Gy ðω Þdω: (37)
−∞

The correlation matrix R is finally obtained by normalizing the covariance matrix. This
concludes our treatment of the cross-spectral density. We hope that the second perspective
provided in this section will help the reader form a fuller picture of this central concept for
spectral DCM.
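
As a numerical companion to Equations 34–37, the sketch below computes the cross-spectral density as the discrete Fourier transform of an empirical cross-covariance function and verifies that averaging it over frequencies returns the zero-lag covariance, which is then normalized to the correlation matrix. This is a didactic check under simplifying assumptions (discrete time, a finite number of lags, illustrative simulated data), not part of the spectral DCM estimation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative bivariate time series (toy process, not the DCM generative model).
T, N = 20000, 2
x = np.zeros((T, N))
for t in range(1, T):
    x[t, 0] = 0.9 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.8 * x[t - 1, 1] + 0.3 * x[t - 1, 0] + rng.standard_normal()
xc = x - x.mean(axis=0)

# Empirical cross-covariance for lags -L..L (Equation 33).
L = 200
lags = np.arange(-L, L + 1)
ccov = np.zeros((len(lags), N, N))
for i, lag in enumerate(lags):
    if lag >= 0:
        a, b = xc[:T - lag], xc[lag:]
    else:
        a, b = xc[-lag:], xc[:T + lag]
    ccov[i] = a.T @ b / T

# Cross-spectral density as the Fourier transform of the cross-covariance
# (Wiener-Khinchin theorem, Equations 34-35). ifftshift places lag 0 first,
# matching the ordering expected by the discrete Fourier transform.
csd = np.fft.fft(np.fft.ifftshift(ccov, axes=0), axis=0)

# Averaging the cross-spectral density over all frequencies (the discrete
# analogue of the integral in Equation 37) retrieves the zero-lag covariance.
cov_0 = np.real(csd.mean(axis=0))
print(np.allclose(cov_0, ccov[L]))   # ccov[L] is the lag-0 entry

# Normalization yields the correlation (functional connectivity) matrix.
d = np.sqrt(np.diag(cov_0))
print(cov_0 / np.outer(d, d))
```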

ASSUMPTIONS AND LIMITATIONS


State-Space Modeling

The foundational hypothesis of the DCM framework is that the system of interest can be mod-
eled using a state-space approach. This formulation separates the equations describing the
temporal evolution of unobserved variables (e.g., the neuronal activity) from those describing
the observed variables (e.g., the BOLD). Such a separation is important because some of the
model parameters have a direct interpretation in terms of effective connectivity between unob-
served neuronal populations rather than statistical dependency between observed variables. A
subtle and important consequence of the distinction between functional and effective connectivity
is that one can only estimate recurrent connectivity or self-inhibition using a state-space model. This is
because the correlation of a time series with itself is always 1 (and the variance is always pos-
itive). Assessing self-connectivity in terms of excitability or disinhibition of a node can be
empirically important (e.g., in estimating changes in excitation-inhibition balance or
condition-specific changes in the “gain” of a particular source or region).
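
The sketch below illustrates the state-space separation in its simplest form: hidden states evolving under a connectivity matrix with self-connections on the diagonal, and observations obtained by smoothing the hidden states and adding noise. All values, the Euler integration, and the exponential smoothing kernel are illustrative assumptions; the kernel is only a stand-in for a hemodynamic response, not the DCM observation model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden neuronal states x evolve under an effective connectivity matrix A
# (self-connections on the diagonal); illustrative values, Euler integration.
dt, T = 0.1, 5000
A = np.array([[-0.5, 0.0],
              [0.3, -0.5]])
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = x[t - 1] + dt * (A @ x[t - 1]) + np.sqrt(dt) * rng.standard_normal(2)

# Observation equation: a smoothing kernel plus measurement noise (a crude
# stand-in for the hemodynamic response, not the DCM observation model).
kernel = np.exp(-np.arange(30) * dt)
kernel /= kernel.sum()
y = np.column_stack([np.convolve(x[:, j], kernel, mode="same") for j in range(2)])
y += 0.1 * rng.standard_normal(y.shape)

# Only y is observed empirically. The self-connections A[0, 0] and A[1, 1]
# are properties of the hidden dynamics and leave no direct trace in corr(y),
# whose diagonal is always 1.
print(np.corrcoef(y.T))
```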

Continuous-Time Formulation

Like most DCM approaches, spectral DCM treats time as a continuous quantity. Representing
time as a sequence of discrete steps may seem more natural, especially for fMRI, where the
observations are recorded at evenly spaced time intervals with a relatively low sampling rate.
However, DCM also models the neuronal activity, which unfolds at a much faster timescale
and in an asynchronous manner. These two features can be naturally modeled using differen-
tial equations in continuous time. The discrete formulation usually converges to the continu-
ous one with faster acquisition times, as is the case for EEG and MEG.
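
To connect the continuous and discrete views, recall that sampling a linear system dx/dt = Ax at interval TR yields the exact discrete transition matrix exp(A·TR). The sketch below, with an arbitrary illustrative A, compares this with the first-order (Euler) approximation at a slow, fMRI-like TR and at a fast sampling interval, showing why the two formulations converge for faster acquisition.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary stable connectivity matrix (illustrative values only).
A = np.array([[-0.5, 0.2],
              [0.3, -0.4]])

for TR in (2.0, 0.1):                 # slow (fMRI-like) vs fast sampling interval (s)
    Phi = expm(A * TR)                # exact discrete-time transition matrix
    euler = np.eye(2) + A * TR        # first-order (discrete) approximation
    print(f"TR = {TR}: max discrepancy = {np.max(np.abs(Phi - euler)):.4f}")
# The discrepancy shrinks with faster sampling, which is why the discrete and
# continuous formulations converge for high acquisition rates (e.g., EEG/MEG).
```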

Separation of Timescales and Macroscopic Modeling

Spectral DCM assumes that a single macroscopic neuronal variable can capture the essential
dynamical properties of a population of neurons. This is not just a pragmatic way to avoid
billions of equations representing individual neurons. It is an approach based on the separa-
tion of timescales that has a long history in dynamical systems theory (Carr, 1981; Haken,
1983). The assumption is that the neuronal activity can be separated into slow and fast modes:
The fast modes decay quickly so that the long-term behavior of the system can be described by
a few slow modes, or even a single one. Mathematically, the assumption (often satisfied in real
systems) is that only a small number of eigenvalues of the Jacobian are near zero, while the rest
are large and negative (K. J. Friston et al., 2021; K. J. Friston, Li, Daunizeau, & Stephan, 2011).
Macroscopic modeling also relies on the mean-field assumption that the dynamics of one
region are determined by the mean activity in another (Deco, Jirsa, Robinson, Breakspear,
& Friston, 2008). Since the mean activity is dominated by the slow modes, it is possible to
build a compact macroscopic model where only the slow modes are communicated among
brain regions, whereas the fast endogenous fluctuations affect only the local activity. This is
precisely the neuronal model in Equation 1. It is important to note that, despite being grounded
in dynamical and complex systems theory, this model is an abstraction. Biological details are
necessarily omitted to enable the analytic treatment and faster numerical computations. A
detailed critical review of the biophysical and statistical foundations of DCM is provided in
Daunizeau, David, and Stephan (2011).
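
A minimal numerical illustration of the separation of timescales, under purely hypothetical values: a Jacobian is constructed with one eigenvalue near zero and the others large and negative, and the characteristic decay time of each mode is read off from the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Jacobian with one slow mode and two fast modes (toy values, in 1/s).
J_diag = np.diag([-0.1, -5.0, -8.0])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random change of basis
J = Q @ J_diag @ Q.T                                # same eigenvalues, less obvious structure

eigvals = np.linalg.eigvals(J)
time_constants = -1.0 / np.real(eigvals)            # decay time of each mode (s)
print(np.sort(time_constants)[::-1])                # one long timescale, two short ones
# After the fast modes have decayed (a few hundred milliseconds here), the
# long-term behaviour is effectively captured by the single slow mode.
```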


Stationarity

Among the DCM variants, spectral DCM has the most direct conceptual and analytical links to
functional connectivity, which we have examined in the Effective and Functional Connectivity
section. Both methods can be read as assuming that the observed processes are weakly sta-
tionary, that is, that their covariance remains unchanged over the length of the experiment
(Liégeois, Laumann, Snyder, Zhou, & Yeo, 2017). In the case of functional connectivity, this
assumption arises because the covariance is used as a proxy for connectivity, after being nor-
malized to obtain the correlation matrix. Therefore, assuming weak stationarity is equivalent to
assuming that the functional connectivity remains unchanged. Similarly, in spectral DCM, the
stationarity assumption allows one to interpret the effective connectivity as remaining
unchanged over the length of the time series (K. J. Friston et al., 2014; Moran et al., 2009).
This is because the effective connectivity is inferred from the observed cross-spectral density,
that is, the Fourier transform of the cross-covariance function (see the Cross-Spectral Density
and Functional Connectivity section), which remains unchanged under stationarity assump-
tions. However, in contrast to functional connectivity, spectral DCM only treats the cross-
covariance of the observed time series as a means to an end, where the end is to infer the
effective connectivity between neuronal variables (and various other model parameters). In
other words, spectral DCM looks “under the hood” of functional connectivity and beyond
the observed variables.
The stationarity assumption can certainly be challenged, and it is generally untenable in
task experiments; nonetheless, it is widely adopted in resting-state studies. Stationarity is typ-
ically assumed not just in functional connectivity and spectral DCM, but also in Granger cau-
sality analysis (Granger, 1969; Seth, Chorley, & Barnett, 2013), transfer entropy (Bossomaier,
Barnett, Harré, & Lizier, 2016), and autoregressive modeling (Liégeois et al., 2017). That said,
Granger causality and information-theoretic methods have been adapted for nonstationary
processes (Dhamala, Rangarajan, & Ding, 2008; Gómez-Herrero et al., 2015; Lizier,
Prokopenko, & Zomaya, 2008; Paluš, 2018; Wollstadt, Martínez-Zarzuela, Vicente, Díaz-
Pernas, & Wibral, 2014), and time-varying approaches to functional connectivity analysis have
been rapidly gaining popularity (Lurie et al., 2020; Novelli & Razi, 2022). A time-varying
extension of spectral DCM has also been developed (Park, Friston, Pae, Park, & Razi,
2018). All of these methods relax the stationarity assumption and allow the statistical proper-
ties of the system to change over the course of the experiment. Practically, in DCM, one
appeals to something called an adiabatic approximation: that effective connectivity is constant
over a small timescale but can change at longer timescales. This means that one can apply
spectral DCM to short segments of data and then examine (or model) slow fluctuations in
effective connectivity (Jafarian, Zeidman, Wykes, Walker, & Friston, 2021; Rosch et al.,
2019; Zarghami & Friston, 2020).
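
A crude, purely illustrative way to probe the weak-stationarity assumption is to compare zero-lag correlations across segments of the recording, in the spirit of the adiabatic approximation described above. The data-generating process and the number of segments below are arbitrary assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative stationary bivariate process (toy values).
T, N = 12000, 2
x = np.zeros((T, N))
for t in range(1, T):
    x[t, 0] = 0.9 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.8 * x[t - 1, 1] + 0.3 * x[t - 1, 0] + rng.standard_normal()

# Compare the correlation between the two variables across non-overlapping segments.
for k, segment in enumerate(np.array_split(x, 4)):
    print(f"segment {k}: r = {np.corrcoef(segment.T)[0, 1]:.3f}")
# Under stationarity the segment-wise estimates should agree up to sampling error;
# systematic drifts would instead motivate time-varying approaches.
```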

Linearity

The second assumption that spectral DCM (partially) shares with functional connectivity is
linearity, although nonlinear extensions of both methods exist (Kraskov, Stögbauer, &
Grassberger, 2004; Stephan et al., 2008). In DCM for fMRI, the use of a linear random differ-
ential equation to model the neuronal activity is motivated by the separation of timescales,
whereby the neuronal variables represent the slow modes of the system, which are assumed
to be linearly coupled, while the endogenous random fluctuations represent the fast modes
(K. J. Friston et al., 2011). Spectral DCM also extends the linearity assumption to the observa-
tion function, which can be a nonlinear function of time and frequency (as in Equation 25) but is
linearly convolved with the neuronal activity (Equation 13). In the Endogenous Fluctuations
section, we have observed that deterministic linear models have a limited scope since they can
only describe a system that converges to equilibrium or generates a sequence of identical oscil-
lations. Adding stochastic terms to the linear differential equations allows for a richer repertoire
of behaviors.
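
The sketch below illustrates this last point with arbitrary values: a stable deterministic linear system simply decays to equilibrium, whereas the same system driven by stochastic fluctuations sustains ongoing variability.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stable linear dynamics (illustrative values); Euler integration.
A = np.array([[-0.5, 0.2],
              [0.3, -0.4]])
dt, T = 0.01, 20000
x_det = np.zeros((T, 2)); x_det[0] = [1.0, -1.0]   # deterministic
x_sto = np.zeros((T, 2)); x_sto[0] = [1.0, -1.0]   # with stochastic fluctuations
for t in range(1, T):
    x_det[t] = x_det[t - 1] + dt * (A @ x_det[t - 1])
    x_sto[t] = x_sto[t - 1] + dt * (A @ x_sto[t - 1]) + np.sqrt(dt) * rng.standard_normal(2)

print(np.abs(x_det[-1]).max())      # essentially zero: converged to equilibrium
print(x_sto[T // 2:].std(axis=0))   # sustained fluctuations around equilibrium
```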

Stochastic Fluctuations

The addition of a stochastic term to a dynamical system is traditionally used to model noise,
often assumed to be white and temporally uncorrelated. The underlying assumption is that the
noise is due to physical processes operating at a much faster timescale than the state variables,
such as microscopic thermal fluctuations. Spectral DCM relaxes this assumption and allows
the endogenous fluctuations to be temporally correlated, with a spectrum following a power-
law decay in the frequency domain (in the Endogenous Fluctuations section, we saw how this
form includes white noise as a special case). This is easily motivated by noting that the endog-
enous fluctuations are themselves generated by dynamical processes within the source or
region of interest. The ensuing temporal autocorrelation makes the endogenous fluctuations
differentiable and smooth, in line with their characterization as mixtures of fast modes of a
dynamical system (K. J. Friston et al., 2011).
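
The following sketch generates noise with a power-law spectral density by shaping random phases in the frequency domain; setting the exponent to zero recovers white noise, while a positive exponent yields the smoother, temporally correlated fluctuations described above. The spectral-shaping recipe and the exponent values are illustrative, not the parameterization used in spectral DCM.

```python
import numpy as np

rng = np.random.default_rng(6)

def power_law_noise(T, alpha, dt=1.0):
    """Noise with spectral density proportional to 1/f**alpha (alpha = 0: white)."""
    freqs = np.fft.rfftfreq(T, d=dt)
    amplitude = np.ones_like(freqs)
    amplitude[1:] = freqs[1:] ** (-alpha / 2.0)   # shape the amplitude spectrum
    amplitude[0] = 0.0                            # drop the DC component
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    return np.fft.irfft(amplitude * phases, n=T)

white = power_law_noise(4096, alpha=0.0)
pink = power_law_noise(4096, alpha=1.0)

# Lag-1 autocorrelation: near zero for white noise, clearly positive for 1/f noise,
# reflecting the temporal smoothness of the correlated fluctuations.
print(np.corrcoef(white[:-1], white[1:])[0, 1])
print(np.corrcoef(pink[:-1], pink[1:])[0, 1])
```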

Gaussianity

In line with most DCM approaches, spectral DCM assumes a Gaussian distribution for the
prior over model parameters (the nonnegative parameters are transformed using the natural
logarithm and are assumed to follow a log-normal distribution). This enables a fast Bayesian
inversion scheme called variational Laplace (K. Friston et al., 2007; Zeidman et al., 2023). As
for most of the assumptions listed in this section, the Gaussian hypothesis can also be relaxed
and other (albeit more computationally intensive) inversion schemes can be used instead, such
as Markov chain Monte Carlo methods (Aponte et al., 2022; K. Friston et al., 2007; Xie, Zhang,
& Zhao, 2023).
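
As a small illustration of the log-transform mentioned above (with arbitrary prior values): placing a Gaussian prior on the logarithm of a nonnegative parameter is equivalent to a log-normal prior on the parameter itself, which guarantees positivity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Gaussian prior in log-space (illustrative mean and standard deviation).
mu, sigma = np.log(0.5), 0.3
theta = rng.normal(mu, sigma, size=100_000)   # samples of theta = log(k)
k = np.exp(theta)                             # implied log-normal samples of k

print(k.min() > 0)                            # the parameter is always positive
print(np.median(k), np.exp(mu))               # the median equals exp(mu)
```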

Many-to-One Mapping

There is no one-to-one mapping between effective and functional connectivity (or cross-
spectral density) that holds in general. This is a challenge for spectral DCM because it makes
model inversion an ill-posed problem with multiple potential solutions, in the absence of any
constraints on the way data are generated. As with all ill-posed problems, this issue is
addressed by placing prior constraints on the explanations in the form of a model and prior
densities over model parameters. When one does not know which priors to use, a weighted
average of plausible priors is often performed in DCM analysis using Bayesian model averag-
ing (Hoeting, Madigan, Raftery, & Volinsky, 1999), where each set of priors corresponds to a
separate model. A related challenge is that one cannot always use functional connectivity to
identify the regions of interest to study using DCM. As we saw in the Effective and Functional
Connectivity section, not even the regions showing the largest differences in correlation are
guaranteed to coincide with the regions with a change in effective connectivity.

Computational Complexity

Despite the simplifying assumptions mentioned above, the computational complexity of spec-
tral DCM limits the number of brain regions that can be studied in reasonable time. Selecting
the regions of interest requires more upfront work than in functional connectivity analysis,
which can be quickly performed across the whole brain. Given the large size of the parameter
space, the specification of the model has been noted as a conceptual issue for DCM in the past
(K. J. Friston, Moran, & Seth, 2013). That said, a more recent theoretical advance now enables
the exploration of a large model space using Bayesian model reduction (Seghier & Friston,
2013). Alternatively, one can introduce further assumptions to winnow the parameter space.
Using functional connectivity to place prior constraints on the eigenvectors of the effective
connectivity matrix enables spectral DCM analyses with dozens of brain regions (Razi
et al., 2017). In fMRI, ignoring the spatial variability of the hemodynamics and removing
the separation between hidden and observed variables leads to the regression DCM scheme,
which can analyze hundreds of regions in minutes (Frässle et al., 2021). However, this method
forgoes the state-space formulation and can be understood as a Bayesian multivariate regres-
sion in the frequency domain.

ACKNOWLEDGMENTS

Adeel Razi is affiliated with the Wellcome Centre for Human Neuroimaging supported by core
funding from Wellcome (203147/Z/16/Z). Adeel Razi is a CIFAR Azrieli Global Scholar in the
Brain, Mind & Consciousness Program. Karl Friston is funded by the Wellcome Centre for
Human Neuroimaging (Ref: 205103/Z/16/Z) and a Canada-UK Artificial Intelligence Initiative
(Ref: ES/T01279X/1).

SUPPORTING INFORMATION

Supporting information for this article is available at https://doi.org/10.1162/netn_a_00348.

AUTHOR CONTRIBUTIONS

Leonardo Novelli: Conceptualization; Formal analysis; Investigation; Methodology; Software; Visualization; Writing – original draft; Writing – review & editing. Karl Friston: Conceptualization; Methodology; Writing – review & editing. Adeel Razi: Conceptualization; Funding acquisition; Methodology; Supervision; Writing – review & editing.

FUNDING INFORMATION

Leonardo Novelli, Australian Research Council, Award ID: DP200100757. Adeel Razi,
Australian Research Council, Award ID: DE170100128. Adeel Razi, Australian Research
Council (https://dx.doi.org/10.13039/501100000923), Award ID: DP200100757. Adeel Razi,
Australian National Health and Medical Research, Award ID: 1194910. Karl Friston,
Wellcome Centre for Human Neuroimaging, Award ID: 205103/Z/16/Z. Karl Friston,
Canada-UK Artificial Intelligence Initiative, Award ID: ES/T01279X/1.

REFERENCES

Aponte, E. A., Yao, Y., Raman, S., Frässle, S., Heinzle, J., Penny, W. D., & Stephan, K. E. (2022). An introduction to thermodynamic integration and application to dynamic causal models. Cognitive Neurodynamics, 16(1), 1–15. https://doi.org/10.1007/s11571-021-09696-9, PubMed: 35116083
Ashburner, J., Barnes, G., Chen, C.-C., Daunizeau, J., Flandin, G., Friston, K., … Zeidman, P. (2020). SPM12 manual.
Bach, D. R., Daunizeau, J., Friston, K. J., & Dolan, R. J. (2010). Dynamic causal modelling of anticipatory skin conductance responses. Biological Psychology, 85(1), 163–170. https://doi.org/10.1016/j.biopsycho.2010.06.007, PubMed: 20599582
Benson, A. R., Gleich, D. F., & Leskovec, J. (2016). Higher-order organization of complex networks. Science, 353(6295), 163–166. https://doi.org/10.1126/science.aad9029, PubMed: 27387949
Bossomaier, T., Barnett, L., Harré, M., & Lizier, J. T. (2016). An introduction to transfer entropy. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-43222-9


Carr, J. (1981). Applications of centre manifold theory. New York: Springer US. https://doi.org/10.1007/978-1-4612-5929-9
Daunizeau, J., David, O., & Stephan, K. E. (2011). Dynamic causal modelling: A critical review of the biophysical and statistical foundations. NeuroImage, 58(2), 312–322. https://doi.org/10.1016/j.neuroimage.2009.11.062, PubMed: 19961941
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., & Friston, K. (2008). The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Computational Biology, 4(8), e1000092. https://doi.org/10.1371/journal.pcbi.1000092, PubMed: 18769680
Dhamala, M., Rangarajan, G., & Ding, M. (2008). Estimating Granger causality from Fourier and wavelet transforms of time series data. Physical Review Letters, 100(1), 018701. https://doi.org/10.1103/PhysRevLett.100.018701, PubMed: 18232831
Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (2nd ed.). Oxford/New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199641178.001.0001
Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., … Constable, R. T. (2015). Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nature Neuroscience, 18(11), 1664–1671. https://doi.org/10.1038/nn.4135, PubMed: 26457551
Frässle, S., Harrison, S. J., Heinzle, J., Clementz, B. A., Tamminga, C. A., Sweeney, J. A., … Stephan, K. E. (2021). Regression dynamic causal modeling for resting-state fMRI. Human Brain Mapping, 42(7), 2159–2180. https://doi.org/10.1002/hbm.25357, PubMed: 33539625
Friston, K., Harrison, L., & Penny, W. (2003). Dynamic causal modelling. NeuroImage, 19(4), 1273–1302. https://doi.org/10.1016/S1053-8119(03)00202-7, PubMed: 12948688
Friston, K., Mattout, J., Trujillo-Barreto, N., Ashburner, J., & Penny, W. (2007). Variational free energy and the Laplace approximation. NeuroImage, 34(1), 220–234. https://doi.org/10.1016/j.neuroimage.2006.08.035, PubMed: 17055746
Friston, K., Preller, K. H., Mathys, C., Cagnan, H., Heinzle, J., Razi, A., & Zeidman, P. (2019). Dynamic causal modelling revisited. NeuroImage, 199, 730–744. https://doi.org/10.1016/j.neuroimage.2017.02.045, PubMed: 28219774
Friston, K. J. (2011). Functional and effective connectivity: A review. Brain Connectivity, 1(1), 13–36. https://doi.org/10.1089/brain.2011.0008, PubMed: 22432952
Friston, K. J., Fagerholm, E. D., Zarghami, T. S., Parr, T., Hipólito, I., Magrou, L., & Razi, A. (2021). Parcels and particles: Markov blankets in the brain. Network Neuroscience, 5(1), 211–251. https://doi.org/10.1162/netn_a_00175, PubMed: 33688613
Friston, K. J., Flandin, G., & Razi, A. (2022). Dynamic causal modelling of COVID-19 and its mitigations. Scientific Reports, 12(1), 12419. https://doi.org/10.1038/s41598-022-16799-8, PubMed: 35859054
Friston, K. J., Kahan, J., Biswal, B., & Razi, A. (2014). A DCM for resting state fMRI. NeuroImage, 94(100), 396–407. https://doi.org/10.1016/j.neuroimage.2013.12.009, PubMed: 24345387
Friston, K. J., Li, B., Daunizeau, J., & Stephan, K. E. (2011). Network discovery with DCM. NeuroImage, 56(3), 1202–1221. https://doi.org/10.1016/j.neuroimage.2010.12.039, PubMed: 21182971
Friston, K. J., Moran, R., & Seth, A. K. (2013). Analysing connectivity with Granger causality and dynamic causal modelling. Current Opinion in Neurobiology, 23(2), 172–178. https://doi.org/10.1016/j.conb.2012.11.010, PubMed: 23265964
Gómez-Herrero, G., Wu, W., Rutanen, K., Soriano, M., Pipa, G., & Vicente, R. (2015). Assessing coupling dynamics from an ensemble of time series. Entropy, 17(4), 1958–1970. https://doi.org/10.3390/e17041958
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3), 424–438. https://doi.org/10.2307/1912791
Haken, H. (1983). Nonlinear equations. The slaving principle. In Advanced synergetics. https://doi.org/10.1007/978-3-642-45553-7_7
Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. (1999). Bayesian model averaging: A tutorial (with comments by M. Clyde, David Draper, and E. I. George, and a rejoinder by the authors). Statistical Science, 14(4), 382–417. https://doi.org/10.1214/ss/1009212519
Izhikevich, E. M. (2006). Two-dimensional systems. In Dynamical systems in neuroscience: The geometry of excitability and bursting. MIT Press. https://doi.org/10.7551/mitpress/2526.003.0005
Jafarian, A., Zeidman, P., Wykes, R. C., Walker, M., & Friston, K. J. (2021). Adiabatic dynamic causal modelling. NeuroImage, 238, 118243. https://doi.org/10.1016/j.neuroimage.2021.118243, PubMed: 34116151
Jung, K., Kang, J., Chung, S., & Park, H.-J. (2019). Dynamic causal modeling for calcium imaging: Exploration of differential effective connectivity for sensory processing in a barrel cortical column. NeuroImage, 201, 116008. https://doi.org/10.1016/j.neuroimage.2019.116008, PubMed: 31301360
Kiebel, S. J., Garrido, M. I., Moran, R. J., & Friston, K. J. (2008). Dynamic causal modelling for EEG and MEG. Cognitive Neurodynamics, 2(2), 121–136. https://doi.org/10.1007/s11571-008-9038-0, PubMed: 19003479
Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. Physical Review E, 69(6), 066138. https://doi.org/10.1103/PhysRevE.69.066138, PubMed: 15244698
Li, B., Daunizeau, J., Stephan, K. E., Penny, W., Hu, D., & Friston, K. (2011). Generalised filtering and stochastic DCM for fMRI. NeuroImage, 58(2), 442–457. https://doi.org/10.1016/j.neuroimage.2011.01.085, PubMed: 21310247
Liégeois, R., Laumann, T. O., Snyder, A. Z., Zhou, J., & Yeo, B. T. (2017). Interpreting temporal fluctuations in resting-state functional connectivity MRI. NeuroImage, 163, 437–455. https://doi.org/10.1016/j.neuroimage.2017.09.012, PubMed: 28916180
Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2008). Local information transfer as a spatiotemporal filter for complex systems. Physical Review E, 77(2), 026110. https://doi.org/10.1103/PhysRevE.77.026110, PubMed: 18352093
Lurie, D. J., Kessler, D., Bassett, D. S., Betzel, R. F., Breakspear, M., Kheilholz, S., … Calhoun, V. D. (2020). Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience, 4(1), 30–69. https://doi.org/10.1162/netn_a_00116, PubMed: 32043043


Miller, S. L., & Childers, D. (2012). Power spectral density. In S. L. Miller & D. Childers (Eds.), Probability and random processes (2nd ed., pp. 429–471). Boston: Academic Press. https://doi.org/10.1016/B978-0-12-386981-4.50013-8
Moran, R. J., Stephan, K. E., Seidenbecher, T., Pape, H. C., Dolan, R. J., & Friston, K. J. (2009). Dynamic causal models of steady-state responses. NeuroImage, 44(3), 796–811. https://doi.org/10.1016/j.neuroimage.2008.09.048, PubMed: 19000769
Novelli, L., & Razi, A. (2022). A mathematical perspective on edge-centric brain functional connectivity. Nature Communications, 13(1), 2693. https://doi.org/10.1038/s41467-022-29775-7, PubMed: 35577769
Oppenheim, A. V., Willsky, A. S., & Nawab, S. H. (1997). Signals & systems. Prentice Hall.
Paluš, M. (2018). Linked by dynamics: Wavelet-based mutual information rate as a connectivity measure and scale-specific networks. In A. A. Tsonis (Ed.), Advances in nonlinear geosciences (pp. 427–463). Cham: Springer. https://doi.org/10.1007/978-3-319-58895-7_21
Park, H.-J., & Friston, K. (2013). Structural and functional brain networks: From connections to cognition. Science, 342(6158), 1238411. https://doi.org/10.1126/science.1238411, PubMed: 24179229
Park, H.-J., Friston, K. J., Pae, C., Park, B., & Razi, A. (2018). Dynamic effective connectivity in resting state fMRI. NeuroImage, 180, 594–608. https://doi.org/10.1016/j.neuroimage.2017.11.033, PubMed: 29158202
Pipes, L. A. (1968). Applications of Laplace transforms of matrix functions. Journal of the Franklin Institute, 285(6), 436–451. https://doi.org/10.1016/0016-0032(68)90049-5
Rangaprakash, D., Wu, G.-R., Marinazzo, D., Hu, X., & Deshpande, G. (2018). Hemodynamic response function (HRF) variability confounds resting-state fMRI functional connectivity. Magnetic Resonance in Medicine, 80(4), 1697–1713. https://doi.org/10.1002/mrm.27146, PubMed: 29656446
Razi, A., & Friston, K. J. (2016). The connected brain: Causality, models, and intrinsic dynamics. IEEE Signal Processing Magazine, 33(3), 14–35. https://doi.org/10.1109/MSP.2015.2482121
Razi, A., Kahan, J., Rees, G., & Friston, K. J. (2015). Construct validation of a DCM for resting state fMRI. NeuroImage, 106, 1–14. https://doi.org/10.1016/j.neuroimage.2014.11.027, PubMed: 25463471
Razi, A., Seghier, M. L., Zhou, Y., McColgan, P., Zeidman, P., Park, H.-J., … Friston, K. J. (2017). Large-scale DCMs for resting-state fMRI. Network Neuroscience, 1(3), 222–241. https://doi.org/10.1162/NETN_a_00015, PubMed: 29400357
Rosas, F. E., Mediano, P. A. M., Luppi, A. I., Varley, T. F., Lizier, J. T., Stramaglia, S., … Marinazzo, D. (2022). Disentangling high-order mechanisms and high-order behaviours in complex systems. Nature Physics, 18(5), 476–477. https://doi.org/10.1038/s41567-022-01548-5
Rosch, R., Burrows, D. R. W., Jones, L. B., Peters, C. H., Ruben, P., & Samarut, É. (2019). Functional genomics of epilepsy and associated neurodevelopmental disorders using simple animal models: From genes, molecules to brain networks. Frontiers in Cellular Neuroscience, 13, 556. https://doi.org/10.3389/fncel.2019.00556, PubMed: 31920556
Seghier, M. L., & Friston, K. J. (2013). Network discovery with large DCMs. NeuroImage, 68, 181–191. https://doi.org/10.1016/j.neuroimage.2012.12.005, PubMed: 23246991
Seth, A. K., Chorley, P., & Barnett, L. C. (2013). Granger causality analysis of fMRI BOLD signals is invariant to hemodynamic convolution but not downsampling. NeuroImage, 65, 540–555. https://doi.org/10.1016/j.neuroimage.2012.09.049, PubMed: 23036449
Smith, S. W. (2002). Continuous signal processing. In The scientist and engineer’s guide to digital signal processing.
Stephan, K. E. (2004). On the role of general system theory for functional neuroimaging. Journal of Anatomy, 205(6), 443–470. https://doi.org/10.1111/j.0021-8782.2004.00359.x, PubMed: 15610393
Stephan, K. E., Kasper, L., Harrison, L. M., Daunizeau, J., den Ouden, H. E. M., Breakspear, M., & Friston, K. J. (2008). Nonlinear dynamic causal models for fMRI. NeuroImage, 42(2), 649–662. https://doi.org/10.1016/j.neuroimage.2008.04.262, PubMed: 18565765
Stephan, K. E., Penny, W. D., Moran, R. J., den Ouden, H. E. M., Daunizeau, J., & Friston, K. J. (2010). Ten simple rules for dynamic causal modeling. NeuroImage, 49(4), 3099–3109. https://doi.org/10.1016/j.neuroimage.2009.11.015, PubMed: 19914382
Stephan, K. E., Weiskopf, N., Drysdale, P. M., Robinson, P. A., & Friston, K. J. (2007). Comparing hemodynamic models with DCM. NeuroImage, 38(3), 387–401. https://doi.org/10.1016/j.neuroimage.2007.07.040, PubMed: 17884583
Tak, S., Kempny, A. M., Friston, K. J., Leff, A. P., & Penny, W. D. (2015). Dynamic causal modelling for functional near-infrared spectroscopy. NeuroImage, 111, 338–349. https://doi.org/10.1016/j.neuroimage.2015.02.035, PubMed: 25724757
Triantafyllopoulos, K. (2021). Bayesian inference of state space models: Kalman filtering and beyond. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-76124-0
Tsvetanov, K. A., Henson, R. N. A., & Rowe, J. B. (2020). Separating vascular and neuronal effects of age on fMRI BOLD signals. Philosophical Transactions of the Royal Society B: Biological Sciences, 376(1815), 20190631. https://doi.org/10.1098/rstb.2019.0631, PubMed: 33190597
Vatiwutipong, P., & Phewchean, N. (2019). Alternative way to derive the distribution of the multivariate Ornstein–Uhlenbeck process. Advances in Difference Equations, 2019(1), 1–7. https://doi.org/10.1186/s13662-019-2214-1
Williams, R. L., II, & Lawrence, D. A. (2007). State-space fundamentals. In Linear state-space control systems (pp. 48–107). John Wiley & Sons. https://doi.org/10.1002/9780470117873.ch2
Wollstadt, P., Martínez-Zarzuela, M., Vicente, R., Díaz-Pernas, F. J., & Wibral, M. (2014). Efficient transfer entropy analysis of non-stationary neural time series. PLOS ONE, 9(7), e102833. https://doi.org/10.1371/journal.pone.0102833, PubMed: 25068489
Xie, Y., Zhang, P., & Zhao, J. (2023). A spectral sampling algorithm in dynamic causal modelling for resting-state fMRI. Human Brain Mapping, 44(8), 2981–2992. https://doi.org/10.1002/hbm.26256, PubMed: 36929686


Zarghami, T. S., & Friston, K. J. (2020). Dynamic effective connectivity. NeuroImage, 207, 116453. https://doi.org/10.1016/j.neuroimage.2019.116453, PubMed: 31821868
Zeidman, P., Friston, K., & Parr, T. (2023). A primer on Variational Laplace (VL). NeuroImage, 279, 120310. https://doi.org/10.1016/j.neuroimage.2023.120310, PubMed: 37544417
Zeidman, P., Jafarian, A., Corbin, N., Seghier, M. L., Razi, A., Price, C. J., & Friston, K. J. (2019). A guide to group effective connectivity analysis, part 1: First level analysis with DCM for fMRI. NeuroImage, 200, 174–190. https://doi.org/10.1016/j.neuroimage.2019.06.031, PubMed: 31226497
Zeidman, P., Jafarian, A., Seghier, M. L., Litvak, V., Cagnan, H., Price, C. J., & Friston, K. J. (2019). A guide to group effective connectivity analysis, part 2: Second level analysis with PEB. NeuroImage, 200, 12–25. https://doi.org/10.1016/j.neuroimage.2019.06.032, PubMed: 31226492

