Abstract
This paper investigates the controllability of a broad class of recurrent neural networks widely used in theoretical neuroscience, including models of large-scale human brain dynamics. Motivated by emerging applications in non-invasive neurostimulation such as transcranial direct current stimulation (tDCS), we study the control synthesis of these networks using constant and piecewise constant inputs. The neural model considered is a continuous-time Hopfield-type system with nonlinear activation functions and arbitrary input matrices representing inter-regional brain interactions. Our main contribution is the formulation and solution of a control synthesis problem for such nonlinear systems using specific solution representations. These representations yield explicit algebraic conditions for synthesizing constant and piecewise constant controls that solve a two-point boundary value problem in state space up to higher-order corrections with respect to the time horizon. In particular, the input is constructed to satisfy a tractable small-time algebraic relation involving the Jacobian of the nonlinear drift, ensuring that the synthesis reduces to verifying conditions on the system matrices. For canonical input matrices that directly actuate nodes, this implies that the reachable set (with constant inputs) of a given initial state is an affine subspace whose dimension equals the input rank and whose basis can be computed efficiently using a thin QR factorization. Numerical simulations illustrate the theoretical results and demonstrate the effectiveness of the proposed synthesis in guiding the design of brain stimulation protocols for therapeutic and cognitive applications.
Index Terms—Control Synthesis, Nonlinear Control, Recurrent Neural Networks, Neurostimulation, Transcranial Direct Current Stimulation
I. Introduction
Non-invasive neurostimulation techniques, such as transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (tES), are increasingly being used to modulate brain activity in order to achieve desired cognitive outcomes [4], [15], [22]. Conceptualizing the brain as a controlled dynamical system [24], [34] offers a powerful framework for identifying key brain regions and networks [26] involved in specific cognitive functions and understanding how they can be modulated using targeted stimulation.
In this regard, there is strong potential at the intersection of network control theory and clinical and basic neuroscience. For instance, there have been successes in the application of linear network control theory to optimize TMS, administered with a constant input over short time intervals [25]. Such applications target the control of specific brain regions based on structural network connectivity derived from diffusion tensor imaging (DTI). Similarly, at an analysis level, linear network control theory has been employed in human neuroimaging studies, again by leveraging DTI-parameterized models [16]. In this latter study, the authors postulated how specific brain regions may be suitable targets for control based on advantageous controllability properties relative to their potential actuation. A complementary perspective views brain stimulation—particularly deep brain stimulation—as a form of vibrational control that stabilizes desired neural dynamics through high-frequency inputs, a notion that has been formalized and validated in recent theoretical work [27], [28].
While these earlier studies [16], [25] have provided valuable insights by using linear network control theory and structural connectivity data to understand brain dynamics, they are limited in key respects. Foremost, by virtue of linearity, they are only assured to provide local characterizations. In [25], it is shown that these local analyses can offer some predictive power over how stimulation induces functional changes in nonlinear models. However, the changes considered are in terms of low-dimensional correlation metrics between brain areas (network nodes), as opposed to specific configurations in state space. In contrast, our goal is to enable formal control synthesis to induce arbitrary network states in the presence of full nonlinearity, which is likely essential for understanding how different brain dynamics mediate cognitive function.
Indeed, the need for more biologically realistic, nonlinear models of large-scale brain dynamics has been appreciated [5] as a precursor for the application of control theory to cognitive neuroscience. Among the paradigms in this regard are whole-brain Mesoscale Individualized NeuroDynamic (MINDy) models [8], [32]. Our previous works have demonstrated the validity and utility of MINDy models as a generative tool for understanding the relationship between individualized neural architectures and neural dynamics in both resting-state [32], [33] and cognitive task contexts [31]. The MINDy models derived from single-subject resting-state brain imaging data operate at a macroscale level, where each node represents a distinct brain region. A given model captures the temporal evolution of brain activity and the non-linear interactions between hundreds of brain regions, taking the form of a nonlinear dynamical system of continuous-time Hopfield-type recurrent neural networks [18]. All (intrinsic) parameters–the decay and connectivity matrices and the transfer function parameters–are directly estimated from brain activity time series so that the resulting models predict future brain activity, providing a more accurate and biologically plausible representation of large-scale brain dynamics.
A MINDy model belongs to the more general class of Hopfield-type recurrent neural networks, which will constitute the focus of our study herein. Specifically, we consider a model of $n$ interconnected neural masses associated with $n$ brain regions, described by a set of nonlinear differential equations that represent the dynamics of each region's state over time. The dynamics of the $i$-th region in this network evolve as
| $\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} W_{ij}\,\psi_j(x_j(t)) + \sum_{k=1}^{m} B_{ik}\,u_k, \qquad i \in \{1, \dots, n\},$ (1) |
where $x_i(t)$, which satisfies $x_i(0) = x_i^0$, denotes the $i$-th region's state at time $t$; $a_i$ is a positive parameter reflecting the rate at which the current state of the $i$-th region decays; $u_k$, $k \in \{1, \dots, m\}$, are constant control inputs; $B_{ik}$ represent coupling coefficients between control inputs and neurons; $W_{ij}$ is a real constant that weights the connection from the $j$-th region to the $i$-th region; and $\psi_j$ is the activation function of the $j$-th region.
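For concreteness, the dynamics (1) can be simulated directly. The following sketch uses a hypothetical three-region network with two inputs, tanh activations, and randomly drawn $W$ and $B$; none of these values come from the paper, and the forward-Euler scheme is only the simplest possible integrator:

```python
import numpy as np

# Hypothetical small network: n = 3 regions, m = 2 inputs (illustrative values only)
rng = np.random.default_rng(0)
n, m = 3, 2
a = np.array([1.0, 0.8, 1.2])           # decay rates a_i > 0
W = 0.5 * rng.standard_normal((n, n))   # connectivity weights W_ij
B = rng.standard_normal((n, m))         # input coupling B_ik
u = np.array([0.3, -0.2])               # constant control inputs u_k

def rhs(x):
    # dx_i/dt = -a_i x_i + sum_j W_ij psi_j(x_j) + sum_k B_ik u_k, with psi = tanh
    return -a * x + W @ np.tanh(x) + B @ u

# Forward-Euler integration over [0, T]
T, steps = 1.0, 1000
dt = T / steps
x = np.zeros(n)
for _ in range(steps):
    x = x + dt * rhs(x)
print(x)
```

The bounded activation keeps the state finite, so even this crude integrator gives a usable picture of the constant-input response.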
The study of neural networks, such as (1), has long been a topic of great interest for researchers seeking to understand complex dynamical behaviors and their associated control mechanisms [3], [13], [18], [36], [40], [41]. Among the various properties investigated, controllability–the ability to steer a network from one state to another within a finite time–stands out for its significant theoretical and practical implications. As highlighted earlier, in fields such as neuroengineering and brain stimulation, particularly transcranial electrical stimulation (tES) with direct current (stimulation via a constant input), the operative issue is to determine a constant input that can drive the state of the network (1) to a desired target state–within the state space–over a short time horizon (Figure 1). Such capability would present a promising avenue for developing stimulation protocols tailored to achieve specific therapeutic outcomes [12], such as altering the excitability of the motor cortex [4].
Fig. 1. We consider the control of network models of the general form (1). We are motivated by emerging applications in neurostimulation involving the manipulation of brain networks by means of exogenous input.
In previous works such as [13], [36], [40], [41], controllability or stability properties of equations such as (1) have been considered either in the fully actuated configuration or under specific restrictions on the activation functions and on the input matrix.
In the current study, we do not impose a priori such a restriction on the input matrix $B$. The assumptions on the activation functions are relaxed to those commonly used in the literature on neural networks. The general form of $B$ and the complexity arising from the inherent nonlinear dynamics present significant challenges, and systematic solutions for the control synthesis problem are not readily available in the current literature. To address this challenge, we derive an explicit algebraic representation for the hidden state and the associated constant input that solves the two-point boundary value problem in state space up to higher-order terms. This yields an explicit small-time algebraic condition for the constant input, showing that the required control can be determined by inverting an expression involving the drift's Jacobian and the desired change in state, up to higher-order corrections with respect to the time horizon. From this derivation, we then provide a step-function (piecewise constant) control for long time horizons. Although tDCS typically relies on constant or slowly modulated inputs, step constant controls provide a theoretically grounded and numerically efficient extension of constant-input synthesis over longer time horizons. Note that, in the case of linear activation functions, the synthesized constant control remains valid for both short and long time horizons.
For input matrices that directly actuate nodes, we show that the reachable set with constant inputs is an affine subspace whose dimension equals the rank of $B$, and whose basis can be computed explicitly via a thin QR factorization of the associated constraint matrix. This characterization applies to both linear and nonlinear activation functions, clarifying how the local structure of the input matrix shapes the short-horizon reachable set.
The remainder of the paper is organized as follows. In the next section, we introduce the general notations used throughout the paper. Section II presents our modeling framework, reformulating equation (1) into a form more suitable for controllability analysis and establishing key results concerning solution representations. Section III addresses the control synthesis problem and is divided into several parts: Section III-A focuses on the linear activation case; Section III-B develops control synthesis strategies for general nonlinear activations that appear to be the proper generalization of the linear case; and Section III-C outlines a taxonomy of synthesis approaches derived from dual representations of the solution, clarifying the conceptual origin of the forward nominal-state and backward nominal-state strategies. Section III-D then provides practical considerations by analyzing the structure of reachable sets associated with the proposed step-function control syntheses. Section IV offers broader discussion and possible extensions. Section V presents numerical illustrations of the theory. Finally, Section VI summarizes the main results and discusses directions for future research. Technical proofs of some results are deferred to the Appendix.
Notation 1. In this paper, $\mathbb{Z}$ stands for the set of integers and $\mathbb{N}$ for the set of positive integers. $\mathbb{R}^{n \times m}$ denotes the set of $n \times m$ matrices with real coefficients, $\mathbb{R}^n$ the space of $n$-dimensional real column vectors, and $\mathcal{L}(\mathbb{R}^n)$ the space of linear maps on $\mathbb{R}^n$, which we identify, in the usual way, with the set of square matrices of order $n$ with real coefficients. For all $x, y \in \mathbb{R}^n$, we denote by $\|x\|$ the Euclidean norm of $x$ and by $\langle x, y \rangle$ the scalar product of $x$ and $y$. We denote the identity matrix by Id, and for every matrix $M$, $\|M\|$ denotes the spectral norm of $M$, and $M^{\dagger}$ denotes the Moore–Penrose pseudo-inverse of $M$. For a symmetric matrix $S$, $\lambda_{\min}(S)$ and $\lambda_{\max}(S)$ denote respectively its smallest and largest eigenvalues. For ease of notation, a diagonal matrix $D = (d_i \delta_{ij})_{i,j}$ will simply be denoted as $\mathrm{diag}(d_i)$, where $\delta_{ij}$ is the usual Kronecker symbol. If $M$ is a square matrix, then $e^{M}$ and $\mathrm{spec}(M)$ are, respectively, the exponential and the set of eigenvalues of $M$, viz.
| $e^{M} = \sum_{k=0}^{\infty} \frac{M^{k}}{k!}, \qquad \mathrm{spec}(M) = \{\lambda \in \mathbb{C} : \det(\lambda\,\mathrm{Id} - M) = 0\}.$ (2) |
We recall that $O(\cdot)$ stands for the big-$O$ notation, and $g = O(h)$ means that $g$ is of the order of $h$. Finally, we use $\dot{x}$ for the total derivative of $x$ w.r.t. $t$.
II. Representation of solutions
To study the controllability of (1), it is convenient to recast it in abstract form. Introduce the (nonlinear) map
| $f : \mathbb{R}^n \to \mathbb{R}^n, \qquad f(x) = -A x + W \psi(x).$ (3) |
Then, (1) recasts as
| $\dot{x}(t) = f(x(t)) + B u, \qquad x(0) = x^0,$ (4) |
where $x(t) \in \mathbb{R}^n$ is the state vector, $x^0 \in \mathbb{R}^n$ is the initial state, $u \in \mathbb{R}^m$ is a constant control, $B \in \mathbb{R}^{n \times m}$ is the input matrix, $A = \mathrm{diag}(a_i)$ is the decay matrix, $W = (W_{ij})$ is the connectivity matrix, and $\psi$, given by $\psi(x) = (\psi_1(x_1), \dots, \psi_n(x_n))^{\top}$, is the firing rate function of the network.
Notation 2. We set $\Gamma := -A + W$, where $A$ is the decay matrix and $W$ is the connectivity matrix in (3).
Assumptions II.1. Unless otherwise stated, we assume that for every $i \in \{1, \dots, n\}$, the activation function $\psi_i$ is a $C^2$, globally Lipschitz function on $\mathbb{R}$ with a bounded second derivative, and satisfies $\psi_i(0) = 0$. The latter is without loss of generality, since we may always replace $\psi_i$ by $\psi_i - \psi_i(0)$ for every $i$ and absorb the resulting constant term into a new input to equation (4). Finally, for the sake of simplicity in the presentation, we also assume that $\psi_i'(0) = 1$. Indeed, as long as $\psi_i'(0) \neq 0$, we can always rescale the activation function and the connectivity matrix so that the normalized activation function satisfies this condition.
Throughout the following, we will refer to the following fixed parameters:
Recall that the vector field $f$ is globally Lipschitz on $\mathbb{R}^n$ (see, for instance, Lemma A.1). Denoting by $\Phi_t$ its flow at time $t$, it follows that the family $(\Phi_t)_{t \in \mathbb{R}}$ is a one-parameter subgroup of $\mathrm{Diff}(\mathbb{R}^n)$, the group of diffeomorphisms of $\mathbb{R}^n$. Note that for any $x^0 \in \mathbb{R}^n$, $t \mapsto \Phi_t(x^0)$ is the unique solution of (4) when the control $u = 0$, namely
| $\frac{d}{dt}\Phi_t(x^0) = f(\Phi_t(x^0)), \qquad \Phi_0(x^0) = x^0.$ (5) |
Let $\Phi_{-t}$ denote the inverse of $\Phi_t$, i.e., $\Phi_{-t} \circ \Phi_t = \Phi_t \circ \Phi_{-t} = \mathrm{Id}$ for every $t$. In particular, the map $(t, x) \mapsto \Phi_t(x)$ is of class $C^1$ (see, for instance, [17, Chapter 15, Theorem 1]). Moreover, for a fixed $t$, if we use $D\Phi_t(x)$ to denote the differential of $\Phi_t$ at $x$, then $D\Phi_t(x)$ is a well-defined invertible matrix, and it holds
| $\frac{d}{dt}\,D\Phi_t(x) = Df(\Phi_t(x))\,D\Phi_t(x), \qquad D\Phi_0(x) = \mathrm{Id}.$ (6) |
The classical theory of ordinary differential equations (ODE) can be invoked to justify the existence and uniqueness of solutions of (4) (see, for instance, [6, Chapter 2]). In this work, we represent the system's state trajectory in a form reminiscent of that of linear time-invariant systems as used in [38, Theorem 4.4]. This representation emerged from the chronological calculus framework introduced by Agrachev and Gamkrelidze in the late 1970s for solving nonautonomous ODEs on finite-dimensional manifolds [2]. For a comprehensive treatment of this theory, see also [1, Chapter 6], and [21], [30] for applications in geometric control theory. This representation is noteworthy because, in the specific case of linear activation functions, it naturally aligns with the variation of constants formula. From this perspective, the proposed representation and approach are a proper generalization of the variation of constants formula to the case of models of the form (4).
The first result of this paper concerns solution representation and is stated as follows. The proof is presented in Section B-A.
Theorem II.2. Let $x^0 \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$. The solution of (4) can be expressed as
| (7) |
When the dynamics are linear, this representation reduces to the familiar form.
Corollary II.3. Let $x^0 \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$. If $\psi = \mathrm{Id}$, the solution to (4) reads
| $x(t) = e^{t\Gamma} x^0 + \int_0^t e^{(t-s)\Gamma} B u \, ds.$ (8) |
Proof. In this case, $f(x) = \Gamma x$ for every $x \in \mathbb{R}^n$, where $\Gamma$ is introduced in Notation 2. Consequently, $\Phi_t(x) = e^{t\Gamma} x$ and $D\Phi_t(x) = e^{t\Gamma}$ for every $x \in \mathbb{R}^n$. The result then follows by (7). □
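As a numerical sanity check of Corollary II.3, the closed form (8) can be compared against direct integration of the linear dynamics. The matrices below are hypothetical placeholders, and we use the primitive $\int_0^t e^{(t-s)\Gamma}\,ds = \Gamma^{-1}(e^{t\Gamma} - \mathrm{Id})$, valid when $\Gamma$ is invertible:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n, m = 3, 2
Gamma = -np.eye(n) + 0.3 * rng.standard_normal((n, n))  # hypothetical Gamma = -A + W
B = rng.standard_normal((n, m))
u = np.array([0.5, -1.0])
x0 = rng.standard_normal(n)
T = 2.0

# Closed form: x(T) = e^{T Gamma} x0 + Gamma^{-1} (e^{T Gamma} - Id) B u
E = expm(T * Gamma)
x_closed = E @ x0 + np.linalg.solve(Gamma, (E - np.eye(n)) @ (B @ u))

# Direct numerical integration of dx/dt = Gamma x + B u
sol = solve_ivp(lambda t, x: Gamma @ x + B @ u, (0.0, T), x0,
                rtol=1e-10, atol=1e-12)
gap = np.linalg.norm(x_closed - sol.y[:, -1])
print(gap)
```

The discrepancy is at the level of the integrator's tolerance, confirming the variation-of-constants form for constant inputs.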
We conclude this section by presenting a second representation of the solution to (4). It can be viewed as a natural extension of the linear case given in Corollary II.3, particularly at the final time horizon $T$. The proof follows similar lines to that of Theorem II.2.
Theorem II.4. Let $x^0 \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$. Then the solution of (4) can be expressed as
| (9) |
for all .
Remark II.5. Although Theorems II.4 and II.2 are stated for constant inputs, they remain valid for time-dependent inputs; note that this is a specific case of [37, Theorem 3.2]. In this case, the corresponding solution to (4) is an absolutely continuous trajectory. This observation is particularly relevant, as we will later consider step constant controls—i.e., piecewise constant controls defined on a partition of the time interval.
III. Controllability of the neural networks with step functions
In this section, we analyze the controllability of (4) under constant inputs over short time horizons and extend the approach to step-function controls for longer-horizon transfers in the fully nonlinear setting. When the activation functions are linear, the synthesized constant control remains valid for both short and long time horizons.
Recall, e.g., from [39, Remark 4.2.3] that a step function on $[0, T]$ is a piecewise constant function defined over $[0, T]$, i.e., constant on each interval of a partition of $[0, T]$. Then, a constant function corresponds to a special case of a step function with a single constant value across the entire interval.
Let us introduce the following.
Definition 3 (Step controllability). Let $T > 0$. System (4) is step controllable over the time interval $[0, T]$ if, for all $x^0, x^1 \in \mathbb{R}^n$, there exists a step function $u$ on $[0, T]$ such that the solution of (4) with initial state $x^0$ satisfies $x(T) = x^1$.
A. The case of linear activation functions
To gain insight into the controllability of the nonlinear system (4) under constant control, it is instructive to first analyze the corresponding linearized model—a strategy that has been clearly articulated and effectively motivated in prior work, notably by [19].
Consider first the linear neural network
| $\dot{x}(t) = \Gamma x(t) + B u, \qquad x(0) = x^0,$ (10) |
with $\psi = \mathrm{Id}$ (corresponding to $\psi_i(x) = x$ for all $i$) and constant input $u \in \mathbb{R}^m$. System (10) is controllable on $[0, T]$ if and only if the Kalman rank condition holds. Equivalently, the invertibility of the controllability Gramian also guarantees controllability over $[0, T]$. Moreover, when these conditions are met, one can synthesize a time-varying control of minimal $L^2$-energy that steers (10) from any $x^0$ to any $x^1$ in time $T$ (see, e.g., [10], [39]).
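The classical Gramian-based minimal-energy synthesis mentioned above can be sketched numerically as follows. The matrices here are hypothetical (controllability of the random pair is assumed), and the Gramian is computed by array-valued quadrature:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

rng = np.random.default_rng(2)
n, m = 3, 2
Gamma = 0.5 * rng.standard_normal((n, n))   # hypothetical system matrix
B = rng.standard_normal((n, m))             # hypothetical input matrix
x0 = np.zeros(n)
x1 = np.array([1.0, -0.5, 0.25])            # target state
T = 1.0

# Controllability Gramian W_c(T) = int_0^T e^{s Gamma} B B^T e^{s Gamma^T} ds
Wc, _ = quad_vec(lambda s: expm(s * Gamma) @ B @ B.T @ expm(s * Gamma).T, 0.0, T)

# Minimal-energy control u(t) = B^T e^{(T-t) Gamma^T} W_c^{-1} (x1 - e^{T Gamma} x0)
eta = np.linalg.solve(Wc, x1 - expm(T * Gamma) @ x0)

def u(t):
    return B.T @ expm((T - t) * Gamma).T @ eta

sol = solve_ivp(lambda t, x: Gamma @ x + B @ u(t), (0.0, T), x0,
                rtol=1e-10, atol=1e-12)
endpoint_error = np.linalg.norm(sol.y[:, -1] - x1)
print(endpoint_error)
```

Note that this control is genuinely time-varying; the next paragraphs show why no constant input can, in general, achieve the same transfer.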
However, this synthesis does not systematically extend to a constant control capable of achieving the same objective. If the system is controllable, and we assume the existence of a constant input $u$ that steers the system from $x^0$ to $x^1$ over the interval $[0, T]$, then from (8) and $x(T) = x^1$, we have $x^1 = e^{T\Gamma} x^0 + \int_0^T e^{(T-s)\Gamma}\,ds\, B u$, so that left multiplying by $e^{-T\Gamma}$ and using the identity
| $\int_0^T e^{-s\Gamma}\,ds = \Gamma^{-1}\left(\mathrm{Id} - e^{-T\Gamma}\right)$ (11) |
we find
| $\Gamma^{-1}\left(\mathrm{Id} - e^{-T\Gamma}\right) B u = e^{-T\Gamma} x^1 - x^0.$ (12) |
However, (12) can be ill-posed, as shown by the following counter-example.
Remark III.1. Consider the case where , and
The linear system (10), equipped with time-varying control, is fully actuated and hence controllable, as guaranteed by the Kalman rank condition. However, any constant control must satisfy (12). Since , it follows that no constant control can steer to a nonzero target state over the interval . Notably, can be made arbitrarily small by choosing .
On the other hand, using the Gramian-based synthesis, one finds that , and
is the time-varying control of minimal -norm that steers to . In particular, . Taking yields , showing that if , then achieves the transfer from to in with small effort.
Remark III.1 suggests that the controllability of a linear system does not imply the existence of a constant control that solves the control objective.
Proposition III.2. Let $T > 0$ and $x^0, x^1 \in \mathbb{R}^n$. Assume that $T\lambda \notin 2\pi i\,\mathbb{Z}$ for any $\lambda \in \mathrm{spec}(\Gamma)$. Then, the solution to (10) satisfies $x(T) = x^1$ if and only if $u$ solves
| (13) |
The proof of the following result is immediate.
Corollary III.3. Under the hypotheses of Proposition III.2, (13) has at least one solution $u$ if and only if
In this case, given by
is the corresponding least-norm constant control.
Proof of Proposition III.2. Under the assumption on the eigenvalues of $\Gamma$, the matrices $\Gamma$ and $e^{T\Gamma} - \mathrm{Id}$ are invertible. By Corollary II.3, the solution of (10) with a constant control $u$ is given by (8). Let us show that $x(T) = x^1$ is equivalent to (13). If $x(T) = x^1$, then one obtains immediately that $u$ solves (13) by invoking the same arguments used to derive (12). Conversely, if (13) is satisfied, one deduces from (11) and (8) that
so that $x(T) = x^1$, since $e^{T\Gamma} - \mathrm{Id}$ is invertible. □
Remark III.4. If , then the matrix is invertible for all , as ensured by Lemma C.2 and the Neumann series expansion. Moreover, for all , we have
which implies that is invertible as well.
Therefore, in Proposition III.2, the spectral assumptions become relevant only when .
Remark III.5. Remark III.1 and Proposition III.2 highlight a key distinction between constant and time-varying control synthesis in linear systems.
While time-varying control synthesis depends solely on the structural properties of the pair $(\Gamma, B)$—as captured by the invertibility of the controllability Gramian—constant control synthesis is more restrictive. In particular, it depends not only on the matrices $\Gamma$ and $B$, but also crucially on the time horizon $T$, the initial and target states $x^0$ and $x^1$, and on the spectrum of $\Gamma$. As shown in Remark III.1, even a fully actuated system may fail to connect certain states under constant input, rendering (12), and hence (13), unsolvable.
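To make the constant-control synthesis of this subsection concrete, the endpoint map under a constant input, $x(T) = e^{T\Gamma} x^0 + \Gamma^{-1}(e^{T\Gamma} - \mathrm{Id}) B u$, can be inverted in the least-norm sense via a pseudo-inverse, in the spirit of Corollary III.3. The sketch below uses hypothetical matrices, with eigenvalues of $\Gamma$ kept away from the problematic spectrum and $B = \mathrm{Id}$ so that the constraint is solvable:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
n = 3
Gamma = -np.eye(n) + 0.2 * rng.standard_normal((n, n))  # eigenvalues near -1
B = np.eye(n)                                           # fully actuated
x0 = rng.standard_normal(n)
x1 = rng.standard_normal(n)
T = 1.0

# Constant-input endpoint map: x(T) = e^{T Gamma} x0 + M u, with
# M = Gamma^{-1} (e^{T Gamma} - Id) B
E = expm(T * Gamma)
M = np.linalg.solve(Gamma, E - np.eye(n)) @ B

u = np.linalg.pinv(M) @ (x1 - E @ x0)   # least-norm constant control

sol = solve_ivp(lambda t, x: Gamma @ x + B @ u, (0.0, T), x0,
                rtol=1e-10, atol=1e-12)
endpoint_error = np.linalg.norm(sol.y[:, -1] - x1)
print(endpoint_error)
```

When $M$ is rank-deficient, the same pseudo-inverse returns the least-norm control for the closest reachable target instead, which mirrors the solvability condition in Corollary III.3.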
B. The case of nonlinear activation functions
Proposition III.2 highlights that controllability under constant control depends intricately on the time horizon $T$, the spectrum of $\Gamma$, and the initial and target states $x^0$ and $x^1$.
Based on the solution representations of (4) given in Theorems II.2 and II.4, we note that the latter naturally extends the linear case, particularly at the final time horizon $T$. Nonetheless, both formulations provide valid foundations for synthesizing controls that achieve the desired state transfer. The key distinction lies in their temporal structure and directionality, as well as in the specific conditions required to guarantee their applicability.
1) Forward nominal-state synthesis: Building on the solution representation of (4) given in Theorem II.4, we aim to synthesize step-function controls that achieve the control objective. Unlike the linear case, the control is first derived in an implicit form due to the nonlinear nature of the system. Then, under the assumption of a small time horizon, we provide an explicit expression that is an accurate approximation of the step-function controls that solve the control objective.
We begin by stating a key result on which the constant control synthesis is based. The proof is identical to that presented in Section C-A.
Proposition III.6. Let $T > 0$ and $u \in \mathbb{R}^m$. The solution of (4) can be expanded at time $T$ as
| (14) |
Here ,
| (15) |
| (16) |
where , and is the second derivative of at .
The implicit constant control synthesis of this section is therefore presented in the following theorem.
Theorem III.7. Let $T > 0$ and $x^0, x^1 \in \mathbb{R}^n$. Assume that $T\lambda \notin 2\pi i\,\mathbb{Z}$ for any $\lambda \in \mathrm{spec}(Df(x^0))$. Then, the solution to (4) satisfies $x(T) = x^1$ if and only if $u$ solves
| (17) |
Here is defined in Proposition III.6, and letting , one has
| (18) |
Proof. Let us show that is equivalent to (17). If , then left multiplying (14) by , using (2), and
yields
| (19) |
Since , we left multiply (19) respectively by and to get
which is exactly (17) where is defined by (18). Conversely, assume that solves (17). Then, one finds
| (20) |
On the other hand, from (14), one deduces that
| (21) |
Identifying (20) with (21), one gets
| (22) |
Since the matrix in question is invertible, it follows from (22) that $x(T) = x^1$. This completes the proof of the sufficiency part. □
The synthesis provided in Theorem III.7 is implicit, as the operator depends on the state trajectory, and thus on the control input $u$. This dependence poses practical limitations, even when a solution exists. In applications such as brain stimulation via tDCS, one is often interested in modulating activity over short time horizons. In what follows, we derive an expansion of the relevant matrix in the small-time regime that enables accurate synthesis over short time horizons.
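Numerically, the implicit relation can also be attacked by treating the constant input as the unknown in a shooting problem: integrate (4) for a candidate $u$, compare the endpoint with $x^1$, and update $u$ by least squares. This is a schematic illustration with hypothetical parameters (tanh activations, fully actuated $B$), not the paper's synthesis itself; the small-time relation $Bu \approx (x^1 - x^0)/T - f(x^0)$ merely provides the initial guess:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n = 3
A = np.diag([1.0, 0.8, 1.2])
W = 0.5 * rng.standard_normal((n, n))
B = np.eye(n)                        # fully actuated for illustration
x0 = rng.standard_normal(n)
x1 = x0 + 0.5 * rng.standard_normal(n)
T = 0.2

def f(x):
    return -A @ x + W @ np.tanh(x)

def endpoint(u):
    # integrate (4) with constant input u over [0, T]
    sol = solve_ivp(lambda t, x: f(x) + B @ u, (0.0, T), x0,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Small-time initial guess: B u ~ (x1 - x0)/T - f(x0)
u0 = np.linalg.pinv(B) @ ((x1 - x0) / T - f(x0))
res = least_squares(lambda u: endpoint(u) - x1, u0)
shoot_error = np.linalg.norm(endpoint(res.x) - x1)
print(shoot_error)
```

The quality of the small-time guess is exactly what the expansions below quantify analytically.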
The proof of the following is presented in Section C-B.
Proposition III.8. Under the assumptions of Theorem III.7, the following expansion holds
| (23) |
In particular, the following expansion also holds
| (24) |
Remark III.9. It is worth noting that the term in (23) becomes in (24) due to the relation
since includes the factor , which behaves like as , given that is uniformly bounded with respect to and .
The explicit constant control synthesis of this section is summarized in the following theorem, which includes an estimate of the endpoint error.
Theorem III.10. Let $T > 0$ and $x^0, x^1 \in \mathbb{R}^n$. Assume that $T\lambda \notin 2\pi i\,\mathbb{Z}$ for any $\lambda \in \mathrm{spec}(Df(x^0))$. Then, the solution to (4) satisfies $x(T) = x^1$ if and only if $u$ solves
| (25) |
where . In particular, let $x_u$ denote the solution of (4) corresponding to a constant control $u$ satisfying
| (26) |
Then, the following endpoint error estimate holds
Proof. First, (25) is an immediate consequence of (17), (18) and (24). Next, using the expansions (14), (2), (23), and the definition of from (26), we obtain
| (27) |
The same observation as in Remark III.9 justifies why the term in (23) reduces to in (27). □
The following result is immediate. It provides a necessary and sufficient condition for (26) to admit at least one solution.
Corollary III.11. Under the hypotheses of Theorem III.7, (26) admits at least one solution if and only if
In this case, the least-norm constant control is given by
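A minimal numerical reading of the explicit small-time synthesis can be sketched as follows. The network is hypothetical, and the Jacobian-corrected relation $Bu = (\mathrm{Id} + \frac{T}{2} Df(x^0))^{-1} (x^1 - x^0)/T - f(x^0)$ is our leading-order reading of the elided displays, not necessarily the paper's exact formula; it is compared against the zeroth-order input $Bu = (x^1 - x^0)/T - f(x^0)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(5)
n = 3
A = np.diag([1.0, 0.9, 1.1])
W = 0.4 * rng.standard_normal((n, n))
B = np.eye(n)                             # fully actuated for illustration
x0 = rng.standard_normal(n)
x1 = x0 + 0.5 * rng.standard_normal(n)    # nearby target
T = 0.05

def f(x):
    return -A @ x + W @ np.tanh(x)

def Df(x):
    # Jacobian of the drift: -A + W diag(1 - tanh(x)^2)
    return -A + W @ np.diag(1.0 - np.tanh(x) ** 2)

def endpoint(u):
    sol = solve_ivp(lambda t, x: f(x) + B @ u, (0.0, T), x0,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Zeroth-order synthesis: B u = (x1 - x0)/T - f(x0)
u_plain = np.linalg.pinv(B) @ ((x1 - x0) / T - f(x0))
# Jacobian-corrected synthesis (our reading of the small-time relation)
rhs = np.linalg.solve(np.eye(n) + (T / 2.0) * Df(x0), (x1 - x0) / T)
u_corr = np.linalg.pinv(B) @ (rhs - f(x0))

err_plain = np.linalg.norm(endpoint(u_plain) - x1)
err_corr = np.linalg.norm(endpoint(u_corr) - x1)
print(err_plain, err_corr)
```

The correction absorbs the second-order Taylor term of the trajectory, so its endpoint error is markedly smaller than the zeroth-order one for small $T$.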
While the previous synthesis applies over short time horizons, some practical scenarios require reaching the target state over a longer horizon. In the following, we show how a step function can be constructed from short-time syntheses to achieve state transfer over a long time horizon. As noted in Remark II.5, the solution to (4) can still be represented by (9).
We begin with the following implicit synthesis for step-function controls. Its proof, outlined in Section C-C, closely follows the arguments in Proposition III.6 and Theorem III.7.
Theorem III.12. Let and , where be such that for any . Define the step-function ,
where . Then, the solution to (4) corresponding to $u$ satisfies $x(T) = x^1$ if and only if $u$ solves
| (28) |
Here, and
Moreover, , with and defined in (15) and (16), respectively, where the integrals are taken over .
Then, one has the following synthesis in a long-time horizon using explicit step-function controls.
Theorem III.13. Let and , where be such that for any . Let be a solution of
where . Then, the solution to (4) corresponding to the step function ,
satisfies
The proof of Theorem III.13 follows the same arguments as Theorem III.10 and is therefore omitted for brevity.
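One natural numerical realization of the step-function synthesis is to recompute a short-horizon constant input on each subinterval of a partition of $[0, T]$. The sketch below is schematic: the parameters are hypothetical, the waypoints are placed on the straight segment from $x^0$ to $x^1$ (one choice among many), and the per-piece input uses the zeroth-order small-time formula:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)
n = 3
A = np.diag([1.0, 0.8, 1.2])
W = 0.4 * rng.standard_normal((n, n))
B = np.eye(n)                        # fully actuated for illustration
x0 = rng.standard_normal(n)
x1 = rng.standard_normal(n)
T, N = 2.0, 50                       # long horizon split into N short pieces
dt = T / N

def f(x):
    return -A @ x + W @ np.tanh(x)

x = x0.copy()
for k in range(N):
    target = x0 + (k + 1) / N * (x1 - x0)               # waypoint on [x0, x1]
    u = np.linalg.pinv(B) @ ((target - x) / dt - f(x))  # short-horizon constant input
    sol = solve_ivp(lambda t, y, u=u: f(y) + B @ u, (0.0, dt), x,
                    rtol=1e-9, atol=1e-11)
    x = sol.y[:, -1]

final_error = np.linalg.norm(x - x1)
print(final_error)
```

Because each piece re-targets an absolute waypoint from the actual state, the per-piece errors do not accumulate, and the final error shrinks as the partition is refined.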
2) Backward nominal-state synthesis: Building on the solution representation of (4) given in Theorem II.2, we aim to construct step-function controls under the assumption that the terminal condition is achieved. In contrast to the forward nominal-state synthesis, the implicit control derived here does not provide a direct condition guaranteeing that the target state is reached.
We begin with the key result on which the implicit synthesis is based. The proof is presented in Section C-A.
Proposition III.14. Let and . The solution of (4) can be expanded as
| (29) |
for all . Here ,
| (30) |
| (31) |
where , and is the second derivative of at .
Theorem III.15. Let $T > 0$ and $x^0, x^1 \in \mathbb{R}^n$. Assume that $T\lambda \notin 2\pi i\,\mathbb{Z}$ for any $\lambda \in \mathrm{spec}(Df(x^1))$. Any constant control $u$ that ensures the corresponding solution of (4) satisfies $x(T) = x^1$ necessarily solves
| (32) |
Here is defined in Proposition III.14, and letting , one has
| (33) |
Remark III.16. The implicit backward nominal-state synthesis in Theorem III.15 provides only a necessary condition for a constant control to achieve $x(T) = x^1$ in (4). In contrast, the forward synthesis (17) yields a sufficient condition: any $u$ solving it guarantees $x(T) = x^1$, and thus also satisfies the backward condition.
Let us now present the proof of Theorem III.15.
Proof of Theorem III.15. Letting in (29), one gets from and left multiplication by that
| (34) |
by (2). Since , left multiplying (34) by and yields
It follows that solves the implicit equation
which is (32), where is defined by (33). □
Using the same machinery that leads to Theorem III.10, we obtain the following result. The proof is omitted for brevity.
Theorem III.17. Under the assumptions of Theorem III.15, let $x_u$ denote the solution of (4) corresponding to a constant control $u$ satisfying
| (35) |
where . Then, the following endpoint error estimate holds
The following result provides a necessary and sufficient condition for (35) to admit at least one solution.
Corollary III.18. Under the hypotheses of Theorem III.15, (35) admits at least one solution if and only if
In this case, the least-norm constant control is given by
Finally, one has the following result that synthesizes a step function for a large time horizon.
Theorem III.19. Let and , where be such that for any . Let be a solution of
where . Then, the solution to (4) corresponding to the step function ,
satisfies
Remark III.20. A key distinction between the forward and backward nominal-state syntheses lies in their respective spectral conditions: the forward synthesis depends on the spectrum at the initial state , while the backward synthesis relies on that at the target state .
In practice, only one of these conditions may be satisfied, depending on the region of the state space. This asymmetry underscores the complementarity of the two syntheses—each offers a valid control strategy under different local spectral properties, making their coexistence practically valuable.
This contrast also reflects a deeper difference between linear and nonlinear control. In the linear case, controllability depends on a uniform spectral condition (cf. Proposition III.2); failure at one point implies failure everywhere. In the nonlinear setting, local spectral conditions at either the initial or the target state can independently ensure controllability, highlighting a key advantage of the nonlinear synthesis framework.
C. Forward and Backward syntheses
Although Sections III-B1 and III-B2 focus on the forward and backward nominal-state syntheses, respectively, it is worth emphasizing that both arise from a broader family of synthesis strategies derived from the dual representations of the nonlinear system (4) in Theorems II.2 and II.4. Each representation admits at least two distinct expansion strategies—depending on the chosen integration by parts identity—that lead to meaningful control synthesis formulations.
Specifically, starting from the solution representation (7), we observe two synthesis paths:
Applying the identity yields the implicit backward nominal-state synthesis fully investigated in Section III-B2.
Using instead gives rise to what we refer to as an implicit backward initial-state synthesis. This formulation yields an expression of the control that provides a necessary and sufficient condition for reachability, and it is not analyzed in detail here.
Similarly, from the solution representation (9), two analogous options arise:
Using the identity leads to the implicit forward nominal-state synthesis fully developed in Section III-B1.
Alternatively, using results in what we refer to as an implicit forward final-state synthesis. This yields a formally valid expansion and a necessary condition on the control, and it is not investigated in this work.
D. Reachability via control synthesis methods
In this section, we summarize the main results from Sections III-B1 and III-B2 into a set of operational insights framed in terms of the input matrix $B$. Specifically, we analyze how the set of states reachable from a given initial condition over the interval $[0, T]$ depends on the structure of $B$, under both constant and step-function control strategies synthesized via the proposed methods.
Given $x^0 \in \mathbb{R}^n$, we say that a state $x^1$ is c-reachable over $[0, T]$ from $x^0$ if there exists a constant control $u$ such that the solution of (4) corresponding to $u$ satisfies $x(T) = x^1$. The c-reachable set is defined in (36) below; for step-function controls, the sc-reachable set is defined analogously.
| (36) |
We say that (4) is controllable over $[0, T]$ from $x^0$ with a constant control if the c-reachable set equals $\mathbb{R}^n$. If this holds for every $x^0 \in \mathbb{R}^n$, then (4) is said to be completely controllable over $[0, T]$ with constant controls.
1) The linear case: It is convenient to begin with the linear system (10), which helps in understanding the fully nonlinear setting. In the linear control framework, controllability with a time-varying control is guaranteed if the Kalman rank condition or, equivalently, the invertibility of the controllability Gramian holds. These conditions are independent of $x^0$ and $x^1$, and they allow for flexibility in choosing an input matrix $B$ satisfying what we refer to as the time-varying controllability condition.
However, as shown in Remark III.1, the question becomes more delicate when restricting to constant controls. Notably, even the fully actuated case $B = \mathrm{Id}$ may fail to achieve controllability with a constant control, despite satisfying the time-varying controllability condition.
Under the spectral condition in Proposition III.2, Corollary III.3 shows that (10) is completely controllable with constant controls over if and only if and is invertible. Otherwise, if or is not invertible, then
Hence, the size of is determined by the rank of : the smaller this rank is relative to , the fewer target states are reachable from via constant controls.
2) The Nonlinear Case: We now turn to the fully nonlinear setting and analyze the c-reachable set , leveraging the forward nominal-state synthesis from Section III-B1.
Unlike the linear case, the synthesis equation in Theorem III.7 is implicit in unless . When , the analysis from Section III-D1 directly applies. If 0, Proposition III.8 shows that for sufficiently small , the control satisfies (25). As a result, (4) is completely controllable with constant controls over for small if and only if and is invertible. Otherwise, when or is not invertible, the c-reachable set satisfies
and its size is again governed by the rank of .
Still, when , the small-time reachability result allows one to extend the synthesis to arbitrary horizons via step-function controls. Specifically, let be such that for any . Then, for , we have . For , Theorem III.12 shows that system (4) is completely controllable with step-function controls over if and only if and is invertible. Otherwise, one has
and the sc-reachable set structure again depends on .
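To make the step-function construction concrete, here is a minimal numerical sketch (our own illustration, not the paper's exact algorithm): on each subinterval, the constant input is chosen by the first-order rule u ≈ B⁺(waypoint − free-flow endpoint)/Δt, chaining small-time corrections across the whole horizon. The network size, weights, waypoint schedule, and full actuation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = 0.2 * rng.normal(size=(n, n)) / np.sqrt(n)  # small-norm connectivity (illustrative)
L = np.eye(n)                                   # unit decay matrix (assumption)
B = np.eye(n)                                   # fully actuated canonical input (assumption)

def f(x, u):
    # Hopfield-type dynamics of the form (4): x' = -L x + W tanh(x) + B u
    return -L @ x + W @ np.tanh(x) + B @ u

def flow(x, u, dt, steps=50):
    # classical RK4 integration under a constant input u
    h = dt / steps
    for _ in range(steps):
        k1 = f(x, u); k2 = f(x + 0.5 * h * k1, u)
        k3 = f(x + 0.5 * h * k2, u); k4 = f(x + h * k3, u)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0, xT = rng.normal(size=n), rng.normal(size=n)
T, K = 2.0, 20          # horizon split into K small subintervals
dt = T / K
Bpinv = np.linalg.pinv(B)

x = x0.copy()
for k in range(1, K + 1):
    z = x0 + (k / K) * (xT - x0)            # waypoint on the segment from x0 to xT
    free = flow(x.copy(), np.zeros(n), dt)  # small-time free-flow endpoint
    u = Bpinv @ (z - free) / dt             # first-order constant input on this piece
    x = flow(x, u, dt)

rel_err = np.linalg.norm(x - xT) / np.linalg.norm(xT)
```

Because each piece re-targets its waypoint from the current state, the per-step O(Δt²) errors do not accumulate, and the endpoint error stays small even over a horizon where a single constant input would not suffice.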
Let us characterize for the input matrix traditionally considered in network neuroscience [16], namely
| (37) |
where is the -th canonical basis vector of .
Proposition III.21. Let and . Assume that for any . If the input matrix is given by (37), then the c-reachable set satisfies
| (38) |
and the constant input that steers (4) from to is given by
| (39) |
Here collects the first rows of , and collects its remaining rows.
Proof. First, it follows from Theorem III.10 that the solution to (4) corresponding to a constant control solving
| (40) |
satisfies (see (27))
| (41) |
Next, with as in (37), one deduces from (40) that
| (42) |
where is the identity matrix, and is the zero matrix. Since is invertible, one has and therefore . Finally, (42) is equivalent to (39) and
| (43) |
Combining (36), (41) and (43) yields (38). □
Remark III.22. In Proposition III.21, the first-order characterization of the c-reachable set as an affine subspace is valid as long as the desired target shift . This ensures that the required constant input (39) remains bounded, since to leading order , one has .
Using the thin QR factorization of , one finds that .
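As an illustration of this computation (our own sketch, with a generic full-column-rank input matrix and hypothetical dimensions), NumPy's thin QR factorization yields an orthonormal basis of the column space of the input matrix; to leading order, the c-reachable set is the free-flow endpoint translated by that column space, so the closest reachable target is obtained by orthogonal projection:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
B = rng.normal(size=(n, m))   # generic full-column-rank input matrix (illustrative)
Q, R = np.linalg.qr(B)        # thin QR: Q is n-by-m with orthonormal columns

# The columns of Q span range(B); to leading order, the c-reachable set from x0
# is the affine subspace  phi_T(x0) + range(B), whose dimension is rank(B) = m.
phi_T_x0 = rng.normal(size=n)  # stand-in for the free-flow endpoint phi_T(x0)
shift = rng.normal(size=n)     # desired target shift x_T - phi_T(x0)

proj = Q @ (Q.T @ shift)       # reachable component of the desired shift
closest = phi_T_x0 + proj      # closest reachable target in Euclidean distance
```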
Remark III.23. First, the characterization of the c-reachable set and the associated constant-input synthesis in Proposition III.21, valid for a small time horizon when the input matrix is given by (37), can be refined by explicitly accounting for the second derivative in the expansion. We defer this analysis to future work. Next, let and be such that satisfies the spectral condition in Theorem III.7. Future work should investigate the structure and effective dimension of the c-reachable sets as functions of for a general input matrix .
IV. Comments on the main results of Section III
First, if the spectral norm condition is satisfied, then the eigenvalue-related assumptions required in Proposition III.2 and Theorems III.7 and III.15 are automatically fulfilled, as discussed, e.g., in Remark III.4. Note that this condition also ensures that the nonlinear system (4) is contracting.
Contracting dynamics possess a range of desirable properties (see [20]), and contraction has become a standard structural assumption in many recent works on recurrent neural networks, such as [7], [11]. While the condition on is only a sufficient criterion for contraction, it serves here as a simple and practically verifiable condition ensuring well-posedness of the synthesis proposed in this work.
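A simple numerical check of a sufficient contraction criterion can be sketched as follows (our own illustration, under the assumptions that the decay matrix is diagonal and the activation slope is bounded by one, as for tanh): if the spectral norm of the connectivity matrix is below the smallest decay rate, the matrix measure of every Jacobian is negative, which implies contraction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
W = 0.2 * rng.normal(size=(n, n)) / np.sqrt(n)  # small-norm weights (illustrative)
L = np.eye(n)                                   # diagonal decay matrix (assumption)

spec_norm = np.linalg.norm(W, 2)  # largest singular value of W
decay = np.diag(L).min()

# Since |tanh'| <= 1, every Jacobian -L + W diag(tanh'(x)) has matrix measure
# mu_2 <= -decay + spec_norm uniformly in x; a positive margin implies contraction.
rate = decay - spec_norm
is_contracting = rate > 0
```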
More generally, the control syntheses developed in this paper apply to any nonlinear system of the form , where is constant and satisfies the regularity conditions
for some constant . Notably, we do not require any explicit boundedness condition on the vector field itself, making the framework broadly applicable.
This flexibility is particularly relevant for applications in machine learning, where recurrent neural networks (RNNs) often employ unbounded and non-smooth activation functions such as the ReLU. The ReLU is differentiable everywhere except at the origin, and it is globally Lipschitz. In this case, the solution representations (7) and (9) no longer apply, and our control syntheses cannot be directly used. To overcome this limitation, one can consider smooth approximations of the ReLU (e.g., the softplus [14] or the swish function [29]), which restore differentiability and allow the application of our synthesis results.
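As an illustration (our own sketch, not part of the paper's pipeline), the softplus and swish surrogates mentioned above can be compared with the ReLU numerically; softplus with sharpness parameter β deviates from the ReLU by at most log(2)/β, attained at the origin, so the approximation can be made arbitrarily tight while remaining smooth.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softplus(x, beta=1.0):
    # smooth ReLU surrogate [14]; converges uniformly to ReLU as beta -> infinity
    return np.log1p(np.exp(beta * x)) / beta

def swish(x, beta=1.0):
    # x * sigmoid(beta * x) [29]; smooth, slightly non-monotone near the origin
    return x / (1.0 + np.exp(-beta * x))

xs = np.linspace(-3.0, 3.0, 601)
gap = np.max(np.abs(relu(xs) - softplus(xs, beta=10.0)))
# the uniform gap equals log(2)/beta, attained at x = 0
```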
V. Examples and numerical simulations
This section presents numerical simulations that support our theoretical study. The pipeline of the experiments and analysis is visualized in Fig. 2. We include three major types of models: linear systems, “vanilla” tanh RNNs, and MINDy models [32]. While vanilla RNNs are widely used in theoretical neuroscience as task-performing models, MINDy models belong to a different branch of models that directly approximate experimental neural data; together, they demonstrate the different application contexts of our theoretical framework.
Fig. 2. Pipeline of the numerical experiments.
A. Implementation
We implement the linear-system control synthesis (13), the nonlinear forward nominal-state synthesis (26), and the nonlinear backward nominal-state synthesis (35) in Python. The scripts will be available after publication. For the nonlinear syntheses, the flow is numerically integrated using the odeint function from the torchdiffeq package [9]. We adopted the Runge-Kutta method of order 7(8) of Dormand-Prince-Shampine, the highest-order method available in the package. The method accepts a new step if , where denotes the root mean square norm, the estimated error, and the current state. We found that this setting provided reasonable control of the numerical error associated with flow integration. Jacobian matrices were computed through automatic differentiation in PyTorch. Experiments were conducted on a desktop computer with an Nvidia RTX 3080 GPU. The run time of each synthesis increased with the time horizon and the number of features (neurons), but remained in the range of several hundred milliseconds to several seconds. For a fixed and multiple desired trajectories (i.e., different pairs of ), the synthesis is efficiently parallelized, and the run time remains almost constant when increasing the number of trajectories from one to 1000.
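The step-acceptance rule described above is, in our reading, the standard tolerance-scaled RMS test used by embedded Runge-Kutta solvers such as torchdiffeq's Dormand-Prince methods; the atol and rtol values below are illustrative, not the paper's settings.

```python
import numpy as np

def accept_step(x, err_est, atol=1e-8, rtol=1e-6):
    # Accept a proposed step when the RMS of the tolerance-scaled error
    # estimate is at most one:  || err / (atol + rtol*|x|) ||_RMS <= 1.
    scale = atol + rtol * np.abs(x)
    ratio = np.sqrt(np.mean((err_est / scale) ** 2))
    return bool(ratio <= 1.0)

x = np.array([1.0, -2.0, 0.5])
accept_step(x, 1e-7 * np.ones(3))  # True: error well within tolerance
accept_step(x, 1e-5 * np.ones(3))  # False: error exceeds tolerance
```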
To construct a -dimensional linear system, we randomly sampled the entries of from the normal distribution . The decay matrix was set to , where is the maximum of the real parts of the eigenvalues of and is the desired maximum real part of the eigenvalues of the dynamic matrix . We tried both and to construct stable and unstable systems respectively.
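The construction above can be sketched as follows (the N(0, 1/n) entry scale is our assumption; the shift logic matches the description: subtracting a scalar decay moves the whole spectrum left by a known amount, so the spectral abscissa of the dynamics matrix can be set exactly).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
W = rng.normal(size=(n, n)) / np.sqrt(n)      # random weights (entry scale is an assumption)
alpha_max = np.linalg.eigvals(W).real.max()   # largest real part among eigenvalues of W

def dynamics_matrix(alpha_star):
    # Decay matrix L = (alpha_max - alpha_star) I shifts the spectrum of W
    # so that A = W - L has spectral abscissa exactly alpha_star.
    L = (alpha_max - alpha_star) * np.eye(n)
    return W - L

A_stable = dynamics_matrix(-0.1)   # max real part of the spectrum is -0.1 (stable)
A_unstable = dynamics_matrix(0.1)  # max real part of the spectrum is +0.1 (unstable)
```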
To construct a “vanilla” RNN, we set and the activation function to tanh. We followed [35] and imposed a “random plus low rank” structure on the connectivity matrix . The matrix was sampled from the normal distribution , where is a scaling factor, and determines the rank of the low-rank component. Models of this type appear frequently in theoretical neuroscience studies [23]. We considered three subtypes of vanilla RNNs. The first subtype, referred to as “small norm tanh RNN”, was constructed with and (no low-rank component). We found that such RNNs satisfied . For the other two subtypes, was set to 0.9 [35], and the spectral norm condition was violated. was sampled from the standard normal distribution. was sampled from the normal distribution for the second subtype and set to for the third subtype. It can be shown that, as the number of neurons tends to infinity, systems of the second subtype are monostable while systems of the third subtype are bistable [35].
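A hedged sketch of the “random plus low rank” construction follows; the 1/sqrt(n) and 1/n normalizations of the two components are our reading of the convention in [35], and the sizes and gain are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 100, 1   # network size and rank of the structured component
g = 0.9         # gain of the random part, as for the mono/bistable subtypes

chi = rng.normal(size=(n, n))    # i.i.d. standard normal "random" part
m_vec = rng.normal(size=(n, r))  # left low-rank factor
n_vec = rng.normal(size=(n, r))  # right low-rank factor

# "random plus low rank" connectivity; the normalizations are our assumption
W = g * chi / np.sqrt(n) + (m_vec @ n_vec.T) / n
```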
To further evaluate the method on realistic models fit to experimental data, we also included a set of Mesoscale Individualized Neurodynamic (MINDy) models from [32]. MINDy models contained 100 interconnected units representing 100 brain areas, and the parameters were optimized to approximate the activation time series of these areas measured through functional magnetic resonance imaging (fMRI). Unlike most RNNs, the activation function of MINDy is heterogeneous: , where is fixed and is optimized over the data and differs across the 100 units. It is also worth noting that the origin is unstable in most of the MINDy models, indicating that the spectral norm condition was not met. However, we found that the models satisfy the eigenvalue condition, which enabled the synthesis.
In the first experiment, we analyzed the endpoint error of the controlled trajectory for when . For each , we randomly generated 5 stable and 5 unstable linear systems, 5 small-norm, 5 monostable, and 5 bistable tanh RNNs, and randomly selected 5 fitted MINDy models (from a pool of 106 models). All models were 100-dimensional. We then randomly generated 40 initial states . To investigate how the deviation of from the autonomous-flow endpoint influences the performance of the method, we set where . In half of the trials, was set to 0.1 (“small deviation”); in the other half, was set to 0.5 (“large deviation”). Control inputs were computed using the nonlinear forward (26) and backward (35) syntheses, and the linear synthesis (13). We recall that for the nonlinear system (4), when , the linearized system at is given by
Using (13), the following input drives the linearized system from to in exact time
| (44) |
where . We have also tried to linearize the systems at the origin (which is always a fixed point) and obtained qualitatively similar results.
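For a purely linear system, the constant-input synthesis reduces to linear algebra: the constant input solves e^{AT} x0 + A^{-1}(e^{AT} - I) B u = x_T. The following self-contained sketch (a hypothetical 5-dimensional system with a naive matrix exponential, not the paper's implementation of (13)) computes such an input and verifies the endpoint by integration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.normal(size=(n, n)) / np.sqrt(n) - 1.5 * np.eye(n)  # stable, generically invertible
B = np.eye(n)                                               # fully actuated (illustrative)
T = 1.0

def expm(M, squarings=8, order=20):
    # naive scaling-and-squaring matrix exponential (illustration only;
    # in practice one would use scipy.linalg.expm)
    S = M / (2.0 ** squarings)
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, order + 1):
        term = term @ S / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

eAT = expm(A * T)
G = np.linalg.solve(A, eAT - np.eye(n)) @ B  # G u = int_0^T e^{A(T-s)} B u ds

x0, xT = rng.normal(size=n), rng.normal(size=n)
u = np.linalg.solve(G, xT - eAT @ x0)        # exact constant input for the linear system

# verify the endpoint by RK4 integration of x' = A x + B u
x, h = x0.copy(), T / 1000
for _ in range(1000):
    k1 = A @ x + B @ u
    k2 = A @ (x + 0.5 * h * k1) + B @ u
    k3 = A @ (x + 0.5 * h * k2) + B @ u
    k4 = A @ (x + h * k3) + B @ u
    x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

rel_err = np.linalg.norm(x - xT) / np.linalg.norm(xT)
```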
In the second experiment, we followed the tradition in network neuroscience [16] and considered the input matrix defined by (37). For simplicity, we focused on 128-dimensional “small norm” tanh RNNs (with and ) and . We considered , ranging from an extremely underactuated system to a fully actuated system. We randomly generated 5 RNNs and selected 20 initial states for each model and each combination of and . Here, by Proposition III.21, and following Remark III.22, with entries of drawn from .
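To illustrate this experiment on a toy network (our own sketch with hypothetical sizes; the first-order rule that divides the actuated components of the target shift by the horizon is our reading of (39)), one can pick a target inside the predicted reachable affine subspace and check the endpoint error of the resulting constant input:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, T = 8, 3, 0.2
W = 0.2 * rng.normal(size=(n, n)) / np.sqrt(n)  # small-norm tanh RNN (illustrative)
B = np.eye(n)[:, :m]                            # canonical input matrix as in (37)

def f(x, u):
    # dynamics of the form (4) with unit decay (assumption)
    return -x + W @ np.tanh(x) + B @ u

def flow(x, u, steps=400):
    h = T / steps
    for _ in range(steps):
        k1 = f(x, u); k2 = f(x + 0.5 * h * k1, u)
        k3 = f(x + 0.5 * h * k2, u); k4 = f(x + h * k3, u)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = rng.normal(size=n)
free = flow(x0.copy(), np.zeros(m))   # free-flow endpoint phi_T(x0)
v = 0.2 * rng.normal(size=m)          # shift along the actuated directions only
xT = free + B @ v                     # target inside the predicted reachable set

u = (xT - free)[:m] / T               # first-order constant input (our reading of (39))
end = flow(x0.copy(), u)
rel_err = np.linalg.norm(end - xT) / np.linalg.norm(xT)
```

Because the target shift lies in the span of the actuated coordinates, the first-order input lands within O(T^2) of the target, consistent with the small-time characterization in Proposition III.21.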
B. Results
Results of the first experiment are summarized in Fig. 3. Monostable and bistable tanh RNNs were combined into a single “big norm tanh RNN” category due to the similarity of their results. For linear systems, the error remained small. For nonlinear systems, the error generally increased with the horizon. The forward nominal-state synthesis performed better than the backward nominal-state synthesis, and both generally performed better than linearization, particularly for large . The differences were more evident when the deviation is small (standard deviation 0.1, left panels), where the forward synthesis sometimes outperformed linearization by orders of magnitude. Note that the MINDy models were trained to approximate time series with unit variance, so a “small deviation” with standard deviation 0.1 already represents a physically sizable difference, indicating the practical significance of our method.
Fig. 3. Relative endpoint error under synthesized input. The -axis (log scale) indicates the time horizon . The -axis shows the common logarithm of the ratio between the Euclidean norm of the endpoint error and the Euclidean norm of . Results are organized by model type (rows) and by how was specified (columns). Monostable and bistable RNNs were merged into a single “big norm tanh RNN” category due to similarity. Results using the linearization (44), the nonlinear forward nominal-state synthesis (26), and the nonlinear backward nominal-state synthesis (35) are shown in red, blue, and green, respectively. Line plots and error bands represent the mean and 95% confidence interval of the relative error.
Results of the second experiment are summarized in Fig. 4. The nonlinear synthesis remained much better than linearization in underactuated systems, as expected. Surprisingly, however, the endpoint error did not change dramatically, and even decreased, as the number of actuators decreased, as long as was close to the reachable set. These findings suggest that Proposition III.21 provides a good estimate of the reachable set, and that the synthesis works in underactuated systems as well as (if not better than) in fully actuated systems as long as is reachable.
Fig. 4. Relative endpoint error under synthesized input for underactuated systems. We considered “small norm” tanh RNNs of dimension 128. The -axis (ordinal scale) indicates the input dimension . The -axis shows the common logarithm of the relative endpoint error. Results using the linearized and nonlinear forward synthesis methods are shown in separate panels. Color indicates the time horizon . Line plots and error bars represent the mean and standard deviation of the log relative error over 20 experiments.
VI. Concluding remarks and perspectives
This paper addressed the control synthesis problem for a class of nonlinear Hopfield-type recurrent neural networks motivated by neurostimulation applications. Using a solution representation that generalizes the variation of constants formula, we derived constant and piecewise constant inputs capable of steering the network to a desired target within a prescribed time interval. For linear activation functions, exact controllability with constant inputs is possible over arbitrary time horizons. For a small time horizon, we also characterized the reachable set when the input matrix directly actuates a subset of nodes, showing that its dimension equals the rank of and that a basis can be computed efficiently via a thin QR factorization. Moreover, the constant input that guarantees reachability is given in closed form by an algebraic condition, which makes the synthesis directly applicable to tDCS.
Future work will explore the synthesis of time-varying control inputs based on the proposed framework, with an emphasis on robustness to parameter uncertainty and external disturbances. As large-scale systems like (4) are highly sensitive to such perturbations, designing controls that adapt dynamically while minimizing energy costs remains a key challenge, one that links controllability with energetic performance.
Acknowledgments
This work is partially supported by grant R21MH132240 from the US National Institutes of Health to SC.
Biographies
Cyprien Tamekue was born in Nkongsamba, Cameroon, in November 1994. He received his B.Sc. degree in Fundamental Computing from the University of Douala, Cameroon, in July 2016. He obtained a M.Sc. degree in Fundamental Mathematics and Applications from Institut de Mathématiques et de Sciences Physiques, Benin, in 2019 and a M.Sc. degree in Theoretical and Applied Mathematics from Université Paris Dauphine-PSL in 2020. He completed his Ph.D. in Control Theory and Mathematical Neuroscience at Université Paris-Saclay in October 2023.
Dr. Tamekue is currently a Postdoctoral Research Associate at Washington University in St. Louis, McKelvey School of Engineering, in the Brain Dynamics and Control Research Group. He was previously a Postdoctoral Researcher in Control Theory at Institut Polytechnique des Sciences Avancées, INRIA, L2S, France. Dr. Tamekue’s research focuses on the mathematical modeling and control of nonlinear dynamical systems in large-scale brain networks, neural field models, and PDEs from sub-Riemannian geometry. He has contributed to modeling visual phenomena, such as the visual MacKay effect, by leveraging ideas from control theory to neural field models.
Ruiqi Chen received the B.Sc. degree in Intelligence Science and Technology from Peking University, China, 2021. He is currently working towards a Ph.D. degree in Neurosciences at Washington University in St. Louis, USA, under the supervision of Professor ShiNung Ching and Professor Todd Braver.
His research focuses on the modeling and analysis of brain dynamics supporting high-level cognitive functions, and the development of tools to analyze large-scale dynamical systems with the combination of theory-driven and data-driven approaches.
ShiNung Ching (Senior Member, IEEE), is Professor in the Department of Electrical and Systems Engineering at Washington University in St. Louis (St. Louis, USA). His research interests are at the intersection of engineering and computational neuroscience, particularly in using systems and control theoretic concepts to study the links between dynamics and function in neuronal networks. Dr. Ching completed his B.Eng (Hons.) and M.A.Sc degrees in Electrical and Computer Engineering from McGill University, Canada and the University of Toronto, Canada. He earned his Ph.D. in Electrical Engineering from the University of Michigan in 2009 in the area of systems and control theory. He subsequently completed post-doctoral training in computational neuroscience and anesthesiology at the Massachusetts Institute of Technology and the Harvard Medical School. He is an author on over 100 publications in academic journals and conferences and the textbook ‘Quasilinear Control’. Dr. Ching has received the CAREER Award from the US National Science Foundation, the Young Investigator Program award from the US AFOSR, a Career Award at the Scientific Interface from the Burroughs-Wellcome Fund, and has served as principal investigator on research projects funded by the National Institutes of Health, the National Science Foundation and the US Department of Defense.
Contributor Information
Cyprien Tamekue, Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, 63130, MO, USA.
Ruiqi Chen, Neurosciences Program in the Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, 63130, MO, USA.
ShiNung Ching, Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, 63130, MO, USA.
References
- [1].Agrachev AA, Barilari D, and Boscain U, A Comprehensive Introduction to Sub-Riemannian Geometry, vol. 181, Cambridge University Press, 2019.
- [2].Agrachev AA and Gamkrelidze RV, “The exponential representation of flows and the chronological calculus,” Mathematics of the USSR-Sbornik, vol. 35, no. 6, p. 727, 1978.
- [3].Albertini F and Sontag ED, “For neural networks, function determines form,” Neural Networks, vol. 6, no. 7, pp. 975–990, 1993.
- [4].Bergmann TO, Groppa S, Seeger M, Mölle M, Marshall L, and Siebner HR, “Acute changes in motor cortical excitability during slow oscillatory and constant anodal transcranial direct current stimulation,” Journal of Neurophysiology, vol. 102, no. 4, pp. 2303–2311, 2009.
- [5].Breakspear M, “Dynamic models of large-scale brain activity,” Nature Neuroscience, vol. 20, no. 3, pp. 340–352, 2017.
- [6].Bressan A and Piccoli B, Introduction to the Mathematical Theory of Control, vol. 1, American Institute of Mathematical Sciences, Springfield, 2007.
- [7].Centorrino V, Gokhale A, Davydov A, Russo G, and Bullo F, “Euclidean contractivity of neural networks with symmetric weights,” IEEE Control Systems Letters, vol. 7, pp. 1724–1729, 2023.
- [8].Chen R, Singh M, Braver TS, and Ching S, “Dynamical models reveal anatomically reliable attractor landscapes embedded in resting state brain networks,” Imaging Neuroscience, 2025, doi: 10.1162/imag_a_00442.
- [9].Chen RTQ, Rubanova Y, Bettencourt J, and Duvenaud D, “Neural ordinary differential equations,” in Advances in Neural Information Processing Systems, 2018.
- [10].Coron JM, Control and Nonlinearity, no. 136, American Mathematical Society, 2007.
- [11].Davydov A, Proskurnikov AV, and Bullo F, “Non-Euclidean contractivity of recurrent neural networks,” in Proc. 2022 American Control Conference (ACC), pp. 1527–1534, IEEE, 2022.
- [12].DeGiorgio CM and Krahl SE, “Neurostimulation for drug-resistant epilepsy,” Continuum: Lifelong Learning in Neurology, vol. 19, no. 3, pp. 743–755, 2013.
- [13].Forti M and Tesi A, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 42, no. 7, pp. 354–366, 1995.
- [14].Glorot X, Bordes A, and Bengio Y, “Deep sparse rectifier neural networks,” in Proc. 14th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pp. 315–323, 2011.
- [15].Grover S, Wen W, Viswanathan V, Gill CT, and Reinhart RMG, “Long-lasting, dissociable improvements in working memory and long-term memory in older adults with repetitive neuromodulation,” Nature Neuroscience, vol. 25, no. 9, pp. 1237–1246, 2022.
- [16].Gu S, Pasqualetti F, Cieslak M, Telesford QK, Yu AB, Kahn AE, Medaglia JD, Vettel JM, Miller MB, Grafton ST, et al., “Controllability of structural brain networks,” Nature Communications, vol. 6, no. 1, p. 8414, 2015.
- [17].Hirsch MW and Smale S, Differential Equations, Dynamical Systems, and Linear Algebra, Pure and Applied Mathematics, vol. 60, Academic Press, New York-London, 1974.
- [18].Hopfield JJ, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences, vol. 81, no. 10, pp. 3088–3092, 1984.
- [19].Liu Y-Y, Slotine J-J, and Barabási A-L, “Controllability of complex networks,” Nature, vol. 473, no. 7346, pp. 167–173, 2011.
- [20].Lohmiller W and Slotine JJE, “On contraction analysis for nonlinear systems,” Automatica, vol. 34, no. 6, pp. 683–696, 1998.
- [21].Kawski M, “Chronological calculus in systems and control theory,” in Mathematics of Complexity and Dynamical Systems, p. 88, 2011.
- [22].López-Alonso V, Cheeran B, Río-Rodríguez D, and Fernández-del-Olmo M, “Inter-individual variability in response to non-invasive brain stimulation paradigms,” Brain Stimulation, vol. 7, no. 3, pp. 372–380, 2014.
- [23].Mastrogiuseppe F and Ostojic S, “Linking connectivity, dynamics, and computations in low-rank recurrent neural networks,” Neuron, vol. 99, no. 3, pp. 609–623, 2018.
- [24].Medaglia JD, “Clarifying cognitive control and the controllable connectome,” Wiley Interdisciplinary Reviews: Cognitive Science, vol. 10, no. 1, p. e1471, 2019.
- [25].Muldoon SF, Pasqualetti F, Gu S, Cieslak M, Grafton ST, Vettel JM, and Bassett DS, “Stimulation-based control of dynamic brain networks,” PLoS Computational Biology, vol. 12, no. 9, p. e1005076, 2016.
- [26].Power JD, Schlaggar BL, Lessov-Schlaggar CN, and Petersen SE, “Evidence for hubs in human functional brain networks,” Neuron, vol. 79, no. 4, pp. 798–813, 2013.
- [27].Qin Y, Bassett DS, and Pasqualetti F, “Vibrational control of cluster synchronization: Connections with deep brain stimulation,” in Proc. 61st IEEE Conf. on Decision and Control (CDC), pp. 655–661, 2022.
- [28].Qin Y, Nobili AM, Bassett DS, and Pasqualetti F, “Vibrational stabilization of cluster synchronization in oscillator networks,” IEEE Open Journal of Control Systems, vol. 2, pp. 439–453, 2023.
- [29].Ramachandran P, Zoph B, and Le QV, “Searching for activation functions,” arXiv preprint, arXiv:1710.05941, 2017.
- [30].Sarychev AV, “Lie extensions of nonlinear control systems,” Journal of Mathematical Sciences, vol. 135, no. 4, pp. 3195–3223, 2006.
- [31].Singh MF, Wang A, Cole M, Ching S, and Braver TS, “Enhancing task fMRI preprocessing via individualized model-based filtering of intrinsic activity dynamics,” NeuroImage, vol. 247, p. 118836, 2022.
- [32].Singh MF, Braver TS, Cole MW, and Ching S, “Estimation and validation of individualized dynamic brain models with resting-state fMRI,” NeuroImage, vol. 221, p. 117046, 2020.
- [33].Singh MF, Wang A, Braver TS, and Ching S, “Scalable surrogate deconvolution for identification of partially-observable systems and brain modeling,” Journal of Neural Engineering, vol. 17, no. 4, p. 046025, 2020.
- [34].Schiff SJ, Neural Control Engineering: The Emerging Intersection Between Control Theory and Neuroscience, MIT Press, 2011.
- [35].Schuessler F, Dubreuil A, Mastrogiuseppe F, Ostojic S, and Barak O, “Dynamics of random recurrent networks with correlated low-rank structure,” Physical Review Research, vol. 2, no. 1, p. 013111, 2020.
- [36].Sontag ED and Qiao Y, “Remarks on controllability of recurrent neural networks,” in Proc. IEEE Conference on Decision and Control, vol. 1, IEEE, 1998.
- [37].Tamekue C and Ching S, “Control analysis and synthesis for general control-affine systems,” arXiv preprint, arXiv:2503.05606, 2025.
- [38].Tamekue C, Prandi D, and Chitour Y, “A mathematical model of the visual MacKay effect,” SIAM Journal on Applied Dynamical Systems, vol. 23, no. 3, pp. 2138–2178, 2024.
- [39].Tucsnak M and Weiss G, Observation and Control for Operator Semigroups, Springer, 2009.
- [40].Yi Z, Heng P-A, and Fu AW-C, “Estimate of exponential convergence rate and exponential stability for neural networks,” IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1487–1493, 1999.
- [41].Zhang H, Wang Z, and Liu D, “A comprehensive review of stability analysis of continuous-time recurrent neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 7, pp. 1229–1262, 2014, doi: 10.1109/TNNLS.2014.2317880.