2 Dynamical Mean Field description of excitatory-inhibitory networks
In the first part of this dissertation, we study how the transition to a chaotic, slowly fluctuating dynamical regime, which was first observed in [127], translates to more realistic network models. We design a non-linear firing rate network which includes novel mathematical constraints motivated by biology, and we quantitatively address its spontaneous dynamics.
If the synaptic coupling is globally weak, firing rate networks can be described with the help of standard approaches from dynamical systems theory, like linear stability analysis. However, in the strong coupling regime, a rigorous description of self-sustained fluctuations can be derived only at the statistical level. To this end, we adopt and extend the theoretical framework first proposed in [127], which provides an adequate description of irregular temporal states. In this approach, irregular trajectories are thought of as random processes sampled from a continuous probability distribution, whose first two moments can be computed self-consistently [127]. This technique, commonly referred to as Dynamical Mean Field (DMF) theory, has been inherited from the study of disordered systems of interacting spins [40], and provides a powerful and flexible tool for understanding dynamics in disordered rate networks.
In this chapter, we adapt this approach to the study of more realistic excitatory-inhibitory network models. We derive the mean field equations which will become the central core of the analysis carried out in detail in the rest of Part I. To begin with, we review the methodology of DMF, and we present the results that the theory implies for the original, highly symmetrical model. This first section is effectively a re-interpretation of the short paper by [127]. In the second section, we introduce and motivate the more biologically-inspired model that we aim to study, and we show that an analogous instability from a fixed point to chaos can be predicted by means of linear stability arguments. In order to provide an adequate self-consistent description of the irregular regime above the instability, we extend the DMF framework to include non-trivial effects due to non-vanishing first-order statistics.
Transition to chaos in recurrent random networks
The classical network model in [127] is defined through a non-linear continuous-time dynamics which makes it formally equivalent to a traditional firing rate model [151, 42]. Firing rate models are meant to provide a high-level description of circuit dynamics, as spiking activity is averaged over one or more degrees of freedom to derive a simpler description in terms of smooth state variables. From a classical perspective, firing rate units provide a good description of the average spiking activity in small neural populations. Equivalently, they can well approximate the firing of single neurons if the synaptic filtering time scale is large enough. Although in this chapter we do not focus on any specific interpretation, we adopt a loose terminology in which the words unit and neuron are used interchangeably.
The state of each unit in the network is described through an activation variable xi, which is commonly interpreted as the net input current entering the cell. The current-to-rate transformation that is performed in spiking neurons is modeled through a monotonically increasing function ϕ, such that the variable ϕ(xi) represents the instantaneous output firing rate of the unit.
As the network consists of many units (i = 1, …, N), the current entering neuron i includes many contributions, whose values are proportional to the firing rates of the pre-synaptic neurons. The strength of the synapse from neuron j to neuron i is modeled through the connectivity parameter Jij. The coupled dynamics obey the following temporal evolution law:
x˙i(t) = −xi(t) + ∑_{j=1}^{N} Jij ϕ(xj(t))    (2.1)
The first contribution on the r.h.s. is a leak term, which ensures that the activation variable xi decays back to baseline in the absence of any forcing current. The incoming contributions from other units in the network sum linearly. Note that we have rescaled time to set the time constant to unity.
In the paper by [127], the authors focus on a random all-to-all Gaussian connectivity (Fig. 2.1 a). We thus have Jij = gχij with χij ∼ N(µ = 0, σ² = 1/N). Such a scaling of the variance ensures that single units experience finite input fluctuations even in the limit of very large networks. The parameter g controls the global strength of synaptic coupling. As neurons can make both excitatory and inhibitory connections, this connectivity scheme does not respect Dale's law. The activation function is a symmetric sigmoid (ϕ(x) = tanh(x)), which takes positive and negative values. In the original network, furthermore, neither constant nor noisy external inputs are considered.
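To fix ideas, a minimal simulation sketch (our own illustrative code, not the authors'; all parameter values are arbitrary) integrates Eq. 2.1 with a forward Euler scheme, using the Gaussian coupling Jij = gχij:

import numpy as np

def simulate(N=2000, g=1.2, T=100.0, dt=0.1, seed=0):
    # Forward-Euler integration of Eq. 2.1 with J_ij = g*chi_ij,
    # chi_ij ~ N(0, 1/N), and phi(x) = tanh(x).
    rng = np.random.default_rng(seed)
    J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, size=N)          # random initial condition
    traj = np.empty((int(T / dt), N))
    for t in range(traj.shape[0]):
        x += dt * (-x + J @ np.tanh(x))
        traj[t] = x
    return traj

traj_stable = simulate(g=0.8)    # decays to the silent fixed point
traj_chaos = simulate(g=1.2)     # irregular self-sustained fluctuations

Running the sketch on both sides of g = 1 already displays the two regimes discussed below.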
As we will show in the next sections, all these elements together result in extremely simplified dynamics, where the transition to chaos can be measured only at the level of the second-order statistics of the network activity distribution.
Linear stability analysis
To begin with, we notice that the model admits a homogeneous stationary solution for which the network is completely silent: x0i = 0 ∀i. For a fixed, randomly chosen connectivity matrix, the network we consider is fully deterministic, and can therefore be examined using standard dynamical systems techniques [131]. We thus derive a first intuitive picture of the network dynamics by evaluating the linear stability of the homogeneous fixed point.
Figure 2.1: Transition to chaos in the classical random network. a. The random all-to-all Gaussian connectivity matrix Jij. b-c. Stationary regime: g = 0.8. In b: eigenspectrum of the stability matrix Sij for a simulated network of N = 2000 units. In good agreement with the circular law prediction, the eigenvalues lie in a compact circle of radius g (continuous black line). Dashed line: instability boundary. In c: sample of simulated activity for eight randomly chosen units. d-e. Chaotic regime: g = 1.2. Same figures as in b-c.
The linear response of the system when pushed away from the fixed point can be studied by tracking the time evolution of a solution of the form xi(t) = x0i + δxi(t). Close to the fixed point, the function ϕ(x) can be expanded up to linear order in δxi(t). This results in a system of N coupled linear differential equations, whose dynamical matrix is given by Sij = ϕ′(0)gχij − δij. Note that here ϕ′(0) = 1.
As a result, the perturbation δxi(t) will be re-absorbed if Re(λi) < 1 for all i, λi being the ith eigenvalue of the asymmetric random matrix gχij. We are thus left with the problem of evaluating the eigenspectrum of a Gaussian random matrix. If one focuses on very large networks, the circular (or Girko's) law can be applied [54, 136]: the eigenvalues of gχij lie in the complex plane within a circular compact set of radius g. Although its prediction is exact only in the thermodynamic limit (N → ∞), the circular law also approximates reasonably well the eigenspectrum of finite random matrices.
We derive that, at low coupling strength (g < 1), the silent fixed point is stable (Fig. 2.1 b-c). More than that, x0 = 0 is a global attractor, as Sij is a contraction [146]. Numerical simulations confirm that, in this parameter region, network activity settles into the homogeneous fixed point. For g > 1, the fixed point is unstable, and the network exhibits ongoing dynamics in which single neuron activity fluctuates irregularly both in time and across different units (Fig. 2.1 d-e). As the system is deterministic, these fluctuations are generated intrinsically by strong feedback along unstable modes, which possess a random structure inherited from the random connectivity matrix.
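The circular-law picture is straightforward to check numerically; the sketch below (ours, with illustrative parameters) computes the eigenvalues of gχij and inspects the instability boundary Re(λ) = 1:

import numpy as np

N, g = 2000, 1.2
rng = np.random.default_rng(1)
chi = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # chi_ij ~ N(0, 1/N)
eig = np.linalg.eigvals(g * chi)

print("max |lambda|  :", np.abs(eig).max())   # close to the radius g
print("max Re(lambda):", eig.real.max())      # > 1 signals instability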
The Dynamical Mean Field theory
The non-stationary regime cannot be easily analyzed with the tools of classical dynamical systems. To this end, the authors in [127] adopted a mean field approach to develop an effective statistical description of network activity. In this section, we propose a review of this technique; our analysis is based on [127] and subsequent works [99, 141, 89].
Rather than attempting to describe single trajectories, the main idea is to focus on their statistics, which can be determined by averaging over different initial conditions, time and the different instances of the connectivity matrix. Dynamical Mean Field (DMF) theory acts by replacing the fully deterministic interacting network with an equivalent stochastic system. More specifically, as the interaction term ∑_j Jij ϕ(xj) consists of a sum of a large number of contributions, it can be replaced by a Gaussian stochastic process ηi(t). Such a replacement provides an exact mathematical description under specific assumptions on the chaotic nature of the dynamics [16, 90] in the limit of large network size N. In this thesis, we will treat it as an approximation, and we will assess its accuracy by comparing the results with simulations performed at fixed N.
Replacing the interaction terms by Gaussian processes transforms the system into N identical Langevin-like equations:

x˙i(t) = −xi(t) + ηi(t).    (2.2)
As ηi(t) is a Gaussian noise, each trajectory xi(t) thus emerges as a Gaussian stochastic process. As we will see, the stochastic processes corresponding to different units become uncorrelated and statistically equivalent in the limit of a large network, so that the network is effectively described by a single process.
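As an illustrative numerical check (ours, not part of the original argument), one can simulate the network in the chaotic regime and verify that, across units, the recurrent inputs are approximately Gaussian with zero mean and variance g²[ϕ²] (cf. Eq. 2.5 below):

import numpy as np

N, g, dt = 2000, 1.5, 0.1
rng = np.random.default_rng(2)
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
x = rng.normal(0.0, 1.0, size=N)
for _ in range(2000):                  # settle into the chaotic state
    x += dt * (-x + J @ np.tanh(x))

eta = J @ np.tanh(x)                   # effective inputs at a fixed time
print("mean:", eta.mean())                                     # close to 0
print("var :", eta.var(), "~", g**2 * np.mean(np.tanh(x)**2))  # ~ g^2*[phi^2]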
Within DMF, the mean and correlations of this stochastic process are determined self-consistently, by requiring that averages over ηi(t) be identical to averages over time, instances of the connectivity matrix and initial conditions in the original system. Both averages will be indicated with [·]. For the mean, we get:

[ηi(t)] = g ∑_{j=1}^{N} [χij ϕ(xj(t))] = g ∑_{j=1}^{N} [χij][ϕ(xj(t))] = 0    (2.3)

as [χij] = 0. In the second equality, we assumed that the activity of different units decorrelates in large networks; in particular, that the activity of unit j is independent of its outgoing connections Jij. As we will show in a few lines, this assumption is self-consistent. In the mathematical literature, it has been referred to as the local chaos hypothesis [8, 52, 90].
The second-order statistics of the effective input gives instead:
[ηi(t)ηj(t + τ)] = g² ∑_{k=1}^{N} ∑_{l=1}^{N} [χik ϕ(xk(t)) χjl ϕ(xl(t + τ))]
                 = g² ∑_{k=1}^{N} ∑_{l=1}^{N} [χik χjl][ϕ(xk(t)) ϕ(xl(t + τ))].    (2.4)
As [χik χjl] = δij δkl/N, cross-correlations vanish, while the auto-correlation results in:

[ηi(t)ηi(t + τ)] = g² [ϕ(xi(t)) ϕ(xi(t + τ))].    (2.5)
We will refer to the firing rate auto-correlation function [ϕ(xi(t))ϕ(xi(t + τ))] as C(τ). Consistently with our starting hypothesis, the first- and second-order statistics of the Gaussian process are uncorrelated from one unit to the other.
Once the probability distribution of the effective input has been characterized, we derive a statistical description of the network activity in terms of the activation variable xi(t) by solving the Langevin equation in Eq. 2.2.
Trivially, the first-order statistics of xi(t) and ηi(t) asymptotically coincide, so that the mean input always vanishes. In order to derive the auto-correlation function ∆(τ) = [xi(t)xi(t + τ)], we differentiate twice with respect to τ and combine Eqs. 2.2 and 2.5 to get the following time evolution law:

d²∆(τ)/dτ² = ∆(τ) − g² C(τ).    (2.6)
We are thus left with the problem of writing down an explicit expression for the firing rate auto-correlation function C(τ). To this end, we write x(t) and x(t + τ) as Gaussian variables which obey [x(t)x(t + τ)] = ∆(τ) and [x(t)²] = [x(t + τ)²] = ∆0, where we defined the input variance ∆0 = ∆(τ = 0). One possible choice is:
x(t) = √(∆0 − |∆(τ)|) x1 + √|∆(τ)| z
x(t + τ) = √(∆0 − |∆(τ)|) x2 + sgn(∆(τ)) √|∆(τ)| z    (2.7)
where x1, x2 and z are Gaussian variables with zero mean and unit variance. For reasons which will become clear in a few steps, we focus on the case ∆(τ) > 0. Under this assumption, the firing rate auto-correlation function can be written as:
C(τ) = ∫ Dz [∫ Dx ϕ(√(∆0 − ∆(τ)) x + √∆(τ) z)]²    (2.8)

where we used the short-hand notation ∫ Dz = ∫_{−∞}^{+∞} (e^{−z²/2}/√(2π)) dz.
From a technical point of view, Eq. 2.6 is a second-order differential equation, whose time evolution depends on its initial condition ∆0. This equation admits different classes of solutions which are in general hard to isolate in explicit form. Luckily, we can reshape our problem into a simpler, more convenient formulation.
Isolating the solutions. We observe that Eq. 2.6 can be seen as analogous to the equation of motion of a classical particle in a one-dimensional potential:
d²∆/dτ² = −∂V(∆, ∆0)/∂∆    (2.9)
The potential V (∆, ∆0) is given by an integration over ∆:
V(∆, ∆0) = −∫_0^∆ d∆′ [∆′ − g² C(∆′, ∆0)].    (2.10)
One can check that this results in:

V(∆, ∆0) = −∆²/2 + g² ∫ Dz [∫ Dx Φ(√(∆0 − ∆) x + √∆ z)]²    (2.11)
Figure 2.2: Shape of the potential V(∆, ∆0) for different initial conditions ∆0. a. Weak coupling regime: g < gC. b. Strong coupling regime: g > gC.
where Φ(x) = ∫_{−∞}^{x} ϕ(x′) dx′. In the present framework, Φ(x) = ln(cosh(x)). In the absence of external noise, the initial condition to be satisfied is d∆/dτ = 0 at τ = 0, which implies null kinetic energy at τ = 0. A second condition is given by ∆0 ≥ |∆(τ)| ∀τ. The solution ∆(τ) depends on the initial value ∆0, and is governed by the energy conservation law:

V(∆(τ = 0), ∆0) = V(∆(τ = ∞), ∆0) + (1/2) [d∆/dτ (τ = ∞)]²    (2.12)
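For concreteness, the potential in Eq. 2.11 can be evaluated numerically. The sketch below (ours) approximates the Gaussian integrals with Gauss-Hermite quadrature, using Φ(x) = ln(cosh(x)); scanning ∆ between 0 and ∆0 reproduces the shapes of Fig. 2.2:

import numpy as np
from numpy.polynomial.hermite import hermgauss

t, w = hermgauss(80)
nodes, wts = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # nodes/weights for Dz
Phi = lambda u: np.log(np.cosh(u))

def V(Delta, Delta0, g):
    # inner Gaussian average over x for each outer node z (Eq. 2.11)
    inner = np.array([np.sum(wts * Phi(np.sqrt(Delta0 - Delta) * nodes
                                       + np.sqrt(Delta) * zi))
                      for zi in nodes])
    return -Delta**2 / 2.0 + g**2 * np.sum(wts * inner**2)

print(V(0.5, 1.0, g=1.2))   # evaluate on a grid of Delta to draw Fig. 2.2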
The stationary points and the qualitative features of the ∆(τ) trajectory then depend on the shape of the potential V. We notice that, for the symmetric model of [127], the derivative of the potential at ∆ = 0 always vanishes, suggesting a possible equilibrium point. The full shape of V is determined by the values of g and ∆0. In particular, a critical value gC exists such that:
• when g < gC , the potential has the shape of a concave parabola centered at ∆ = 0 (Fig. 2.2 a). The only bounded solution is ∆ = ∆0 = 0;
• when g > gC , the potential admits different qualitative configurations and an infinite number of different ∆(τ) trajectories. In general, the motion in the potential will be oscillatory (Fig. 2.2 b).
We conclude that, in the weak coupling regime, the only acceptable solution is centered at 0 and has vanishing variance. In other words, in agreement with our linear stability analysis, we must have xi(t) = 0 ∀t.
In the strong coupling regime, we observe that a particular solution exists, for which ∆(τ) decays to 0 as τ → ∞. In this final state, there is no kinetic energy left. For this particular class of solutions, Eq. (2.12) reads:
V (∆0, ∆0) = V (0, ∆0). (2.13)
More explicitly, we have:
∆0²/2 = g² {∫ Dz Φ²(√∆0 z) − (∫ Dz Φ(√∆0 z))²}.    (2.14)
In the following, we will often use the compact notation:
∆0²/2 = g² {[Φ²] − [Φ]²}.    (2.15)
A monotonically decaying auto-correlation function implies dynamics which lose memory of their previous states, and is compatible with a chaotic state. In the original study by Sompolinsky et al. [127], the average Lyapunov exponent is computed. It is shown that the monotonically decreasing solution is the only self-consistent one, as the corresponding Lyapunov exponent is positive.
Once ∆0 is computed through Eq. 2.15, its value can be injected into Eq. 2.6 to get the time course of the auto-correlation function. The decay time of ∆(τ), which depends on g, gives an estimate of the time scale of chaotic fluctuations. As the transition at gC is smooth, the DMF equations can be expanded around the critical coupling to show that this time scale diverges when approaching the transition from above. Very close to g = gC , the network can thus support arbitrarily slow spontaneous activity.
Numerical inspection of the mean field solutions suggests that, as predicted by the linear stability analysis, gC = 1 (Fig. 2.3 b). This can also be rigorously checked by imposing that, at the transition point, the first and second derivatives of the potential vanish at ∆ = 0.
To conclude, we found that, above g = 1, DMF predicts the emergence of chaotic trajectories which fluctuate symmetrically around 0. In the large network limit, different trajectories behave as totally uncoupled processes. Their average amplitude can be computed numerically as the solution of the non-linear self-consistent equation 2.15.
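As a minimal numerical sketch (ours; the quadrature order and the root-search bracket are illustrative choices), Eq. 2.15 can be solved for ∆0 by combining Gauss-Hermite quadrature with a standard root finder:

import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import brentq

t, w = hermgauss(80)
z, wz = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # nodes/weights for Dz
Phi = lambda u: np.log(np.cosh(u))

def residual(Delta0, g):
    # Delta0^2/2 - g^2 {[Phi^2] - [Phi]^2}, cf. Eqs. 2.14-2.15
    Phi_z = Phi(np.sqrt(Delta0) * z)
    return Delta0**2 / 2.0 - g**2 * (np.sum(wz * Phi_z**2)
                                     - np.sum(wz * Phi_z)**2)

g = 1.5
Delta0 = brentq(residual, 1e-3, 100.0, args=(g,))   # positive root for g > 1
print("chaotic input variance Delta0 =", Delta0)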
Fast dynamics: discrete time evolution
As a side note, we briefly consider a closely related class of models which has been extensively adopted in the DMF literature. In this formulation, the dynamics is given by a discrete time update:
xi(t + 1) = ∑_{j=1}^{N} Jij ϕ(xj(t))    (2.16)
As there are no leak terms, fluctuations in the input current occur on an extremely fast time scale (formally, within one time step). All the other elements of the model, including Jij and ϕ(x), are taken as in [127].
This discrete-time formulation has been used, for instance, in the first attempts to exploit random network dynamics for machine learning purposes [67]. It has also been adopted in several theoretically oriented studies, as analysing fast dynamics has two main advantages: mean field descriptions are easier to derive [89, 141, 30], and, in finite-size networks, the quasi-periodic route to chaos can be directly observed [46, 30, 4].
While finite size analysis falls outside the scope of this dissertation, we briefly review how the mean field equations adapt to discrete-time networks and how this description fits in the more general DMF framework.
Similar to the continuous-time case, the discrete-time dynamics admits a homogeneous fixed point at x0 = 0. Furthermore, as can easily be verified, the relevant stability matrix is the Jacobian gχij, whose eigenvalues must lie within the unit circle for stability, so that an instability again occurs at g = 1. In order to analyze the dynamics beyond the instability, we apply DMF arguments. Defining the effective input ηi(t) = ∑_{j=1}^{N} Jij ϕ(xj(t)), the fast dynamics translates into the following simple update rule:

xi(t + 1) = ηi(t)    (2.17)
where, at each time step, xi is simply replaced by the stochastic effective input. As a consequence, by squaring and averaging over all the sources of disorder, we find that the input current variance obeys the following time evolution:
∆0(t + 1) = [ηi²(t)] = g² [ϕ²(xi(t))].    (2.18)
In the last equality, we used the self-consistent expression for the second-order statistics of ηi, which can be computed as in the continuous-time case, yielding the same result. By expressing x(t) as a Gaussian variable, the evolution law for ∆0 can be made explicit:
∆0(t + 1) = g² ∫ Dz ϕ²(√∆0(t) z).    (2.19)
At equilibrium, the value of ∆0 satisfies the fixed-point condition:
∆0 = g² ∫ Dz ϕ²(√∆0 z).    (2.20)
As can easily be checked, this equation is satisfied by ∆0 = 0 when g < 1, while it admits a non-trivial positive solution above gC = 1, corresponding to a fast chaotic phase (Fig. 2.3 b).
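A short sketch (ours, with illustrative parameters) makes the iteration explicit: repeatedly applying the map of Eq. 2.19 from any positive initial condition converges to the fixed point of Eq. 2.20:

import numpy as np
from numpy.polynomial.hermite import hermgauss

t, w = hermgauss(80)
z, wz = np.sqrt(2.0) * t, w / np.sqrt(np.pi)   # nodes/weights for Dz

def dmf_step(Delta0, g):
    # Eq. 2.19 with phi = tanh, evaluated by Gauss-Hermite quadrature
    return g**2 * np.sum(wz * np.tanh(np.sqrt(Delta0) * z)**2)

g, Delta0 = 1.5, 1.0
for _ in range(200):
    Delta0 = dmf_step(Delta0, g)
print("discrete-time Delta0 =", Delta0)   # positive for g > 1, ~0 for g < 1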
The solution that we derive from solving Eq. 2.20 does not coincide exactly with the solution we obtained in the case of continuous-time networks, although they share many qualitative features (Fig. 2.3 b). In contrast to discrete-time units, neurons with continuous-time dynamics act as low-pass filters of their inputs. For this reason, continuous-time chaotic fluctuations are characterized by a slower time scale (Fig. 2.3 c) and a smaller variance ∆0 (Fig. 2.3 b).
We conclude this paragraph with a technical remark: our new equation for ∆0 (Eq. 2.20) coincides with the general expression for stationary solutions in continuous-time networks. The latter can be derived from the continuous-time DMF equation d²∆(τ)/dτ² = ∆(τ) − g² C(τ) by setting ∆(τ) = ∆0 ∀τ, and thus d²∆/dτ² = 0. From the analysis we just carried out, we conclude that the general stationary solution for continuous-time networks admits, together with the homogeneous fixed point, a non-homogeneous static branch for g > 1. As it is characterized by positive Lyapunov exponents, this solution is however never stable for continuous-time networks. This sets a formal equivalence between chaotic discrete-time and stationary continuous-time solutions which does not depend on the details of the network model. For this reason, it will return several times within the body of this dissertation.
Transition to chaos in excitatory-inhibitory neural networks
As widely discussed in Chapter 1, network models which spontaneously sustain slow and local firing rate fluctuations are of great interest from the perspective of understanding the large, super-Poisson variability observed in in-vivo recordings [120, 56].
Furthermore, the random network model in [127] has been adopted in many training frameworks as a proxy for the unspecialized substrate on which plasticity algorithms can be applied. The original computational architecture from Jaeger [67], known as the echo-state machine, adopts the variant of the model characterized by discrete-time dynamics [89, 30]. In later years, several training procedures have been designed for continuous-time models as well [132, 73, 28].
A natural question we would like to address is whether actual cortical networks exhibit dynamical regimes which are analogous to rate chaos.
The classical network model analyzed in [127] and subsequent studies [132, 73, 99, 6, 7, 130] relies on several simplifying features that prevent a direct comparison with more biologically constrained models such as networks of spiking neurons. In particular, a major simplification is a high degree of symmetry in both input currents and firing rates. Indeed, in the classical model the synaptic strengths are symmetrically distributed around zero, and excitatory and inhibitory neurons are not segregated into different populations, thus violating Dale's law. The current-to-rate activation function is furthermore symmetric around zero, so that the dynamics are symmetric under sign reversal. As a consequence, the mean activity in the network is always zero, and the transition to the fluctuating regime is characterized solely in terms of second-order statistics.
To help bridge the gap between the classical model and more realistic spiking networks [24, 95], recent works have investigated fluctuating activity in rate networks that include additional biological constraints [95, 69, 58], such as segregated excitatory-inhibitory populations, positive firing rates and spiking noise [69]. In general excitatory-inhibitory networks, the DMF equations can be formulated, but are difficult to solve, so that these works focused mostly on the case of purely inhibitory networks. These works therefore left unexplained some phenomena observed in simulations of excitatory-inhibitory spiking and rate networks [95], in particular the observation that the onset of fluctuating activity is accompanied by an elevation of the mean firing rate.
Here we investigate the effects of excitation on fluctuating activity in inhibition-dominated excitatory-inhibitory networks [142, 91, 3, 111, 60, 61]. To this end, we focus on a simplified network architecture in which excitatory and inhibitory neurons receive statistically identical inputs [24]. For that architecture, dynamical mean field equations can be fully solved.
The model
We consider a large, randomly connected network of excitatory and inhibitory rate units. Similarly to [127], the network dynamics are given by:
x˙i(t) = −xi(t) + ∑_{j=1}^{N} Jij ϕ(xj(t)) + Ii.    (2.21)
In some of the results which follow, we will include a fixed or noisy external current Ii. The function ϕ(x) is a monotonic, positive activation function that transforms input currents into output activity.
For the sake of simplicity, in most of the applications we restrict ourselves to the case of a threshold-linear activation function with an offset γ. For practical purposes, we take:
ϕ(x) = 0           for x < −γ
ϕ(x) = γ + x       for −γ ≤ x ≤ ϕmax − γ    (2.22)
ϕ(x) = ϕmax        for x > ϕmax − γ

where ϕmax plays the role of the saturation value. In the following, we set γ = 0.5.
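In code, our reading of the piecewise definition in Eq. 2.22 reduces to a clipped linear function (the value of ϕmax below is illustrative):

import numpy as np

def phi(x, gamma=0.5, phi_max=2.0):
    # 0 below x = -gamma, unit slope in between, saturation at phi_max
    return np.clip(x + gamma, 0.0, phi_max)

Note that ϕ(0) = γ > 0: at variance with the classical model, the homogeneous fixed point is in general not silent (cf. Eq. 2.23 below).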
We focus on a sparse, two-population synaptic matrix identical to that of [24, 95]. We first study the simplest version in which all neurons receive the same number C ≪ N of incoming connections (respectively, CE = f C excitatory and CI = (1 − f)C inhibitory inputs). More specifically, here we consider the limit of large N while C (and the synaptic strengths) are held fixed [9, 24]. We set f = 0.8.
All the excitatory synapses have strength J and all inhibitory synapses have strength −gJ, but the precise pattern of connections is assigned randomly (Fig. 2.4 a). For such connectivity, excitatory and inhibitory neurons are statistically equivalent, as they receive statistically identical inputs. This situation greatly simplifies the mathematical analysis, and allows us to obtain results in a transparent manner. In a second step, we show that the obtained results extend to more general types of connectivity.
Our analysis largely builds on the methodology that we reviewed in the previous section for the simpler network of [127].
Linear stability analysis
As the inputs to all units are statistically identical, the network admits a homogeneous fixed point in which the activity is constant in time and identical for all units, given by:
x0 = J(CE − gCI) ϕ(x0).    (2.23)
Figure 2.4: Linear stability analysis and transition to chaos in excitatory-inhibitory networks with a threshold-linear activation function. a. The sparse excitatory-inhibitory connectivity matrix Jij. b-c. Stationary regime: J < J0. In b: eigenspectrum of the stability matrix Sij for a simulated network of N = 2000 units. In good agreement with the circular law prediction, the eigenvalues lie in a compact circle of approximate radius J√(CE + g²CI) (black continuous line). Black star: eigenspectrum outlier at J(CE − gCI) < 0. Dashed line: instability boundary. In c: sample of simulated activity for eight randomly chosen units. d-e. Chaotic regime: J > J0. Same figures as in b-c.
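For parameter values such that |J(CE − gCI)| < 1, the map defined by Eq. 2.23 is a contraction in the linear region of ϕ, and the homogeneous fixed point can be found by direct iteration. The sketch below (ours; the values of J, C and g are illustrative, while f = 0.8 and γ = 0.5 as in the text) makes this concrete:

import numpy as np

def phi(x, gamma=0.5, phi_max=2.0):
    return np.clip(x + gamma, 0.0, phi_max)

C, f, J, g = 100, 0.8, 0.02, 5.0          # inhibition-dominated: CE < g*CI
CE, CI = int(f * C), int((1 - f) * C)     # 80 excitatory, 20 inhibitory inputs

x0 = 0.0
for _ in range(500):
    x0 = J * (CE - g * CI) * phi(x0)      # iterate Eq. 2.23
print("homogeneous fixed point x0 =", x0)   # ~ -0.143 for these values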