Mean-field models of neuronal populations

Alain Destexhe, CNRS, March 2021.


The mean-field technique is well known in statistical physics, and was widely used to design models of the macroscopic states of matter (such as solid, liquid, gas) from the microscopic properties of atoms or molecules. Transposed to neurons, the mean-field approach consists of deriving population-level models based on the properties of single neurons and their interactions. As we will see below, this approach is not only important for linking scales, but it also enables us to design large-scale models of neural tissue, from small brain regions up to the whole brain.

Another motivation for the design of mean-field models is that many brain signals measure the activity of neurons at larger scales than single neurons. This is the case for "mesoscopic" signals (hundreds of microns to millimeters), such as voltage-sensitive dye (VSD), local field potential (LFP) or calcium imaging signals. In such imaging measurements, the smallest visible "unit" of the system is typically the pixel of the camera, which represents the averaged activity of a population of neurons. There is thus little point in modeling such signals at the scale of single neurons, because single neurons are not visible at this scale; it is much more appropriate to use populations of neurons as the unit of such models.

Note that there is also a motivation related to the computational difficulty of simulating large scales. In VSD or wide-field calcium imaging, one measures an entire brain area, and sometimes a whole hemisphere. Modeling this at the cellular level would require simulating networks of hundreds of millions (if not billions) of neurons, which is only possible on large high-performance computing resources. Modeling at the level of pixels or populations, in contrast, requires only on the order of tens to hundreds of thousands of variables, which is usually feasible on a single desktop workstation.

The Master Equation approach

Our first approach to mean-field models, in collaboration with Sami El Boustani (PhD student in my laboratory), was to design a mean-field model applicable to conductance-based spiking networks [1] (whereas most mean-field models are derived for current-based interactions). Our study had two particularities: (1) we considered self-sustained irregular activity states (asynchronous irregular, or AI, states), where the activity of neurons is highly stochastic; (2) we used a second-order approach where not only the mean activity but also its variance (and more generally the covariance matrix of the system) are described. This was realized by deriving a Master Equation for the activity of the network. The approach successfully reproduced the complex state diagrams calculated numerically in networks of excitatory and inhibitory neurons (Fig. 1; see details in [1]).

Figure 1: Mean-field model of activated states in networks of neurons. A. Networks of randomly-connected excitatory and inhibitory IF neurons with conductance-based synaptic interactions display asynchronous irregular (AI) states. The raster (red = excitatory cells, blue = inhibitory) shows that spike discharges are irregular, as is the instantaneous activity (firing rate, bottom). B. Decay of the autocorrelation function (dashed line = exponential fit) and activity distribution (dashed line = Gaussian fit) during AI states. C. Results of a Master Equation model, which can be used to predict the state diagrams of such networks. The colorized region corresponds to AI states. The firing rate and its standard deviation (as well as cross-correlations) are well predicted by the formalism. Similar results have been obtained for locally-connected networks. Modified from El Boustani and Destexhe, Neural Computation, 2009 (see abstract).

The Master Equation approach [1], although successful, suffered from one major drawback. The mean-field model relies on knowing the transfer function of neurons, which maps the mean rates of excitatory and inhibitory inputs to the output firing rate of the neuron. Unfortunately, this function is only known analytically for very simple systems (the leaky integrate-and-fire neuron with current-based synapses). The approach in [1] relied on a heuristic modification of the transfer function to match conductance-based inputs. Extending it to more realistic neurons therefore seemed compromised.

Semi-analytic mean-field models

However, a recent extension of the approach was developed in collaboration with Yann Zerlaut (PhD student in my laboratory). It was discovered that the mathematical form of the transfer function known for simple systems can also capture the transfer function of more complex neuron models, and even of real neurons [2]. This important advance allowed us to extend the Master Equation approach and derive mean-field models of more complex neural models, as detailed below. The extension, which we called "semi-analytic", consists of calculating the transfer function of model neurons numerically, and then fitting the mathematical template of the transfer function to these numerical measurements. The resulting mean-field model remains analytical, in the sense that the transfer functions are still expressed mathematically, but the parameters of the transfer function are obtained numerically and are specific to each particular neuronal model.
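The semi-analytic procedure can be sketched as follows. This is a deliberately simplified illustration: the template used here is a plain error-function of the mean input rate, a stand-in for the actual template of [2], which depends on the statistics of the membrane potential fluctuations; the "measured" rates are synthetic, generated for the example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def tf_template(nu_in, gain, theta, sigma):
    """Mathematical template for the transfer function: output rate as
    an error-function of the mean input rate (illustrative form only)."""
    return gain * 0.5 * erfc((theta - nu_in) / (np.sqrt(2.0) * sigma))

# Rates "measured" numerically from single-neuron simulations; here we
# fake them with a known template plus noise, for illustration.
rng = np.random.default_rng(0)
nu_in = np.linspace(0.0, 20.0, 40)            # mean input rate (Hz)
nu_out = tf_template(nu_in, 30.0, 8.0, 3.0) + rng.normal(0.0, 0.3, nu_in.size)

# Fit the template numerically: these fitted parameters are what makes
# the mean-field "semi-analytic" and specific to each neuron model.
popt, _ = curve_fit(tf_template, nu_in, nu_out, p0=[20.0, 5.0, 2.0])
print("fitted gain, threshold, width:", popt)
```

The fitted template can then be used inside the Master Equation formalism exactly as an analytical transfer function would be.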

The first application of the semi-analytic mean-field approach was to derive a mean-field model of networks of Adaptive Exponential (AdEx) neurons [3]. The AdEx model is more realistic than the leaky integrate-and-fire model, because it has an exponential approach to threshold and an adaptation variable. It can capture a wide variety of intrinsic neuronal properties, such as adapting ("regular spiking") neurons, bursting neurons, delayed firing, intermittent firing, etc. In the case of cerebral cortex, it allows one to design networks with two cell types: "regular spiking" (RS) neurons, displaying spike-frequency adaptation, as typically seen in pyramidal neurons, and "fast spiking" (FS) neurons, with little or no adaptation, as typically seen in inhibitory interneurons. It is important to realize that networks of RS-FS neurons constitute the simplest spiking model accounting for the fact that inhibitory neurons are more excitable than excitatory neurons, which has potentially important consequences at the large-scale level, as we will see below.
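As an illustration of the underlying single-neuron model, here is a minimal forward-Euler simulation of one AdEx neuron. The parameter values are typical "regular spiking" values chosen for illustration, not those of any specific study.

```python
import numpy as np

def simulate_adex(I_ext=0.5e-9, t_max=0.5, dt=1e-5):
    """Forward-Euler simulation of a single AdEx neuron (SI units).
    Returns the list of spike times."""
    C, gL, EL = 200e-12, 10e-9, -70e-3    # capacitance, leak conductance, leak reversal
    VT, DT = -50e-3, 2e-3                 # exponential threshold and slope factor
    a, b, tau_w = 2e-9, 60e-12, 300e-3    # subthreshold / spike-triggered adaptation
    Vr, Vspike = -58e-3, 0.0              # reset potential and numerical spike cutoff

    V, w = EL, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        # membrane equation with exponential approach to threshold
        dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I_ext) / C
        # adaptation variable (drives spike-frequency adaptation)
        dw = (a*(V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vspike:                   # spike: reset V, increment adaptation
            V = Vr
            w += b
            spikes.append(step * dt)
    return spikes

spikes = simulate_adex()
print(len(spikes), "spikes in 0.5 s")
```

With these parameters, the spike-triggered increment b makes the firing rate decrease after stimulus onset, the hallmark of RS cells; setting a and b near zero gives FS-like behavior.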

It was thus shown that the Master Equation mean-field approach models networks of AdEx neurons very well [3]. The mean-field model accounted for important features, such as the fact that AdEx networks can display AI states of activity, with low firing rates for RS cells and larger rates for inhibitory FS cells, exactly as found experimentally. The mean-field model also captures the time course of the response of the network to external inputs, except for the "tail" of the response, which depends on adaptation. We will see below that including adaptation allows the model to fully capture this time course.

Mean-field models of mesoscopic-scale phenomena

The AdEx mean-field model was then tested for its ability to model large-scale phenomena. We used measurements of propagating waves in awake monkey visual cortex [4] as a template. These measurements showed that visual inputs can trigger a traveling wave in V1 that spreads through millimeters of tissue, which corresponds to the mesoscopic scale. We constructed a large-scale network of mean-field models and could successfully model the occurrence of propagating waves following visual input [3]. Not only could the model account for mesoscopic-level phenomena such as propagating waves, but we could also use these waves to constrain the model. Because propagating waves can be objectively measured (their extent, speed, etc.), they were very useful for constraining the connectivity between the mean-field units in the large-scale model. We found that the connectivity that optimally reproduces the wave properties combines long-range excitatory connections with strong and more local inhibitory connections [3] (Fig. 2).

Figure 2: Propagating waves in a large-scale network of AdEx mean-field units. Left: scheme of the AdEx network of mean-field units (bottom) and estimate of connectivity (top). Right: space-time plots of propagating waves measured experimentally in awake monkey (top) and reproduced using the mean-field model (bottom). Modified from Zerlaut et al. J. Comp. Neurosci., 2018 (see abstract).

A further application of the AdEx mean-field model was to investigate the mechanism and role of cortical propagating waves. Extending the Zerlaut et al. [3] approach shown in Fig. 2, we used mean-field models of V1 propagating waves to investigate their mechanisms and roles [5]. Using a clever series of experiments where two waves were triggered by two stimuli, we discovered that during the collision between the waves, the combined activity was largely sub-linear: there was a significant suppression associated with these waves, in contrast with the amplification one might expect intuitively. The model could precisely reproduce this suppression, and we found that it depends on two ingredients: first, the synaptic interactions need to be conductance-based to account for this nonlinearity; second, the gain of inhibitory (FS) neurons needs to be larger than that of excitatory (RS) neurons. As mentioned above, the use of the AdEx model allowed us to correctly reproduce this difference of gain, and the mean-field model could capture it. Finally, using an external decoder, it was found that this suppression allows the visual system to disambiguate stimuli, and thus augments visual acuity. The mean-field model of V1 reproduced all these features (see details in [5]).

Biologically realistic mean-field models

The next step was to obtain a mean-field model that accurately predicts the behavior of the spiking model. As mentioned above, this requires properly taking adaptation into account. In collaboration with Matteo di Volo (postdoc in my laboratory), we designed a mean-field model including adaptation [6]. This model was re-derived from the Master Equation, adding a population-level variable for spike-frequency adaptation. The agreement between this adaptive mean-field model and the network simulations was remarkable, as it captures the fine details of the time course of the population response to external inputs. Moreover, the adaptive mean-field model was able to account for a fundamental phenomenon: the response of a given network depends on its state of ongoing (spontaneous) activity, which is known as state-dependent responsiveness. The adaptive mean-field model accounted for state-dependent responses, and correctly predicted that some network states produce small responses while others are more responsive. We believe this is a crucial property for correctly modeling interactions between brain areas.
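The structure of such an adaptive mean-field model can be sketched as follows. This is a simplified illustration: it uses a generic sigmoidal transfer function and assumed coupling weights in place of the fitted semi-analytic transfer functions of [6], but it keeps the same ingredients, namely two population rates (excitatory and inhibitory) and a slow population-level adaptation variable driven by the excitatory rate.

```python
import numpy as np

def sigmoid_tf(drive, gain=40.0, theta=1.0, slope=0.5):
    """Generic sigmoidal transfer function (illustrative stand-in for
    the fitted semi-analytic transfer functions)."""
    return gain / (1.0 + np.exp(-(drive - theta) / slope))

def run_mean_field(nu_ext=2.0, b=0.02, t_max=2.0, dt=1e-3):
    """Euler integration of a two-population (E/I) rate model with a
    population-level adaptation variable w. Weights and time constants
    are assumed values, for illustration only."""
    tau_r, tau_w = 5e-3, 0.5              # rate and adaptation time constants (s)
    Jee, Jei = 1.0, -1.5                  # weights onto the excitatory population
    Jie, Jii = 1.2, -1.0                  # weights onto the inhibitory population
    nu_e, nu_i, w = 1.0, 1.0, 0.0
    for _ in range(int(t_max / dt)):
        drive_e = Jee*nu_e + Jei*nu_i + nu_ext - w   # adaptation reduces E drive
        drive_i = Jie*nu_e + Jii*nu_i + nu_ext
        nu_e += dt/tau_r * (sigmoid_tf(drive_e) - nu_e)
        nu_i += dt/tau_r * (sigmoid_tf(drive_i) - nu_i)
        w    += dt/tau_w * (b*nu_e - w)              # spike-frequency adaptation
    return nu_e, nu_i, w

nu_e, nu_i, w = run_mean_field()
print(f"final state: nu_e={nu_e:.2f} Hz, nu_i={nu_i:.2f} Hz, w={w:.3f}")
```

Increasing the adaptation strength b in such a model is the kind of manipulation that switches the dynamics between sustained activity and Up/Down-like alternations, as discussed below.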

Another important feature of the adaptive mean-field model is that it can also account for the genesis of different brain states, in particular slow-wave activity with Up/Down state dynamics. In the spiking network, a transition from self-sustained AI activity to Up/Down state activity can be obtained by modulating the spike-frequency adaptation parameter of the AdEx model [7]. Note that these oscillations do not correspond to a limit cycle, but rather to noise-driven switching between two attractors: the AI state described previously, and a silent state (with all cells at rest). The adaptive mean-field model reproduced these dynamics very well, including the transition from sustained AI states to Up/Down state dynamics [6]. It is therefore capable of displaying the two states fundamental to the asynchronous and slow-wave dynamics found in the waking and sleeping brain (see below for a large-scale simulation of this). Note that, in Up/Down states, the silent and active phases may require a state-dependent mean-field approach to be finely modeled ([8]; in collaboration with Cristiano Capone, postdoc in my laboratory).

Because the adaptive mean-field model finely captures the time course of responses to external input, accounts for state-dependent responses, and can model both asynchronous and Up/Down state dynamics, we believe it is the most accurate mean-field model derived so far, and it can be qualified as "biologically realistic". It constitutes the basis of the large-scale models shown below.

Mean-field models of macroscopic phenomena

The next step, towards macroscopic scales, was to use the mean-field models to describe phenomena at the scale of several brain areas, up to the entire brain. In collaboration with Jennifer Goldman (postdoc in my laboratory) and others, we used The Virtual Brain (TVB) as a simulation platform to incorporate the adaptive mean-field model into a large network of mean-field units, where the connectivity is given by the human connectome (Fig. 3, top). This model, called the "TVB-AdEx" model [9], was shown to generate, at large scales, two fundamental dynamical states: asynchronous-irregular (AI) and Up/Down states, which correspond to the asynchronous dynamics of wakefulness and the synchronized dynamics of slow-wave sleep, respectively. The synchrony of slow waves appears as an emergent property at large scales when the units are set into the Up/Down mode (high level of adaptation); this synchrony is lost when the units are set in the asynchronous (AI) mode. The model also reproduced the very different patterns of functional connectivity found experimentally in slow-wave compared to asynchronous states. Thus, the TVB-AdEx model can simulate many features of the awake and sleeping brain [9].
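The architecture of such a whole-brain model can be sketched as a network of rate units coupled through a weight matrix that plays the role of the connectome. The sketch below uses a generic sigmoidal unit and random placeholder connectivity, rather than the actual AdEx mean-field equations and human connectome data of [9].

```python
import numpy as np

# Each node stands in for a full mean-field unit; W stands in for the
# connectome-derived long-range coupling. Both are placeholders here.
rng = np.random.default_rng(3)
n_nodes, dt, tau = 20, 1e-3, 20e-3
W = rng.random((n_nodes, n_nodes)) * 0.05     # "connectome" weights
np.fill_diagonal(W, 0.0)                      # no self-coupling

def node_tf(drive):
    """Generic node transfer function (illustrative)."""
    return 30.0 / (1.0 + np.exp(-(drive - 1.0)))

nu = rng.random(n_nodes)                      # node firing rates (Hz)
for _ in range(2000):
    drive = W @ nu + 0.5                      # long-range input + baseline drive
    nu += dt/tau * (node_tf(drive) - nu)      # relaxation toward transfer function

print("mean node rate:", round(float(nu.mean()), 2))
```

In the actual TVB-AdEx model, each node additionally carries inhibitory and adaptation variables, and the coupling includes conduction delays derived from tract lengths.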

Besides spontaneous activity, the TVB-AdEx model was also tested against external stimulation. We simulated experiments with transcranial magnetic stimulation (TMS) during asynchronous and slow-wave states, and showed that, as in experimental data, the effect of the stimulation greatly depends on the activity state of the brain. During slow waves, the response is strong but remains local, in contrast with asynchronous states, where the response is weaker but propagates across brain areas (Fig. 3, bottom-left). To compare more quantitatively with wake and slow-wave sleep states, we computed the perturbational complexity index (PCI) and showed that it matches the values estimated from TMS experiments. In the synchronized, sleeping brain, PCI was low, reflecting the local character of the response; in the asynchronous, awake brain, PCI was high, reflecting the fact that the brain is much more responsive in that state (Fig. 3, bottom-right). Thus, the TVB-AdEx model replicates some of the properties of synchrony and responsiveness seen in the human brain, and is a promising tool to study spontaneous and evoked large-scale dynamics in the normal, anesthetized or pathological brain.
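The core ingredient of the PCI is a Lempel-Ziv compressibility measure applied to the binarized spatiotemporal response to the perturbation. A minimal sketch of this ingredient is given below; the published PCI additionally involves source reconstruction, statistical thresholding and normalization, all omitted here.

```python
import random

def lz_complexity(s):
    """Lempel-Ziv (LZ76) complexity of a binary string: the number of
    new substrings encountered when scanning from left to right."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current word while it already occurred earlier
        while i + l <= n and s[i:i+l] in s[:i + l - 1]:
            l += 1
        c += 1          # a new word was found
        i += l
    return c

# Toy "responses": a stereotyped (regular) pattern has low complexity,
# a spatiotemporally rich (irregular) pattern has high complexity.
regular = "01" * 50
random.seed(1)
irregular = "".join(random.choice("01") for _ in range(100))
print(lz_complexity(regular), "<", lz_complexity(irregular))
```

In a PCI-like analysis, the binarized evoked activity of all regions would be concatenated into one such string, so that responses that are both widespread and differentiated yield high complexity.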

Figure 3: Whole-brain simulations using the TVB-AdEx model. Top: scheme of the integration of AdEx adaptive mean-field models (from [6]) in The Virtual Brain (TVB) simulator. The connectivity between individual nodes (each represented by a mean-field) is taken from the human connectome. Bottom-left: response to a stimulus in the occipital region in synchronized and asynchronous states. Bottom-right: perturbational complexity index (PCI) calculated from these responses, for three different stimulus amplitudes. The PCI is high for asynchronous states, and lower in synchronized states, as found experimentally. Modified from Goldman et al. bioRxiv 424574, 2020 (see paper)

Limitations of the mean-field approach

The mean-field approach described here can be very accurate in some cases, but it suffers from a number of drawbacks and limitations. First, as mentioned above, the approach relies on knowledge of the transfer function of neurons. This works well for a number of models, such as the integrate-and-fire model, the AdEx model and even the Hodgkin-Huxley model. However, the transfer function is not easy to define in other cases, such as bursting neurons. In the thalamus, for example, relay neurons respond very differently depending on whether they are depolarized or hyperpolarized, so in such cases it is difficult to define a proper transfer function.

A second limitation is related to the assumption that the system decorrelates with a characteristic time (called T in the formalism of [1]). The choice of T is not very precise: it is formally defined as a period of time such that the activity of the system depends only on the preceding period of duration T. It was shown that T corresponds to the characteristic decay time of the autocorrelation of the system, which for AI states is between 5 and 10 ms [1]. In some cases, changing the precise value of T may change the behavior of the mean-field model. To properly study this potential problem, one should perform a precise mapping of the parameter space of the AdEx model (as was done in [1] for the integrate-and-fire model).
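In practice, T can be estimated as the 1/e decay time of the autocorrelation function of the population activity. The sketch below performs this estimate on a synthetic signal with a known 5 ms correlation time (an Ornstein-Uhlenbeck process standing in for AI-state activity; the signal and its parameters are illustrative).

```python
import numpy as np

# Synthetic "population activity": Ornstein-Uhlenbeck process with a
# 5 ms correlation time, simulated with the Euler-Maruyama scheme.
rng = np.random.default_rng(42)
dt, tau, n = 1e-4, 5e-3, 100_000            # 0.1 ms steps, 10 s of signal
noise = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - (dt/tau) * x[i-1] + np.sqrt(2*dt/tau) * noise[i]

# One-sided autocorrelation up to 30 ms, normalized to 1 at lag 0.
x -= x.mean()
max_lag = 300
ac = np.array([np.dot(x[:n-k], x[k:]) for k in range(max_lag)])
ac /= ac[0]

# Estimate T as the first lag where the autocorrelation drops below 1/e.
T_est = np.argmax(ac < np.exp(-1)) * dt
print(f"estimated T = {T_est * 1e3:.2f} ms")
```

On real network data, the same recipe applied to the instantaneous firing rate gives the 5-10 ms values quoted above for AI states.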

A third limitation, somewhat related to the choice of T, is that, in theory, the mean-field model is not valid for dynamics faster than the period T. This is in part because the formalism makes an adiabatic approximation (the system is assumed to reach a quasi steady-state within T; see [1] for details). Consequently, oscillations at frequencies higher than 1/T would formally not be consistent with the formalism. The behavior of the mean-field model in oscillatory regimes remains to be explored and understood in detail.

Further developments of mean-field models

A number of developments are presently under way. First, we would like to augment the biological realism of the mean-field models. A first step is to use transfer functions calculated from neurons with dendrites [10] (in collaboration with Yann Zerlaut). In this work, we considered the transfer function of neurons departing from the "point-neuron" model, and included dendrites. The fact that synaptic inputs occur on dendrites may have strong consequences on the transfer function, and thus also influences the emergent behavior at larger scales. It will also allow us to apply the formalism to neuron types for which dendrites are important, in regions such as the cerebellum, hippocampus, basal ganglia, etc. (work in progress).

A second development is to conceive a new class of mean-field models based on the properties of real neurons. Thanks to the semi-analytic approach, it is possible to measure the transfer function of real neurons. This is nontrivial, however, because the inputs must be conductance-based, so dynamic-clamp experiments must be used. A first study of this kind was done recently, where we measured the transfer function of Layer V cortical neurons in mouse visual cortex using perforated patch recordings [2]. This study revealed that it is possible to obtain a compact description of the transfer function of individual pyramidal neurons, which opens the perspective of building truly "realistic" mean-field models. The study also evidenced a strong cell-to-cell diversity of firing responses, suggesting that appropriate mean-field formalisms must be designed to integrate this diversity (see below).

In collaboration with Claude Bedard (postdoc, and later permanent researcher, in my laboratory), we used the mean-field formalism to model electromagnetic phenomena in the brain [11]. Here again, the justification is provided by the macroscopic nature of brain signals and measurements. For example, impedance measurements done macroscopically, at the level of millimeters or centimeters of brain tissue, require a macroscopic formulation to be correctly modeled. Such a formulation was obtained by deriving a mean-field model directly from Maxwell's equations [12]. This approach was initially motivated by accounting for impedance measurements, and was later extended to current-source density analysis [13], which is also inherently macroscopic. In these formulations, one can directly integrate the macroscopic measurements of electric conductivity and permittivity, and obtain a coherent description of electromagnetic phenomena at large scales.

Another application of the semi-analytic mean-field approach was to calculate mean-field models of networks of complex neurons described by the Hodgkin-Huxley (HH) formalism [14] (in collaboration with Mallory Carlu, Damien Depannemaecker and Matteo di Volo, postdocs in my laboratory). HH models are biophysically more accurate and more complex than integrate-and-fire models. Nevertheless, using the semi-analytic approach, the transfer function could be calculated and integrated into a mean-field model. The resulting mean-field model was able to reproduce the spontaneous activity and the responses to external inputs of HH networks. The same approach was also followed for Morris-Lecar neurons (see [14] for details).

Finally, in collaboration with Matteo di Volo, we extended the mean-field approach to model heterogeneous systems [15]. As shown by large neuron databases (such as the Allen Brain Atlas), as well as our own experimental investigations [2], neurons are extraordinarily heterogeneous. Even within the same cell class, individual neurons have very different excitabilities, as shown by the very different transfer functions estimated from them [2]. It is therefore necessary to depart from the usual paradigm of networks of identical neurons and consider the more realistic case of heterogeneous networks. Interestingly, networks of heterogeneous neurons display a different responsiveness than homogeneous networks, and are optimally responsive for intermediate levels of heterogeneity that correspond to experimental estimates [15]. This responsiveness profile could be modeled very well by a Heterogeneous Mean-Field (HMF) framework, where the distributions of cell properties are explicitly taken into account [15]. The HMF also showed that there is a relation between responsiveness and the stability of the asynchronous state, an interesting direction to develop further in the future.


[1] El Boustani, S. and Destexhe, A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Computation 21: 46-100, 2009 (see abstract)

[2] Zerlaut, Y., Telenczuk, B., Deleuze, C., Bal, T., Ouanounou, G. and Destexhe, A. Heterogeneous firing rate response of mice layer V pyramidal neurons in the fluctuation-driven regime. J. Physiol. 594: 3791-3808, 2016 (see abstract)

[3] Zerlaut, Y., Chemla, S., Chavane, F. and Destexhe, A. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. J. Computational Neurosci. 44: 45-61, 2018 (see abstract)

[4] Muller, L.E., Reynaud, A., Chavane, F. and Destexhe, A. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nature Communications 5: 3675, 2014 (see abstract)

[5] Chemla, S., Reynaud, A., di Volo, M., Zerlaut, Y., Perrinet, K., Destexhe, A. and Chavane, F. Suppressive traveling waves shape representations of illusory motion in primary visual cortex of awake primate. J. Neurosci. 39: 4282-4298, 2019 (see abstract)

[6] di Volo, M., Romagnoni, A., Capone, C. and Destexhe, A. Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Computation 31: 653-680, 2019 (see abstract)

[7] Destexhe, A. Self-sustained asynchronous irregular states and Up/Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. J. Computational Neurosci. 27: 493-506, 2009 (see abstract)

[8] Capone, C., di Volo, M., Romagnoni, A., Mattia, M. and Destexhe, A. A state-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Physical Review E 100: 062413, 2019 (see abstract)

[9] Goldman, J.S., Kusch, L., Yalcinkaya, B.H., Depannemaecker, D., Nghiem, T-A., Jirsa, V. and Destexhe, A. Brain-scale emergence of slow-wave synchrony and highly responsive asynchronous states based on biologically realistic population models simulated in The Virtual Brain. bioRxiv 424574, 2020 (see paper)

[10] Zerlaut, Y. and Destexhe, A. Heterogeneous firing responses predict diverse couplings to presynaptic activity in mice Layer V pyramidal neurons. PLOS Comp. Biol. 13: e1005452, 2017 (see abstract)

[11] Bedard, C. and Destexhe, A. Mean-field formulation of Maxwell equations to model electrically inhomogeneous and isotropic media. J. Electromagnetic Analysis and Applications 6: 296-302, 2014 (see abstract)

[12] Bedard, C. and Destexhe, A. Macroscopic models of local field potentials and the apparent 1/f noise in brain activity. Biophysical Journal 96: 2589-2603, 2009 (see abstract)

[13] Bedard, C. and Destexhe, A. A generalized theory for current-source density analysis in brain tissue. Physical Review E 84: 041909, 2011 (see abstract).

[14] Carlu, M., Chehab, O., Dalla Porta, L., Depannemaecker, D., Herice, C., Jedynak, M., Koksal Ersoz, E., Muratore, P., Souihel, S., Capone, C., Zerlaut, Y., Destexhe, A., di Volo, M. A mean-field approach to the dynamics of networks of complex neurons, from nonlinear Integrate-and-Fire to Hodgkin-Huxley models. J. Neurophysiol. 123: 1042-1051, 2020 (see abstract).

[15] di Volo, M. and Destexhe, A. Optimal responsiveness and emergent dynamics in networks of heterogeneous neurons. arXiv 05596, 2020 (see paper).

Department of Integrative and Computational Neuroscience (ICN),
Paris-Saclay Institute of Neuroscience (NeuroPSI),
CNRS, Bat 33,
1 Avenue de la Terrasse,
91198 Gif-sur-Yvette, France.
