Statistics of spike trains in conductance-based neural networks: Rigorous results

Bruno Cessac (bruno.cessac@inria.fr)

NeuroMathComp, INRIA, 2004 Route des Lucioles, 06902, Sophia-Antipolis, France

The Journal of Mathematical Neuroscience 2011, 1:8. http://www.mathematical-neuroscience.com/content/1/1/8 doi:10.1186/2190-8567-1-8
© 2011 Cessac; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a conductance-based neural network inspired by the generalized Integrate and Fire model introduced by Rudolph and Destexhe in 1996. We show the existence and uniqueness of a Gibbs distribution characterizing spike train statistics. The corresponding Gibbs potential is explicitly computed. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics.

1 Introduction

Neural networks have an overwhelming complexity. While an isolated neuron can exhibit a wide variety of responses to stimuli 1 , from regular spiking to chaos 2 3, neurons coupled in a network via synapses (electrical or chemical) may show an even wider variety of collective dynamics 4 resulting from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli 5 . Focusing on the action potentials, this complexity is manifested by drastic changes in the spike activity, for instance when switching from spontaneous to evoked activity (see, for example, A. Riehle's team's experiments on the monkey motor cortex 6 7 8 9). However, beyond this complexity there may exist hidden laws ruling a (hypothetical) “neural code” 10 .

One way of unraveling these hidden laws is to seek some regularities or reproducibility in the statistics of spikes. While early investigations of spiking activity focused on firing rates, treating neurons as independent sources, researchers have more recently concentrated on collective statistical indicators such as pairwise correlations. Thorough experiments in the retina 11 12 as well as in the parietal cat cortex 13 suggested that such correlations are crucial for understanding spiking activity. Those conclusions were obtained using the maximal entropy principle 14 . Assume that the average value of observable quantities (e.g., firing rate or spike correlations) has been measured. These average values constitute constraints for the statistical model. In the maximal entropy principle, assuming stationarity, one looks for the probability distribution which maximizes the statistical entropy given those constraints. This leads to a (time-translation invariant) Gibbs distribution. In particular, fixing firing rates and the probability of pairwise coincidences of spikes leads to a Gibbs distribution having the same form as the Ising model. This idea was introduced by Schneidman et al. in 11 for the analysis of retina spike trains. It reproduces accurately the probability of spatial spiking patterns. Since then, this approach has met with great success (see, e.g., 15 16 17), although some authors raised solid objections to this model 12 18 19 20 while several papers have pointed out the importance of temporal patterns of activity at the network level 21 22 23. As a consequence, a few authors 13 24 25 have attempted to define time-dependent models of Gibbs distributions where constraints include time-dependent correlations between pairs, triplets, and so on 26 . As a matter of fact, the analysis of the data of 11 with such models describes more accurately the statistics of spatio-temporal spike patterns 27 .

Taking into account all constraints inherent to experiments, it seems extremely difficult to find an optimal model describing spike train statistics. It is in fact likely that there is not one model, but many, depending on the experiment, the stimulus, the investigated part of the nervous system, and so on. Additionally, the assumptions made in the works quoted above are difficult to control. In particular, the maximal entropy principle assumes stationary dynamics, while many experiments consider a time-dependent stimulus generating a time-dependent response for which the stationarity approximation may not be valid. At this stage, having an example where one knows the explicit form of the spike train probability distribution would be helpful to control those assumptions and to design related experiments.

This can be done considering neural network models. Although, to be tractable, such models may be quite far from biological plausibility, they can give hints on which statistics can be expected in real neural networks. But, even in the simplest examples, characterizing spike statistics arising from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli is far from being trivial on mathematical grounds.

In 28 , we have nevertheless proposed an exact and explicit result for the characterization of spike train statistics in a discrete-time version of a Leaky Integrate-and-Fire neural network. The results were quite surprising. It has been shown that whatever the parameter values (in particular the synaptic weights), spike trains are distributed according to a Gibbs distribution whose potential can be explicitly computed. The first surprise lies in the fact that this potential has infinite range, namely spike statistics have infinite memory. This is because the membrane potential evolution integrates its past values and the past influence of the network via the leak term. Although leaky integrate and fire models have a reset mechanism that erases the memory of the neuron whenever it spikes, it is not possible to upper bound the next time of firing. As a consequence, the statistics are non-Markovian (for recent examples of non-Markovian behavior in neural models see also 29 ). The infinite range of the potential corresponds, in the maximal entropy principle interpretation, to having infinitely many constraints.

Nevertheless, the leak term influence decays exponentially fast with time (this property guarantees the existence and uniqueness of a Gibbs distribution). As a consequence, one can approximate the exact Gibbs distribution by the invariant probability of a Markov chain, with a memory depth proportional to the log of the (discrete time) leak term. In this way, the truncated potential corresponds to a finite number of constraints in the maximal entropy principle interpretation. However, the second surprise is that this approximated potential is nevertheless far from the Ising model or any of the models discussed above, which appear to be rather poor approximations. In particular, there is a need to consider n-tuplets of spikes with time delays. This fact raises hard problems about evidencing such types of potentials in experiments. In particular, new types of algorithms for spike train analysis have to be developed 30 .

The model considered in 28 is rather academic: time evolution is discrete, synaptic interactions are instantaneous, dynamics is stationary (the stimulus is time-constant) and, as in a leaky integrate and fire model, conductances are constant. It is therefore necessary to investigate whether our conclusions remain valid for more realistic neural network models. In the present paper, we consider a conductance-based model introduced by Rudolph and Destexhe in 31 called the “generalized Integrate and Fire” (gIF) model. This model allows one to consider realistic synaptic responses and conductances depending on spikes arising in the past of the network, leading to a rather complex dynamics which has been characterized in 32 in the deterministic case (no noise in the dynamics). Moreover, the biological plausibility of this model is well accepted 33 34.

Here, we analyze spike statistics in the gIF model with noise and with a time-dependent stimulus. Moreover, the post-synaptic potential profiles are quite general and encompass all the examples that we know of in the literature. Our main result is to prove the existence and uniqueness of a Gibbs measure characterizing spike train statistics, for all parameters compatible with physical constraints (finite synaptic weights, bounded stimulus, and positive conductances). Here, as in 28 , the corresponding Gibbs potential has infinite range, corresponding to a non-Markovian dynamics, although Markovian approximations can be proposed in the gIF model too. The Gibbs potential depends on all parameters in the model (especially connectivity and stimulus) and has a form considerably more complex than Ising-like models. As a by-product of the proof of our main result, additional interesting notions and results are produced, such as continuity with respect to a raster and exponential decay of memory thanks to the shape of the synaptic responses.

The paper is organized as follows. In Section 2, we briefly introduce integrate and fire models and propose two important extensions of the classical models: the spike has a duration and the membrane potential is reset to a non-constant value. These extensions, which are necessary for the validity of our mathematical results, nevertheless render the model more biologically plausible (see Section 9). One of the keys of the present work is to consider spike trains (raster plots) as infinite sequences. Since in gIF models conductances are updated upon the occurrence of spikes, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves continuously. On the other hand, spikes are discrete events. Conductances are updated according to these discrete-time events. The formalism introduced in Sections 2 and 3 allows us to handle properly this mixed dynamics. As a consequence, these sections define the gIF model with more mathematical structure than the original paper 31 and mostly contain original results. Moreover, we add to the model several original features such as the consideration of a general form of synaptic profile with exponential decay or the introduction of noise. Section 4 proposes a preliminary analysis of gIF model dynamics. In Sections 5 and 6, we provide several useful mathematical propositions as a necessary step toward the analysis of spike statistics, developed in Section 7, where we prove the main result of the paper: existence and uniqueness of a Gibbs distribution describing spike statistics. Sections 8 and 9 are devoted to a discussion of practical consequences of our results for neuroscience.

2 Integrate and fire model

We consider the evolution of a set of N neurons. Here, neurons are considered as “points” instead of spatially extended and structured objects. As a consequence, we define, for each neuron $k \in \{1, \dots, N\}$, a variable $V_k(t)$ called the “membrane potential of neuron k at time t” without specification of which part of a real neuron (axon, soma, dendritic spine, …) it corresponds to. Denote by $V(t)$ the vector $(V_k(t))_{k=1}^{N}$.

We focus here on “integrate and fire models”, where dynamics always consists of two regimes.

2.1 The “integrate regime”

Fix a real number θ called the “firing threshold of the neuron”.1 Below the threshold, V k < θ , neuron k’s dynamics is driven by an equation of the form:

$$C_k \frac{dV_k}{dt} + g_k V_k = i_k,$$

where C k is the membrane capacity of neuron k. In its most general form, the neuron k’s membrane conductance g k > 0 depends on V k plus additional variables such as the probability of having ionic channels open (see, e.g., Hodgkin-Huxley equations 35 ) as well as on time t. The explicit form of g k in the present model is developed in Section 3.4. The current i k typically depends on time t and on the past activity of the network. It also contains a stochastic component modeling noise in the system (e.g., synaptic transmission, see Section 3.5).

2.2 LIF model

A classical example of an integrate and fire model is the Leaky Integrate and Fire (LIF) model introduced in 36 , where Equation (1) reads:

$$\frac{dV_k}{dt} = -\frac{V_k}{\tau_L} + \frac{i_k(t)}{C_k},$$

where $g_k$ is a constant and $\tau_L = \frac{C_k}{g_k}$ is the characteristic time for membrane potential decay when no current is present (“leak term”).

2.3 Spikes

The dynamical evolution (1) may eventually lead V k to exceed θ. If, at some time t, V k ( t ) = θ , then neuron k emits a spike or “fires”. In our model, like in biophysics, a spike has a finite duration δ > 0 ; this is a generalization of the classical formulation of integrate and fire models where the spike is considered instantaneous. On biophysical grounds, δ is of order of a millisecond. Changing the time units, we may set δ = 1 without loss of generality. Additionally, neurons have a refractory period τ refr > 0 where they are not able to emit a new spike although their membrane potential can fluctuate below the threshold (see Figure 1). Hence, spikes emitted by a given neuron are separated by a minimal time scale

$$\tau_{\mathrm{sep}} = \delta + \tau_{\mathrm{refr}}.$$

Fig. 1. Time course of the membrane potential in our model. The blue dashed curve illustrates the shape of a real spike, but what we model is the red curve.

2.4 Raster plots

In experiments, spiking neuron activity is represented by “raster plots”, namely a graph with time in abscissa, and a neuron labeling in ordinate, such that a vertical bar is drawn each “time” a neuron emits a spike. Since spikes have a finite duration δ, such a representation limits the time resolution: events with a time scale smaller than δ are not distinguished. As a consequence, if neuron 1 fires at time $t_1$ and neuron 2 at time $t_2$ with $|t_2 - t_1| < \delta = 1$, the two spikes appear to be simultaneous on the raster. Thus, the raster representation introduces a time quantization and has a tendency to enhance synchronization. In gIF models, conductances are updated upon the occurrence of spikes (see Section 3.2), which are considered as discrete events. This could correspond to the following “experiment”. Assume that we measure the spikes emitted by a set of in vitro neurons and that we use this information to update the conductances of a model, in order to see how this model “matches” the real neurons (see 34 for a nice investigation in this spirit). Then, we would have to take into account that the information provided by the experimental raster plot is discrete, even if the membrane potential evolves continuously. The consequences of this time discretization as well as the limit $\delta \to 0$ are developed in the discussion section.

As a consequence, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves with a continuous time. On the other hand, spikes, which are the quantities of interest in the present paper, are discrete events. To properly define this mixed dynamics and study its properties, we have to model spike times and raster plots.

2.5 Spike times

If, at time t, V k ( t ) = θ , a spike is registered at the integer time immediately after t, called the spike time. Choosing integers for the spike time occurrence is a direct consequence of setting δ = 1 . Thus, to each neuron k and integer n, we associate a “spiking state” defined by:

$$\omega_k(n) = \begin{cases} 1 & \text{if } \exists\, t \in \,]n-1, n] \text{ such that } V_k(t) = \theta; \\ 0 & \text{otherwise.} \end{cases}$$

For convenience and in order to simplify the notations in the mathematical developments, we call [ t ] the largest integer which is ≤ t (thus [ −1.2 ] = −2 and [ 1.2 ] = 1). Thus, the integer immediately after t is [ t + 1 ] and we have therefore that ω k ( [ t + 1 ] ) = 1 whenever V k ( t ) = θ . Although the characteristic events in a raster plot are spikes (neuron fires), it is useful in subsequent developments to also consider the case when a neuron is not firing ( ω k ( n ) = 0 ).
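For illustration, here is a minimal Python sketch (the list of continuous threshold-crossing times is a hypothetical input of the example) showing how crossings are mapped to integer spike times and spiking states under this convention:

```python
import math

def spiking_states(crossing_times, n_max):
    """Map continuous threshold-crossing times to spiking states omega_k(n):
    omega_k(n) = 1 iff some crossing time t lies in the interval ]n-1, n]."""
    omega = [0] * (n_max + 1)
    for t in crossing_times:
        n = math.ceil(t)  # the integer time at which the spike is registered
        if 0 <= n <= n_max:
            omega[n] = 1
    return omega

# Crossings at t = 2.3 and t = 5.0 give spikes registered at n = 3 and n = 5.
print(spiking_states([2.3, 5.0], 8))  # [0, 0, 0, 1, 0, 1, 0, 0, 0]
```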

2.6 Reset

In the classical formulation of integrate and fire models, the spike occurs simultaneously with a reset of the membrane potential to some constant value V reset , called the “reset potential”. Instantaneous reset is a source of pathologies as discussed in 32 37 and in the discussion section. Here, we consider that reset occurs after the time delay $\tau_{\mathrm{sep}} \geq 1$ including spike duration and refractory period. We set:

$$V_k(t) = \theta \;\Longrightarrow\; V_k\big([t + \tau_{\mathrm{sep}}]\big) = V_{\mathrm{reset}}.$$

The reason why the reset time is the integer number [ t + τ sep ] instead of the real t + τ sep is that it eases the notations and proofs. Since the reset value is random (see below and Figure 1), this assumption has no impact on the dynamics.

Indeed, in our model, the reset value V reset is not a constant. This is a Gaussian random variable with mean zero (we set the rest potential to zero without loss of generality) and variance σ R 2 > 0 . In this way, we model the spike duration and refractory period, as well as the random oscillations of the membrane potential during the refractory period. As a consequence, the value of V k when the neuron can fire again is not a constant, as it is in classical IF models. A related reference (spiking neurons with partial reset) is 38 . The assumption that σ R 2 > 0 is necessary for our mathematical developments (see the bounds (37)). We assume σ R 2 to be small to avoid trivial and unrealistic situations where $V_{\mathrm{reset}} \geq \theta$ with a large probability, leading the neuron to fire all the time. Note, however, that this is not a required assumption to establish our mathematical results. We also assume that, in successive resets, the random variables V reset are independent.

2.7 The shape of membrane potential during the spike

On biophysical grounds, the time course of the membrane potential during the spike includes a depolarization and re-polarization phase due to the non-linear effects of gated ionic channels on the conductance. This leads to introduce, in modeling, additional variables such as activation/inactivation probabilities as in the Hodgkin-Huxley model 35 or adaptation current as, e.g., in FitzHugh-Nagumo model 39 40 41 42 (see the discussion section for extensions of our results to those models). Here, since we are considering only one variable for the neuron state, the membrane potential, we need to define the spike profile, i.e., the course of V k ( t ) during the time interval ( t , [ t + τ sep ] ) . It turns out that the precise shape of this profile plays no role in the developments proposed here, where we concentrate on spike statistics. Indeed, a spike is registered whenever V k ( t ) = θ , and this does not depend on the spike shape. What we need is therefore to define the membrane potential evolution before the spike, given by (1), and after the spike, given by (4) (see Figure 1).

2.8 Mathematical representation of raster plots

The “spiking pattern” of the neural network at integer time n is the vector $\omega(n) = (\omega_k(n))_{k=1}^{N}$. For $m < n$, we write $\omega_m^n = \{\omega(m)\, \omega(m+1) \cdots \omega(n)\}$ for the ordered sequence of spiking patterns between m and n. Such sequences are called spike blocks. Additionally, we write $\omega_m^{n_1-1}\, \omega_{n_1}^{n} = \omega_m^{n}$ for the concatenation of the blocks $\omega_m^{n_1-1}$ and $\omega_{n_1}^{n}$.

Call $\mathcal{A} = \{0,1\}^N$ the set of spiking patterns (alphabet). An element of $X \stackrel{\mathrm{def}}{=} \mathcal{A}^{\mathbb{Z}}$, i.e., a bi-infinite ordered sequence $\omega = \{\omega(n)\}_{n=-\infty}^{+\infty}$ of spiking patterns, is called a “raster plot”. It tells us which neurons are firing at each time $n \in \mathbb{Z}$. In experiments, raster plots are obviously finite sequences of spiking patterns but the extension to $\mathbb{Z}$, especially the possibility of considering an arbitrary distant past (negative times), is a key of the present work. In particular, the notation $\omega_{-\infty}^{n}$ refers to spikes occurring from −∞ to n.

To each raster $\omega \in X$ and each neuron index $j = 1, \dots, N$, we associate an ordered (generically infinite) list of “spike times” $\{t_j^{(r)}(\omega)\}_{r=1}^{+\infty}$ (integer numbers) such that $t_j^{(r)}(\omega)$ is the r-th time of firing of neuron j in the raster ω. In other words, we have $\omega_j(n) = 1$ if and only if $n = t_j^{(r)}(\omega)$ for some $r = 1, \dots, +\infty$. We use here the following convention. The index k is used for a post-synaptic neuron while the index j refers to pre-synaptic neurons. Spiking events are used to update the conductance of neuron k according to spikes emitted by pre-synaptic neurons. That is why we label the spike times with an index j.
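As a concrete illustration of this bookkeeping, here is a minimal sketch (over a finite window only, since actual rasters are bi-infinite; the array layout is an assumption of the example) extracting the lists of spike times from a raster:

```python
import numpy as np

def spike_times(raster):
    """raster: array of shape (N, T) with entries in {0, 1}; row j holds the
    spiking states omega_j(n) of neuron j over a finite window of integer times.
    Returns, for each neuron j, the ordered list of its spike times t_j^(r)."""
    return [list(np.flatnonzero(raster[j])) for j in range(raster.shape[0])]

raster = np.array([[0, 1, 0, 0, 1],
                   [1, 0, 0, 1, 0]])
print(spike_times(raster))  # [[1, 4], [0, 3]]
```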

We introduce here two specific rasters which are of use in the paper. We write Ω0 for the raster such that $\omega_k(n) = 0$ for all $k = 1, \dots, N$ and all $n \in \mathbb{Z}$ (no neuron ever fires) and Ω1 for the raster such that $\omega_k(n) = 1$ for all $k = 1, \dots, N$ and all $n \in \mathbb{Z}$ (each neuron fires at every integer time).

Finally, we use the following notation borrowed from 43 . We write, for $n \in \mathbb{Z}$, $m \geq 0$, and r integer:

$$\omega \overset{m,n}{=} \omega' \quad \text{if} \quad \omega(r) = \omega'(r), \quad \forall r \in \{n-m, \dots, n\}.$$

For simplicity, we consider that τ refr , the refractory period, is smaller than 1, so that a neuron can fire at two consecutive time steps (i.e., one can have ω k ( n ) = 1 and ω k ( n + 1 ) = 1 ). This constraint is discussed in Section 9.2.

2.9 Representation of time-dependent functions

Throughout the paper, we use the following convention. For a real function of t and ω, we write $f(t, \omega)$ for $f(t, \omega_{-\infty}^{[t]})$ to simplify notations. This notation takes into account the duality between variables such as membrane potentials evolving with respect to a continuous time and raster plots labeled with discrete time. Thus, the function $f(t, \omega)$ is a function of the continuous variable t and of the spike block $\omega_{-\infty}^{[t]}$, where by definition $[t] \leq t$, namely $f(t, \omega)$ depends on the spike sequences occurring before t. This constraint is imposed by causality.

2.10 Last reset time

We define τ k ( t , ω ) as the last time before t where neuron k's membrane potential has been reset, in the raster ω. This is −∞ if the membrane potential has never been reset. As a consequence of our choice (4) for the reset time, τ k ( t , ω ) is an integer number fixed by t and the raster before t. The membrane potential value of neuron k at time t is controlled by the reset value V reset at time τ k ( t , ω ) and by the further subthreshold evolution (1) from time τ k ( t , ω ) to time t.

3 Generalized integrate and fire models

In this paper, we concentrate on an extension of (2), called the “generalized Integrate-and-Fire” (gIF) model, introduced in 31 and closer to biology 33 34, since it treats neuron interactions via synaptic responses more carefully.

3.1 Synaptic conductances

Depending on the neurotransmitter they use for synaptic transmission (AMPA, NMDA, GABA A, GABA B 44 ), neurons can be excitatory (population E ) or inhibitory (population I ). This is modeled by introducing reversal potentials $E^+$ for excitatory synapses (typically $E^+ \approx 0$ mV for AMPA and NMDA) and $E^-$ for inhibitory ones ($E^- \approx -70$ mV for GABA A and $E^- \approx -95$ mV for GABA B). We focus here on one population of excitatory and one population of inhibitory neurons, although extensions to several populations may be considered as well. Also, each neuron is subject to a current $I_k(t)$. We assume that this current has some stochastic component that mimics synaptic noise (Section 3.5).

The variation in the membrane potential of neuron k at time t reads:

$$C_k \frac{dV_k}{dt} = -g_{L,k}(V_k - E_L) - g_k^{(E)}(t)(V_k - E^+) - g_k^{(I)}(t)(V_k - E^-) + I_k(t),$$

where g L , k is a leak conductance, E L < 0 is the leak reversal potential (about −65 mV), g k ( E ) ( t ) the conductance of the excitatory population and g k ( I ) ( t ) the conductance of inhibitory population. They are given by:

$$g_k^{(E)}(t) = \sum_{j \in E} g_{kj}(t); \qquad g_k^{(I)}(t) = \sum_{j \in I} g_{kj}(t),$$

where $g_{kj}$ is the conductance of the synaptic contact $j \to k$.

We may rewrite Equation (6) in the form (1) setting

$$g_k(t) = g_{L,k} + g_k^{(E)}(t) + g_k^{(I)}(t),$$

and

$$i_k(t) = g_{L,k} E_L + g_k^{(E)}(t)\, E^+ + g_k^{(I)}(t)\, E^- + I_k(t).$$

3.2 Conductance update upon a spike occurrence

The conductances g k j ( t ) in (7) depend on time t but also on pre-synaptic spikes occurring before t. This is a general statement, which is modeled in gIF models as follows. Upon arrival of a spike in the pre-synaptic neuron j at time t j ( r ) ( ω ) , the membrane conductance of the post-synaptic neuron k is modified as:

$$g_{kj}(t) = g_{kj}\big(t_j^{(r)}(\omega)\big) + G_{kj}\, \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big), \qquad t > t_j^{(r)}(\omega).$$

In this equation, the quantity $G_{kj} \geq 0$ characterizes the maximal amplitude of the conductance during a post-synaptic potential. We use the convention that $G_{kj} = 0$ if and only if there is no synapse between j and k. This allows us to encode the graph structure of the neural network in the matrix G with entries $G_{kj}$. Note that the $G_{kj}$'s can evolve in time due to synaptic plasticity mechanisms (see Section 9.4).

The function α k j (called “alpha” profile 44 ) mimics the time course of the synaptic conductance upon the occurrence of the spike. Classical examples are:

$$\alpha_{kj}(t) = e^{-\frac{t}{\tau_{kj}}}\, H(t),$$

(exponential profile) or:

$$\alpha_{kj}(t) = \frac{t}{\tau_{kj}}\, e^{-\frac{t}{\tau_{kj}}}\, H(t),$$

with H the Heaviside function (that mimics causality) and τ k j the characteristic decay time of the synaptic response. Since t is a time, the division by τ k j ensures that α k j ( t ) is a dimensionless quantity: this eases the legibility of the subsequent equations on physical grounds (dimensionality of physical quantities).

Contrary to (9), the synaptic profile (10), with α k j ( 0 ) = 0 and α k j ( t ) maximal at t = τ k j , allows one to smoothly delay the spike action on the post-synaptic neuron. More general forms of synaptic responses could be considered as well. For example, the α profile may obey a Green equation of type 45 :

$$\sum_{l=0}^{k} a_{kj}^{(l)} \frac{d^l \alpha_{kj}}{du^l}(t) = \delta(t),$$

where $k = 1$, $a_{kj}^{(0)} = \frac{1}{\tau_{kj}}$, $a_{kj}^{(1)} = 1$ corresponds to (9), and so on.
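For illustration, a short numerical sketch of the two classical profiles (9) and (10) (the time constant value is an arbitrary choice of the example):

```python
import numpy as np

def alpha_exp(t, tau):
    """Exponential profile (9): exp(-t/tau) * H(t)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0, np.exp(-t / tau), 0.0)

def alpha_delayed(t, tau):
    """'Alpha' profile (10): (t/tau) * exp(-t/tau) * H(t); it vanishes at t = 0
    and peaks at t = tau, smoothly delaying the action of a pre-synaptic spike."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0, (t / tau) * np.exp(-t / tau), 0.0)

t = np.linspace(-1.0, 10.0, 12)
print(alpha_exp(t, tau=2.0))
print(alpha_delayed(t, tau=2.0))
```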

3.3 Mathematical constraints on the synaptic responses

Throughout the paper, we assume that the α k j 's are positive and bounded. Moreover, we assume that:

$$\alpha_{kj}(t) \lesssim \frac{t^d}{\tau_{kj}^d}\, e^{-\frac{t}{\tau_{kj}}} \stackrel{\mathrm{def}}{=} f_{kj}(t), \qquad t \to +\infty,$$

for some integer d, so that α k j ( t ) decays exponentially fast as t → +∞, with characteristic time τ k j , the decay time of the evoked post-synaptic potential. This constraint matches all synaptic response kernels that we know of (typically d = 0 , 1 ) 44 45.

This has the following consequence. For all t, M < t integer, r integer, we have, setting t = { t } + [ t ] , where { t } is the fractional part:

$$\sum_{r < M} \alpha_{kj}(t - r) = \sum_{r < M} \alpha_{kj}\big([t] - r + \{t\}\big) = \sum_{n > [t] - M} \alpha_{kj}\big(n + \{t\}\big).$$

Therefore, as $M \to -\infty$,

$$\sum_{r < M} \alpha_{kj}(t - r) \lesssim \sum_{n \geq [t] - M + 1} f_{kj}\big(n + \{t\}\big) < \int_{t-M}^{+\infty} f_{kj}(u)\, du = P_d\!\left(\frac{t - M}{\tau_{kj}}\right) e^{-\frac{t - M}{\tau_{kj}}},$$

where P d ( ) is a polynomial of degree d.

We introduce the following (Hardy) notation: if a function $f(t)$ is bounded from above, as $t \to +\infty$, by a function $g(t)$, we write: $f(t) \lesssim g(t)$. Using this notation, we have therefore:

Proposition 1 $$\sum_{r < M} \alpha_{kj}(t - r) \lesssim P_d\!\left(\frac{t - M}{\tau_{kj}}\right) e^{-\frac{t - M}{\tau_{kj}}},$$

as $M \to -\infty$.
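A quick numerical check of this bound for the profile (10) (so d = 1, for which the integral in (12) has a closed form); the truncation depth of the infinite sum is an implementation shortcut of the example:

```python
import numpy as np

def alpha(u, tau):
    """Profile (10): (u/tau) * exp(-u/tau) for u >= 0."""
    return np.where(u >= 0.0, (u / tau) * np.exp(-u / tau), 0.0)

def tail_sum(t, M, tau, depth=2000):
    """Sum over integers r < M of alpha(t - r): contribution of spikes older than M."""
    r = np.arange(M - 1, M - depth, -1)
    return float(np.sum(alpha(t - r, tau)))

def tail_bound(t, M, tau):
    """Closed-form right-hand side of (12) for d = 1:
    integral_{t-M}^{inf} (u/tau) exp(-u/tau) du = (t - M + tau) * exp(-(t - M)/tau)."""
    a = t - M
    return (a + tau) * np.exp(-a / tau)

t, tau = 0.4, 2.0
for M in (0, -5, -10, -20):
    print(M, tail_sum(t, M, tau), tail_bound(t, M, tau))
```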

Additionally, the constraint (11) implies that there is some $\alpha^+ < +\infty$ such that, for all t, for all k, j:

$$\sum_{r < t} \alpha_{kj}(t - r) \leq \alpha^+.$$

Indeed, for $n \geq 0$ integer, call $A_{kj}(n) = \sup\{\alpha_{kj}(n + x);\ x \in [0, 1[\,\}$. Then,

$$\sum_{r < t} \alpha_{kj}(t - r) = \sum_{n=0}^{+\infty} \alpha_{kj}\big(n + \{t\}\big) \leq \sum_{n=0}^{+\infty} A_{kj}(n).$$

Due to (11), this series converges (e.g., from Cauchy criterion). We set:

$$\alpha^+ = \max_{kj} \sum_{n=0}^{+\infty} A_{kj}(n).$$

On physical grounds, it implies that the conductance g k remains bounded, even if each pre-synaptic neuron is firing all the time (see Equation (29) below).

3.4 Synaptic summation

Assume that Equation (8) remains valid for an arbitrary number of pre-synaptic spikes emitted by neuron j within a finite time interval [ s , t ] (i.e., neglecting non-linear effects such as the fact that there is a finite amount of neurotransmitter leading to saturation effects). Then, one obtains the following equation for the conductance g k j at time t, upon the arrival of spikes at times t j ( r ) ( ω ) in the time interval [ s , t ] :

$$g_{kj}(t) = g_{kj}(s) + G_{kj} \sum_{\{r;\ s \leq t_j^{(r)}(\omega) < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big).$$

The conductance at time s, g k j ( s ) , depends on the neuron j's activity preceding s. This term is therefore unknown unless one knows exactly the past evolution before s. One way to circumvent this problem is to take s arbitrarily far in the past, i.e., to take $s \to -\infty$ in order to remove the dependence on initial conditions. This corresponds to the following situation. When one observes a real neural network, the time where the observation starts, say t = 0 , is usually not the time when the system has begun to exist, s in our notations. Taking s arbitrarily far in the past corresponds to assuming that the system has evolved long enough so that it has reached a sort of “permanent regime”, not necessarily stationary, when the observation starts. On phenomenological grounds, it is enough to take −s larger than all characteristic relaxation times in the system (e.g., leak rate and synaptic decay rate). Here, for mathematical purposes, it is easier to take the limit $s \to -\infty$.

Since g k j ( t ) depends on the raster plot up to time t, via the spiking times t j ( r ) ( ω ) , this limit only makes sense when taking it “conditionally” to a prescribed raster plot ω. In other words, one can know the value of the conductances g k j at time t only if the past spike times of the network are known. We write g k j ( t , ω ) from now on to make this dependence explicit.

We set

$$\alpha_{kj}(t, \omega) = \lim_{s \to -\infty} \sum_{\{r;\ s \leq t_j^{(r)}(\omega) < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big) \equiv \sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big),$$

with the convention that a sum over an empty set of spike times is 0, so that α k j ( t , Ω 0 ) = 0 (recall that Ω0 is the raster such that no neuron ever fires). The limit (14) exists from (13).
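In numerical work one cannot, of course, sum over an infinite past; a minimal sketch of the standard compromise, truncating the sum in (14) to a finite window whose error is controlled by Proposition 1 (the window length and the kernel are assumptions of the example):

```python
import numpy as np

def alpha_profile(u, tau):
    """Synaptic kernel; here the profile (10), but any kernel satisfying (11) works."""
    return np.where(u >= 0.0, (u / tau) * np.exp(-u / tau), 0.0)

def alpha_kj(t, spikes_j, tau_kj, window=50.0):
    """Approximation of alpha_kj(t, omega) in (14): sum of the kernel over the
    spike times of pre-synaptic neuron j before t.  Spikes older than t - window
    are dropped; by Proposition 1 their contribution is O(poly * exp(-window/tau))."""
    s = np.asarray(spikes_j, dtype=float)
    s = s[(s < t) & (s >= t - window)]
    return float(np.sum(alpha_profile(t - s, tau_kj)))

# hypothetical spike times of one pre-synaptic neuron
print(alpha_kj(t=20.3, spikes_j=[3, 7, 12, 15, 18, 19], tau_kj=2.0))
```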

3.5 Noise

We allow, in the definition of the current I k ( t ) in Equation (6), the possibility of having a stochastic term corresponding to noise so that:

$$I_k(t) = i_k^{(\mathrm{ext})}(t) + \sigma_B\, \xi_k(t),$$

where i k ( ext ) ( t ) is a deterministic external current and ξ k ( t ) a noise term whose amplitude is controlled by σ B > 0 . The model affords an extension where σ B depends on k, but this extension is straightforward and we do not develop it here. The noise term can be interpreted as the random variation in the ionic flux of charges crossing the membrane per unit time at the post-synaptic bouton, upon opening of ionic channels due to the binding of neurotransmitter.

We assume that $\xi_k(t)$ is a white noise, $\xi_k(t) = \frac{dB_k}{dt}$, where $B_k(t)$ is a Wiener process, so that $dB(t) = (dB_k(t))_{k=1}^{N}$ is an N-dimensional Wiener process. Call P the noise probability distribution and $E[\,\cdot\,]$ the expectation under P. Then, by definition, $E[dB_k(t)] = 0$, $k = 1, \dots, N$, $t \in \mathbb{R}$, and $E[dB_k(s)\, dB_l(t)] = \delta_{kl}\, \delta(t - s)\, dt$, where $\delta_{kl} = 1$ if $l = k$ (and 0 otherwise), $l, k = 1, \dots, N$, and $\delta(t - s)$ is the Dirac distribution.

3.6 Differential equation for the integrate regime of gIF

Summarizing, we write Equation (6) in the form:

$$C_k \frac{dV_k}{dt} + g_k(t, \omega)\, V_k = i_k(t, \omega),$$

where:

$$g_k(t, \omega) = g_{L,k} + \sum_{j=1}^{N} G_{kj}\, \alpha_{kj}(t, \omega).$$

This is the most general conductance form considered in this paper.

Moreover,

$$i_k(t, \omega) = g_{L,k} E_L + \sum_{j=1}^{N} W_{kj}\, \alpha_{kj}(t, \omega) + i_k^{(\mathrm{ext})}(t) + \sigma_B\, \xi_k(t),$$

where W k j is the synaptic weight:

$$\begin{cases} W_{kj} = E^+ G_{kj}, & \text{if } j \in E, \\ W_{kj} = E^- G_{kj}, & \text{if } j \in I. \end{cases}$$

These equations hold when the membrane potential is below the threshold (Integrate regime).

Therefore, gIF models constitute rather complex dynamical systems: the vector field (r.h.s.) of the differential Equation (16) depends on an auxiliary “variable”, the past spike sequence $\omega_{-\infty}^{[t]}$, and to define properly the evolution of V k from time t onward one needs to know the spikes arising before t. This is precisely what makes gIF models more interesting than LIF. The definition of conductances introduces long-term memory effects.

IF models implement a reset mechanism on the membrane potential: if neuron k has been reset between s and t, say at time τ, then V k ( t ) depends only on V k ( τ ) and not on previous values, as in (4). But in the gIF model, contrary to the LIF, there is also a dependence on the past via the conductance, and this dependence is not erased by the reset. That is why we have to consider a system with infinite memory.

3.7 The parameters space

The stochastic dynamical system (16) depends on a huge set of parameters: the membrane capacities C k , k = 1 , … , N ; the threshold θ; the reversal potentials E L , E + , E − ; the leak conductances g L , k ; the maximal synaptic conductances G k j , k , j = 1 , … , N , which define the neural network topology; the characteristic decay times τ k j , k , j = 1 , … , N , of the synaptic responses; the noise amplitude σ B ; and, additionally, the parameters defining the external current i k ( ext ) . Although some parameters can be fixed from biology, such as C k , the reversal potentials, τ k j , … some others such as the G k j 's must be allowed to vary freely in order to leave open the possibility of modeling very different neural network structures.

In this paper, we are not interested in describing properties arising for specific values of those parameters, but instead in generic properties that hold on sets of parameters. More specifically, we denote the list of all parameters $((C_k)_{k=1}^{N}, E_L, E^+, E^-, (G_{kj})_{k,j=1}^{N}, \dots)$ by the symbol γ. This is a vector in $\mathbb{R}^K$, where K is the total number of parameters. In this paper, we assume that γ belongs to a bounded subset $\mathcal{H} \subset \mathbb{R}^K$. Basically, we want to avoid situations where some parameters become infinite, which would be unphysical. So the limits of $\mathcal{H}$ are the limits imposed by biophysics. Additionally, we assume that σ R > 0 and σ B > 0 . Together with physical constraints such as “conductances are positive”, these are the only assumptions made on the parameters. All mathematical results stated in the paper hold for any $\gamma \in \mathcal{H}$.

4 gIF model dynamics for a fixed raster

We assume that the raster ω is fixed, namely the spike history is given. Then, it is possible to integrate the Equation (16) (Integrate regime) and to obtain explicitly the value of the membrane potential of a neuron at time t, given the membrane potential value at time s. Additionally, the reset condition (4) has the consequence of removing the dependence of neuron k on the past anterior to τ k ( t , ω ) .

4.1 Integrate regime

For $t_1 \leq t_2$, $t_1, t_2 \in \mathbb{R}$, set:

$$\Gamma_k(t_1, t_2, \omega) = e^{-\frac{1}{C_k}\int_{t_1}^{t_2} g_k(u, \omega)\, du}.$$

We have:

$$\Gamma_k(t_1, t_1, \omega) = \Gamma_k(t_2, t_2, \omega) = 1,$$

and:

$$\frac{\partial \Gamma_k(t_1, t_2, \omega)}{\partial t_1} = \frac{g_k(t_1, \omega)}{C_k}\, \Gamma_k(t_1, t_2, \omega).$$

Fix two times $s < t$ and assume that, for neuron k, $V_k(u) < \theta$ for $s \leq u \leq t$, so that the membrane potential $V_k$ obeys (16). Then,

$$\frac{\partial}{\partial t_1}\Big[\Gamma_k(t_1, t_2, \omega)\, V_k(t_1)\Big] = \Gamma_k(t_1, t_2, \omega)\left[\frac{dV_k}{dt_1} + \frac{g_k(t_1, \omega)}{C_k} V_k(t_1)\right] = \Gamma_k(t_1, t_2, \omega)\, \frac{i_k(t_1, \omega)}{C_k}.$$

Integrating the previous equation with respect to $t_1$ between s and t, and setting $t_2 = t$, we obtain:

$$V_k(t) = \Gamma_k(s, t, \omega)\, V_k(s) + \frac{1}{C_k}\int_s^t \Gamma_k(t_1, t, \omega)\, i_k(t_1, \omega)\, dt_1.$$

This equation gives the variation in membrane potential during a period of rest (no spike) of the neuron. Note, however, that this neuron can still receive spikes from the other neurons via the update of conductances (made explicit in the previous equation by the dependence in the raster plot ω).

The term Γ k ( s , t , ω ) given by (19) is an effective leak between s and t. In the leaky integrate and fire model, it would be equal to $e^{-\int_s^t \frac{dt_1}{\tau_L}} = e^{-\frac{t-s}{\tau_L}}$. The term:

$$V_k(s, t, \omega) \stackrel{\mathrm{def}}{=} \frac{1}{C_k}\int_s^t \Gamma_k(t_1, t, \omega)\, i_k(t_1, \omega)\, dt_1,$$

has the dimension of a voltage. It corresponds to the integration of the total current between s and t weighted by the effective leak term Γ k ( t 1 , t , ω ) . It decomposes as

$$V_k(s, t, \omega) = V_k^{(\mathrm{syn})}(s, t, \omega) + V_k^{(\mathrm{ext})}(s, t, \omega) + V_k^{(B)}(s, t, \omega),$$

where,

$$V_k^{(\mathrm{syn})}(s, t, \omega) = \frac{1}{C_k}\sum_{j=1}^{N} W_{kj}\int_s^t \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1,$$

is the synaptic contribution. Moreover,

$$V_k^{(\mathrm{ext})}(s, t, \omega) = \frac{E_L}{\tau_{L,k}}\int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 + \frac{1}{C_k}\int_s^t i_k^{(\mathrm{ext})}(t_1)\, \Gamma_k(t_1, t, \omega)\, dt_1,$$

where we set:

$$\tau_{L,k} \stackrel{\mathrm{def}}{=} \frac{C_k}{g_{L,k}},$$

the characteristic leak time of neuron k. We have included the leak reversal potential term in this “external” term for convenience. Therefore, even if there is no external current, this term is nevertheless non-zero.

The sum of the synaptic and external terms gives the deterministic contribution in the membrane potential. We note:

$$V_k^{(\mathrm{det})}(s, t, \omega) = V_k^{(\mathrm{syn})}(s, t, \omega) + V_k^{(\mathrm{ext})}(s, t, \omega).$$

Finally,

$$V_k^{(B)}(s, t, \omega) \stackrel{\mathrm{def}}{=} \frac{\sigma_B}{C_k}\int_s^t \Gamma_k(t_1, t, \omega)\, \xi_k(t_1)\, dt_1 = \frac{\sigma_B}{C_k}\int_s^t \Gamma_k(t_1, t, \omega)\, dB_k(t_1),$$

is a noise term. This is a Gaussian process with mean 0 and variance:

$$\left(\frac{\sigma_B}{C_k}\right)^2 E\left[\left(\int_s^t \Gamma_k(t_1, t, \omega)\, dB_k(t_1)\right)^2\right] = \left(\frac{\sigma_B}{C_k}\right)^2 \int_s^t \Gamma_k^2(t_1, t, \omega)\, dt_1.$$

The square root of this quantity has the dimension of a voltage.

As a final result, for a fixed ω, the variation in membrane potential during a period of rest (no spike) of neuron k between s and t reads (subthreshold oscillations):

$$V_k(t) = \Gamma_k(s, t, \omega)\, V_k(s) + V_k^{(\mathrm{det})}(s, t, \omega) + V_k^{(B)}(s, t, \omega).$$
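For illustration, a minimal numerical sketch of this integrated form: given samples of the conductance g k ( u , ω ) and of the deterministic current on a time grid ending at t, it computes Γ k and the deterministic part of (24) by Riemann sums. The grid, constants, and the constant-input test case are assumptions of the example; with constant conductance the result reduces to the LIF expression, which serves as a check:

```python
import numpy as np

def gamma_k(g, dt, C_k):
    """Gamma_k(t1, t, omega) of (19) for every grid point t1 (grid ends at t):
    exp(-(1/C_k) * integral_{t1}^{t} g_k(u, omega) du), left Riemann sum."""
    tail_int = np.cumsum(g[::-1])[::-1] * dt   # integral of g from t1 to t
    return np.exp(-tail_int / C_k)

def v_det(g, i_det, dt, C_k):
    """Deterministic part of (24), starting from a reset so that the
    Gamma_k(s, t, omega) V_k(s) term is absent (cf. (25)):
    (1/C_k) * integral_s^t Gamma_k(t1, t, omega) i_k(t1, omega) dt1."""
    return float(np.sum(gamma_k(g, dt, C_k) * i_det) * dt / C_k)

# check: constant conductance and current reproduce the LIF response
C_k, gL, i0, dt, T = 1.0, 0.1, 0.5, 0.001, 50.0
n = int(T / dt)
print(v_det(np.full(n, gL), np.full(n, i0), dt, C_k))
print(i0 / gL * (1.0 - np.exp(-T * gL / C_k)))   # the two values nearly agree
```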

4.2 Reset

In Equation (4), as in all IF models that we know, the reset of the membrane potential has the effect of removing the dependence of V k on its past since V k ( [ t + τ sep ] ) is replaced by V reset . Hence, the reset removes the dependence on the initial condition V k ( s ) in (24) provided that neuron k fires between s and t in the raster ω. As a consequence, Equation (24) holds from the “last reset time” introduced in Section 2.10 up to time t. Then, Equation (24) reads

$$V_k(t) = V_k^{(\mathrm{det})}\big(\tau_k(t, \omega), t, \omega\big) + V_k^{(\mathrm{noise})}\big(\tau_k(t, \omega), t, \omega\big),$$

where:

$$V_k^{(\mathrm{noise})}\big(\tau_k(t, \omega), t, \omega\big) = \Gamma_k\big(\tau_k(t, \omega), t, \omega\big)\, V_{\mathrm{reset}} + V_k^{(B)}\big(\tau_k(t, \omega), t, \omega\big),$$

is a Gaussian process with mean zero and variance:

$$\sigma_k^2\big(\tau_k(t, \omega), t, \omega\big) = \Gamma_k^2\big(\tau_k(t, \omega), t, \omega\big)\, \sigma_R^2 + \left(\frac{\sigma_B}{C_k}\right)^2 \int_{\tau_k(t, \omega)}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1.$$
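This variance can be evaluated on the same grid as in the previous sketch; a minimal example (same assumptions), with the constant-conductance, long-interval limit as a sanity check:

```python
import numpy as np

def sigma_k_sq(g, dt, C_k, sigma_R, sigma_B):
    """Conditional variance of (27) on a grid running from the last reset time
    tau_k(t, omega) to t: Gamma_k^2(tau_k, t) sigma_R^2
    + (sigma_B/C_k)^2 * integral_{tau_k}^{t} Gamma_k^2(t1, t) dt1."""
    tail_int = np.cumsum(g[::-1])[::-1] * dt
    Gam = np.exp(-tail_int / C_k)
    return float(Gam[0] ** 2 * sigma_R ** 2
                 + (sigma_B / C_k) ** 2 * np.sum(Gam ** 2) * dt)

# constant conductance, long interval: the variance tends to (tau_L,k/2)*(sigma_B/C_k)^2
C_k, gL, dt, T = 1.0, 0.1, 0.001, 50.0
g = np.full(int(T / dt), gL)
print(sigma_k_sq(g, dt, C_k, sigma_R=0.1, sigma_B=0.2))   # ~ 0.2 = (1/(2*gL))*0.04
```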

5 Useful bounds

We now prove several bounds used throughout the paper.

5.1 Bounds on the conductance

From (13), and since $\alpha_{kj}(t) \geq 0$:

$$0 = \alpha_{kj}(t, \Omega_0) \leq \alpha_{kj}(t, \omega) = \sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big) \leq \sum_{r < t} \alpha_{kj}(t - r) = \alpha_{kj}(t, \Omega_1) \leq \alpha^+.$$

Therefore,

$$g_{L,k} = g_k(t, \Omega_0) \leq g_k(t, \omega) \leq g_k(t, \Omega_1) \stackrel{\mathrm{def}}{=} g_{M,k} \leq g_{L,k} + \alpha^+ \sum_{j=1}^{N} G_{kj},$$

so that the conductance is uniformly bounded in t and ω. The minimal conductance is attained when no neuron ever fires, so that Ω0 is the “lowest conductance state”. Conversely, the maximal conductance is reached when all neurons fire all the time, so that Ω1 is the “highest conductance state”. To simplify notations, we write $\tau_{M,k} = \frac{C_k}{g_{M,k}}$. This is the minimal relaxation time scale for neuron k, while $\tau_{L,k} = \frac{C_k}{g_{L,k}}$ is the maximal relaxation time:

$$\tau_{M,k} = \frac{C_k}{g_{M,k}} \leq \tau_{L,k} = \frac{C_k}{g_{L,k}}.$$

5.2 Bounds on membrane potential

Now, from (19), we have, for s < t :

$$0 \leq \Gamma_k(s, t, \Omega_1) = e^{-\frac{t-s}{\tau_{M,k}}} \leq \Gamma_k(s, t, \omega) \leq \Gamma_k(s, t, \Omega_0) = e^{-\frac{t-s}{\tau_{L,k}}} < 1.$$

As a consequence, $\Gamma_k(s, t, \omega) \to 0$ exponentially fast as $s \to -\infty$.

Moreover,

$$0 \leq \int_s^t \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1 \leq \alpha^+ \int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1 = \alpha^+ \tau_{L,k}\left(1 - e^{-\frac{t-s}{\tau_{L,k}}}\right) \leq \alpha^+ \tau_{L,k},$$

so that:

$$\frac{\alpha^+}{g_{L,k}}\sum_{j \in I} W_{kj} \leq V_k^{(\mathrm{syn})}(s, t, \omega) \leq \frac{\alpha^+}{g_{L,k}}\sum_{j \in E} W_{kj}.$$

Thus, V k ( syn ) ( s , t , ω ) is uniformly bounded in s , t .

Establishing similar bounds for $V_k^{(\mathrm{ext})}(s, t, \omega)$ requires the assumption that $A \leq i_k^{(\mathrm{ext})}(t) \leq B$, but obtaining tighter bounds additionally requires knowledge of the sign of $\frac{A}{C_k} + \frac{E_L}{\tau_{L,k}}$ and of $\frac{B}{C_k} + \frac{E_L}{\tau_{L,k}}$. Here, we only have to consider that:

$$\big|i_k^{(\mathrm{ext})}(t)\big| \leq i^+.$$

In this case,

$$\big|V_k^{(\mathrm{ext})}(s, t, \omega)\big| = \left|\frac{E_L}{\tau_{L,k}}\int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 + \frac{1}{C_k}\int_s^t i_k^{(\mathrm{ext})}(t_1)\, \Gamma_k(t_1, t, \omega)\, dt_1\right| \leq \left[\frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k}\right]\int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 \leq \left[\frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k}\right]\int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1,$$

so that:

$$\big|V_k^{(\mathrm{ext})}(s, t, \omega)\big| \leq \left(|E_L| + \frac{i^+}{g_{L,k}}\right)\left(1 - e^{-\frac{t-s}{\tau_{L,k}}}\right) \leq |E_L| + \frac{i^+}{g_{L,k}}.$$

Consequently,

Proposition 2 $$V_k^- \stackrel{\mathrm{def}}{=} \frac{\alpha^+}{g_{L,k}}\sum_{j \in I} W_{kj} - \left(|E_L| + \frac{i^+}{g_{L,k}}\right) < V_k^{(\mathrm{det})}(s, t, \omega) < V_k^+ \stackrel{\mathrm{def}}{=} \frac{\alpha^+}{g_{L,k}}\sum_{j \in E} W_{kj} + |E_L| + \frac{i^+}{g_{L,k}},$$

which provides uniform bounds in s, t, ω for the deterministic part of the membrane potential.

5.3 Bounds on the noise variance

Let us now consider the stochastic part V k ( noise ) ( τ k ( t , ω ) , t , ω ) . It has zero mean, and its variance (27) obeys the bounds:

$$e^{-2\frac{t - \tau_k(t,\omega)}{\tau_{M,k}}}\sigma_R^2 + \frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2\left(1 - e^{-2\frac{t - \tau_k(t,\omega)}{\tau_{M,k}}}\right) \leq \sigma_k^2\big(\tau_k(t,\omega), t, \omega\big) \leq e^{-2\frac{t - \tau_k(t,\omega)}{\tau_{L,k}}}\sigma_R^2 + \frac{\tau_{L,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2\left(1 - e^{-2\frac{t - \tau_k(t,\omega)}{\tau_{L,k}}}\right).$$

If $\sigma_R^2 < \frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$, the left-hand side is an increasing function of $u = t - \tau_k(t,\omega) \geq 0$, so that the minimum, $\sigma_R^2$, is reached for $u = 0$ while the maximum, $\frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$, is reached as $u \to +\infty$. The opposite holds if $\sigma_R^2 \geq \frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$. The same argument holds mutatis mutandis for the right-hand side. We set:

$$\sigma_k^- \stackrel{\mathrm{def}}{=} \min\left(\frac{\sigma_B}{C_k}\sqrt{\frac{\tau_{M,k}}{2}},\ \sigma_R\right); \qquad \sigma_k^+ \stackrel{\mathrm{def}}{=} \max\left(\frac{\sigma_B}{C_k}\sqrt{\frac{\tau_{L,k}}{2}},\ \sigma_R\right)$$

so that:

Proposition 3 $$0 < \sigma_k^- \leq \sigma_k\big(\tau_k(t,\omega), t, \omega\big) \leq \sigma_k^+ < +\infty.$$

5.4 The limit τ k ( t , ω ) → −∞

For fixed s and t, there are infinitely many rasters such that τ k ( t , ω ) < s (recall that rasters are infinite sequences). One may argue that, taking the difference t − s sufficiently large, the probability of such sequences should vanish. It is indeed possible to show (Section 8.1) that this probability vanishes exponentially fast with t − s, meaning unfortunately that it is positive whatever t − s. So we have to consider cases where τ k ( t , ω ) can go arbitrarily far in the past (this is also a key toward an extension of the present analysis to more general conductance-based models, as discussed in Section 9.3). Therefore, we have to check that the quantities introduced in the previous sections are well defined as τ k ( t , ω ) → −∞.

Fix s real. For all ω such that $\tau_k(t, \omega) \leq s$ - this condition ensuring that k does not fire between s and t - we have, from (28), (31), $0 \leq \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega) \leq \alpha^+ e^{-\frac{t - t_1}{\tau_{L,k}}}$, $t_1 \in [s, t]$. Now, since $\lim_{s \to -\infty}\int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1 = \tau_{L,k}$ exists, the limit

$$\lim_{\tau_k(t,\omega) \to -\infty} V_k^{(\mathrm{syn})}\big(\tau_k(t,\omega), t, \omega\big) = \frac{1}{C_k}\sum_{j=1}^{N} W_{kj}\int_{-\infty}^{t} \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1$$

exists as well. The same holds for the external term V k ( ext ) ( τ k ( t , ω ) , t , ω ) .

Finally, since $\Gamma_k(\tau_k(t,\omega), t, \omega) \to 0$ as $\tau_k(t,\omega) \to -\infty$, the noise term (26) becomes, in the limit:

$$\lim_{\tau_k(t,\omega) \to -\infty} V_k^{(\mathrm{noise})}\big(\tau_k(t,\omega), t, \omega\big) = \frac{\sigma_B}{C_k}\int_{-\infty}^{t} \Gamma_k(t_1, t, \omega)\, dB_k(t_1),$$

which is a Gaussian process with mean 0 and variance $\left(\frac{\sigma_B}{C_k}\right)^2 \int_{-\infty}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1$, which obeys the bounds (37).

6 Continuity with respect to a raster

6.1 Definition

Due to the particular structure of gIF models, we have seen that the membrane potential at time t is both a function of t and of the full sequence of past spikes $\omega_{-\infty}^{[t]}$. One expects, however, the dependence with respect to the past spikes to decay as those spikes are more distant in the past. This issue is related to a notion of continuity with respect to a raster that we now characterize.

Definition 1 Let m be a positive integer. The m-variation of a function $f(t, \omega) \equiv f(t, \omega_{-\infty}^{[t]})$ is:

$$\mathrm{var}_m[f(t, \cdot)] = \sup\Big\{\big|f(t, \omega) - f(t, \omega')\big| :\ \omega \overset{m,[t]}{=} \omega'\Big\},$$

where the definition of $\overset{m,[t]}{=}$ is given in Equation (5). Hence, this notion characterizes the maximal variation of $f(t, \cdot)$ on the set of rasters whose spikes are identical from time $[t] - m$ to time $[t]$ (cylinder set). It implements the fact that one may truncate the spike history at time $[t] - m$ and make an error which is at most $\mathrm{var}_m[f(t, \cdot)]$.

Definition 2 The function $f(t, \omega)$ is continuous if $\mathrm{var}_m[f(t, \cdot)] \to 0$ as $m \to +\infty$.

Additional information is provided by the rate of convergence to 0 with m. The faster this convergence, the smaller the error made when replacing an infinite raster by a spike block on a finite time horizon.
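A small Monte-Carlo sketch can make this definition concrete: take a toy function of the past (a single-neuron, discrete-time analogue of the conductance (17)), fix the most recent m + 1 spiking states, randomize the earlier past, and watch the spread of the values shrink as m grows. The kernel, window length, and sample size are arbitrary choices of the example, and sampling only yields a lower bound on the supremum in Definition 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_of_past(omega, tau=2.0):
    """Toy function of the past: sum over past spikes of an exponentially
    decaying kernel; omega is a finite 0/1 array, last entry = time [t]."""
    ages = np.arange(len(omega) - 1, -1, -1)
    return float(np.sum(omega * np.exp(-ages / tau)))

def estimate_var_m(m, T=60, samples=200, tau=2.0):
    """Lower bound on var_m[g]: keep the last m + 1 states fixed, randomize the
    earlier past, and return the spread of the observed values."""
    recent = rng.integers(0, 2, size=m + 1)
    vals = [g_of_past(np.concatenate([rng.integers(0, 2, size=T - m - 1), recent]), tau)
            for _ in range(samples)]
    return max(vals) - min(vals)

for m in (0, 5, 10, 20):
    print(m, estimate_var_m(m))   # the spread decays roughly like exp(-m/tau)
```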

6.2 Continuity of conductances

Proposition 4 The conductance g k ( t , ω ) is continuous in ω, for all t, for all k = 1 , , N .

Proof Fix $k = 1, \dots, N$, $t \in \mathbb{R}$, and $m > 0$ integer. We have, for $\omega \overset{m,[t]}{=} \omega'$:

$$\big|\alpha_{kj}(t, \omega) - \alpha_{kj}(t, \omega')\big| = \left|\sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big) - \sum_{\{r;\ t_j^{(r)}(\omega') < t\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega')\big)\right| = \left|\sum_{\{r;\ t_j^{(r)}(\omega) < t - m\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big) - \sum_{\{r;\ t_j^{(r)}(\omega') < t - m\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega')\big)\right|,$$

since the sets of firing times $\{t - m \leq t_j^{(r)}(\omega) < t\}$ and $\{t - m \leq t_j^{(r)}(\omega') < t\}$ are identical by hypothesis. So, since $\alpha_{kj}(x) \geq 0$,

$$\big|\alpha_{kj}(t, \omega) - \alpha_{kj}(t, \omega')\big| \leq \sum_{\{r;\ t_j^{(r)}(\omega) < t - m\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega)\big) + \sum_{\{r;\ t_j^{(r)}(\omega') < t - m\}} \alpha_{kj}\big(t - t_j^{(r)}(\omega')\big) \leq 2\sum_{r < t - m} \alpha_{kj}(t - r).$$

Therefore, as $m \to +\infty$, from (12) and setting $M = t - m$,

$$\mathrm{var}_m[\alpha_{kj}(t, \cdot)] \leq 2\sum_{r < M} \alpha_{kj}(t - r) \lesssim 2\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}},$$

which converges to 0 as $m \to +\infty$.

Therefore, from (17), g k ( t , ω ) is continuous with a variation

$$\mathrm{var}_m[g_k(t, \cdot)] \lesssim 2\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}},$$

which converges exponentially fast to 0 as $m \to +\infty$. □

6.3 Continuity of the membrane potentials

Proposition 5 The deterministic part of the membrane potential, V k ( det ) ( τ k ( t , ω ) , t , ω ) , is continuous and its m-variation decays exponentially fast with m.

Proof In the proof, we establish precise upper bounds for the variation of $V_k^{(\mathrm{syn})}(\tau_k(t,\cdot), t, \cdot)$ and $V_k^{(\mathrm{ext})}(\tau_k(t,\cdot), t, \cdot)$, since they are used later on in the proof of uniqueness of the Gibbs measure (Section 7.4.1). From the previous result, it is easy to show that, for all $\omega \overset{m,[t]}{=} \omega'$ and $t_1 \leq t_2 \leq t$:

$$\big|\Gamma_k(t_1, t_2, \omega) - \Gamma_k(t_1, t_2, \omega')\big| \leq \Gamma_k(t_1, t_2, \omega)\left(e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1\right).$$

Therefore, from (31),

$$\mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)] \leq e^{-\frac{t_2 - t_1}{\tau_{L,k}}}\left(e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1\right)$$

and Γ k ( t 1 , t 2 , ω ) is continuous in ω.

Now, the product Γ k ( t 1 , t 2 , ω ) α k j ( t 1 , ω ) is continuous as a product of continuous functions. Moreover,

$$\mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] \leq \sup_{\omega \in X}\Gamma_k(t_1, t_2, \omega)\, \mathrm{var}_m[\alpha_{kj}(t_1, \cdot)] + \sup_{\omega \in X}\alpha_{kj}(t_1, \omega)\, \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)] \leq e^{-\frac{t_2 - t_1}{\tau_{L,k}}}\left[\mathrm{var}_m[\alpha_{kj}(t_1, \cdot)] + \left(e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1\right)\alpha_{kj}(t_1, \Omega_1)\right],$$

so that:

$$\mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] < e^{-\frac{t_2 - t_1}{\tau_{L,k}}}\left[2\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \left(e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1\right)\alpha^+\right].$$

Since, as $m \to +\infty$:

$$e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \sim \frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1) \lesssim \frac{2(t_2 - t_1)}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}},$$

we have,

$$\mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] \lesssim 2\, e^{-\frac{t_2 - t_1}{\tau_{L,k}}}\left[P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ (t_2 - t_1)}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}\right],$$

which converges to 0 as $m \to +\infty$.

Let us show the continuity of V k ( syn ) ( τ k ( t , ) , t , ) . We have, from (20),

$$\big|V_k^{(\mathrm{syn})}\big(\tau_k(t,\omega), t, \omega\big) - V_k^{(\mathrm{syn})}\big(\tau_k(t,\omega'), t, \omega'\big)\big| \leq \frac{1}{C_k}\sum_{j=1}^{N}|W_{kj}| \times \left|\int_{\tau_k(t,\omega)}^{t}\Gamma_k(t_1, t, \omega)\,\alpha_{kj}(t_1, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t}\Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\, dt_1\right|.$$

The following inequality is used at several places in the paper. For a t 1 -integrable function f ( t 1 , t , ω ) , we have:

$$\left|\int_{\tau_k(t,\omega)}^{t} f(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t} f(t_1, t, \omega')\, dt_1\right| \leq \int_{\tau_k(t,\omega)}^{t}\big|f(t_1, t, \omega) - f(t_1, t, \omega')\big|\, dt_1 + \left|\int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)} f(t_1, t, \omega')\, dt_1\right|.$$

Here, it gives, for $t_1 \leq t$:

$$\left|\int_{\tau_k(t,\omega)}^{t}\Gamma_k(t_1, t, \omega)\,\alpha_{kj}(t_1, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t}\Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\, dt_1\right| \leq \int_{\tau_k(t,\omega)}^{t}\big|\Gamma_k(t_1, t, \omega)\,\alpha_{kj}(t_1, \omega) - \Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\big|\, dt_1 + \left|\int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)}\Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\, dt_1\right|.$$

For the first term, we have,

$$\int_{\tau_k(t,\omega)}^{t}\big|\Gamma_k(t_1, t, \omega)\,\alpha_{kj}(t_1, \omega) - \Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\big|\, dt_1 \leq \int_{\tau_k(t,\omega)}^{t}\mathrm{var}_m[\Gamma_k(t_1, t, \cdot)\,\alpha_{kj}(t_1, \cdot)]\, dt_1 \leq \int_{-\infty}^{t}\mathrm{var}_m[\Gamma_k(t_1, t, \cdot)\,\alpha_{kj}(t_1, \cdot)]\, dt_1$$
$$\lesssim \int_{-\infty}^{t} 2\, e^{-\frac{t - t_1}{\tau_{L,k}}}\left[P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ (t - t_1)}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}\right] dt_1$$
$$= 2\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}\int_{-\infty}^{t} e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1 + \frac{2\alpha^+}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}\int_{-\infty}^{t}(t - t_1)\, e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1$$
$$= 2\,\tau_{L,k}\left[P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ \tau_{L,k}}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}\right].$$

Let us now consider the second term. If $\tau_k(t,\omega) \geq t - m$ or $\tau_k(t,\omega') \geq t - m$, then $\tau_k(t,\omega) = \tau_k(t,\omega')$ and this term vanishes. Therefore, the supremum in the definition of $\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t,\cdot), t, \cdot)]$ is attained if $\tau_k(t,\omega) < t - m$ and $\tau_k(t,\omega') < t - m$. We may assume, without loss of generality, that $\tau_k(t,\omega') \leq \tau_k(t,\omega)$. Then, from (32),

$$\int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)}\Gamma_k(t_1, t, \omega')\,\alpha_{kj}(t_1, \omega')\, dt_1 < \alpha^+\int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)} e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1 = \alpha^+ \tau_{L,k}\, e^{-\frac{t - \tau_k(t,\omega)}{\tau_{L,k}}}\left(1 - e^{-\frac{\tau_k(t,\omega) - \tau_k(t,\omega')}{\tau_{L,k}}}\right) \leq \alpha^+ \tau_{L,k}\, e^{-\frac{t - \tau_k(t,\omega)}{\tau_{L,k}}} \leq \alpha^+ \tau_{L,k}\, e^{-\frac{m}{\tau_{L,k}}}.$$

So, we have, for the variation of V k ( syn ) ( τ k ( t , ) , t , ) , using (21):

$$\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t,\cdot), t, \cdot)] \lesssim \frac{1}{g_{L,k}}\sum_{j=1}^{N}|W_{kj}|\left[2\left(P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+}{g_{L,k}}\sum_{j'=1}^{N} G_{kj'}\, P_d\!\left(\frac{m}{\tau_{kj'}}\right) e^{-\frac{m}{\tau_{kj'}}}\right) + \alpha^+ e^{-\frac{m}{\tau_{L,k}}}\right],$$

so that finally,

$$\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t,\cdot), t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\mathrm{syn})}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + B_k^{(\mathrm{syn})}\, e^{-\frac{m}{\tau_{L,k}}},$$

with

$$A_{kj}^{(\mathrm{syn})} = \frac{1}{g_{L,k}}\left(2\,|W_{kj}| + \frac{\alpha^+ G_{kj}}{g_{L,k}}\sum_{j'=1}^{N}|W_{kj'}|\right),$$

$$B_k^{(\mathrm{syn})} = \frac{\alpha^+}{g_{L,k}}\sum_{j=1}^{N}|W_{kj}|,$$

and $\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t,\cdot), t, \cdot)]$ converges to 0 exponentially fast as $m \to +\infty$.

Now, let us show the continuity of V k ( ext ) ( τ k ( t , ω ) , t , ω ) with respect to ω. We have:

$$\big|V_k^{(\mathrm{ext})}\big(\tau_k(t,\omega), t, \omega\big) - V_k^{(\mathrm{ext})}\big(\tau_k(t,\omega'), t, \omega'\big)\big| = \left|\int_{\tau_k(t,\omega)}^{t}\left(\frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k}\right)\Gamma_k(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t}\left(\frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k}\right)\Gamma_k(t_1, t, \omega')\, dt_1\right|$$
$$\leq \int_{\tau_k(t,\omega)}^{t}\left|\frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k}\right|\,\big|\Gamma_k(t_1, t, \omega) - \Gamma_k(t_1, t, \omega')\big|\, dt_1 + \int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)}\left|\frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k}\right|\,\Gamma_k(t_1, t, \omega')\, dt_1$$
$$\leq \left(\frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k}\right)\times\left(\int_{\tau_k(t,\omega)}^{t}\big|\Gamma_k(t_1, t, \omega) - \Gamma_k(t_1, t, \omega')\big|\, dt_1 + \int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)}\Gamma_k(t_1, t, \omega')\, dt_1\right)$$
$$\leq \left(\frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k}\right)\left(\int_{\tau_k(t,\omega)}^{t}\mathrm{var}_m[\Gamma_k(t_1, t, \cdot)]\, dt_1 + \int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)} e^{-\frac{t - t_1}{\tau_{L,k}}}\, dt_1\right)$$
$$\lesssim \left(\frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k}\right)\left(\frac{2\,\tau_{L,k}^2}{C_k}\sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \tau_{L,k}\, e^{-\frac{m}{\tau_{L,k}}}\right),$$

where, in the last inequality, we have used that the supremum in the variation is attained for $\tau_k(t,\omega) < t - m$ and $\tau_k(t,\omega') < t - m$. Finally:

$$\mathrm{var}_m[V_k^{(\mathrm{ext})}(\tau_k(t,\cdot), t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\mathrm{ext})}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + B_k^{(\mathrm{ext})}\, e^{-\frac{m}{\tau_{L,k}}},$$

where,

$$A_{kj}^{(\mathrm{ext})} = \frac{2\, G_{kj}}{g_{L,k}}\, B_k^{(\mathrm{ext})},$$

$$B_k^{(\mathrm{ext})} = |E_L| + \frac{i^+}{g_{L,k}},$$

and V k ( ext ) ( τ k ( t , ) , t , ) is continuous.

As a conclusion, V k ( det ) ( τ k ( t , ) , t , ) is continuous as the sum of two continuous functions. □

6.4 Continuity of the variance of V k ( noise ) ( τ k ( t , ) , t , )

Using the same type of arguments, one can also prove that

Proposition 6 The variance σ k ( τ k ( t , ω ) , t , ω ) is continuous in ω, for all t, for all k = 1 , , N .

Proof We have, from (27)

$$\big|\sigma_k^2\big(\tau_k(t,\omega), t, \omega\big) - \sigma_k^2\big(\tau_k(t,\omega'), t, \omega'\big)\big| \leq \sigma_R^2\, \mathrm{var}_m[\Gamma_k^2(\tau_k(t,\cdot), t, \cdot)] + \left(\frac{\sigma_B}{C_k}\right)^2\left|\int_{\tau_k(t,\omega)}^{t}\Gamma_k^2(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t}\Gamma_k^2(t_1, t, \omega')\, dt_1\right|.$$

For the first term, we have that the sup in $\mathrm{var}_m[\Gamma_k^2(\tau_k(t,\cdot), t, \cdot)]$ is attained for $\tau_k(t,\omega), \tau_k(t,\omega') < t - m$ and:

$$\mathrm{var}_m[\Gamma_k^2(\tau_k(t,\cdot), t, \cdot)] \leq 2\sup\big\{\Gamma_k^2\big(\tau_k(t,\omega), t, \omega\big);\ \omega \text{ s.t. } \tau_k(t,\omega) < t - m\big\} \leq 2\, e^{-\frac{2m}{\tau_{L,k}}}.$$

For the second term, we have:

$$\left|\int_{\tau_k(t,\omega)}^{t}\Gamma_k^2(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t,\omega')}^{t}\Gamma_k^2(t_1, t, \omega')\, dt_1\right| \leq \int_{\tau_k(t,\omega)}^{t}\big|\Gamma_k^2(t_1, t, \omega) - \Gamma_k^2(t_1, t, \omega')\big|\, dt_1 + \int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)}\Gamma_k^2(t_1, t, \omega')\, dt_1$$
$$\leq \int_{\tau_k(t,\omega)}^{t}\mathrm{var}_m[\Gamma_k^2(t_1, t, \cdot)]\, dt_1 + \int_{\tau_k(t,\omega')}^{\tau_k(t,\omega)} e^{-\frac{2(t - t_1)}{\tau_{L,k}}}\, dt_1$$
$$\leq \int_{-\infty}^{t} e^{-\frac{2(t - t_1)}{\tau_{L,k}}}\left(e^{\frac{2\,\mathrm{var}_m[g_k(t,\cdot)]\,(t - t_1)}{C_k}} - 1\right) dt_1 + \frac{\tau_{L,k}}{2}\, e^{-\frac{2m}{\tau_{L,k}}} \lesssim \frac{\tau_{L,k}^2}{2\, C_k}\, \mathrm{var}_m[g_k(t, \cdot)] + \frac{\tau_{L,k}}{2}\, e^{-\frac{2m}{\tau_{L,k}}},$$

so that finally,

$$\mathrm{var}_m[\sigma_k^2(t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\sigma)}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + C_k^{(\sigma)}\, e^{-\frac{2m}{\tau_{L,k}}},$$

with

$$A_{kj}^{(\sigma)} = \frac{G_{kj}}{g_{L,k}}\left(\frac{\sigma_B\sqrt{\tau_{L,k}}}{C_k}\right)^2, \qquad C_k^{(\sigma)} = \frac{1}{2}\left(\frac{\sigma_B\sqrt{\tau_{L,k}}}{C_k}\right)^2 + 2\,\sigma_R^2,$$

and continuity follows. □

6.5 Remark

Note that the variation in all quantities considered here is exponentially decaying with a time constant given by max ( τ k j , τ L , k ) . This is physically satisfactory: the loss of memory in the system is controlled by the leak time and the decay of the post-synaptic potential.

7 Statistics of raster plots

7.1 Conditional probability distribution of V k ( t )

Recall that P is the joint distribution of the noise and E [ ] the expectation under P. Under P, the membrane potential V is a stochastic process whose evolution below the threshold is given by Equations (24), (25) and, upon reaching the threshold, by (4). It follows from the previous analysis that:

Proposition 7 Conditionally to $\omega_{-\infty}^{[t]}$, V ( t ) is Gaussian with mean:

$$E\big[V_k(t)\,\big|\,\omega_{-\infty}^{[t]}\big] = V_k^{(\mathrm{det})}\big(\tau_k(t,\omega), t, \omega\big), \qquad k = 1, \dots, N,$$

and covariance:

$$\mathrm{Cov}\big[V_k(t), V_l(t)\,\big|\,\omega_{-\infty}^{[t]}\big] = \sigma_k^2\big(\tau_k(t,\omega), t, \omega\big)\,\delta_{kl}, \qquad k, l = 1, \dots, N,$$

where σ k 2 ( τ k ( t , ω ) , t , ω ) is given by (27).

Moreover, the V k ( t ) ’s, k = 1 , , N are conditionally independent.

Proof Essentially, the proof is a direct consequence of Equations (24), (25) and the Gaussian nature of the noise V k ( noise ) ( τ k ( t , ω ) , t , ω ) . The conditional independence results from the fact that:

$$\mathrm{Cov}\big[V_k(t), V_l(t)\,\big|\,\omega_{-\infty}^{[t]}\big] = \frac{\sigma_B^2}{C_k C_l}\, E\left[\int_{\tau_k(t,\omega)}^{t}\Gamma_k(t_1, t, \omega)\, dB_k(t_1)\int_{\tau_l(t,\omega)}^{t}\Gamma_l(t_2, t, \omega)\, dB_l(t_2)\,\Big|\,\omega\right] + \mathrm{Cov}\big[\Gamma_k(\tau_k(t,\omega), t, \omega)\, V_{\mathrm{reset}},\ \Gamma_l(\tau_l(t,\omega), t, \omega)\, V_{\mathrm{reset}}\big]$$
$$= \frac{\sigma_B^2}{C_k C_l}\int_{\tau_k(t,\omega)}^{t}\int_{\tau_l(t,\omega)}^{t}\Gamma_k(t_1, t, \omega)\,\Gamma_l(t_2, t, \omega)\, E\big[dB_k(t_1)\, dB_l(t_2)\big] + \sigma_R^2\,\Gamma_k^2\big(\tau_k(t,\omega), t, \omega\big)\,\delta_{kl}$$
$$= \delta_{kl}\left[\left(\frac{\sigma_B}{C_k}\right)^2\int_{\tau_k(t,\omega)}^{t}\int_{\tau_k(t,\omega)}^{t}\Gamma_k(t_1, t, \omega)\,\Gamma_k(t_2, t, \omega)\,\delta(t_1 - t_2)\, dt_1\, dt_2 + \sigma_R^2\,\Gamma_k^2\big(\tau_k(t,\omega), t, \omega\big)\right] = \sigma_k^2\big(\tau_k(t,\omega), t, \omega\big)\,\delta_{kl}.$$

 □

7.2 The transition probability

We now compute the probability of a spiking pattern ω ( n ) at time n = [ t ] , given the past sequence $\omega_{-\infty}^{n-1}$.

Proposition 8 The probability of ω ( n ) conditionally to $\omega_{-\infty}^{n-1}$ is given by:

$$P\big[\omega(n)\,\big|\,\omega_{-\infty}^{n-1}\big] = \prod_{k=1}^{N} P\big[\omega_k(n)\,\big|\,\omega_{-\infty}^{n-1}\big],$$ with

$$P\big[\omega_k(n)\,\big|\,\omega_{-\infty}^{n-1}\big] = \omega_k(n)\,\pi\big(X_k(n-1, \omega)\big) + \big(1 - \omega_k(n)\big)\big(1 - \pi\big(X_k(n-1, \omega)\big)\big),$$ where

$$X_k(n-1, \omega) = \frac{\theta - V_k^{(\mathrm{det})}\big(\tau_k(n-1, \omega), n-1, \omega\big)}{\sigma_k\big(\tau_k(n-1, \omega), n-1, \omega\big)},$$ and

$$\pi(x) = \frac{1}{\sqrt{2\pi}}\int_x^{+\infty} e^{-\frac{u^2}{2}}\, du.$$

Proof We have, using the conditional independence of the V k ( n ) ’s:

$$P\big[\omega(n)\,\big|\,\omega_{-\infty}^{n-1}\big] = \prod_{k=1}^{N}\Big(\omega_k(n)\, P\big[V_k(n-1) \geq \theta\,\big|\,\omega_{-\infty}^{n-1}\big] + \big(1 - \omega_k(n)\big)\, P\big[V_k(n-1) < \theta\,\big|\,\omega_{-\infty}^{n-1}\big]\Big).$$

Since the V k ( n 1 ) ’s are conditionally Gaussian, with mean V k ( det ) ( τ k ( n 1 , ω ) , n 1 , ω ) and variance σ k 2 ( τ k ( n 1 , ω ) , n 1 , ω ) , we directly obtain (47), (48).

Note that since $\sigma_k(\tau_k(n-1,\omega), n-1, \omega)$ is bounded from below by a positive quantity (see (37)), the ratio $\frac{\theta - V_k^{(\mathrm{det})}(\tau_k(n-1,\omega), n-1, \omega)}{\sigma_k(\tau_k(n-1,\omega), n-1, \omega)}$ in (48) is defined for all $\omega \in X$. □
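For illustration, a minimal sketch of how the transition probability of Proposition 8 can be evaluated once the conditional mean and standard deviation of each neuron are known (for instance, computed as in the integration sketches of Section 4); the numerical values below are purely hypothetical:

```python
import math

def pi_tail(x):
    """pi(x) of (49): probability that a standard Gaussian exceeds x."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_spike(theta, v_det, sigma_k):
    """P[omega_k(n) = 1 | past] from (47)-(48): probability that the conditionally
    Gaussian membrane potential (mean v_det, std sigma_k) exceeds the threshold."""
    return pi_tail((theta - v_det) / sigma_k)

def p_pattern(omega_n, theta, v_det, sigma):
    """P[omega(n) | past]: product over neurons, by conditional independence
    (Proposition 7).  omega_n, v_det, sigma are per-neuron sequences."""
    p = 1.0
    for w, v, s in zip(omega_n, v_det, sigma):
        q = p_spike(theta, v, s)
        p *= q if w == 1 else (1.0 - q)
    return p

print(p_pattern([1, 0], theta=1.0, v_det=[0.8, 0.2], sigma=[0.3, 0.3]))
```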

7.3 Chains with complete connections

The transition probabilities (47) define a stochastic process on the set of raster plots where the underlying membrane potential dynamics is summarized in the terms $V_k^{(\mathrm{det})}(\tau_k(n-1,\omega), n-1, \omega)$ and $\sigma_k(\tau_k(n-1,\omega), n-1, \omega)$. While the integral defining these terms extends from $\tau_k(n-1,\omega)$ to $n-1$, where $\tau_k(n-1,\omega)$ can go arbitrarily far into the past, the integrand involves the conductance $g_k(n-1,\omega)$, which summarizes a history dating back to $s = -\infty$. As a consequence, the probability transitions generate a stochastic process with unbounded memory, thus non-Markovian. One may argue that this property results from our procedure of taking the initial condition in an infinite past, $s \to -\infty$, to remove the unresolved dependence on $g_k(s)$ (Section 3.4). So the alternative is the following: either we keep s finite in order to have a Markovian process, and then we have to fix arbitrarily $g_k(s)$ and the probability distribution of $V_k(s)$; or we take $s \to -\infty$, removing the initial condition, at the price of considering a non-Markovian process. Actually, such processes are widely studied in the literature under the name of “chains with complete connections” 43 46 47 48 49 and several important results can be used here. So we adopt the second approach of the alternative. As a by-product, the knowledge of the Gibbs measure provided by this analysis allows a posteriori to fix the probability distribution of $V_k(s)$.

For the sake of completeness, we give here the definition of a chain with complete connections (see 43 for more details). For $n\in\mathbb{Z}$, we denote by $\mathcal{A}_{-\infty}^{n-1}$ the set of sequences $\omega_{-\infty}^{n-1}$ and by $\mathcal{F}_{-\infty}^{n-1}$ the associated σ-algebra, while $\mathcal{F}$ is the σ-algebra associated with $X=\mathcal{A}^{\mathbb{Z}}$. $\mathcal{P}(X,\mathcal{F})$ is the set of probability measures on $(X,\mathcal{F})$.

Definition 3 A system of transition probabilities is a family $\{P_n\}_{n\in\mathbb{Z}}$ of functions

\[ P_n[\,\cdot\,|\,\cdot\,] : \mathcal{A}\times\mathcal{A}_{-\infty}^{n-1} \to [0,1], \]

such that the following conditions hold for every $n\in\mathbb{Z}$:

• For every $\omega(n)\in\mathcal{A}$, the function $P_n[\omega(n)\,|\,\cdot\,]$ is measurable with respect to $\mathcal{F}_{-\infty}^{n-1}$.

• For every $\omega_{-\infty}^{n-1}\in\mathcal{A}_{-\infty}^{n-1}$,

\[ \sum_{\omega(n)\in\mathcal{A}} P_n\bigl[\omega(n)\,\big|\,\omega_{-\infty}^{n-1}\bigr] = 1. \]

A probability measure μ in $\mathcal{P}(X,\mathcal{F})$ is consistent with a system of transition probabilities $\{P_n\}_{n\in\mathbb{Z}}$ if, for all $n\in\mathbb{Z}$ and all $\mathcal{F}_{-\infty}^{n}$-measurable functions $f$:

\[ \int f(\omega_{-\infty}^{n})\,\mu(d\omega) = \int \sum_{\omega(n)\in\mathcal{A}} f\bigl(\omega_{-\infty}^{n-1}\omega(n)\bigr)\,P_n\bigl[\omega(n)\,\big|\,\omega_{-\infty}^{n-1}\bigr]\,\mu(d\omega). \]

Such a measure μ is called a “chain with complete connections consistent with the system of transition probabilities $\{P_n\}_{n\in\mathbb{Z}}$”.

The transition probabilities (47) constitute such a system of transition probabilities: the summation to 1 is obvious, while measurability follows from the continuity of $P_n[\omega(n)\,|\,\omega_{-\infty}^{n-1}]$ proved below. To simplify notation, we write $p(n,\omega)$ instead of $P_n[\omega(n)\,|\,\omega_{-\infty}^{n-1}]$ whenever there is no ambiguity.

7.4 Existence of a consistent probability measure μ

In the definition above, the measure μ summarizes the statistics of spike trains from −∞ to +∞. Its marginals allow the characterization of finite spike blocks. So, μ provides the characterization of spike train statistics in gIF models. Its existence is established by a standard result in the framework of chains with complete connections, stating that a system of continuous transition probabilities on a compact space has at least one probability measure consistent with it 43. Since π is a continuous function, the continuity of $p(n,\omega)$ with respect to ω follows from the continuity of $V_k^{(det)}(\tau_k(n-1,\omega),n-1,\omega)$ and of $\sigma_k(\tau_k(n-1,\omega),n-1,\omega)$, proved in Section 6.

Therefore, there is at least one probability measure consistent with (47).

7.4.1 The Gibbs distribution

A system of transition probabilities is non-null if, for all $n\in\mathbb{Z}$ and all $\omega_{-\infty}^{n-1}\in\mathcal{A}_{-\infty}^{n-1}$, $P[\omega(n)\,|\,\omega_{-\infty}^{n-1}] > 0$. Following 50, a chain with complete connections μ is a Gibbs measure consistent with the system of transition probabilities $p(n,\cdot)$ if this system is continuous and non-null. Gibbs distributions play an important role in statistical physics, as well as in ergodic theory and stochastic processes. In statistical physics, they are usually derived from the maximal entropy principle 14. Here, we use them in a more general context, allowing us to consider non-stationary processes. It turns out that the spike train statistics in the gIF model are given by such a Gibbs measure. In this section, we prove the main mathematical result of this paper (uniqueness of the Gibbs measure). The consequences for the characterization of spike trains are discussed in the next section.

Theorem 1 For each choice of parameters $\gamma\in\mathcal{H}$, the gIF model (16) has a unique Gibbs distribution.

The proof of uniqueness is based on the following criterion due to Fernandez and Maillard 50.

Proposition 9 Let²:

\[ m(p) = \inf_{n\in\mathbb{Z}}\ \inf_{\omega\in\mathcal{A}_{-\infty}^{n}} p(n,\omega), \]

and

\[ v(p) = \sup_{m\in\mathbb{Z}}\ \sum_{n\geq m} \mathrm{var}_{n-m}\bigl[p(n,\cdot)\bigr]. \]

If $m(p) > 0$ and $v(p) < \infty$, then there exists at most one Gibbs measure consistent with it.

So, to prove the uniqueness, we only have to establish that

\[ m(p) > 0, \]

\[ v(p) < +\infty. \]

Proof $m(p) > 0$.

Recall that:

\[ p(n,\omega) = \prod_{k=1}^{N}\Bigl[\omega_k(n)\,\pi\bigl(X_k(n-1,\omega)\bigr) + \bigl(1-\omega_k(n)\bigr)\bigl(1-\pi\bigl(X_k(n-1,\omega)\bigr)\bigr)\Bigr]. \]

From (35), (37), we have:

\[ -\infty < \frac{\theta - V_k^{+}}{\sigma_k^{+}} < X_k(n-1,\omega) < \frac{\theta - V_k^{-}}{\sigma_k^{-}} < +\infty. \]

Since π, given by (50), is monotonically decreasing, we have:

\[ 0 < \pi_k^{-} \overset{\mathrm{def}}{=} \pi\!\left(\frac{\theta - V_k^{-}}{\sigma_k^{-}}\right) < \pi\bigl(X_k(n-1,\omega)\bigr) < \pi_k^{+} \overset{\mathrm{def}}{=} \pi\!\left(\frac{\theta - V_k^{+}}{\sigma_k^{+}}\right) < 1, \]

so that:

\[ 0 < \Pi_k^{-} \overset{\mathrm{def}}{=} \min\bigl(\pi_k^{-},\,1-\pi_k^{+}\bigr) < \omega_k(n)\,\pi_k^{-} + \bigl(1-\omega_k(n)\bigr)\bigl(1-\pi_k^{+}\bigr) < p_k(n,\omega) < \omega_k(n)\,\pi_k^{+} + \bigl(1-\omega_k(n)\bigr)\bigl(1-\pi_k^{-}\bigr) < \Pi_k^{+} \overset{\mathrm{def}}{=} \max\bigl(\pi_k^{+},\,1-\pi_k^{-}\bigr) < 1. \]

Finally,

\[ m(p) > \prod_{k=1}^{N} \Pi_k^{-} > 0, \]

which proves (51). This also proves the non-nullness of the system of transition probabilities.

$v(p) < \infty$.

The proof, which is rather long, is given in the appendix. □

8 Consequences

8.1 The probability that neuron k does not fire in the time interval [s, t]

In Section 4.2, we argued that this probability vanishes exponentially fast with $t-s$. This probability is $\mu\bigl[\bigcap_{n=[s]+1}^{[t]}\{\omega_k(n)=0\}\bigr]$. We now prove this result.

Proposition 10 The probability that neuron k does not fire within the time interval $[s,t]$, $t-s>1$, has the following bounds:

\[ 0 < \Pi_{-}^{[t]-[s]} < \mu\Bigl[\bigcap_{n=[s]+1}^{[t]}\{\omega_k(n)=0\}\Bigr] < \Pi_{+}^{[t]-[s]} < 1, \]

for some constants $0 < \Pi_{-} < \Pi_{+} < 1$ depending on the system parameters $\gamma\in\mathcal{H}$.

Proof We have:

\[
\begin{aligned}
\mu\Bigl[\bigcap_{n=[s]+1}^{[t]}\{\omega_k(n)=0\}\Bigr]
&= \int_{\mathcal{A}_{-\infty}^{[s]}} \mu\Bigl[\bigcap_{n=[s]+1}^{[t]}\{\omega_k(n)=0\}\,\Big|\,\omega_{-\infty}^{[s]}\Bigr]\,d\mu(\omega) \\
&= \int_{\mathcal{A}_{-\infty}^{[s]}} \prod_{n=[s]+1}^{[t]} \mu\bigl[\{\omega_k(n)=0\}\,\big|\,\omega_{-\infty}^{n-1}\bigr]\,d\mu(\omega)
= \int_{\mathcal{A}_{-\infty}^{[s]}} \prod_{n=[s]+1}^{[t]} P\bigl[\omega_k(n)=0\,\big|\,\omega_{-\infty}^{n-1}\bigr]\,d\mu(\omega),
\end{aligned}
\]

where $P[\omega_k(n)=0\,|\,\omega_{-\infty}^{n-1}]$ is given by (47) and obeys the bounds (54). Therefore, setting $\Pi_{-} = \prod_{k=1}^{N}\Pi_k^{-}$ and $\Pi_{+} = \prod_{k=1}^{N}\Pi_k^{+}$, we have

\[ \Pi_{-}^{[t]-[s]} \int_{\mathcal{A}_{-\infty}^{[s]}} d\mu(\omega) = \Pi_{-}^{[t]-[s]} \;\leq\; \mu\bigl[\tau_k(t,\omega)\leq s\bigr] \;\leq\; \Pi_{+}^{[t]-[s]} \int_{\mathcal{A}_{-\infty}^{[s]}} d\mu(\omega) = \Pi_{+}^{[t]-[s]}. \]

 □
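For instance, letting $t\to+\infty$ and using the continuity of μ along decreasing sequences of events, the upper bound implies that, with probability one, each neuron fires infinitely often:

\[ \mu\Bigl[\bigcap_{n>[s]}\{\omega_k(n)=0\}\Bigr] = \lim_{t\to+\infty}\mu\Bigl[\bigcap_{n=[s]+1}^{[t]}\{\omega_k(n)=0\}\Bigr] \leq \lim_{t\to+\infty}\Pi_{+}^{[t]-[s]} = 0, \]

since $\Pi_{+}<1$.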

8.2 Back to spike trains analysis with the maximal entropy principle

Here, we briefly develop the consequences of our results in relation to the statistical model estimation discussed in the introduction. A more detailed discussion will be published elsewhere (in preparation and 51). Set:

\[ \phi(n,\omega) \overset{\mathrm{def}}{=} \log p(n,\omega) = \sum_{k=1}^{N} \phi_k(n,\omega), \]

with

\[ \phi_k(n,\omega) \overset{\mathrm{def}}{=} \omega_k(n)\,\log\pi\bigl(X_k(n-1,\omega)\bigr) + \bigl(1-\omega_k(n)\bigr)\,\log\bigl(1-\pi\bigl(X_k(n-1,\omega)\bigr)\bigr). \]

The function ϕ is a Gibbs potential 52. Indeed, we have, for all $m<n$ and all $\omega_{-\infty}^{n}$:

\[ \mu\bigl[\omega_{m}^{n}\,\big|\,\omega_{-\infty}^{m-1}\bigr] = \exp\sum_{l=m}^{n}\phi(l,\omega). \]

This equation emphasizes the connection with Gibbs distributions in statistical physics, which considers probability distributions on multidimensional lattices with specified boundary conditions and their behavior under space translations 52. The correspondence with our case is that “time” plays the role of a one-dimensional lattice and the “boundary conditions” are the past $\omega_{-\infty}^{m-1}$. Note that in our case the partition function is equal to 1.
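A companion sketch of this relation in Python, with the same conventions as the previous sketch; V_det_seq and sigma_seq are placeholders for quantities computed from the model and past raster, not part of the original formalism. It illustrates that the probability of a finite block conditioned on the past is the exponential of the summed potential, so that no extra normalization constant appears.

```python
import numpy as np
from math import erfc, sqrt, log

def spike_probabilities(V_det, sigma, theta):
    # P[omega_k(n) = 1 | past] = pi(X_k), with pi the Gaussian tail function
    X = (theta - np.asarray(V_det)) / np.asarray(sigma)
    return np.array([0.5 * erfc(x / sqrt(2.0)) for x in X])

def block_log_probability(omega_block, V_det_seq, sigma_seq, theta):
    """log mu[omega_m^n | past] = sum_{l=m}^{n} phi(l, omega), with phi = log p.
    omega_block[l] is the 0/1 pattern at step l; V_det_seq[l], sigma_seq[l] are
    placeholders for V_k^(det) and sigma_k computed from the raster before step l."""
    total = 0.0
    for omega_l, V_det, sigma in zip(omega_block, V_det_seq, sigma_seq):
        p_spike = spike_probabilities(V_det, sigma, theta)
        p_l = np.prod(omega_l * p_spike + (1 - omega_l) * (1 - p_spike))
        total += log(p_l)
    return total
```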

For simplicity, assume stationarity (this is equivalent to assuming a time-independent external current). In this case, it is sufficient to consider the potential at time n = 0 .

Thanks to the bounds (53), one can make a series expansion of the functions $\log(\pi)$ and $\log(1-\pi)$ and rewrite the potential in the form of the expansion:

\[ \phi(0,\omega) = \sum_{r\geq 1}\ \sum_{(k_1,n_1),\dots,(k_r,n_r)\in\mathcal{P}(N,R)} \lambda_{(k_1,n_1),\dots,(k_r,n_r)}\ \omega_{k_1}(n_1)\cdots\omega_{k_r}(n_r), \]

where $\mathcal{P}(N,R)$ is the set of non-repeated pairs of integers $(k,n)$ with $k\in\{1,\dots,N\}$ and $n\in\{-R,\dots,0\}$. We call the product $\omega_{k_1}(n_1)\cdots\omega_{k_r}(n_r)$ a monomial. It is 1 if and only if neuron $k_1$ fires at time $n_1$, …, neuron $k_r$ fires at time $n_r$. The $\lambda_{(k_1,n_1),\dots,(k_r,n_r)}$'s are explicit functions of the parameters γ. Due to the causal form of the potential, where the time-0 spike, $\omega_k(0)$, is multiplied by a function of the past $\omega_{-\infty}^{-1}$, the polynomial expansion does not contain monomials of the form $\omega_{k_1}(0)\cdots\omega_{k_r}(0)$, $r>1$ (the corresponding coefficient λ vanishes).

Since the potential has infinite range, the expansion (57) contains infinitely many terms. One can nevertheless consider truncations to a range $R=D+1$, corresponding to truncating the memory of the process to some memory depth D. Although truncations with a memory depth D are approximations, the distance to the exact potential converges exponentially fast to 0 as $D\to+\infty$, thanks to the continuity of the potential, with a decay rate controlled by the synaptic responses and the leak rate.

The truncated Gibbs potential has the form:

\[ \phi^{(D)}\bigl(\omega_{-D}^{0}\bigr) = \sum_{l} \lambda_l\,\phi_l\bigl(\omega_{-D}^{0}\bigr); \]

where $l$ stands for $(k_1,n_1),\dots,(k_r,n_r)$ and enumerates the elements of $\mathcal{P}(N,D+1)$, and where $\phi_l$ is the corresponding monomial. Due to the truncation, (58), contrary to (55), is not normalized. Its partition function³ is not equal to 1, and its computation becomes rapidly intractable as the number of neurons and the memory depth increase.
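A brute-force sketch of this partition function, assuming the truncated potential is available as a callable phi_D (hypothetical here) evaluating $\phi^{(D)}$ on a block: the sum runs over the $2^N$ possible patterns $\omega(0)$, and the normalization must be recomputed for every past block $\omega_{-D}^{-1}$, which is where the intractability comes from.

```python
import numpy as np
from itertools import product

def partition_function(phi_D, past_block, N):
    """Z(omega_{-D}^{-1}) = sum over the 2^N patterns omega(0) of exp(phi_D(block)),
    where block = past_block extended by omega(0). phi_D is a user-supplied callable
    (hypothetical) evaluating the truncated potential on an (N, D+1) block."""
    total = 0.0
    for pattern in product((0, 1), repeat=N):
        block = np.column_stack([past_block, np.array(pattern)])
        total += np.exp(phi_D(block))
    return total
```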

Clearly, (58) is precisely the form of potential obtained under the maximal entropy principle, where the $\phi_l$'s are constraints of the type “neuron $k_1$ fires at time $n_1$, neuron $k_2$ fires at time $n_2$, …” and the $\lambda_l$'s are the conjugate Lagrange multipliers. Thus, using the maximal entropy principle to characterize spike statistics in the gIF model by expressing constraints in terms of spike events (monomials), one can at best find an approximation, which can be rather bad, especially if those constraints focus on instantaneous spike patterns ($D=0$) or short-memory patterns. Moreover, increasing the memory depth to better approach the correct statistics leads to an exponential increase in the number of monomials, which rapidly becomes intractable. Finally, the Lagrange multipliers $\lambda_l$ are rather difficult to interpret.
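To illustrate this combinatorial blow-up, a small counting sketch: every non-empty subset of the $N(D+1)$ spike events defines a candidate monomial, hence a Lagrange multiplier, giving $2^{N(D+1)}-1$ candidates (an upper bound, since a few are further excluded by the causal constraint mentioned above).

```python
# Number of candidate monomials for a truncated potential with N neurons and
# memory depth D: one per non-empty subset of the N*(D+1) spike events
# {neuron k fires at time n}, k = 1..N, n = -D..0.
def monomial_count(N, D):
    return 2 ** (N * (D + 1)) - 1

for N in (5, 10, 20):
    for D in (0, 1, 2):
        print(f"N={N:2d}, D={D}: {monomial_count(N, D):.3e} candidate monomials")
```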

By contrast, the analytic form (55) depends only on a finite number of parameters (γ) constraining the neural network dynamics, which have a straightforward interpretation as physical quantities. This shows that, at least in the gIF model, the linear Gibbs potential (58) obtained from the maximal entropy principle is not really appropriate, even for empirical/numerical purposes, and that a form (55) where the infinite memory $\omega_{-\infty}^{-1}$ is replaced by $\omega_{-D}^{-1}$ could be more efficient, although non-linear.

To finish this section, let us discuss the link with the Ising model in light of the present work. The Ising model corresponds to a memoryless case, hence to $D=0$. Since the causal structure of the Gibbs potential forbids monomials of the form $\omega_{k_1}(0)\cdots\omega_{k_r}(0)$, the $D=0$ expansion of the gIF-Gibbs potential corresponds to a Bernoulli distribution where neurons are independent: $\phi^{(0)}(0,\omega) = \sum_{k=1}^{N}\lambda_k\,\omega_k(0)$. The Ising model is therefore irrelevant for approximating the exact potential of the gIF model if one wants to reproduce spike statistics at the minimal discretization time scale δ without considering memory effects.

However, in real data analysis one usually bins the data, with a time window of width $w\approx 10$-$20\,\mathrm{ms}$. Binning consists of recoding the raster plot by amalgamating spikes. The binned raster $b$ consists of “spikes” $b_k(n)\in\{0,1\}$, where $b_k(n)=1$ if neuron $k$ fired at least once in the time window $[nw,(n+1)w[$. In the expansion (57), this corresponds to collecting all monomials corresponding to $b_k(n)=1$ into a single monomial. In this way, the binned potential does contain an Ising term, but one that mixes all spike events occurring within the time interval $w$. These events appear simultaneous because of binning, leading to the Ising pairwise term $b_{k_1}(0)\,b_{k_2}(0)$, while events occurring on smaller time scales are scrambled by this procedure.
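A minimal sketch of this binning operation, assuming the raster is stored as a 0/1 array of shape (N, T) with one column per time bin δ and the window width w is expressed in units of δ:

```python
import numpy as np

def bin_raster(raster, w):
    """Recode an (N, T) binary raster into a binned raster of shape (N, T // w):
    b[k, n] = 1 if neuron k fired at least once in the window [n*w, (n+1)*w)."""
    N, T = raster.shape
    n_bins = T // w                                  # drop an incomplete trailing window
    trimmed = raster[:, :n_bins * w].reshape(N, n_bins, w)
    return (trimmed.max(axis=2) > 0).astype(int)

# example: 3 neurons, 12 time steps, window of 4 steps
raster = np.array([[0,1,0,0, 0,0,0,0, 1,0,1,0],
                   [0,0,0,0, 0,1,0,0, 0,0,0,0],
                   [1,0,0,1, 0,0,0,0, 0,0,0,1]])
print(bin_raster(raster, 4))
```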

The binning effect on Gibbs potential requires, however, a more detailed description. This will be discussed elsewhere.

9 Discussion

To conclude this paper, we would like to discuss several consequences and possible extensions of this work.

9.1 The spike time discretization

In the gIF model, the membrane potential evolves continuously, while conductances are updated upon spike occurrences, considered as discrete events. Here, we discuss this time discretization. Actually, there are two distinct questions.

9.1.1 The limit of time-bin tending to 0

This limit would correspond to a case where the spike is instantaneous and modeled by a Dirac distribution. As discussed in 32, this limit raises serious difficulties. To summarize, in real neurons, firing occurs within a finite time δ corresponding to the time of rise and fall of the membrane potential. This involves physicochemical processes that cannot be instantaneous. The time course of the membrane potential during the spike is described by differential equations, like Hodgkin-Huxley's 35. Although the time scale $dt$ appearing in the differential equations is, mathematically, arbitrarily small, on biophysical grounds this time scale cannot be arbitrarily small; otherwise the Hodgkin-Huxley equations lose their meaning. Indeed, they correspond to an average over microscopic phenomena such as ionic channel dynamics. In particular, their time scale must be sufficiently large to ensure that the description of ionic channel dynamics (opening and closing) in terms of probabilities is valid, so $dt$ must be larger than the characteristic opening-closing time of the ionic channels, $\tau_P$. Additionally, Hodgkin-Huxley's equations use a Markovian approach (master equation) for the dynamics of the h, m, n gates. This requires that the characteristic time $dt$ be quite a bit larger than the characteristic decay time of the correlations between gate activities, $\tau_C$. Summarizing, we must have $0 < \tau_C,\ \tau_P < dt < \delta$. Thus, on biophysical grounds, δ cannot be arbitrarily small.

In our case, however, the $\delta\to 0$ limit is harmless provided we keep a non-zero refractory period, ensuring that only finitely many spikes occur in a finite time interval. Taking the limit $\delta\to 0$ without considering a refractory period raises mathematical problems: one can in principle have uncountably many spikes in a finite time interval, leading to the divergence of physical quantities like energy. One can also generate curious causal paradoxes 37. Take a loop of two neurons, one excitatory and one inhibitory, and assume instantaneous propagation (the α profile is then represented by a Dirac distribution). Then, depending on the synaptic weight values, one can have a situation where neuron 1 fires instantaneously, which instantaneously makes neuron 2 fire, which instantaneously prevents neuron 1 from firing, and so on. So taking the limit $\delta\to 0$ as well as $\tau_{\mathrm{refr}}\to 0$ induces pathologies that are not inherent to our approach but to IF models.

9.1.2 Synchronization for distinct neurons

There is a more subtle issue, pointed out in 53. We not only discretize time for each neuron's spikes; we also align the spikes emitted by distinct neurons on a discrete-time grid, as an experimental raster does. As shown in 32, this induces an artificial synchronization in gIF models with purely deterministic dynamics (no noise and reset to a constant value). As a consequence, the deterministic dynamics of gIF models generically has only stable periodic orbits, although periods can be larger than any accessible computational time in a specific region of parameter space. Additionally, these periods increase as δ decreases. The addition of noise to the dynamics and to the reset value, as we propose in this paper, removes this synchronization effect.

9.2 Refractory period

In the definition of the model, we have assumed that the refractory period $\tau_{\mathrm{ref}}$ is smaller than 1. The consequence for raster plots is that one can have two consecutive 1's in the spike sequence of a neuron. The extension to the case where $\tau_{\mathrm{ref}} > 1$ is straightforward for spike statistics. Having such a refractory period forbids some sequences. For example, if $1 < \tau_{\mathrm{ref}} \leq 2$, then all sequences containing two consecutive 1's for one neuron, $(1,1)$, are forbidden. If $2 < \tau_{\mathrm{ref}} \leq 3$, sequences containing $1,\ast,1$ for a given neuron, where $\ast = 0,1$, are forbidden, and so on. More generally, the procedure consisting of forbidding specific (finite) spike blocks is equivalent to introducing a grammar in the spike generation. This grammar can be implemented in the Gibbs potential: forbidden sequences have a potential equal to −∞ (resp. a zero probability). In this case, $X=\mathcal{A}^{\mathbb{Z}}$, the set of all possible rasters, is replaced by the subset where forbidden sequences have been removed.
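As an illustration of such a grammar, a small sketch (Python, with illustrative inputs) checking whether a finite spike block is admissible for a given refractory period; a block that fails the check would receive potential −∞, i.e., probability zero.

```python
import math
import numpy as np

def admissible(raster, tau_ref):
    """True if no neuron fires twice within the refractory period tau_ref (in units of
    the time bin): successive spikes of the same neuron must be at least
    ceil(tau_ref) bins apart. For tau_ref <= 1 every raster is allowed."""
    gap = math.ceil(tau_ref)
    if gap <= 1:
        return True
    for spikes in raster:                            # one row per neuron
        times = np.flatnonzero(spikes)
        if len(times) > 1 and np.min(np.diff(times)) < gap:
            return False
    return True

print(admissible(np.array([[1,1,0,0]]), 1.5))   # False: (1,1) forbidden when 1 < tau_ref <= 2
print(admissible(np.array([[1,0,1,0]]), 1.5))   # True
print(admissible(np.array([[1,0,1,0]]), 2.5))   # False: 1,*,1 forbidden when 2 < tau_ref <= 3
```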

9.3 Beyond IF models

Let us now discuss the extension of the present work to more general neuron models. First, one characteristic feature of integrate-and-fire models is the reset, which has the consequence that the memory of activity preceding the spike is lost after reset. Although in the deterministic (noiseless) case this is a simplifying feature, allowing for example a full characterization of the asymptotic dynamics of (discrete-time) IF models 32 54, here it somewhat complicates the analysis. Indeed, it led us to introduce the notion of “last reset” time and, at some point in the proof (see, e.g., Equation (39)), obliged us to consider several situations (e.g., $\tau_k(t,\omega) \geq t-m$ or $\tau_k(t,\omega') \geq t-m$ versus $\tau_k(t,\omega) < t-m$ and $\tau_k(t,\omega') < t-m$ in the proof of continuity, Section 6). On the contrary, considering a model where no such reset occurs would simply lead us to consider a model where $\tau_k(t,\omega) = -\infty$, ∀ω, ∀k. This case is already covered by our formalism, and actually, considering that $\tau_k(t,\omega) = -\infty$, ∀ω, ∀k, simplifies the proofs (for example, it eliminates the second term in Equation (39)).

As a matter of fact, the theorems established in this paper should therefore also hold without reset. But this requires replacing the firing condition (4) by another condition stating what the membrane potential does during the spike. Although it would be possible to propose an ad hoc form for the spike, it would certainly be more interesting to extend the present results to models where the neuron's activity depends on additional variables, such as adaptation currents, as in the FitzHugh-Nagumo model 39 40 41 42, or activation-inactivation variables, as in the Hodgkin-Huxley model 35.

The present formalism allows an extension toward such models, where the neuron fires whenever its state belongs to a region of the phase space, delimited by the membrane potential plus additional variables such as adaptation currents or activation-inactivation variables, and where the spike is controlled by the global dynamics of all these variables. But while here the firing of a neuron is described by the crossing of a fixed threshold, in the FitzHugh-Nagumo model it is given by the crossing of a separatrix in the plane (voltage, adaptation current), and by a more complex “frontier” in the Hodgkin-Huxley model 2 55. One difficulty is to define this region precisely. To our knowledge, there is no clear agreement for the Hodgkin-Huxley model (some authors 2 even suggest that the “spike region” could have a fractal frontier). The extension toward FitzHugh-Nagumo seems more manageable.

Finally, the most important difficulty in extending the present results to more realistic neural networks is the definition of the synaptic spike response. In IF models, the spike is thought of as a punctual “event” (typically, an “instantaneous” pulse), while the synaptic response is described by a convolution kernel (the α-profile). This leads one to consider a somewhat artificial mixed dynamics where the membrane potential evolves continuously while spikes are discrete events. In more realistic models, one would have to consider kinetic equations for neurotransmitter release, receptor binding, and the opening of post-synaptic ionic channels 5 44. Additionally, the consideration of these mechanisms calls for a spatially extended model of the neuron, with time delays. In this case, all variables evolve continuously and the statistics of spike trains would be characterized by the statistics of return times in the “spike region”. These statistics are induced by some probability measure in the phase space; a natural candidate would be the Sinai-Ruelle-Bowen measure 56 57 58 for stationary dynamics, or the time-dependent SRB measure for non-stationary cases, as defined, e.g., in 59. These measures are Gibbs measures as well 60. Here, the main mathematical property ensuring existence and uniqueness of such a measure would be uniform hyperbolicity. To our knowledge, conditions ensuring such a property in networks have not yet been established, either for Hodgkin-Huxley's or for FitzHugh-Nagumo's model.

9.4 Synaptic plasticity

As the results established in this paper hold for any synaptic weight value in $\mathcal{H}$, they hold as well for networks undergoing synaptic plasticity mechanisms. The effects of a joint evolution of spike dynamics, depending on the synaptic weight distribution, and of synaptic weights, depending on the spike dynamics, have been studied in 61. In particular, it has been shown that mechanisms such as Spike-Time-Dependent Plasticity are related to a variational principle for a quantity, the topological pressure, derived from the thermodynamic formalism of Gibbs distributions. In 61, the fact that spike train statistics are given by a Gibbs distribution was a working assumption. The present work therefore establishes a firm ground for 61.

Appendix

Here, we establish (52). We use the following lemma.

Lemma 1 For a collection $0 \leq a_k, a'_k \leq 1$, $k=1,\dots,N$, we have

\[ \Bigl|\prod_{k=1}^{N} a_k - \prod_{k=1}^{N} a'_k\Bigr| \leq \sum_{k=1}^{N} \bigl|a_k - a'_k\bigr|. \]

This lemma is easily proved by induction on N.
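Indeed, writing $\prod_{k=1}^{N} a_k - \prod_{k=1}^{N} a'_k = (a_1-a'_1)\prod_{k\geq 2}a_k + a'_1\bigl(\prod_{k\geq 2}a_k - \prod_{k\geq 2}a'_k\bigr)$, the triangle inequality and the bounds $0\leq a_k, a'_k\leq 1$ give the induction step:

\[ \Bigl|\prod_{k=1}^{N} a_k - \prod_{k=1}^{N} a'_k\Bigr| \leq |a_1-a'_1| + \Bigl|\prod_{k=2}^{N} a_k - \prod_{k=2}^{N} a'_k\Bigr|, \]

and the claim follows by iterating the same decomposition.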

We have, for $n\in\mathbb{Z}$, $m\geq 0$,

\[ \mathrm{var}_m\bigl[p(n,\cdot)\bigr] = \sup\Bigl\{\Bigl|\prod_{k=1}^{N} a_k - \prod_{k=1}^{N} a'_k\Bigr|\ ;\ \omega \overset{m,n}{=} \omega'\Bigr\}, \]

where:

\[ a_k = \omega_k(n)\,\pi\bigl(X_k(n-1,\omega)\bigr) + \bigl(1-\omega_k(n)\bigr)\bigl(1-\pi\bigl(X_k(n-1,\omega)\bigr)\bigr), \qquad a'_k = \omega'_k(n)\,\pi\bigl(X_k(n-1,\omega')\bigr) + \bigl(1-\omega'_k(n)\bigr)\bigl(1-\pi\bigl(X_k(n-1,\omega')\bigr)\bigr). \]

Therefore, using inequality (59),

\[ \mathrm{var}_m\bigl[p(n,\cdot)\bigr] \leq \sum_{k=1}^{N} \sup\bigl\{|a_k - a'_k|\ ;\ \omega \overset{m,n}{=} \omega'\bigr\}. \]

The condition $\omega \overset{m,n}{=} \omega'$ implies $\omega_k(n) = \omega'_k(n)$, so that:

\[ |a_k - a'_k| = \bigl|\pi\bigl(X_k(n-1,\omega)\bigr) - \pi\bigl(X_k(n-1,\omega')\bigr)\bigr|. \]

We have

\[ \bigl|\pi\bigl(X_k(n-1,\omega)\bigr) - \pi\bigl(X_k(n-1,\omega')\bigr)\bigr| \leq \bigl|X_k(n-1,\omega) - X_k(n-1,\omega')\bigr|\,\|\pi'\|_{\infty}, \]

with $\|\pi'\|_{\infty} = \sup_{x\in\mathbb{R}}|\pi'(x)| = \frac{1}{\sqrt{2\pi}}$, so that

\[ \mathrm{var}_m\bigl[p(n,\cdot)\bigr] \leq \frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N} \mathrm{var}_m\bigl[X_k(n-1,\cdot)\bigr]. \]

We now have to upper bound $\mathrm{var}_m[X_k(n-1,\cdot)] = \sup\{|X_k(n-1,\omega) - X_k(n-1,\omega')|\ ;\ \omega \overset{m,n-1}{=} \omega'\}$. We have

\[
\begin{aligned}
\mathrm{var}_m\bigl[X_k(n-1,\cdot)\bigr] \leq{}& \mathrm{var}_m\bigl[\theta - V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr]\,\sup_{\omega\in X}\frac{1}{\sigma_k(\tau_k(n-1,\omega),n-1,\omega)} \\
&+ \sup_{\omega\in X}\bigl|\theta - V_k^{(det)}(\tau_k(n-1,\omega),n-1,\omega)\bigr|\,\mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right],
\end{aligned}
\]

with

\[ \mathrm{var}_m\bigl[\theta - V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] = \mathrm{var}_m\bigl[V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] \leq \mathrm{var}_m\bigl[V_k^{(syn)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] + \mathrm{var}_m\bigl[V_k^{(ext)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr], \]

so that, from (40), (43):

\[ \mathrm{var}_m\bigl[\theta - V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] \leq \sum_{j=1}^{N} A_{kj}^{(det)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + B_k^{(det)}\,e^{-\frac{m}{\tau_{L,k}}}, \]

where:

\[ A_{kj}^{(det)} = A_{kj}^{(syn)} + A_{kj}^{(ext)} \]

(see Equations (41), (44)) and:

\[ B_k^{(det)} = B_k^{(syn)} + B_k^{(ext)} \]

(see Equations (42), (45)).

Moreover, from (37),

\[ \sup_{\omega\in X}\frac{1}{\sigma_k(\tau_k(n-1,\omega),n-1,\omega)} \leq \frac{1}{\sigma_k^{-}}. \]

From (35),

\[ \sup_{\omega\in X}\bigl|\theta - V_k^{(det)}(\tau_k(n-1,\omega),n-1,\omega)\bigr| \leq \max\bigl(|\theta - V_k^{-}|,\,|\theta - V_k^{+}|\bigr). \]

Finally,

\[ \mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right] \leq \mathrm{var}_m\bigl[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\bigr]\,\sup_{\omega\in X}\frac{1}{\sigma_k^2(\tau_k(n-1,\omega),n-1,\omega)} \leq \frac{1}{(\sigma_k^{-})^2}\,\mathrm{var}_m\bigl[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\bigr], \]

while

\[ \mathrm{var}_m\bigl[\sigma_k^2(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] \geq 2\,\sigma_k^{-}\,\mathrm{var}_m\bigl[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\bigr], \]

from (37), so that:

\[ \mathrm{var}_m\bigl[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\bigr] \leq \frac{1}{2\,\sigma_k^{-}}\,\mathrm{var}_m\bigl[\sigma_k^2(\tau_k(n-1,\cdot),n-1,\cdot)\bigr], \]

and, from (46),

\[ \mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right] \leq \frac{1}{2\,(\sigma_k^{-})^3}\left[\sum_{j=1}^{N} A_{kj}^{(\sigma)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + C_k^{(\sigma)}\,e^{-\frac{2m}{\tau_{L,k}}}\right]. \]

Summarizing (60), (61), (62), (63)

\[
\begin{aligned}
\mathrm{var}_m\bigl[X_k(n-1,\cdot)\bigr] \leq{}& \frac{1}{\sigma_k^{-}}\left(\sum_{j=1}^{N} A_{kj}^{(det)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + B_k^{(det)}\,e^{-\frac{m}{\tau_{L,k}}}\right) \\
&+ \max\bigl(|\theta - V_k^{-}|,\,|\theta - V_k^{+}|\bigr)\,\frac{1}{2\,(\sigma_k^{-})^3}\left[\sum_{j=1}^{N} A_{kj}^{(\sigma)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + C_k^{(\sigma)}\,e^{-\frac{2m}{\tau_{L,k}}}\right].
\end{aligned}
\]

Therefore, we have:

\[ \mathrm{var}_m\bigl[X_k(n-1,\cdot)\bigr] \leq \sum_{j=1}^{N} A_{kj}^{(X)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + B_k^{(X)}\,e^{-\frac{m}{\tau_{L,k}}} + C_k^{(X)}\,e^{-\frac{2m}{\tau_{L,k}}}, \]

for suitable constants $A_{kj}^{(X)}$, $B_k^{(X)}$, $C_k^{(X)}$.

As a consequence,

\[ \mathrm{var}_m\bigl[p(n,\cdot)\bigr] \leq \frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N} A_{kj}^{(X)}\,P_d\!\Bigl(\frac{m}{\tau_{kj}}\Bigr)\,e^{-\frac{m}{\tau_{kj}}} + B_k^{(X)}\,e^{-\frac{m}{\tau_{L,k}}} + C_k^{(X)}\,e^{-\frac{2m}{\tau_{L,k}}}\right]. \]

Therefore, $\sum_{n\geq m}\mathrm{var}_{n-m}\bigl[p(n,\cdot)\bigr]$ is bounded from above by the series

\[
\begin{aligned}
&\frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N} A_{kj}^{(X)}\sum_{n\geq m} P_d\!\Bigl(\frac{n-m}{\tau_{kj}}\Bigr)\,e^{-\frac{n-m}{\tau_{kj}}} + B_k^{(X)}\sum_{n\geq m} e^{-\frac{n-m}{\tau_{L,k}}} + C_k^{(X)}\sum_{n\geq m} e^{-\frac{2(n-m)}{\tau_{L,k}}}\right] \\
&\quad = \frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N} A_{kj}^{(X)}\sum_{l\geq 0} P_d\!\Bigl(\frac{l}{\tau_{kj}}\Bigr)\,e^{-\frac{l}{\tau_{kj}}} + B_k^{(X)}\sum_{l\geq 0} e^{-\frac{l}{\tau_{L,k}}} + C_k^{(X)}\sum_{l\geq 0} e^{-\frac{2l}{\tau_{L,k}}}\right],
\end{aligned}
\]

which converges, uniformly in $m$. As a consequence, $v(p) < +\infty$ in (52), and we are done.

Competing interests

The author declares that he has no competing interests.

Footnotes

¹We assume that all neurons have the same firing threshold. The notion of threshold is already an approximation, which is not sharply defined in the Hodgkin-Huxley 35 or FitzHugh-Nagumo 40 41 models (more precisely, it is not a constant but depends on the dynamical variables 55). Recent experiments 62 63 64 even suggest that there may be no real potential threshold.

²In 50, the authors use the following definition of the variation, which reads in our notation:

\[ \mathrm{var}_m\bigl[p(n,\cdot)\bigr] = \sup\bigl\{|p(n,\omega) - p(n,\omega')|\ ;\ \omega,\omega'\in\mathcal{A}_{-\infty}^{n},\ \omega_m^n = \omega'^{\,n}_m\bigr\}, \quad m\leq n. \]

It therefore differs slightly from our definition (5), (38). The correspondence is $\mathrm{var}_m[p(n,\cdot)] = \mathrm{var}_{n-m}[p(n,\cdot)]$. The definition of $v(p)$ takes this correspondence into account.

³For $D>0$, this is $Z(\omega_{-D}^{-1}) = \sum_{\omega(0)} e^{\phi^{(D)}(\omega_{-D}^{0})}$, ensuring that $\frac{e^{\phi^{(D)}(\omega_{-D}^{0})}}{Z(\omega_{-D}^{-1})}$ is a conditional probability $P[\omega(0)\,|\,\omega_{-D}^{-1}]$. Hence, it is not a constant but a function of the past $\omega_{-D}^{-1}$, in a similar way to statistical physics on lattices, where the partition function depends on the boundary conditions. Only in the case $D=0$ (memoryless) is this a constant.

Acknowledgements

I would like to thank both reviewers for a careful reading of the manuscript, constructive criticism, and helpful comments. I also acknowledge G. Maillard and J. R. Chazottes for invaluable advice and references. I am also grateful to O. Faugeras and T. Viéville for useful comments. This work has been supported by the ERC grant Nervi, the ANR grant KEOPS, and the European grant BrainScales.

References

1. Izhikevich E: Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 2004, 15(5):1063-1070.
2. Guckenheimer J, Oliva R: Chaos in the Hodgkin-Huxley model. SIAM J. Appl. Dyn. Syst. 2002, 1:105.
3. Faure P, Korn H: A nonrandom dynamic component in the synaptic noise of a central neuron. Proc. Natl. Acad. Sci. USA 1997, 94:6506-6511.
4. Faure P, Korn H: Is there chaos in the brain? I. Concept of nonlinear dynamics and methods of investigation. C. R. Acad. Sci., Sér. 3 Sci. Vie 2001, 324:773-793.
5. Ermentrout GB, Terman D: Foundations of Mathematical Neuroscience. Springer, Berlin; 2010.
6. Riehle A, Grün S, Diesmann M, Aertsen A: Spike synchronization and rate modulation differentially involved in motor cortical function. Science 1997, 278:1950-1953.
7. Grammont F, Riehle A: Precise spike synchronization in monkey motor cortex involved in preparation for movement. Exp. Brain Res. 1999, 128:118-122.
8. Riehle A, Grammont F, Diesmann M, Grün S: Dynamical changes and temporal precision of synchronized spiking activity in monkey motor cortex during movement preparation. J. Phys. (Paris) 2000, 94:569-582.
9. Grammont F, Riehle A: Spike synchronization and firing rate in a population of motor cortical neurons in relation to movement direction and reaction time. Biol. Cybern. 2003, 88(5):260-373.
10. Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W: Spikes: Exploring the Neural Code. Bradford Books, Denver, CO; 1997.
11. Schneidman E, Berry M, Segev R, Bialek W: Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440(7087):1007-1012.
12. Tkačik G, Schneidman E, Berry MJ II, Bialek W: Spin glass models for a network of real neurons. ArXiv:0912.5409v1 (2009).
13. Marre O, Boustani SE, Frégnac Y, Destexhe A: Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett. 2009, 102.
14. Jaynes E: Information theory and statistical mechanics. Phys. Rev. 1957, 106:620.
15. Shlens J, Field G, Gauthier J, Grivich M, Petrusca D, Sher A, Litke A, Chichilnisky E: The structure of multi-neuron firing patterns in primate retina. J. Neurosci. 2006, 26(32).
16. Cocco S, Leibler S, Monasson R: Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc. Natl. Acad. Sci. USA 2009, 106(33):14058-14062. http://www.pnas.org/cgi/doi/10.1073/pnas.0906705106
17. Tang A, Jackson D, Hobbs J, Chen W, Smith JL, Patel H, Prieto A, Petrusca D, Grivich MI, Sher A, Hottowy P, Dabrowski W, Litke AM, Beggs JM: A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci. 2008, 28(2):505-518.
18. Roudi Y, Nirenberg S, Latham PP: Pairwise maximum entropy models for studying large biological systems: when they can work and when they can't. PLoS Comput. Biol. 2009, 5(5):1-18.
19. Shlens J, Field GD, Gauthier JL, Greschner M, Sher A, Litke AM, Chichilnisky EJ: The structure of large-scale synchronized firing in primate retina. J. Neurosci. 2009, 29(15):5022-5031.
20. Ohiorhenuan IE, Mechler F, Purpura KP, Schmid AM, Hu Q, Victor JD: Sparse coding and high-order correlations in fine-scale cortical networks. Nature 2010, 466:617-621.
21. Lindsey B, Morris K, Shannon R, Gerstein G: Repeated patterns of distributed synchrony in neuronal assemblies. J. Neurophysiol. 1997, 78:1714-1719.
22. Villa AEP, Tetko IV, Hyland B, Najem A: Spatiotemporal activity patterns of rat cortical neurons predict responses in a conditioned task. Proc. Natl. Acad. Sci. USA 1999, 96(3):1106-1111.
23. Segev R, Baruchi I, Hulata E, Ben-Jacob E: Hidden neuronal correlations in cultured networks. Phys. Rev. Lett. 2004, 92.
24. Amari SI: Information geometry of multiple spike trains. In: Grün S, Rotter S (eds.) Analysis of Parallel Spike Trains. Springer, Berlin; 2010:221-253.
25. Roudi Y, Hertz J: Mean field theory for non-equilibrium network reconstruction. ArXiv:1009.5946v1 (2010).
26. Vasquez JC, Viéville T, Cessac B: Parametric estimation of Gibbs distributions as generalized maximum-entropy models for the analysis of spike train statistics. Research Report RR-7561, INRIA (2011).
27. Vasquez JC, Palacios AG, Marre O, Berry MJ, Cessac B: Gibbs distribution analysis of temporal correlation structure on multicell spike trains from retina ganglion cells. J. Phys. (Paris) (in press) (2011).
28. Cessac B: A discrete time neural network model with spiking neurons II. Dynamics with noise. J. Math. Biol. (in press) (2010). http://www.springerlink.com/content/j602452n1224x602/
29. Kravchuk K, Vidybida A: Delayed feedback causes non-Markovian behavior of neuronal firing statistics. J. Phys. A (2010). ArXiv:1012.6019
30. Vasquez JC, Viéville T, Cessac B: Entropy-based parametric estimation of spike train statistics. Research Report, INRIA (2010). http://arxiv.org/abs/1003.3157
31. Rudolph M, Destexhe A: Analytical Integrate and Fire Neuron models with conductance-based dynamics for event driven simulation strategies. Neural Comput. 2006, 18(9):2146-2210. http://www.mitpressjournals.org/doi/abs/10.1162/neco.2006.18.9.2146
32. Cessac B, Viéville T: On dynamics of integrate-and-fire neural networks with adaptive conductances. Frontiers Neurosci 2008, 2(2).
33. Jolivet R, Lewis T, Gerstner W: Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol. 2004, 92:959-976.
34. Jolivet R, Rauch A, Lescher HR, Gerstner W: Integrate-and-Fire Models with Adaptation Are Good Enough. MIT Press, Cambridge; 2006.
35. Hodgkin A, Huxley A: A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117:500-544.
36. Lapicque L: Recherches quantitatives sur l'excitation des nerfs traitée comme une polarisation. J. Physiol. Paris 1907, 9:620-635.
37. Cessac B: A view of neural networks as dynamical systems. Int. J. Bifurc. Chaos 2010, 20(6):1585-1629. http://lanl.arxiv.org/abs/0901.2203
38. Kirst C, Geisel T, Timme M: Sequential desynchronization in networks of spiking neurons with partial reset. Phys. Rev. Lett. 2009, 102. http://link.aps.org/doi/10.1103/PhysRevLett.102.068101
39. FitzHugh R: Mathematical models of threshold phenomena in the nerve membrane. Bull. Math. Biophys. 1955, 17:257-278.
40. FitzHugh R: Impulses and physiological states in models of nerve membrane. Biophys. J. 1961, 1:445-466.
41. Nagumo J, Arimoto S, Yoshizawa S: An active pulse transmission line simulating nerve axon. Proc. IRE 1962, 50:2061-2070.
42. FitzHugh R: Mathematical Models of Excitation and Propagation in Nerve. McGraw-Hill, New York; 1969, chap. 1.
43. Maillard G: Introduction to Chains with Complete Connections. Ecole Federale Polytechnique de Lausanne (2007).
44. Destexhe A, Mainen ZF, Sejnowski TJ: Kinetic models of synaptic transmission. In: Methods in Neuronal Modeling. MIT Press, Cambridge; 1998:1-25.
45. Ermentrout B: Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 1998, 61:353-430.
46. Chazottes J: Entropie relative, dynamique symbolique et turbulence. Ph.D. thesis, Université de Provence-Aix Marseille I (1999).
47. Ledrappier F: Principe variationnel et systèmes dynamiques symboliques. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1974, 30:185-202.
48. Coelho Z, Quas A: Criteria for d-continuity. Trans. Am. Math. Soc. 1998, 350(8):3257-3268.
49. Bressaud X, Fernandez R, Galves A: Decay of correlations for non Hölderian dynamics. A coupling approach. Electron. J. Probab. 1999, 4(3):1-19.
50. Fernández R, Maillard G: Chains with complete connections: general theory, uniqueness, loss of memory and mixing properties. J. Stat. Phys. 2005, 118:555-588.
51. Cessac B, Palacios A: Spike Train Statistics from Empirical Facts to Theory: The Case of the Retina. Springer, Berlin; 2011.
52. Georgii HO: Gibbs Measures and Phase Transitions. de Gruyter, Berlin; 1988.
53. Kirst C, Timme M: How precise is the timing of action potentials? Front. Neurosci. 2009, 3:2-3.
54. Cessac B: A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics. J. Math. Biol. 2008, 56(3):311-345.
55. Cronin J: Mathematical Aspects of Hodgkin-Huxley Theory. Cambridge University Press, Cambridge; 1987.
56. Sinai Y: Gibbs measures in ergodic theory. Russ. Math. Surv. 1972, 27(4):21-69.
57. Bowen R: Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms. Springer-Verlag, New York; 1975.
58. Ruelle D: Thermodynamic Formalism. Addison-Wesley, Reading, Massachusetts; 1978.
59. Ruelle D: Smooth dynamics and new theoretical ideas in nonequilibrium statistical mechanics. J. Stat. Phys. 1999, 95:393-468.
60. Keller G: Equilibrium States in Ergodic Theory. Cambridge University Press, Cambridge; 1998.
61. Cessac B, Rostro-Gonzalez H, Vasquez JC, Viéville T: How Gibbs distribution may naturally arise from synaptic adaptation mechanisms: a model based argumentation. J. Stat. Phys. 2009, 136(3):565-602. http://www.springerlink.com/content/3617162602160603
62. Naundorf B, Wolf F, Volgushev M: Unique features of action potential initiation in cortical neurons. Nature 2006, 440:1060-1063.
63. McCormick D, Shu Y, Yu Y: Neurophysiology: Hodgkin and Huxley model - still standing? Nature 2007, 445:E1-E2.
64. Naundorf B, Wolf F, Volgushev M: Neurophysiology: Hodgkin and Huxley model - still standing? (Reply). Nature 2007, 445:E2-E3.