The Galves–Löcherbach model (or GL model) is a mathematical model for a network of neurons with intrinsic stochasticity.[1] [2]
In the most general definition, a GL network consists of a countable number of elements (idealized neurons) that interact by sporadic nearly-instantaneous discrete events (spikes or firings). At each moment, each neuron N fires independently of the others, with a probability that depends on the history of the firings of all neurons since the last time N fired. Thus each neuron "forgets" all previous spikes, including its own, whenever it fires. This property is a defining feature of the GL model.
In specific versions of the GL model, the past network spike history since the last firing of a neuron N may be summarized by an internal variable, the potential of that neuron, that is a weighted sum of those spikes. The potential may include the spikes of only a finite subset of other neurons, thus modeling arbitrary synapse topologies. In particular, the GL model includes as a special case the general leaky integrate-and-fire neuron model.
The GL model has been formalized in several different ways. The notations below are borrowed from several of those sources.
The GL network model consists of a countable set of neurons with some set $I$ of indices. The state of the network is defined only at discrete sampling times, represented by integers, separated by some fixed time step $\Delta$. For simplicity, these sampling times are assumed to extend infinitely far into the past, so that the full firing history of the network is well defined.
In the GL model, all neurons are assumed to evolve synchronously and atomically between successive sampling times. In particular, within each time step, each neuron may fire at most once. A Boolean variable $X_i[t]$ denotes whether the neuron $i\in I$ fired ($X_i[t]=1$) or not ($X_i[t]=0$) between sampling times $t\in\mathbb{Z}$ and $t+1$.
Let $X[t'{:}t]$ denote the matrix of firing indicators of all neurons from time $t'$ to time $t$, that is,
$$X[t'{:}t] = \bigl((X_i[s])_{t'\leq s\leq t}\bigr)_{i\in I},$$
and let $X[-\infty{:}t]$ be defined analogously, but extending infinitely far into the past. Let $\tau_i[t]$ be the time of the last firing of neuron $i$ before time $t$, that is,
$$\tau_i[t] = \max\{\, s < t \mid X_i[s]=1 \,\}.$$
The general GL model then postulates that
$$\operatorname{Prob}\bigl(X_i[t]=1 \bigm| X[-\infty{:}t-1]\bigr) = \Phi_i\bigl(X\bigl[\tau_i[t]{:}t-1\bigr]\bigr)$$
for some function $\Phi_i$ specific to neuron $i$.
Moreover, the firings in the same time step are conditionally independent, given the past network history, with the above probabilities. That is, for each finite subset $K\subset I$ and any configuration $a_i\in\{0,1\}$, $i\in K$,
$$\operatorname{Prob}\Bigl(\bigcap_{k\in K}\{X_k[t]=a_k\} \Bigm| X[-\infty{:}t-1]\Bigr) = \prod_{k\in K}\operatorname{Prob}\bigl(X_k[t]=a_k \bigm| X\bigl[\tau_k[t]{:}t-1\bigr]\bigr).$$
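To make the definition concrete, here is a minimal Python sketch (not taken from the cited sources) of one synchronous update of a finite GL network; the firing function `phi`, the list-of-arrays history layout, and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gl_step(history, last_fire, phi):
    """One synchronous time step of a finite GL network.

    history   : list of length-n 0/1 arrays, history[s][i] = X_i[s]
    last_fire : last_fire[i] = index s of neuron i's most recent firing (tau_i[t])
    phi       : phi(i, window) -> firing probability of neuron i given the
                network firings since neuron i's own last firing
    """
    n = len(last_fire)
    x_new = np.zeros(n, dtype=int)
    for i in range(n):
        window = history[last_fire[i]:]            # X[tau_i[t] : t-1]
        # Firings in the same step are conditionally independent given the past
        x_new[i] = int(rng.random() < phi(i, window))
    return x_new
```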
In a common special case of the GL model, the part of the past firing history $X\bigl[\tau_i[t]{:}t-1\bigr]$ that is relevant to each neuron $i\in I$ at each sampling time $t$ is summarized by a real-valued internal state variable, the potential $V_i[t]$ (corresponding to the membrane potential of a biological neuron), which is a weighted sum of those firings, namely
$$V_i[t] = \sum_{t'=\tau_i[t]}^{t-1} \Bigl(E_i[t'] + \sum_{j\in I} w_{j\to i}\, X_j[t']\Bigr)\,\alpha_i\bigl[t'-\tau_i[t],\; t-1-t'\bigr].$$
Here $w_{j\to i}$ is a numeric weight that models the total strength of the synapses from the axon of neuron $j$ to the dendrites of neuron $i$; $E_i[t']$ is the external input (stimulus) received by neuron $i$ between times $t'$ and $t'+1$; and $\alpha_i[r,s]$ is a history weight factor that depends on the time $r$ elapsed between the last firing of neuron $i$ and that contribution, and on the time $s$ elapsed between that contribution and the last sampling step before the current time.
Then one defines
$$\operatorname{Prob}\bigl(X_i[t]=1 \bigm| X[-\infty{:}t-1]\bigr) = \phi_i\bigl(V_i[t]\bigr)$$
where $\phi_i$ is a function from $\mathbb{R}$ into the interval $[0,1]$.
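The following Python sketch (an illustration under assumed data layouts, not code from the sources) evaluates the weighted-sum potential $V_i[t]$ and a corresponding firing probability; the logistic choice of $\phi_i$ is only an example.

```python
import numpy as np

def potential(X, E, w, alpha, tau_i):
    """Potential V_i[t] of neuron i in the 'variable potential' special case.

    X      : (t, n) array of firings, X[tp, j] = X_j[tp] for tp = 0 .. t-1
    E      : (t,) array of external inputs E_i[tp] to neuron i
    w      : (n,) array of synaptic weights w_{j->i}
    alpha  : alpha(r, s) -> history weight factor alpha_i[r, s]
    tau_i  : time of the last firing of neuron i before t
    """
    t = X.shape[0]
    V = 0.0
    for tp in range(tau_i, t):                      # t' = tau_i[t], ..., t-1
        drive = E[tp] + w @ X[tp]                   # E_i[t'] + sum_j w_{j->i} X_j[t']
        V += drive * alpha(tp - tau_i, t - 1 - tp)  # times alpha_i[t'-tau_i, t-1-t']
    return V

# Example firing probability phi_i(v): a logistic function into [0, 1]
phi = lambda v: 1.0 / (1.0 + np.exp(-v))
```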
If the synaptic weight $w_{j\to i}$ is negative, each firing of neuron $j$ causes a decrease in the potential $V_i$. This is the way inhibitory synapses are approximated in the GL model. The absence of a synapse between the two neurons is modeled by setting $w_{j\to i}=0$.
In an even more specific case of the GL model, the potential $V_i$ is defined as a decaying weighted sum of the firings of the other neurons. Namely, when a neuron $i$ fires, its potential is reset to zero. Until its next firing, each spike of any neuron $j$ increments $V_i$ by the constant amount $w_{j\to i}$. Apart from those events, the potential decays exponentially, being multiplied by a recharge factor $\mu_i$ at each time step.
In this variant, the evolution of the potential $V_i$ can be expressed by a recurrence formula
$$V_i[t+1] = \left\{ \begin{array}{ll} 0 & \text{if } X_i[t]=1\\ \mu_i V_i[t] & \text{if } X_i[t]=0 \end{array} \right\} + E_i[t] + \sum_{j\in I} w_{j\to i}\, X_j[t]$$
or, more compactly,
$$V_i[t+1] = (1-X_i[t])\,\mu_i V_i[t] + E_i[t] + \sum_{j\in I} w_{j\to i}\, X_j[t].$$
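A vectorized Python sketch of one step of this leaky variant might look as follows (the array shapes and the choice of $\phi$ are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_gl_step(V, X, E, W, mu, phi):
    """One time step of the leaky integrate-and-fire GL variant.

    V   : (n,) potentials V_i[t]
    X   : (n,) 0/1 firing indicators X_i[t]
    E   : (n,) external inputs E_i[t]
    W   : (n, n) synaptic weights, W[j, i] = w_{j->i}
    mu  : (n,) recharge factors mu_i
    phi : map from potentials to firing probabilities in [0, 1]
    """
    # V_i[t+1] = (1 - X_i[t]) mu_i V_i[t] + E_i[t] + sum_j w_{j->i} X_j[t]
    V_next = (1 - X) * mu * V + E + W.T @ X
    # X_i[t+1] is 1 with probability phi_i(V_i[t+1]), independently across neurons
    X_next = (rng.random(V.shape[0]) < phi(V_next)).astype(int)
    return V_next, X_next
```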
This special case results from taking the history weight factor $\alpha_i[r,s]$ of the general variant to be $\mu_i^{\,s}$, independently of $r$.
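As a consistency check (a derivation sketch, not quoted from the sources), substituting $\alpha_i[r,s]=\mu_i^{\,s}$ into the weighted-sum definition of the potential gives
$$V_i[t] = \sum_{t'=\tau_i[t]}^{t-1}\Bigl(E_i[t'] + \sum_{j\in I} w_{j\to i}\, X_j[t']\Bigr)\,\mu_i^{\,t-1-t'}.$$
If $X_i[t]=0$, then $\tau_i[t+1]=\tau_i[t]$; each existing term acquires one more factor of $\mu_i$ and a new $t'=t$ term is added, so that
$$V_i[t+1] = \mu_i V_i[t] + E_i[t] + \sum_{j\in I} w_{j\to i}\, X_j[t].$$
If $X_i[t]=1$, then $\tau_i[t+1]=t$ and only the $t'=t$ term survives, giving $V_i[t+1] = E_i[t] + \sum_{j\in I} w_{j\to i}\, X_j[t]$, which is exactly the recurrence above with the potential reset to zero.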
If, between times $t$ and $t+1$, only neuron $i$ fires (that is, $X_i[t]=1$ and $X_j[t]=0$ for every $j\neq i$) and there is no external input ($E_i[t]=0$), then $V_i[t+1]$ will be equal to the self-synapse weight $w_{i\to i}$. That weight therefore models the reset potential that the neuron assumes just after firing, apart from other inputs. The evolution equation can therefore also be written as
$$V_i[t+1] = \left\{ \begin{array}{ll} V^{\mathrm{R}}_i & \text{if } X_i[t]=1\\ \mu_i V_i[t] & \text{if } X_i[t]=0 \end{array} \right\} + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t]$$
where $V^{\mathrm{R}}_i = w_{i\to i}$ is the reset potential. Or, more compactly,
$$V_i[t+1] = V^{\mathrm{R}}_i\, X_i[t] + (1-X_i[t])\,\mu_i V_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t].$$
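The equivalence between the reset-to-zero form (with the self-synapse inside the sum) and this reset-potential form can be checked numerically; the following short Python snippet (an illustration with arbitrary random parameters) does so:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
W = rng.normal(size=(n, n))                 # W[j, i] = w_{j->i}, self-weights included
mu = rng.uniform(0.0, 1.0, n)               # recharge factors mu_i
E = rng.normal(size=n)                      # external inputs E_i[t]
V = rng.normal(size=n)                      # potentials V_i[t]
X = rng.integers(0, 2, n)                   # firings X_i[t]
V_R = np.diag(W).copy()                     # reset potential V_i^R = w_{i->i}

# Reset-to-zero form, summing over all j (self-synapse included):
V1 = (1 - X) * mu * V + E + W.T @ X
# Reset-potential form, summing over j != i only:
W_off = W - np.diag(np.diag(W))
V2 = V_R * X + (1 - X) * mu * V + E + W_off.T @ X
assert np.allclose(V1, V2)                  # the two update rules agree
```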
These formulas imply that the potential decays towards zero with time when there are no external or synaptic inputs and the neuron itself does not fire. Under the same conditions, the membrane potential of a biological neuron would instead tend towards some negative value, the resting or baseline potential $V^{\mathrm{B}}_i$. However, this apparent discrepancy exists only because it is customary in neurobiology to measure electric potentials relative to that of the extracellular medium. The discrepancy disappears if one takes the baseline potential $V^{\mathrm{B}}_i$ of the neuron as the reference point for its potential $V_i$.
Some authors use a slightly different refractory variant of the integrate-and-fire GL neuron,[3] which ignores all external and synaptic inputs (except possibly the self-synapse $w_{i\to i}$) during the time step immediately after the neuron's own firing. The evolution equation of this variant is
$$V_i[t+1] = \left\{ \begin{array}{ll} V^{\mathrm{R}}_i & \text{if } X_i[t]=1\\[2mm] \displaystyle \mu_i V_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t] & \text{if } X_i[t]=0 \end{array} \right.$$
or, more compactly,
$$V_i[t+1] = V^{\mathrm{R}}_i\, X_i[t] + (1-X_i[t])\Bigl(\mu_i V_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t]\Bigr).$$
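A sketch of one step of this refractory variant, under the same assumed array layout as before, could be written as:

```python
import numpy as np

rng = np.random.default_rng(3)

def refractory_gl_step(V, X, E, W_off, mu, V_R, phi):
    """One step of the refractory integrate-and-fire GL variant: a neuron that
    has just fired ignores all inputs and is simply reset to V_R.

    W_off : (n, n) weights with the self-synapses w_{i->i} set to zero
    V_R   : (n,) reset potentials V_i^R
    """
    integrated = mu * V + E + W_off.T @ X           # usual leaky integration
    V_next = np.where(X == 1, V_R, integrated)      # a firing overrides the inputs
    X_next = (rng.random(V.shape[0]) < phi(V_next)).astype(int)
    return V_next, X_next
```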
Even more specific sub-variants of the integrate-and-fire GL neuron are obtained by setting the recharge factor $\mu_i$ to zero, so that the potential $V_i$ carries no information from one time step to the next. The evolution equations then simplify to
$$V_i[t+1] = \left\{ \begin{array}{ll} V^{\mathrm{R}}_i & \text{if } X_i[t]=1\\ 0 & \text{if } X_i[t]=0 \end{array} \right\} + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t]$$
or, more compactly,
$$V_i[t+1] = V^{\mathrm{R}}_i\, X_i[t] + E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t]$$
and, for the refractory variant,
$$V_i[t+1] = \left\{ \begin{array}{ll} V^{\mathrm{R}}_i & \text{if } X_i[t]=1\\[2mm] \displaystyle E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t] & \text{if } X_i[t]=0 \end{array} \right.$$
or, more compactly,
$$V_i[t+1] = V^{\mathrm{R}}_i\, X_i[t] + (1-X_i[t])\Bigl(E_i[t] + \sum_{j\in I\setminus\{i\}} w_{j\to i}\, X_j[t]\Bigr).$$
In these sub-variants, while the individual neurons do not store any information from one step to the next, the network as a whole can still have persistent memory because of the implicit one-step delay between the synaptic inputs and the resulting firing of the neuron. In other words, the state of a network with $n$ neurons at time $t$ consists of the $n$ bits $X_i[t]$, and the state at the next step depends only on these bits (and on the external inputs).
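The following Python sketch of the first (non-refractory) memoryless sub-variant illustrates this point: the update depends only on the current bit vector $X[t]$ and the external input (the parameter values and the choice of $\phi_i$ are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)

def memoryless_gl_step(X, E, W_off, V_R, phi):
    """One step of the mu_i = 0 sub-variant: the next firing pattern depends
    only on the current firings X[t] and the external inputs E[t]."""
    V_next = V_R * X + E + W_off.T @ X              # no potential carried over
    return (rng.random(X.shape[0]) < phi(V_next)).astype(int)

# The whole network state is just the n-bit vector X[t]:
n = 5
W_off = rng.normal(size=(n, n))
np.fill_diagonal(W_off, 0.0)                        # no self-synapses in this form
phi = lambda v: 1.0 / (1.0 + np.exp(-v))            # illustrative choice of phi_i
X = rng.integers(0, 2, n)
for _ in range(3):
    X = memoryless_gl_step(X, rng.normal(size=n), W_off, -np.ones(n), phi)
```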
The GL model was defined in 2013 by mathematicians Antonio Galves and Eva Löcherbach.[1] Its inspirations included Frank Spitzer's interacting particle systems and Jorma Rissanen's notion of a stochastic chain with memory of variable length. Another work that influenced the model was Bruno Cessac's study of the leaky integrate-and-fire model; Cessac was himself influenced by Hédi Soula.[4] Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model.
Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity.[5] The Galves–Löcherbach model distinguishes itself because it is inherently stochastic, incorporating probabilistic measures directly in the calculation of spikes. It is also relatively easy to apply computationally, with a good balance between cost and efficiency. It remains a non-Markovian model, since the probability of a given neuronal spike depends on the accumulated activity of the system since the last spike.
Further contributions to the model include studies of the hydrodynamic limit of the interacting neuronal system,[6] of its long-range behavior and of the prediction and classification of its behaviors as a function of the model parameters,[7] [8] and the generalization of the model to continuous time.[9]
The Galves–Löcherbach model was a cornerstone of the NeuroMat project.[10]