Saturday, February 4, 2017

System-size expansion and Gaussian moment closure for Quiescent-Active-Refractory model

Epilogue: These notes concern the system-size expansion and moment closure later published as "Neural field models for latent state inference: Application to large-scale neuronal recordings".

These notes derive the Kramers-Moyal system-size expansion and approximate equations for the evolution of the means and covariances of a single-compartment neural mass model with Quiescent (Q), Active (A), and Refractory (R) states. The derivations here are identical to the standard ones for the Susceptible (S), Infected (I), and Recovered (R) (SIR) model commonly used in epidemiology.

[get these notes as PDF]

Normally, in the system-size expansion we take the number of neurons N to be large. In this case we can expand the master equation in terms of the state change for a single neuron, which is a small parameter, O(1/N).

However, re-scaling the equations by the system size N only complicates the notation. It is equivalent to take the expansion in terms of single-neuron steps. These steps are O(1), but as N grows large they become increasingly small relative to the size of the total population, so the expansion becomes increasingly accurate.

The system

$$Q \xrightarrow{\rho_q} A \qquad Q + A \xrightarrow{\rho_e} A + A \qquad A \xrightarrow{\rho_a} R \qquad R \xrightarrow{\rho_r} Q$$

In retinal waves, the spontaneous reaction $Q \xrightarrow{\rho_q} A$ is rare. It is poorly modeled by a mean-field approach. Instead, we treat this reaction separately as an external input (Poisson, a.k.a. shot, noise).
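For concreteness, the full reaction scheme can be simulated exactly with Gillespie's stochastic simulation algorithm. This is a minimal sketch; here the spontaneous $Q \to A$ reaction is included as an ordinary reaction, and all rate constants and initial counts are illustrative, not values from these notes.

```python
import numpy as np

def gillespie_qar(Q, A, R, rq, re, ra, rr, T, rng):
    """Exact stochastic simulation of Q->A, Q+A->2A, A->R, R->Q."""
    # Stoichiometry: each row is the change in (Q, A, R) for one reaction.
    stoich = np.array([[-1,  1,  0],   # Q -> A        (rate rq*Q)
                       [-1,  1,  0],   # Q + A -> 2A   (rate re*Q*A)
                       [ 0, -1,  1],   # A -> R        (rate ra*A)
                       [ 1,  0, -1]])  # R -> Q        (rate rr*R)
    t, state = 0.0, np.array([Q, A, R])
    while t < T:
        rates = np.array([rq*state[0], re*state[0]*state[1],
                          ra*state[1], rr*state[2]])
        total = rates.sum()
        if total == 0:
            break                          # absorbing state: nothing can fire
        t += rng.exponential(1.0/total)    # waiting time to the next reaction
        j = rng.choice(4, p=rates/total)   # which reaction fires
        state = state + stoich[j]
    return state

rng = np.random.default_rng(0)
final = gillespie_qar(90, 10, 0, 0.001, 0.01, 1.0, 0.5, 10.0, rng)
```

Because every reaction conserves the total count, Q + A + R stays fixed along each sampled trajectory.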

Kramers-Moyal expansion

Consider the non-spatial case for illustration. The allowed transitions of the neural population can be described in terms of a master equation. For review, the possible reactions include creation and annihilation of cells to and from the Quiescent (Q), Active (A), and Refractory (R) states:

$$\partial_t \Pr(Q,A,R) = \Pr(Q{+}1,A{-}1,R)\,\rho_e (A{-}1)(Q{+}1) + \Pr(Q,A{+}1,R{-}1)\,\rho_a (A{+}1) + \Pr(Q{-}1,A,R{+}1)\,\rho_r (R{+}1) - \Pr(Q,A,R)\left[\rho_e A Q + \rho_a A + \rho_r R\right]$$

The terms, in order, reflect (1) the $Q + A \to A + A$ transition into this state, (2) the spontaneous $A \to R$ transition into this state, (3) the spontaneous $R \to Q$ transition into this state, and (4) the sum of all possible transitions away from this state.
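For small N, the master equation can be written out exactly as a linear system over all states (Q, A, R) with Q + A + R = N. The sketch below builds the generator matrix for the three reactions above; the rate constants are illustrative.

```python
import numpy as np

def qar_generator(N, re, ra, rr):
    """Build the master-equation generator L so that dP/dt = L @ P,
    over all states (Q, A, R) with Q + A + R = N."""
    states = [(q, a, N - q - a) for q in range(N + 1) for a in range(N + 1 - q)]
    index = {s: i for i, s in enumerate(states)}
    L = np.zeros((len(states), len(states)))
    for (q, a, r), i in index.items():
        # (reaction rate, state change) pairs, as in the master equation
        for rate, (dq, da, dr) in [(re*q*a, (-1, 1, 0)),
                                   (ra*a,   (0, -1, 1)),
                                   (rr*r,   (1, 0, -1))]:
            if rate > 0:
                j = index[(q + dq, a + da, r + dr)]
                L[j, i] += rate   # inflow to the target state
                L[i, i] -= rate   # outflow from this state
    return L, states

L, states = qar_generator(5, 0.1, 1.0, 0.5)
```

The columns of the generator sum to zero, so total probability is conserved under the evolution dP/dt = L P.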

We approximate Pr(Q,A,R) as Gaussian and derive terms for the evolution of its mean and covariance by a second-order Taylor expansion of the master equation, parameterized by the system size N, the number of interacting agents within a single well-mixed compartment. First, re-write the steps in the master equation explicitly in terms of the system size N, noting that Q + A + R = N.

$$\partial_t \Pr(Q,A,R) = \Pr(Q{+}1,A{-}1,R)\,\rho_e (A{-}1)(Q{+}1) + \Pr(Q,A{+}1,R{-}1)\,\rho_a (A{+}1) + \Pr(Q{-}1,A,R{+}1)\,\rho_r (R{+}1) - \Pr(Q,A,R)\left[\rho_e A Q + \rho_a A + \rho_r R\right]$$

Now, let $\epsilon(\varepsilon_q,\varepsilon_a,\varepsilon_r)$ be a second-order Taylor expansion of the step operator (denoting the identity operator as 1). This represents the change in the probability density Pr(Q,A,R) for a small change $(\varepsilon_q,\varepsilon_a,\varepsilon_r)$ in the Q, A, R fields.

$$\epsilon(\varepsilon_q,\varepsilon_a,\varepsilon_r) = 1 + \varepsilon_q \partial_Q + \varepsilon_a \partial_A + \varepsilon_r \partial_R + \tfrac12\left(\varepsilon_q^2 \partial_Q^2 + \varepsilon_a^2 \partial_A^2 + \varepsilon_r^2 \partial_R^2\right) + \left(\varepsilon_q \varepsilon_a \partial_Q \partial_A + \varepsilon_a \varepsilon_r \partial_A \partial_R + \varepsilon_r \varepsilon_q \partial_R \partial_Q\right)$$

We can approximate the first three terms of the master equation in terms of this expansion (abbreviating Pr(Q,A,R) as P)

$$\begin{aligned}
\Pr(Q{+}1,A{-}1,R)\,\rho_e(A{-}1)(Q{+}1) &\approx \epsilon(1,-1,0)\,\rho_e A Q P \\
\Pr(Q,A{+}1,R{-}1)\,\rho_a(A{+}1) &\approx \epsilon(0,1,-1)\,\rho_a A P \\
\Pr(Q{-}1,A,R{+}1)\,\rho_r(R{+}1) &\approx \epsilon(-1,0,1)\,\rho_r R P
\end{aligned}$$

Substituting the Taylor approximations into the master equation (for brevity, let $\rho_{qa} = \rho_e A$; note that this depends on A and is not constant):

$$\partial_t P = \epsilon(1,-1,0)\,\rho_{qa} Q P + \epsilon(0,1,-1)\,\rho_a A P + \epsilon(-1,0,1)\,\rho_r R P - \left[\rho_{qa} Q + \rho_a A + \rho_r R\right] P$$

Regrouping terms and dividing by P, we get:

$$\frac{\partial_t P}{P} = \left[\epsilon(1,-1,0) - 1\right]\rho_{qa} Q + \left[\epsilon(0,1,-1) - 1\right]\rho_a A + \left[\epsilon(-1,0,1) - 1\right]\rho_r R$$

Substituting the definition of $\epsilon(\varepsilon_q,\varepsilon_a,\varepsilon_r)$ and collecting terms leads to a second-order description of the master equation, which can be interpreted as the drift and diffusion terms for the evolution of the mean and the covariance of a multivariate Gaussian approximation of Pr(Q,A,R).

$$\frac{\partial_t P}{P} = \left[(\partial_Q - \partial_A) + \tfrac12\left(\partial_Q^2 + \partial_A^2 - 2\partial_Q\partial_A\right)\right]\rho_{qa} Q + \left[(\partial_A - \partial_R) + \tfrac12\left(\partial_A^2 + \partial_R^2 - 2\partial_A\partial_R\right)\right]\rho_a A + \left[(\partial_R - \partial_Q) + \tfrac12\left(\partial_R^2 + \partial_Q^2 - 2\partial_R\partial_Q\right)\right]\rho_r R$$

The first- and second-order terms of this expansion yield, respectively, the drift and diffusion terms of a Fokker-Planck equation. Denote the state (Q,A,R) as $x = (Q,A,R)$.

Collecting first-order terms yields a drift term μ(x)

$$-1\cdot\left[\partial_Q\left[\rho_r R - \rho_{qa} Q\right] + \partial_A\left[\rho_{qa} Q - \rho_a A\right] + \partial_R\left[\rho_a A - \rho_r R\right]\right] \quad\Rightarrow\quad \mu(x) = \begin{bmatrix} \rho_r R - \rho_{qa}(A)\,Q \\ \rho_{qa}(A)\,Q - \rho_a A \\ \rho_a A - \rho_r R \end{bmatrix}$$

Collecting second-order terms yields a diffusion term Σ(x)

$$\tfrac12\left(\partial_Q^2\left[\rho_{qa}Q + \rho_r R\right] + \partial_A^2\left[\rho_a A + \rho_{qa} Q\right] + \partial_R^2\left[\rho_r R + \rho_a A\right] - 2\left[\partial_Q\partial_A\,\rho_{qa}Q + \partial_A\partial_R\,\rho_a A + \partial_R\partial_Q\,\rho_r R\right]\right)$$
$$\Rightarrow\quad \Sigma(x) = \begin{bmatrix} \rho_{qa}Q + \rho_r R & -\rho_{qa}Q & -\rho_r R \\ -\rho_{qa}Q & \rho_a A + \rho_{qa}Q & -\rho_a A \\ -\rho_r R & -\rho_a A & \rho_r R + \rho_a A \end{bmatrix}$$
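The drift and diffusion terms translate directly into code. Below is a minimal sketch with illustrative rate constants ($\rho_{qa} = \rho_e A$ is evaluated at the current state). A useful sanity check: the components of $\mu(x)$ sum to zero and the rows of $\Sigma(x)$ sum to zero, both reflecting conservation of Q + A + R.

```python
import numpy as np

def drift(x, re, ra, rr):
    """Drift mu(x) from the first-order terms; rho_qa = re*A."""
    Q, A, R = x
    rqa = re * A
    return np.array([rr*R - rqa*Q,
                     rqa*Q - ra*A,
                     ra*A - rr*R])

def diffusion(x, re, ra, rr):
    """Diffusion Sigma(x) from the second-order terms."""
    Q, A, R = x
    rqa = re * A
    return np.array([[ rqa*Q + rr*R, -rqa*Q,        -rr*R        ],
                     [-rqa*Q,         ra*A + rqa*Q, -ra*A        ],
                     [-rr*R,         -ra*A,          rr*R + ra*A ]])

x = np.array([90.0, 10.0, 0.0])
mu, Sigma = drift(x, 0.01, 1.0, 0.5), diffusion(x, 0.01, 1.0, 0.5)
```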

The drift $\mu(x)$ and diffusion $\Sigma(x)$ terms define a Fokker-Planck equation that can be used to propagate a state-space model for the Q, A, R system forward in time:

$$\partial_t \Pr(x) = -\sum_i \partial_{x_i}\left[\mu_i(x)\Pr(x)\right] + \tfrac12\sum_{ij}\partial^2_{x_i x_j}\left[\Sigma_{ij}(x)\Pr(x)\right]$$

Reformulating as a Langevin equation

$$x = (Q,A,R)^\top, \qquad dx = \mu(x)\,dt + Q(x)\,dW,$$

where $Q(x)$ is a matrix square root of $\Sigma(x)$,

$$Q(x)\,Q(x)^\top = \Sigma(x),$$

for example

$$Q(x) = \begin{bmatrix} \sqrt{\rho_{qa}Q} & 0 & -\sqrt{\rho_r R} \\ -\sqrt{\rho_{qa}Q} & \sqrt{\rho_a A} & 0 \\ 0 & -\sqrt{\rho_a A} & \sqrt{\rho_r R} \end{bmatrix}.$$

To normalize by the number of neurons, multiply $\mu(x)$ and $Q(x)$ by $1/N$.
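The Langevin equation can be integrated with the Euler-Maruyama scheme, using the drift $\mu(x)$ and the noise matrix $Q(x)$ written out above. This is a sketch under illustrative parameters; a crude nonnegativity clip is included because the diffusion approximation can wander slightly negative where the master equation cannot.

```python
import numpy as np

def drift(x, re, ra, rr):
    """Drift mu(x); rho_qa = re*A."""
    Q, A, R = x
    rqa = re * A
    return np.array([rr*R - rqa*Q, rqa*Q - ra*A, ra*A - rr*R])

def noise_matrix(x, re, ra, rr):
    """One matrix square root of Sigma(x); columns correspond to reactions."""
    Q, A, R = x
    rqa = re * A
    return np.array([[ np.sqrt(rqa*Q),  0.0,            -np.sqrt(rr*R)],
                     [-np.sqrt(rqa*Q),  np.sqrt(ra*A),   0.0          ],
                     [ 0.0,            -np.sqrt(ra*A),   np.sqrt(rr*R)]])

def euler_maruyama(x0, re, ra, rr, dt, nsteps, rng):
    x = np.array(x0, float)
    for _ in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt), size=3)
        x = x + drift(x, re, ra, rr)*dt + noise_matrix(x, re, ra, rr) @ dW
        x = np.maximum(x, 0.0)  # crude clip; keeps the square roots real
    return x

rng = np.random.default_rng(1)
x = euler_maruyama([90.0, 10.0, 0.0], 0.01, 1.0, 0.5, 1e-3, 1000, rng)
```

Note that each column of $Q(x)$ sums to zero, so apart from the clipping step the scheme conserves Q + A + R exactly.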

Moment closure

So far, we've used a system-size expansion to derive a diffusive approximation of the dynamics. This can be used to simulate Pr(x) in time, as a partial differential equation. It can also be used to sample trajectories from the model as a Langevin equation. We next discuss various approximations to the time evolution of the moments of the state distribution.

Using the drift and diffusion terms above directly gives the so-called linear noise approximation (LNA) to the state-space dynamics. This uses the deterministic rate equations for the evolution of the mean.

In state-space inference models, fluctuations cause the densities of quiescent and active neurons to become anti-correlated. This has the effect of stabilizing the system, but it is not captured by the LNA. Such correlations can be captured at second order by treating the state distribution as Gaussian and neglecting higher cumulants. This is the so-called "cumulant neglect" or Gaussian moment closure.

The time-evolution of the first moment is derived by taking the time derivative of the expected values of the states over Pr(Q,A,R)

$$\begin{aligned}
\partial_t \langle Q \rangle = \langle \partial_t Q \rangle &= \rho_r \langle R \rangle - \rho_q \langle Q \rangle - \rho_e \langle QA \rangle \\
\partial_t \langle A \rangle = \langle \partial_t A \rangle &= \rho_q \langle Q \rangle + \rho_e \langle QA \rangle - \rho_a \langle A \rangle \\
\partial_t \langle R \rangle = \langle \partial_t R \rangle &= \rho_a \langle A \rangle - \rho_r \langle R \rangle
\end{aligned}$$

Note that the first moment couples to the second moment through pairwise interaction terms like $\langle QA \rangle$.
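The crudest way to close these mean equations is the mean-field replacement $\langle QA \rangle \to \langle Q \rangle \langle A \rangle$, which gives the deterministic rate equations used for the mean in the LNA. A sketch with illustrative parameters (forward Euler is used only for simplicity):

```python
import numpy as np

def mean_field_rhs(m, rq, re, ra, rr):
    """Rate equations for the means with the closure <QA> ~ <Q><A>."""
    Q, A, R = m
    QA = Q * A  # mean-field replacement of the pairwise moment
    return np.array([rr*R - rq*Q - re*QA,
                     rq*Q + re*QA - ra*A,
                     ra*A - rr*R])

def integrate(m0, rq, re, ra, rr, dt, nsteps):
    m = np.array(m0, float)
    for _ in range(nsteps):
        m = m + dt * mean_field_rhs(m, rq, re, ra, rr)  # forward Euler step
    return m

m = integrate([90.0, 10.0, 0.0], 1e-4, 0.01, 1.0, 0.5, 1e-3, 5000)
```

The right-hand sides sum to zero, so the integrated means conserve the total population.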

The time evolution of the second moments can again be derived by taking the time derivative of expectations $\langle x_1 x_2 \rangle$, where $x_i \in \{Q,A,R\}$, for example $\langle QA \rangle$ or $\langle AR \rangle$:

$$\begin{aligned}
\partial_t \langle QA \rangle = \langle \partial_t (QA) \rangle &= \langle Q\,\partial_t A \rangle + \langle A\,\partial_t Q \rangle \\
&= \langle Q\left(QA\rho_e + Q\rho_q - A\rho_a\right) \rangle + \langle A\left(R\rho_r - QA\rho_e - Q\rho_q\right) \rangle \\
&= \rho_e \langle Q^2 A \rangle + \rho_q \langle Q^2 \rangle - \rho_a \langle QA \rangle + \rho_r \langle AR \rangle - \rho_e \langle QA^2 \rangle - \rho_q \langle QA \rangle
\end{aligned}$$
$$\begin{aligned}
\partial_t \langle AR \rangle = \langle \partial_t (AR) \rangle &= \langle R\,\partial_t A \rangle + \langle A\,\partial_t R \rangle \\
&= \langle R\left(QA\rho_e + Q\rho_q - A\rho_a\right) \rangle + \langle A\left(A\rho_a - R\rho_r\right) \rangle \\
&= \rho_e \langle QAR \rangle + \rho_q \langle QR \rangle - \rho_a \langle AR \rangle + \rho_a \langle A^2 \rangle - \rho_r \langle AR \rangle
\end{aligned}$$

Gaussian moment closure makes the simplifying assumption that third-order terms like $\langle x_1 x_1 x_2 \rangle$ can be replaced with the values one would expect if the state distribution were a Gaussian with parameters given by the first two moments.
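For a Gaussian, Isserlis' theorem gives the third raw moment in terms of the first two: $\langle x_1 x_2 x_3 \rangle = \langle x_1 \rangle \langle x_2 x_3 \rangle + \langle x_2 \rangle \langle x_1 x_3 \rangle + \langle x_3 \rangle \langle x_1 x_2 \rangle - 2\langle x_1 \rangle \langle x_2 \rangle \langle x_3 \rangle$. A quick Monte Carlo sanity check (the mean and covariance here are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, 2.0, 3.0])
C = np.array([[1.0, 0.3, 0.1],
              [0.3, 2.0, 0.4],
              [0.1, 0.4, 1.5]])
x = rng.multivariate_normal(mu, C, size=1_000_000)

# Empirical third raw moment <x1*x2*x3>
emp = np.mean(x[:, 0] * x[:, 1] * x[:, 2])

# Gaussian closure: express it through first and second raw moments
m2 = lambda i, j: C[i, j] + mu[i]*mu[j]   # raw second moment <xi*xj>
closed = (mu[0]*m2(1, 2) + mu[1]*m2(0, 2) + mu[2]*m2(0, 1)
          - 2*mu[0]*mu[1]*mu[2])
```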
