Thursday, June 20, 2019

Exploring a regular-spiking limit of autoregressive point-process models with an absolute refractory period

These notes explore a deterministic limit of the Autoregressive Point-Process Generalized Linear Model (AR-PPGLM) of a spiking neuron. We consider a Poisson point process that is either strongly driven or strongly suppressed, and take a limit in which the spiking output becomes deterministic (and instantaneous rates diverge). This deterministic limit is of interest for clarifying how AR-PPGLM models relate to deterministic models of spiking.

[get as PDF]

Introduction

Point-process models are used in neuroscience to understand what drives neurons to spike. Typically, these take the form of a Poisson process with a time-varying firing rate λ(t) that is predicted from experimentally measured variables and the history of spiking activity.

The linear-nonlinear-Poisson formulation of point-process models doesn't capture the physiological conductances and currents that cause spiking. For this reason, these models typically only emulate neuronal activity seen in the training data. These models also treat spiking as a random process, but spiking can be quite regular at times. Models fit to regular spiking data learn to strongly increase or suppress their firing rate to capture this regularity, which can lead to diverging parameters or rates.

In Rule and Sanguinetti (2018), we explored these models mathematically, and showed one way to calculate how their mean rates and correlations evolve in time. These approximations assume that the underlying process is stochastic, with small Gaussian fluctuations. Considering what happens as we approach the regular-spiking limit is therefore a stress-test of these models: it reveals how (or whether) they start to break down.

An autoregressive point-process with an absolute refractory period

We model neuronal spike trains as a sequence of impulses (Dirac δ distributions) in time:

$$y(t) = \sum_i \delta(t - t_i),$$

where tᵢ are the spike times. Autoregressive point-process GLMs predict the instantaneous firing rate λ(t) of a neuron as a linear-nonlinear function of spiking history h(t,τ) and external input u(t). One way to write a model for the output spike train y(t) of a neuron is:

$$
\begin{aligned}
\lambda(t) &= \phi\left(\mathcal{H}h(t) + u(t)\right) \\
y(t) &\sim \operatorname{Poisson}\left(\lambda(t)\,dt\right) \\
h(t,\tau) &= y(t-\tau).
\end{aligned}
\tag{1}
$$

Here, ϕ(·) is a rectifying nonlinearity, reflecting that the firing rate must be positive, and H is a spiking history filter. The function ϕ can be any rectifying nonlinearity, although it is usually exponential, logistic, or a rectified polynomial. Above, we have denoted the history filter as a linear operator, defined for a single neuron as:

$$\mathcal{H}h(t) = \lim_{\epsilon\to 0}\int_{\epsilon}^{\infty}\eta(\tau)\,h(t,\tau)\,d\tau,$$

where η(τ) are the history filter weights. The limit ϵ→0 simply means that we exclude the present time-point (τ=0) from the spiking history. In single neurons, the spike history filter captures phenomena like resonance, bursting, or a relative refractory period. In population models, it also captures how different neurons excite or inhibit one another.

After a neuron spikes, there is typically a brief 5-10 ms time window in which it cannot spike again, called the absolute refractory period. The firing rate is zero during this time. Depending on the choice of nonlinearity ϕ, it can be hard for the linear weights to drive the rate to zero. This can be fixed by adding a gating term that goes to zero if the cell has spiked recently:

$$\lambda(t) = g[h(t)]\,\phi\left(\mathcal{H}h(t) + u(t)\right)\tag{2}$$

We consider the case of a unit that is constrained to fire at most once every unit interval. That is, we express time in units such that the absolute refractory period is Δt=1. The gating function g[h(t)] is therefore defined as:

$$g[h(t,\tau)] = 1 - \lim_{\epsilon\to 0}\int_{\epsilon}^{1}h(t,\tau)\,d\tau = 1 - \mathcal{R}h(t,\tau).\tag{3}$$

This sets the rate to zero if a spike has occurred within the past interval τ∈(0,1]; here, R denotes this integral of the history over the refractory window.
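To make this concrete, here is a minimal discrete-time sketch of the gated model in equations (1)-(3); the binary history encoding, the exponential nonlinearity, and the parameter names are illustrative choices, not fixed by the model:

```python
import numpy as np

def conditional_intensity(spikes, eta, u, n_refractory):
    """Gated conditional intensity lambda(t), eqs. (1)-(3).

    spikes       : binary spike-history array; spikes[k] = 1 if the neuron
                   fired k+1 time-bins in the past (most recent bin first)
    eta          : history-filter weights eta(tau) on the same grid
    u            : external input u(t) at the current time
    n_refractory : number of bins covering the absolute refractory period
    """
    drive = float(np.dot(eta, spikes))                  # H h(t), discretized
    gate = 0.0 if spikes[:n_refractory].any() else 1.0  # g[h(t)], eq. (3)
    return gate * np.exp(drive + u)                     # phi = exp, eq. (2)
```

Evaluating the gate on strictly past bins mirrors the ε→0 exclusion of τ=0 from the spiking history.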

Solution for steady-state firing rate in a simplified scenario

First, we consider a mean-field model of the mapping between the external drive and the output firing rate of a Poisson neuron with an absolute refractory period. Let λ0 be the firing-rate response to a constant input for a neuron without the absolute refractory period. We neglect spiking entirely and assume that all dynamics are summarized by an average rate λ(t) that is constant in time. If a steady state is stable, we can solve for the steady-state rate with absolute refractoriness, λ, from the self-consistency relationship:

$$\lambda = \left[1 - \int_{\tau\in(0,1]}h(\tau)\,d\tau\right]\lambda_0\,e^{\lambda\bar{\eta}} = \left[1 - \lambda\right]\lambda_0\,e^{\lambda\bar{\eta}},\tag{4}$$

where η̄ is the average value of the history filter. There is no closed-form solution to the above equation. If we ignore history effects for now (or assume that they integrate to zero), we can solve the simpler relationship:

$$\lambda = \left[1 - \lambda\right]\lambda_0,\tag{5}$$

which solves to

$$\lambda = \frac{\lambda_0}{1+\lambda_0}.\tag{6}$$

In the case of the Poisson GLM, our nonlinearity is exponential, ϕ=exp, and this reduces to a logistic function σ(x) = 1/[1 + exp(−x)]:

$$\lambda = \frac{\lambda_0}{1+\lambda_0} = \left[1+\lambda_0^{-1}\right]^{-1} = \left[1+e^{-\log(\lambda_0)}\right]^{-1} = \sigma\left[\log(\lambda_0)\right]\tag{7}$$

In this simplified scenario, a Poisson GLM with an absolute refractory period behaves much like a discrete-time Bernoulli GLM, with a logistic sigmoid nonlinearity and binary spiking events yᵢ ∈ {0,1} (Figure 1a). From a practical standpoint, this confirms the intuition that a good way to fit spike-train data is to bin the spike train at a time resolution equal to the absolute refractory period and fit a Bernoulli GLM. However, in the next section we will show that the observation model is not identical to the discrete-time Bernoulli GLM (Figure 1b).

In this scenario, the firing rate approaches λ=1 as the drive becomes large, λ0→∞. In a spiking neuron, this corresponds to regular spiking with an inter-spike interval of Δt=1. The large-input limit with absolute refractoriness therefore yields a regular-spiking limit of the Poisson GLM spiking model (Figure 1c).
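As a quick numerical check of equations (4) and (6), the sketch below verifies the closed-form steady state and finds the self-consistent rate by bisection; the mean history weight η̄ = −0.5 is an arbitrary illustration, not a value from the text:

```python
import numpy as np

# Closed-form steady state without history effects, eq. (6):
lam0 = 100.0
print(lam0 / (1.0 + lam0))        # ~0.990; approaches 1 as lam0 grows

# With a nonzero mean history weight eta_bar, eq. (4) has no closed form,
# but the residual f changes sign on [0, 1], so bisection finds the root:
eta_bar = -0.5                    # illustrative value only
f = lambda lam: lam - (1.0 - lam) * lam0 * np.exp(lam * eta_bar)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
print(0.5 * (lo + hi))            # self-consistent steady-state rate
```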

Sampling from this model

Sampling from this model with a finite step size differs slightly from what one would ordinarily do with an autoregressive Poisson process. Ordinarily, one samples y ∼ Poisson(λΔt). However, because of refractoriness, the probability of observing counts y>1 is zero for Δt shorter than the refractory period.

Instead, we should ask what the probability of not spiking is within the time window Δt. The probability of a zero count in a Poisson process is exp(−λΔt). The probability of emitting a spike is therefore p = 1 − exp(−λΔt). Equivalently, one may sample Poisson counts, but replace any multiple-spike events with single spikes. Note that this nonlinearity is similar, but not identical, to the sigmoidal nonlinearity that one would use for a Bernoulli GLM.
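A minimal sketch (with arbitrary parameter values) makes the difference between the two sampling schemes explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dt, n = 50.0, 0.02, 100_000

# Naive Poisson sampling can emit illegal counts y > 1 within one bin:
naive = rng.poisson(lam * dt, size=n)
print((naive > 1).mean())              # fraction of multi-spike bins

# Refractory-consistent sampling: spike with the complement of the
# Poisson zero-count probability, so y is always 0 or 1:
p = 1.0 - np.exp(-lam * dt)
spikes = (rng.random(n) < p).astype(int)
print(naive.mean(), spikes.mean(), p)  # spike probability p < lam * dt
```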

This modified sampling is necessary when treating approximations of the regular-spiking limit, since instantaneous conditional intensities diverge toward infinity. The problem is reminiscent of the issue of infinite populations in infinitesimal volumes encountered when taking limits in neural field theories, and is solved by explicitly considering activity within a window of defined duration Δt.

It's also important to account for how refractoriness changes the variance of this process. Ordinarily, samples with rate λ and time step Δt can be thought of as Poisson with μ = σ² = λΔt. However, the strong inhibition associated with absolute refractoriness reduces this variance. The variance can instead be modeled as Bernoulli, with σ² = p(1−p).
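A short check (again with arbitrary parameters) confirms that the variance of the 0/1 samples follows the Bernoulli prediction rather than the Poisson one:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, dt, n = 50.0, 0.02, 200_000
p = 1.0 - np.exp(-lam * dt)
spikes = (rng.random(n) < p).astype(int)

print(spikes.var())     # empirical variance of the 0/1 samples
print(p * (1.0 - p))    # Bernoulli prediction: sigma^2 = p(1 - p)
print(lam * dt)         # Poisson prediction sigma^2 = lam * dt is too large
```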

Figure 1: Sampling from a refractory-Poisson autoregressive GLM. (a) For a constant input λ0, the mean rate of a Poisson model with an absolute refractory period of τ=1 behaves like a continuous-time version of a Bernoulli process. The output firing rate is a logistic sigmoid function of the log-input ln λ0. (b) The sampling and observation model for the refractory-Poisson model should take into account both the continuous-time nature and the limited firing rate. The probability of observing a spike is given not by a Bernoulli or Poisson model, but rather by Pr(y=1) = 1 − exp[−λΔt]. (c) For low rates, the refractory-Poisson model still exhibits some randomness. Top: spiking output from twenty random samples of the refractory-Poisson model driven by input λ0=10. Bottom: spiking output for strong drive, λ0=100, showing more regular spiking.

Continuous approximations

In Rule and Sanguinetti (2018) we used a Langevin approximation to autoregressive point-processes, which approximates the Poisson noise as Gaussian:

$$dN = \lambda\,dt + \sqrt{\lambda}\,dW,$$

where dN denotes the increment of the count process (the integral of y(t) in time), and dW denotes the increment of a standard Wiener process. Normally, one integrates the resulting stochastic differential equation using the Euler-Maruyama scheme:

$$\Delta N = \lambda\,\Delta t + \sqrt{\lambda\,\Delta t}\;\xi,\qquad \xi\sim\mathcal{N}(0,1).$$

However, this diffusive approximation is wrong in the case of the refractory Poisson process. The refractory period reduces variance, and spiking behaves more like a Bernoulli variable with σ² = p(1−p), where p = 1 − exp(−λΔt). For any finite Δt, we instead get the integration update:

$$\Delta N = \left[1 - e^{-\lambda\Delta t}\right] + \sqrt{p(1-p)}\;\xi = \left[1 - e^{-\lambda\Delta t}\right] + \sqrt{e^{-\lambda\Delta t}\left(1 - e^{-\lambda\Delta t}\right)}\;\xi
$$

This reduces to the original approximation when λΔt is small, but we cannot assume this in the deterministic limit, since the instantaneous rate λ can diverge. To obtain a process that models regular spiking, we simultaneously take the limits λ→∞ and Δt→0. The result does not behave like an Itô stochastic differential equation (nor like the Complex Chemical Langevin Equation (CCLE) of Schnoerr, Sanguinetti, and Grima, 2014), but the process is well-defined for all finite Δt and can be numerically integrated.
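A sketch of the two update rules (with illustrative parameters) shows that the modified increment stays bounded even as λ diverges:

```python
import numpy as np

def step_standard(lam, dt, rng):
    """Standard Euler-Maruyama increment: diverges as lam grows."""
    return lam * dt + np.sqrt(lam * dt) * rng.standard_normal()

def step_refractory(lam, dt, rng):
    """Modified increment with Bernoulli-like mean and variance.

    The mean p = 1 - exp(-lam * dt) and variance p(1 - p) both stay
    bounded, so the update is well-defined even as lam -> infinity.
    """
    p = 1.0 - np.exp(-lam * dt)
    return p + np.sqrt(p * (1.0 - p)) * rng.standard_normal()

rng = np.random.default_rng(0)
for lam in [1.0, 100.0, 1e6]:
    print(lam, step_standard(lam, 0.01, rng), step_refractory(lam, 0.01, rng))
```

For small λΔt, p ≈ λΔt and p(1−p) ≈ λΔt, so the modified update recovers Euler-Maruyama; for large λΔt the mean increment saturates at one spike per refractory period.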

Moment equations

In " Moment equations for an autoregressive point process with an absolute refractory period " (February 5, 2019; online notes ), we derived equations for how the mean and covariance evolve in an AR-PPGLM with absolute refractoriness. The moment equations are much simpler for a neuron with steady-state drive, and for which all spiking history effects are captured by an absolute refractory period. For fixed input λ0 and in the absence of history effects (η(τ)=0), the refractory Poisson model is linear. The time evolution of the history process h(t,τ)=y(tτ) can be written as:

$$
\begin{aligned}
\partial_t h &= -\partial_\tau h + \delta_{\tau=0}\,y \\
y &\sim \operatorname{Poisson}(\lambda\,dt) \\
\lambda &= (1 - \mathcal{R}h)\,\lambda_0
\end{aligned}
\tag{8}
$$

Because the interaction between the firing rate and the spiking history is linear, the evolution of the moments (mean μ and covariance Σ) of the process history can be written exactly, without invoking any approximations or moment closures:

$$
\begin{aligned}
\mu_y &= (1 - \mathcal{R}\mu_h)\,\lambda_0\,dt \\
\partial_t \mu &= -\partial_\tau \mu + \delta_{\tau=0}\,\mu_y \\
\partial_t \Sigma &= J\Sigma + \Sigma J^\top + \delta_{\tau=0}\,\mu_y\,\delta_{\tau=0}^\top \\
J &= -\partial_\tau - \delta_{\tau=0}\,\mathcal{R}\,\lambda_0\,dt
\end{aligned}
\tag{9}
$$
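A minimal discrete-time integration of equations (9) might look as follows; the Bernoulli form of the new bin's variance and all parameter values are assumptions of this sketch, not prescribed by the notes:

```python
import numpy as np

lam0, dt = 25.0, 0.02
K = int(round(1.0 / dt))      # history bins covering tau in (0, 1]
mu = np.zeros(K)              # mean spike history, mu[k] = E[h(t, tau_k)]
Sig = np.zeros((K, K))        # covariance of the spike history

for _ in range(2000):
    mu_y = (1.0 - mu.sum()) * lam0 * dt      # mean of the new spike bin
    var_y = mu_y * (1.0 - mu_y)              # Bernoulli variance (assumption)
    cov_yh = -lam0 * dt * Sig.sum(axis=0)    # Cov(new spike, old history)
    # Shift the history one bin into the past and insert the new moments:
    mu = np.concatenate(([mu_y], mu[:-1]))
    Sig = np.block([[np.array([[var_y]]), cov_yh[None, :-1]],
                    [cov_yh[:-1, None],  Sig[:-1, :-1]]])

print(mu.sum())   # steady-state spikes per unit interval; compare eq. (6)
```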

Figure 2 illustrates that these moment equations match the true process, and also checks the accuracy of the continuous approximation derived earlier.

Figure 2: Solution for the mean and covariance of the refractory Poisson model. In this simulation, the drive is λ0=25, the absolute refractory period is τ=1, and the time step is Δt=0.02. (a) Single-time marginal means and variances. The black trace and shaded region represent the theoretical mean and variance, computed by integrating the moment equations. Green traces reflect ±1σ computed by sampling the refractory Poisson model. Teal traces reflect ±1σ sampled from a continuous approximation with a Bernoulli-like, rather than Poisson, noise model. (b) Similar to (a), but here means and variances have been averaged over one refractory period. The refractory period reduces the variability of the process, so the average fluctuations over a short time period are smaller than one might expect given the single-time marginal variances of the process.

Overall

These notes examined one way to model an absolute refractory period, and its role in reducing neuronal variability, in GLM point-process models. We showed that sampling from the process must be modified to account for the influence of the refractory period, which enforces a sort of "law of conservation of spikes" in time. The Langevin approximation to the refractory Poisson model is also not straightforward, but for any finite Δt one can derive an integration scheme that approximates the original process. The resulting process resembles a continuous-time Bernoulli process, but with slightly different nonlinearities. In the case of steady-state input, the moment equations for the refractory Poisson model are linear and agree with Monte Carlo simulations. It might be interesting to see whether other types of highly regular spiking observed in neurons, such as integrators or resonators with variable frequencies, can be described in a consistent way in terms of AR-PPGLMs.

