Thursday, June 20, 2019

Exploring a regular-spiking limit of autoregressive point-process models with an absolute refractory period

These notes explore a deterministic limit of the Autoregressive Point-Process Generalized Linear Model (AR-PPGLM) of a spiking neuron. We consider a Poisson point process that is either strongly driven or strongly suppressed, and take a limit where spiking output becomes deterministic (and instantaneous rates diverge). This deterministic limit is of interest in clarifying how AR-PPGLM models relate to deterministic models of spiking.

[get as PDF]

Introduction

Point-process models are used in neuroscience to understand what drives neurons to spike. Typically, these take the form of a Poisson process with a time-varying firing rate $\lambda(t)$ that is predicted from experimentally measured variables and the history of spiking activity.

The linear-nonlinear-Poisson formulation of point-process models doesn't capture the physiological conductances and currents that cause spiking. For this reason, these models typically only emulate neuronal activity similar to that seen in the training data. These models also treat spiking as a random process, but spiking can be quite regular at times. Models fit to regular spiking data learn to strongly increase or suppress their firing rate to capture this regularity, which can lead to diverging parameters or rates.

In Rule and Sanguinetti (2018), we explored these models mathematically, and showed one way to calculate how their mean rates and correlations evolve in time. These approximations assume that the underlying process is stochastic, with small Gaussian fluctuations. Considering what happens as we approach the regular-spiking limit is therefore a stress-test of these approximations, revealing how (or whether) they start to break down.

An autoregressive point-process with an absolute refractory period

We model neuronal spike trains as a sequence of impulses (Dirac $\delta$ distributions) in time:

$$ y(t) = \textstyle \sum_i \delta(t-t_i), $$

where $t_i$ are the spike times. Autoregressive point-process GLMs predict the instantaneous firing rate $\lambda(t)$ of a neuron as a linear-nonlinear function of spiking history $h(t,\tau)$ and external input $u(t)$. One way to write a model for the output spike train $y(t)$ of a neuron is:

\begin{equation}\begin{aligned} \lambda(t) &= \phi(\text H h(t) + u(t)) \\ y(t) &\sim \operatorname{Poisson}(\lambda(t) \cdot d t) \\ h(t,\tau) &= y(t-\tau). \end{aligned}\end{equation}

Here, $\phi(\cdot)$ is a rectifying nonlinearity that reflects that the firing rate must be positive, and $\text H$ is a spiking history filter. The function $\phi$ can be any rectifying nonlinearity, although it is usually exponential, logistic, or rectified polynomial. Above, we have denoted the history filter as a linear operator, defined for a single neuron as:

$$ \text H h(t) = \lim_{\epsilon \downarrow 0} \textstyle \int_{\epsilon}^{\infty} \eta(\tau) h(t,\tau) d\tau, $$

where $\eta(\tau)$ are the history filter weights. The limit $\epsilon \downarrow 0$ simply means that we exclude the present time-point ($\tau = 0$) from the spiking history. In single neurons, the spike history filter captures phenomena like resonance, bursting, or a relative refractory period. In population models, it also captures how different neurons excite or inhibit one another.
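In discrete time, this history operator reduces to a dot product between the filter weights and the recent spike history. A minimal sketch follows; the function name and the example decaying inhibitory filter are illustrative, not taken from any fitted model:

```python
import numpy as np

# Discrete-time sketch of the history operator H h(t): with bin width dt,
# the integral of eta(tau) * h(t,tau) becomes a dot product between the
# filter weights and the recent spike history (most recent bin first).
# The present bin (tau = 0) is excluded, matching the epsilon -> 0 limit.

def apply_history_filter(eta, spikes, dt):
    """eta: filter weights at lags dt, 2*dt, ...; spikes: 0/1 history with
    spikes[0] = most recent past bin. Returns the filtered drive H h(t)."""
    K = len(eta)
    return float(np.dot(eta, spikes[:K])) * dt

dt = 0.1
lags = dt * np.arange(1, 11)              # lags tau = 0.1 ... 1.0
eta = -2.0 * np.exp(-lags / 0.3)          # e.g. a decaying inhibitory filter
spikes = np.zeros(10)
spikes[2] = 1.0                           # one spike three bins in the past
drive = apply_history_filter(eta, spikes, dt)
```

Here a recent spike produces a negative (suppressive) drive, as one would expect from an inhibitory history filter.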

After a neuron spikes, there is a brief time window, typically a few milliseconds, in which it cannot spike again, called the absolute refractory period. The firing rate is zero during this time. Depending on the choice of nonlinearity $\phi$, it can be hard for the linear weights to drive the rate to zero. This can be fixed by adding a gating term that goes to zero if the cell has spiked recently:

\begin{equation}\begin{aligned} \lambda(t) &= g[h(t)] \cdot \phi(\text H h(t) + u(t)) \end{aligned}\end{equation}

We consider the case of a unit that is constrained to fire at most once per unit interval. That is, we express time in units such that the absolute refractory period is $\Delta t=1$. The gating function $g[h(t)]$ is therefore defined as:

\begin{equation}\begin{aligned} g[h(t,\tau)] &= 1 - \lim_{\epsilon \downarrow 0} \textstyle \int_{\epsilon}^{1} h(t,\tau) d\tau. \\&= 1 - \text R h(t,\tau) \end{aligned}\end{equation}

This sets the rate to zero if a spike has occurred within the past unit interval, $\tau\in(0,1]$.
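In discrete time with bin width $\Delta t$, the gate is simply an indicator that no spike has occurred in the last $1/\Delta t$ bins. A small sketch, with illustrative names:

```python
import numpy as np

# Discrete-time sketch of the gate g[h(t)] = 1 - R h(t): with an absolute
# refractory period of 1 time unit and bin width dt, R h sums spikes over
# the last K = 1/dt bins, so the gate is 0 if any spike fell in that
# window and 1 otherwise.

def refractory_gate(spikes, dt):
    """spikes: 0/1 history with spikes[0] = most recent past bin."""
    K = int(round(1.0 / dt))
    return 1.0 - min(1.0, float(np.sum(spikes[:K])))

dt = 0.1
quiet = np.zeros(20)                     # no spikes at all
recent = np.zeros(20); recent[3] = 1.0   # spike 0.4 time units ago (gated)
old = np.zeros(20); old[15] = 1.0        # spike 1.6 time units ago (not gated)
```

Multiplying this gate into the rate $\lambda(t)$ enforces the refractory period regardless of how strongly the linear-nonlinear stage drives the cell.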

Solution for steady-state firing rate in a simplified scenario

First, we look at a mean-field model of the mapping between the external drive and output firing rate of a Poisson neuron with an absolute refractory period. Let $\lambda_0$ be the firing-rate response to a constant input for a neuron without the absolute refractory period. We neglect spiking entirely and assume that all dynamics are summarized by an average rate $\lambda(t)$ that is constant in time. If a steady state is stable, we can solve for the steady-state rate with absolute refractoriness, $\lambda$, from the relationship:

\begin{equation}\begin{aligned} \lambda &= \left[1-\textstyle\int_{\tau\in(0,1]} h(\tau)\right] \lambda_0 e^{\lambda \left<\eta\right>} \\&= \left[1-\lambda\right] \lambda_0 e^{\lambda \left<\eta\right>}. \end{aligned}\end{equation}

where $\left<\eta\right>$ is the average value of the history filter. There is no closed-form solution to the above equation. If we ignore history effects for now (or assume that they integrate to zero), we can solve the simpler relationship:

\begin{equation}\begin{aligned} \lambda &= \left[1-\lambda\right]\lambda_0, \end{aligned}\end{equation}

which solves to \begin{equation}\begin{aligned} \lambda = \frac {\lambda_0} {1 + \lambda_0}. \end{aligned}\end{equation}
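The full fixed-point condition with history effects, $\lambda = (1-\lambda)\lambda_0 e^{\lambda\left<\eta\right>}$, has no closed form, but it is easy to solve numerically: the residual is negative at $\lambda=0$ and positive at $\lambda=1$, so bisection on $[0,1]$ suffices. A minimal sketch (the helper name is ours); with $\left<\eta\right>=0$ it recovers the closed-form $\lambda_0/(1+\lambda_0)$ above:

```python
import math

# Bisection sketch for the steady-state rate: solve
#   f(lam) = lam - (1 - lam) * lam0 * exp(lam * eta_bar) = 0
# on [0, 1]. f(0) = -lam0 < 0 and f(1) = 1 > 0, so a root exists.

def steady_state_rate(lam0, eta_bar, iters=80):
    f = lambda lam: lam - (1.0 - lam) * lam0 * math.exp(lam * eta_bar)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid            # root lies above the midpoint
        else:
            hi = mid            # root lies at or below the midpoint
    return 0.5 * (lo + hi)
```

As a sanity check, a net-inhibitory history filter ($\left<\eta\right> < 0$) lowers the steady-state rate relative to the history-free case.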

In the case of the Poisson GLM, our nonlinearity is exponential $\phi=\exp$, and this reduces to a logistic function $\sigma(x) = [1+e^{-x}]^{-1}$:

\begin{equation}\begin{aligned} \lambda &= \frac{\lambda_0}{1 + \lambda_0} \\&= \left[1 + {\lambda_0}^{-1}\right]^{-1} \\&= \left[1 + e^{-\log(\lambda_0)}\right]^{-1} \\&= \sigma\left[\log(\lambda_0)\right] \end{aligned}\end{equation}

In this simplified scenario, a Poisson GLM with an absolute refractory period behaves much like a discrete-time Bernoulli GLM, with a logistic sigmoid nonlinearity and binary spiking events $y_i\in\{0,1\}$ (Figure 1a). From a practical standpoint, this confirms the intuition that a good way to fit spike-train data is to bin the spike train at a time resolution equal to the absolute refractory period, and fit a Bernoulli GLM. However, in the next section we will show that the observation model is not identical to the discrete-time Bernoulli GLM (Figure 1b).

In this scenario, the firing rate approaches $\lambda=1$ as the drive becomes large $\lambda_0\to\infty$. In a spiking neuron, this corresponds to regular spiking with an inter-spike interval of $\Delta t=1$. The large-input limit with absolute refractoriness therefore yields a regular spiking limit of the Poisson GLM spiking model (Figure 1c).

Sampling from this model

Sampling from this model with a finite step-size differs slightly from what one would ordinarily do with an autoregressive Poisson process. Ordinarily, one samples $y{\sim}\operatorname{Poisson}(\lambda\cdot\Delta t)$. However, because of refractoriness, the probability of observing counts $y>1$ is zero, for $\Delta t$ shorter than the refractory period.

Instead, we should ask for the probability of not spiking within the time window $\Delta t$. The probability of a zero count in a Poisson process is $\exp(-\lambda\cdot\Delta t)$, so the probability of emitting a spike is $p=1{-}\exp(-\lambda\cdot\Delta t)$. Equivalently, one may sample Poisson counts, but replace any multiple-spike events with single spikes. Note that this nonlinearity is similar, but not identical, to the sigmoidal nonlinearity that one would use for a Bernoulli GLM.
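A quick sketch of this sampling scheme (function names are ours); both constructions agree in distribution:

```python
import numpy as np

# Sampling sketch: within a bin of width dt, a refractory-Poisson unit
# spikes with probability p = 1 - exp(-lam*dt), the chance of a nonzero
# Poisson count. Clipping Poisson counts at one spike is equivalent.

def sample_spikes(lam, dt, n, rng):
    p = 1.0 - np.exp(-lam * dt)
    return (rng.random(n) < p).astype(float)

def sample_spikes_clipped(lam, dt, n, rng):
    return np.minimum(rng.poisson(lam * dt, size=n), 1).astype(float)

rng = np.random.default_rng(0)
lam, dt, n = 25.0, 0.02, 200_000
p = 1.0 - np.exp(-lam * dt)
a = sample_spikes(lam, dt, n, rng).mean()          # both empirical means
b = sample_spikes_clipped(lam, dt, n, rng).mean()  # should be close to p
```

Note that naive Poisson sampling would give a mean count of $\lambda\cdot\Delta t = 0.5$ per bin here, while the refractory-consistent probability is $1-e^{-0.5}\approx 0.39$.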

This modified sampling is necessary when treating approximations of the regular-spiking limit, since instantaneous conditional intensities diverge toward infinity. The problem is reminiscent of the issue of infinite populations in infinitesimal volumes encountered when taking limits in neural field theories, and is solved by explicitly considering activity within a window of defined duration $\Delta t$.

It's also important to account for how refractoriness changes the variance of this process. Ordinarily, samples of rate $\lambda$ and time-step $\Delta t$ can be thought of as Poisson with $\mu=\sigma^2=\lambda\cdot \Delta t$. However, the strong inhibition associated with the absolute refractoriness reduces this variance. The variance can instead be modeled as Bernoulli, with $\sigma^2 = p(1-p)$.

Figure 1: Sampling from a refractory-Poisson autoregressive GLM. (a) For a constant input $\lambda_0$, the mean rate of a refractory Poisson model with an absolute refractory period of $\tau=1$ behaves like a continuous-time version of a Bernoulli process. The output firing rate is a logistic sigmoid function of the log-input $\ln\lambda_0$. (b) The sampling and observation model for the refractory-Poisson should take into account both the continuous-time nature and the limited firing rate. The probability of observing a spike is given not by a Bernoulli or Poisson model, but rather $\Pr(y=1)=1-\exp[-\lambda\cdot\Delta t]$. (c) For low rates, the refractory-Poisson model still exhibits some randomness. Top: spiking output from twenty random samples of the refractory-Poisson model driven by input $\lambda_0=10$. Bottom: spiking output for strong drive, $\lambda_0=100$, showing more regular spiking.

Continuous approximations

In Rule and Sanguinetti (2018) we used a Langevin approximation to autoregressive point processes, which approximates the Poisson noise as Gaussian:

$$ y = dN = \lambda \cdot dt + \sqrt{\lambda} \cdot dW, $$

where $dN$ denotes the increment of the count process (the integral of $y(t)$ in time), and $dW$ denotes the increment of the standard Wiener process. Normally, one integrates the resulting stochastic differential equation using the Euler-Maruyama method:

$$ \begin{aligned} \Delta N &= \lambda \cdot \Delta t + \sqrt{\lambda\cdot\Delta t} \cdot \xi \\ \xi &\sim \mathcal N(0,1). \end{aligned} $$

However, this diffusive approximation is wrong in the case of the refractory Poisson process. The refractory period reduces variance, and spiking behaves more like a Bernoulli variable with $\sigma^2 = p(1-p)$, where $p = 1-\exp(-\lambda\cdot\Delta t)$. For any finite $\Delta t$, we instead get the integration update:

$$ \begin{aligned} \Delta N &= [1-e^{-\lambda\Delta t}] + \sqrt{p(1-p)} \cdot \xi \\&= [1-e^{-\lambda\Delta t}] + e^{-\lambda\Delta t}\sqrt{e^{\lambda\Delta t}-1} \cdot \xi \end{aligned} $$

This reduces to the original approximation when $\lambda\Delta t \to 0$ is small, but we cannot assume that in the case of the deterministic limit, since the instantaneous rate $\lambda$ can diverge. To get a process modeling regular spiking, we simultaneously take the limits $\lambda_0 \to \infty$ and $\Delta t \to 0$. This does not behave like an Itô stochastic differential equation, but the process is well-defined for all finite $\Delta t$, and can be numerically integrated.

(Compare also the Complex Chemical Langevin Equation (CCLE) of Schnoerr, Sanguinetti, and Grima, 2014.)
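For any finite $\Delta t$, the modified update above can be iterated directly. A minimal sketch at fixed rate $\lambda$ (ignoring history effects; names are illustrative), whose increments reproduce the Bernoulli-like mean and variance:

```python
import numpy as np

# Sketch of the finite-dt integration update
#   Delta N = (1 - exp(-lam*dt)) + sqrt(p*(1-p)) * xi,   xi ~ N(0,1),
# with p = 1 - exp(-lam*dt). This matches the Bernoulli-like mean and
# variance of the refractory process within one bin (fixed rate lam,
# no spiking-history effects).

def langevin_increment(lam, dt, rng, n=1):
    p = 1.0 - np.exp(-lam * dt)
    return p + np.sqrt(p * (1.0 - p)) * rng.standard_normal(n)

rng = np.random.default_rng(1)
lam, dt = 25.0, 0.02
p = 1.0 - np.exp(-lam * dt)
increments = langevin_increment(lam, dt, rng, n=200_000)
```

For small $\lambda\Delta t$, $p\approx\lambda\Delta t$ and $p(1-p)\approx\lambda\Delta t$, recovering the standard Euler-Maruyama update; for large $\lambda\Delta t$ the variance collapses, reflecting the approach to regular spiking.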

Moment equations

In "Moment equations for an autoregressive point process with an absolute refractory period" (February 5, 2019; online notes), we derived equations for how the mean and covariance evolve in an AR-PPGLM with absolute refractoriness. The moment equations are much simpler for a neuron with steady-state drive, and for which all spiking history effects are captured by an absolute refractory period. For fixed input $\lambda_0$ and in the absence of history effects ($\eta(\tau)=0$), the refractory Poisson model is linear. The time evolution of the history process $h(t,\tau) = y(t-\tau)$ can be written as:

\begin{equation} \begin{aligned} \partial_t h &= -\partial_\tau h + \delta_{\tau=0} y \\ y &\sim\operatorname{Poisson}(\lambda\cdot dt) \\ \lambda &= (1-\text Rh) \cdot \lambda_0 \end{aligned} \end{equation}

Because the interaction between the firing rate and the spiking history is linear, the evolution of the moments (mean $\mu$ and covariance $\Sigma$) of the process history can be written exactly, without invoking any approximations or moment closures:

\begin{equation} \begin{aligned} \mu_y &= (1-\text R \mu_h) \lambda_0 \cdot d t \\ \partial_t \mu &= -\partial_\tau \mu + \delta_{\tau=0} \mu_y \\ \partial_t \Sigma &= J \Sigma + \Sigma J^\top + \delta_{\tau=0}\mu_y\delta_{\tau=0}^\top \\ J &= -\partial_\tau - \delta_{\tau=0} \text R^\top \lambda_0 \cdot dt \end{aligned} \end{equation}
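These moment updates can be checked with a simple discretization: bin the history into $K = 1/\Delta t$ lags, so that the history update becomes a finite-dimensional linear map. The discretization below is our own sketch (the notes do not spell out a numerical scheme); its steady-state rate recovers the mean-field fixed point $\lambda_0/(1+\lambda_0)$ found earlier.

```python
import numpy as np

# Discrete-time sketch of the moment equations: discretize h(t,tau) into
# K = 1/dt lag bins. One time step of the history process is the linear map
#   h' = S h + e0 * y,   E[y | h] = (1 - 1^T h) * lam0 * dt,
# where S shifts every lag down by one bin. With A = S - lam0*dt * e0 1^T,
# the moments evolve as
#   mu'    = A mu + e0 * lam0 * dt
#   Sigma' = A Sigma A^T + e0 mu_y e0^T   (Poisson variance mu_y),
# with no moment closure required.

lam0, dt = 25.0, 0.02
K = int(round(1.0 / dt))                  # refractory period = 1 = K bins
S = np.diag(np.ones(K - 1), -1)           # shift operator on the history
e0 = np.zeros((K, 1)); e0[0] = 1.0
A = S - lam0 * dt * (e0 @ np.ones((1, K)))
b = (e0 * lam0 * dt).ravel()

# Iterate the moments forward in time (shows damped ringing at the
# firing period before settling)
mu = np.zeros(K)
Sigma = np.zeros((K, K))
for _ in range(1000):
    mu_y = (1.0 - mu.sum()) * lam0 * dt   # expected spikes this bin
    mu = A @ mu + b
    Sigma = A @ Sigma @ A.T + (e0 @ e0.T) * mu_y

# The steady-state mean solves (I - A) mu* = b; summing over the window
# of length 1 gives the firing rate, which matches lam0 / (1 + lam0).
mu_ss = np.linalg.solve(np.eye(K) - A, b)
rate_ss = mu_ss.sum()
```

The slow, ringing convergence of the iterated moments reflects the near-periodic spiking of the strongly driven refractory process, which is visible as damped oscillations in Figure 2.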

Figure 2 illustrates that these moment equations match the true process, and also checks the accuracy of the continuous approximation derived earlier.

Figure 2: Solution for the mean and covariance of the refractory Poisson model. In this simulation, the drive is $\lambda_0=25$, the absolute refractory period is $\tau=1$, and the time step is $\Delta t=0.02$. (a) Single-time marginal means and variances. Black trace and shaded region represent the theoretical mean and variance, computed by integrating the moment equations. Green traces reflect $\pm 1\sigma$ computed by sampling the refractory Poisson model. Teal traces reflect $\pm 1\sigma$ sampled from a continuous approximation, with a Bernoulli-like, rather than Poisson, noise model. (b) Similar to (a), but in this case means and variances have been averaged over one refractory period. The refractory period reduces the variability of the process, so the average fluctuations over a short time period are smaller than one might expect given the single-time marginal variances of the process.

Overall

These notes examined one way to model an absolute refractory period, and its role in reducing neuronal variability, in GLM point-process models. We showed that sampling from the process must be modified to account for the influence of the refractory period, which enforces a sort of "law of conservation of spikes" in time. The Langevin approximation to the refractory Poisson model is also not straightforward, but for any finite $\Delta t$ one can derive an integration scheme that approximates the original process. The resulting process resembles a continuous-time Bernoulli process, but with slightly different nonlinearities. In the case of steady-state input, the moment equations for the refractory Poisson model are linear, and agree with Monte Carlo simulations. It might be interesting to see whether other types of highly regular spiking observed in neurons, such as integrators or resonators with variable frequencies, can be defined in a consistent way in terms of AR-PPGLMs.

