Monday, January 1, 2018

Autoregressive point-processes as latent state-space models

In 2016 I started a postdoc with the labs of Guido Sanguinetti and Matthias H. Hennig—and this is the first paper to result!

[get PDF]

A central challenge in neuroscience is understanding how the activity of single cells combines to create the collective dynamics that underlie perception, cognition, and behavior. One way to study this is to build detailed models and "coarse-grain" them to see which details are important.

Our paper develops ways to relate detailed point-process models to coarse-grained quantities, like average neuronal firing rates and correlations. Point-process models are used for statistical modeling of spike-train data. They can reveal effective neural dynamics by capturing how neurons in a population inhibit or excite each other and themselves.
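To make this concrete, here is a minimal discrete-time sketch of a log-linear autoregressive PPGLM. This is a toy illustration, not the code from the paper; the function and parameter names are my own. The log-intensity is a baseline plus linear filters applied to the stimulus history and to the process's own spiking history, and spikes are drawn bin-by-bin from the resulting intensity.

    import numpy as np

    def simulate_ppglm(x, b, stim_filter, hist_filter, dt=1e-3, seed=0):
        """Simulate a discrete-time log-linear autoregressive PPGLM.

        The log-intensity is a baseline b plus linear filters applied to
        the recent stimulus x and to the process's own spike history y.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        stim_filter = np.asarray(stim_filter, float)
        hist_filter = np.asarray(hist_filter, float)
        T = len(x)
        y = np.zeros(T)        # spike counts in each time bin
        lam = np.zeros(T)      # conditional intensity (spikes/s)
        for t in range(T):
            ks = min(t, len(stim_filter))    # available stimulus history
            kh = min(t, len(hist_filter))    # available spiking history
            drive = (b
                     + stim_filter[:ks] @ x[t-ks:t][::-1]
                     + hist_filter[:kh] @ y[t-kh:t][::-1])
            lam[t] = np.exp(drive)             # instantaneous intensity
            y[t] = rng.poisson(lam[t] * dt)    # point-process observation
        return y, lam

    # Example: noise stimulus, brief excitatory stimulus filter,
    # inhibitory (refractory-like) self-history filter:
    x = np.random.default_rng(1).normal(size=2000)
    stim = 0.5 * np.exp(-np.arange(20) / 5.0)
    hist = -2.0 * np.exp(-np.arange(30) / 10.0)
    y, lam = simulate_ppglm(x, b=np.log(10.0), stim_filter=stim, hist_filter=hist)

With an inhibitory self-history filter this produces refractory-like dynamics; a strongly excitatory one can make the process run away, which is part of why a careful treatment of fluctuations matters.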

Preview of figures: 

Figure 2: Moment closure of autoregressive PPGLMs combines aspects of three modeling approaches:

(A) Log-linear autoregressive PPGLM framework (e.g., Weber & Pillow, 2017). Dependence on the history of both extrinsic covariates x(t) and the process itself y(t) is mediated by linear filters, which are combined to predict the instantaneous log-intensity of the process. (B) Latent state-space models learn a hidden dynamical system, which can be driven by both extrinsic covariates and spiking outputs. Such models are often fit using expectation-maximization, and the learned dynamics are descriptive. (C) Moment closure recasts autoregressive PPGLMs as state-space models. History dependence of the process is subsumed into the state-space dynamics, but the latent states retain a physical interpretation as moments of the process history (dashed arrow). (D) Compare to neural mass and neural field models, which define dynamics on a state space with a physical interpretation as moments of neural population activity.

Figure 3, excerpt of sub-figures E–G: Moment closure captures slow timescales in the mean and fast timescales in the variance:
 
(E) Example stimulus (black) and Izhikevich voltage response (red). (F) Bursts of spiking are captured by increases in variance in the autoregressive PPGLM (mean: black, 1σ: shaded). Spikes sampled (bottom) from the conditionally OU Langevin approximation (yellow) retain the phasic bursting character. (G) The state-space model derived from moment closure on the Langevin approximation retains essential characteristics of the original system.

Overview

We begin by asking what would happen if we were interested only in coarse measures of average activity, like mean firing rates and their correlations. These coarse-grained statistics are moments. Starting from the mathematical definition of a point-process model, we derive a coarse-grained description of how these moments evolve in time. We use a few tricks to do this:
  • Neurons average many inputs over time. This averaging means that the central limit theorem applies, and we can often ignore the discrete, binary nature of individual spikes. Consequently, the coarse-grained moments capture the dynamics of the point-process model. 
  • Dynamical point-process models often need the history of spiking activity to help predict future activity. Our approach replaces this history with a fading memory of the statistical moments of population activity. 
  • The "moment closure" approach is inspired by David Schnoerr's work, and entails (1) summarizing spiking activity in terms of mean rates and correlations (2) assuming that the the distribution of activity has a simple form, to make the math work out nicely.
  • This allows one to convert a point-process model into a model that describes how means and correlations evolve in time.
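To sketch how this works in the simplest case, consider a single neuron with one exponential history filter. (The notation here, with baseline β, self-coupling weight w, and time constant τ, is illustrative, not the paper's.) Write the intensity as λ(t) = exp(β + h(t)), where the filtered history h decays with time constant τ and jumps by w at each spike. The Langevin (diffusion) approximation replaces the spiking increments with λ dt + √λ dW, so

    $$ dh = \left(-\frac{h}{\tau} + w\lambda\right)dt + w\sqrt{\lambda}\,dW, \qquad \lambda = e^{\beta + h}. $$

Assuming h is Gaussian with mean μ and variance σ² (the closure step), the expected intensity is ⟨λ⟩ = exp(β + μ + σ²/2), and the moments evolve as

    $$ \dot\mu = -\frac{\mu}{\tau} + w\langle\lambda\rangle, \qquad \dot{\sigma^2} = -\frac{2\sigma^2}{\tau} + 2w\sigma^2\langle\lambda\rangle + w^2\langle\lambda\rangle, $$

where Cov(h, λ) = σ²⟨λ⟩ follows from Stein's lemma for Gaussian h.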
Based on David Schnoerr's Cox-process interpretation, we view neuronal spikes as random events driven by the coarse-grained model (the moment-based description is a second-order neural field model). The coarse-grained model is therefore a latent state-space model: it describes neuronal spiking as stochastic observations of a simpler underlying dynamical process. 
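As a minimal illustration of this state-space view, continuing the toy single-neuron example above (again with illustrative names, not the paper's code), one can integrate the closed moment equations directly and then sample spikes as Poisson events driven by the expected rate:

    import numpy as np

    def integrate_moments(mu0, s0, beta, w, tau, T=1.0, dt=1e-3):
        """Euler-integrate the Gaussian moment-closure equations above.

        mu, s are the mean and variance of the filtered spike history h;
        the expected intensity under the closure is exp(beta + mu + s/2).
        """
        n = int(T / dt)
        mu, s = np.zeros(n), np.zeros(n)
        mu[0], s[0] = mu0, s0
        for t in range(n - 1):
            rate = np.exp(beta + mu[t] + 0.5 * s[t])   # <lambda> under closure
            mu[t+1] = mu[t] + dt * (-mu[t]/tau + w*rate)
            s[t+1]  = s[t]  + dt * (-2*s[t]/tau + 2*w*s[t]*rate + w*w*rate)
        return mu, s

    # Self-inhibiting unit (w < 0) relaxing toward steady state:
    mu, s = integrate_moments(mu0=1.0, s0=0.0, beta=np.log(20.0), w=-0.5, tau=0.05)

    # Cox-process view: spikes are conditionally Poisson given the latent moments.
    rate = np.exp(np.log(20.0) + mu + 0.5 * s)
    spikes = np.random.default_rng(0).poisson(rate * 1e-3)

Here the latent state is (μ, σ²) rather than the raw spiking history, which is exactly the sense in which the autoregressive model becomes a state-space model.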

Overall

This paper outlines the theoretical foundations. Extending it to population models is straightforward, but methods for reducing the moment-based descriptions to even simpler coarse-grained representations remain to be explored. 

There are caveats: coarse-graining is one-way. Many microscopic models lead to similar coarse-grained models, so one cannot infer the detailed model from its coarse-grained representation. On the other hand, this degeneracy could help us study which features of the more detailed model are important, and which features are "sloppy".

The Python code used in this manuscript is available on GitHub.

Many thanks to Guido Sanguinetti, as well as David Schnoerr, Matthias Hennig, and Wilson Truccolo, for invaluable comments on this manuscript. This paper can be cited as:



