Monday, January 1, 2018

Autoregressive point-processes as latent state-space models

In 2016 I started a postdoc with the labs of Guido Sanguinetti and Matthias H. Hennig—and this is the first paper to result!

[get PDF]

A central challenge in neuroscience is understanding how the activity of single cells combines to create the collective dynamics that underlie perception, cognition, and behavior. One way to study this is to build detailed models and "coarse-grain" them to see which details are important.

Our paper develops ways to relate detailed point-process models to coarse-grained quantities, like average neuronal firing rates and correlations. Point-process models are used for statistical modelling of spike-train data. They can reveal effective neural dynamics by capturing how neurons in a population inhibit or excite each other and themselves.
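To make that concrete, here is a minimal sketch (Python/NumPy; the baseline rate, filter shape, and bin size are made up for illustration, not taken from the paper) of how a log-linear autoregressive point-process model generates spikes: the log intensity in each time bin is a baseline plus a linear filter applied to the recent spike history, and spike counts are drawn from a Poisson distribution.

import numpy as np

# Minimal sketch of a log-linear autoregressive PPGLM (illustrative only).
# The instantaneous log intensity is a baseline plus a linear filter
# applied to the spike history of the process itself.

rng = np.random.default_rng(0)

dt       = 1e-3                      # time-bin width (s)
T        = 5000                      # number of time bins
baseline = np.log(10.0)              # baseline log rate (10 Hz)

# History filter: brief self-excitation followed by slower self-inhibition
# (shapes chosen for illustration, not fitted to data).
lags = np.arange(1, 101)             # 1-100 ms of history
h    = 0.8 * np.exp(-lags / 5.0) - 0.4 * np.exp(-lags / 40.0)

spikes = np.zeros(T)
for t in range(T):
    # Dot product of the history filter with recent spiking (most recent first)
    past  = spikes[max(0, t - len(h)):t][::-1]
    drive = h[:len(past)] @ past
    rate  = np.exp(baseline + drive)          # conditional intensity (Hz)
    spikes[t] = rng.poisson(rate * dt)        # Poisson count in this bin

print("mean firing rate: %.1f Hz" % (spikes.sum() / (T * dt)))

Whether a unit excites or inhibits itself (or, in the multivariate case, its neighbors) is encoded in the sign and shape of these history filters; in practice the filters are fit to recorded spike trains by maximum likelihood.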

Preview of figures: 

Figure 2: Moment closure of autoregressive PPGLMs combines aspects of three modeling approaches:

(A) Log-linear autoregressive PPGLM framework (e.g., Weber & Pillow, 2017). Dependence on the history of both extrinsic covariates x(t) and the process itself y(t) is mediated by linear filters, which are combined to predict the instantaneous log intensity of the process. (B) Latent state-space models learn a hidden dynamical system, which can be driven by both extrinsic covariates and spiking outputs. Such models are often fit using expectation-maximization, and the learned dynamics are descriptive. (C) Moment closure recasts autoregressive PPGLMs as state-space models. History dependence of the process is subsumed into the state-space dynamics, but the latent states retain a physical interpretation as moments of the process history (dashed arrow). (D) Compare with neural mass and neural field models, which define dynamics on a state space with a physical interpretation as moments of neural population activity.
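To give a flavor of what "moments as latent states" means in panel (C), here is a toy sketch of second-order (Gaussian) moment closure for a single self-coupled unit with one exponential history trace z and intensity lambda = exp(b + w*z). This is a deliberately simplified illustration, not the derivation in the paper: instead of simulating spikes, we integrate ordinary differential equations for the mean and variance of z, closing the hierarchy with a Gaussian assumption (so the expected rate picks up the log-normal correction, and Cov(z, lambda) is approximated as w * var * rate via Stein's lemma).

import numpy as np

# Toy sketch of the moment-closure idea (illustrative; a single exponential
# history trace z with intensity lambda = exp(b + w*z), closed at second
# order under a Gaussian assumption -- not the paper's full derivation).

dt, T = 1e-3, 2.0             # time step (s), duration (s)
tau   = 0.02                  # history-trace time constant (s)
b, w  = np.log(5.0), -0.5     # baseline log rate, self-coupling weight

mu, var = 0.0, 0.0            # latent "state": mean and variance of z
for step in range(int(T / dt)):
    # Expected rate under the Gaussian (log-normal) closure
    rate = np.exp(b + w * mu + 0.5 * w**2 * var)
    # Moment dynamics: decay of the trace plus spike-driven jumps;
    # Cov(z, rate) ~= w * var * rate by Stein's lemma under the closure
    dmu  = -mu / tau + rate
    dvar = -2.0 * var / tau + 2.0 * w * var * rate + rate
    mu  += dt * dmu
    var += dt * dvar

print("steady-state mean rate ~ %.2f Hz" % np.exp(b + w * mu + 0.5 * w**2 * var))

The point of the exercise: the pair (mean, variance) plays the role of the latent state, and its deterministic dynamics stand in for the stochastic history dependence of the original point process, which is exactly the sense in which the latent states keep a physical interpretation.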