Friday, November 1, 2019

Causes and consequences of representational drift

Drs. Harvey, O'Leary, and I have just published a review in Current Opinion in Neurobiology titled "Causes and consequences of representational drift" [PDF].

We explore recent results from the work of Alon Rubin and Liron Sheintuch in the Ziv lab and Laura Driscoll in the Harvey lab. Their work has shown that neural representations reconfigure even for fixed, learned tasks. 

We discuss ways that the brain might support this reconfiguration without forgetting what it has already learned. There might be a subset of stable neurons that maintain memories. Alternatively, neural representations may be highly redundant, supporting a stable readout even as single cells change.

Finally, we conjecture that redundancy allows different brain areas to "error-correct" each other, allowing the brain to keep track of the shifting meaning of single neurons in plastic representations.

For a brief preview, here are the pre-publication versions of figures 2 and 3:


Figure 2: Internal representations have unconstrained degrees of freedom that allow drift. (a) Nonlinear dimensionality reduction of population activity recovers the low-dimensional structure of the T-maze in Driscoll et al. (2017). Each point represents a single time-point of population activity, and is colored according to location in the maze. (b) Point clouds illustrate low-dimensional projections of neural activity as in (a). Although unsupervised dimensionality-reduction methods can recover the task structure on each day, the way in which this structure is encoded in the population can change over days to weeks. (c) Left: Neural populations can encode information in relative firing rates and correlations, illustrated here as a sensory variable encoded in the sum of two neural signals ($y_1 + y_2$). Points represent neural activity during repeated presentations of the same stimulus. Variability orthogonal to this coding axis does not disrupt coding, but could appear as drift in experiments if it occurred on slow timescales. Right: Such distributed codes may be hard to read out from recorded subpopulations (e.g. $y_1$ or $y_2$ alone; black), especially if they entail correlations between brain areas. (d) Left: External covariates may exhibit context-dependent relationships. Each point here reflects a neural population state at a given time-point. The relationship between directions $x_1$ and $x_2$ changes depending on context (cyan versus red). Middle: Internally, this can be represented as a mixture model, in which different subspaces are allocated to encode each context, and the representations are linearly separable (gray plane). Right: The expanded representation contains two orthogonal subspaces that each encode a separate, context-dependent relationship. This dimensionality expansion increases the degrees of freedom in internal representations, thereby increasing opportunities for drift.

Wednesday, October 2, 2019

Note: Training stochastic neural networks

Feed-forward neural networks consist of a series of layers. In each layer, outputs from past layers are combined linearly, then passed through some nonlinear transformation. As long as all computations are differentiable, the entire network is differentiable as well. This allows artificial neural networks to be trained using gradient-based optimization techniques (backpropagation).
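To make this concrete, here is a minimal sketch of a two-layer network trained by backpropagation on a toy regression problem (the layer sizes, learning rate, and data are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes are arbitrary illustrative choices)
X = rng.normal(size=(200, 3))                 # 200 samples, 3 inputs
y = np.sin(X.sum(axis=1, keepdims=True))      # target to fit

# One hidden layer (tanh nonlinearity), one linear output layer
W1 = rng.normal(scale=0.1, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: linear combination, then nonlinearity
    h = np.tanh(X @ W1 + b1)
    yhat = h @ W2 + b2
    err = (yhat - y) / len(X)                 # gradient of 0.5 * mean squared error

    # Backward pass: chain rule through each differentiable stage
    gW2 = h.T @ err;  gb2 = err.sum(axis=0)
    gh = err @ W2.T
    ga = gh * (1 - h**2)                      # derivative of tanh
    gW1 = X.T @ ga;   gb1 = ga.sum(axis=0)

    # Gradient descent step
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```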

Methods for training stochastic networks via backpropagation are less well developed, but solutions exist and are the subject of ongoing research (cf. Rezende et al. 2014 and the numerous papers that cite it). In the context of models of neural computation, Echeveste et al. (2019) trained stochastic neural networks with rectified-polynomial nonlinearities.
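One widely used solution from that literature is the "reparameterization trick" of Rezende et al. (2014): a Gaussian sample is rewritten as a deterministic, differentiable function of its parameters plus independent noise, so gradients can pass through the sampling step. A minimal sketch (the layer shapes and names here are my own, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_layer(x, W_mu, W_logvar):
    """Sample z ~ N(mu(x), sigma(x)^2) via the reparameterization trick."""
    mu = x @ W_mu                         # mean: deterministic function of input
    sigma = np.exp(0.5 * (x @ W_logvar))  # std: deterministic function of input
    eps = rng.normal(size=mu.shape)       # noise drawn independently of parameters
    z = mu + sigma * eps                  # differentiable in W_mu and W_logvar
    # Because dz/dmu = 1 and dz/dsigma = eps, gradients of any downstream
    # loss can flow back through the sample to the weight matrices.
    return z
```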

Monday, September 16, 2019

Note: Moment approximations for Bernoulli neurons with sigmoidal nonlinearity

Consider a stochastic, binary, linear-nonlinear unit, with spiking output $s$, synaptic inputs $\mathbf x$, weights $\mathbf w$, and bias (threshold) $b$:

\begin{equation}\begin{aligned} s &\sim\operatorname{Bernoulli}[p = \Phi(a)] \\ a &= \mathbf w^\top \mathbf x + b, \end{aligned}\end{equation}

where $\Phi(\cdot)$ is the cumulative distribution function of a standard normal distribution. Note that $\Phi(\cdot)$ can be rescaled to closely approximate the logistic sigmoid if desired. Assuming the mean $\mu$ and covariance $\Sigma$ of $\mathbf x$ are known, can we obtain the mean and covariance of $s$?
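One standard route to the mean (a sketch, assuming $\mathbf x$ is well-approximated as Gaussian; the notes may proceed differently): the activation $a$ is then itself Gaussian, and the probit-Gaussian integral has a closed form:

\begin{equation}\begin{aligned} a &\sim \mathcal N(\mu_a, \sigma_a^2), \quad \mu_a = \mathbf w^\top \mu + b, \quad \sigma_a^2 = \mathbf w^\top \Sigma \mathbf w, \\ \langle s \rangle &= \mathbb E\left[\Phi(a)\right] = \Phi\!\left( \frac{\mu_a}{\sqrt{1 + \sigma_a^2}} \right). \end{aligned}\end{equation}

Since $s \in \{0,1\}$ implies $s^2 = s$, the variance follows as $\langle s \rangle (1 - \langle s \rangle)$. Covariances between pairs of such units involve the bivariate normal CDF and generally lack an elementary closed form.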

Thursday, August 29, 2019

Note: Noise-induced gain modulation (stochastic gating)

In nonlinear stochastic networks, noise can interact with nonlinearities to alter the effective transfer function of neurons. This figure outlines a hypothetical application of this phenomenon. 
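As a toy illustration of the phenomenon (my own minimal example, not the setup in the figure): averaging a hard-threshold unit over additive Gaussian noise yields a smooth sigmoidal effective transfer function whose gain is set by the noise level, so a signal that modulates the noise effectively modulates the gain:

```python
import numpy as np

def effective_transfer(u, sigma, n_samples=100_000, seed=0):
    """Noise-averaged output of a hard-threshold unit, H(u + noise).

    Averaging the step function over zero-mean Gaussian noise gives
    E[H(u + xi)] = Phi(u / sigma): a sigmoid whose slope (gain) is
    inversely related to the noise level sigma.
    """
    rng = np.random.default_rng(seed)
    xi = rng.normal(scale=sigma, size=n_samples)
    return np.mean(np.heaviside(u + xi, 0.5))

# More noise -> shallower effective transfer function (lower gain)
for sigma in (0.2, 1.0, 3.0):
    print(sigma, [round(effective_transfer(u, sigma), 2) for u in (-1.0, 0.0, 1.0)])
```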

Wednesday, June 26, 2019

Constrained plasticity can compensate for ongoing drift in neural populations

I've started a new postdoc, working on a collaboration between the O'Leary, Ziv, and Harvey labs, on the Human Frontier Science Program grant, "Building a theory of shifting representations in the mammalian brain".

To start, I've been working with Adrianna Loback to try to make sense of a puzzling result from Driscoll et al. (2017): the neural code for sensorimotor variables in parietal cortex is unstable, changing dramatically even for habitual tasks in which no new learning takes place.

We think the brain might be using a distributed population code. Because there are so many possible ways to read out a distributed and redundant population code, there could be a stable representation at the population level despite the apparent instability of single neurons.

I'll be presenting our work to date as a poster at the UK Neural Computation conference in Nottingham, July 1st-3rd. 

[download poster PDF] 


Abstract:

Recent experiments reveal that neural populations underlying behaviour reorganize their tunings over days to weeks, even for routine tasks. How can we reconcile stable behavioural performance with ongoing reconfiguration in the underlying neural populations? We examine drift in the population encoding of learned behaviour in posterior parietal cortex of mice navigating a virtual-reality maze environment. Over five to seven days, we find a subspace of population activity that can partially decode behaviour despite shifts in single-neuron tunings. Additionally, directions of trial-to-trial variability on a single day predict the direction of drift observed on the following day. We conclude that day-to-day drift is concentrated in a subspace that could facilitate stable decoding if trial-to-trial variability lies in an encoding-null space. However, a residual component of drift remains aligned with the task-coding subspace, eventually disrupting a fixed decoder on longer timescales. We illustrate that this slower drift could be compensated in a biologically plausible way, with minimal synaptic weight changes and using a weak error signal. We conjecture that behavioural stability is achieved by active processes that constrain plasticity and drift to directions that preserve decoding, as well as adaptation of brain regions to ongoing changes in the neural code.
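To make the subspace picture concrete, here is an illustrative sketch on synthetic data (not the poster's actual analysis pipeline) of asking how much day-to-day drift lies inside a task-coding subspace:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for population activity (trials x neurons) on two days
day1 = rng.normal(size=(300, 50))
day2 = day1 + 0.3 * rng.normal(size=(300, 50))    # "drifted" activity

# "Task-coding subspace": top principal components of day-1 activity
day1_centered = day1 - day1.mean(axis=0)
_, _, Vt = np.linalg.svd(day1_centered, full_matrices=False)
coding = Vt[:10].T                                # 50 x 10 basis (10 PCs, arbitrary)

# Project the mean population drift onto the coding subspace
drift = day2.mean(axis=0) - day1.mean(axis=0)
within = coding @ (coding.T @ drift)
frac_aligned = np.linalg.norm(within) ** 2 / np.linalg.norm(drift) ** 2
print(f"fraction of drift energy within the coding subspace: {frac_aligned:.2f}")
```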

This poster can be cited as: 

Rule, M. E., Loback, A. R., Raman, D. V., Harvey, C. D., O'Leary, T. S. (2019) Constrained plasticity can compensate for ongoing drift in neural populations. [Poster] UK Neural Computation 2019, July 1st, Nottingham, UK



Thursday, June 20, 2019

Exploring a regular-spiking limit of autoregressive point-process models with an absolute refractory period

These notes explore a deterministic limit of the Autoregressive Point-Process Generalized Linear Model (AR-PPGLM) of a spiking neuron. We consider a Poisson point process that is either strongly driven or strongly suppressed, and take a limit in which spiking output becomes deterministic (and instantaneous rates diverge). This deterministic limit is of interest in clarifying how AR-PPGLM models relate to deterministic models of spiking.
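To sketch the intuition (my paraphrase of the limiting argument, not the notes' exact construction): in a small bin of width $\Delta$, a Poisson process with conditional intensity $\lambda(t)$ spikes with probability $1 - e^{-\lambda(t)\Delta}$. Introducing a gain $\beta$ in an exponential nonlinearity and letting it diverge,

\begin{equation} \lambda(t) = e^{\beta a(t)}, \qquad P\left(\text{spike in } [t, t+\Delta)\right) = 1 - e^{-\lambda(t)\Delta} \xrightarrow{\;\beta\to\infty\;} \begin{cases} 1, & a(t) > 0 \\ 0, & a(t) < 0, \end{cases} \end{equation}

so the unit behaves as a deterministic threshold element wherever the drive is bounded away from zero.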

[get as PDF]

Friday, May 17, 2019

Moment-closure approaches to statistical mechanics and inference in models of neural dynamics

At the upcoming SAND meeting in Pittsburgh, I'll be presenting our recent work on using moment closures to combine theoretical models with statistical inference. This work has already been published, but this poster provides a quick summary. 

In my postdoc at Edinburgh, I worked on methods to combine neural field modelling and statistical inference. Neural field models capture how microscopic actions of single neurons combine to create emergent collective dynamics. Statistical modelling of spiking data commonly uses Poisson point-process models. These projects combine the two approaches.

In "autoregressive point-processes as latent state-space models" [PDF], we convert a popular statistical model for spike-train data into a neural field model. This neural field model is a bit unusual: it extends over time rather than space, and describes correlations as well as mean firing rates. This may lead to new tricks for inference and coarse-graining on these types of models. 

In "neural field models for latent state inference", we use a microscopic model of retinal waves to specify a second-order neural field model that doubles as a latent state-space model for spiking observations. This advances methods for developing data-driven neural field models.

[download poster PDF]

Sunday, February 10, 2019

Neural field models for latent state inference

The final paper from my Edinburgh postdoc in the Sanguinetti and Hennig labs (perhaps; we shall see).

[get PDF]

We combined neural field modelling with point-process latent state inference. Neural field models capture collective population activity like oscillations and spatiotemporal waves. They make the simplifying assumption that neural activity can be summarized by the average firing rate in a region.

High-density electrode array recordings can now record developmental retinal waves in detail. We derived a neural field model for these waves from the microscopic model proposed by Hennig et al. This model posits that retinal waves are supported by transitions among quiescent, active, and refractory states.

Fig 3. Spatial 3-state neural-field model exhibits self-organized multi-scale wave phenomena. Simulated example states at selected time-points on the [0,1]² unit square, using a 20×20 grid with effective population density of $\rho{=}50$ cells per unit area, and rate parameters $\sigma{=}0.075$, $\rho_a {=} 0.4$, $\rho_r {=} 3.2 \times 10^{−3}$, $\rho_e {=} 0.028$, and $\rho_q {=} 0.25$ (Methods: Sampling from the model). As, for instance, in neonatal retinal waves, spontaneous excitation of quiescent cells (blue) leads to propagating waves of activity (red), which establish localized patches in which cells are refractory (green) to subsequent wave propagation. Over time, this leads to diverse patterns of waves at a range of spatial scales.
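For intuition about the underlying microscopic dynamics, here is a minimal three-state lattice simulation in the same spirit (the update rule and parameter values are my own toy choices, not the paper's): quiescent cells activate spontaneously or are recruited by active neighbours, active cells become refractory, and refractory cells slowly recover.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, A, R = 0, 1, 2                       # quiescent, active, refractory
grid = np.zeros((100, 100), dtype=int)  # start with all cells quiescent

# Per-step transition probabilities (toy values, not the paper's parameters)
p_spont, p_drive, p_rest, p_recover = 1e-4, 0.6, 0.5, 0.02

for t in range(500):
    # Count active 4-neighbours (with periodic boundaries)
    active = (grid == A).astype(float)
    nbrs = sum(np.roll(active, s, ax) for s in (-1, 1) for ax in (0, 1))

    u = rng.random(grid.shape)
    # Quiescent cells fire spontaneously or are recruited by active neighbours
    excite = (grid == Q) & ((u < p_spont) | (u < 1 - (1 - p_drive) ** nbrs))
    rest = (grid == A) & (u < p_rest)         # active cells become refractory
    recover = (grid == R) & (u < p_recover)   # refractory cells slowly recover

    grid[excite], grid[rest], grid[recover] = A, R, Q
```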

Tuesday, February 5, 2019

Moment equations for an autoregressive point process with an absolute refractory period

These notes extend the autoregressive point-process moment closures derived in Rule and Sanguinetti (2018) to models that include an absolute refractory period via a gating term that sets the rate to zero after a spike.
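As a minimal sketch of the kind of model treated in these notes (a discrete-time simulation with illustrative parameters of my own choosing): a conditionally Poisson neuron whose instantaneous rate is multiplied by a gate that is zero for a fixed window after each spike:

```python
import numpy as np

rng = np.random.default_rng(4)

dt, T = 1e-3, 5.0                        # 1 ms bins, 5 s of simulated time
n_bins = int(T / dt)
base_rate = 40.0                         # Hz; illustrative constant drive
refrac_bins = int(3e-3 / dt)             # 3 ms absolute refractory period

spikes = np.zeros(n_bins, dtype=int)
last_spike = -refrac_bins                # bin index of the most recent spike
for t in range(n_bins):
    gate = 0.0 if (t - last_spike) < refrac_bins else 1.0  # hard refractory gate
    lam = base_rate * gate               # gated conditional intensity
    if rng.random() < 1.0 - np.exp(-lam * dt):             # P(spike in this bin)
        spikes[t] = 1
        last_spike = t

print("mean rate:", spikes.sum() / T, "Hz")
```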

[get notes as PDF]

Tuesday, January 8, 2019

Gaussian moment-closures for autoregressive GLM population models

[get notes as PDF]

In Rule and Sanguinetti (2018), "Autoregressive Point Processes as Latent State-Space Models" (more at the blog post), we developed moment-closure approximations for the time-evolution of the moments of Autoregressive Point-Process Generalized Linear Models (AR-PPGLMs) of neural activity. These notes generalize those approximations to AR-PPGLM models of neural ensembles. For networks of linear-nonlinear "neurons", the derivations are very similar to the single-neuron case presented in the paper.
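To give a flavour of the key identity (a standard Gaussian closure, stated here from memory rather than quoted from the notes): if the pre-nonlinearity activation is approximated as Gaussian and the rate nonlinearity is exponential, the expected rate has a closed form in the first two moments,

\begin{equation} a \sim \mathcal N(\mu_a, \sigma_a^2), \qquad \lambda = e^{a} \quad\Rightarrow\quad \langle \lambda \rangle = \exp\!\left( \mu_a + \tfrac{1}{2}\sigma_a^2 \right), \end{equation}

so variability in the inputs raises the mean firing rate, and the closure propagates the mean and covariance of the population state forward in time using identities of this kind.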