Friday, November 1, 2019

Causes and consequences of representational drift

Drs. Harvey, O'Leary, and I have just published a review in Current Opinion in Neurobiology titled "Causes and consequences of representational drift" [PDF].

We explore recent results from Alon Rubin and Liron Sheintuch in the Ziv lab, and from Laura Driscoll in the Harvey lab. Their work shows that neural representations reconfigure even for fixed, learned tasks.

We discuss ways that the brain might support this reconfiguration without forgetting what it has already learned. One possibility is that a subset of stable neurons maintains memories. Alternatively, neural representations may be highly redundant, supporting a stable readout even as single cells appear to change.
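To make the redundancy idea concrete, here is a minimal toy sketch (mine, not from the review): a fixed linear readout of the population stays constant while individual neurons drift, because the drift is confined to directions orthogonal to the readout. All variable names and parameter values below are illustrative assumptions.

```python
# Toy illustration: a stable linear readout despite single-cell drift.
# Drift is projected into the null space of the readout weights, so the
# decoded value never changes even though individual rates do.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_days = 50, 10

w = rng.normal(size=n_neurons)      # fixed downstream readout weights
w /= np.linalg.norm(w)

stimulus = 1.0                      # value the population encodes
x = stimulus * w                    # initial population pattern: w @ x == stimulus

for day in range(n_days):
    # random day-to-day drift, with the component along the coding axis removed
    drift = rng.normal(scale=0.2, size=n_neurons)
    drift -= w * (w @ drift)
    x = x + drift
    print(f"day {day}: readout = {w @ x:.3f}, "
          f"mean |single-cell change| = {np.mean(np.abs(drift)):.3f}")
```

The readout stays at 1.0 on every "day" even though each neuron's rate wanders, which is the geometric picture behind panel (c) of Figure 2 below.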

Finally, we conjecture that redundancy allows different brain areas to "error-correct" each other, letting the brain keep track of the shifting meaning of single neurons in plastic representations.

For a brief preview, here are the pre-publication versions of figures 2 and 3:

Figure 2: Internal representations have unconstrained degrees of freedom that allow drift. (a) Nonlinear dimensionality reduction of population activity recovers the low-dimensional structure of the T-maze in Driscoll et al. (2017). Each point represents a single time-point of population activity, and is colored according to location in the maze. (b) Point clouds illustrate low-dimensional projections of neural activity as in (a). Although unsupervised dimensionality-reduction methods can recover the task structure on each day, the way in which this structure is encoded in the population can change over days to weeks. (c) Left: Neural populations can encode information in relative firing rates and correlations, illustrated here as a sensory variable encoded in the sum of two neural signals ($y_1 + y_2$). Points represent neural activity during repeated presentations of the same stimulus. Variability orthogonal to this coding axis does not disrupt coding, but could appear as drift in experiments if it occurred on slow timescales. Right: Such distributed codes may be hard to read out from recorded subpopulations (e.g. $y_1$ or $y_2$ alone; black), especially if they entail correlations between brain areas. (d) Left: External covariates may exhibit context-dependent relationships. Each point here reflects a neural population state at a given time-point. The relationship between directions $x_1$ and $x_2$ changes depending on context (cyan versus red). Middle: Internally, this can be represented as a mixture model, in which different subspaces are allocated to encode each context, and the representations are linearly separable (gray plane). Right: The expanded representation contains two orthogonal subspaces that each encode a separate, context-dependent relationship. This dimensionality expansion increases the degrees of freedom in internal representations, thereby increasing opportunities for drift.
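As a rough illustration of the idea in panel (a), the toy simulation below generates population activity from position-tuned cells on a linear track and recovers the underlying one-dimensional structure with an off-the-shelf nonlinear dimensionality-reduction method. Isomap is used here purely as a stand-in; this is not the analysis pipeline of Driscoll et al. (2017), and the tuning model and parameters are assumptions for the sketch.

```python
# Toy illustration: population activity from position-tuned cells lies on a
# low-dimensional manifold that unsupervised dimensionality reduction recovers.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
n_cells, n_timepoints = 100, 500

position = np.linspace(0.0, 1.0, n_timepoints)     # linearized maze position
centers = rng.uniform(0.0, 1.0, size=n_cells)      # place-field centers
width = 0.1

# population activity: Gaussian place fields plus a little noise
activity = np.exp(-((position[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
activity += 0.05 * rng.normal(size=activity.shape)

embedding = Isomap(n_components=2, n_neighbors=15).fit_transform(activity)

# the leading embedding dimension should track maze position
corr = np.corrcoef(embedding[:, 0], position)[0, 1]
print(f"|correlation| between embedding dim 1 and position: {abs(corr):.2f}")
```

The point of panel (b) is that this recovered structure can look the same on every day even while the mapping from individual cells to the manifold, i.e. which neurons carry which part of the code, changes over days to weeks.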