Friday, November 1, 2019

Causes and consequences of representational drift

Drs. Harvey, O'Leary, and I have just published a review in Current Opinion in Neurobiology titled "Causes and consequences of representational drift" [PDF].

We explore recent results from the work of Alon Rubin and Liron Sheintuch in the Ziv lab and Laura Driscoll in the Harvey lab. Their work has shown that neural representations reconfigure even for fixed, learned tasks. 

We discuss ways that the brain might support this reconfiguration without forgetting what it has already learned. One possibility is that a subset of stable neurons maintains memories. Alternatively, neural representations can be highly redundant, supporting a stable readout even as single cells appear to change. 
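To make the redundancy argument concrete, here is a minimal sketch (my own toy model, not an analysis from the paper): a population encodes a stimulus along a fixed readout direction, and drift confined to the readout's null space reshuffles single-cell responses without changing the decoded value.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
w = rng.normal(size=N)
w /= np.linalg.norm(w)              # fixed, unit-norm linear readout

def population_response(s, drift):
    """Encode scalar stimulus s along w, plus a drift component."""
    return s * w + drift

def null_space_drift(scale=1.0):
    """Random perturbation projected orthogonal to the readout w."""
    d = rng.normal(scale=scale, size=N)
    return d - w * (w @ d)          # remove the component along w

s = 1.7
for day in range(5):
    r = population_response(s, null_space_drift())
    print(f"day {day}: neuron 0 rate = {r[0]:+.2f}, decoded s = {w @ r:.2f}")

# Single-cell responses wander from day to day, yet the decoded stimulus is
# identical, because all of the change lies in the null space of the readout.
```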

Finally, we conjecture that redundancy lets different brain areas "error-correct" each other, allowing the brain to keep track of the shifting meaning of single neurons in plastic representations.

For a brief preview, here are the pre-publication versions of figures 2 and 3:

Figure 2: Internal representations have unconstrained degrees of freedom that allow drift. (a) Nonlinear dimensionality reduction of population activity recovers the low-dimensional structure of the T-maze in Driscoll et al. (2017). Each point represents a single time-point of population activity, and is colored according to location in the maze. (b) Point clouds illustrate low-dimensional projections of neural activity as in (a). Although unsupervised dimensionality-reduction methods can recover the task structure on each day, the way in which this structure is encoded in the population can change over days to weeks. (c) Left: Neural populations can encode information in relative firing rates and correlations, illustrated here as a sensory variable encoded in the sum of two neural signals ($y_1 + y_2$). Points represent neural activity during repeated presentations of the same stimulus. Variability orthogonal to this coding axis does not disrupt coding, but could appear as drift in experiments if it occurred on slow timescales. Right: Such distributed codes may be hard to read out from recorded subpopulations (e.g. $y_1$ or $y_2$ alone; black), especially if they entail correlations between brain areas. (d) Left: External covariates may exhibit context-dependent relationships. Each point here reflects a neural population state at a given time-point. The relationship between directions $x_1$ and $x_2$ changes depending on context (cyan versus red). Middle: Internally, this can be represented as a mixture model, in which different subspaces are allocated to encode each context, and the representations are linearly separable (gray plane). Right: The expanded representation contains two orthogonal subspaces that each encode a separate, context-dependent relationship. This dimensionality expansion increases the degrees of freedom in internal representations, thereby increasing opportunities for drift.
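As a rough illustration of the idea behind panels (a) and (b), and not the authors' actual analysis, the following sketch simulates place-field-like activity along a 1D track, re-encodes it on a second "day" via a random rotation of population space (a crude stand-in for drift), and checks that unsupervised dimensionality reduction (Isomap here, one arbitrary choice) recovers the track structure on both days.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
n_cells, n_samples = 80, 400
position = np.linspace(0, 1, n_samples)            # location along the track

# Gaussian "place fields" tile the track.
centers = rng.uniform(0, 1, n_cells)
rates = np.exp(-(position[:, None] - centers[None, :]) ** 2 / (2 * 0.05 ** 2))
rates += rng.normal(scale=0.05, size=rates.shape)  # measurement noise

# "Day 2": same task structure, but the population code is rotated.
Q, _ = np.linalg.qr(rng.normal(size=(n_cells, n_cells)))
rates_day2 = rates @ Q

for label, X in [("day 1", rates), ("day 2", rates_day2)]:
    coord = Isomap(n_neighbors=10, n_components=1).fit_transform(X)[:, 0]
    corr = abs(np.corrcoef(coord, position)[0, 1])
    print(f"{label}: |corr(recovered coordinate, true position)| = {corr:.2f}")

# Both days recover the 1D track structure, even though single-cell tuning
# (the columns of the rate matrix) looks completely different across days.
```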


Figure 3: Local changes in recurrent networks have global effects, and global processes can compensate. (a) The curved surfaces represent network configurations suitable for a given sensorimotor task, that is, neural connections and tunings that generate consistent behavior. Each axis represents different circuit parameters. Ongoing processes that disrupt performance must be corrected via error feedback (middle panel) to maintain overall sensorimotor accuracy. (Right panel) Continual circuit reconfiguration is possible in principle within the space of feasible circuit configurations. (b) Colored dots represent projections of neural population activity onto task-relevant dimensions at various time-points. Activity is illustrated in three hypothetical areas, depicting a feed-forward transformation of a stimulus input into a motor output. (Top) If the representation in one area changes (e.g. rotation of an internal sensory representation, Δs, curved black arrow), downstream areas must also compensate to avoid errors (e.g. motor errors, Δm, curved gray arrows). (Bottom) Although the original perturbation was localized, compensation can be distributed over many areas. Each downstream area can adjust how it interprets its input. This is illustrated here as curved arrows, which denote a compensatory rotation that partially corrects the original perturbation. The distributed adjustment in neural tuning may appear as drift to experiments that examine only a local subpopulation.
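The compensation idea in panel (b) can be caricatured with a linear toy model (again my own sketch, with arbitrary dimensions and a least-squares re-fit standing in for error-driven learning): rotating the sensory representation corrupts the motor output until the downstream readout is re-fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_sensory = 200, 30

x = rng.normal(size=(n_stim, 2))                    # stimuli
target = x @ np.array([[1.0, 0.5], [-0.5, 1.0]])    # desired motor output

A = rng.normal(size=(2, n_sensory))                 # sensory encoding
S = x @ A                                           # sensory population activity

# Fit the downstream readout once, by least squares: S @ W ~ target.
W, *_ = np.linalg.lstsq(S, target, rcond=None)

# "Drift": the sensory representation rotates by a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n_sensory, n_sensory)))
S_drifted = S @ Q

err_before = np.mean((S_drifted @ W - target) ** 2)

# Compensation: the downstream area re-fits its readout from error feedback,
# modeled here simply as another least-squares fit.
W_new, *_ = np.linalg.lstsq(S_drifted, target, rcond=None)
err_after = np.mean((S_drifted @ W_new - target) ** 2)

print(f"motor error after drift, before compensation: {err_before:.3f}")
print(f"motor error after downstream compensation:    {err_after:.3f}")

# Behavior is restored even though tuning in both areas has changed, which a
# recording confined to either area alone would register as drift.
```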

Overall, I was left with the impression that the brain might continually translate memories into new representations for shifting neural codes, somewhat akin to the way that old texts must be continually re-translated as language evolves. I'm reminded of the quote from Il Gattopardo, "Se vogliamo che tutto rimanga com'è, bisogna che tutto cambi": To stay the same, everything must change.

Many thanks to Tim O'Leary and Chris Harvey. 

The paper can be cited as:

Rule, M.E., O’Leary, T. and Harvey, C.D., 2019. Causes and consequences of representational drift. Current Opinion in Neurobiology, 58, pp. 141-147.
