In this paper, we test one of the hypotheses from the "Causes and consequences of representational drift" review: that neural population codes might support a stable readout even as the tunings of single neurons reconfigure.
[get PDF]
We examined long-term neuroimaging recordings from Driscoll et al. (2017). We found that the population code does, indeed, preserve a stable readout over time. This is because much of the reconfiguration in the neural code is orthogonal to the directions that encode behavior.
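As a toy illustration of this geometry (a made-up sketch, not the paper's analysis), a fixed linear readout is untouched by any drift vector lying in its null space:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                           # number of neurons (arbitrary)
w = rng.normal(size=n)           # fixed linear readout weights
x = rng.normal(size=n)           # population activity on day 1

# Construct a drift vector, then remove its component along w,
# leaving only the part orthogonal to the readout direction.
drift = rng.normal(size=n)
drift -= (drift @ w) / (w @ w) * w

x_later = x + drift              # activity after purely orthogonal drift
print(np.allclose(w @ x, w @ x_later))  # True: the readout is unchanged
```

Drift confined to directions orthogonal to the readout can be arbitrarily large in the single-neuron coordinates while leaving the decoded behavioral variable intact.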
Even this readout isn't perfectly stable, however, so some ongoing compensation must occur for readouts to adjust to the shifting neural code. This residual reconfiguration is much slower than the apparent day-to-day variability in single neurons, so it could easily be tracked by synaptic plasticity.
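A minimal sketch of such a slowly adapting decoder, using a simple least-squares setup on synthetic data (my own simplified formulation, not the paper's exact method or data): each day's weights minimize the decoding error plus a penalty λ‖w_d − w_{d−1}‖² on the weight change from the previous day, which has the closed form w_d = (XᵀX + λI)⁻¹(Xᵀy + λ w_{d−1}).

```python
import numpy as np

def daily_decoder(X, y, w_prev, lam):
    """One day's decoder weights: argmin_w ||X w - y||^2 + lam * ||w - w_prev||^2.
    Closed form: (X'X + lam*I)^-1 (X'y + lam*w_prev)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y + lam * w_prev)

# Synthetic data (made up for illustration; not the Driscoll et al. recordings).
rng = np.random.default_rng(1)
n_trials, n_neurons, n_days = 200, 30, 5
w_true = rng.normal(size=n_neurons)      # hypothetical "true" readout
w = np.zeros(n_neurons)
changes = []
for day in range(n_days):
    X = rng.normal(size=(n_trials, n_neurons))        # day's population activity
    y = X @ w_true + 0.1 * rng.normal(size=n_trials)  # behavioral variable to decode
    w_new = daily_decoder(X, y, w_prev=w, lam=100.0)
    changes.append(np.linalg.norm(w_new - w))         # per-day weight change
    w = w_new
print([f"{c:.2f}" for c in changes])
```

Sweeping lam traces out a plasticity-accuracy trade-off: larger lam forces smaller day-to-day weight changes at some cost in per-day accuracy, while lam → 0 recovers independent single-day decoders.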
Many thanks to all! This really was a team effort: data were collected by Laura Driscoll and colleagues in the Harvey lab, and most of the analysis was started by Adriana Loback. Dhruva Raman was instrumental in sorting out the null model simulations. The paper can be cited as:
Rule, M.E., Loback, A.R., Raman, D.V., Driscoll, L.N., Harvey, C.D. and O'Leary, T., 2020. Stable task information from an unstable neural population. Elife, 9, p.e51121.
Excerpt: Figure 4
A slowly-varying component of drift disrupts the behavior-coding subspace. (a) The small error increase when training concatenated decoders (Figure 3) suggests that plasticity is needed to maintain good decoding in the long term. We assess the minimum rate for this plasticity by training a separate decoder Md for each day, while minimizing the change in weights across days. The parameter λ controls how strongly we constrain weight changes across days (the inset equation reflects the objective function to be minimized; Methods). (b) Decoders trained on all days (cyan) perform better than chance (red), but worse than single-day decoders (ochre). Black traces illustrate the plasticity-accuracy trade-off for adaptive decoding. Modest weight changes per day are sufficient to match the performance of single-day decoders (boxes: inner 50% of data; horizontal lines: median; whiskers: 5th–95th percentiles). (c) Across days, the mean neural activity associated with a particular phase of the task changes (Δμ). We define an alignment measure ρ (Materials and methods) to assess the extent to which these changes align with behavior-coding directions in the population code (blue) versus directions of noise correlations (ochre). (d) Drift is more aligned with noise (ochre) than with behavior-coding directions (blue). Nevertheless, drift overlaps the behavior-coding subspace much more than chance (grey; dashed line: 95% Monte-Carlo sample). Each box reflects the distribution over all maze locations, with all consecutive pairs of sessions combined.
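The paper defines ρ precisely in its Materials and methods; as a rough sketch of the idea (a simplified stand-in, with all quantities below invented for illustration), one can measure how much of a drift vector survives projection onto a given subspace:

```python
import numpy as np

def alignment(delta_mu, subspace):
    """Fraction of drift captured by a subspace: ||P @ delta_mu|| / ||delta_mu||,
    where P projects onto the span of the subspace's columns.
    A simplified stand-in for the rho defined in the paper's Methods."""
    Q, _ = np.linalg.qr(subspace)        # orthonormal basis for the subspace
    return np.linalg.norm(Q.T @ delta_mu) / np.linalg.norm(delta_mu)

rng = np.random.default_rng(2)
n = 100                                  # neurons (arbitrary)
coding = rng.normal(size=(n, 3))         # hypothetical behavior-coding directions
delta_mu = rng.normal(size=n)            # hypothetical day-to-day drift in mean activity

rho = alignment(delta_mu, coding)
# For random drift, rho concentrates near sqrt(k/n) (here sqrt(3/100) ~ 0.17):
# this is the kind of chance level a Monte-Carlo shuffle estimates.
print(f"rho = {rho:.2f}")
```

Comparing this statistic for the behavior-coding subspace against a noise-correlation subspace, and against a random-direction chance level, is the spirit of the comparison in panels (c) and (d).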