mrule-intheworks
Saturday, December 31, 2022
Monday, February 21, 2022
Fun with reverse-caption images
Wednesday, March 10, 2021
Self-Healing Neural Codes
A first draft of a new manuscript is up on bioRxiv. This wraps up some loose ends from our previous work, which examined how the brain might use constantly shifting neural representations in sensorimotor tasks.
In "Self-Healing Neural Codes", we use modelling experiments to show that homeostatic mechanisms could help the brain maintain consistent interpretations of shifting neural representations. This would allow internal representations to be continuously re-consolidated, letting the brain reconfigure how single neurons are used without forgetting.
Here's the abstract:
Recently, we proposed that neurons might continuously exchange prediction error signals in order to support ``coordinated drift''. In coordinated drift, neurons track unstable population codes by updating how they read out population activity. In this work, we show how coordinated drift might be achieved without a reward-driven error signal in a semi-supervised way, using redundant population representations. We discuss scenarios in which an error-correcting code might be combined with neural plasticity to ensure long-lived representations despite drift, which we call ``self-healing codes''. Self-healing codes imply three signatures of population activity that we see in vivo: (1) low-dimensional manifold activity; (2) neural representations that reconfigure while preserving the code geometry; and (3) neuronal tuning fading in and out, or changing preferred tuning abruptly to a new location. We also show that additional mechanisms, like population response normalization and recurrent predictive computations, stabilize codes further. These results are consistent with long-term neural recordings of representational drift in both hippocampus and posterior parietal cortex. The model we explore here outlines neurally plausible mechanisms for long-term stable readouts from drifting population codes, and explains some features of the statistics of drift.
Tuesday, December 1, 2020
Friday, October 16, 2020
Brain–Machine Interfaces: Closed-Loop Control in an Adaptive System
Monday, July 13, 2020
Convolution with the Hartley transform
The Hartley transform can be computed by summing the real and imaginary parts of the Fourier transform.
\begin{equation}\begin{aligned} \mathcal F \mathbf a &= \mathbf x + i\mathbf y \\ \mathcal H \mathbf a &= \mathbf x + \mathbf y, \end{aligned}\end{equation}where $\mathbf a$, $\mathbf x$, and $\mathbf y$ are real-valued vectors, $\mathcal F$ is the Fourier transform, and $\mathcal H$ is the Hartley transform. It has several useful properties.
- It is unitary, and also an involution: it is its own inverse.
- Its output is real-valued, so it can be used with numerical routines that cannot handle complex numbers.
- It can be computed in $\mathcal O (n \log(n))$ time using standard Fast Fourier Transform (FFT) libraries.
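As a sketch of these properties in NumPy: note that the FFT's sign convention determines whether the Hartley transform is the sum or the difference of the real and imaginary parts. NumPy's forward FFT uses the $e^{-2\pi i kn/N}$ convention, so there the Hartley transform comes out as real minus imaginary. The function names below are illustrative, not from any particular library:

```python
import numpy as np

def dht(a):
    # Discrete Hartley transform via the FFT. With NumPy's e^{-i...}
    # sign convention this is Re(F) - Im(F); with the opposite
    # convention it would be Re(F) + Im(F).
    F = np.fft.fft(a)
    return F.real - F.imag

def idht(A):
    # The unnormalized DHT is its own inverse up to a factor of N.
    return dht(A) / len(A)

def hartley_convolve(x, y):
    # Circular convolution via the Hartley convolution theorem:
    #   Z[k] = (X[k]Y[k] + X[k]Y[-k] + X[-k]Y[k] - X[-k]Y[-k]) / 2,
    # where X[-k] denotes X at index (N - k) mod N.
    X, Y = dht(x), dht(y)
    Xr = np.roll(X[::-1], 1)  # X[-k]
    Yr = np.roll(Y[::-1], 1)  # Y[-k]
    Z = 0.5 * (X * Y + X * Yr + Xr * Y - Xr * Yr)
    return idht(Z)
```

The result agrees with the usual FFT route, `ifft(fft(x) * fft(y))`, but every intermediate array stays real-valued.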
Tuesday, May 12, 2020
Gaussian process models for hippocampal grid cells
Gaussian processes (GPs) generalize the idea of multivariate Gaussian distributions to distributions over functions. In neuroscience, they can be used to estimate how the firing rate of a neuron varies as a function of other variables (e.g. to track retinal waves). Lately, we've been using Gaussian processes to describe the firing rate map of hippocampal grid cells.
We review Bayesian inference and Gaussian processes, explore applications of Gaussian processes to analyzing grid cell data, and finally construct a GP model of the log-rate that accounts for the Poisson noise in spike count data. Along the way, we discuss fast approximations for these methods, like kernel density estimation, or approximating GP inference using convolutions.
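As a rough illustration of the basic machinery (ordinary GP regression with a squared-exponential kernel, not the Poisson log-rate model described above), the posterior mean and variance might be computed as:

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    # Squared-exponential (RBF) covariance between 1D input arrays.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Standard GP regression: condition a Gaussian prior over functions
    # on noisy observations, returning posterior mean and pointwise variance.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

For spike counts, one would instead place the GP prior on the log-rate and handle the Poisson likelihood approximately, since the posterior is no longer Gaussian in closed form.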
Edit: There is a bug in the "covariance_crosshairs" function: there should be a square root around "chi2.isf(1-p, df=2)".
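For context, the square root is needed because the squared Mahalanobis radius of a 2D Gaussian follows a chi-squared distribution with 2 degrees of freedom, so the confidence-region boundary sits at the square root of the chi-squared quantile. A hypothetical helper (not the original "covariance_crosshairs") sketching the corrected radius:

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse_points(mean, cov, p=0.95, n=100):
    # Points on the boundary of the central p-confidence region of a
    # 2D Gaussian. chi2.isf(1-p, df=2) gives the squared Mahalanobis
    # radius, hence the square root.
    r = np.sqrt(chi2.isf(1 - p, df=2))
    theta = np.linspace(0, 2 * np.pi, n)
    circle = r * np.stack([np.cos(theta), np.sin(theta)])
    L = np.linalg.cholesky(cov)  # map unit circle onto the covariance
    return np.asarray(mean)[:, None] + L @ circle
```

(For df=2 the quantile has a closed form, chi2.isf(q, df=2) == -2 ln q, which is a handy sanity check.)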