Monday, February 21, 2022

Fun with reverse-caption images

Our department asked for an image to accompany a blog post on a manuscript we published recently. I explored generating some with a reverse-caption image bot based on VQGAN+CLIP (github here). Perhaps for the best, colleagues voted against using them. Here they are for posterity, along with their accompanying captions.

paint an anatomical diagram of the brain on the left, which disassembles itself into Mongolian script in the middle, then reassembles itself into an abstract geometric object in the style of Kandinsky on the right.

Thursday, April 15, 2021

An ODE with smooth, bounded solutions that computes a discontinuous function in finite time

Wednesday, March 10, 2021

Self-Healing Neural Codes

A first draft of a new manuscript is up on bioRxiv. This wraps up some loose ends from our previous work, which examined how the brain might use constantly shifting neural representations in sensorimotor tasks.

In "Self-Healing Neural Codes", we show that modelling experiments predict that homeostatic mechanisms could help the brain maintain consistent interpretations of shifting neural representations. This could allow internal representations to be continuously re-consolidated, letting the brain reconfigure how single neurons are used without forgetting.

Here's the abstract: 

Recently, we proposed that neurons might continuously exchange prediction error signals in order to support "coordinated drift". In coordinated drift, neurons track unstable population codes by updating how they read out population activity. In this work, we show how coordinated drift might be achieved without a reward-driven error signal in a semi-supervised way, using redundant population representations. We discuss scenarios in which an error-correcting code might be combined with neural plasticity to ensure long-lived representations despite drift, which we call "self-healing codes". Self-healing codes imply three signatures of population activity that we see in vivo: (1) low-dimensional manifold activity; (2) neural representations that reconfigure while preserving the code geometry; and (3) neuronal tuning fading in and out, or changing preferred tuning abruptly to a new location. We also show that additional mechanisms, like population response normalization and recurrent predictive computations, stabilize codes further. These results are consistent with long-term neural recordings of representational drift in both hippocampus and posterior parietal cortex. The model we explore here outlines neurally plausible mechanisms for long-term stable readouts from drifting population codes, and explains some statistical features of drift.

Feedback welcome ( :

Tuesday, December 1, 2020

The Information Theory of Developmental Pruning: Optimizing Global Network Architecture Using Local Synaptic Rules

Another paper from the Hennig lab is out; this one comes from Carolin Scholl's master's thesis. Once again, we used an artificial neural network to build intuition about biology. The paper is on bioRxiv, and you can also get the PDF here.

Friday, October 16, 2020

Brain–Machine Interfaces: Closed-Loop Control in an Adaptive System

Edit: I'm pleased to announce that the in-press preprint is now available from Annual Reviews [pdf].

During the first pandemic lockdown in 2020, I had the pleasure of preparing an introductory review on brain-machine interfaces with Ethan Sorrell. It will be published in the 2021 Annual Review of Control, Robotics, and Autonomous Systems. The review is in press now, but I thought I'd share a little sneak peek by way of some figures.

More figures and clip-art are on github. The clip art and figure components are free to reuse (CC BY-NC 4.0), but Annual Reviews owns the copyright to composed figures and sub-figures.

Monday, July 13, 2020

Convolution with the Hartley transform

The Hartley transform can be computed by summing the real and imaginary parts of the Fourier transform:

\begin{equation}\begin{aligned} \mathcal F \mathbf a &= \mathbf x + i\mathbf y \\ \mathcal H \mathbf a &= \mathbf x + \mathbf y, \end{aligned}\end{equation}

where $\mathbf a$, $\mathbf x$, and $\mathbf y$ are real-valued vectors, $\mathcal F$ is the Fourier transform (taken here with the $e^{+i\omega t}$ sign convention; under the $e^{-i\omega t}$ convention used by most FFT libraries, one instead has $\mathcal H \mathbf a = \mathbf x - \mathbf y$), and $\mathcal H$ is the Hartley transform. It has several useful properties.

  • When normalized by $1/\sqrt n$, it is unitary and also an involution: it is its own inverse (the unnormalized transform applied twice returns $n$ times the input).
  • Its output is real-valued, so it can be used with numerical routines that cannot handle complex numbers.
  • It can be computed in $\mathcal O (n \log(n))$ time using standard Fast Fourier Transform (FFT) libraries.
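As a rough sketch of how this works in practice, here is a discrete Hartley transform built on numpy's FFT, together with circular convolution via the DHT convolution theorem. The function names are my own, not a standard API; with numpy's $e^{-2\pi i kn/N}$ sign convention, the Hartley transform is the real part minus the imaginary part of the FFT.

```python
import numpy as np

def dht(a):
    # Discrete Hartley transform via the FFT. With numpy's
    # e^{-2*pi*i*k*n/N} sign convention, the Hartley transform is
    # the real part minus the imaginary part of the Fourier transform.
    F = np.fft.fft(a)
    return F.real - F.imag

def idht(A):
    # The unnormalized DHT applied twice returns n times the input,
    # so the inverse is just the forward transform divided by n.
    return dht(A) / len(A)

def hartley_convolve(x, y):
    # Circular convolution via the DHT convolution theorem:
    #   Z[k] = 1/2 * ((X[k] + X[-k]) * Y[k] + (X[k] - X[-k]) * Y[-k]),
    # with indices taken modulo n.
    X, Y = dht(x), dht(y)
    Xm = np.roll(X[::-1], 1)  # X[-k], i.e. X[(n - k) % n]
    Ym = np.roll(Y[::-1], 1)  # Y[-k]
    Z = 0.5 * ((X + Xm) * Y + (X - Xm) * Ym)
    return idht(Z)
```

Everything stays real-valued throughout, and `idht(dht(a))` recovers `a`, matching the involution property up to the factor of $n$ absorbed by the normalization.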