mrule-intheworks
Monday, February 21, 2022
Fun with reverse-caption images
Thursday, April 15, 2021
An ODE with smooth, bounded solutions that computes a discontinuous function in finite time
I'd like to share an Ordinary Differential Equation (ODE) that I find entertaining.
For initial condition $x(0) = x_0 \in \mathbb R$, the following ODE converges to $\operatorname{sign}(x_0)$ in finite time: \begin{equation}\begin{aligned} \tau \dot x &= \operatorname{sq}\{ x - x^3 \}, \end{aligned}\tag{1}\end{equation} where $\tau$ is a time constant controlling how quickly the ODE evolves and $\operatorname{sq}(\cdot)$ is the signed square root function $\operatorname{sq}(a) = \operatorname{sign}(a)\cdot\sqrt{|a|}$. If we set the time constant to $\tau = {\sqrt{8\pi}}/{\Gamma(\tfrac 1 4)^2}$, we find that all $x_0$ reach $\operatorname{sign}(x_0)$ by time $t=1$.
Figure 1a shows the vector field for Equation $(1)$. Note that its slope is vertical (infinite) at $x\in\{-1,0,1\}$, so the vector field is not Lipschitz continuous. This is required for finite-time convergence, but it sacrifices uniqueness of solutions. In particular, you cannot uniquely continue the final-value problem backward in time from $x_f\in\{-1,0,1\}$.
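The finite-time convergence itself is easy to check numerically. Here is a minimal sketch using scipy (my own illustration, not code from the original post; the tolerances and sample initial conditions are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma

# Time constant from Equation (1): tau = sqrt(8*pi) / Gamma(1/4)^2.
tau = np.sqrt(8 * np.pi) / gamma(0.25) ** 2

def sq(a):
    """Signed square root: sign(a) * sqrt(|a|)."""
    return np.sign(a) * np.sqrt(np.abs(a))

def rhs(t, x):
    # Equation (1) rearranged as dx/dt = sq(x - x^3) / tau.
    return sq(x - x**3) / tau

for x0 in (-3.0, -0.5, 0.2, 2.0):
    sol = solve_ivp(rhs, (0.0, 1.0), [x0], rtol=1e-9, atol=1e-12)
    print(f"x0 = {x0:+5.2f}  ->  x(1) = {sol.y[0, -1]:+.6f}")
# Each trajectory lands on sign(x0) = ±1 by t = 1, up to solver tolerance.
```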
Derivation from polar form
Equation $(1)$ can be derived from the following ODE on a circular variable $\theta$ \begin{equation}\begin{aligned} \dot\theta &= \tfrac 1 2 \operatorname{sq}\{ \sin(2\theta) \}, \end{aligned}\tag{2}\end{equation} where $\theta\in(-\pi,\pi)$. Equation $(2)$ will pull $\theta$ toward $\theta\operatorname{mod}\pi = \pi/2$, or leave it stationary at the unstable fixed points $\theta\operatorname{mod}\pi=0$. We apply the change of variables $x = \tan(\frac\theta 2)$, which maps $(-\pi,\pi)$ onto $\mathbb R$, to Equation $(2)$. Note that $\theta = 2 \tan^{-1}(x)$, $\frac{dx}{d\theta} = \frac 1 2 (1+x^2)$, and $\sin(2\theta) = 4 \tfrac{x-x^3}{(1 + x^2)^2}$. This yields: $$ \dot x = \tfrac 1 4 (1 + x^2)\, \operatorname{sq}\big\{ 4 \tfrac{x-x^3}{(1 + x^2)^2} \big\}. $$ Most terms cancel if we expand the signed square root: $$ \dot x = \tfrac 1 4 (1 + x^2) \sqrt{\left\lvert4 \tfrac{x-x^3}{(1 + x^2)^2}\right\rvert} \cdot\operatorname{sign}\big\{ 4 \tfrac{x-x^3}{(1 + x^2)^2}\big\}. $$ The factor $4/(1+x^2)^2$ can be dropped from inside the $\operatorname{sign}(\cdot)$ term, since it is always positive, and comes out of the square root as $2/(1+x^2)$, cancelling the leading $(1+x^2)$: \begin{equation}\begin{aligned} \dot x &= \tfrac 1 2 \sqrt{\left\lvert {x-x^3}\right\rvert} \cdot\operatorname{sign}\{ x-x^3 \} = \tfrac 1 2 \operatorname{sq}\{ x - x^3 \}. \end{aligned}\end{equation} This is Equation $(1)$ with $\tau = 2$; the constant factor is absorbed by rescaling time, i.e. by the choice of $\tau$.
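The two identities used in the substitution can be checked symbolically. A quick sympy sketch (my own; both prints should be zero, assuming `expand_trig` fully reduces the nested arctangent):

```python
import sympy as sp

x = sp.symbols('x', real=True)
theta = 2 * sp.atan(x)  # inverse of the substitution x = tan(theta/2)

# sin(2*theta) expressed in x should equal 4*(x - x**3)/(1 + x**2)**2.
sin2theta = sp.expand_trig(sp.sin(2 * theta))
print(sp.simplify(sin2theta - 4 * (x - x**3) / (1 + x**2)**2))  # 0

# dx/dtheta = 1/(dtheta/dx) should equal (1 + x**2)/2.
dx_dtheta = 1 / sp.diff(theta, x)
print(sp.simplify(dx_dtheta - (1 + x**2) / 2))                  # 0
```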
The peculiar time constant $\tau = {\sqrt{8\pi}}/{\Gamma(\tfrac 1 4)^2}$ comes from the maximum convergence time, which can be evaluated in closed form using elliptic integrals. The slowest solutions start at the unstable fixed points ($x_0\to 0$ or $x_0\to\pm\infty$), and for $\dot x = \operatorname{sq}\{ x - x^3 \}$ the travel time from $x_0 \to 0^+$ to $x = 1$ is $$\int_0^1 \frac{dx}{\sqrt{x - x^3}} = \sqrt 2\, K(\tfrac{1}{\sqrt 2}) = \frac{\Gamma(\tfrac 1 4)^2}{\sqrt{8\pi}},$$ where $K$ is the complete elliptic integral of the first kind. Setting $\tau$ to the reciprocal of this value rescales the worst-case convergence time to exactly $1$.
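A quick numerical cross-check of this value (my own sketch; `quad` copes with the integrable endpoint singularities):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, ellipk

# Unscaled travel time of dx/dt = sq(x - x^3) from x0 -> 0+ up to x = 1.
T, _ = quad(lambda x: 1.0 / np.sqrt(x - x**3), 0.0, 1.0)

print(T)                                      # ~2.62206
print(np.sqrt(2) * ellipk(0.5))               # same value; ellipk takes m = k^2
print(gamma(0.25) ** 2 / np.sqrt(8 * np.pi))  # closed form; tau is its reciprocal
```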
Remarks
This ODE is notable in that the mapping $x_0 \mapsto x(1) = \operatorname{sign}(x_0)$ computes a discontinuous function in finite time, with all solutions remaining smooth. It also misbehaves in some other interesting ways and is a nice pathological example of what can happen if your vector field isn't Lipschitz. My main motivation for deriving this was to find an ODE whose solutions are smooth and bounded, but for which backpropagation through time fails due to gradients vanishing.
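To make the vanishing-gradient failure concrete: the time-1 map $x_0 \mapsto x(1)$ is piecewise constant, so differentiating through the solver returns (numerically) zero almost everywhere. A minimal sketch, again my own illustration using finite differences:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma

tau = np.sqrt(8 * np.pi) / gamma(0.25) ** 2

def flow(x0, t_final=1.0):
    """Integrate Equation (1) from x(0) = x0 up to t_final."""
    rhs = lambda t, x: np.sign(x - x**3) * np.sqrt(np.abs(x - x**3)) / tau
    return solve_ivp(rhs, (0.0, t_final), [x0], rtol=1e-9, atol=1e-12).y[0, -1]

# Central finite differences of the time-1 map x0 -> x(1).
for x0 in (0.3, 1.5, -0.7):
    eps = 1e-4
    grad = (flow(x0 + eps) - flow(x0 - eps)) / (2 * eps)
    print(f"d x(1)/d x0 at x0 = {x0:+.1f}: {grad:.2e}")  # ~0 in every case
```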
Wednesday, March 10, 2021
Self-Healing Neural Codes
A first draft of a new manuscript is up on bioRxiv. This wraps up some loose ends from our previous work, which examined how the brain might use constantly shifting neural representations in sensorimotor tasks.
In "Self-Healing Neural Codes", we show that modelling experiments predict that homeostatic mechanisms could help the brain maintain consistent interpretations of shifting neural representations. This could allow for internal representations to be continuously re-consolidated, and allow the brain to reconfigure how single neurons are used without forgetting.
Here's the abstract:
Recently, we proposed that neurons might continuously exchange prediction error signals in order to support "coordinated drift". In coordinated drift, neurons track unstable population codes by updating how they read out population activity. In this work, we show how coordinated drift might be achieved without a reward-driven error signal in a semi-supervised way, using redundant population representations. We discuss scenarios in which an error-correcting code might be combined with neural plasticity to ensure long-lived representations despite drift, which we call "self-healing codes". Self-healing codes imply three signatures of population activity that we see in vivo: (1) low-dimensional manifold activity; (2) neural representations that reconfigure while preserving the code geometry; and (3) neuronal tuning fading in and out, or changing preferred tuning abruptly to a new location. We also show that additional mechanisms, like population response normalization and recurrent predictive computations, stabilize codes further. These results are consistent with long-term neural recordings of representational drift in both hippocampus and posterior parietal cortex. The model we explore here outlines neurally plausible mechanisms for long-term stable readouts from drifting population codes, and explains some features of the statistics of drift.
Friday, October 16, 2020
Brain–Machine Interfaces: Closed-Loop Control in an Adaptive System
Monday, July 13, 2020
Convolution with the Hartley transform
The Hartley transform can be computed by summing the real and imaginary parts of the Fourier transform.
\begin{equation}\begin{aligned} \mathcal F \mathbf a &= \mathbf x + i\mathbf y \\ \mathcal H \mathbf a &= \mathbf x + \mathbf y, \end{aligned}\end{equation}where $\mathbf a$, $\mathbf x$, and $\mathbf y$ are real-valued vectors, $\mathcal F$ is the Fourier transform, and $\mathcal H$ is the Hartley transform. It has several useful properties.
- It is unitary, and also an involution: it is its own inverse.
- Its output is real-valued, so it can be used with numerical routines that cannot handle complex numbers.
- It can be computed in $\mathcal O (n \log(n))$ time using standard Fast Fourier Transform (FFT) libraries, as the sketch below demonstrates.
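Here is a short numpy sketch of these properties (my example, not from the original post). One caveat on conventions: numpy's FFT uses the $e^{-2\pi i jk/n}$ kernel, for which the cas-kernel Hartley transform comes out as the real part minus the imaginary part; the sum written above corresponds to the opposite Fourier sign convention.

```python
import numpy as np

def hartley(a):
    """Discrete Hartley transform via the FFT, normalized to be unitary.

    With numpy's FFT sign convention, the cas-kernel Hartley transform
    is Re(F a) - Im(F a); under the opposite Fourier sign convention it
    is the sum, as in the text above.
    """
    F = np.fft.fft(a)
    return (F.real - F.imag) / np.sqrt(len(a))

a = np.random.default_rng(0).standard_normal(8)

# Involution: applying the (unitary) transform twice recovers the input.
print(np.allclose(hartley(hartley(a)), a))                         # True

# Unitary: the transform preserves the vector's norm.
print(np.isclose(np.linalg.norm(hartley(a)), np.linalg.norm(a)))   # True
```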