Another paper from the Hennig lab is out, this one from Carolin Scholl's master's thesis. Once again, we used an artificial neural network to get intuition about biology. The paper is on bioRxiv, and you can also get the PDF here.
Biological neural networks need to be efficient and compact: synapses and neurons take up space and cost energy to operate and maintain. This favors an optimized network topology that minimizes redundant neurons and connections.
Not everything can be hard-wired: During development, the brain must learn efficient network topologies based on neural activity. It does this by first producing extra neurons and synapses, and then removing redundant ones later.
How might a neuron or synapse determine its own importance in the broader network? To study this, we used a Restricted Boltzmann Machine (RBM) to emulate developmental pruning. RBMs are simple enough to analyze mathematically, but still share some features with real neurons, such as binary outputs and stochastic activity.
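To make the setup concrete, here is a minimal sketch of an RBM with binary stochastic units trained by one-step contrastive divergence (CD-1). This is a generic textbook implementation in Python/NumPy, not the code from the paper; the class name, learning rate, and initialization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine with binary stochastic units."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        """Hidden activation probabilities and a binary sample."""
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        """Visible activation probabilities and a binary sample."""
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a batch of binary inputs."""
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # Positive-phase minus negative-phase statistics
        dW = v0.T @ ph0 - pv1.T @ ph1
        self.W += self.lr * dW / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
```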
Previously, we noticed that synapses in an RBM could estimate their own importance from local signals, such as the spiking patterns of their presynaptic and postsynaptic cells. Synapses in the brain should also have access to these signals, so this pruning approach is biologically plausible (albeit quite abstract).
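As an illustration of what such a local signal might look like, the sketch below (building on the RBM class above) scores each synapse by the variance of its presynaptic–postsynaptic co-activation across input patterns, so each synapse only "sees" the activity of its own pre- and postsynaptic units. This is a hypothetical proxy for illustration; the exact importance measure derived in the paper may differ.

```python
def synapse_importance(rbm, data):
    """Illustrative per-synapse importance estimate from local activity:
    the variance of the co-activation v_i * h_j across input patterns."""
    _, h = rbm.sample_h(data)                   # hidden activity driven by the data
    coact = data[:, :, None] * h[:, None, :]    # shape: (batch, n_visible, n_hidden)
    return coact.var(axis=0)                    # one importance value per weight
```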
We tested whether this importance signal could be used to optimize the network topology by removing unimportant synapses. We also removed neurons that lost a large number of synapses, which is similar to what happens in the brain.
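A hedged sketch of this pruning step is shown below: the least-important synapses are zeroed out, and hidden units that have lost most of their connections are removed. The fraction pruned per iteration and the neuron-removal threshold are made-up values, not those used in the paper.

```python
def prune(rbm, importance, frac_weights=0.1, neuron_loss_thresh=0.8):
    """Remove the least-important synapses, then drop hidden units that
    have lost most of their synapses (thresholds are hypothetical)."""
    mask = rbm.W != 0
    cutoff = np.quantile(importance[mask], frac_weights)
    rbm.W[(importance <= cutoff) & mask] = 0.0
    # Remove a hidden unit if fewer than (1 - neuron_loss_thresh) of its
    # possible connections remain.
    alive_frac = (rbm.W != 0).mean(axis=0)
    keep = alive_frac > (1.0 - neuron_loss_thresh)
    rbm.W = rbm.W[:, keep]
    rbm.b_h = rbm.b_h[keep]
    return rbm
```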
In a sensory coding task, this worked well: pruning removed extra synapses and neurons, while retaining a network topology that could encode sensory inputs. This pruning rule is grounded in information theory and correctly identified redundant neurons. In contrast, pruning at random, or based on synaptic weights alone, was less able to identify redundant neurons.
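For comparison, the two baselines mentioned above are easy to express in the same framework: assigning importance at random, or using the absolute synaptic weight. These are illustrative stand-ins for the control conditions, not the exact implementation from the paper.

```python
def baseline_importance(rbm, kind="magnitude"):
    """Baseline importance scores: random, or absolute weight magnitude."""
    if kind == "random":
        return rng.random(rbm.W.shape)
    return np.abs(rbm.W)
```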
Figure 3 Excerpt: (A): Data representation and DBM architecture. MNIST digits were cropped and binarized. Each unit in hidden layer $\mathbf{h^1}$ had an individual receptive field covering neighboring input pixels; $\mathbf{h^2}$ was fully connected. Only a few connections are shown for clarity. The classifier was trained on latent encodings from $\mathbf{h^2}$. (B): Classification error of a logistic regression classifier trained on $\mathbf{h^2}$ samples as a function of remaining weights $n_w$. The dotted line indicates the baseline error of a classifier trained on the raw digits. (C): Number of latent units in $\mathbf{h^1}$ and $\mathbf{h^2}$ as a function of remaining weights over the course of ten iterations of pruning.
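For a rough sense of how such an evaluation could be set up, here is a simplified sketch that crops and binarizes digits and trains a logistic-regression readout on hidden-layer samples. It uses a single hidden layer rather than the two-layer DBM from the figure, and the crop size and binarization threshold are assumptions for illustration.

```python
from sklearn.linear_model import LogisticRegression

def preprocess(images, crop=4, thresh=0.5):
    """Crop border pixels and binarize digits (crop and threshold are illustrative)."""
    x = images[:, crop:-crop, crop:-crop] / 255.0
    return (x.reshape(len(x), -1) > thresh).astype(float)

def latent_classification_error(rbm, x_train, y_train, x_test, y_test):
    """Train a logistic-regression readout on hidden-layer samples and report
    test error, analogous to panel B of the figure (single-layer simplification)."""
    _, h_train = rbm.sample_h(x_train)
    _, h_test = rbm.sample_h(x_test)
    clf = LogisticRegression(max_iter=1000).fit(h_train, y_train)
    return 1.0 - clf.score(h_test, y_test)
```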
The paper still has to go through peer review, but for now it can be cited as:
Scholl, C., Rule, M. E., Hennig, M. H., 2020. The Information Theory of Developmental Pruning: Optimizing Global Network Architecture Using Local Synaptic Rules. bioRxiv, doi: 10.1101/2020.11.30.403360