Tuesday, September 5, 2017

Inferring unobserved neural field intensities from spiking observations

Edit: I am very happy to report that this work has now been published in PLoS Computational Biology.

I'll be presenting our ongoing work on merging neural field models with statistical inference at the Integrated Systems Neuroscience Workshop in Manchester, and at the Bernstein Conference in Göttingen. [get poster PDF]

What's exciting about this work is that it combines modelling principles from statistical physics and statistical inference. We start with a detailed microscopic model, and then construct a second-order neural field model, which is then used directly for statistical inference. Normally, neural field models are only treated as abstract, qualitative mathematical models, and are rarely integrated with data. 


Video: Simulation of a 3-state Quiescent-Active-Refractory (blue, red, green) neural field model of the spontaneous retinal waves that occur during development. Waves are generated by the inner retina and drive retinal ganglion cell spiking, which we can observe on a high-density multi-electrode array. [get original avi from Github]

Friday, September 1, 2017

Population coding of sensory stimuli through latent variables

Edit: This work is published now, in Entropy [PDF].

Martino Sorbaro has been doing some really interesting work exploring the encoding strategies learned by artificial neural networks. We've found similarities between the statistics of the population codes learned by Restricted Boltzmann Machines (RBMs) and those of the retina. We'll present this work as a poster at the upcoming Integrated Systems Neuroscience Workshop in Manchester.

TL;DR:

RBMs as a model for latent-variable encoding

  • Optimal latent-variable encoding of visual stimuli seems to consistently yield models near statistical criticality.
  • Poor fits (too few hidden units, i.e. under-fitting) do not exhibit this property.
  • Critical RBMs mimic the retina in Zipf laws, sparsity, and decorrelation.
  • Above the optimal model size, extra units are weakly constrained, as measured by Fisher information.
  • Receptive fields of excess units are less retina-like.
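As an aside, the Zipf-law property in the first bullets is easy to state operationally: rank the binary population patterns ("words") by how often they occur, and check whether frequency falls off roughly as 1/rank on a log-log plot. The sketch below illustrates the measurement procedure only; the simulated data, population size, and latent-drive model are invented for illustration and are not the stimuli or recordings from the paper.

```python
import numpy as np

# Illustrative sketch: measuring Zipf-like rank-frequency statistics in
# a binary population code. All numbers here are made up for the demo.
rng = np.random.default_rng(0)

n_units, n_samples = 10, 50_000
latent = rng.random(n_samples)            # shared latent drive (toy stand-in)
p_fire = 0.05 + 0.4 * latent[:, None]     # units co-modulated by the latent
spikes = (rng.random((n_samples, n_units)) < p_fire).astype(int)

# Count the frequency of each binary pattern ("word").
words = spikes.dot(1 << np.arange(n_units))
counts = np.bincount(words, minlength=2**n_units)
freq = np.sort(counts[counts > 0])[::-1] / n_samples

# Zipf's law corresponds to a slope near -1 on the log-log
# rank-frequency plot; here we just estimate the slope.
ranks = np.arange(1, len(freq) + 1)
slope = np.polyfit(np.log(ranks), np.log(freq), 1)[0]
print(f"rank-frequency slope: {slope:.2f}")
```

Latent co-modulation is what makes heavy-tailed word statistics possible here; fully independent units would produce a much steeper, non-Zipfian fall-off.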

Questions and controversy

  • Is statistical criticality a general feature of factorized latent variable models?
  • Is criticality in the retina expected based simply on optimal encoding?

[download poster PDF]


Abstract:

Several studies observe power-law statistics consistent with critical scaling exponents in neural data, but it is unclear whether such statistics necessarily imply criticality. In this work, we examine whether the 1/f statistics of retinal populations are inherited from visual stimuli, or whether they might emerge from collective neural dynamics independently of stimulus statistics. We examine, in silico, a latent-variable encoding model of visual scenes, and empirically explore the conditions under which such a model exhibits the 1/f statistics thought to reflect criticality. Specifically, we use the Restricted Boltzmann Machine (RBM) as a factorized binary latent-variable model for stimulus encoding. We find two surprising results. First, latent-variable models need not exhibit 1/f statistics, but the model at the optimal size, reflecting the smallest model that can faithfully encode stimuli, does. We illustrate that the optimal model size can be predicted from sloppy dimensions of the Fisher information matrix (FIM), which align with a subspace spanning the superfluous latent variables. Second, the optimal-sized model can exhibit 1/f statistics even when the stimuli do not, indicating that this property is not inherited from environmental statistics. Furthermore, such models exhibit properties of statistical criticality, including diverging susceptibilities. This empirical evidence suggests that 1/f statistics are neither inherited from the environment nor a necessary feature of accurate encoding. Rather, it suggests that parsimonious latent-variable models are naturally poised close to criticality, generating the observed 1/f statistics. Overall, these results are consistent with conjectures in other fields that a cost-benefit trade-off between expressivity and parsimony underlies the emergence of criticality and 1/f power-law statistics.
Furthermore, this work suggests that in latent-variable encoding models, the emergence of 1/f statistics reflects true criticality and is not inherited from the environmental distribution of stimuli.

The poster can be cited as:

Sorbaro, M., Rule, M., Hilgen, G., Sernagor, E., Hennig, M. H. (2017) Signatures of optimal population coding of sensory stimuli through latent variables. [Poster] The second Integrated Systems Neuroscience Workshop, 7-8th September 2017, The University of Manchester, Manchester, UK.

Edit: the paper can be cited as

Rule, M.E., Sorbaro, M. and Hennig, M.H., 2020. Optimal encoding in stochastic latent-variable models. Entropy, 22(7), p.714.


Thursday, June 22, 2017

Note: Marginal likelihood for Bayesian models with a Gaussian-approximated posterior

I first learned this solution from Botond Cseke. I'm not sure where it originates; it is essentially Laplace's method for approximating integrals using a Gaussian distribution, where the parameters of the Gaussian might come from any number of approximate inference approaches.

If I have a Bayesian statistical model with hyperparameters $\Theta$ and no closed-form posterior, how can I optimize $\Theta$?
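To make the idea concrete, here is a minimal numerical sketch of the Laplace estimate of the log marginal likelihood, $\log p(y|\Theta) \approx \log p(y,\hat\theta|\Theta) + \tfrac{d}{2}\log 2\pi - \tfrac12 \log|H|$, where $\hat\theta$ is the posterior mode and $H$ the negative Hessian of the log joint. The toy model (Gaussian likelihood, Gaussian prior, my choice for illustration, not from the note) is deliberately conjugate so the approximation can be checked against the exact answer:

```python
import numpy as np

def log_joint(theta, y, sigma2, tau2):
    """log p(y|theta) + log p(theta) for y_i ~ N(theta, sigma2), theta ~ N(0, tau2)."""
    ll = (-0.5 * np.sum((y - theta) ** 2) / sigma2
          - 0.5 * len(y) * np.log(2 * np.pi * sigma2))
    lp = -0.5 * theta ** 2 / tau2 - 0.5 * np.log(2 * np.pi * tau2)
    return ll + lp

def laplace_evidence(y, sigma2, tau2):
    """Laplace estimate: log p(y, theta_hat) + (d/2) log 2*pi - (1/2) log|H|."""
    n = len(y)
    theta_hat = np.sum(y) / (n + sigma2 / tau2)  # posterior mode (closed form here)
    H = n / sigma2 + 1.0 / tau2                  # negative Hessian of log joint
    d = 1                                        # dimension of theta
    return (log_joint(theta_hat, y, sigma2, tau2)
            + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.log(H))

def exact_evidence(y, sigma2, tau2):
    """Exact log marginal: y is jointly Gaussian with cov sigma2*I + tau2*11'."""
    n = len(y)
    C = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet + n * np.log(2 * np.pi))

y = np.array([0.3, -0.1, 0.8])
print(laplace_evidence(y, 1.0, 2.0), exact_evidence(y, 1.0, 2.0))
```

Because the log joint is exactly quadratic in $\theta$ for this model, the two numbers agree; for non-Gaussian posteriors the same formula gives an approximation, with $\hat\theta$ and $H$ supplied by whatever approximate inference scheme produced the Gaussian posterior.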

Saturday, February 4, 2017

System-size expansion and Gaussian moment closure for Quiescent-Active-Refractory model

Epilogue: These notes concern the system-size expansion and moment closure later published as "Neural field models for latent state inference: Application to large-scale neuronal recordings" [pdf] (more)

These notes derive the Kramers-Moyal system-size expansion and approximate equations for the evolution of the means and covariances of a single-compartment neural mass model with Quiescent (Q), Active (A), and Refractory (R) states. The derivations here are identical to the standard ones for the Susceptible-Infected-Recovered (SIR) model commonly used in epidemiology.
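To give a flavor of the SIR analogy, here is a rough sketch of the QAR reaction scheme and its first-moment equations. The rate symbols $\rho_a$, $\rho_q$ and the activation gain $f$ are placeholder notation of mine; the exact forms (and the spatial coupling) are in the notes.

```latex
% QAR mass-action scheme (placeholder rates):
\begin{align*}
Q \xrightarrow{\;f(A)\;} A, \qquad
A \xrightarrow{\;\rho_a\;} R, \qquad
R \xrightarrow{\;\rho_q\;} Q.
\end{align*}
% First-moment (mean-field) equations, structurally identical to SIR:
\begin{align*}
\partial_t \langle Q \rangle &= \rho_q \langle R \rangle - \langle f(A)\,Q \rangle, \\
\partial_t \langle A \rangle &= \langle f(A)\,Q \rangle - \rho_a \langle A \rangle, \\
\partial_t \langle R \rangle &= \rho_a \langle A \rangle - \rho_q \langle R \rangle.
\end{align*}
```

The nonlinear term $\langle f(A)\,Q\rangle$ does not reduce to a function of the means alone; the Gaussian moment closure approximates it in terms of the means and covariances, which is what couples the covariance equations to the mean dynamics.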

[get these notes as PDF]

Thursday, January 26, 2017

Optogenetic stimulation shifts the excitability of cerebral cortex from type I to type II

Our new paper, Heitmann et al. [get PDF], is finally out! It's a collaboration between the theoretical neuroscientists Stewart Heitmann and Bard Ermentrout at the University of Pittsburgh, and the Truccolo lab at Brown University. 

This work could help us understand what happens when we stimulate cerebral cortex in primates using optogenetics. Modeling how the brain responds to stimulation is important for learning how to use this new technology to control neural activity.

Optogenetic stimulation elicits gamma (~50 Hz) oscillations, the amplitude of which grows with the intensity of light stimulation. However, traveling waves moving away from the stimulation site also emerge. It's difficult to reconcile oscillatory and traveling-wave dynamics in neural field models, but Heitmann et al. arrive at a surprising and testable prediction:

The observed effects can be explained by paradoxical recruitment of inhibition at low levels of stimulation, which changes cortex from a wave-propagating medium to an oscillator. 

At higher stimulation levels, excitation overwhelms inhibition, giving rise to the observed gamma oscillations. 
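For readers unfamiliar with the type I / type II distinction in the title: type I excitability means the firing rate rises continuously from zero at the onset of firing, while type II means oscillations start at a nonzero frequency. A standard textbook illustration of type I (not the model used in the paper) is the quadratic integrate-and-fire neuron, whose f-I curve is analytic:

```python
import numpy as np

# The quadratic integrate-and-fire neuron dV/dt = V^2 + I has period
# T = pi / sqrt(I) for I > 0, so its firing rate is f(I) = sqrt(I)/pi:
# onset at arbitrarily low frequency, the hallmark of type I excitability.
# A type II system (e.g. a Hopf bifurcation) instead begins oscillating
# at a finite, nonzero frequency.

def qif_rate(I):
    """Analytic f-I curve of the QIF neuron (type I excitability)."""
    I = np.asarray(I, dtype=float)
    return np.where(I > 0, np.sqrt(np.maximum(I, 0.0)) / np.pi, 0.0)

currents = np.array([0.0, 1e-4, 1e-2, 1.0])
print(qif_rate(currents))  # rates rise continuously from zero
```

In this vocabulary, the paper's prediction is that stimulation moves cortex between these two regimes rather than merely scaling the response amplitude.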

Many thanks to Stewart Heitmann, Wilson Truccolo, and Bard Ermentrout. The paper can be cited as:

Heitmann, S., Rule, M., Truccolo, W. and Ermentrout, B., 2017. Optogenetic stimulation shifts the excitability of cerebral cortex from type I to type II: oscillation onset and wave propagation. PLoS computational biology, 13(1), p.e1005349.

Dissociation between sustained single-neuron spiking and transient β-LFP oscillations in primate motor cortex

Chapter two of my thesis has just been published! Rule et al. 2017 [PDF] explores the neurophysiology of beta (β) oscillations in primates, especially how single-neuron activity relates to population activity reflected in local field potentials (a.k.a. "brain waves").

Beta (~20 Hz) oscillations occur in frontal cortex. We've known about them for about a century, but still don't understand how they work or what they do. β-wave activity is related to "holding steady", so to speak. 

Beta oscillations are dysregulated in Parkinson's, in which movements are slowed or stopped. Beta oscillations are also reduced relative to slow-wave activity in ADHD, a disorder associated with motor restlessness and hyperactivity.

I looked at beta oscillations during movement preparation, where they seem to play a role in stabilizing a planned movement. I found that single neurons had very little relationship to the β-LFP brain waves. However! This appears to be for a good reason: the firing frequencies of neurons store information about the upcoming movement, and neurons firing at different frequencies cannot phase-lock together into a coherent population oscillation.

Anyone who's played in an orchestra knows that when notes are just slightly out of tune, you get interference patterns called beats. The same thing is happening in the brain, where many neurons firing at slightly different "pitches" cause β-LFP fluctuations, even though the underlying neural activity is constant.
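The beating effect is just a trigonometric identity. A minimal sketch (the two frequencies below are illustrative, chosen near the ~20 Hz β band): two constant-amplitude oscillations at nearby frequencies sum to a carrier at the mean frequency whose envelope waxes and wanes at the difference frequency.

```python
import numpy as np

f1, f2 = 19.5, 20.5            # Hz: two slightly detuned "neurons"
t = np.linspace(0, 2, 4000)    # seconds

# Sum of two constant-amplitude oscillations...
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# ...equals a 20 Hz carrier modulated by a 1 Hz envelope:
# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2)
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f2 - f1) / 2 * t)
assert np.allclose(x, envelope * carrier)
```

Neither component ever changes amplitude, yet the summed signal shows transient-looking bursts at the 1 Hz difference frequency, which is the proposed mechanism for apparent β-LFP transients during steady-state activity.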

This result provides a new explanation for how β-waves can appear as "transients" during motor steady-state: the fluctuations are caused by "beating", rather than by changes in the β activity of individual neurons. This differs from the prevailing theory for the origin of β transients in more posterior brain regions.

Many thanks to Carlos Vargas-Irwin, John Donoghue, and Wilson Truccolo. You can grab the PDF here. Please cite as

Rule, M.E., Vargas-Irwin, C.E., Donoghue, J.P. and Truccolo, W., 2017. Dissociation between sustained single-neuron spiking and transient β-LFP oscillations in primate motor cortex. Journal of neurophysiology, 117(4), pp.1524-1543.