Tuesday, September 5, 2017

Inferring unobserved neural field intensities from spiking observations

Edit: I am very happy to report that this work has now been published in PLoS Computational Biology.

I'll be presenting our ongoing work on merging neural field models with statistical inference at the Integrated Systems Neuroscience Workshop in Manchester, and at the Bernstein Conference in Göttingen. [get poster PDF]

What's exciting about this work is that it combines modelling principles from statistical physics with statistical inference. We start with a detailed microscopic model, then construct a second-order neural field model, which is used directly for statistical inference. Normally, neural field models are treated only as abstract, qualitative mathematical models, and are rarely integrated with data.


Video: Simulation of a 3-state Quiescent-Active-Refractory (blue, red, green) neural field model of the spontaneous retinal waves that occur during development. Waves are generated by the inner retina and drive retinal ganglion cell spiking, which we can observe on a high-density multi-electrode array. [get original AVI from GitHub]
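
For intuition, here is a minimal sketch of the kind of excitable dynamics involved (a simple lattice caricature, not the published model): a stochastic three-state Quiescent-Active-Refractory grid in which local excitation produces travelling waves. The grid size and all rates below are illustrative assumptions.

# Minimal sketch of a stochastic 3-state QAR lattice (illustrative, not the published model)
import numpy as np

Q, A, R = 0, 1, 2        # cell states: quiescent, active, refractory
N         = 128          # lattice side length (assumed)
P_SPONT   = 1e-4         # spontaneous Q -> A probability per step (assumed)
P_EXCITE  = 0.25         # excitation probability per active neighbour (assumed)
P_TIRE    = 0.4          # A -> R inactivation probability (assumed)
P_RECOVER = 0.05         # R -> Q recovery probability (assumed)

rng   = np.random.default_rng(0)
state = np.full((N, N), Q, dtype=np.int8)

def step(state):
    active = (state == A).astype(np.int8)
    # number of active 4-neighbours (periodic boundaries)
    n_act = (np.roll(active, 1, 0) + np.roll(active, -1, 0)
           + np.roll(active, 1, 1) + np.roll(active, -1, 1))
    # probability that a quiescent cell becomes active this step
    p_fire = np.clip(1.0 - (1.0 - P_EXCITE) ** n_act + P_SPONT, 0.0, 1.0)
    u   = rng.random(state.shape)
    new = state.copy()
    new[(state == Q) & (u < p_fire)]    = A   # excitation
    new[(state == A) & (u < P_TIRE)]    = R   # inactivation
    new[(state == R) & (u < P_RECOVER)] = Q   # recovery
    return new

frames = [state]
for t in range(500):
    state = step(state)
    frames.append(state)   # render e.g. with matplotlib.animation to make a video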

Friday, September 1, 2017

Population coding of sensory stimuli through latent variables

Edit: This work has now been published in Entropy [PDF].

Martino Sorbaro has been doing some really interesting work exploring the encoding strategies learned by artificial neural networks. We've found similarities between the statistics of the population codes learned by Restricted Boltzmann Machines (RBMs) and those of the retina. We'll present this work as a poster at the upcoming Integrated Systems Neuroscience Workshop in Manchester.

TL;DR:

RBMs as a model for latent-variable encoding

  • Optimal latent-variable encoding of visual stimuli seems to consistently yield models near statistical criticality.  
  • Poor fits (too few hidden units, i.e. under-fitting) do not exhibit this property.
  • Critical RBMs mimic the retina in Zipf laws, sparsity, and decorrelation (see the sketch after this list).
  • Above the optimal model size, extra units are weakly constrained as measured by Fisher information.  
  • Receptive fields of excess units are less retina-like.
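
As a minimal sketch of this kind of analysis (not the analysis code from the poster): fit a Bernoulli RBM to binarized patches and inspect the sparsity and rank-frequency (Zipf) statistics of its hidden-state patterns. The data here are random stand-ins; in the real analysis the inputs are visual stimuli, and all sizes and training settings below are illustrative assumptions.

# Minimal RBM sketch (illustrative assumptions throughout)
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((5000, 64)) > 0.5).astype(float)    # 5000 binary 8x8 "patches" (stand-in stimuli)

rbm = BernoulliRBM(n_components=32, learning_rate=0.05,
                   batch_size=64, n_iter=20, random_state=0)
rbm.fit(X)

# Binary hidden code for each stimulus (threshold the activation probabilities)
h = (rbm.transform(X) > 0.5).astype(int)
print("sparsity (mean fraction of active hidden units):", h.mean())

# Zipf check: rank-ordered frequencies of the distinct hidden-state patterns.
# For a Zipf-like (1/f) code, log(frequency) vs log(rank) is roughly linear.
_, counts = np.unique(h, axis=0, return_counts=True)
freq = np.sort(counts)[::-1] / counts.sum()
rank = np.arange(1, len(freq) + 1)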

Questions and controversy

  • Is statistical criticality a general feature of factorized latent variable models?
  • Is criticality in the retina expected based simply on optimal encoding?

[download poster PDF]


Abstract:

Several studies observe power-law statistics consistent with critical scaling exponents in neural data, but it is unclear whether such statistics necessarily imply criticality. In this work, we examine whether the 1/f statistics of retinal populations are inherited from visual stimuli, or whether they might emerge from collective neural dynamics independently of stimulus statistics. We examine, in silico, a latent-variable encoding model of visual scenes, and empirically explore the conditions under which such a model exhibits the 1/f statistics thought to reflect criticality. Specifically, we examine Restricted Boltzmann Machines (RBMs) as a factorized binary latent-variable model for stimulus encoding. We find two surprising results. First, latent-variable models need not exhibit 1/f statistics, but the optimal model size, reflecting the smallest model that can faithfully encode the stimuli, does. We illustrate that the optimal model size can be predicted from the sloppy dimensions of the Fisher information matrix (FIM), which align with a subspace spanning the superfluous latent variables. Second, the optimal-sized model can exhibit 1/f statistics even when the stimuli do not, indicating that this property is not inherited from environmental statistics. Furthermore, such models exhibit properties of statistical criticality, including diverging susceptibilities. This empirical evidence suggests that 1/f statistics are neither inherited from the environment nor a necessary feature of accurate encoding. Rather, it suggests that parsimonious latent-variable models are naturally poised close to criticality, generating the observed 1/f statistics. Overall, these results are consistent with conjectures in other fields that a cost-benefit trade-off between expressivity and parsimony underlies the emergence of criticality and 1/f power-law statistics. Furthermore, this work suggests that in latent-variable encoding models, the emergence of 1/f statistics reflects true criticality and is not inherited from the environmental distribution of stimuli.
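
As a rough illustration of the sloppiness argument (not the exact procedure in the paper): assuming the fitted model matches the data distribution, the Fisher information block for an RBM's hidden biases is approximately the covariance, across stimuli, of the conditional hidden activation probabilities. Near-zero eigenvalues then mark sloppy, weakly constrained directions, i.e. candidate superfluous hidden units.

# Rough sketch of an empirical FIM estimate (hidden-bias block only; illustrative)
import numpy as np

def hidden_bias_fim(rbm, X):
    """Approximate FIM block for the hidden biases of a fitted BernoulliRBM."""
    p_h = rbm.transform(X)             # E[h_j | v] for each stimulus v
    return np.cov(p_h, rowvar=False)   # covariance across stimuli

# Reusing the rbm and X from the sketch above:
# spectrum = np.linalg.eigvalsh(hidden_bias_fim(rbm, X))[::-1]
# A broad spectrum with many near-zero eigenvalues indicates sloppy dimensions.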

The poster can be cited as:

Sorbaro, M., Rule, M., Hilgen, G., Sernagor, E., & Hennig, M. H. (2017) Signatures of optimal population coding of sensory stimuli through latent variables. [Poster] The Second Integrated Systems Neuroscience Workshop, 7–8 September 2017, at The University of Manchester, Manchester, UK.

Edit: the paper can be cited as

Rule, M.E., Sorbaro, M. and Hennig, M.H., 2020. Optimal encoding in stochastic latent-variable models. Entropy, 22(7), p.714.