Wednesday, October 31, 2012

Sometimes the spike-triggered average is just as good as Poisson GLMs for spike-train analysis

(hopefully no major mistakes in this one; get PDF here)

The Poisson GLM for spiking data

Generalized Linear Models (GLMs) are similar to linear regression, but account for nonlinearities and non-uniform noise in the observations. In neuroscience, it is common to predict a sequence of spikes $Y=\{y_1,..,y_T\}$, $y_i\in\{0,1\}$, from a series of observations $X=\{\mathbf x_1,..,\mathbf x_T\}$, using a Poisson GLM:

$$ \begin{aligned} y_i &\sim \operatorname{Poisson}(\lambda_i\cdot\Delta t) \\ \lambda_i &= \exp\left( \mathbf a^\top \mathbf x_i + b \right) \end{aligned} $$

These models are fit by minimizing the negative log-likelihood of the observations, given the vector of regression weights $\mathbf a$ and mean offset parameter $b$ (for $y_i\in\{0,1\}$ the $\log(y_i!)$ term vanishes, leaving only a constant in the parameters):

$$ -\log \Pr(Y \mid \mathbf a, b) = \sum_{i=1}^T \left[ \lambda_i \Delta t - y_i \log\left( \lambda_i \Delta t \right) \right] + \text{const.} $$
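As a concrete sketch, the negative log-likelihood can be written down and handed to a generic numerical optimizer. The function names and simulated data here are illustrative, not from the original notes:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, X, y, dt):
    # -log P(y | a, b), dropping terms constant in (a, b):
    #   sum_i [ lambda_i*dt - y_i*log(lambda_i) ]
    a, b = params[:-1], params[-1]
    log_rate = X @ a + b                 # log lambda_i
    return np.sum(np.exp(log_rate) * dt - y * log_rate)

def fit_poisson_glm(X, y, dt):
    p0 = np.zeros(X.shape[1] + 1)        # initial guess: a = 0, b = 0
    res = minimize(neg_log_likelihood, p0, args=(X, y, dt))
    return res.x[:-1], res.x[-1]         # (a_hat, b_hat)
```

Because the Poisson log-likelihood with an exponential nonlinearity is concave in $(\mathbf a, b)$, any local optimizer recovers the global maximum-likelihood estimate.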

Wednesday, August 1, 2012

Impact of fast and slow neuronal variability in output-null dimensions on motor decoding

[get these notes as PDF]

Output-null spaces in motor control

The idea of a null-space extends the notion of selectivity and invariance to motor cortex (Kaufman et al. 2010). Rather than asking what stimuli change the firing rate of a neuron (and what stimulus changes it is invariant to), we ask what neural activity drives movement (and what activity does not). The subspace in which neural activity in motor cortex can vary without changing behavior is called the "output-null" space.

Variability in the output-null space explains neural variability not related to observed behavior. This residual variability may contain components related to unmeasured behavior, internal neural processing, and noise. Given observations of behavior $X$ and output-null activity $Z$, the neural covariates $Y$ are determined. Neural variability thus factors into variability induced by behavior and variability confined to the output-null space.
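As a toy illustration (not from the original notes): if behavior is assumed to be read out linearly from neural activity, $x = W y$, then the output-null space is simply the null space of the hypothetical readout matrix $W$, and any activity pattern splits into output-potent and output-null components:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)

n_neurons, n_behavior = 10, 2
W = rng.standard_normal((n_behavior, n_neurons))  # hypothetical linear readout: x = W @ y

# Output-null space: directions of neural activity that W maps to zero
N = null_space(W)                  # (n_neurons, n_neurons - n_behavior), orthonormal columns

y = rng.standard_normal(n_neurons)           # one sample of neural activity
y_null = N @ (N.T @ y)                       # projection onto the output-null space
y_potent = y - y_null                        # output-potent component

# The output-null component has no effect on the behavioral readout:
print(np.allclose(W @ y_null, 0))            # True
print(np.allclose(W @ y, W @ y_potent))      # True
```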

Wednesday, May 16, 2012

Note: Differentiating expectations of a function of a random variable with respect to location and scale parameters

Consider a real-valued random variable with a known probability distribution $\Pr(z) = \phi(z)$. From $\phi(z)$, one can generate a scale/location family of probability densities by scaling and shifting $\phi(z)$:

$$ \Pr\left( x ; \mu, \sigma \right) = \frac 1 \sigma \phi \left( \frac {x - \mu}{\sigma} \right) $$

The most familiar example of such a family is the univariate Gaussian distribution, when $\phi(z) = [ 2\pi]^{-1/2}\exp\left(-\tfrac 1 2 z^2\right)$. Now, consider the expectation $\langle f(x) \rangle$ of a function $f$ with respect to $\Pr\left( x ; \mu, \sigma \right)$.

$$ \langle f(x) \rangle = \int_{\mathbb R} f(x) \Pr(x) dx = \int_{\mathbb R} f(x) \frac 1 \sigma \phi\left(\frac{x-\mu}{\sigma}\right) dx $$

What are the derivatives of $\langle f(x) \rangle$ with respect to $\mu$ and $\sigma^2$? The answers are:

$$ \begin{aligned} \partial_\mu \langle f(x) \rangle &= \langle f'(x) \rangle \\ \partial_{\sigma^2} \langle f(x)\rangle &= \tfrac1{2\sigma^2} \left\langle (x-\mu) f'(x) \right\rangle \end{aligned} $$
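These identities are easy to check numerically. The sketch below uses the Gaussian case with $f(x)=\sin(x)$, comparing finite-difference derivatives of a quadrature estimate of $\langle f(x)\rangle$ against the right-hand sides above (the particular test function and parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def expect(g, mu, sigma):
    # <g(x)> under the Gaussian N(mu, sigma^2), by numerical quadrature
    pdf = lambda x: np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
    val, _ = quad(lambda x: g(x) * pdf(x), mu - 10 * sigma, mu + 10 * sigma,
                  epsabs=1e-12, epsrel=1e-12)
    return val

f = np.sin           # test function f(x)
df = np.cos          # its derivative f'(x)

mu, sigma, eps = 0.3, 1.2, 1e-3

# d<f>/dmu versus <f'(x)>
lhs_mu = (expect(f, mu + eps, sigma) - expect(f, mu - eps, sigma)) / (2 * eps)
rhs_mu = expect(df, mu, sigma)

# d<f>/dsigma^2 versus (1/(2 sigma^2)) <(x - mu) f'(x)>
s2 = sigma**2
lhs_s2 = (expect(f, mu, np.sqrt(s2 + eps)) - expect(f, mu, np.sqrt(s2 - eps))) / (2 * eps)
rhs_s2 = expect(lambda x: (x - mu) * df(x), mu, sigma) / (2 * s2)
```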

Saturday, January 14, 2012

Bayesian approach to point-process generalized linear models

I've been learning about generalized linear models and Bayesian approaches for doing statistics on spike-train data in the Truccolo lab. Here are some notes on the subject.

[get notes as a PDF]

In neuroscience, we are interested in the problem of how neurons encode, process, and communicate information. Neurons communicate over long distances using brief all-or-nothing events called spikes. We are often interested in how the spiking rate of a neuron depends on other variables, such as stimuli, motor output, or other ongoing signals in the brain. 

To model this, we consider spikes as events that occur at a point in time with an underlying variable rate, or conditional intensity, $\lambda$. There are many approaches to estimating $\lambda$. These notes cover point-process generalized linear models, and Bayesian approaches. These are closely related, and in some cases the same thing.
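A minimal way to make this concrete is to simulate spikes from a given conditional intensity: in a small bin of width $\Delta t$, the probability of a spike is approximately $\lambda(t)\,\Delta t$. The intensity function and parameters below are illustrative choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001                                 # 1 ms bins
t = np.arange(0.0, 2.0, dt)                # 2 seconds of time
lam = 20 + 15 * np.sin(2 * np.pi * t)      # conditional intensity in Hz (illustrative)

# Bernoulli approximation: in each small bin, P(spike) ~= lambda(t) * dt
spikes = rng.random(t.size) < lam * dt

# Expected total count is the integral of lambda, here ~40 over the 2 s window
print(spikes.sum())
```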