Pitkow et al., Neuron, 87, 411-423, 2015
A couple of weeks ago I presented Xaq Pitkow et al.’s paper examining the convoluted relationship between choice probabilities (CP), information-limiting correlations (ILC), and suboptimal coding.
On July 7, we discussed Partitioning Neural Variability by Goris et al. In this paper, the authors seek to isolate the portion of the variability of sensory neurons that comes from non-sensory sources such as arousal or attention. To partition the variability in a principled way, the authors propose a “modulated Poisson framework” for spiking neurons, in which a neuron produces spikes according to a Poisson process whose mean rate is the product of a stimulus-driven component and a stimulus-independent ‘gain’ term (G).
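As a toy illustration of this framework, here is a minimal simulation (assuming, purely for illustration, a gamma-distributed gain with unit mean; the stimulus-driven rate value `f_s` and the gain variance are made up) showing how multiplicative gain fluctuations produce super-Poisson variability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus-driven rate (expected spikes per counting window) -- hypothetical value
f_s = 10.0
# Gain fluctuations: gamma variable with mean 1 and variance sigma_G2 (assumed)
sigma_G2 = 0.25
shape = 1.0 / sigma_G2        # gamma shape
scale = sigma_G2              # gamma scale, so mean = shape*scale = 1

n_trials = 200_000
G = rng.gamma(shape, scale, size=n_trials)   # trial-by-trial gain
counts = rng.poisson(G * f_s)                # modulated Poisson counts

# Under this model, var(N) = f_s + sigma_G2 * f_s**2, i.e. super-Poisson:
print(counts.mean())    # ~ f_s = 10
print(counts.var())     # ~ 10 + 0.25 * 100 = 35
```

A plain Poisson neuron at the same mean rate would have variance equal to its mean (10 here); the excess variance comes entirely from the fluctuating gain.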
We discussed state dependence of noise correlations in macaque primary visual cortex today. Noise correlation quantifies the covariability in spike counts between neurons (it’s called noise correlation because the signal (stimulus-driven) component has been subtracted out). In a 2010 Science paper, noise correlations were reported to be much smaller than previously found, around 0.01 compared to the usual 0.1-0.2 range, which stirred up the field (see the literature for a list of reported values). In this paper, they argue that this difference in noise correlation magnitude is due to population-level covariations during anesthesia (they used sufentanil).
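A minimal sketch of how noise correlation is typically computed: within each stimulus condition, subtract the condition-mean response (the signal component) and correlate the residual fluctuations across trials. The toy data and parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_correlation(counts_a, counts_b, stimulus_ids):
    """Correlate trial-to-trial count fluctuations after removing the
    stimulus-driven (signal) component: within each stimulus condition,
    subtract the condition mean, then correlate the residuals."""
    resid_a = np.empty_like(counts_a, dtype=float)
    resid_b = np.empty_like(counts_b, dtype=float)
    for s in np.unique(stimulus_ids):
        idx = stimulus_ids == s
        resid_a[idx] = counts_a[idx] - counts_a[idx].mean()
        resid_b[idx] = counts_b[idx] - counts_b[idx].mean()
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Toy data: two neurons sharing a weak common noise source
n_trials = 5000
stim = rng.integers(0, 4, size=n_trials)      # 4 stimulus conditions
tuning = np.array([5.0, 10.0, 15.0, 20.0])    # shared tuning curve (made up)
shared = rng.normal(0, 1, size=n_trials)      # common fluctuation
a = rng.poisson(tuning[stim] * np.exp(0.1 * shared))
b = rng.poisson(tuning[stim] * np.exp(0.1 * shared))
print(noise_correlation(a, b, stim))          # small positive value
```

Because the condition means are subtracted first, tuning-curve overlap (signal correlation) does not contaminate the estimate; only shared trial-to-trial variability survives.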
Yesterday marked the start of the 2014 summer course in COMPUTATIONAL NEUROSCIENCE: VISION at Cold Spring Harbor. The course was founded in 1985 by Tony Movshon and Ellen Hildreth, with the goal of inspiring new generations of students to address problems at the intersection of vision, computation, and the brain. The list of past attendees is impressive.
This week we discussed a recent paper from Anne Churchland and colleagues:
Variance as a Signature of Neural Computations during Decision Making,
A. K. Churchland, R. Kiani, R. Chaudhuri, X.-J. Wang, A. Pouget, & M. N. Shadlen. Neuron 69(4):818-831 (2011).
This paper examines the variance of spike counts in area LIP during the “random dots” decision-making task. While much has been made of (trial-averaged) spike rates in these neurons (specifically, the tendency to “ramp” linearly during decision-making), little has been made of their variability.
The paper’s central goal is to divide the net spike count variance (measured in 60ms bins) into two fundamental components, in accordance with a doubly stochastic modulated renewal model of the response. We can formalize this as follows: let $x$ denote the external (“task”) variables on a single trial (motion stimulus, saccade direction, etc), let $\lambda$ denote the time-varying (“command”) spike rate on that trial, and let $N$ represent the actual (binned) spike counts. The model specifies the final distribution over spike counts in terms of two underlying distributions (hence “doubly stochastic”): $P(N|x) = \int P(N|\lambda)\,P(\lambda|x)\,d\lambda$.
So we can think of this as a kind of “cascade” model: $x \rightarrow \lambda \rightarrow N$, where each of those arrows implies some kind of noisy encoding process.
The law of total variance states essentially that the total variance of $N$ is the sum of the “rate” variance, $\mathrm{var}(E[N|\lambda])$, and the average point-process variance, $E[\mathrm{var}(N|\lambda)]$, averaged across $\lambda$: $\mathrm{var}(N) = \mathrm{var}(E[N|\lambda]) + E[\mathrm{var}(N|\lambda)]$. Technically, the first of these (the quantity of interest here) is called the “variance of the conditional expectation” (or varCE, as it says on the t-shirt); this terminology comes from the fact that $E[N|\lambda]$ is the conditional expectation of $N$, and we’re interested in its variability, or $\mathrm{var}(E[N|\lambda])$. The approach taken here is to assume that the spiking process is governed by a (modulated) renewal process, meaning that there is a linear relationship between the mean and the variance of $N$ given $\lambda$. That is, $\mathrm{var}(N|\lambda) = \phi\, E[N|\lambda]$. For a Poisson process, we would have $\phi = 1$, since variance is equal to mean.
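The decomposition above can be checked in a quick simulation (with made-up numbers, and $\phi = 1$ for a Poisson point process): subtracting $\phi$ times the mean count from the total count variance recovers the variance of the underlying rate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Doubly stochastic counts: the rate lambda varies across trials,
# and spiking is Poisson given lambda (so phi = 1).
n_trials = 200_000
mean_rate, rate_sd = 20.0, 4.0                # counts per bin (illustrative)
lam = np.maximum(rng.normal(mean_rate, rate_sd, n_trials), 0.0)
N = rng.poisson(lam)

# Law of total variance: var(N) = var(E[N|lam]) + E[var(N|lam)]
#                               = varCE        + phi * E[N]
phi = 1.0                                      # Poisson: variance = mean
varCE_hat = N.var() - phi * N.mean()
print(varCE_hat)    # ~ rate_sd**2 = 16
```

Here the true rate variance is $4^2 = 16$ counts$^2$, and the estimator recovers it without ever observing $\lambda$ directly, which is exactly what makes the varCE usable on real spike trains.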
The authors’ approach to data analysis in this paper is therefore to estimate $\phi$ from the data and subtract off the point-process variance, yielding the estimate $\mathrm{varCE} = \mathrm{var}(N) - \phi\, \bar{N}$.
The take-home conclusion is that the variance of $\lambda$ (i.e., the varCE) is consistent with $\lambda$ evolving according to a drift-diffusion model (DDM): it grows linearly with time, which is precisely the prediction of the DDM (aka “diffusion to bound” or “bounded accumulator” model, equivalent to a Wiener process plus linear drift). This rules out several competing models of LIP responses (e.g., a time-dependent scaling of i.i.d. Gaussian response noise), but is roughly consistent with both the population coding framework of Pouget et al (‘PPC’) and a line attractor model from XJ Wang. (This sheds some light on the otherwise miraculous confluence of authors on this paper, for which Anne surely deserves high diplomatic honors).
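The linear-variance prediction is easy to see in a simulation of an unbounded drift-diffusion process (the absorbing bound is omitted here for simplicity, so this captures only the early part of the trial; all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbounded drift-diffusion: Wiener process plus linear drift.
n_trials, n_steps, dt = 20_000, 100, 0.001     # 100 ms in 1 ms steps
drift, sigma = 5.0, 1.0                        # arbitrary parameters
increments = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_trials, n_steps))
x = increments.cumsum(axis=1)                  # decision variable over time

var_t = x.var(axis=0)                          # variance across trials
t = dt * np.arange(1, n_steps + 1)
# Prediction: var(x_t) = sigma**2 * t, i.e. variance grows linearly in time
slope = np.polyfit(t, var_t, 1)[0]
print(slope)    # ~ sigma**2 = 1.0
```

The drift shifts the mean but leaves the across-trial variance untouched; the variance comes only from the accumulated diffusion noise, so it grows as $\sigma^2 t$, matching the varCE signature reported in the paper.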
To sum up, the paper presents a nice analysis of spike count variance in LIP responses and a cute application of the law of total variance. The doubly stochastic point process model of LIP responses seems ripe for further analysis.