Quantifying the effect of intertrial dependence on perceptual decisions

Today we discussed a paper on sequential effects in psychophysics by Fründ et al. Although inter-trial dependencies are known to exist, psychophysical responses are typically modeled as independent Bernoulli observations. One way to account for the effect of previous trial outcomes is to use logistic regression with terms that represent previous stimuli or responses (Busse et al.). Fründ and colleagues extend this approach by including a lapse rate, which captures responses on trials where the subject was not doing the task.
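To make the idea concrete, here is a minimal sketch (my own illustration, not the authors' code) of a history-augmented logistic regression with a symmetric lapse term, fit by maximum likelihood. The particular history features, the lam/2 lapse parameterization, and the toy data are all assumptions for illustration.

```python
# Sketch: logistic regression with history terms and a lapse rate.
# Assumptions (not from the paper): binary responses r in {0,1}, signed stimulus
# values, history features from the previous trial only, symmetric lapses.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

def build_design(stim, resp):
    """Design matrix: bias, current stimulus, previous stimulus, previous response."""
    prev_stim = np.r_[0.0, stim[:-1]]
    prev_resp = np.r_[0.0, 2 * resp[:-1] - 1]   # previous response coded as -1/+1
    return np.column_stack([np.ones_like(stim), stim, prev_stim, prev_resp])

def neg_log_lik(params, X, r):
    """Bernoulli NLL with lapses: p = lam/2 + (1 - lam) * sigmoid(X @ w)."""
    w, lam = params[:-1], expit(params[-1])     # keep the lapse rate in (0, 1)
    p = lam / 2.0 + (1.0 - lam) * expit(X @ w)
    p = np.clip(p, 1e-9, 1 - 1e-9)              # numerical safety
    return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

# toy data: 500 trials with responses driven only by the current stimulus
rng = np.random.default_rng(0)
stim = rng.uniform(-1, 1, 500)
resp = (3 * stim + rng.normal(0, 1, 500) > 0).astype(float)
X = build_design(stim, resp)
fit = minimize(neg_log_lik, x0=np.zeros(X.shape[1] + 1), args=(X, resp))
print("weights:", fit.x[:-1], "lapse rate:", expit(fit.x[-1]))
```

With the history weights estimated jointly with the lapse rate, one can ask how much of the apparent "lapsing" is really explained by dependence on previous trials.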

Continue reading

Partitioning Neural Variability

On July 7, we discussed Partitioning Neural Variability by Goris et al. In this paper, the authors seek to isolate the portion of the variability of sensory neurons that comes from non-sensory sources such as arousal or attention. In order to partition the variability in a principled way, the authors propose a “modulated Poisson framework” for spiking neurons, in which a neuron produces spikes according to a Poisson process whose mean rate is the product of a stimulus-driven component f(S) and a stimulus-independent ‘gain’ term G.
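As a toy illustration (my own sketch, not the paper's code), the modulated Poisson model can be simulated by drawing the gain G from a gamma distribution with unit mean; doubly stochastic spiking of this kind predicts super-Poisson variance of the form f + sigma_G^2 * f^2. The gain variance and rate below are arbitrary values chosen for illustration.

```python
# Sketch of the modulated Poisson idea: counts ~ Poisson(G * f(S)),
# with gain G gamma-distributed with mean 1 and variance sigma_G^2.
import numpy as np

rng = np.random.default_rng(1)
sigma_G2 = 0.25          # hypothetical gain variance
f_S = 10.0               # hypothetical stimulus-driven rate (spikes per trial)
n_trials = 100_000

# gamma with mean 1 and variance sigma_G2: shape = 1/sigma_G2, scale = sigma_G2
G = rng.gamma(shape=1.0 / sigma_G2, scale=sigma_G2, size=n_trials)
counts = rng.poisson(G * f_S)

print("empirical mean:    ", counts.mean())
print("empirical variance:", counts.var())
print("predicted variance:", f_S + sigma_G2 * f_S**2)   # super-Poisson
```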

Continue reading

Noise correlation in V1: 1D-dynamics explains differences between anesthetized and awake

We discussed the state dependence of noise correlations in macaque primary visual cortex [1] today. Noise correlation quantifies the covariability in spike counts between neurons (it is called noise correlation because the stimulus-driven component has been subtracted out). A 2010 Science paper [2] reported noise correlations much smaller than previously found (around 0.01, compared to the usual 0.1–0.2 range), which stirred up the field (see [3] for a list of values). In this paper, the authors argue that the difference in noise correlation magnitude is due to population-level covariations during anesthesia (they used sufentanil).
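As a rough sketch of how noise correlation is typically computed (my own illustration, not the analysis code from [1]): normalize spike counts within each stimulus condition so that the stimulus-driven component is removed, then correlate the residuals across trials.

```python
# Sketch: noise correlation between two neurons from trial-by-trial spike counts.
import numpy as np

def noise_correlation(counts_a, counts_b, stim_ids):
    """counts_a, counts_b: spike counts per trial for two neurons;
    stim_ids: stimulus condition label for each trial."""
    resid_a, resid_b = [], []
    for s in np.unique(stim_ids):
        idx = stim_ids == s
        # z-score within condition so only trial-to-trial "noise" remains
        resid_a.append((counts_a[idx] - counts_a[idx].mean()) / counts_a[idx].std())
        resid_b.append((counts_b[idx] - counts_b[idx].mean()) / counts_b[idx].std())
    return np.corrcoef(np.concatenate(resid_a), np.concatenate(resid_b))[0, 1]
```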

Continue reading

Lab Meeting 2/11/2013: Spectral Learning

Last week I gave a brief introduction to Spectral Learning. This is a topic I’ve been wanting to know more about since reading the (very cool) NIPS paper from Lars Buesing, Jakob Macke and Maneesh Sahani, which describes a spectral method for fitting latent variable models of multi-neuron spiking activity (Buesing et al. 2012). I presented some of the slides (and a few video demos) from the spectral learning tutorial organized by Geoff Gordon, Byron Boots, and Le Song at AISTATS 2012.

So, what is spectral learning, and what is it good for?  The basic idea is that we can fit latent variable models very cheaply using just the eigenvectors of a covariance matrix, computed directly from the observed data. This stands in contrast to “standard” maximum likelihood methods (e.g., EM or gradient ascent), which require expensive iterative computations and are plagued by local optima.
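As a toy illustration of the idea (under my own simplifying assumptions: linear-Gaussian observations rather than the Poisson spike counts handled in Buesing et al. 2012), here is a subspace-identification style sketch: estimate time-lagged covariances of the observations, stack them into a block-Hankel matrix, and read a latent linear dynamical system off its SVD, with no iterative optimization.

```python
# Sketch of a spectral / subspace method for a latent linear dynamical system:
#   x_{t+1} = A x_t + noise,  y_t = C x_t + noise.
# The lagged output covariances factor through the observability matrix, so an
# SVD of the Hankel matrix of covariances recovers A and C up to a linear map.
import numpy as np

def spectral_lds(Y, dim, lags=3):
    """Y: (T, n) observations; dim: latent dimensionality; returns A, C estimates."""
    T, n = Y.shape
    Yc = Y - Y.mean(axis=0)
    # lagged covariance estimates Lambda_k = Cov(y_{t+k}, y_t)
    Lam = [Yc[k:].T @ Yc[:T - k] / (T - k) for k in range(2 * lags)]
    # block-Hankel matrix with blocks Lambda_{i+j+1}
    H = np.block([[Lam[i + j + 1] for j in range(lags)] for i in range(lags)])
    U, s, Vt = np.linalg.svd(H)
    # observability matrix O = [C; CA; CA^2; ...] up to a similarity transform
    O = U[:, :dim] * np.sqrt(s[:dim])
    C = O[:n]
    A = np.linalg.pinv(O[:-n]) @ O[n:]   # exploit the shift-invariance of O
    return A, C
```

The one-shot SVD replaces the E and M iterations entirely; the price is that the estimates are only consistent under the model's assumptions and are recovered up to an invertible transform of the latent space.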

Continue reading