On July 28th, I presented the following paper in lab meeting:
- Efficient and direct estimation of a neural subunit model for sensory coding,
Vintch, Zaharia, Movshon, & Simoncelli, NIPS 2012
This paper proposes a new method for characterizing the multi-dimensional stimulus selectivity of sensory neurons. The main idea is that, instead of thinking of neurons as projecting high-dimensional stimuli into an arbitrary low-dimensional feature space (the view underlying characterization methods like STA, STC, iSTAC, GQM, MID, and all their rosy-cheeked cousins), it might be more useful / parsimonious to think of neurons as performing a projection onto convolutional subunits. That is, rather than characterizing stimulus selectivity in terms of a bank of arbitrary linear filters, it might be better to consider a subspace defined by translated copies of a single linear filter.
Fig 1 illustrates the rationale rather beautifully:
Panel (a) shows a bank of subunit filters: translated copies of a single filter (here scaled by the weighting of each subunit). The output of each subunit filter passes through a point nonlinearity, and the resulting values are summed to form the neural output. Panel (b) gives the corresponding geometric picture: the arrows labeled $k_1, \ldots, k_4$ indicate vectors defined by the subunit filter shifted to positions 1, …, 4, respectively. Note that these vectors lie on a hypersphere, since translations do not change the length of a vector. The span of these vectors defines a multi-dimensional space of stimuli that will excite the neuron (assuming their projection is positive, so perhaps we should say half-space). But if we were to attempt to characterize this space using spike-triggered covariance analysis (panel c), we would obtain a basis defined by a collection of orthonormal vectors. (The slightly subtle point being made here is that the number of vectors we can recover grows with the size of the dataset; so in the bottom plot, with a large enough number of datapoints, there are four “significant” eigenvalues corresponding to a four-dimensional subspace.) The four eigenvectors $e_1, \ldots, e_4$ span the same subspace as $k_1, \ldots, k_4$, but they have been orthogonalized, and their relationship to the original subunit filter shape is obscured.
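To make the point of panel (c) concrete, here is a minimal simulation in the spirit of the figure. Everything in it (the 8-tap Gaussian-bump filter, 16-dimensional white-noise stimuli, halfwave-squared subunits, Poisson spiking) is a toy assumption of mine, not the paper's actual setup; the punchline is that STC yields roughly four significant eigenvalues, but eigenvectors that mix the shifted filters rather than recovering any one of them.

```python
# Toy demo: a neuron built from 4 shifted copies of one subunit filter yields
# a 4-dimensional STC subspace, but the recovered eigenvectors are
# orthonormalized mixtures of the shifted filters. All settings below are
# illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                          # stimulus dimensionality (assumed)
flt = np.exp(-0.5 * ((np.arange(8) - 3.5) / 1.5) ** 2)  # toy 8-tap subunit filter
flt /= np.linalg.norm(flt)

# Bank of 4 shifted copies of the same filter (rows of K), as in panel (a).
K = np.zeros((4, D))
for i in range(4):
    K[i, 2 * i : 2 * i + 8] = flt

X = rng.standard_normal((200_000, D))           # white-noise stimuli
drive = np.maximum(K @ X.T, 0.0) ** 2           # halfwave-squared subunit outputs
rate = drive.sum(axis=0)                        # summed over subunits
spikes = rng.poisson(rate / rate.mean())        # Poisson spike counts, mean ~1

# Spike-triggered average and covariance.
sta = (spikes @ X) / spikes.sum()
Xc = X - sta
stc = (Xc.T * spikes) @ Xc / spikes.sum()
evals, evecs = np.linalg.eigh(stc)              # eigenvalues in ascending order

print(np.round(evals[-6:], 2))                  # top ~4 eigenvalues stand out
# Each top eigenvector has sizable projections onto SEVERAL shifted filters:
print(np.round(evecs[:, -4:].T @ K.T, 2))
```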
If you think about it, this picture is devastating for STC analysis. The subunit model gives rise to a 4-dimensional feature space with a single filter shifted to 4 different locations (which requires only $n$ parameters for a filter with $n$ coefficients), whereas STC analysis requires four distinct filters ($4n$ parameters). But the real calamity comes in trying to estimate the nonlinear mapping from feature space to spike rate. For STC analysis, we need to estimate an arbitrary nonlinear function in 4 dimensions. That is, we need to go into the 4-dimensional space spanned by filters $e_1, \ldots, e_4$ and figure out how much the neuron spikes for each location in that 4D hypercube. In the subunit model, by contrast, the nonlinear behavior is assumed to arise from a (rectifying) scalar nonlinearity applied to the output of each subunit filter. We need only a single 1D nonlinearity and a set of 4 weights.
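To put rough numbers on this (my own illustrative figures, not the paper's): suppose the stimulus has $n$ dimensions and we represent each scalar nonlinearity with $m$ histogram bins. The two approaches then cost roughly

$$ \underbrace{n}_{\text{one filter}} + \underbrace{4}_{\text{weights}} + \underbrace{m}_{\text{1D nonlinearity}} \quad \text{vs.} \quad \underbrace{4n}_{\text{four filters}} + \underbrace{m^4}_{\text{4D nonlinearity}} $$

parameters. For $n = 256$ and $m = 25$, that is about 285 parameters for the subunit model versus over 390,000 for the STC-style model, with the $m^4$ term (the curse of dimensionality) doing nearly all the damage.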
So clearly, if a neuron’s response is governed by a linear combination of nonlinear subunits, it’s crazy to try to characterize it with STC analysis.
This idea is not entirely new, of course. Nicole Rust’s 2005 paper suggested that the derivative-like filters revealed by spike-triggered covariance analysis of V1 responses might be a signature of nonlinear subunits (Rust et al, Neuron 2005). Kanaka Rajan & Bill Bialek made a related observation about STC analysis (Rajan & Bialek 2013), noting that many of the simple computations we might expect a neuron to perform (e.g., motion selectivity) lead to relatively high-dimensional feature spaces (as opposed to the low-dimensional subspaces expected by STC). Tatyana Sharpee’s lab has introduced a method for characterizing translation-invariant receptive fields inspired by the maximally-informative-dimensions estimator (Eickenberg et al 2012). And of course, the idea of nonlinear subunits is much older, going back at least to 1960s work on the retina (e.g., Barlow & Levick 1965).
The Vintch et al paper describes a particular method for characterizing the subunit model, which they formulate as a linear-nonlinear-linear (LNL) model from stimulus to response
given by:

$$ \text{response} = \sum_i w_i \, f(k_i \cdot \mathbf{x}) + \text{noise}, $$

where $k_i$ denotes the subunit filter $k$ shifted to the $i$’th stimulus position, $f$ is the subunit nonlinearity, and $w_i$ is the weight on the output of the $i$’th subunit. I won’t belabor the details of the fitting procedure, but the results are quite beautiful, showing that one can obtain substantially more accurate prediction of V1 responses (with far fewer parameters!) using a model with separate populations of excitatory and inhibitory subunits.
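For concreteness, here is a sketch of the model's forward pass written as a one-dimensional cross-correlation; the filter length, the weights, and the halfwave rectifier standing in for $f$ are placeholder assumptions of mine, and the paper's full model (with separate excitatory and inhibitory subunit channels and its own fitting procedure) goes well beyond this.

```python
# A minimal sketch of the LNL subunit forward pass:
#   response = sum_i w_i * f(k_i . x), with k_i = k shifted to position i.
import numpy as np

def subunit_response(x, k, w, f=lambda u: np.maximum(u, 0.0)):
    """Shared filter k, pointwise nonlinearity f, weighted sum over shifts."""
    u = np.correlate(x, k, mode="valid")  # u[i] = k_i . x for each shift i
    return float(np.dot(w, f(u)))

# Toy usage: a 64-sample stimulus, an 8-tap filter, one weight per shift.
rng = np.random.default_rng(1)
x = rng.standard_normal(64)
k = rng.standard_normal(8)
w = rng.standard_normal(64 - 8 + 1)       # 57 subunit positions
print(subunit_response(x, k, w))
```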
We finished lab meeting with a brief discussion of some new ideas I’ve been working on with Anqi and Memming, aimed at developing fast, scalable methods for fitting high-dimensional subunit models. So watch this space, and hopefully we’ll have something new to report in a few months!
References
- Efficient and direct estimation of a neural subunit model for sensory coding. Vintch, Zaharia, Movshon, & Simoncelli, NIPS 2012.
- Spatiotemporal elements of macaque V1 receptive fields. Rust, Schwartz, Movshon & Simoncelli, Neuron 2005.
- Maximally Informative “Stimulus Energies” in the Analysis of Neural Responses to Natural Signals. Rajan & Bialek, PLoS ONE 2013.
- Characterizing Responses of Translation-Invariant Neurons to Natural Stimuli: Maximally Informative Invariant Dimensions. Eickenberg, Rowekamp, Kouh, & Sharpee. Neural Computation, 2012.
- The mechanism of directionally selective units in rabbit’s retina. Barlow & Levick, J Physiol 178(3), 1965.