Partitioning Neural Variability

On July 7, we discussed Partitioning Neural Variability by Goris et al.  In this paper, the authors seek to isolate the portion of sensory neurons' variability that arises from non-sensory sources such as arousal or attention.  To partition the variability in a principled way, the authors propose a “modulated Poisson framework” for spiking neurons, in which a neuron produces spikes according to a Poisson process whose mean rate is the product of a stimulus-driven component f(S) and a stimulus-independent ‘gain’ term G.
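A quick way to see what this framework buys is to simulate it. The sketch below (a minimal illustration, not the authors' code; the rate, window length, and gain variance are invented values, and a mean-one gamma distribution for the gain is one convenient choice) draws a gain G on each trial, generates Poisson counts with rate G·f(S), and checks that the resulting variance is super-Poisson:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus drive f(S) and counting window -- illustrative values only.
f_S = 20.0           # stimulus-driven firing rate (spikes/s)
dt = 0.1             # counting-window length (s)
n_trials = 50_000

# Stimulus-independent gain G, drawn per trial from a gamma distribution
# with mean 1 and variance sigma_G^2.
sigma_G2 = 0.25
shape, scale = 1.0 / sigma_G2, sigma_G2   # mean = shape * scale = 1
G = rng.gamma(shape, scale, size=n_trials)

# Spike counts: Poisson with rate G * f(S) * dt on each trial.
counts = rng.poisson(G * f_S * dt)

mu = f_S * dt
# Law of total variance: Var = mu + sigma_G^2 * mu^2, exceeding the
# Poisson prediction Var = mu whenever sigma_G^2 > 0.
print(counts.mean(), counts.var(), mu + sigma_G2 * mu**2)
```

With sigma_G^2 = 0, the counts revert to ordinary Poisson statistics, so the excess variance directly measures the non-sensory modulation.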


Lab Meeting 11/12/13: Spike Identification Using Continuous Basis Pursuit

In a previous lab meeting (7/9/13) we discussed the Continuous Basis Pursuit algorithm presented in Ekanadham et al., 2011.  In today’s meeting, we considered the recent work by the authors in applying this method to the problem of identifying action potentials of different neurons from extracellular electrode recordings (Ekanadham et al., 2013).   Most current methods for automatic spike sorting involve identifying candidate regions where a spike may have occurred, projecting the data from these candidate time intervals onto some lower dimensional feature space, and then using clustering methods to group segments with similar voltage traces.   These methods tend to perform badly when the waveforms corresponding to spikes from different neurons overlap.

The authors of this paper model the voltage trace as the (noisy) linear combination of waveforms that are translated continuously in time:  V(t) = \sum_{n=1}^{N} \sum_{i=1}^{C} a_{n,i} W_n(t-\tau_{n,i}) + \epsilon(t).

The method proceeds, roughly, by alternating between using least squares to estimate the shapes of the waveforms W_{n}, and using Continuous Basis Pursuit to estimate the time shifts and amplitudes (\tau_{n,i}, a_{n,i}) of the waveforms present in the signal.
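As a toy illustration of this generative model (the waveform shapes, spike times, and amplitudes below are invented for illustration, not taken from the paper), one can synthesize a voltage trace as a noisy sum of continuously translated waveforms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of V(t) = sum_{n,i} a_{n,i} W_n(t - tau_{n,i}) + epsilon(t).
t = np.arange(0.0, 1.0, 1e-3)             # 1 s sampled at 1 kHz

def waveform(t, width):
    """Simple biphasic spike-like feature (derivative of a Gaussian)."""
    return -t / width * np.exp(-t**2 / (2 * width**2))

# Two neurons (N = 2), each with its own waveform shape.
widths = [0.002, 0.004]
# Hypothetical spike times tau_{n,i} and amplitudes a_{n,i} per neuron;
# note the near-coincident spikes around 0.35 s, the overlap case that
# defeats clustering-based sorters.
taus = [[0.10, 0.35, 0.70], [0.20, 0.36, 0.80]]
amps = [[1.0, 0.8, 1.2], [0.9, 1.1, 1.0]]

V = np.zeros_like(t)
for width, tau_n, a_n in zip(widths, taus, amps):
    for tau, a in zip(tau_n, a_n):
        V += a * waveform(t - tau, width)
V += 0.05 * rng.standard_normal(t.size)   # additive noise epsilon(t)
```

Spike identification then amounts to inverting this map: recovering the W_n, tau, and a from V alone.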

References

  • Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. “A unified framework and method for automatic neural spike identification.” Journal of Neuroscience Methods (2013).
  • Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. “Recovery of sparse translation-invariant signals with continuous basis pursuit.” IEEE Transactions on Signal Processing 59.10 (2011): 4735–4744.
  • Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. “A blind sparse deconvolution method for neural spike identification.” Advances in Neural Information Processing Systems. 2011.

Lab Meeting 7/9/13: Continuous Basis Pursuit

We discussed the method of Continuous Basis Pursuit, introduced in recent papers by Ekanadham et al., for decomposing a signal into a linear combination of continuously translated copies of a small set of elementary features.  A standard method for recovering the time shifts and amplitudes of these features is to introduce a dictionary consisting of many shifted copies of the features, and then use basis pursuit or a related method to recover a representation of the signal as a sparse linear combination of these shifted copies of the elementary waveforms.  Accurately representing the signal requires relatively close spacing of the dictionary elements; however, such close spacing yields highly correlated dictionary elements, which degrades the performance of basis pursuit and related recovery algorithms.

With Continuous Basis Pursuit, Ekanadham et al. circumvent this problem by first augmenting the dictionary to allow for continuous interpolation, either using a first- or second-order Taylor interpolation or using an interpolation based on trigonometric splines.  These augmentations increase the ability to accurately represent the signal as a sparse sum of features without introducing too much additional correlation into the dictionary, and the smooth interpolation allows the recovery of continuous, rather than discrete, time shifts.  This kind of decomposition of a signal into (continuously) shifted copies of a few basic features is useful in the spike-sorting problem, for example.
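The first-order Taylor variant is easy to sketch: a feature shifted by a small amount, w(t − τ) ≈ w(t) − τ·w′(t), is captured by the dictionary pair (w, w′), so a single pair covers a continuum of sub-grid shifts. A minimal numerical check of this approximation, using a toy Gaussian feature (not one of the paper's waveforms):

```python
import numpy as np

# First-order Taylor interpolation underlying one variant of CBP:
#   w(t - tau)  ≈  w(t) - tau * w'(t)   for tau small relative to the
# feature width, so the pair (w, w') stands in for all nearby shifts.
t = np.linspace(-0.05, 0.05, 1001)
dt = t[1] - t[0]

width = 0.005
w = np.exp(-t**2 / (2 * width**2))        # toy Gaussian feature (illustrative)
w_prime = np.gradient(w, dt)              # numerical derivative w'(t)

tau = 0.001                               # shift well below the feature width
shifted = np.exp(-(t - tau)**2 / (2 * width**2))   # exact w(t - tau)
approx = w - tau * w_prime                # Taylor-pair reconstruction

rel_err = np.linalg.norm(shifted - approx) / np.linalg.norm(shifted)
print(rel_err)    # small when tau << width
```

The error grows as O(tau^2), which is why the paper's higher-order and trigonometric-spline interpolators extend the usable range of shifts per dictionary element.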

References

  • Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. “Recovery of sparse translation-invariant signals with continuous basis pursuit.” IEEE Transactions on Signal Processing 59.10 (2011): 4735–4744.
  • Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. “A blind sparse deconvolution method for neural spike identification.” Advances in Neural Information Processing Systems. 2011.

A Hierarchical Pitman-Yor Model of Natural Language

In the lab meeting on 9/17, we discussed the hierarchical, non-parametric Bayesian model for discrete sequence data presented in:

Wood, Archambeau, Gasthaus, James, & Teh,  A Stochastic Memoizer for Sequence Data.  ICML, 2009.

The authors extend previous work that used hierarchically linked Pitman-Yor processes to model the predictive distribution of a word given a context of finite length (an n-gram model), and here consider the distribution of words conditioned on a context of unbounded length (an \infty-gram model). The hierarchical structuring allows for the combination of information from contexts of different lengths, and the Pitman-Yor process allows for power-law distributions of words similar to those seen in natural language.  The authors develop the sequence memoizer, using coagulation and fragmentation operators to marginalize out intermediate nodes of the hierarchy, reducing the computational complexity and yielding a collapsed graphical model on which inference is more efficient. The model is shown to perform well (i.e., achieve low perplexity) compared to existing models when applied to New York Times and Associated Press data.
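The power-law behavior that makes Pitman-Yor processes a good fit for word frequencies can be seen with a short Chinese-restaurant simulation. The sketch below is not the sequence memoizer itself, just a single Pitman-Yor process; the discount d and concentration alpha values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Chinese-restaurant sampling from a Pitman-Yor process PY(d, alpha):
# a new customer joins existing table k with probability proportional to
# (n_k - d), or starts a new table with probability proportional to
# (alpha + d * K), where K is the current number of tables.
d, alpha = 0.5, 1.0
counts = []                       # customers seated at each table
for n in range(10_000):
    probs = np.array([c - d for c in counts] + [alpha + d * len(counts)])
    k = rng.choice(len(probs), p=probs / probs.sum())
    if k == len(counts):
        counts.append(1)          # new table (a new word type)
    else:
        counts[k] += 1            # repeat of an existing word type

# With discount d > 0 the number of tables grows like n^d (power law),
# versus the log(n) growth of a Dirichlet process (d = 0).
print(len(counts))
```

The table-size distribution this produces has the heavy tail characteristic of natural-language word frequencies, which is the property the hierarchical model exploits at every context length.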