At long last, I’ve finished cleaning, commenting, and packaging up code for binary pursuit spike sorting, introduced in our 2013 paper in PLoS ONE. You can download the Matlab code here (or on github), and there’s a simple test script to illustrate how to use it on a simulated dataset.
The method relies on a generative model (of the raw electrode data) that explicitly accounts for the superposition of spike waveforms. This allows it to detect synchronous and overlapping spikes in multi-electrode recordings, which clustering-based methods (by design) fail to do.
If you’d like to know more (but don’t feel like reading the paper), I wrote a blog post describing the basic intuition (and the cross-correlation artifacts that inspired us to develop it in the first place) back when the paper came out (link).
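For a feel of the core idea, here is a toy, single-channel sketch in Python (templates, noise level, and threshold are all invented for illustration; the paper's actual method is a Bayesian MAP estimate over binary spike indicators, and the released Matlab code is the real reference). It simulates voltage as a superposition of spike waveforms plus noise, then greedily inserts whichever single spike most reduces the residual squared error until no insertion is worth its cost:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up single-channel spike waveforms ("templates").
K = 20
t = np.arange(K)
w1 = -np.exp(-0.5 * ((t - 6) / 2.0) ** 2)        # sharp negative spike
w2 = 0.8 * np.exp(-0.5 * ((t - 10) / 3.5) ** 2)  # broader positive spike
templates = [w1, w2]

# Generative model: voltage = superposition of waveforms at the spike
# times + Gaussian noise. Note the near-synchronous pair at 50 and 53.
T = 400
true_times = {0: [50, 200], 1: [53, 300]}
v = 0.02 * rng.standard_normal(T)
for k, times in true_times.items():
    for s in times:
        v[s:s + K] += templates[k]

# Greedy, binary-pursuit-flavored inference: repeatedly insert the single
# (template, time) whose subtraction most reduces the residual squared
# error, stopping when no insertion beats a threshold (standing in for
# the per-spike prior penalty in the real algorithm).
resid = v.copy()
found = []
threshold = 0.3 * min(w @ w for w in templates)
for _ in range(20):
    best = None
    for k, w in enumerate(templates):
        # Error reduction from placing template k at shift s:
        # ||r||^2 - ||r - w_s||^2 = 2 r.w_s - ||w||^2
        gain = 2 * np.correlate(resid, w, mode="valid") - w @ w
        s = int(np.argmax(gain))
        if best is None or gain[s] > best[0]:
            best = (gain[s], k, s)
    if best[0] < threshold:
        break
    _, k, s = best
    resid[s:s + K] -= templates[k]
    found.append((k, s))
```

Even the overlapped pair at times 50/53 should be recovered (possibly off by a sample or two), which is exactly the regime where clustering-based sorters give up.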
Yesterday marked the start of the 2014 summer course in COMPUTATIONAL NEUROSCIENCE: VISION at Cold Spring Harbor. The course was founded in 1985 by Tony Movshon and Ellen Hildreth, with the goal of inspiring new generations of students to address problems at the intersection of vision, computation, and the brain. The list of past attendees is impressive.
I’m proud to announce the publication of our “zombie” spike sorting paper (Pillow, Shlens, Chichilnisky & Simoncelli 2013), which addresses the problem of detecting overlapped spikes in multi-electrode recordings.
The basic problem we tried to address is that standard “clustering”-based spike-sorting methods often miss near-synchronous spikes. As a result, you get cross-correlograms that look like this:
When I first saw these correlograms (back in 2005 or so), I thought: “Wow, amazing — retinal ganglion cells inhibit each other with 1-millisecond precision! Should we send this to Nature or Science?” My more sober experimental colleagues pointed out that this was likely only a (lowly) spike sorting artifact. So we set out to address the problem (leading to the publication of this paper a mere 8 years later!).
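You can reproduce the artifact in a few lines of Python (all parameters invented for illustration): take two independent Poisson trains, delete every pair of spikes landing within ±1 ms of each other (mimicking a sorter that discards overlapped waveforms), and the cross-correlogram develops a sharp hole at zero lag that looks just like fast mutual inhibition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent Poisson spike trains in 1 ms bins (~50 Hz each);
# the true cross-correlogram is flat by construction.
n_bins, p = 200_000, 0.05
a = rng.random(n_bins) < p
b = rng.random(n_bins) < p

# Mimic the clustering failure: whenever the two cells fire within
# +/- 1 ms of each other, the superimposed waveform is unsortable
# and BOTH spikes get dropped.
near_b = b | np.roll(b, 1) | np.roll(b, -1)  # bins within 1 ms of a b-spike
near_a = a | np.roll(a, 1) | np.roll(a, -1)
a_sorted = a & ~near_b
b_sorted = b & ~near_a

# Cross-correlogram: count of b-spikes at each lag relative to a-spikes.
xcorr = {d: int(np.sum(a_sorted & np.roll(b_sorted, -d)))
         for d in range(-10, 11)}
```

The dip at lags −1 to +1 is pure artifact: the underlying trains are statistically independent.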
This week, Memming and I are in Columbus, Ohio for a workshop on “Sensory and Coding”, organized by Brent Doiron, Adrienne Fairhall, David Kleinfeld, and John Rinzel.
Monday was “Big Picture Day”, and I gave a talk about Bayesian Efficient Coding, which represents our attempt to put Barlow’s Efficient Coding Hypothesis in a Bayesian framework, with an explicit loss function to specify what kinds of posteriors are “good”. One of my take-home bullet points was that “you can’t get around the problem of specifying a loss function”, and entropy is no less arbitrary than any other choice. This has led to some stimulating lunchtime discussions with Elad Schneidman, Surya Ganguli, Stephanie Palmer, David Schwab, and Memming over whether entropy really is special (or not!).
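In symbols (my paraphrase, not necessarily the notation used in the talk), the claim is that an efficient code should be judged by an explicit loss functional applied to the posterior, with Barlow-style infomax being just one particular choice:

```latex
% Choose the encoding model p(r|x), within a resource-constrained
% family C, to minimize the expected loss of the resulting posterior:
p^*(r \mid x) \;=\; \arg\min_{p(r \mid x) \,\in\, \mathcal{C}} \;
    \mathbb{E}_{x,r}\!\left[\, L\big(p(x \mid r)\big) \,\right]

% Infomax is the special case L(p(x|r)) = H(p(x|r)) (posterior entropy),
% since I(x;r) = H(x) - E_r[H(x|r)] and the prior entropy H(x) is fixed.
```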
It’s been a great workshop so far, with exciting talks from a panoply of heavy hitters, including Garrett Stanley, Steve Baccus, Fabrizio Gabbiani, Tanya Sharpee, Nathan Kutz, Adam Kohn, and Anitha Pasupathy. You can see the full lineup here:
We’ve recently returned from Utah, where several of us attended the 10th annual Computational and Systems Neuroscience (Cosyne) meeting. It’s hard to believe Cosyne is ten! I got to have a little fun with the opening-night remarks, noting that Facebook and Cosyne were founded only a month apart in Feb/March 2004, with impressive aggregate growth in the years since:
The meeting kicked off with a talk from Bill Bialek (one of the invited speakers for the very first Cosyne—where he gave a chalk talk!), who provoked the audience with a talk entitled “Are we asking the right questions?” His answer (“no”) focused in part on the issue of what the brain is optimized for: in his view, for extracting information that is useful for predicting the future.
In honor of the meeting’s 10th anniversary, three additional reflective/provocative talks on the state of the field were contributed by Eve Marder, Terry Sejnowski, and Tony Movshon. Eve spoke about how homeostatic mechanisms lead to “degenerate” (non-identifiable) biophysical models and confer robustness in neural systems. Terry talked about the brain’s sensitivity to “suspicious coincidences” of spike patterns and the recent BAM proposal (which he played a central part in advancing). Tony gave the meeting’s final talk, a lusty defense of primate neurophysiology against the advancing hordes of rodent and invertebrate neuroscience, arguing that we will only understand the human brain by studying animals with sufficiently similar brains.
See Memming’s blog post for a summary of some of the week’s other highlights. We had a good showing this year, with 7 lab-related posters in total:
- I-4. Semi-parametric Bayesian entropy estimation for binary spike trains. Evan Archer, Il M Park, & Jonathan W Pillow. [oops—we realized after submitting that the estimator is not *actually* semi-parametric; live and learn.]
- I-14. Precise characterization of multiple LIP neurons in relation to stimulus and behavior. Jacob Yates, Il M Park, Lawrence Cormack, Jonathan W Pillow, & Alexander Huk.
- I-28. Beyond Barlow: a Bayesian theory of efficient neural coding. Jonathan W Pillow & Il M Park.
- II-6. Adaptive estimation of firing rate maps under super-Poisson variability. Mijung Park, J. Patrick Weller, Gregory Horwitz, & Jonathan W Pillow.
- II-14. Perceptual decisions are limited primarily by variability in early sensory cortex. Charles Michelson, Jonathan W Pillow, & Eyal Seidemann.
- II-94. Got a moment or two? Neural models and linear dimensionality reduction. Il M Park, Evan Archer, Nicholas Priebe, & Jonathan W Pillow.
- II-95. Spike train entropy-rate estimation using hierarchical Dirichlet process priors. Karin Knudson & Jonathan W Pillow.
See Memming’s post on NIPS 2011 highlights.
I’m still hoping to post my own list of highlights, but may have to wait until after the flurry of Cosyne-related review activity subsides.
Tomorrow I’ll be speaking at a Symposium on Minds, Brains and Models at City University of New York, the third in a series organized by Bill Bialek. I will present some of our recent work on model-based approaches to understanding the neural code in parietal cortex (area LIP), which is joint work with Memming, Alex Huk, Miriam Meister, & Jacob Yates.
Encoding and decoding of decision-related information from spike trains in parietal cortex (12:00 PM)
Looks to be an exciting day, with talks from Sophie Deneve, Elad Schneidman & Gasper Tkacik.
Our paper came out online today!
Receptive Field Inference with Localized Priors
M. Park & J.W. Pillow
PLoS Comput Biol 7(10). (2011).
Great to see it in print. Strangely, PLoS CB doesn’t send galley proofs, and they introduced a small error in one of the figs. Hopefully they’ll agree to fix it…
Next week, Evan Archer and I are off to Woods Hole, MA for the Methods in Computational Neuroscience (MCN 2011) course, organized by Adrienne Fairhall (U. Washington) and Michael Berry (Princeton).
You can get a sense of what we’ll be up to from the provisional schedule (schedule.pdf). I lecture during the first week, which will focus on “neural coding”. After that, I head back to the simmering Texas Cauldron, but Evan gets to stay for the whole month—lucky Evan! Be careful not to O.D. on bioluminescence…
If you happen to be in Miami Beach, FL and have had enough of sun, sand and Art Deco, come hear Memming speak about “Spike Train Kernel Methods for Neuroscience” at the JSM 2011, in a Monday session on Statistical Modeling of Neural Spike Trains, organized by Dong Song (USC) and Haonan Wang (Colorado State).
Memming will speak about kernel-based methods for clustering, decoding, and computing distances between spike trains, which he started during his Ph.D. at U. Florida with José Príncipe.
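To give a flavor of the kernel idea (a toy Python sketch: the function names and bandwidth are mine, and this is the simple Laplacian "sum over all spike pairs" kernel, not necessarily anything specific from the talk): a positive-definite kernel on spike trains immediately gives you an inner product, and hence a distance, in a feature space where off-the-shelf clustering and decoding machinery applies.

```python
import numpy as np

def spike_kernel(s, t, tau=0.01):
    """Laplacian kernel summed over all pairs of spike times (in seconds).
    Equivalent to the inner product of the two trains after exponential
    smoothing with time constant tau (an arbitrary bandwidth choice)."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    if s.size == 0 or t.size == 0:
        return 0.0
    return float(np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum())

def spike_distance(s, t, tau=0.01):
    """Distance between spike trains induced by the kernel embedding."""
    d2 = (spike_kernel(s, s, tau) + spike_kernel(t, t, tau)
          - 2 * spike_kernel(s, t, tau))
    return float(np.sqrt(max(d2, 0.0)))
```

With a distance in hand, nearest-neighbor decoding or k-medoids-style clustering of spike trains becomes straightforward.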