I’m proud to announce the publication of our “zombie” spike sorting paper (Pillow, Shlens, Chichilnisky & Simoncelli 2013), which addresses the problem of detecting overlapped spikes in multi-electrode recordings.

The basic problem we tried to address is that standard “clustering”-based spike-sorting methods often miss near-synchronous spikes. As a result, you get cross-correlograms that look like this:
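(If you want to see the artifact for yourself, here's a toy simulation of it: a cross-correlogram is just a histogram of spike-time differences between two neurons, and if a sorter drops every spike from one neuron that lands within a millisecond of a spike from the other, you get a spurious gap at zero lag. All names and parameters here are illustrative, not from the paper.)

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag_ms=10.0, bin_ms=1.0):
    """Histogram of spike-time differences (b - a) within +/- max_lag_ms."""
    diffs = []
    for t in spikes_a:
        nearby = spikes_b[(spikes_b >= t - max_lag_ms) & (spikes_b <= t + max_lag_ms)]
        diffs.extend(nearby - t)
    bins = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

# Two independent Poisson-ish spike trains (times in ms over ~10 s).
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 10000, 2000))
b = np.sort(rng.uniform(0, 10000, 2000))

# Mimic the sorting artifact: discard every b-spike that falls within
# 1 ms of an a-spike (the "missed" near-synchronous spikes).
nearest = np.min(np.abs(b[:, None] - a[None, :]), axis=1)
b_missed = b[nearest > 1.0]

counts, edges = cross_correlogram(a, b_missed)
# counts now has an artificial trough in the bins straddling zero lag,
# even though the underlying trains are statistically independent.
```

The point of the toy is that the zero-lag gap says nothing about the neurons: it's manufactured entirely by which spikes the sorter kept.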

When I first saw these correlograms (back in 2005 or so), I thought: “Wow, amazing: retinal ganglion cells inhibit each other with 1-millisecond precision! Should we send this to Nature or Science?” My more sober experimental colleagues pointed out that this was likely only a (lowly) spike-sorting artifact. So we set out to address the problem (leading to the publication of this paper a mere 8 years later!).

Let me just give a little bit of intuition for the problem and our solution, mainly as an excuse for showing some of the beautiful figures that Jon Shlens made.

**Clustering-based spike sorting**

Most spike-sorting algorithms use some variant of clustering. They identify “candidate” spike waveforms (basically, large deviations in the recorded signal) and apply clustering to determine which spikes came from which neurons. The problem is that when two spikes are close together in time, they superimpose linearly, creating a composite waveform that may not look much like either of the two spike waveforms in isolation. Here’s a figure illustrating the phenomenon, showing the superposition of two synchronous spikes on a single channel (A) and in principal component space (B):

In B, the black cloud of points represents waveforms assigned to the first neuron (black trace in A), and the red cloud of points represents waveforms assigned to the second neuron (red trace in A). The blue arrow shows where you end up in PCA space if you add the mean black spike to the mean red spike. Crazy, huh? You end up way off in no-man’s land!

Let me point out: this is *actual data* from an *actual pair of retinal ganglion cells*, recorded by Jon Shlens in EJ Chichilnisky’s lab. See those little gray dots near the blue arrow? Those are projections of actual waveforms recorded during the experiment, which the clustering algorithm simply threw out. (Makes sense, given how far they are from both clusters, right?) Discarding these spikes is what produced the curious artifact in the cross-correlogram shown above.

(The problem is actually slightly worse than this: if the spikes are not precisely synchronous but still overlap, then it turns out that the superposition of waveforms traces out an entire manifold in PCA space; see Fig 1C-D of the paper if your eyes can handle it.)
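(Both effects are easy to reproduce in simulation. Here's a small sketch using two made-up Gaussian-bump templates — not the paper's actual waveforms — that shows the synchronous sum landing far from both clusters in PC space, and time-shifted sums tracing out a whole curve rather than a single point.)

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)

# Two hypothetical spike templates (Gaussian bumps, for illustration only).
w1 = -np.exp(-((t - 0.3) ** 2) / 0.005)
w2 = -0.8 * np.exp(-((t - 0.5) ** 2) / 0.01)

# Noisy isolated spikes from each neuron; the PCA basis is fit to these alone,
# just as a clustering-based sorter would do.
X = np.vstack([w1 + 0.05 * rng.standard_normal((200, 40)),
               w2 + 0.05 * rng.standard_normal((200, 40))])
mu = X.mean(axis=0)
pcs = np.linalg.svd(X - mu, full_matrices=False)[2][:2]   # top 2 PCs
proj = (X - mu) @ pcs.T

c1, c2 = proj[:200].mean(axis=0), proj[200:].mean(axis=0)  # cluster centers
spread = (proj[:200] - c1).std()                           # within-cluster scale

# A perfectly synchronous superposition projects far from BOTH clusters.
sum_proj = (w1 + w2 - mu) @ pcs.T
d_sum = min(np.linalg.norm(sum_proj - c1), np.linalg.norm(sum_proj - c2))

# Time-shifted superpositions sweep out a curve in PC space, not a point.
curve = np.array([(w1 + np.roll(w2, s) - mu) @ pcs.T for s in range(-8, 9)])
extent = (curve.max(axis=0) - curve.min(axis=0)).max()
```

Here `d_sum` comes out many times larger than the within-cluster spread, which is exactly why a clustering algorithm discards these waveforms, and `extent` shows the shifted sums spanning a region far bigger than either cluster.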

(Note #2: the problem also arises for window discriminators, matched filtering, and basically all methods that don’t explicitly take superposition into account.)

**Binary pursuit**

The good news is that we can address this problem using a generative model that incorporates superposition of spike waveforms. I won’t bore you with details here (read the paper if you like to be bored), but the basic idea is to use the clustering-based algorithm to identify the isolated spikes in the recording. *Then*, go through the recorded data and (greedily) insert spikes whenever doing so will reduce the residual error (essentially, whenever it looks like a particular spike should go there), with a sparse prior on spiking. This results in an (approximate) MAP estimate of the spike train. (We call it *binary pursuit* since it’s essentially a binary version of matching pursuit).
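(For the algorithmically inclined, here is a deliberately stripped-down sketch of the greedy idea — a single channel, unit spike amplitudes, and a fixed scalar penalty standing in for the sparse prior. The paper's actual implementation handles multiple electrodes, noise covariance, and time-shift interpolation; none of that is shown here.)

```python
import numpy as np

def binary_pursuit(v, templates, penalty=1.0, max_spikes=100):
    """Greedily insert template-shaped spikes wherever subtracting a template
    reduces the squared residual by more than `penalty` (a sparseness cost)."""
    resid = v.copy()
    spikes = []
    for _ in range(max_spikes):
        best = None  # (gain, neuron_index, time)
        for n, w in enumerate(templates):
            L = len(w)
            for t0 in range(len(resid) - L + 1):
                seg = resid[t0:t0 + L]
                # Reduction in squared error from subtracting w at time t0:
                # ||seg||^2 - ||seg - w||^2 = 2 w.seg - ||w||^2, minus penalty.
                gain = 2 * (w @ seg) - w @ w - penalty
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, n, t0)
        if best is None:          # no insertion improves the objective: stop
            break
        _, n, t0 = best
        resid[t0:t0 + len(templates[n])] -= templates[n]
        spikes.append((n, t0))
    return spikes, resid

# Toy trace: neuron 0 fires at t=10 and t=30; neuron 1 fires at t=12,
# so the first two spikes overlap and superimpose.
w1 = np.array([0., 2., 4., 2., 0.])
w2 = np.array([0., -3., -1., 3., 0.])
v = np.zeros(50)
v[10:15] += w1
v[12:17] += w2
v[30:35] += w1

spikes, resid = binary_pursuit(v, [w1, w2])
```

On this noiseless toy, the greedy search pulls apart the overlapped pair and the residual goes to zero; the real algorithm, of course, has to do this against correlated noise on hundreds of channels.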

Nothing too fancy here. Similar methods have been developed by Michael Berry’s and Vijay Balasubramanian’s groups (see Prentice et al 2011). There are no theoretical or practical guarantees with our method, but there are a few cool tricks and I think the results are pretty nice / compelling. Here’s one of the comely figures Jon made:

**Left**: black dots: spikes detected by clustering; red dots: “missed” spikes.

**Middle**: colored ellipses: where we’d expect points to be, as a function of time-shift of second spike (based on generative model).

**Right**: overlapped spikes detected by binary pursuit (colored by time-shift).

And finally: look what it does with the correlograms:

(Showing 8 pairs of neurons: gray=original clustering; red = corrected with our method.)

There is of course huge room for improvement. One would like to combine our approach with methods that identify the number of neurons (e.g., see Frank Wood’s work on Dirichlet-Process based spike sorting) and allow for non-stationarity in spike waveforms (see Calabrese & Paninski 2010). (And I should add: the last section of the paper focuses on the signal detection theory issue of knowing when to trust the results of this or any algorithm.)

There’s lots of active work on this problem, and in my view it’s a great area for the development and application of Bayesian methods*, since we have a ton of prior information about spike trains and neural tissue and the statistical features of the recorded data, much of which has yet to be exploited. The challenge is to find computationally efficient inference methods, since the raw data is pretty **big** (512 electrodes sampled at 20 kHz, for our dataset).

* However, in my view, Frank Wood’s utopian vision of a world in which neuroscientists *don’t* spike sort, but rather perform all analyses on multiple *samples from the posterior over spike sortings,* is still (thankfully) a very long ways off.