Comp Neuro JC on “Probabilistic Neural Representations”

Wednesday (June 22), I presented the 3rd segment in a 4-part series on “Probabilistic Representations in the Brain” in the Computational & Theoretical Neuroscience Journal Club. This summer, Comp JC has been reconfigured to allow each lab to present a bloc of papers on a single topic. Our lab (which got stuck going first) decided to focus on a recent controversy over representations of uncertainty in the brain, namely: do neural responses represent the parameters of probability distributions, or samples from them? (I’ll try to unpack this distinction in a moment.) These competing theories generated a lively and entertaining debate at the Cosyne 2010 workshops, and we thought it would be fun to delve into some of the primary literature.

The two main competitors are:

  1. “Probabilistic Population Codes” (PPC) – advocated by Ma, Beck, Pouget, Latham, and colleagues, and (more recently, in a related but not identical form) by Jazayeri, Movshon, Graf, and Kohn.
    basic idea: The log-probability distribution over stimuli is a linear combination of “kernels” (i.e., things that look kinda like tuning curves) weighted by neural spike counts. Each neuron has its own kernel, so the vector of population activity gives rise to a weighted sum of kernels that can have variable width, peak location, etc. This log-linear representation of probabilities sits well with the “Poisson-like” variability observed in cortex, and makes it easy to perform Bayesian inference (e.g., combining information from two different populations) using purely linear operations. (A minimal sketch of this computation appears just after this list.)
    key paper:
    • Ma et al., Bayesian inference with probabilistic population codes. Nature Neuroscience (2006)

  2. “Sampling Hypothesis” – proposed by Fiser, Berkes, Orban & Lengyel.
    basic idea: Neurons represent stimulus features, i.e., the “causes” underlying sensory stimuli, which the brain would like to extract. Each neuron represents a particular feature, and higher spiking corresponds to more of that feature in a particular image. In this scheme, probabilities are represented by the variability in neural responses themselves: neurons sample their spike counts from the probability distribution over the presence of the corresponding feature. So, for example, a neuron that emits 75 spikes in every time bin signals high certainty that the corresponding feature is present; a neuron that emits 4 spikes in every time bin signals high certainty that the feature is absent; and a neuron whose spike count varies between 0 and 100 across bins represents a high level of uncertainty about the presence or absence of the feature. This scheme is better suited to representing high-dimensional probability distributions, and makes interesting predictions about learning and spontaneous activity. (A second sketch after this list illustrates the idea.)
    key papers:
    • Fiser et al., Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences (2010)
    • Berkes et al., Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science (2011)
This week I presented the two sampling-hypothesis papers (Fiser et al. 2010 and Berkes et al. 2011; slides: keynote, pdf). I have a few niggling complaints, which I may try to outline in a later post, but overall I think it’s a pretty cool idea and a very nice pair of papers. The idea that we should think about spontaneous activity as “sampling from the prior” seems interesting and original.

Who will ultimately win out? It’s a contest between a group of wild and woolly Magyars (“Hungarians”, in the parlance of our times) and an international coalition of would-be cynics led by an irascible Frenchman (humanized only by a laconic Dutchman with philanthropic bona fides). Since neither group enjoys a reputation for martial triumph, this conflict may play out for a while. But our 4-part series will wrap up next week with a paper from Graf et al. (presented by Kenneth) that puts the PPC theory to the test with neural data from visual cortex.

4 thoughts on “Comp Neuro JC on “Probabilistic Neural Representations””

  1. Why, this irascible post has me niggling to humanize the laconic parlance of bona fide Magyars! And it’s also a nice summary of our mythic sojourn across this intellectual landscape.

  2. I plan to present Barlow’s papers with Yongseok (one of Sriram’s students under Ila) next week. I hope that we can give some overview of “efficient coding” and “redundancy reduction”…

  3. Pingback: Poisson-like noise in the brain | Pillow Lab Blog

  4. Pingback: NIPS 2013 | Memming
