# Inferring synaptic plasticity rules from spike counts

In last week’s computational & theoretical neuroscience journal club I presented the following paper from Nicolas Brunel’s group:

Inferring learning rules from distributions of firing rates in cortical neurons.
Lim, McKee, Woloszyn, Amit, Freedman, Sheinberg, & Brunel.
Nature Neuroscience (2015).

The paper seeks to explain experience-dependent changes in IT cortical responses in terms of an underlying synaptic plasticity rule.

# How can single neurons predict behavior?

Pitkow et al., Neuron, 87, 411-423, 2015

A couple of weeks ago I presented Xaq Pitkow et al.’s paper examining the convoluted relationship between choice probabilities (CP), information-limiting correlations (ILC), and suboptimal coding.

# Spawning a realistic model of the brain?

I (Memming) presented Eliasmith et al.’s “A Large-Scale Model of the Functioning Brain” (Science, 2012) for our computational neuroscience journal club. The authors combined modules they had previously built for solving various cognitive tasks into a large-scale spiking neuron model called SPAUN.

# Comp Neuro JC: “Implicit Encoding of Prior Probabilities in Optimal Neural Populations”

For Wednesday’s Computational & Theoretical Neuroscience Journal Club I presented a paper by Deep Ganguli and Eero Simoncelli on priors in optimal population coding.

The paper considers a population of neurons which encode a single, scalar stimulus variable, $s$. Given that each stimulus value $s$ has some probability $p(s)$ of occurring in the environment, what is the best possible population code?

The paper frames this question as a mathematical optimization problem, quantifying the notion of “best population” using Fisher information as a measure of how much information the population carries about a given stimulus. Through some careful assumptions and approximations, and by parameterizing the solutions with a very clever “warping” transform of a simple homogeneous population, they obtain an optimization program that is analytically solvable. The end results are predictions of a population’s density (neurons per unit stimulus, roughly) and gain (mean spike rate) as a function of the prior, $p(s)$ – an implicit encoding of the prior in the population. Comparisons of prior distributions measured from natural images (over spatial frequency and orientation) with predictions derived from experimentally recorded densities yield good matches.
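To make the warping idea concrete, here is a minimal numerical sketch: neurons are placed at evenly spaced quantiles of the prior’s CDF, so cell density automatically tracks $p(s)$. The function name, the Gaussian tuning shape, and all parameter values are my own illustrative assumptions, not the paper’s actual construction.

```python
import numpy as np

def warped_population(prior, s_grid, n_neurons=20, gain=10.0, width=0.5):
    """Build a population 'warped' by the prior: preferred stimuli sit at
    evenly spaced quantiles of p(s), so density is high where p(s) is high."""
    cdf = np.cumsum(prior)
    cdf = cdf / cdf[-1]                       # CDF of the prior on the grid
    quantiles = (np.arange(n_neurons) + 0.5) / n_neurons
    preferred = np.interp(quantiles, cdf, s_grid)   # inverse-CDF placement
    # Tuning curves that are homogeneous in the warped coordinate u = F(s):
    u = cdf
    curves = gain * np.exp(-0.5 * ((u[None, :] - quantiles[:, None])
                                   / (width / n_neurons)) ** 2)
    return preferred, curves

# Usage: a Gaussian prior packs neurons densely near s = 0.
s_grid = np.linspace(-5, 5, 1001)
prior = np.exp(-0.5 * s_grid ** 2)
preferred, curves = warped_population(prior, s_grid)
```

The spacing between neighboring preferred stimuli is smallest where the prior is largest, which is the “density $\propto$ prior” intuition in its simplest form.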

# Comp Neuro JC on “Probabilistic Neural Representations”

Wednesday (June 22), I presented the 3rd segment in a 4-part series on “Probabilistic Representations in the Brain” in the Computational & Theoretical Neuroscience Journal Club. This summer, Comp JC has been re-configured to allow each lab to present a block of papers on a single topic. Our lab (which got stuck going first) decided to focus on a recent controversy over representations of uncertainty in the brain, namely: do neural responses represent parameters of, or samples from, probability distributions? (I’ll try to unpack this distinction in a moment.) These competing theories generated a lively and entertaining debate at the Cosyne 2010 workshops, and we thought it would be fun to delve into some of the primary literature.

The two main competitors are:

1. “Probabilistic Population Codes” (PPC) – advocated by Ma, Beck, Pouget, Latham and colleagues and (more recently, in a related but not identical form), Jazayeri, Movshon, Graf and Kohn.
basic idea:  the log-probability distribution over stimuli is a linear combination of “kernels” (i.e., things that look kinda like tuning curves) weighted by neural spike counts. Each neuron has its own kernel, so the vector of population activity gives rise to a weighted sum of kernels that can have variable width, peak location, etc.  This log-linear representation of probabilities sits well with “Poisson-like” variability observed in cortex, and makes it easy to perform Bayesian inference (e.g., combine information from two different populations) using purely linear operations.
key paper:
• Ma et al, Bayesian inference with probabilistic population codes. Nature Neuroscience (2006)

2. “Sampling Hypothesis” – proposed by Fiser, Berkes, Orban & Lengyel.
basic idea: Holds that neurons represent stimulus features, i.e., “causes” underlying sensory stimuli, which the brain would like to extract. Each neuron represents a particular feature, and higher spiking corresponds to more of that feature in a particular image. In this scheme, probabilities are represented by the variability in neural responses themselves: neurons sample their spike count from the probability distribution over the presence of the corresponding feature. So for example, a neuron that emits 75 spikes in every time bin has high certainty that the corresponding feature is present; a neuron that emits 4 spikes in every time bin carries high certainty that the corresponding feature is not present in the image; a neuron with variable spike count ranging between 0 and 100 spikes in each bin represents a high level of uncertainty about the presence or absence of the corresponding feature. This scheme is better suited to representing high-dimensional probability distributions, and makes interesting predictions about learning and spontaneous activity.
key papers:
• Fiser et al., Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences (2010)
• Berkes et al., Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science (2011)

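The log-linear PPC readout in (1) can be sketched in a few lines. Everything here is an illustrative choice of mine (Gaussian kernels, the stimulus value, the Poisson rates), not taken from the Ma et al. paper; the point is only that decoding is log-linear, so combining two populations reduces to adding their spike counts.

```python
import numpy as np

# Hypothetical kernels: Gaussian bumps playing the role of h_i(s).
s = np.linspace(-10, 10, 401)
centers = np.linspace(-8, 8, 17)
kernels = np.exp(-0.5 * ((s[None, :] - centers[:, None]) / 2.0) ** 2)

def ppc_posterior(spike_counts):
    """Posterior implied by a PPC: log p(s|r) = sum_i r_i h_i(s) + const."""
    log_post = spike_counts @ kernels
    log_post -= log_post.max()          # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

rng = np.random.default_rng(0)
# Poisson spike counts from two populations viewing the same stimulus s* = 1.
rate = 5 * np.exp(-0.5 * ((centers - 1.0) / 2.0) ** 2)
r1, r2 = rng.poisson(rate), rng.poisson(rate)

# Cue combination is purely linear: just sum the two populations' counts.
combined = ppc_posterior(r1 + r2)
```

Because the representation is linear in log-probability, `ppc_posterior(r1 + r2)` equals the normalized pointwise product of the two single-population posteriors, which is exactly Bayesian cue combination.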
This week I presented the two (Fiser and Berkes) papers on the sampling hypothesis (slides: keynote, pdf). I have a few niggling complaints, which I may try to outline in a later post, but overall I think it’s a pretty cool idea and a very nice pair of papers. The idea that we should think about spontaneous activity as “sampling from the prior” seems interesting and original.
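A deliberately cartoonish sketch of the sampling idea in (2): on each time bin the neuron draws the feature’s presence from the posterior and fires accordingly, so across-bin variability itself encodes uncertainty. The function, rates, and probabilities below are my own toy choices, not anything from the Fiser/Berkes papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_responses(p_feature, n_bins=1000, max_rate=100):
    """Each bin, draw the feature's presence from the posterior p_feature;
    fire at max_rate if present, stay silent otherwise."""
    present = rng.random(n_bins) < p_feature
    return np.where(present, max_rate, 0)

certain_yes = sample_responses(0.99)   # nearly always a high count
certain_no  = sample_responses(0.01)   # nearly always zero
uncertain   = sample_responses(0.5)    # wildly variable counts
```

The mean count recovers the posterior probability, while the variance is largest at maximal uncertainty, matching the 75-spikes/4-spikes/variable-count example above.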

Who will ultimately win out? It’s a contest between a group of wild and woolly Magyars (“Hungarians”, in the parlance of our times) and an international coalition of would-be cynics led by an irascible Frenchman (humanized only by a laconic Dutchman with philanthropic bona fides). Since neither group enjoys a reputation for martial triumph, this conflict may play out for a while. But our 4-part series will wrap up next week with a paper from Graf et al (presented by Kenneth) that puts the PPC theory to the test with neural data from visual cortex.