Cosyne 2013

We’ve recently returned from Utah, where several of us attended the 10th annual Computational and Systems Neuroscience (Cosyne) meeting.  It’s hard to believe Cosyne is ten!  I got to have a little fun with the opening-night remarks, noting that Facebook and Cosyne were founded only a month apart in Feb/March 2004, with impressive aggregate growth in the years since:

[Figure: growth of Facebook and Cosyne since 2004]

The meeting kicked off with Bill Bialek (one of the invited speakers for the very first Cosyne—where he gave a chalk talk!), who provoked the audience with a talk entitled “Are we asking the right questions?” His answer (“no”) focused in part on the issue of what the brain is optimized for: in his view, for extracting information that is useful for predicting the future.

In honor of the meeting’s 10th anniversary, three additional reflective/provocative talks on the state of the field were contributed by Eve Marder, Terry Sejnowski, and Tony Movshon.  Eve spoke about how homeostatic mechanisms lead to “degenerate” (non-identifiable) biophysical models and confer robustness in neural systems. Terry talked about the brain’s sensitivity to “suspicious coincidences” of spike patterns and the recent BAM proposal (which he played a central part in advancing). Tony gave the meeting’s final talk, a lusty defense of primate neurophysiology against the advancing hordes of rodent and invertebrate neuroscience, arguing that we will only understand the human brain by studying animals with sufficiently similar brains.

See Memming’s blog post for a summary of some of the week’s other highlights.  We had a good showing this year, with 7 lab-related posters in total:

  • I-4. Semi-parametric Bayesian entropy estimation for binary spike trains. Evan Archer, Il M Park, & Jonathan W Pillow.  [oops—we realized after submitting that the estimator is not *actually* semi-parametric; live and learn.]
  • I-14. Precise characterization of multiple LIP neurons in relation to stimulus and behavior.  Jacob Yates, Il M Park, Lawrence Cormack, Jonathan W Pillow, & Alexander Huk.
  • I-28. Beyond Barlow: a Bayesian theory of efficient neural coding.  Jonathan W Pillow & Il M Park.
  • II-6. Adaptive estimation of firing rate maps under super-Poisson variability.  Mijung Park, J. Patrick Weller, Gregory Horwitz, & Jonathan W Pillow.
  • II-14. Perceptual decisions are limited primarily by variability in early sensory cortex.  Charles Michelson, Jonathan W Pillow, & Eyal Seidemann.
  • II-94. Got a moment or two? Neural models and linear dimensionality reduction. Il M Park, Evan Archer, Nicholas Priebe, & Jonathan W Pillow.
  • II-95. Spike train entropy-rate estimation using hierarchical Dirichlet process priors.  Karin Knudson & Jonathan W Pillow.

Year in Review: 2012

We’re now almost a month into 2013, but I wanted to post a brief reflection on our lab highlights from 2012.

Ideas / Projects:

Here’s a summary of a few of the things we’ve worked on:

• Active Learning – Basically, these are methods for adaptive, “closed-loop” stimulus selection, designed to improve neurophysiology experiments by selecting stimuli that tell you the most about whatever it is you’re interested in (so you don’t waste time showing stimuli that don’t reveal anything useful).  Mijung Park has made progress on two distinct active learning projects. The first focuses on estimating linear receptive fields in a GLM framework (published in NIPS 2012): the main advance is a particle-filtering-based method for doing active learning under a hierarchical receptive field model, applied specifically with the “ALD” prior that incorporates localized receptive field structure (an extension of her 2011 PLoS CB paper).  The results are pretty impressive (if I do say so!), showing substantial improvements over Lewi, Butera & Paninski’s work (which used a much more sophisticated likelihood). Ultimately, there’s hope that combining the Lewi et al. likelihood with our prior could yield even bigger advances.
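
To give a flavor of the closed-loop idea (this is a minimal sketch, not Mijung’s actual algorithm, which uses a Poisson GLM likelihood, the hierarchical ALD prior, and particle filtering over hyperparameters), here is infomax stimulus selection under a much simpler linear-Gaussian model with a fixed Gaussian prior; all names and sizes below are made up for illustration:

```python
import numpy as np

# NOTE: simplified stand-in. The real method uses a Poisson GLM likelihood,
# an ALD prior, and particle filtering; here we use Gaussian noise and a
# plain Gaussian prior so the updates are exact and easy to read.

def infomax_select(Sigma, candidates, noise_var=1.0):
    # Expected information gain of stimulus x under a linear-Gaussian model
    # is 0.5 * log(1 + x^T Sigma x / noise_var); pick the maximizer.
    quad = np.einsum('ij,jk,ik->i', candidates, Sigma, candidates)
    return np.argmax(0.5 * np.log1p(quad / noise_var))

def posterior_update(mu, Sigma, x, y, noise_var=1.0):
    # Conjugate (rank-one) posterior update for Bayesian linear regression.
    s = Sigma @ x
    denom = noise_var + x @ s
    mu_new = mu + s * (y - x @ mu) / denom
    Sigma_new = Sigma - np.outer(s, s) / denom
    return mu_new, Sigma_new

# Toy closed-loop "experiment" (hypothetical dimensions and stimuli).
rng = np.random.default_rng(0)
d, n_candidates, n_trials = 20, 500, 100
true_rf = rng.standard_normal(d)                 # ground-truth receptive field
mu, Sigma = np.zeros(d), 10.0 * np.eye(d)        # Gaussian prior over RF weights
pool = rng.standard_normal((n_candidates, d))    # candidate stimuli

for _ in range(n_trials):
    x = pool[infomax_select(Sigma, pool)]        # choose most informative stimulus
    y = x @ true_rf + rng.standard_normal()      # simulated noisy response
    mu, Sigma = posterior_update(mu, Sigma, x, y)

print(np.corrcoef(mu, true_rf)[0, 1])            # estimate should track the truth
```

The loop structure is the same in the real setting; only the posterior update and the information-gain computation become harder (hence the particle filter).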

The second active learning project involves a collaboration with Greg Horwitz’s group at U. Washington, aimed at estimating the nonlinear color tuning properties of neurons in V1.  Here, the goal is to estimate an arbitrary nonlinear mapping from input space (the 3D space of cone contrasts, for Greg’s data) to spike rate, using a Gaussian Process prior over the space of (nonlinearly transformed) tuning curves. This extends Mijung’s 2011 NIPS paper to examine the role of the “learning criterion” and link function in active learning paradigms (submitted to AISTATS) and to incorporate response history and overdispersion, i.e., spike-count variance exceeding the mean (new work to be presented at Cosyne 2013).  We’re excited that Greg and his student Patrick Weller have started collecting data with the new method, and plan to compare it to conventional staircase methods.
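
Here is a bare-bones sketch of the tuning-curve version, using plain GP regression and uncertainty sampling (pick the candidate with the largest posterior variance) as a stand-in for the learning criteria actually studied in the paper; the real method uses a spiking observation model and a nonlinearly transformed GP, and the tuning function and parameters below are entirely hypothetical:

```python
import numpy as np

# Stand-in: plain GP regression + uncertainty sampling, not the paper's
# information-based criterion or overdispersed spike-count model.

def rbf_kernel(A, B, length=0.3, var=1.0):
    # Squared-exponential covariance between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xstar, noise_var=0.1):
    # Standard GP-regression posterior mean and variance at test points Xstar.
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = rbf_kernel(Xstar, Xstar).diagonal() - (v ** 2).sum(axis=0)
    return mean, var

# Toy 3D "cone contrast" space and a made-up tuning function.
rng = np.random.default_rng(1)
pool = rng.uniform(-1, 1, size=(400, 3))                        # candidate stimuli
true_rate = lambda X: np.exp(1.5 * np.sin(3 * X[:, 0]) - X[:, 1] ** 2)

X, y = pool[:2].copy(), true_rate(pool[:2])                     # seed trials
for _ in range(40):
    _, var = gp_posterior(X, y, pool)
    i = np.argmax(var)                                          # uncertainty sampling
    X = np.vstack([X, pool[i]])
    y = np.append(y, true_rate(pool[i:i + 1]) + 0.05 * rng.standard_normal())
```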

• Generalized Quadratic Models – The GQM is an extension of the GLM encoding model that incorporates a low-dimensional quadratic form (as opposed to a purely linear form) in the first stage.  This work descends directly from our 2011 NIPS paper on Bayesian Spike-Triggered Covariance, with application to both spiking (Poisson) and analog (Gaussian noise) responses. One appealing feature of this setup is the ability to connect maximum likelihood estimators with moment-based estimators (response-triggered average and covariance) via a trick we call the “expected log-likelihood”, an idea on which Alex Ramirez and Liam Paninski have also done some very elegant theoretical work.

Basically, what’s cool about the GQM framework is that it combines a lot of desirable things: (1) it can estimate a neuron’s (multi-dimensional, nonlinear) stimulus dependence very quickly when the stimulus distribution is “nice” (like STC and iSTAC); (2) it achieves efficient performance when the stimulus distribution isn’t “nice” (like MID / maximum likelihood); (3) it incorporates spike history (like the GLM), but with quadratic terms (making it more flexible than the GLM, and unconditionally stable); and (4) it applies to both spiking and analog data.  The work clarifies theoretical relationships between moment-based and likelihood-based formulations (novel, as far as we know, for the analog / Gaussian noise version).  This is joint work with Memming, Evan & Nicholas Priebe. (Jonathan gave a talk at SFN; new results to be presented at Cosyne 2013).
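
For concreteness, the GQM’s conditional intensity has roughly the form

\[
\lambda(\mathbf{x}) \;=\; f\!\left(\mathbf{x}^\top C\,\mathbf{x} \;+\; \mathbf{b}^\top\mathbf{x} \;+\; a\right),
\qquad
C \;=\; \sum_{i=1}^{k} w_i\,\mathbf{u}_i\mathbf{u}_i^\top \quad (k \ll \dim \mathbf{x}),
\]

with spike counts treated as Poisson (or analog responses as Gaussian) around this quadratic form. The notation here is mine rather than the paper’s, and the explicit low-rank parametrization of \(C\) is just one convenient way of writing the “low-dimensional” part.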

• Modeling decision-making signals in parietal cortex (LIP) – encoding and decoding analyses of the information content of spike trains in LIP, using a generalized linear model (with Memming and Alex Huk; presented in a talk by Memming at SFN 2012). Additional work by Kenneth and Jacob applies Bayesian inference to “switching” and “diffusion-to-bound” latent variable models for LIP spike trains, using MCMC and particle filtering (also presented at SFN 2012).

• Non-parametric Bayesian models for spike trains / entropy estimation – Evan Archer, Memming Park and I have worked on extending the popular “Nemenman–Shafee–Bialek” (NSB) entropy estimator to countably-infinite distributions (i.e., cases where one doesn’t know the true number of symbols).  We constructed novel priors using mixtures of Dirichlet Processes and Pitman-Yor Processes, arriving at what we call a Pitman-Yor Mixture (PYM) prior; the resulting Bayes least-squares entropy estimator is explicitly designed to handle data with power-law tails (first version published in NIPS 2012). We have some new work in this vein coming up at Cosyne, with Evan & Memming presenting a poster that models multi-neuron spike data with a Dirichlet process centered on a Bernoulli model (i.e., using a Bernoulli model as the base distribution of the DP).  Karin Knudson will present a poster about using the hierarchical Dirichlet process (HDP) to capture Markovian structure in spike trains and estimate entropy rates.
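
In rough symbols (my notation, and a simplification of the actual construction): the estimator is the posterior mean of entropy under a mixture prior over discrete distributions,

\[
\hat{H} \;=\; \mathbb{E}\!\left[\,H(\boldsymbol{\pi}) \mid \mathbf{x}\,\right],
\qquad
H(\boldsymbol{\pi}) \;=\; -\sum_i \pi_i \log \pi_i,
\qquad
p(\boldsymbol{\pi}) \;=\; \int \mathrm{PY}(\boldsymbol{\pi}\,|\,d,\alpha)\, p(d,\alpha)\,\mathrm{d}d\,\mathrm{d}\alpha,
\]

where NSB uses the same recipe with finite Dirichlet distributions in place of the Pitman-Yor process, and the hyper-prior over \((d,\alpha)\) is chosen so that the implied prior over entropy is close to flat.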

• Bayesian Efficient Coding – a new normative paradigm for neural coding, extending Barlow’s efficient coding hypothesis to a Bayesian framework. (Joint work with Memming; to appear at Cosyne 2013.)

• Coding with the Dichotomized Gaussian model – Ozan Koyluoglu has been working to understand the representational capacity of the DG model, which provides an attractive alternative to the Ising model for describing the joint dependencies in multi-neuron spike trains. The model is known in the statistics literature as the “multivariate probit”, and it seems there should be good opportunities for cross-pollination here.
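
As a quick illustration of what the DG model is (a sketch, not Ozan’s analysis): binary spike words are generated by thresholding a latent multivariate Gaussian, so marginal firing probabilities are set by the latent means and pairwise dependencies by the latent correlations. The rates and correlations below are made-up numbers:

```python
import numpy as np
from scipy.stats import norm

def sample_dg(mean_rates, latent_corr, n_samples, seed=None):
    """Sample binary spike words from a Dichotomized Gaussian: threshold a
    latent multivariate Gaussian so each neuron's marginal spike probability
    matches mean_rates. (The latent correlations are not the same as the
    spike-word correlations; a full DG fit solves for them separately.)"""
    rng = np.random.default_rng(seed)
    gamma = norm.ppf(mean_rates)              # latent means giving desired marginals
    L = np.linalg.cholesky(latent_corr)
    z = gamma + rng.standard_normal((n_samples, len(mean_rates))) @ L.T
    return (z > 0).astype(int)

# Toy example: 3 neurons with hypothetical rates and latent correlations.
rates = np.array([0.2, 0.4, 0.1])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
spikes = sample_dg(rates, corr, n_samples=10000, seed=0)
print(spikes.mean(axis=0))                    # should be close to `rates`
```

(Fitting a DG to data additionally requires solving for the latent correlations that reproduce the measured pairwise spike correlations; the sketch above just fixes them by hand.)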

Other ongoing projects include spike-sorting (with Jon Shlens, EJ Chichilnisky, & Eero Simoncelli), prior elicitation in Bayesian ideal observer models (with Ben Naecker), model-based extensions of the MID estimator for neural receptive fields (with Ross Williamson and Maneesh Sahani), Bayesian models for biases in 3D motion perception (with Bas Rokers), models of joint choice-related and stimulus-related variability in V1 (with Chuck Michelson & Eyal Seidemann; to be presented at Cosyne 2013), new models for psychophysical reverse correlation (with Jacob Yates), and Bayesian inference methods for regression and factor analysis in neural models with negative binomial spiking (with James Scott, published in NIPS 2012).

Conferences:  We presented our work this year at Cosyne (Feb: Salt Lake City & Snowbird), the CNS workshops (July: Atlanta), SFN (Oct: New Orleans), and NIPS (Dec: Lake Tahoe).

Reading Highlights

  • In the fall, we continued (re-started) our reading group on Non-parametric Bayesian models, focused on models of discrete data based on the Dirichlet Process, in particular the Hierarchical Dirichlet Process (Teh et al.) and the Sequence Memoizer (Wood et al.).
  • Kenneth has introduced us to Hybrid Monte Carlo, Riemannian Manifold HMC, and some other fancy Bayesian inference methods, and is preparing to tell us about some implementations (using CUDA) that run super fast on the GPU (if you have an NVIDIA graphics card).
  • We enjoyed reading Simon Wood’s paper (Nature 2010) about “simulated likelihood” methods for doing statistical inference in systems with chaotic dynamics. It’s a pretty cool idea, related to the Method of Simulated Moments, which he applies to some crazy chaotic (but simple) nonlinear models from population ecology.  Seems like an approach that might be useful for neuroscience applications (where we also have biophysical models described by nonlinear ODEs for which inference is difficult!). See the sketch just after this list.
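
For the curious, here is a minimal sketch of the simulated-likelihood idea applied to a Ricker-style map (roughly in the spirit of Wood’s ecology examples); the summary statistics and the simplified observation model below are stand-ins of my own, not the ones used in the paper:

```python
import numpy as np

def synthetic_loglik(theta, observed_stats, simulate, summarize, n_reps=200, seed=None):
    """Simulated ("synthetic") likelihood: simulate the model many times at
    theta, summarize each run with a statistic vector, fit a Gaussian to the
    simulated statistics, and score the observed statistics under it."""
    rng = np.random.default_rng(seed)
    S = np.array([summarize(simulate(theta, rng)) for _ in range(n_reps)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-6 * np.eye(S.shape[1])   # regularize
    resid = observed_stats - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (resid @ np.linalg.solve(Sigma, resid) + logdet
                   + len(mu) * np.log(2 * np.pi))

# Toy chaotic Ricker map (no observation noise layer, unlike the paper).
def simulate_ricker(theta, rng, T=200, n0=1.0):
    log_r, sigma = theta
    n, traj = n0, []
    for _ in range(T):
        n = n * np.exp(log_r - n + sigma * rng.standard_normal())
        traj.append(n)
    return np.array(traj)

def summarize(traj):
    # Hypothetical summary statistics: mean, spread, lag-1 autocorrelation.
    return np.array([traj.mean(), traj.std(), np.corrcoef(traj[:-1], traj[1:])[0, 1]])

rng = np.random.default_rng(2)
obs = summarize(simulate_ricker((3.8, 0.3), rng))
print(synthetic_loglik((3.8, 0.3), obs, simulate_ricker, summarize, seed=3))
```

The resulting log-likelihood surface over theta can then be handed to a standard optimizer or MCMC sampler.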

Milestones:

  • Karin Knudson: new lab member (Ph.D. student in mathematics), joined during fall semester.
  • Kenneth Latimer & Jacob Yates: passed INS qualifying exams.
  • Mijung Park: passed Ph.D. qualifying exam in ECE.