Sensory Coding Workshop @ MBI

This week, Memming and I are in Columbus, Ohio for a workshop on “Sensory and Coding”, organized by Brent Doiron, Adrienne Fairhall, David Kleinfeld, and John Rinzel.

Monday was “Big Picture Day”, and I gave a talk about Bayesian Efficient Coding, which represents our attempt to put Barlow’s Efficient Coding Hypothesis in a Bayesian framework, with an explicit loss function to specify what kinds of posteriors are “good”. One of my take-home bullet points was that “you can’t get around the problem of specifying a loss function”, and that entropy is no less arbitrary a choice than any other. This has led to some stimulating lunchtime discussions with Elad Schneidman, Surya Ganguli, Stephanie Palmer, David Schwab, and Memming over whether entropy really is special (or not!).
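As a toy numerical illustration of that point (my own Python sketch with made-up posteriors, not anything from the talk): an entropy-based criterion and a squared-error-based criterion can rank the same two posteriors in opposite order, so adopting entropy as the measure of a “good” posterior is itself a loss-function choice.

```python
import numpy as np

# Two toy posteriors over a 1-D stimulus value (values and probabilities are
# made up for illustration).  Entropy and posterior variance (squared-error
# loss) disagree about which one is "better".
x = np.array([-1.0, 0.0, 1.0])
post_a = np.array([0.5, 0.0, 0.5])    # bimodal: low entropy, high variance
post_b = np.array([0.25, 0.5, 0.25])  # unimodal: higher entropy, lower variance

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def variance(p, x):
    m = np.sum(p * x)
    return np.sum(p * (x - m) ** 2)

for name, p in [("A (bimodal)", post_a), ("B (unimodal)", post_b)]:
    print(f"{name}: entropy = {entropy_bits(p):.2f} bits, "
          f"variance = {variance(p, x):.2f}")
# A (bimodal):  entropy = 1.00 bits, variance = 1.00
# B (unimodal): entropy = 1.50 bits, variance = 0.50
```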

It’s been a great workshop so far, with exciting talks from a panoply of heavy hitters, including Garrett Stanley, Steve Baccus, Fabrizio Gabbiani, Tanya Sharpee, Nathan Kutz, Adam Kohn, and Anitha Pasupathy. You can see the full lineup here:
http://mbi.osu.edu/2012/ws6schedule.html

Lab meeting 4/18/2011

Today we discussed Nemenman et al.’s “Neural Coding of Natural Stimuli: Information at Sub-Millisecond Resolution”, PLoS Comp. Biol. 2008.

Given a slowly varying naturalistic stimulus with correlations on a time scale of 55 ms, is there information in the spike trains of the fly H1 neuron at sub-millisecond time scales? To quantify this, mutual information was estimated for a range of word lengths and bin sizes. The main result (figure 4D) suggests there is information at the smaller time scales, and this is demonstrated directly by choosing a few spike patterns that map to the same coarse-time-scale representation and showing the stimulus (velocity) conditioned on each of those patterns (figure 5).
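As a minimal sketch of the word-based discretization behind these estimates (my own Python, with a hypothetical spikes_to_words function and made-up spike times and parameter values, not code from the paper): spike trains are binned at some resolution and consecutive bins are grouped into binary “words”, whose distribution is then used for the entropy estimates.

```python
import numpy as np

def spikes_to_words(spike_times, t_start, t_stop, bin_size, word_length):
    """Discretize a spike train into overlapping binary 'words'.

    spike_times : array of spike times (seconds)
    bin_size    : time-bin width in seconds (e.g. 2e-4 for 0.2 ms)
    word_length : number of consecutive bins per word (T = word_length * bin_size)
    Returns an array of integer word labels, one per word position.
    """
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    counts = np.histogram(spike_times, bins=edges)[0]
    binary = (counts > 0).astype(int)        # bins small enough to hold at most one spike
    n_words = len(binary) - word_length + 1
    powers = 2 ** np.arange(word_length)     # encode each word as an integer for counting
    return np.array([binary[i:i + word_length] @ powers for i in range(n_words)])

# Hypothetical usage: 0.2 ms bins, 16-bin words (T = 3.2 ms), fake spike times over 5 s
rng = np.random.default_rng(0)
fake_spikes = np.sort(rng.uniform(0.0, 5.0, size=500))
words = spikes_to_words(fake_spikes, 0.0, 5.0, 2e-4, 16)
```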

The mutual information rate is estimated using the NSB entropy estimator, with extra extrapolation/fitting steps to (1) obtain the asymptotic entropy rate (the infinite-data limit), (2) extrapolate to large word size, and (3) remove empirical fluctuations of the estimate due to structure in the stimulus or response. These are more or less empirical procedures for getting better estimates. The mutual information is then computed as the difference between the marginal and conditional (noise) entropies, \mathcal{I}(R;S) = H(R) - H(R|S). One quantity that was not extrapolated was the limit of bin size going to zero.
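For orientation, here is a naive plug-in version of that calculation from repeated trials (a Python sketch under my own assumptions about function names and array layout; it deliberately omits the NSB estimator and all the extrapolations above, which are exactly what the paper adds to make the estimates trustworthy).

```python
import numpy as np

def plugin_entropy(samples):
    """Naive (plug-in) entropy, in bits, of a discrete sample."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def plugin_mi_rate(word_matrix, word_duration):
    """Crude direct-method information rate (bits/s).

    word_matrix   : (n_trials, n_times) array of word labels from repeated
                    presentations of the same stimulus (rows = repeats).
    word_duration : duration T of one word, in seconds.
    H(R) is taken over all observed words (ideally it would come from a long
    non-repeated stimulus segment); H(R|S) averages the across-repeat entropy
    at each time slice, with time indexing the stimulus.
    """
    total_entropy = plugin_entropy(word_matrix.ravel())
    noise_entropy = np.mean([plugin_entropy(word_matrix[:, t])
                             for t in range(word_matrix.shape[1])])
    return (total_entropy - noise_entropy) / word_duration
```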

The NSB estimator is a Bayesian estimator that uses an approximately flat prior on the entropy itself. The authors show that in the 1D case, a uniform prior on probability space results in a poor entropy estimator. It would be interesting to see the actual prior distribution over probabilities implied by a flat prior on entropy.
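The converse direction is easy to check numerically (my own sketch, not from the paper): sampling distributions from the flat Dirichlet prior on the probability simplex and computing their entropies shows that the induced prior on entropy concentrates tightly somewhat below the maximum log2(K), which is one way to see why a flat prior on probabilities makes a poor entropy estimator.

```python
import numpy as np

# Sample distributions p ~ Dirichlet(1, ..., 1) -- the "flat" prior on the
# probability simplex -- and look at the induced prior on the entropy H(p).
# For a large alphabet size K, this induced prior sits in a narrow band a
# little below log2(K), i.e. it strongly prejudges the entropy.
rng = np.random.default_rng(0)
K = 1000                                   # alphabet size (e.g. number of possible words)
p_samples = rng.dirichlet(np.ones(K), size=5000)
H = -np.sum(p_samples * np.log2(p_samples + 1e-300), axis=1)
print(f"induced prior on H: mean = {H.mean():.2f} bits, "
      f"std = {H.std():.3f} bits, max possible = {np.log2(K):.2f} bits")
```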

One question about the overall methodology is the estimation of the noise entropy from just 5 seconds of data repeated 100 times. How diverse is the stimulus? How robust is the noise entropy estimated this way?

Lab Meeting 3/30/11 (Wed)

Mijung will present the following paper this Wednesday during the lab meeting:

Sequential Optimal Design of Neurophysiology Experiments
Jeremy Lewi, Robert Butera, Liam Paninski
Neural Computation 21, 619-687 (2009) (pdf)


“Since this is a long paper (69 pages in total; I guess this is almost the same as the author’s PhD thesis), I will aim to summarize the math part and look at the simulation results, which cover chapters 1–5.”