Lab meeting 6/20/2011

Today I presented a paper from Liam’s group: “A new look at state-space models for neural data”, Paninski et al., JCNS 2009.

The paper presents a high-level overview of state-space models for neural data, with an emphasis on statistical inference methods.  The basic setup of these models is the following:

• Latent variable Q, defined by a dynamics distribution: P(q_{t+1} | q_t).
• Observed variable Y, defined by an observation distribution: P(y_t | q_t).

These two ingredients ensure that the joint probability of latents and observed variables is
P(Q,Y) = P(q_1 ) P(y_1|q_1) \prod_{t=2}^T P(y_t | q_t) P(q_{t}|q_{t-1}) .
A variety of applications are illustrated (e.g., Q = common input noise; Y = multi-neuron spike trains).
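As a concrete (hypothetical) instance of this factorization, here is a scalar AR(1) latent with Gaussian observations; the joint log-probability accumulates term by term exactly as in the product above. All parameter values (a, s, s0, r) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T, a, s, s0, r = 100, 0.95, 0.05, 1.0, 0.1  # illustrative parameters

# Sample a latent AR(1) trajectory and noisy Gaussian observations
q = np.empty(T)
q[0] = rng.normal(0.0, np.sqrt(s0))
for t in range(1, T):
    q[t] = a * q[t-1] + rng.normal(0.0, np.sqrt(s))
y = q + rng.normal(0.0, np.sqrt(r), size=T)

def log_gauss(x, mu, var):
    """Log-density of N(mu, var) evaluated at x."""
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mu)**2 / var

# Joint factorization: P(q_1) P(y_1|q_1) * prod_{t=2}^T P(y_t|q_t) P(q_t|q_{t-1})
logp = log_gauss(q[0], 0.0, s0) + log_gauss(y[0], q[0], r)
for t in range(1, T):
    logp += log_gauss(y[t], q[t], r) + log_gauss(q[t], a * q[t-1], s)
```

The Markov structure is what makes everything downstream tractable: the log joint is a sum of terms each touching at most two adjacent time points.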

The two problems we’re interested in solving, in general, are:
(1) Filtering / Smoothing:  inferring Q from noisy observations Y, given the model parameters \theta.
(2) Parameter Fitting: inferring \theta from observations Y.

The “standard” approach to these problems involves: (1) recursive approximate-inference methods that update a Gaussian approximation to P(q_t|Y) using its first two moments; and (2) Expectation-Maximization (EM) for inferring \theta. By contrast, this paper emphasizes: (1) exact maximization for Q, which is tractable in O(T) per iteration of Newton’s method, because the Hessian of the log-posterior is banded (tridiagonal for Markovian latents); and (2) direct inference for \theta by maximizing the Laplace approximation to P(Y|\theta). When the dynamics are linear and the noise is Gaussian, the two methods coincide exactly (a Gaussian’s maximum equals its mean, and the forward and backward recursions of Kalman filtering/smoothing carry out the same set of operations Newton’s method needs). But for non-Gaussian noise or nonlinear dynamics, the latter method may (the paper argues) give much more accurate answers at approximately the same computational cost.
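For the linear-Gaussian case, the O(T) Newton claim is easy to check numerically: the log-posterior is quadratic, its Hessian is tridiagonal, and the MAP estimate drops out of a single banded solve. A minimal sketch for a scalar AR(1) latent with Gaussian observations (parameters are made up for illustration):

```python
import numpy as np
from scipy.linalg import solveh_banded

rng = np.random.default_rng(2)
T, a, s, s0, r = 1000, 0.95, 0.05, 1.0, 0.1  # illustrative parameters
y = rng.normal(size=T)                       # stand-in observations

# Tridiagonal Hessian of the negative log-posterior:
#   J(q) = q_1^2/(2 s0) + sum_t (q_t - a q_{t-1})^2/(2s) + sum_t (y_t - q_t)^2/(2r)
diag = np.full(T, 1/s + a**2/s + 1/r)
diag[0] = 1/s0 + a**2/s + 1/r
diag[-1] = 1/s + 1/r
off = np.full(T - 1, -a/s)

# SciPy's symmetric banded storage: row 0 = superdiagonal (left-padded), row 1 = diagonal
ab = np.zeros((2, T))
ab[0, 1:] = off
ab[1, :] = diag

# One O(T) banded solve gives the exact MAP path (Newton converges in one step here)
q_map = solveh_banded(ab, y / r)
```

The same banded solve is the inner loop of Newton’s method in the non-Gaussian case; only the Hessian and gradient get recomputed each iteration.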

Key ideas of the paper are:

  • exact maximization of a log-concave posterior
  • O(T) computational cost, due to sparse (tridiagonal or banded) Hessian.
  • the Laplace approximation (Gaussian approximation to the posterior using its maximum and second-derivative matrix), which is (more likely to be) justified for log-concave posteriors
  • log-barrier (interior-point) method for constrained problems (which preserves sparsity)
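To make the Laplace-approximation idea concrete: approximate log P(Y|\theta) by log P(Y, q̂|\theta) + (T/2) log 2π − (1/2) log|H|, where q̂ is the posterior mode and H the Hessian of the negative log joint at q̂. In the linear-Gaussian case the posterior really is Gaussian, so the approximation is exact, which makes a handy sanity check. A sketch using a scalar AR(1) model with illustrative parameters (small T so the exact marginal can be computed densely):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
T, a, s, s0, r = 50, 0.9, 0.1, 1.0, 0.2  # illustrative parameters
y = rng.normal(size=T)                   # stand-in observations

# Tridiagonal prior precision implied by the AR(1) dynamics
Lam = np.zeros((T, T))
Lam[0, 0] = 1/s0 + a**2/s
for t in range(1, T - 1):
    Lam[t, t] = 1/s + a**2/s
Lam[T-1, T-1] = 1/s
for t in range(T - 1):
    Lam[t, t+1] = Lam[t+1, t] = -a/s

H = Lam + np.eye(T) / r            # posterior precision (still tridiagonal)
q_hat = np.linalg.solve(H, y / r)  # posterior mode

# log P(Y, q_hat) = log-likelihood + log-prior at the mode
logdet = lambda M: np.linalg.slogdet(M)[1]
ll = -0.5 * np.sum((y - q_hat)**2) / r - 0.5 * T * np.log(2 * np.pi * r)
lp = -0.5 * q_hat @ Lam @ q_hat + 0.5 * logdet(Lam) - 0.5 * T * np.log(2 * np.pi)

# Laplace approximation to the log-evidence
laplace = ll + lp + 0.5 * T * np.log(2 * np.pi) - 0.5 * logdet(H)

# Exact marginal for comparison: y ~ N(0, Lam^{-1} + r I)
exact = multivariate_normal.logpdf(y, mean=np.zeros(T),
                                   cov=np.linalg.inv(Lam) + r * np.eye(T))
```

For log-concave but non-Gaussian likelihoods the equality becomes an approximation, but log-concavity keeps the posterior unimodal, which is why the paper argues the Laplace approximation is (more likely to be) well justified there.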

Next week: we’ll do a basic tutorial on Kalman Filtering / Smoothing (and perhaps, EM).

