# Bayesian inference for Poisson-GLM with Laplace prior

During the last lab meeting, we talked about using expectation propagation (EP), an approximate Bayesian inference method, to fit Poisson generalized linear models (Poisson-GLMs) under Gaussian and Laplace (i.e., double-exponential) priors on the filter coefficients. Both priors give rise to log-concave posteriors, and the Laplace prior has the useful property that the MAP estimate is often sparse (i.e., many weights are exactly zero). Note, however, that EP approximates the posterior mean, which is never sparse.
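As a point of reference, here is a minimal sketch (not the EP computation itself) of MAP estimation for a Poisson-GLM with an exponential nonlinearity under a Laplace prior, via proximal gradient descent (ISTA); the soft-threshold step is what drives many MAP weights exactly to zero. All names, data, and settings below are illustrative.

```python
import numpy as np

def poisson_glm_map_l1(X, y, lam=5.0, step=1e-3, n_iter=5000):
    """MAP weights for y_i ~ Poisson(exp(x_i . w)), prior p(w) ∝ exp(-lam * ||w||_1)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (np.exp(X @ w) - y)   # gradient of the Poisson negative log-likelihood
        w = w - step * grad                # gradient step on the smooth (likelihood) term
        # Proximal step for the L1 prior: soft-thresholding zeroes out small weights.
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

rng = np.random.default_rng(0)
X = 0.3 * rng.normal(size=(500, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -0.5, 0.8]              # sparse ground-truth filter
y = rng.poisson(np.exp(X @ w_true))
w_map = poisson_glm_map_l1(X, y)
print("nonzero MAP weights:", np.flatnonzero(w_map))
```

Running this typically leaves only a handful of nonzero weights, whereas the posterior mean that EP targets spreads small nonzero mass over every coordinate.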

Fully Bayesian inference under a Laplace prior is quite challenging. Unfortunately, our best friend the Laplace approximation breaks down here: the prior is non-differentiable at zero, so the log-posterior lacks the well-defined curvature at the mode that the approximation requires.
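To make the obstacle concrete (a textbook statement of the problem, not material from the meeting), assume a Laplace prior with scale $b$:

```latex
% Log-posterior under a Laplace prior with scale b:
\[
\log p(w \mid Y, X) = \log p(Y \mid X, w) - \tfrac{1}{b}\|w\|_1 + \text{const}.
\]
% The Laplace approximation replaces the posterior with a Gaussian centered at
% the mode, with covariance given by the inverse Hessian of the negative
% log-posterior there:
\[
p(w \mid Y, X) \approx \mathcal{N}\big(w_{\mathrm{MAP}},\, H^{-1}\big),
\qquad
H = -\nabla^2 \log p(w \mid Y, X)\,\big|_{w = w_{\mathrm{MAP}}},
\]
% but \partial_{w_j}|w_j| does not exist at w_j = 0, and the L1 term drives many
% coordinates of w_MAP exactly to 0, which is precisely where H is undefined.
```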

The goal is to maximize the likelihood $p(Y|x,w) = \prod_i \frac{e^{Y_i x_i w}}{1 + e^{x_i w}}$, where $Y$ is the vector of the observer's binary responses, $x$ is the stimulus matrix (trials x stimulus dimensions) augmented by a column of ones (to capture the observer's bias), $x_i$ is its $i$th row, and $w$ is the observer's kernel (the same length as a row of $x$). Using a sparse (L1-norm) prior over a set of smooth basis functions (defined by a Laplacian pyramid) reduces the number of trials required to fit the kernel while adding only one hyperparameter. The authors use simulations and real psychophysical data to fit observers' psychophysical kernels, and their code is available here.
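As a rough illustration of that recipe (a sketch under simplifying assumptions, not the authors' code): a toy Gaussian-bump basis stands in for the Laplacian pyramid, and scikit-learn's L1-penalized logistic regression plays the role of the sparse prior, with `C` as the single hyperparameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_pix, n_basis = 2000, 64, 16

# Smooth basis B (n_pix x n_basis): Gaussian bumps at evenly spaced centers,
# a stand-in for one level of a Laplacian pyramid.
centers = np.linspace(0, n_pix - 1, n_basis)
B = np.exp(-0.5 * ((np.arange(n_pix)[:, None] - centers) / 3.0) ** 2)

# Simulated observer: a smooth kernel built from one basis function,
# with Bernoulli responses through a logistic link.
c_true = np.where(np.arange(n_basis) == 5, 1.5, 0.0)
k_true = B @ c_true
X = rng.normal(size=(n_trials, n_pix))
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ k_true))))

# Fit in basis space with an L1 penalty; the fitted intercept plays the role
# of the column of ones (the observer's bias) in the paragraph above.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X @ B, Y)
k_hat = B @ clf.coef_.ravel()              # kernel mapped back to stimulus space
print("active basis coefficients:", np.flatnonzero(clf.coef_.ravel()))
```

Because the L1 penalty acts on the basis coefficients rather than the pixels, the recovered kernel `k_hat` stays smooth even though the coefficient vector is sparse.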