Evaluating point-process likelihoods

We recently discussed two papers proposing improvements to the commonly used discrete approximation of the log-likelihood for a point process (Paninski, 2004). The likelihood is written as $ll(T|t_{1,..,N(T)}) = \sum_{i = 1}^{N(T)} \log( \lambda(t_i|\mathcal{H}_{t_i})) - \int_0^T \lambda(t|\mathcal{H}_{t}) dt$
where $t_i$ are the spike times and $\lambda(t|\mathcal{H}_t)$ is the conditional intensity function (CIF) of the process at time $t$ given the preceding spikes. Typically, the integral in this equation cannot be evaluated in closed form. The standard approximation bins time into a regular lattice with bin size $\delta$ and computes $ll(T|t_{1,..,N(T)}) \approx \sum_{i = 1}^{l} \left[ \Delta N_i \log(\lambda_i \delta) - \lambda_i\delta \right]$
where $\Delta N_i$ is the number of spikes in the $i$th bin. Both papers demonstrate that smarter approximations to the integral are better for point-process statistics than naïvely binning spike train data.
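The standard binned approximation is a few lines of numpy. The sketch below assumes a history-free CIF evaluated at bin centers (the function name `discrete_loglik` and the `cif` callable are ours, for illustration):

```python
import numpy as np

def discrete_loglik(spike_times, cif, T, delta):
    """Standard binned approximation to the point-process log-likelihood.

    cif(t) is a vectorized intensity function; for this sketch it is
    evaluated only at bin centers and ignores spike history.
    """
    n_bins = int(round(T / delta))
    edges = np.linspace(0.0, T, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    lam = cif(centers)                             # lambda_i for each bin
    dN, _ = np.histogram(spike_times, bins=edges)  # spike count per bin
    return np.sum(dN * np.log(lam * delta) - lam * delta)
```

Note that for a homogeneous Poisson process with rate $\lambda$ this reduces to $N \log(\lambda\delta) - \lambda T$, which differs from the exact log-likelihood $N \log \lambda - \lambda T$ only by the constant offset $N \log \delta$.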

The first paper we discussed (Citi et al., 2014) used the fact that neurons have an absolute refractory period after each spike to derive a correction term that accounts for the expected placement of spikes within a bin: $ll(T|t_{1,..,N(T)}) \approx \sum_{i = 1}^{l} \left[ \Delta N_i \log(\lambda_i \delta) - \left( 1-\frac{\Delta N_i}{2}\right)\lambda_i\delta \right]$
This correction is extremely simple to implement, and the authors demonstrate that it not only provides a better approximation to the likelihood, but also allows GLMs to be successfully fit to data using larger bin sizes than the naïve approximation permits.
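To see how simple the correction is, here is a one-line sketch given the per-bin spike counts and intensities (the function name `citi_loglik` is ours; the arguments match the $\Delta N_i$, $\lambda_i$, and $\delta$ above):

```python
import numpy as np

def citi_loglik(dN, lam, delta):
    """Corrected discrete log-likelihood in the style of Citi et al. (2014).

    dN: spike counts per bin; lam: CIF evaluated per bin; delta: bin width.
    The only change from the standard approximation is the (1 - dN/2)
    factor multiplying the integral term.
    """
    return np.sum(dN * np.log(lam * delta) - (1.0 - dN / 2.0) * lam * delta)
```

For bins with no spikes ($\Delta N_i = 0$) the factor is 1 and the term is unchanged; for bins with one spike it halves the integral contribution, reflecting that on average the spike sits mid-bin and the CIF is zero for the refractory remainder.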

The second paper (Mena & Paninski, submitted) proposes to approximate the integral in the point-process likelihood using quadrature methods. Assuming the CIF is zero for some refractory time $\tau$ after each spike, the integral decomposes as $\int_0^T \lambda(t|\mathcal{H}_{t}) dt = \int_0^{t_1} \lambda(t|\mathcal{H}_{t}) dt + \sum_{i=1}^{N(T)} \int_{t_i+\tau}^{t_{i+1}} \lambda(t|\mathcal{H}_{t}) dt$, where $t_{N(T)+1} = T$.
The smaller integrals $\int_{t_i+\tau}^{t_{i+1}} \lambda(t|\mathcal{H}_{t}) dt$ are then computed using Gaussian quadrature.
This method produces a very accurate approximation to the true (continuous) likelihood, and is more accurate than the likelihood of Citi et al. while evaluating the CIF at fewer points. In addition, they point out that their likelihood remains just as simple to maximize as the standard approximation when non-adaptive quadrature methods are used. However, they do not show that their method improves parameter estimation of a GLM in practice.
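A minimal sketch of this quadrature scheme, again assuming a history-free CIF (the function name `quadrature_integral` is ours; the paper's actual implementation handles history dependence and quadrature order selection):

```python
import numpy as np

def quadrature_integral(spike_times, cif, T, tau, order=3):
    """Gauss-Legendre approximation of the integral of lambda(t) over [0, T],
    assuming lambda = 0 for a refractory period tau after each spike."""
    nodes, weights = np.polynomial.legendre.leggauss(order)  # on [-1, 1]
    # Segments to integrate: [0, t_1], then [t_i + tau, t_{i+1}] per spike,
    # with the final segment ending at T.
    spikes = np.asarray(spike_times, dtype=float)
    starts = np.concatenate(([0.0], spikes + tau))
    ends = np.concatenate((spikes, [T]))
    total = 0.0
    for a, b in zip(starts, ends):
        if b <= a:
            continue  # segment entirely inside a refractory period
        # Affine map of the quadrature nodes from [-1, 1] to [a, b]
        t = 0.5 * (b - a) * nodes + 0.5 * (b + a)
        total += 0.5 * (b - a) * np.dot(weights, cif(t))
    return total
```

Because an order-$k$ Gauss-Legendre rule integrates polynomials of degree $2k-1$ exactly, a smooth CIF between spikes is captured with only a handful of evaluations per inter-spike interval, which is the source of the accuracy-per-evaluation advantage noted above.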