NP Bayes reading group (9/27): hierarchical DPs

Our second NPB reading group meeting took aim at the seminal 2006 paper (with >1000 citations!) by Teh, Jordan, Beal & Blei on Hierarchical Dirichlet Processes. We were joined by newcomers Piyush Rai (a newly arrived SSC postdoc) and Ph.D. students Dan Garrette (CS) and Liang Sun (mathematics), both of whom have experience with natural language models.

We established a few basic properties of the hierarchical DP, such as the fact that it creates dependencies between DPs by endowing them with a common base measure, which is itself sampled from a DP. That is:

  • G_0 \sim DP(\gamma, H)     (“global measure” sampled from DP with base measure H and concentration \gamma).
  • G_j|\alpha_0,G_0 \sim DP(\alpha_0,G_0)  (sequence of conditionally independent random measures with common base measure G_0, e.g., G_j are distributions over clusters from data collected on different days)
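To make the two-level construction concrete, here is a minimal truncated stick-breaking sketch in Python/NumPy; the truncation levels, concentration values, and the choice of a standard-normal base measure H are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_weights(concentration, trunc):
    """Truncated GEM(concentration) stick-breaking weights."""
    v = rng.beta(1.0, concentration, size=trunc)
    v[-1] = 1.0                                    # absorb leftover stick mass at truncation
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

gamma, alpha0, K, T = 1.0, 1.0, 50, 50             # concentrations and truncation levels (illustrative)

# Global measure G_0 = sum_k beta_k delta_{phi_k}:  beta ~ GEM(gamma), phi_k ~ H
beta = stick_weights(gamma, K)
phi = rng.standard_normal(K)                       # H taken to be N(0,1) purely for illustration

# Group measure G_j ~ DP(alpha0, G_0): because G_0 is discrete, each atom of G_j is a
# re-draw of some phi_k, so the groups share atoms and differ only in their weights.
def sample_group_measure():
    w = stick_weights(alpha0, T)                   # group-level stick weights
    atoms = rng.choice(K, size=T, p=beta)          # atom indices drawn from G_0
    return np.bincount(atoms, weights=w, minlength=K)  # pi_j, with G_j = sum_k pi_jk delta_{phi_k}

G1, G2 = sample_group_measure(), sample_group_measure()
print(np.argsort(G1)[-3:], np.argsort(G2)[-3:])    # heaviest shared atoms, reweighted per group
```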

Beyond this, we got bogged down in confusion over metaphors and interpretations, unclear whether the G_j's were topics or documents or tables or restaurants or ethnicities, and were hampered by having two different versions of the manuscript floating around with different page numbers and figures.
This week: we’ll take up where we left off, focusing on Section 4 (“Hierarchical Dirichlet Processes”), with discussion led by Piyush. We’ll agree to show up with the same (“official journal”) version of the manuscript, available here.

Time: 4:00 PM, Thursday, Oct 4.
Location: SEA 5.106
Please email pillow AT mail.utexas.edu if you’d like to be added to the announcement list.

Revivifying the NP Bayes Reading Group

After a nearly 1-year hiatus, we’ve restarted our reading group on non-parametric (NP) Bayesian methods, focused on models for discrete data based on generalizations of the Dirichlet and other stick-breaking processes.

Thursday (9/20) was our first meeting, and Karin led a discussion of:

Teh, Y. W. (2006). A hierarchical Bayesian language model based on Pitman-Yor processes. Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 985-992.

In the first meeting, we made it only as far as describing the Pitman-Yor (PY) process, a stochastic process whose samples are random probability distributions, and two methods for sampling from it:

  1. Chinese Restaurant sampling (aka “Blackwell-MacQueen urn scheme”), which directly provides samples \{X_i\} from distribution G \sim PY with G marginalized out.
  2. Stick-breaking, which samples the distribution G = \sum_i \pi_i \delta_{\phi_i} explicitly, using independent (but not identically distributed) Beta random variables to obtain the stick weights \pi_i.
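For concreteness, here is a minimal truncated stick-breaking sampler for PY(d, \theta) in Python/NumPy; the truncation level and the standard-normal base measure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def py_stick_breaking(d, theta, base_sampler, trunc=1000):
    """Truncated stick-breaking draw from PY(d, theta) with base measure H.

    V_i ~ Beta(1 - d, theta + i*d)   (independent, not identically distributed),
    pi_i = V_i * prod_{j<i} (1 - V_j),   phi_i ~ H,
    and G = sum_i pi_i * delta_{phi_i}.
    """
    i = np.arange(1, trunc + 1)
    v = rng.beta(1.0 - d, theta + i * d)
    v[-1] = 1.0                                    # absorb remaining stick mass at truncation
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    phi = base_sampler(trunc)                      # atoms drawn iid from the base measure H
    return pi, phi

# Example: PY(d=0.5, theta=1.0) with a standard-normal base measure (illustrative choice).
pi, phi = py_stick_breaking(0.5, 1.0, rng.standard_normal)
draws = rng.choice(phi, size=10, p=pi)             # X_i | G  ~  G
print(np.sort(pi)[::-1][:5])                       # the few largest stick weights
```

Setting d = 0 recovers the ordinary DP stick-breaking construction, whose sticks are iid Beta(1, \theta).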

We briefly discussed the intuition for the hierarchical PY process, which uses a PY process as the base measure for PY process priors at deeper levels of the hierarchy (applied here to develop an n-gram model for natural language).
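Roughly, in the notation of Teh (2006), the distribution G_u over the word following a context u gets a PY prior whose base measure is the distribution for the shortened context \pi(u), obtained by dropping the earliest word:

\displaystyle G_u \mid G_{\pi(u)} \sim PY\big(d_{|u|}, \theta_{|u|}, G_{\pi(u)}\big),

with the recursion bottoming out at the empty context, whose base measure is the uniform distribution over the vocabulary.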

Next week: We’ve decided to go a bit further back in time to read:

Teh, Y. W., Jordan, M. I., Beal, M. J., & Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566-1581.

Time: Thursday (9/27), 4:00pm.
Location: Pillow lab
Presenter: Karin

note: if you’d like to be added to the email announcement list for this group, please send email to pillow AT mail.utexas.edu.

NP Bayes Reading Group: 2nd meeting

Continuing from last week, we discussed the formulation of generative clustering (a mixture model) with a fixed number of clusters K, using a Dirichlet distribution as the prior over cluster proportions, following Jordan’s slides. The definition of the Dirichlet process (DP) and its existence were briefly shown via the Kolmogorov extension theorem. Following Sethuraman (1994), we discussed the stick-breaking construction of the DP. Stick-breaking yields the size-biased permutation of the Poisson-Dirichlet distribution obtained via the Kingman limit (Kingman, 1975). The following fun facts about the (extended) Dirichlet distribution are from Sethuraman (1994).
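As a concrete reference point, here is a minimal generative sketch of the finite-K mixture setup in Python/NumPy; the Gaussian likelihood, the symmetric Dir(\alpha/K, \ldots, \alpha/K) prior, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

K, N, alpha = 5, 200, 1.0
pi = rng.dirichlet(np.full(K, alpha / K))          # mixing weights ~ Dir(alpha/K, ..., alpha/K)
mu = 5.0 * rng.standard_normal(K)                  # cluster parameters drawn from a base distribution
z = rng.choice(K, size=N, p=pi)                    # cluster assignments
x = rng.normal(mu[z], 1.0)                         # observations given their cluster

print(np.bincount(z, minlength=K))                 # cluster sizes; some clusters may be empty
```

With the symmetric Dir(\alpha/K, \ldots, \alpha/K) prior, letting K \to \infty recovers the DP mixture model, which is one way to motivate the DP as the infinite-cluster limit of this construction.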

Fun Fact 1 Let e_j be the n-dimensional vector of 0’s with a 1 in the j-th position. Then

\displaystyle Dir(e_j) = \delta_{e_j},

i.e., a draw from Dir(e_j) equals e_j with probability 1.

Fun Fact 2 Let U, V, and W be independent with

\displaystyle U \sim Dir(\alpha_1, \ldots, \alpha_n), \quad V \sim Dir(\gamma_1, \ldots, \gamma_n), \quad W \sim Beta\Big(\sum_i \alpha_i, \sum_i \gamma_i\Big).

Then

\displaystyle W U + (1-W) V \sim Dir(\alpha_1 + \gamma_1, \ldots, \alpha_n + \gamma_n).

Fun Fact 3 Let \sum_j \gamma_j = 1. Then

\displaystyle \sum_j \gamma_j \, Dir\big((\alpha \gamma_1, \ldots, \alpha \gamma_n) + e_j\big) = Dir(\alpha \gamma_1, \ldots, \alpha \gamma_n),

where the left-hand side denotes a mixture of Dirichlet distributions.
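Fun Fact 2 is easy to sanity-check numerically; here is a quick Monte Carlo sketch in Python/NumPy (the particular \alpha, \gamma values and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 1.0, 3.0])
gamma = np.array([0.5, 4.0, 1.5])
n = 200_000

U = rng.dirichlet(alpha, size=n)                   # U ~ Dir(alpha)
V = rng.dirichlet(gamma, size=n)                   # V ~ Dir(gamma)
W = rng.beta(alpha.sum(), gamma.sum(), size=n)     # W ~ Beta(sum(alpha), sum(gamma))

mix = W[:, None] * U + (1.0 - W[:, None]) * V      # W*U + (1-W)*V
direct = rng.dirichlet(alpha + gamma, size=n)      # Dir(alpha + gamma)

# First and second moments should agree up to Monte Carlo error.
print(mix.mean(axis=0), direct.mean(axis=0))
print(mix.var(axis=0), direct.var(axis=0))
```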

Next week, we will continue the discussion of the DP as a prior for nonparametric Bayesian clustering, the posterior of the DP, and how to do inference with the DP (Jordan slide #45).

Possible further exploration:

  • Sampling from the Poisson-Dirichlet distribution (Donnelly-Tavaré-Griffiths sampling?)
  • Proof of Lemma 3.2 from Sethuraman 1994

The number of tables in a CRP has mean \simeq \alpha \log(n) (a simple proof can be found in Teh’s DP notes), and the induced random partition follows the Ewens sampling formula.
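A quick simulation sketch of this (the values of \alpha, n, and the number of repetitions are arbitrary), comparing the empirical mean number of tables with \alpha \log(n):

```python
import numpy as np

rng = np.random.default_rng(3)

def crp_num_tables(alpha, n):
    """Seat n customers sequentially; return the number of occupied tables."""
    counts = []                                     # customers currently at each table
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= alpha + i                          # existing tables prop. to size, new table prop. to alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                        # open a new table
        else:
            counts[k] += 1
    return len(counts)

alpha, n, reps = 2.0, 1000, 200
tables = [crp_num_tables(alpha, n) for _ in range(reps)]
print(np.mean(tables), alpha * np.log(n))           # empirical mean vs. alpha * log(n)
```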