Category Archives: NP Bayes reading group
In today’s NP Bayes discussion group we returned to the 2006 Hierarchical Dirichlet Process (HDP) paper by Teh et al to discuss sampling-based inference. We spent most of our time sorting through the notational soup needed to specify the HDP variables and their relationships to one another. This led to a brief discussion of implementation issues, and finally a description of the three Gibbs sampling schemes presented in the paper.
A Hierarchical Pitman-Yor Model of Natural Language
In the lab meeting on 9/17, we discussed the hierarchical, non-parametric Bayesian model for discrete sequence data presented in:
Wood, Archambeau, Gasthaus, James, & Teh, A Stochastic Memoizer for Sequence Data. ICML, 2009.
The authors extend previous work that used hierarchically linked Pitman-Yor processes to model the predictive distribution of a word given a context of finite length (an n-gram model), and here consider the distribution of words conditioned on a context of unbounded length (an $\infty$-gram model). The hierarchical structuring allows information from contexts of different lengths to be combined, and the Pitman-Yor process allows for power-law distributions of words similar to those seen in natural language. The authors develop the sequence memoizer, using coagulation and fragmentation operators to marginalize out intermediate nodes of the hierarchy; this reduces the computational complexity and yields a collapsed graphical model on which inference is more efficient. The model is shown to perform well (i.e., have low perplexity) compared to existing models when applied to New York Times and Associated Press data.
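To make the "combining contexts of different lengths" idea concrete, the hierarchical Pitman-Yor predictive distribution interpolates counts observed in a context $\mathbf{u}$ with the prediction from its shortened (suffix) context. A hedged sketch of the standard form (notation roughly following Teh's hierarchical PY language model, not copied from this paper):

$$ P(w \mid \mathbf{u}) \;=\; \frac{c_{\mathbf{u}w} - d_{|\mathbf{u}|}\, t_{\mathbf{u}w}}{\theta_{|\mathbf{u}|} + c_{\mathbf{u}\cdot}} \;+\; \frac{\theta_{|\mathbf{u}|} + d_{|\mathbf{u}|}\, t_{\mathbf{u}\cdot}}{\theta_{|\mathbf{u}|} + c_{\mathbf{u}\cdot}}\, P\big(w \mid \sigma(\mathbf{u})\big), $$

where $c_{\mathbf{u}w}$ and $t_{\mathbf{u}w}$ are customer and table counts in the restaurant for context $\mathbf{u}$, $d$ and $\theta$ are the discount and concentration parameters, and $\sigma(\mathbf{u})$ drops the earliest symbol of the context; the nonzero discount $d$ is what produces the power-law behavior mentioned above.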
NP Bayes reading group (9/27): hierarchical DPs
Our second NPB reading group meeting took aim at the seminal 2006 paper (with >1000 citations!) by Teh, Jordan, Beal & Blei on Hierarchical Dirichlet Processes. We were joined by newcomers Piyush Rai (newly arrived SSC postdoc), and Ph.D. students Dan Garrette (CS) and Liang Sun (mathematics), both of whom have experience with natural language models.
We established a few basic properties of the hierarchical DP, such as the fact that it involves creating dependencies between DPs by endowing them with a common base measure, which is itself sampled from a DP. That is:

$G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H)$  (the "global measure," sampled from a DP with base measure $H$ and concentration $\gamma$), and

$G_j \mid \alpha_0, G_0 \sim \mathrm{DP}(\alpha_0, G_0)$ for $j = 1, \ldots, J$  (a sequence of conditionally independent random measures with common base measure $G_0$; e.g., the $G_j$ are distributions over clusters from data collected on different days).
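As a rough illustration of this two-level construction, here is a minimal truncated stick-breaking sketch in Python. It assumes a standard normal base measure $H$ and truncates both levels at a fixed number of atoms; the function names (`stick_breaking`, `sample_hdp`) are ours, not from the paper.

```python
import numpy as np

def stick_breaking(alpha, K, rng):
    """Draw K truncated stick-breaking weights from a DP with concentration alpha."""
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    w = betas * remaining
    return w / w.sum()                          # renormalize to absorb truncation error

def sample_hdp(gamma, alpha0, n_groups=3, K=50, rng=None):
    """Truncated sample from an HDP: G0 ~ DP(gamma, H), Gj | G0 ~ DP(alpha0, G0).
    H is taken to be a standard normal (an assumption for illustration)."""
    rng = rng or np.random.default_rng(0)
    atoms = rng.normal(size=K)                  # atoms of G0, drawn i.i.d. from H
    beta = stick_breaking(gamma, K, rng)        # global weights of G0
    groups = []
    for _ in range(n_groups):
        pi = stick_breaking(alpha0, K, rng)     # group-level stick weights
        # each group-level atom is a draw from the discrete measure G0,
        # so groups share atoms (the source of dependence between the Gj)
        idx = rng.choice(K, size=K, p=beta)
        groups.append((atoms[idx], pi))
    return (atoms, beta), groups

(global_atoms, global_w), group_measures = sample_hdp(gamma=1.0, alpha0=1.0)
print(len(group_measures), global_w[:5])
```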
Beyond this, we got bogged down in confusion over metaphors and interpretations, unclear whether the various latent variables were topics or documents or tables or restaurants or ethnicities, and we were hampered by having two different versions of the manuscript floating around with different page numbers and figures.
This week: we’ll take up where we left off, focusing on Section 4 (“Hierarchical Dirichlet Processes”) with discussion led by Piyush. We’ll agree to show up with the same (“official journal”) version of the manuscript, available here.
Time: 4:00 PM, Thursday, Oct 4.
Location: SEA 5.106
Please email pillow AT mail.utexas.edu if you’d like to be added to the announcement list.
Revivifying the NP Bayes Reading Group
After a nearly 1-year hiatus, we’ve restarted our reading group on non-parametric (NP) Bayesian methods, focused on models for discrete data based on generalizations of the Dirichlet and other stick-breaking processes.
Thursday (9/20) was our first meeting, and Karin led a discussion of:
Teh, Y. W. (2006). A hierarchical Bayesian language model based on Pitman-Yor processes. Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 985-992.
In the first meeting, we made it only as far as describing the Pitman-Yor (PY) process, a stochastic process whose samples are random probability distributions, and two methods for sampling from it (a code sketch of both appears below):
- Chinese Restaurant sampling (aka the “Blackwell-MacQueen urn scheme”), which directly provides samples $\theta_1, \theta_2, \ldots$ from a distribution $G \sim \mathrm{PY}(d, \alpha, G_0)$ with $G$ marginalized out.
- Stick-breaking, which samples the distribution $G = \sum_k \pi_k \delta_{\phi_k}$ explicitly, using independent draws of Beta random variables to obtain the stick weights $\pi_k$.
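Here is a minimal Python sketch of both schemes for a PY$(d, \alpha, G_0)$ process, assuming a standard normal base measure $G_0$ for concreteness; the function names are ours, and the stick-breaking sampler is truncated at K atoms.

```python
import numpy as np

def py_urn_samples(n, d, alpha, base_sampler, rng):
    """Sequential Chinese-restaurant / Blackwell-MacQueen draws with G marginalized out."""
    atoms, counts, draws = [], [], []
    for i in range(n):
        K = len(atoms)
        # existing atom k has prob (counts[k] - d)/(alpha + i); a new atom has prob (alpha + d*K)/(alpha + i)
        probs = np.array([c - d for c in counts] + [alpha + d * K]) / (alpha + i)
        k = rng.choice(K + 1, p=probs)
        if k == K:
            atoms.append(base_sampler(rng))
            counts.append(1)
        else:
            counts[k] += 1
        draws.append(atoms[k])
    return np.array(draws)

def py_stick_breaking(d, alpha, K, base_sampler, rng):
    """Truncated explicit draw of G = sum_k pi_k * delta_{phi_k} via stick-breaking."""
    v = rng.beta(1.0 - d, alpha + d * np.arange(1, K + 1))      # V_k ~ Beta(1-d, alpha + k*d)
    pi = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])  # stick weights
    phi = np.array([base_sampler(rng) for _ in range(K)])       # atom locations drawn from G0
    return pi / pi.sum(), phi

rng = np.random.default_rng(0)
normal_base = lambda r: r.normal()          # G0 assumed standard normal for illustration
print(py_urn_samples(10, d=0.5, alpha=1.0, base_sampler=normal_base, rng=rng)[:5])
pi, phi = py_stick_breaking(d=0.5, alpha=1.0, K=100, base_sampler=normal_base, rng=rng)
print(pi[:5].round(3))
```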
We briefly discussed the intuition for the hierarchical PY process, which uses a draw from a PY process as the base measure for PY process priors at deeper levels of the hierarchy (applied here to develop an n-gram model for natural language).
Next week: We’ve decided to go a bit further back in time to read:
Teh, Y. W.; Jordan, M. I.; Beal, M. J. & Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association 101:1566-1581.
Time: Thursday (9/27), 4:00pm.
Location: Pillow lab
Presenter: Karin
note: if you’d like to be added to the email announcement list for this group, please send email to pillow AT mail.utexas.edu.
Using size-biased sampling for certain expectations
Let $\pi = (\pi_1, \pi_2, \pi_3, \ldots)$ be a well-defined infinite discrete probability distribution (e.g., a draw from a Dirichlet process (DP)). We are interested in evaluating expectations of the form

$$ E\!\left[\sum_{i=1}^{\infty} f(\pi_i)\right] $$

for some function $f$ (we are especially interested in $f(x) = -x \log x$, which gives us Shannon's entropy). Following [1], we can re-write it as

$$ E\!\left[\sum_{i=1}^{\infty} f(\pi_i)\right] = E\!\left[\frac{f(\tilde{\pi}_1)}{\tilde{\pi}_1}\right], $$

where $\tilde{\pi}_1$ is a random variable that takes the value $\pi_i$ with probability $\pi_i$. This random variable $\tilde{\pi}_1$ is better known as the first size-biased sample. It is defined by $P(\tilde{\pi}_1 = \pi_i \mid \pi) = \pi_i$. In other words, it takes one of the probabilities $\pi_i$ among $(\pi_1, \pi_2, \ldots)$ with probability $\pi_i$.
For a Pitman-Yor process (PY) with discount parameter $d$ and concentration parameter $\alpha$ (the Dirichlet process is a special case where $d = 0$), size-biased samples are naturally obtained by the stick-breaking construction. Given a sequence of independent random variables $V_i \sim \mathrm{Beta}(1 - d, \alpha + i d)$, if we define $\tilde{\pi}_i = V_i \prod_{j<i} (1 - V_j)$, then the set of $\tilde{\pi}_i$ is invariant to size-biased permutation [2], and they form a sequence of size-biased samples. In our case, we only need the first size-biased sample, which is simply distributed as $\mathrm{Beta}(1 - d, \alpha + d)$.
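Spelling out the entropy case (a sketch of the reasoning, not quoted from the original post): with $f(x) = -x \log x$, the ratio $f(\tilde{\pi}_1)/\tilde{\pi}_1 = -\log \tilde{\pi}_1$, so

$$ E\!\left[-\sum_i \pi_i \log \pi_i\right] = E\!\left[-\log \tilde{\pi}_1\right] = \psi(\alpha + 1) - \psi(1 - d), $$

using the standard formula $E[\log X] = \psi(a) - \psi(a+b)$ for $X \sim \mathrm{Beta}(a, b)$, here with $a = 1 - d$ and $b = \alpha + d$.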
Using this trick, we can compute the expected entropy under a PY prior without the complicated integrals over the infinite simplex. We used this and its extension for computing the PY-based entropy estimator.
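A quick numerical sanity check of the identity, assuming nothing beyond NumPy/SciPy (a sketch, not the estimator code referenced above): compare the digamma formula with the average entropy of truncated stick-breaking draws.

```python
import numpy as np
from scipy.special import digamma

def py_sticks(d, alpha, K, rng):
    """Truncated PY(d, alpha) stick-breaking weights."""
    v = rng.beta(1.0 - d, alpha + d * np.arange(1, K + 1))
    pi = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return pi / pi.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

d, alpha = 0.3, 2.0
rng = np.random.default_rng(1)
mc = np.mean([entropy(py_sticks(d, alpha, 2000, rng)) for _ in range(1000)])
analytic = digamma(alpha + 1.0) - digamma(1.0 - d)   # E[-log(first size-biased sample)]
print(mc, analytic)                                  # the two numbers should be close
```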
[1] Jim Pitman & Marc Yor (1997). The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability 25(2):855-900. doi:10.1214/aop/1024404422
[2] Mihael Perman, Jim Pitman & Marc Yor (1992). Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields 92(1):21-39. doi:10.1007/BF01205234
NP Bayes Reading Group: 6th meeting
This week, our discussion focused on estimating the hyperparameter for Dirichlet process models. We began by working through a couple of theorems in Antoniak (1974) for mixtures of Dirichlet processes. Importantly, it can be shown, in the language of Chinese restaurant processes, that the number of occupied tables and the number of samples are sufficient statistics for the posterior over the Dirichlet concentration hyperparameter.
Given a gamma prior on the hyperparameter, we worked through Escobar & West's (1995) derivation of its posterior given the number of occupied tables and the number of observations. Introducing an auxiliary Beta-distributed variable yields an easy-to-sample mixture of two gamma distributions, which can be added as a step to the Gibbs sampling scheme we reviewed last week.
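A minimal sketch of that auxiliary-variable update in Python (variable names are ours; it assumes a Gamma(a, b) prior with rate parameter b, K occupied tables, and n observations):

```python
import numpy as np

def sample_concentration(alpha, K, n, a=1.0, b=1.0, rng=None):
    """One Escobar & West (1995)-style update of the DP concentration parameter.
    alpha: current value; K: number of occupied tables; n: number of observations;
    (a, b): shape and rate of the Gamma prior on alpha."""
    rng = rng or np.random.default_rng()
    eta = rng.beta(alpha + 1.0, n)                      # auxiliary variable
    odds = (a + K - 1.0) / (n * (b - np.log(eta)))      # mixture odds pi/(1-pi)
    pi_eta = odds / (1.0 + odds)
    shape = a + K if rng.random() < pi_eta else a + K - 1.0
    rate = b - np.log(eta)
    return rng.gamma(shape, 1.0 / rate)                 # numpy's gamma takes a *scale* argument

# hypothetical usage inside a Gibbs sweep:
alpha = 1.0
alpha = sample_concentration(alpha, K=12, n=500)
print(alpha)
```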
NP Bayes Reading Group: 5th meeting
This week we discussed how to apply a Dirichlet process-based method to real problems. We focused on the Infinite Gaussian Mixture Model and its uses in spike sorting (Wood & Black, 2008). In this model, the observed data come from an unknown (and potentially infinite) number of multivariate Gaussians, and our goal is to cluster the observations that come from the same Gaussian. This requires an MCMC approach. We chose to examine a collapsed Gibbs sampler, which takes advantage of conjugate priors (the inverse Wishart for the multivariate normal). Combined with last week's results on exchangeability, a Gibbs sweep merely needs to reassign each observation given the current labeling of all other observations: the prior over clusterings is given by the Chinese Restaurant Process, and the marginal likelihood becomes a multivariate Student-t. Next week, we will see how to sample over the hyperparameter of the CRP.
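A rough skeleton of such a collapsed Gibbs sweep, using a placeholder `predictive_loglik(x, members)` (a hypothetical stand-in for the Student-t posterior predictive under the conjugate prior; this is not the Wood & Black implementation):

```python
import numpy as np

def predictive_loglik(x, members):
    """Hypothetical placeholder for log p(x | cluster members) under the conjugate model.
    Here a crude isotropic-Gaussian approximation is substituted, for illustration only."""
    mu = members.mean(axis=0) if len(members) else np.zeros_like(x)
    return -0.5 * np.sum((x - mu) ** 2)

def collapsed_gibbs_sweep(X, z, alpha, rng):
    """One sweep of collapsed Gibbs for a CRP mixture: resample each label z[i]
    given all other labels, with the cluster parameters integrated out."""
    for i in range(len(X)):
        z[i] = -1                                          # remove point i from its cluster
        labels = sorted(set(z[z >= 0].tolist()))
        logp = []
        for k in labels:                                   # existing clusters: prior weight n_k
            members = X[z == k]
            logp.append(np.log(len(members)) + predictive_loglik(X[i], members))
        logp.append(np.log(alpha) + predictive_loglik(X[i], X[:0]))   # new cluster: weight alpha
        logp = np.array(logp)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        choice = rng.choice(len(p), p=p)
        z[i] = labels[choice] if choice < len(labels) else (max(labels, default=-1) + 1)
    return z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
z = np.zeros(len(X), dtype=int)
for _ in range(5):
    z = collapsed_gibbs_sweep(X, z, alpha=1.0, rng=rng)
print(np.unique(z))
```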
NP Bayes Reading Group: 4th meeting
Last Friday, while everyone else was toweling dry and/or knocking small crustaceans out of their ears, I presented a proof of de Finetti’s theorem given in (Heath and Sudderth, 1976).
When presented with an infinite sequence of coin flips, $x_1, x_2, x_3, \ldots$, we as frequentists presume that each coin is drawn independently from the same distribution. That is, $x_i \sim \mathrm{Bernoulli}(\theta)$ for some $\theta \in [0, 1]$, and the joint probability of our data, as we receive it, factorizes as

$$ p(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i}. $$

Although we don't know what $\theta$ is when the coins start falling, $\theta$ is not random. Our uncertainty is due to finite sample size; in fact, $\frac{1}{n}\sum_{i=1}^{n} x_i \to \theta$ as $n \to \infty$.
On the other hand, as Bayesians, we regard $\theta$ as a random variable, and its distribution $p(\theta)$ reflects our prior uncertainty about its value. To obtain our joint probability we marginalize over $\theta$ and end up with

$$ p(x_1, \ldots, x_n) = \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} \, p(\theta)\, d\theta. $$

Now the sequence $x_1, x_2, \ldots$ is not independent, but it is infinitely exchangeable: although the $x_i$ are not independent, their ordering doesn't matter.
So, mixing conditionally independent distributions with a prior over $\theta$ results in an exchangeable sequence of random variables. De Finetti's theorem states that the converse is also true:
Theorem 1 (de Finetti). A sequence of Bernoulli random variables $x_1, x_2, \ldots$ is infinitely exchangeable if and only if there exists a random variable $\theta$ with distribution function $P(\theta)$ such that the joint probability has the form

$$ p(x_1, \ldots, x_n) = \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} \, dP(\theta). $$
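As a concrete check of the "ordering doesn't matter" claim (our own worked example, assuming a Beta(a, b) prior): the marginal likelihood depends on the data only through the number of heads $s = \sum_i x_i$,

$$ p(x_1, \ldots, x_n) = \int_0^1 \theta^{s} (1-\theta)^{n-s}\, \frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}\, d\theta = \frac{B(a + s,\; b + n - s)}{B(a, b)}, $$

so any permutation of the sequence has the same probability, i.e., the sequence is exchangeable.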
This is seen as validating the Bayesian point of view by its implication that, when the data is (infinitely) exchangeable, there must exist a parameter with a prior and a likelihood with respect to which the data is conditionally independent.
Exchangeability crops up frequently in our DP discussions (for instance, the CRP is exchangeable, with the DP as its underlying de Finetti measure), and although we only discussed and proved the theorem in the above form, it also holds in much more general contexts.
Next week, Kenneth will lead the discussion on applying the theory we learned so far to practical clustering algorithms (he had a pass this week).
NP Bayes Reading Group: 3rd meeting
Continuing from last week (Generative model diagram for stick breaking), we established that the Chinese restaurant process (CRP) is exchangeable, and that the underlying de Finetti mixing measure is the Dirichlet process (DP); that is, the CRP is the marginal distribution obtained by integrating out the DP. The simple form of the CRP's conditional distributions provides an easy way of sampling. We then discussed the form of the DP posterior, whose expectation coincides with the CRP predictive distribution (the formulas are collected below). Next week, Kenneth will lead the discussion on applying the theory we have learned so far to practical clustering algorithms.
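For reference, a sketch of the two formulas discussed (standard results, written in notation not taken from the slides): the CRP conditional for seating observation $n+1$, and the DP posterior given draws $\theta_1, \ldots, \theta_n$.

$$ P(z_{n+1} = k \mid z_{1:n}) = \frac{n_k}{\alpha + n}, \qquad P(z_{n+1} = \text{new} \mid z_{1:n}) = \frac{\alpha}{\alpha + n}, $$

$$ G \mid \theta_1, \ldots, \theta_n \sim \mathrm{DP}\!\left(\alpha + n,\; \frac{\alpha G_0 + \sum_{i=1}^{n} \delta_{\theta_i}}{\alpha + n}\right), \qquad E[G \mid \theta_{1:n}] = \frac{\alpha G_0 + \sum_i \delta_{\theta_i}}{\alpha + n}, $$

where $n_k$ is the number of observations already at table $k$; the posterior mean is exactly the Blackwell-MacQueen/CRP predictive distribution.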
NP Bayes Reading Group: 2nd meeting
Continuing from last week, we discussed the formulation of generative clustering (a mixture model) with a fixed number of clusters K, using the Dirichlet distribution as a prior over the cluster proportions, following Jordan's slides (a small generative sketch of this finite model appears after the fun fact below). The definition of the Dirichlet process (DP) and its existence were briefly shown via the Kolmogorov extension theorem. Following (Sethuraman, 1994), we discussed the stick-breaking construction of the DP. Stick-breaking provides the size-biased permutation of the Poisson-Dirichlet distribution obtained as the Kingman limit (Kingman, 1975). The following fun facts about the (extended) Dirichlet distribution are from (Sethuraman, 1994).
Fun Fact 1. Let $e_j$ be an $n$-dimensional vector consisting of 0's, except having a 1 at the $j$-th index. Then
$$ \mathrm{Dir}(\alpha) = \sum_{j=1}^{n} \frac{\alpha_j}{\sum_k \alpha_k}\, \mathrm{Dir}(\alpha + e_j). $$
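A minimal generative sketch of the finite mixture described above (our own toy choices: K Gaussian clusters with unit variance; this is not code from the slides):

```python
import numpy as np

def finite_dirichlet_mixture(n, K=5, alpha=1.0, rng=None):
    """Generate n points from a K-component Gaussian mixture whose mixing
    weights have a symmetric Dirichlet(alpha/K, ..., alpha/K) prior."""
    rng = rng or np.random.default_rng(0)
    weights = rng.dirichlet(np.full(K, alpha / K))   # cluster-proportion prior
    means = rng.normal(0.0, 5.0, size=K)             # cluster parameters from a base measure
    z = rng.choice(K, size=n, p=weights)             # cluster assignments
    x = rng.normal(means[z], 1.0)                    # observations
    return x, z, weights

x, z, w = finite_dirichlet_mixture(200)
print(np.round(w, 3), np.bincount(z, minlength=5))
```

Using concentration alpha/K per component is the scaling under which this finite mixture approaches the DP mixture as K grows, which is the connection to next week's topic.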
Next week, we will continue the discussion of the DP as a prior for nonparametric Bayesian clustering, the posterior of the DP, and how to do inference with the DP (Jordan slide #45).
Possible further exploration:
- Sampling from Poisson-Dirichlet distribution (Donnelly-Tavaré-Griffiths sampling?)
- Proof of Lemma 3.2 from Sethuraman 1994
The distribution of the number of occupied tables in a CRP has mean $\sum_{i=1}^{n} \frac{\alpha}{\alpha + i - 1} = \alpha\big(\psi(\alpha + n) - \psi(\alpha)\big) \simeq \alpha \log\!\left(1 + \tfrac{n}{\alpha}\right)$ (a simple proof can be found in Teh's DP notes), and follows the Ewens distribution.
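A quick simulation check of that mean (a sketch under our own parameter choices, using the digamma form above):

```python
import numpy as np
from scipy.special import digamma

def crp_num_tables(n, alpha, rng):
    """Simulate a CRP with n customers and return the number of occupied tables."""
    counts = []
    for i in range(n):
        probs = np.array(counts + [alpha]) / (alpha + i)   # existing tables, then a new table
        k = rng.choice(len(counts) + 1, p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return len(counts)

n, alpha = 1000, 2.0
rng = np.random.default_rng(0)
sim_mean = np.mean([crp_num_tables(n, alpha, rng) for _ in range(200)])
print(sim_mean, alpha * (digamma(alpha + n) - digamma(alpha)))   # should be close
```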
NP Bayes Reading Group: 1st meeting
We’ve started a reading group to come to grips with some of the recent developments in non-parametric (NP) Bayesian modeling, in particular, hierarchical Bayesian models for discrete data. The defining characteristic of NP models is that the number of parameters scales with the amount of data (leading to an infinite number of parameters in the limit of infinite data). Although these models have sparked a mini-revolution in cognitive psychology (e.g., Tenenbaum, Griffiths & Kemp 2006), they do not appear to have found much application in the statistical analysis of neural data (with the exception of spike sorting; see, e.g., Wood & Black 2008).
Our first assignment is to go through the slides from Michael Jordan’s 2005 NIPS tutorial (slides.ps). Last week we began, and made it through slide #23, covering the basic ideas of non-parametric models, exchangeability, De Finetti’s theorem, conjugate priors, Gibbs sampling, graphical models, Dirichlet & Beta distributions.
A few issues set aside for further exploration:
- proof of de Finetti’s theorem (Evan)
- relationship between CRP and stick-breaking (JP)
- slide 13: “A short calculation shows…” (Joe)
- proof that # of occupied tables is O(log n). (Memming)
- aggregation property (Ken)
Next meeting: today 3pm (Jun 17, 2011). Memming to lead…