Dirichlet distribution

Source: Wikipedia, the free encyclopedia.
Parameters: number of categories $K \geq 2$ (integer); concentration parameters $\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K)$, where $\alpha_i > 0$
Support: $x_1, \ldots, x_K$ where $x_i \in [0, 1]$ and $\sum_{i=1}^{K} x_i = 1$
PDF: $\frac{1}{B(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}$, where $B(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma(\alpha_0)}$ and $\alpha_0 = \sum_{i=1}^{K} \alpha_i$
Mean: $\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}$; $\operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0)$ (where $\psi$ is the digamma function)
Mode: $x_i = \frac{\alpha_i - 1}{\alpha_0 - K}$, for $\alpha_i > 1$
Variance: $\operatorname{Var}[X_i] = \frac{\tilde{\alpha}_i (1 - \tilde{\alpha}_i)}{\alpha_0 + 1}$ and $\operatorname{Cov}[X_i, X_j] = \frac{-\tilde{\alpha}_i \tilde{\alpha}_j}{\alpha_0 + 1}$ for $i \neq j$, where $\tilde{\alpha}_i = \frac{\alpha_i}{\alpha_0}$
Entropy: $H(X) = \ln B(\boldsymbol\alpha) + (\alpha_0 - K)\,\psi(\alpha_0) - \sum_{j=1}^{K} (\alpha_j - 1)\,\psi(\alpha_j)$
Method of moments: $\alpha_i = \operatorname{E}[X_i] \left( \frac{\operatorname{E}[X_j](1 - \operatorname{E}[X_j])}{\operatorname{Var}[X_j]} - 1 \right)$, where $j$ is any index, possibly $i$ itself

In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted $\operatorname{Dir}(\boldsymbol\alpha)$, is a family of continuous multivariate probability distributions parameterized by a vector $\boldsymbol\alpha$ of positive reals. It is a multivariate generalization of the beta distribution.

The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.

Definitions

Probability density function

Illustrating how the log of the density function changes when K = 3 as we change the vector α from α = (0.3, 0.3, 0.3) to (2.0, 2.0, 2.0), keeping all the individual $\alpha_i$'s equal to each other.

The Dirichlet distribution of order K ≥ 2 with parameters $\alpha_1, \ldots, \alpha_K > 0$ has a probability density function with respect to Lebesgue measure on the Euclidean space $\mathbb{R}^{K-1}$ given by

$$f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{B(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i - 1},$$

where $(x_1, \ldots, x_K)$ belong to the standard $(K-1)$-simplex, or in other words:

$$\sum_{i=1}^{K} x_i = 1 \quad\text{and}\quad x_i \geq 0 \quad\text{for all}\quad i \in \{1, \ldots, K\}.$$

The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

$$B(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\!\left(\sum_{i=1}^{K} \alpha_i\right)}, \qquad \boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K).$$
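As an illustration, the density can be evaluated in log space with only the Python standard library (a minimal sketch; the function name is ours, not part of any established API):

from math import lgamma, log

def dirichlet_log_pdf(x, alpha):
    # log B(alpha) = sum_i ln Gamma(alpha_i) - ln Gamma(sum_i alpha_i)
    log_beta = sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))
    # log f(x; alpha) = sum_i (alpha_i - 1) ln x_i - log B(alpha)
    return sum((a - 1.0) * log(xi) for a, xi in zip(alpha, x)) - log_beta

# Log-density of Dir(2, 2, 2) at the centre of the simplex, (1/3, 1/3, 1/3):
print(dirichlet_log_pdf([1/3, 1/3, 1/3], [2.0, 2.0, 2.0]))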

Support

The support of the Dirichlet distribution is the set of K-dimensional vectors whose entries are real numbers in the interval [0,1] such that $\sum_{i=1}^{K} x_i = 1$, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a K-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of K-dimensional discrete distributions. The technical term for the set of points in the support of a K-dimensional Dirichlet distribution is the open standard (K − 1)-simplex,[3] which is a generalization of a triangle, embedded in the next-higher dimension. For example, with K = 3, the support is an equilateral triangle
embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.

Special cases

A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value α, called the concentration parameter. In terms of α, the density function has the form

$$f(x_1, \ldots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^{K} x_i^{\alpha - 1}.$$

When α = 1,[1] the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 prefer variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of them.
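The effect can be seen in a small sketch that draws symmetric Dirichlet samples via the gamma construction described under "Random variate generation" below (the helper name is illustrative):

import random

def sample_symmetric_dirichlet(alpha, k):
    # Gamma(alpha, 1) variates normalized by their sum give Dir(alpha, ..., alpha).
    ys = [random.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(ys)
    return [y / total for y in ys]

random.seed(0)
print(sample_symmetric_dirichlet(0.1, 5))   # sparse: most mass on a few components
print(sample_symmetric_dirichlet(10.0, 5))  # dense: every component near 1/5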

More generally, the parameter vector is sometimes written as the product $\alpha \mathbf{n}$ of a (scalar) concentration parameter α and a (vector) base measure $\mathbf{n}$, where $\mathbf{n}$ lies within the (K − 1)-simplex (i.e.: its coordinates sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.

^ If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter K, the dimension of the distribution, is the uniform distribution on the (K − 1)-simplex.

Properties

Moments

Let $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol\alpha)$.

Let $\alpha_0 = \sum_{i=1}^{K} \alpha_i$.

Then[4][5]

$$\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}, \qquad \operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0 + 1)}.$$

Furthermore, if $i \neq j$,

$$\operatorname{Cov}[X_i, X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0 + 1)}.$$

Because the components sum to one, every row of the covariance matrix sums to zero. The matrix is thus singular.
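These formulas are easy to check by Monte Carlo, for instance with the following sketch (the sample size and seed are arbitrary):

import random

def dirichlet_sample(alpha):
    ys = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(ys)
    return [y / total for y in ys]

random.seed(1)
alpha = [2.0, 3.0, 5.0]
a0 = sum(alpha)
n = 200_000
draws = [dirichlet_sample(alpha) for _ in range(n)]

mean_1 = sum(d[0] for d in draws) / n
var_1 = sum((d[0] - mean_1) ** 2 for d in draws) / n
print(mean_1, alpha[0] / a0)                                   # both ~0.2
print(var_1, alpha[0] * (a0 - alpha[0]) / (a0**2 * (a0 + 1)))  # should agree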

More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For $\boldsymbol{t} = (t_1, \ldots, t_K) \in \mathbb{R}^K$, denote by $\boldsymbol{t}^{\circ n}$ its $n$-th Hadamard power. Then,[6]

$$\operatorname{E}\!\left[(\boldsymbol{t}^{\top} X)^{n}\right] = \frac{n!}{\alpha_0 (\alpha_0 + 1) \cdots (\alpha_0 + n - 1)}\, Z_n\!\left(0!\,\boldsymbol{t}^{\top}\boldsymbol{\alpha},\; 1!\,(\boldsymbol{t}^{\circ 2})^{\top}\boldsymbol{\alpha},\; \ldots,\; (n-1)!\,(\boldsymbol{t}^{\circ n})^{\top}\boldsymbol{\alpha}\right),$$

where the sum implicit in the cycle index polynomial is over non-negative integers $j_1, \ldots, j_n$ with $j_1 + 2 j_2 + \cdots + n j_n = n$, and $Z_n$ is the cycle index polynomial of the symmetric group of degree $n$.

The multivariate analogue for vectors can be expressed[7] in terms of a color pattern of the exponents, in the sense of the Pólya enumeration theorem.

Particular cases include the simple computation[8]

$$\operatorname{E}\left[\prod_{i=1}^{K} X_i^{\beta_i}\right] = \frac{B(\boldsymbol\alpha + \boldsymbol\beta)}{B(\boldsymbol\alpha)} = \frac{\Gamma\!\left(\sum_{i=1}^{K} \alpha_i\right)}{\Gamma\!\left(\sum_{i=1}^{K} (\alpha_i + \beta_i)\right)} \prod_{i=1}^{K} \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)}.$$

Mode

The mode of the distribution is[9] the vector $(x_1, \ldots, x_K)$ with

$$x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.$$

Marginal distributions

The marginal distributions are beta distributions:[10]

$$X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).$$

Conjugate to categorical or multinomial

The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and the multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.

Formally, this can be expressed as follows. Given a model

$$\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K) \quad \text{(concentration hyperparameters)},$$
$$\mathbf{p} \mid \boldsymbol\alpha = (p_1, \ldots, p_K) \sim \operatorname{Dir}(K, \boldsymbol\alpha),$$
$$\mathbb{X} \mid \mathbf{p} = (\mathbf{x}_1, \ldots, \mathbf{x}_N) \sim \operatorname{Cat}(K, \mathbf{p}),$$

then the following holds:

$$\mathbf{c} = (c_1, \ldots, c_K) \quad \text{(number of occurrences of each category)},$$
$$\mathbf{p} \mid \mathbb{X}, \boldsymbol\alpha \sim \operatorname{Dir}(K, \mathbf{c} + \boldsymbol\alpha) = \operatorname{Dir}(K, c_1 + \alpha_1, \ldots, c_K + \alpha_K).$$

This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
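As a sketch of the update (with made-up pseudocounts and observed counts):

# Prior pseudocounts alpha and observed category counts c; the posterior is
# Dir(alpha + c), so conjugate updating is just element-wise addition.
alpha = [2.0, 2.0, 2.0]
counts = [10, 2, 3]
posterior = [a + c for a, c in zip(alpha, counts)]
print(posterior)                       # [12.0, 4.0, 5.0]

# Posterior mean of the category probabilities:
total = sum(posterior)
print([a / total for a in posterior])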

In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications
below for more information.

Relation to Dirichlet-multinomial distribution

In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution
for more details.

Entropy

If X is a $\operatorname{Dir}(\boldsymbol\alpha)$ random variable, the differential entropy of X (in nat units) is[11]

$$h(\boldsymbol X) = \operatorname{E}[-\ln f(\boldsymbol X)] = \ln B(\boldsymbol\alpha) + (\alpha_0 - K)\,\psi(\alpha_0) - \sum_{j=1}^{K} (\alpha_j - 1)\,\psi(\alpha_j),$$

where $\psi$ is the digamma function.
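The formula translates directly into code; the sketch below assumes SciPy is available for the digamma function and cross-checks against scipy.stats.dirichlet.entropy:

from math import lgamma
from scipy.special import digamma
from scipy.stats import dirichlet

def dirichlet_entropy(alpha):
    a0 = sum(alpha)
    k = len(alpha)
    log_beta = sum(lgamma(a) for a in alpha) - lgamma(a0)  # ln B(alpha)
    return log_beta + (a0 - k) * digamma(a0) - sum((a - 1) * digamma(a) for a in alpha)

alpha = [2.0, 3.0, 4.0]
print(dirichlet_entropy(alpha), dirichlet(alpha).entropy())  # should agree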

The following formula for $\operatorname{E}[\ln(X_i)]$ can be used to derive the differential entropy above. Since the functions $\ln(X_i)$ are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of $\ln(X_i)$ (see equation (2.62) in [12]) and its associated covariance matrix:

$$\operatorname{E}[\ln(X_i)] = \psi(\alpha_i) - \psi(\alpha_0)$$

and

$$\operatorname{Cov}[\ln(X_i), \ln(X_j)] = \psi'(\alpha_i)\,\delta_{ij} - \psi'(\alpha_0),$$

where $\psi$ is the digamma function, $\psi'$ is the trigamma function, and $\delta_{ij}$ is the Kronecker delta.

The spectrum of Rényi information for values other than $\lambda = 1$ is given by[13]

$$F_R(\lambda) = \frac{1}{1 - \lambda}\left(\ln B\big(\lambda(\alpha_1 - 1) + 1, \ldots, \lambda(\alpha_K - 1) + 1\big) - \lambda \ln B(\boldsymbol\alpha)\right),$$

and the information entropy is the limit as $\lambda$ goes to 1.

Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector $\boldsymbol Z$ with probability-mass distribution $\boldsymbol X$, i.e., $P(Z_i = 1, Z_{j \neq i} = 0 \mid \boldsymbol X) = X_i$. The conditional information entropy of $\boldsymbol Z$, given $\boldsymbol X$, is

$$S(\boldsymbol X) = H(\boldsymbol Z \mid \boldsymbol X) = \operatorname{E}_{\boldsymbol Z}[-\ln P(\boldsymbol Z \mid \boldsymbol X)] = -\sum_{i=1}^{K} X_i \ln X_i.$$

This function of $\boldsymbol X$ is a scalar random variable. If $\boldsymbol X$ has a symmetric Dirichlet distribution with all $\alpha_i = \alpha$, the expected value of the entropy (in nat units) is[14]

$$\operatorname{E}[S(\boldsymbol X)] = \sum_{i=1}^{K} \operatorname{E}[-X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1).$$
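A quick Monte Carlo check of this expectation (SciPy assumed for digamma; the constants are arbitrary):

import random
from math import log
from scipy.special import digamma

K, a = 4, 0.7
random.seed(3)
n = 100_000
acc = 0.0
for _ in range(n):
    ys = [random.gammavariate(a, 1.0) for _ in range(K)]
    t = sum(ys)
    acc += -sum((y / t) * log(y / t) for y in ys)
print(acc / n, digamma(K * a + 1) - digamma(a + 1))  # should be close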

Aggregation

If $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K)$,

then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

$$X' = (X_1, \ldots, X_i + X_j, \ldots, X_K) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).$$

This aggregation property may be used to derive the marginal distribution of $X_i$ mentioned above.
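The property can be illustrated empirically: merging two coordinates of Dir(1, 2, 3, 4) samples should match direct Dir(1, 5, 4) samples in distribution (a sketch with an arbitrary sample size):

import random

def dirichlet_sample(alpha):
    ys = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(ys)
    return [y / total for y in ys]

random.seed(2)
n = 200_000
merged = []
for _ in range(n):
    x1, x2, x3, x4 = dirichlet_sample([1.0, 2.0, 3.0, 4.0])
    merged.append((x1, x2 + x3, x4))
direct = [tuple(dirichlet_sample([1.0, 5.0, 4.0])) for _ in range(n)]

for i in range(3):  # compare componentwise means
    print(sum(v[i] for v in merged) / n, sum(v[i] for v in direct) / n)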

Neutrality

If $X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol\alpha)$, then the vector X is said to be neutral[15] in the sense that $X_K$ is independent of $X^{(-K)}$[3] where

$$X^{(-K)} = \left(\frac{X_1}{1 - X_K}, \frac{X_2}{1 - X_K}, \ldots, \frac{X_{K-1}}{1 - X_K}\right),$$

and similarly for removing any of $X_1, \ldots, X_{K-1}$. Observe that any permutation of X is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).[16]

Combining this with the property of aggregation, it follows that $X_j + \cdots + X_K$ is independent of $\left(\frac{X_1}{X_1 + \cdots + X_{j-1}}, \frac{X_2}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}}\right)$. In fact it is true, further, for the Dirichlet distribution, that for $3 \leq j \leq K - 1$, the pair $\left(X_1 + \cdots + X_{j-1},\, X_j + \cdots + X_K\right)$ and the two vectors $\left(\frac{X_1}{X_1 + \cdots + X_{j-1}}, \ldots, \frac{X_{j-1}}{X_1 + \cdots + X_{j-1}}\right)$ and $\left(\frac{X_j}{X_j + \cdots + X_K}, \ldots, \frac{X_K}{X_j + \cdots + X_K}\right)$, viewed as a triple of normalised random vectors, are mutually independent. The analogous result is true for the partition of the indices {1, 2, ..., K} into any other pair of non-singleton subsets.

Characteristic function

The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as[17]

$$CF(s_1, \ldots, s_{K-1}) = \operatorname{E}\left[e^{i(s_1 X_1 + \cdots + s_{K-1} X_{K-1})}\right] = \Psi^{[K-1]}(\alpha_1, \ldots, \alpha_{K-1}; \alpha_0; is_1, \ldots, is_{K-1}),$$

where

$$\Psi^{[m]}(a_1, \ldots, a_m; c; z_1, \ldots, z_m) = \sum \frac{(a_1)_{k_1} \cdots (a_m)_{k_m}\, z_1^{k_1} \cdots z_m^{k_m}}{(c)_k\, k_1! \cdots k_m!}.$$

The sum is over non-negative integers $k_1, \ldots, k_m$ and $k = k_1 + \cdots + k_m$. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:

$$\Psi^{[m]} = \frac{\Gamma(c)}{2\pi i} \int_L e^{t}\, t^{-c} \prod_{j=1}^{m} \left(1 - \frac{z_j}{t}\right)^{-a_j} dt,$$

where L denotes any path in the complex plane originating at $-\infty$, encircling in the positive direction all the singularities of the integrand and returning to $-\infty$.

Inequality

The probability density function plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.[18]

Related distributions

For K independently distributed Gamma distributions:

$$Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta),\ \ldots,\ Y_K \sim \operatorname{Gamma}(\alpha_K, \theta),$$

we have:[19]: 402 

$$V = \sum_{i=1}^{K} Y_i \sim \operatorname{Gamma}\!\left(\sum_{i=1}^{K} \alpha_i,\, \theta\right),$$
$$X = (X_1, \ldots, X_K) = \left(\frac{Y_1}{V}, \ldots, \frac{Y_K}{V}\right) \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K).$$

Although the $X_i$'s are not independent from one another, they can be seen to be generated from a set of K independent gamma random variables.[19]: 594  Unfortunately, since the sum V is lost in forming X (in fact it can be shown that V is stochastically independent of X), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.

Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior. The conjugate prior is of the form:[20]

$$\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v}, \eta) \propto \left(\frac{1}{B(\boldsymbol\alpha)}\right)^{\eta} \exp\left(-\sum_{k=1}^{K} v_k \alpha_k\right).$$

Here $\boldsymbol{v}$ is a K-dimensional real vector and $\eta$ is a scalar parameter. The domain of $(\boldsymbol{v}, \eta)$ is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:[21]

$$\forall k\;\; v_k > 0, \qquad \eta > -1, \qquad \text{and} \qquad \left(\eta \leq 0 \;\;\text{or}\;\; \sum_{k} \exp\!\left(-\frac{v_k}{\eta}\right) < 1\right).$$

The conjugation property can be expressed as

if [prior: $\boldsymbol\alpha \sim \operatorname{CD}(\cdot \mid \boldsymbol{v}, \eta)$] and [observation: $\boldsymbol{x} \mid \boldsymbol\alpha \sim \operatorname{Dirichlet}(\cdot \mid \boldsymbol\alpha)$] then [posterior: $\boldsymbol\alpha \mid \boldsymbol{x} \sim \operatorname{CD}(\cdot \mid \boldsymbol{v} - \ln \boldsymbol{x}, \eta + 1)$].

In the published literature there is no practical algorithm to efficiently generate samples from $\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v}, \eta)$.

Occurrence and applications

Bayesian models

Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions
are commonly conflated.)

Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.


Intuitive interpretations of the parameters

The concentration parameter

Dirichlet distributions are very often used as prior distributions in Bayesian inference. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample is likely to be. With a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter
for further discussion.

String cutting

One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that $\alpha_0 = \sum_{i=1}^{K} \alpha_i$. The values $\alpha_i / \alpha_0$ specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with $\alpha_0$.

Example of Dirichlet(1/2,1/3,1/6) distribution
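A sketch of the string-cutting picture for Dir(1/2, 1/3, 1/6): scaling all parameters by a common factor keeps the mean piece lengths but shrinks the variation around them:

import random

def cut_string(alpha):
    ys = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(ys)
    return [y / total for y in ys]

random.seed(4)
for scale in (1.0, 100.0):
    pieces = cut_string([scale / 2, scale / 3, scale / 6])
    # With scale 100 the pieces hug the means (1/2, 1/3, 1/6) closely.
    print([round(p, 3) for p in pieces])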

Pólya's urn

Consider an urn containing balls of K different colors. Initially, the urn contains α1 balls of color 1, α2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir(α1,...,αK).[22]

For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]^K-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.

Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
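The urn scheme is straightforward to simulate (a sketch; any finite number of draws gives only an approximate Dirichlet sample):

import random

def polya_urn(alpha, n_draws):
    counts = list(alpha)  # initial (possibly fractional) balls of each colour
    for _ in range(n_draws):
        r = random.uniform(0.0, sum(counts))
        i = 0
        while r > counts[i]:  # pick a colour with probability proportional to its count
            r -= counts[i]
            i += 1
        counts[i] += 1  # return the ball plus one more of the same colour
    total = sum(counts)
    return [c / total for c in counts]

random.seed(5)
print(polya_urn([1.0, 2.0, 3.0], 100_000))  # one approximate Dir(1, 2, 3) draw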


Random variate generation

From gamma distribution

With a source of Gamma-distributed random variates, one can easily sample a random vector $x = (x_1, \ldots, x_K)$ from the K-dimensional Dirichlet distribution with parameters $(\alpha_1, \ldots, \alpha_K)$. First, draw K independent random samples $y_1, \ldots, y_K$ from Gamma distributions, each with density

$$\operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i - 1}\, e^{-y_i}}{\Gamma(\alpha_i)},$$

and then set

$$x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}.$$

[Proof]

The joint distribution of the independently sampled gamma variates, $\{y_i\}$, is given by the product:

$$e^{-\sum_{i=1}^{K} y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i - 1}}{\Gamma(\alpha_i)}.$$

Next, one uses a change of variables, parametrising $\{y_i\}$ in terms of $x_1, \ldots, x_{K-1}$ and $\bar{y}$, and performs a change of variables from $y$ to $x$ such that

$$\bar{y} = \sum_{i=1}^{K} y_i, \qquad x_1 = \frac{y_1}{\bar{y}},\ \ldots,\ x_{K-1} = \frac{y_{K-1}}{\bar{y}}.$$

Each of the variables satisfies $0 \leq x_i \leq 1$, and likewise $0 \leq \sum_{i=1}^{K-1} x_i \leq 1$. One must then use the change of variables formula $P(x) = P(y(x)) \left|\frac{\partial y}{\partial x}\right|$, in which $\left|\frac{\partial y}{\partial x}\right|$ is the transformation Jacobian. Writing y explicitly as a function of x, one obtains

$$y_1 = \bar{y} x_1,\ \ldots,\ y_{K-1} = \bar{y} x_{K-1}, \qquad y_K = \bar{y}\left(1 - \sum_{i=1}^{K-1} x_i\right).$$

The Jacobian now looks like

$$\begin{vmatrix} \bar{y} & 0 & \cdots & x_1 \\ 0 & \bar{y} & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -\bar{y} & -\bar{y} & \cdots & 1 - \sum_{i=1}^{K-1} x_i \end{vmatrix}$$

The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row; adding each of the first K − 1 rows to the bottom row gives

$$\begin{vmatrix} \bar{y} & 0 & \cdots & x_1 \\ 0 & \bar{y} & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix}$$

which can be expanded about the bottom row to obtain the determinant value $\bar{y}^{K-1}$. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:

$$\frac{\bar{y}^{K-1} \prod_{i=1}^{K} x_i^{\alpha_i - 1}\, \bar{y}^{\,\alpha_0 - K}\, e^{-\bar{y}}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} = \left[\frac{\Gamma(\alpha_0)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}\right] \times \left[\frac{\bar{y}^{\,\alpha_0 - 1}\, e^{-\bar{y}}}{\Gamma(\alpha_0)}\right],$$

where $x_K = 1 - \sum_{i=1}^{K-1} x_i$ and $\alpha_0 = \sum_{i=1}^{K} \alpha_i$. The right-hand side can be recognized as the product of a Dirichlet pdf for the $x_i$ and a gamma pdf for $\bar{y}$. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:

$$x \sim \frac{\Gamma(\alpha_0)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K} x_i^{\alpha_i - 1},$$

which is equivalent to

$$x \sim \operatorname{Dir}(\alpha_1, \ldots, \alpha_K),$$

with support

$$\sum_{i=1}^{K} x_i = 1, \qquad x_i \geq 0 \ \text{for all}\ i.$$

Below is example Python code to draw the sample:

import random

# Draw K independent Gamma(alpha_i, 1) variates, then normalize by their sum.
params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
total = sum(sample)
sample = [v / total for v in sample]

This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate), because the two parameterizations coincide when the scale (or rate) equals 1.0.

From marginal beta distributions

A less efficient algorithm[23] relies on the univariate marginal and conditional distributions being beta, and proceeds as follows. Simulate $x_1$ from

$$\operatorname{Beta}\!\left(\alpha_1, \sum_{i=2}^{K} \alpha_i\right).$$

Then simulate $x_2, \ldots, x_{K-1}$ in order, as follows. For $j = 2, \ldots, K-1$, simulate $\phi_j$ from

$$\operatorname{Beta}\!\left(\alpha_j, \sum_{i=j+1}^{K} \alpha_i\right),$$

and let

$$x_j = \left(1 - \sum_{i=1}^{j-1} x_i\right)\phi_j.$$

Finally, set

$$x_K = 1 - \sum_{i=1}^{K-1} x_i.$$

This iterative procedure corresponds closely to the "string cutting" intuition described above.

Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
# x_1 ~ Beta(alpha_1, alpha_2 + ... + alpha_K)
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    # phi_j ~ Beta(alpha_j, alpha_{j+1} + ... + alpha_K), scaled by the remaining length
    phi = random.betavariate(params[j], sum(params[j + 1 :]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))  # the last coordinate takes whatever length remains

When each alpha is 1

When α1 = ... = αK = 1, a sample from the distribution can be found by randomly drawing a set of K − 1 values independently and uniformly from the interval [0, 1], adding the values 0 and 1 to the set to make it have K + 1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give x1, ..., xK.
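In code (a minimal sketch):

import random

def dirichlet_flat(k):
    # Sort K - 1 uniforms together with 0 and 1; the K gaps are Dir(1, ..., 1).
    cuts = sorted([0.0, 1.0] + [random.random() for _ in range(k - 1)])
    return [b - a for a, b in zip(cuts, cuts[1:])]

random.seed(6)
print(dirichlet_flat(4))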

When each alpha is 1/2 and relationship to the hypersphere

When α1 = ... = αK = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give x1, ..., xK.

A point (x1, ..., xK) can be drawn uniformly at random from the (K−1)-dimensional hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure. Randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares.
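Both constructions from the same K normal draws (a minimal sketch):

import random
from math import sqrt

random.seed(7)
K = 4
zs = [random.gauss(0.0, 1.0) for _ in range(K)]

# Squared and sum-normalised normals give one Dir(1/2, ..., 1/2) sample:
sq = [z * z for z in zs]
total = sum(sq)
print([v / total for v in sq])

# Dividing each coordinate by the Euclidean norm instead gives a point
# uniform on the (K - 1)-dimensional hypersphere:
norm = sqrt(sum(z * z for z in zs))
print([z / norm for z in zs])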

See also

References

  1. Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley. (Chapter 49: Dirichlet and Inverted Dirichlet Distributions)
  2. .
  3. Bela A. Frigyik; Amol Kapila; Maya R. Gupta (2010). "Introduction to the Dirichlet Distribution and Related Processes" (PDF). University of Washington Department of Electrical Engineering. Archived from the original (Technical Report UWEETR-2010-006) on 2015-02-19.
  4. Eq. (49.9) on page 488 of Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.
  5. .
  6. – via Project Euclid.
  7. .
  8. Hoffmann, Till. "Moments of the Dirichlet distribution". Archived from the original on 2016-02-14. Retrieved 14 February 2016.
  9. .
  10. Farrow, Malcolm. "MAS3301 Bayesian Statistics" (PDF). Newcastle University. Retrieved 10 April 2013.
  11. Lin, Jiayu (2016). On The Dirichlet Distribution (PDF). Kingston, Canada: Queen's University. pp. § 2.4.9.
  12. SSRN 4541076. Retrieved 15 August 2023.
  13. .
  14. Nemenman, Ilya; Shafee, Fariel; Bialek, William (2002). Entropy and Inference, revisited (PDF). NIPS 14, eq. 8.
  15. JSTOR 2283728.
  16. See Kotz, Balakrishnan & Johnson (2000), Section 8.5, "Connor and Mosimann's Generalization", pp. 519–521.
  17. Phillips, P. C. B. (1988). "The characteristic function of the Dirichlet and multivariate F distribution" (PDF). Cowles Foundation Discussion Paper 865.
  18. .
  19. .
  20. .
  21. .
  22. .
  23. .

External links