Dvoretzky–Kiefer–Wolfowitz inequality

[Figure: DKW confidence bounds (purple) constructed around an empirical distribution function (light blue); in this random draw, the true CDF (orange) is entirely contained within the bounds.]

In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) provides a bound on the worst-case distance of an empirically determined distribution function from its associated population distribution function. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality

$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl|F_n(x)-F(x)\bigr|>\varepsilon\Bigr)\le C e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon>0,$$

with an unspecified multiplicative constant C in front of the exponent on the right-hand side.[1]

In 1990, Pascal Massart proved the inequality with the sharp constant C = 2,[2] confirming a conjecture due to Birnbaum and McCarty.[3] In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the dimension k of the space in which the observations are found: C = 2k.[4]

The DKW inequality

Given a natural number n, let X1, X2, …, Xn be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let Fn denote the associated empirical distribution function defined by

$$F_n(x)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{X_i\le x\}},\qquad x\in\mathbb{R},$$

so that F(x) is the probability that a single random variable X is smaller than x, and Fn(x) is the fraction of random variables that are smaller than x.
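
To make the definition concrete, here is a minimal NumPy sketch of the empirical distribution function; the function name ecdf, the seed, and the normal sample are illustrative assumptions, not part of the article.

```python
import numpy as np

def ecdf(samples, x):
    """Empirical CDF F_n: the fraction of samples less than or equal to x."""
    sorted_samples = np.sort(np.asarray(samples))
    # For each query point, count observations at or below it, then normalize by n.
    return np.searchsorted(sorted_samples, np.atleast_1d(x), side="right") / sorted_samples.size

rng = np.random.default_rng(0)
draws = rng.normal(size=100)           # X_1, ..., X_n i.i.d. from F = standard normal
print(ecdf(draws, [-1.0, 0.0, 1.0]))   # approximates F(-1), F(0), F(1)
```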

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function Fn differs from F by more than a given constant ε > 0 anywhere on the real line. More precisely, there is the one-sided estimate

$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl(F_n(x)-F(x)\bigr)>\varepsilon\Bigr)\le e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon\ge\sqrt{\tfrac{1}{2n}\ln 2},$$

which also implies a two-sided estimate[5]

$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl|F_n(x)-F(x)\bigr|>\varepsilon\Bigr)\le 2e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon>0.$$
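
As a numerical check (not part of the original article; the sample size, ε, and replication count below are arbitrary choices), the two-sided bound can be verified by simulation under a uniform F:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, reps = 100, 0.1, 10_000

exceed = 0
for _ in range(reps):
    u = np.sort(rng.uniform(size=n))   # uniform draws, so F(x) = x on [0, 1]
    i = np.arange(1, n + 1)
    # sup_x |F_n(x) - F(x)| is attained at the jump points of F_n, where F_n
    # steps from (i-1)/n to i/n at the i-th order statistic.
    ks = max((i / n - u).max(), (u - (i - 1) / n).max())
    exceed += ks > eps

print("empirical P(sup|F_n - F| > eps):", exceed / reps)
print("DKW bound 2*exp(-2*n*eps^2):    ", 2 * np.exp(-2 * n * eps**2))
```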
This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where F corresponds to the uniform distribution on [0,1],[6] since Fn has the same distribution as Gn(F), where Gn is the empirical distribution function of U1, U2, …, Un, which are independent and Uniform(0,1), and noting that

$$\sup_{x\in\mathbb{R}}\bigl|F_n(x)-F(x)\bigr|\;\stackrel{d}{=}\;\sup_{x\in\mathbb{R}}\bigl|G_n(F(x))-F(x)\bigr|\;\le\;\sup_{0\le t\le 1}\bigl|G_n(t)-t\bigr|,$$

with equality if and only if F is continuous.
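
As a numerical illustration of this reduction (not from the original article; the exponential example, sample size, and seed are assumptions), the distribution of sup|Fn − F| is the same whether F is exponential or uniform:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 100, 2000
i = np.arange(1, n + 1)

def sup_ks(cdf_vals):
    # sup_x |F_n(x) - F(x)| at the jump points of F_n (cdf_vals sorted ascending).
    return max((i / n - cdf_vals).max(), (cdf_vals - (i - 1) / n).max())

ks_exp, ks_unif = [], []
for _ in range(reps):
    x = np.sort(rng.exponential(size=n))
    ks_exp.append(sup_ks(1.0 - np.exp(-x)))   # F(x) = 1 - exp(-x), continuous
    u = np.sort(rng.uniform(size=n))
    ks_unif.append(sup_ks(u))                 # F(t) = t on [0, 1]

# Matching quantiles illustrate that the law of sup|F_n - F| does not
# depend on F when F is continuous.
print(np.quantile(ks_exp, [0.5, 0.9, 0.99]))
print(np.quantile(ks_unif, [0.5, 0.9, 0.99]))
```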

Multivariate case

In the multivariate case, X1, X2, …, Xn is an i.i.d. sequence of k-dimensional random vectors. If Fn is the multivariate empirical cdf, then

$$\Pr\Bigl(\sup_{t\in\mathbb{R}^k}\bigl|F_n(t)-F(t)\bigr|>\varepsilon\Bigr)\le(n+1)\,k\,e^{-2n\varepsilon^2}$$

for every ε, n, k > 0. The (n + 1) term can be replaced with a 2 for any sufficiently large n.[4]

Kaplan–Meier estimator

A Dvoretzky–Kiefer–Wolfowitz-type inequality has also been obtained for the Kaplan–Meier estimator, which is a right-censored data analog of the empirical distribution function:

$$\Pr\Bigl(\sqrt{n}\,\sup_{t\in\mathbb{R}}\bigl|(1-G(t))\bigl(\hat F_n(t)-F(t)\bigr)\bigr|>\varepsilon\Bigr)\le 2.5\,e^{-2\varepsilon^2+C\varepsilon}$$

for every ε > 0 and for some constant C < ∞, where F̂n is the Kaplan–Meier estimator and G is the censoring distribution function.[7]
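
For concreteness, here is a minimal NumPy sketch of the Kaplan–Meier product-limit estimator referenced above; the function name, toy data, and the distinct-times simplification are illustrative assumptions.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimate of the survival function S(t) = 1 - F(t).

    times:    event or censoring times (assumed distinct for clarity; at tied
              times the usual convention processes events before censorings)
    observed: 1 if the event was observed, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed)
    order = np.argsort(times)
    times, observed = times[order], observed[order]

    at_risk = times.size - np.arange(times.size)   # n_i: subjects at risk at each time
    # Each observed event contributes a factor (1 - 1/n_i); censorings contribute 1.
    factors = np.where(observed == 1, 1.0 - 1.0 / at_risk, 1.0)
    return times, np.cumprod(factors)              # S(t) just after each sorted time

t, s = kaplan_meier([2.0, 3.0, 3.5, 5.0, 7.0], [1, 0, 1, 1, 0])
print(np.column_stack([t, s]))
```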

Building CDF bands

The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band, which is sometimes called the Kolmogorov–Smirnov confidence band. The purpose of this confidence band is to contain the entire CDF at the specified confidence level, whereas alternative approaches attempt to achieve the confidence level only pointwise, which can allow for a tighter bound at each individual point. The DKW band runs parallel to, and is equally spaced above and below, the empirical CDF. This equal spacing around the empirical CDF allows for different rates of violation across the support of the distribution. In particular, the true CDF is more likely to fall outside the DKW band near the median of the distribution than near its endpoints.

The interval that contains the true CDF, F(x), with probability 1 − α is often specified as

$$F_n(x)-\varepsilon\le F(x)\le F_n(x)+\varepsilon\qquad\text{where }\varepsilon=\sqrt{\frac{\ln\frac{2}{\alpha}}{2n}},$$

which is also a special case of the asymptotic procedure for the multivariate case,[4] whereby one uses the following critical value

$$\varepsilon=\sqrt{\frac{\ln\frac{2k}{\alpha}}{2n}}$$

for the multivariate test. One may replace 2k with k(n + 1) to obtain a test that holds for all n. Moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence.
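
The band construction just described is straightforward to compute; the following is a minimal sketch, assuming NumPy, an illustrative α and sample, and the standard practical step of clipping the band to [0, 1], which is not part of the inequality itself.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 0.05, 200
draws = np.sort(rng.normal(size=n))

# DKW critical value for a joint 1 - alpha band: eps = sqrt(ln(2/alpha) / (2n)).
eps = np.sqrt(np.log(2 / alpha) / (2 * n))

fn = np.arange(1, n + 1) / n            # empirical CDF evaluated at the sorted draws
lower = np.clip(fn - eps, 0.0, 1.0)     # the band holds simultaneously over all x
upper = np.clip(fn + eps, 0.0, 1.0)

# Asymptotic multivariate analog of the critical value described in the text:
k = 3
eps_k = np.sqrt(np.log(2 * k / alpha) / (2 * n))
print(f"univariate eps = {eps:.4f}, k = {k} multivariate eps = {eps_k:.4f}")
```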

References

  1. Dvoretzky, A.; Kiefer, J.; Wolfowitz, J. (1956), "Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator", Annals of Mathematical Statistics, 27 (3): 642–669.
  2. Massart, P. (1990), "The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality", Annals of Probability, 18 (3): 1269–1283.
  3. Birnbaum, Z. W.; McCarty, R. C. (1958), "A distribution-free upper confidence bound for Pr{Y < X}, based on independent samples of X and Y", Annals of Mathematical Statistics, 29 (2): 558–562.
  4. Naaman, Michael (2021), "On the tight constant in the multivariate Dvoretzky–Kiefer–Wolfowitz inequality", Statistics & Probability Letters, 173: 109088.
  5. Kosorok, M. R. (2008), "Chapter 11: Additional Empirical Process Results", Introduction to Empirical Processes and Semiparametric Inference, Springer.
  6. Shorack, G. R.; Wellner, J. A. (1986), Empirical Processes with Applications to Statistics, Wiley.
  7. Bitouzé, D.; Laurent, B.; Massart, P. (1999), "A Dvoretzky–Kiefer–Wolfowitz type inequality for the Kaplan–Meier estimator", Annales de l'Institut Henri Poincaré B, 35 (6).