Integral of the Gaussian function, equal to √π
This integral from statistics and physics is not to be confused with Gaussian quadrature, a method of numerical integration.
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is
\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809.[1] The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.
Although no elementary indefinite integral exists for
\int e^{-x^2}\,dx,
the definite integral
\int_{-\infty}^{\infty} e^{-x^2}\,dx
can be evaluated. The definite integral of an arbitrary Gaussian function (for a > 0) is
\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.
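This can be sanity-checked numerically, as in the sketch below using SciPy's quadrature routine; the parameter values a = 2 and b = 0.5 are arbitrary choices for illustration.

```python
import math
from scipy.integrate import quad

# Numerically verify: integral over R of exp(-a*(x+b)^2) equals sqrt(pi/a).
# a = 2.0, b = 0.5 are arbitrary illustrative values.
a, b = 2.0, 0.5
numeric, _ = quad(lambda x: math.exp(-a * (x + b) ** 2), -math.inf, math.inf)
exact = math.sqrt(math.pi / a)
print(numeric, exact)  # both ~1.2533
```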
Computation
By polar coordinates
A standard way to compute the Gaussian integral, the idea of which goes back to Poisson,[3] is to make use of the property that:
\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy.
Consider the function e^{-(x^2+y^2)} = e^{-r^2} on the plane \mathbb{R}^2, and compute its integral two ways:
1. on one hand, by double integration in the Cartesian coordinate system, its integral is a square: \left(\int e^{-x^2}\,dx\right)^2;
2. on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be \pi.
Comparing these two computations yields the integral, though one should take care about the improper integrals involved.
Using Fubini's theorem, the above double integral can be seen as an area integral
\iint_{[-a,a]\times[-a,a]} e^{-(x^2+y^2)}\,dA,
taken over a square with vertices {(−a, a), (a, a), (a, −a), (−a, −a)} on the xy-plane.
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than this area integral, and similarly the integral taken over the square's circumcircle must be greater than it. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates (x = r\cos\theta, y = r\sin\theta, dA = r\,dr\,d\theta): for the incircle of radius a,
\int_0^{2\pi}\int_0^{a} r e^{-r^2}\,dr\,d\theta = \pi\left(1 - e^{-a^2}\right),
and likewise \pi\left(1 - e^{-2a^2}\right) for the circumcircle of radius a\sqrt{2}. Squeezing the area integral between these bounds and letting a \to \infty gives
\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi, \qquad \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
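In polar coordinates the incircle and circumcircle integrals evaluate to π(1 − e^{−a²}) and π(1 − e^{−2a²}), and the resulting squeeze can be checked numerically; the sketch below uses SciPy, with the half-width a = 1.5 an arbitrary choice.

```python
import math
from scipy.integrate import quad

# Check the incircle/circumcircle bounds for a square of half-width a:
#   pi*(1 - exp(-a^2))  <  (integral_{-a}^{a} e^{-x^2} dx)^2  <  pi*(1 - exp(-2*a^2))
a = 1.5  # arbitrary half-width
sq, _ = quad(lambda x: math.exp(-x * x), -a, a)
square = sq ** 2
lower = math.pi * (1 - math.exp(-a * a))
upper = math.pi * (1 - math.exp(-2 * a * a))
print(lower, square, upper)  # all three approach pi as a grows
```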
By Laplace's method
In the Laplace approximation, we deal only with up-to-second-order terms in the Taylor expansion, so we consider e^{-x^2} \approx 1 - x^2 \approx (1+x^2)^{-1}.
In fact, since (1+t) \le e^{t} for all t, we have the exact bounds
1 - x^2 \le e^{-x^2} \le (1+x^2)^{-1},
where the left inequality is only useful on [-1, 1], where 1 - x^2 \ge 0. Then we can do the bound at the Laplace approximation limit:
\int_{[-1,1]} (1-x^2)^N\,dx \le \int_{\mathbb{R}} e^{-Nx^2}\,dx \le \int_{\mathbb{R}} (1+x^2)^{-N}\,dx.
That is, substituting x \mapsto x/\sqrt{N} in the middle integral,
\sqrt{N}\int_{[-1,1]} (1-x^2)^N\,dx \le \int_{\mathbb{R}} e^{-x^2}\,dx \le \sqrt{N}\int_{\mathbb{R}} (1+x^2)^{-N}\,dx.
By trigonometric substitution, we exactly compute those two bounds:
\int_{[-1,1]} (1-x^2)^N\,dx = \frac{2\,(2N)!!}{(2N+1)!!} \quad\text{and}\quad \int_{\mathbb{R}} (1+x^2)^{-N}\,dx = \frac{\pi\,(2N-3)!!}{(2N-2)!!}.
By taking the square root of the Wallis formula,
\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{(2n)(2n)}{(2n-1)(2n+1)},
we have \sqrt{\pi} = \lim_{N\to\infty} \frac{2\sqrt{N}\,(2N)!!}{(2N+1)!!}, the desired lower bound limit. Similarly we can get the desired upper bound limit.
Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
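The convergence of the two double-factorial bounds to √π can be watched numerically; the sketch below is plain Python, and the cutoff N = 200 is an arbitrary choice.

```python
import math

# Evaluate the lower bound 2*sqrt(N)*(2N)!!/(2N+1)!! and the upper bound
# pi*sqrt(N)*(2N-3)!!/(2N-2)!!; both converge to sqrt(pi) as N grows.
def double_factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

N = 200  # arbitrary cutoff
# Divide the (huge) integers first so the intermediate value stays a small float.
lower = 2 * math.sqrt(N) * (double_factorial(2 * N) / double_factorial(2 * N + 1))
upper = math.pi * math.sqrt(N) * (double_factorial(2 * N - 3) / double_factorial(2 * N - 2))
print(lower, math.sqrt(math.pi), upper)
```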
Relation to the gamma function
The integrand is an even function, so
\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx.
Thus, after the change of variable x = \sqrt{t}, this turns into the Euler integral
2\int_{0}^{\infty} \tfrac{1}{2} e^{-t}\,t^{-1/2}\,dt = \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi},
where \Gamma is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt{\pi}. More generally,
\int_{0}^{\infty} e^{-ax^{b}}\,dx = \frac{\Gamma(1/b)}{b\,a^{1/b}},
which can be obtained by substituting t = ax^{b} in the integrand of the gamma function to get \Gamma\!\left(\tfrac{1}{b}\right) = b\,a^{1/b}\int_{0}^{\infty} e^{-ax^{b}}\,dx.
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
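The identity ∫₀^∞ e^{−ax^b} dx = Γ(1/b)/(b a^{1/b}) can be checked numerically; the sketch below uses SciPy, with a = 3 and b = 4 arbitrary illustrative values.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

# Check: integral_0^inf exp(-a*x^b) dx = Gamma(1/b) / (b * a^(1/b)).
# a = 3.0, b = 4.0 are arbitrary illustrative values.
a, b = 3.0, 4.0
numeric, _ = quad(lambda x: math.exp(-a * x ** b), 0, math.inf)
exact = gamma(1 / b) / (b * a ** (1 / b))
print(numeric, exact)  # both ~0.6887
```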
n-dimensional and functional generalization
Suppose A is a symmetric positive-definite n \times n matrix. Then
\int \exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}},
and the even moments satisfy
\int x_{k_1}\cdots x_{k_{2N}} \exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\,\frac{1}{2^N N!} \sum_{\sigma \in S_{2N}} (A^{-1})_{k_{\sigma(1)} k_{\sigma(2)}} \cdots (A^{-1})_{k_{\sigma(2N-1)} k_{\sigma(2N)}},
where σ is a permutation of {1, …, 2N} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2N} of N copies of A^{-1}.
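For n = 2 and 2N = 4, the pairing rule gives ⟨x₁²x₂²⟩ = C₁₁C₂₂ + 2C₁₂² with C = A⁻¹, which the sketch below checks by brute-force quadrature on a grid using NumPy; the particular matrix A is an arbitrary symmetric positive-definite example.

```python
import numpy as np

# Wick pairing check for n = 2, 2N = 4:
#   <x1^2 x2^2> = C11*C22 + 2*C12^2,  where C = A^{-1}.
# A below is an arbitrary symmetric positive-definite example.
A = np.array([[2.0, 0.6], [0.6, 1.0]])
C = np.linalg.inv(A)

# Brute-force the Gaussian expectation on a grid (plain Riemann sum;
# the normalization and the grid spacing cancel in the ratio).
x = np.linspace(-8.0, 8.0, 801)
X1, X2 = np.meshgrid(x, x, indexing="ij")
w = np.exp(-0.5 * (A[0, 0] * X1**2 + 2 * A[0, 1] * X1 * X2 + A[1, 1] * X2**2))
moment = (X1**2 * X2**2 * w).sum() / w.sum()

wick = C[0, 0] * C[1, 1] + 2 * C[0, 1] ** 2
print(moment, wick)  # both ~1.011
```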
Alternatively,
\int f(\vec{x}) \exp\left(-\tfrac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}} \left[\exp\left(\tfrac{1}{2}\sum_{i,j=1}^{n} (A^{-1})_{ij} \frac{\partial}{\partial x_i}\frac{\partial}{\partial x_j}\right) f(\vec{x})\right]_{\vec{x}=0}
for some analytic function f, provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.
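In one dimension (A reduced to a scalar a) the identity reads ∫ f(x) e^{−ax²/2} dx = √(2π/a) [exp((1/(2a)) d²/dx²) f](0), and for a polynomial the operator power series terminates, so it can be checked directly. The sketch below does this in Python for the arbitrary choices f(x) = x⁴ and a = 1.

```python
import math
from scipy.integrate import quad

# 1-D check of the operator identity (scalar A = a, polynomial f):
#   integral f(x) exp(-a*x^2/2) dx = sqrt(2*pi/a) * [exp((1/(2a)) d^2/dx^2) f](0)
a = 1.0
f = [0.0, 0.0, 0.0, 0.0, 1.0]  # coefficients of f, lowest degree first: x^4

def second_derivative(coeffs):
    # d^2/dx^2 acting on a coefficient list.
    return [(k + 2) * (k + 1) * c for k, c in enumerate(coeffs[2:])]

# Truncated power series of exp((1/(2a)) D^2) applied to f, evaluated at x = 0;
# the n-th term is f^(2n)(0) / (n! * (2a)^n).
rhs, term, denom = 0.0, f, 1.0
for n in range(5):
    rhs += term[0] / denom if term else 0.0
    term = second_derivative(term)
    denom *= (n + 1) * 2 * a

lhs, _ = quad(lambda x: x ** 4 * math.exp(-a * x * x / 2), -math.inf, math.inf)
print(lhs, math.sqrt(2 * math.pi / a) * rhs)  # both ~7.5199
```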
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case.[citation needed] There is still the problem, though, that (2\pi)^{\infty} is infinite and also, the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:
\frac{\int f(x_1)\cdots f(x_{2N}) \exp\left[-\tfrac{1}{2}\iint A(x,y) f(x) f(y)\,dx\,dy\right] \mathcal{D}f}{\int \exp\left[-\tfrac{1}{2}\iint A(x,y) f(x) f(y)\,dx\,dy\right] \mathcal{D}f} = \frac{1}{2^N N!} \sum_{\sigma \in S_{2N}} A^{-1}(x_{\sigma(1)}, x_{\sigma(2)}) \cdots A^{-1}(x_{\sigma(2N-1)}, x_{\sigma(2N)}).
In the DeWitt notation, the equation looks identical to the finite-dimensional case.
n-dimensional with linear term
If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)
\int \exp\left(-\tfrac{1}{2}\vec{x}^{\mathsf{T}} A \vec{x} + \vec{b}^{\mathsf{T}}\vec{x}\right) d^n x = \sqrt{\frac{(2\pi)^n}{\det A}}\, \exp\left(\tfrac{1}{2}\vec{b}^{\mathsf{T}} A^{-1} \vec{b}\right),
which follows from completing the square: -\tfrac{1}{2}\vec{x}^{\mathsf{T}} A \vec{x} + \vec{b}^{\mathsf{T}}\vec{x} = -\tfrac{1}{2}(\vec{x} - A^{-1}\vec{b})^{\mathsf{T}} A (\vec{x} - A^{-1}\vec{b}) + \tfrac{1}{2}\vec{b}^{\mathsf{T}} A^{-1} \vec{b}.
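A numerical check of the linear-term formula in n = 2 dimensions, sketched below with NumPy; the matrix A and vector b are arbitrary example values.

```python
import numpy as np

# Check, in n = 2 dimensions:
#   integral exp(-x^T A x / 2 + b^T x) d^n x
#     = sqrt((2*pi)^n / det A) * exp(b^T A^{-1} b / 2)
A = np.array([[2.0, 0.6], [0.6, 1.0]])  # arbitrary symmetric positive-definite
b = np.array([0.3, -0.5])               # arbitrary linear term

# Brute-force the integral on a grid (plain Riemann sum).
x = np.linspace(-9.0, 9.0, 901)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
integrand = np.exp(
    -0.5 * (A[0, 0] * X1**2 + 2 * A[0, 1] * X1 * X2 + A[1, 1] * X2**2)
    + b[0] * X1 + b[1] * X2
)
numeric = integrand.sum() * dx * dx
exact = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A)) * np.exp(0.5 * b @ np.linalg.inv(A) @ b)
print(numeric, exact)  # both ~6.205
```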
An easy way to derive integrals of similar form, such as
\int_{0}^{\infty} x^{2n} e^{-x^2}\,dx = \frac{(2n-1)!!}{2^{n+1}}\sqrt{\pi},
is by differentiating under the integral sign with respect to a parameter. One could also integrate by parts and find a recurrence relation to solve this.
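For the one-sided moments I_n = ∫₀^∞ xⁿ e^{−x²} dx, integrating by parts gives the recurrence I_n = ((n − 1)/2) I_{n−2} with base cases I₀ = √π/2 and I₁ = 1/2. A sketch in Python, checking n = 6 against direct quadrature:

```python
import math
from scipy.integrate import quad

# Recurrence from integration by parts, for I_n = integral_0^inf x^n exp(-x^2) dx:
#   I_n = (n - 1)/2 * I_{n-2},   I_0 = sqrt(pi)/2,   I_1 = 1/2
def gaussian_moment(n):
    if n == 0:
        return math.sqrt(math.pi) / 2
    if n == 1:
        return 0.5
    return (n - 1) / 2 * gaussian_moment(n - 2)

numeric, _ = quad(lambda x: x ** 6 * math.exp(-x * x), 0, math.inf)
print(gaussian_moment(6), numeric)  # both ~1.6617 (= 15*sqrt(pi)/16)
```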
Higher-order polynomials
Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL(n)-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.[5]
Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the integral of the exponential of a quartic polynomial admits such a series expansion, with one term for each triple of exponents n, m, p of the cubic, quadratic, and linear coefficients satisfying n + p ≡ 0 (mod 2).[citation needed]
The n + p ≡ 0 (mod 2) requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
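For the pure quartic e^{−x⁴} no series is needed: the substitution t = x⁴ reduces it to the gamma function, giving ∫_{−∞}^{∞} e^{−x⁴} dx = Γ(1/4)/2, which the sketch below checks numerically with SciPy.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

# Pure-quartic special case: integral over R of exp(-x^4) = Gamma(1/4) / 2,
# from t = x^4 and the evenness of the integrand.
numeric, _ = quad(lambda x: math.exp(-x ** 4), -math.inf, math.inf)
exact = gamma(0.25) / 2
print(numeric, exact)  # both ~1.8128
```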