Special case of covariance matrices
A covariance matrix $\Sigma$ can be represented as the product $\Sigma = A'A$. Its eigenvalues $\lambda_i$ are real and non-negative:

$$\Sigma v_i = \lambda_i v_i$$

$$A'A v_i = \lambda_i v_i$$

$$v_i' A'A v_i = v_i' \lambda_i v_i$$

$$\left\|A v_i\right\|^2 = \lambda_i \left\|v_i\right\|^2$$

$$\lambda_i = \frac{\left\|A v_i\right\|^2}{\left\|v_i\right\|^2} \geq 0$$
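A quick numerical sketch of this fact (using NumPy; the matrix `A` and its shape are arbitrary choices, not from the derivation): any matrix of the form $A'A$ has non-negative eigenvalues.

```python
import numpy as np

# Build a covariance-like matrix Sigma = A'A from an arbitrary matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

Sigma = A.T @ A                           # symmetric positive semi-definite

# eigh is NumPy's eigensolver for symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(Sigma)

# All eigenvalues are >= 0 (up to floating-point rounding).
assert np.all(eigvals >= -1e-12)
```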
The eigenvectors $v_i$ are orthogonal to one another:

$$A'A v_i = \lambda_i v_i$$

$$v_j' A'A v_i = \lambda_i v_j' v_i$$

$$(A'A v_j)' v_i = \lambda_i v_j' v_i$$

$$\lambda_j v_j' v_i = \lambda_i v_j' v_i$$

$$(\lambda_j - \lambda_i) v_j' v_i = 0$$

(for distinct eigenvalues; in the case of a repeated eigenvalue, the corresponding basis can be orthogonalized)
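A numerical sketch of the orthogonality claim (NumPy; the dimensions are illustrative): the eigenvectors returned for a symmetric matrix $A'A$ form an orthonormal set, so their Gram matrix is the identity.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
Sigma = A.T @ A                 # symmetric, so eigh applies

_, V = np.linalg.eigh(Sigma)    # columns of V are eigenvectors

# Pairwise inner products v_j' v_i: identity matrix means
# orthogonal (and here unit-norm) eigenvectors.
gram = V.T @ V
assert np.allclose(gram, np.eye(4))
```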
The Rayleigh quotient can be expressed as a function of the eigenvalues by decomposing any vector $x$ on the basis of eigenvectors:

$$x = \sum_{i=1}^n \alpha_i v_i$$

$$\rho = \frac{x' A'A x}{x' x} = \frac{\left(\sum_{j=1}^n \alpha_j v_j\right)' A'A \left(\sum_{i=1}^n \alpha_i v_i\right)}{\left(\sum_{j=1}^n \alpha_j v_j\right)' \left(\sum_{i=1}^n \alpha_i v_i\right)}$$

which, by orthonormality of the eigenvectors, becomes:

$$\rho = \frac{\sum_{i=1}^n \alpha_i^2 \lambda_i}{\sum_{i=1}^n \alpha_i^2}$$
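The identity above is easy to check numerically (NumPy sketch; the random vector and dimensions are arbitrary): computing $\rho$ directly from $\Sigma$ agrees with the eigenvalue-weighted average, where the coordinates $\alpha_i$ are obtained by projecting $x$ onto the eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
Sigma = A.T @ A
lam, V = np.linalg.eigh(Sigma)          # eigenvalues and orthonormal eigenvectors

x = rng.standard_normal(4)
alpha = V.T @ x                         # coordinates of x in the eigenbasis

rho_direct = (x @ Sigma @ x) / (x @ x)  # rho = x' A'A x / x'x
rho_eigen = (alpha**2 @ lam) / (alpha**2).sum()  # sum alpha_i^2 lambda_i / sum alpha_i^2

assert np.isclose(rho_direct, rho_eigen)
```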
If a vector $x$ maximizes $\rho$, then any scalar multiple $cx$ (for $c \neq 0$) also maximizes it, so one can reduce the problem to the Lagrange problem of maximizing $\sum_{i=1}^n \alpha_i^2 \lambda_i$ under the constraint that $\sum_{i=1}^n \alpha_i^2 = 1$.
Since all the eigenvalues are non-negative, the objective is linear in the quantities $\alpha_i^2$, so the maximum occurs on the boundary of the domain, namely when $\alpha_1 = 1$ and $\alpha_i = 0$ for $i > 1$ (when the eigenvalues are ordered in decreasing magnitude).
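This conclusion can be spot-checked numerically (NumPy sketch; the matrix and the number of random trial directions are arbitrary): the Rayleigh quotient attains the largest eigenvalue exactly at the corresponding eigenvector, and no other direction exceeds it.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
Sigma = A.T @ A
lam, V = np.linalg.eigh(Sigma)   # eigh returns eigenvalues in ascending order

v_top = V[:, -1]                 # eigenvector of the largest eigenvalue
rho = (v_top @ Sigma @ v_top) / (v_top @ v_top)
assert np.isclose(rho, lam[-1])  # the quotient equals lambda_max at v_top

# No random direction does better than lambda_max (up to rounding).
for _ in range(100):
    x = rng.standard_normal(4)
    assert (x @ Sigma @ x) / (x @ x) <= lam[-1] + 1e-9
```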
This property is the basis for principal components analysis.