Numerical linear algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics.
Numerical linear algebra aims to solve problems of continuous mathematics using finite precision arithmetic, so its algorithms must control the rounding error that computers introduce.
Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, and the eigenvalue decomposition, which can then be used to solve linear systems of equations, find eigenvalues, or carry out least squares optimisation.
History
Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations.[2] The first serious attempt to minimize computer error in the application of algorithms to real data is John von Neumann and Herman Goldstine's work in 1947.[3] The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.[2]
Matrix decompositions
Partitioned matrices
For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. For example, when solving the linear system Ax = b, rather than understanding x as the product of A^(−1) with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A.[1]: 8 Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix A, and another over the rows of A. For example, for a matrix A ∈ R^(m×n) and vectors x ∈ R^n and y ∈ R^m, we could use the column partitioning perspective to compute y := Ax + y as
for q = 1:n          % loop over the columns of A
    for p = 1:m      % loop over the rows of A
        y(p) = A(p,q)*x(q) + y(p)
    end
end
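The same column-oriented update can be sketched in Python with NumPy (the function name here is illustrative, not a standard routine):

```python
import numpy as np

def colwise_gaxpy(A, x, y):
    """Compute y := A x + y by accumulating scaled columns of A."""
    m, n = A.shape
    y = y.copy()
    for q in range(n):        # loop over the columns of A
        y += A[:, q] * x[q]   # add column q scaled by x[q]
    return y

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x = np.array([1.0, -1.0])
y = np.zeros(3)
print(colwise_gaxpy(A, x, y))   # agrees with A @ x + y
```

In practice the inner loop over rows is expressed as a single vector operation, which is exactly the saxpy-style update that optimized BLAS implementations provide.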
Singular value decomposition
The singular value decomposition of a matrix A ∈ R^(m×n) is A = UΣV*, where U and V are unitary and Σ is diagonal. The diagonal entries of Σ are called the singular values of A. Because the singular values are the square roots of the eigenvalues of the Gram matrix A*A, there is a tight connection between the singular value decomposition and eigenvalue decompositions.
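As a concrete illustration, the decomposition and its connection to the Gram matrix can be checked numerically with NumPy:

```python
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)          # A = U @ diag(s) @ Vt
assert np.allclose(U @ np.diag(s) @ Vt, A)

# The singular values are the square roots of the eigenvalues
# of the Gram matrix A*A.
gram_eigs = np.linalg.eigvalsh(A.T @ A)
assert np.allclose(np.sort(s**2), np.sort(gram_eigs))
```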
QR factorization
The QR factorization of a matrix A ∈ R^(m×n) is a matrix Q ∈ R^(m×m) and a matrix R ∈ R^(m×n) so that A = QR, where Q is orthogonal and R is upper triangular. The two main algorithms for computing QR factorizations are the Gram-Schmidt process and Householder triangularization.
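A minimal numerical check of the factorization (NumPy's qr returns the reduced factorization by default):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Q, R = np.linalg.qr(A)                    # reduced QR: Q is 3x2, R is 2x2
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))    # Q has orthonormal columns
assert np.allclose(R, np.triu(R))         # R is upper triangular
```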
LU factorization
An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M1, ..., Mm−1 to form the product Mm−1 ⋯ M1A = U, so that equivalently L = M1^(−1) ⋯ Mm−1^(−1).[1]: 147 [4]: 96
Eigenvalue decomposition
The eigenvalue decomposition of a matrix A ∈ R^(m×m) is A = XΛX^(−1), where the columns of X are the eigenvectors of A, and Λ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues of A.[1]: 33 There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.[1]: 192
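The defining relations can be verified numerically; NumPy's eig returns the eigenvalues and the eigenvector matrix X:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
evals, X = np.linalg.eig(A)         # columns of X are eigenvectors of A
Lam = np.diag(evals)
assert np.allclose(A @ X, X @ Lam)                    # A X = X Lambda
assert np.allclose(X @ Lam @ np.linalg.inv(X), A)     # A = X Lambda X^(-1)
```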
Algorithms
Gaussian elimination
From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of lower triangular matrices L1, ..., Lm−1 until U = Lm−1 ⋯ L1A is upper triangular, where L = L1^(−1) ⋯ Lm−1^(−1) is lower triangular.[1]: 148 Naive programs for Gaussian elimination are notoriously unstable and produce huge errors when applied to matrices with many significant digits.[2] The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.[1]: 151
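A textbook-style sketch of Gaussian elimination with partial pivoting, computing P, L, U with PA = LU (production code should call an optimized library routine instead):

```python
import numpy as np

def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting: P @ A = L @ U."""
    U = A.astype(float).copy()
    m = U.shape[0]
    P = np.eye(m)
    L = np.eye(m)
    for k in range(m - 1):
        # Swap the row holding the largest entry in column k into row k.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p], k:] = U[[p, k], k:]
        L[[k, p], :k] = L[[p, k], :k]
        P[[k, p], :] = P[[p, k], :]
        # Eliminate the entries below the pivot.
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]
            U[j, k:] -= L[j, k] * U[k, k:]
    return P, L, U

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
P, L, U = lu_partial_pivot(A)
assert np.allclose(P @ A, L @ U)
assert np.allclose(U, np.triu(U)) and np.allclose(L, np.tril(L))
```

Pivoting keeps every multiplier L[j, k] at most 1 in magnitude, which is what tames the error growth of the naive algorithm.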
Solutions of linear systems
Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system Ax = b, the traditional algebraic approach is to understand x as the product of A^(−1) with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A.[1]: 8
Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and of the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then equivalently Rx = Q*b, which is easy to solve because R is triangular.[1]: 54 If A = XΛX^(−1) is an eigendecomposition of A, and we seek to find b so that b = Ax, with b′ = X^(−1)b and x′ = X^(−1)x, then we have b′ = Λx′.[1]: 33 This is closely related to the solution of the linear system using the singular value decomposition, because for a normal matrix the singular values are the absolute values of the eigenvalues, and in general they are the square roots of the eigenvalues of the Gram matrix A*A. And if A = LU is an LU factorization of A, then Ax = b can be solved by first solving the triangular system Ly = b and then the triangular system Ux = y.[1]: 147 [4]: 99
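The LU route in the last sentence amounts to one forward substitution followed by one back substitution; a sketch with hand-written triangular solves for clarity:

```python
import numpy as np

def forward_sub(L, b):
    """Solve the lower triangular system L y = b."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve the upper triangular system U x = y."""
    x = np.zeros_like(y)
    for i in range(len(y) - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# A factored as L @ U (this small example needs no pivoting).
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[4.0, 2.0], [0.0, 3.0]])
A = L @ U
b = np.array([6.0, 9.0])
x = back_sub(U, forward_sub(L, b))   # solves A x = b
assert np.allclose(A @ x, b)
```

Each triangular solve costs only O(m^2) operations, which is why factoring once and reusing the factors is attractive when solving for many right-hand sides.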
Least squares optimisation
Matrix decompositions suggest a number of ways to solve the linear system r = b − Ax where we seek to minimize r, as in the regression problem. The QR algorithm solves this problem by computing the reduced QR factorization A = QR and rearranging to obtain the upper triangular system Rx = Q*b, which can then be solved for x. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition A = UΣV* and then computing the vector U*b, we reduce the least squares problem to the diagonal system ΣV*x = U*b, which is simple to solve.[1]: 84 The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram-Schmidt algorithm and Householder methods.
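Both routes can be sketched with NumPy and checked against each other (the random data here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))    # overdetermined: 20 equations, 3 unknowns
b = rng.standard_normal(20)

# QR route: minimize ||b - Ax|| by solving the triangular system R x = Q*b.
Q, R = np.linalg.qr(A)              # reduced QR
x_qr = np.linalg.solve(R, Q.T @ b)

# SVD route: with A = U Sigma V*, solve the diagonal system for V*x.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

assert np.allclose(x_qr, x_svd)
assert np.allclose(x_qr, np.linalg.lstsq(A, b, rcond=None)[0])
```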
Conditioning and stability
Allow that a problem is a function f : X → Y, where X is a normed vector space of data and Y is a normed vector space of solutions. For some data point x ∈ X, the problem is said to be ill-conditioned if a small perturbation in x produces a large change in the value of f(x). We can quantify this by defining a condition number, which represents how well-conditioned a problem is; the absolute condition number is defined as

κ = lim(δ→0) sup{‖δx‖ ≤ δ} ‖δf‖ / ‖δx‖,

where δf = f(x + δx) − f(x).
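Ill-conditioning is easy to exhibit for the linear system problem, where the relevant condition number is κ(A) = ‖A‖‖A^(−1)‖. The nearly singular matrix below is a hand-constructed example:

```python
import numpy as np

# A nearly singular matrix with an enormous condition number.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
print(np.linalg.cond(A))             # roughly 4e10

b = np.array([2.0, 2.0])
x = np.linalg.solve(A, b)            # exact solution is (2, 0)

# Perturb b by 1e-8 in one component...
b_pert = np.array([2.0, 2.0 + 1e-8])
x_pert = np.linalg.solve(A, b_pert)  # roughly (-98, 100)

# ...and the solution changes by orders of magnitude more.
rel_change = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(rel_change)                    # about 70
```

A relative perturbation of a few parts in 10^9 in the data b is amplified into a change in x larger than x itself, as the condition number predicts.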
Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra.
Iterative methods
There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m matrix require O(m^3) time, which is a surprisingly high floor given that matrices contain only m^2 numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if those steps are redundant given a highly structured matrix.
The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method. If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration.
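A compact sketch of the conjugate gradient method for a symmetric positive definite system (textbook form; real applications would use a library implementation, typically with preconditioning):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient iteration for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate search direction
        rs = rs_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(M, b)
assert np.allclose(M @ x, b)
```

Each iteration touches A only through a matrix-vector product, which is why the method excels on large sparse systems where forming a factorization would be prohibitive.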
Software
Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical computation rely on libraries that implement numerical linear algebra routines, such as BLAS and LAPACK for C and Fortran, and NumPy and SciPy for Python; the R language likewise builds its linear algebra operations on BLAS and LAPACK.[5]
References
- ^ Trefethen, Lloyd N.; Bau, David III (1997). Numerical Linear Algebra. Philadelphia: SIAM. ISBN 978-0-89871-361-9.
- ^ a b c d Golub, Gene H. "A History of Modern Numerical Linear Algebra" (PDF). University of Chicago Statistics Department. Retrieved February 17, 2019.
- ^ von Neumann, John; Goldstine, Herman H. (1947). "Numerical inverting of matrices of high order". Bulletin of the American Mathematical Society. 53 (11): 1021–1099. S2CID 16174165.
- ^ Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: The Johns Hopkins University Press. ISBN 0-8018-5413-X.
- ^ Rickert, Joseph (August 29, 2013). "R and Linear Algebra". R-bloggers. Retrieved February 17, 2019.
Further reading
- ISBN 0-19-853564-3.
External links
- Freely available software for numerical algebra on the web, composed by Jack Dongarra and Hatem Ltaief, University of Tennessee
- NAG Library of numerical linear algebra routines
- Numerical Linear Algebra Group on Twitter (Research group in the United Kingdom)
- siagla on Twitter (Activity group about numerical linear algebra in the Society for Industrial and Applied Mathematics)
- The GAMM Activity Group on Applied and Numerical Linear Algebra