Ridge regression
Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering.
The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "Ridge Regression: Biased Estimation for Nonorthogonal Problems" and "Ridge Regression: Applications to Nonorthogonal Problems".[5][6][1] This was the result of ten years of research into the field of ridge analysis.[7]
Ridge regression was developed as a possible solution to the imprecision of least-squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean squared error are often smaller than those of the least-squares estimators previously derived.[8][2]
Overview
In the simplest case, the problem of a near-singular moment matrix $\mathbf{X}^\mathsf{T}\mathbf{X}$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by

$$\hat{\beta}_{\text{ridge}} = (\mathbf{X}^\mathsf{T}\mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^\mathsf{T} \mathbf{y},$$

where $\mathbf{y}$ is the regressand, $\mathbf{X}$ is the design matrix, $\mathbf{I}$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix. Equivalently, the ridge estimator minimizes the penalized least-squares objective

$$\hat{\beta}_{\text{ridge}} = \arg\min_{\beta} \|\mathbf{y} - \mathbf{X}\beta\|_2^2 + \lambda \|\beta\|_2^2.$$
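A minimal NumPy sketch of this closed-form estimator; the data and variable names are illustrative, not taken from the sources cited:

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Closed-form ridge estimate (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    # Solve the regularized normal equations instead of forming an explicit inverse.
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Nearly collinear regressors make the OLS solution (lam = 0) unstable,
# while a modest ridge penalty shrinks and stabilizes the coefficients.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)

beta_ols = ridge_estimator(X, y, 0.0)
beta_ridge = ridge_estimator(X, y, 1.0)
```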
History
Tikhonov regularization was invented independently in many different contexts. It became widely known through its application to integral equations in the works of Andrey Tikhonov and David L. Phillips.
Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach,[17] and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter.[18] Following Hoerl, it is known in the statistical literature as ridge regression,[19] named after ridge analysis ("ridge" refers to the path from the constrained maximum).[20]

Tikhonov regularization
Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that

$$A\mathbf{x} = \mathbf{b},$$

where $\mathbf{x}$ and $\mathbf{b}$ may have different sizes and $A$ may be non-square.
The standard approach is ordinary least squares linear regression, which seeks the $\mathbf{x}$ minimizing the sum of squared residuals $\|A\mathbf{x} - \mathbf{b}\|_2^2$, where $\|\cdot\|_2$ is the Euclidean norm. However, if no $\mathbf{x}$ satisfies the equation or more than one does (that is, the solution is not unique), the problem is said to be ill-posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations.
In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:

$$\|A\mathbf{x} - \mathbf{b}\|_2^2 + \|\Gamma \mathbf{x}\|_2^2$$

for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a scalar multiple of the identity matrix ($\Gamma = \alpha I$), giving preference to solutions with smaller norms; this is known as L2 regularization. The regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{\mathbf{x}}$, is given by

$$\hat{\mathbf{x}} = (A^\mathsf{T} A + \Gamma^\mathsf{T} \Gamma)^{-1} A^\mathsf{T} \mathbf{b}.$$

The effect of regularization may be varied by the scale of the matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^\mathsf{T}A)^{-1}$ exists.
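As a sketch of the general case, the explicit solution can be computed by solving the regularized normal equations. The following NumPy fragment (illustrative names and data) accepts an arbitrary Tikhonov matrix:

```python
import numpy as np

def tikhonov_solution(A, b, Gamma):
    """Explicit Tikhonov solution (A'A + G'G)^(-1) A'b for a Tikhonov matrix G."""
    return np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

# Gamma = alpha * I recovers plain ridge regression; other choices, such as a
# discrete difference operator, penalize roughness rather than magnitude.
A = np.vander(np.linspace(0.0, 1.0, 50), 8)  # ill-conditioned design matrix
b = A @ np.ones(8) + 0.01 * np.random.default_rng(1).normal(size=50)
x_hat = tikhonov_solution(A, b, 0.1 * np.eye(8))
```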
L2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines,[22] and matrix factorization.[23]
Application to existing fit results
Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularized optimization has taken place. For example, if the above problem with $\Gamma = 0$ yields the solution $\hat{\mathbf{x}}_0$, the solution in the presence of $\Gamma \neq 0$ can be expressed as

$$\hat{\mathbf{x}} = B \hat{\mathbf{x}}_0,$$

with the "regularization matrix" $B = (A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma)^{-1} A^\mathsf{T}A$.

If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties $V_0$, then the regularization matrix will be

$$B = (V_0^{-1} + \Gamma^\mathsf{T}\Gamma)^{-1} V_0^{-1},$$

and the regularized result will have a new covariance $V = B V_0 B^\mathsf{T}$.
In the context of arbitrary likelihood fits, this is valid as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularized result is small, one can regularize any result that is presented as a best-fit point with a covariance matrix; no detailed knowledge of the underlying likelihood function is needed.[24]
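Assuming the update formulas above, a small sketch of how an existing fit result could be regularized after the fact (hypothetical helper with illustrative names):

```python
import numpy as np

def regularize_fit(x0, V0, Gamma):
    """Regularize an existing best-fit point x0 with covariance V0, without
    refitting; valid while the quadratic (Gaussian) approximation of the
    likelihood around x0 holds."""
    V0_inv = np.linalg.inv(V0)
    B = np.linalg.solve(V0_inv + Gamma.T @ Gamma, V0_inv)  # regularization matrix
    x = B @ x0          # regularized parameter estimate
    V = B @ V0 @ B.T    # covariance of the regularized result
    return x, V
```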
Generalized Tikhonov regularization
For general multivariate normal distributions for $\mathbf{x}$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $\mathbf{x}$ to minimize

$$\|A\mathbf{x} - \mathbf{b}\|_P^2 + \|\mathbf{x} - \mathbf{x}_0\|_Q^2,$$

where $\|\mathbf{x}\|_Q^2$ stands for the weighted norm squared $\mathbf{x}^\mathsf{T} Q \mathbf{x}$ (compare with the Mahalanobis distance). In the Bayesian interpretation $P$ is the inverse covariance matrix of $\mathbf{b}$, $\mathbf{x}_0$ is the expected value of $\mathbf{x}$, and $Q$ is the inverse covariance matrix of $\mathbf{x}$.
This generalized problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula

$$\mathbf{x}^* = (A^\mathsf{T} P A + Q)^{-1} (A^\mathsf{T} P \mathbf{b} + Q \mathbf{x}_0),$$

or equivalently, when $Q$ is not a null matrix,

$$\mathbf{x}^* = \mathbf{x}_0 + (A^\mathsf{T} P A + Q)^{-1} A^\mathsf{T} P (\mathbf{b} - A \mathbf{x}_0).$$
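A direct transcription of this closed form into NumPy (a sketch; the names are illustrative):

```python
import numpy as np

def generalized_tikhonov(A, b, P, Q, x0):
    """Minimizer of ||Ax - b||_P^2 + ||x - x0||_Q^2 via the closed form
    x* = (A'PA + Q)^(-1) (A'Pb + Q x0)."""
    return np.linalg.solve(A.T @ P @ A + Q, A.T @ P @ b + Q @ x0)
```

Setting $P = I$, $Q = \Gamma^\mathsf{T}\Gamma$, and $\mathbf{x}_0 = 0$ recovers the standard Tikhonov solution above.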
Lavrentyev regularization
In some situations, one can avoid using the transpose $A^\mathsf{T}$, as proposed by Mikhail Lavrentyev.[25] For example, if $A$ is symmetric positive definite, i.e. $A = A^\mathsf{T} > 0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\|\mathbf{x}\|_P^2 = \mathbf{x}^\mathsf{T} A^{-1} \mathbf{x}$ in the generalized Tikhonov regularization, leading to minimizing

$$\|A\mathbf{x} - \mathbf{b}\|_{A^{-1}}^2 + \|\mathbf{x} - \mathbf{x}_0\|_Q^2$$

or, equivalently up to a constant term,

$$\mathbf{x}^\mathsf{T}(A + Q)\mathbf{x} - 2\,\mathbf{x}^\mathsf{T}(\mathbf{b} + Q\mathbf{x}_0).$$
This minimization problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula

$$\mathbf{x}^* = (A + Q)^{-1}(\mathbf{b} + Q\mathbf{x}_0),$$

which is nothing but the solution of the generalized Tikhonov problem where $A = A^\mathsf{T} = P^{-1}$.
The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrix $A + Q$ can be better conditioned, i.e., have a smaller condition number, compared to the Tikhonov matrix $A^\mathsf{T}A + \Gamma^\mathsf{T}\Gamma$.
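A short sketch illustrating both the Lavrentyev solution and the conditioning remark, under the assumption $\Gamma^\mathsf{T}\Gamma = Q$ so that the two regularized matrices are comparable (illustrative data):

```python
import numpy as np

def lavrentyev(A, b, Q, x0):
    """Lavrentyev-regularized solution x* = (A + Q)^(-1) (b + Q x0),
    for symmetric positive-definite A."""
    return np.linalg.solve(A + Q, b + Q @ x0)

rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6))
A = M @ M.T + 1e-6 * np.eye(6)      # symmetric positive definite, near-singular
Q = 0.1 * np.eye(6)
Gamma = np.sqrt(0.1) * np.eye(6)    # chosen so that Gamma'Gamma = Q

print(np.linalg.cond(A + Q))                      # Lavrentyev matrix
print(np.linalg.cond(A.T @ A + Gamma.T @ Gamma))  # Tikhonov matrix, roughly squared
```

Forming $A^\mathsf{T}A$ roughly squares the condition number, which is what the comparison above demonstrates numerically.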
Regularization in Hilbert space
Typically discrete linear ill-conditioned problems result from discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret $A$ as a compact operator on Hilbert spaces, and $\mathbf{x}$ and $\mathbf{b}$ as elements in the domain and range of $A$. The operator $A^* A + \Gamma^\mathsf{T} \Gamma$ is then a self-adjoint bounded invertible operator.
Relation to singular-value decomposition and Wiener filter
With $\Gamma = \alpha I$, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition

$$A = U \Sigma V^\mathsf{T}$$

with singular values $\sigma_i$, the Tikhonov regularized solution can be expressed as

$$\hat{\mathbf{x}} = V D U^\mathsf{T} \mathbf{b},$$

where $D$ has diagonal values

$$D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}$$

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem.
Finally, it is related to the Wiener filter:

$$\hat{\mathbf{x}} = \sum_{i=1}^{q} f_i \frac{\mathbf{u}_i^\mathsf{T} \mathbf{b}}{\sigma_i} \mathbf{v}_i,$$

where the Wiener weights are $f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}$ and $q$ is the rank of $A$.
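The SVD representation suggests a numerically convenient way to compute the solution, since the decomposition can be reused for many values of $\alpha$. A sketch (illustrative data) that also checks agreement with the direct closed form:

```python
import numpy as np

def ridge_via_svd(A, b, alpha):
    """Tikhonov solution with Gamma = alpha * I through the SVD:
    x = sum_i f_i * (u_i . b) / s_i * v_i, with f_i = s_i^2 / (s_i^2 + alpha^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha**2)  # Wiener-style filter factors
    return Vt.T @ (f * (U.T @ b) / s)

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
x_svd = ridge_via_svd(A, b, 0.5)
x_direct = np.linalg.solve(A.T @ A + 0.5**2 * np.eye(5), A.T @ b)
assert np.allclose(x_svd, x_direct)
```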
Determination of the Tikhonov factor
The optimal regularization parameter $\alpha$ is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method,[27] restricted maximum likelihood and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes[28][29]

$$G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\|A\hat{\mathbf{x}} - \mathbf{b}\|^2}{\left[\operatorname{Tr}\left(I - A(A^\mathsf{T}A + \alpha^2 I)^{-1}A^\mathsf{T}\right)\right]^2},$$

where $\operatorname{RSS}$ is the residual sum of squares and $\tau$ is the effective number of degrees of freedom.
Using the previous SVD decomposition, we can simplify the above expression:

$$\operatorname{RSS} = \left\|\mathbf{b} - \sum_{i=1}^{q} (\mathbf{u}_i^\mathsf{T} \mathbf{b})\, \mathbf{u}_i\right\|^2 + \left\|\sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (\mathbf{u}_i^\mathsf{T} \mathbf{b})\, \mathbf{u}_i\right\|^2 = \operatorname{RSS}_0 + \left\|\sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (\mathbf{u}_i^\mathsf{T} \mathbf{b})\, \mathbf{u}_i\right\|^2,$$

where $\operatorname{RSS}_0$ is the residual sum of squares without regularization, and

$$\tau = m - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} = m - q + \sum_{i=1}^{q} \frac{\alpha^2}{\sigma_i^2 + \alpha^2}.$$
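A sketch of choosing $\alpha$ by minimizing the generalized cross-validation score on a grid, using the SVD identities above (illustrative data and names):

```python
import numpy as np

def gcv_score(A, b, alpha):
    """Generalized cross-validation score G = RSS / tau^2 for Gamma = alpha * I."""
    m = A.shape[0]
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha**2)   # filter factors
    b_fit = U @ (f * (U.T @ b))    # fitted values A @ x_hat
    rss = np.sum((b - b_fit)**2)
    tau = m - np.sum(f)            # effective residual degrees of freedom
    return rss / tau**2

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 6))
b = A @ rng.normal(size=6) + rng.normal(size=40)
alphas = np.logspace(-3, 2, 50)
alpha_best = min(alphas, key=lambda a: gcv_score(A, b, a))
```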
Relation to probabilistic formulation
The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix $C_M$ representing the a priori uncertainties on the model parameters, and a covariance matrix $C_D$ representing the uncertainties on the observed parameters.[30] In the special case when these two matrices are diagonal and isotropic, $C_M = \sigma_M^2 I$ and $C_D = \sigma_D^2 I$, the equations of inverse theory reduce to the equations above, with $\alpha = \sigma_D / \sigma_M$.
Bayesian interpretation
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix $\Gamma$ seems rather arbitrary, the process can be justified from a Bayesian point of view.[31] Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of $\mathbf{x}$ is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation $\sigma_x$. The data are also subject to errors, and the errors in $\mathbf{b}$ are also assumed to be independent with zero mean and standard deviation $\sigma_b$. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of $\mathbf{x}$, according to Bayes' theorem; the Tikhonov matrix is then $\Gamma = \alpha I$ for Tikhonov factor $\alpha = \sigma_b / \sigma_x$.
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimal unbiased linear estimator.
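A small numerical check of this interpretation, assuming the Gaussian prior and noise model stated above: the maximum a posteriori estimate obtained by directly minimizing the negative log-posterior should coincide with the Tikhonov solution for $\alpha = \sigma_b / \sigma_x$ (illustrative data; uses SciPy's optimizer):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
A = rng.normal(size=(20, 4))
b = rng.normal(size=20)
sigma_b, sigma_x = 0.5, 2.0   # noise and prior standard deviations
alpha = sigma_b / sigma_x     # implied Tikhonov factor

def neg_log_posterior(x):
    # Gaussian likelihood b|x ~ N(Ax, sigma_b^2 I), Gaussian prior x ~ N(0, sigma_x^2 I)
    return np.sum((A @ x - b)**2) / sigma_b**2 + np.sum(x**2) / sigma_x**2

x_map = minimize(neg_log_posterior, np.zeros(4), method="BFGS",
                 options={"gtol": 1e-10}).x
x_tikhonov = np.linalg.solve(A.T @ A + alpha**2 * np.eye(4), A.T @ b)
assert np.allclose(x_map, x_tikhonov, atol=1e-6)
```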
See also
- LASSO estimator is another regularization method in statistics.
- Elastic net regularization
- Matrix regularization
Notes
- ^ In statistics, the method is known as ridge regression, in machine learning it and its modifications are known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, L2 regularization, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.
References
- ^ Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators. Boca Raton: CRC Press. ISBN 978-0-8247-0156-7.
- ISBN 0-262-61183-X.
- Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators. Boca Raton: CRC Press. ISBN 0-8247-0156-9.
- Hoerl, Arthur E.; Kennard, Robert W. (1970). "Ridge Regression: Biased Estimation for Nonorthogonal Problems". Technometrics. 12 (1): 55–67. JSTOR 1267351.
- Hoerl, Arthur E.; Kennard, Robert W. (1970). "Ridge Regression: Applications to Nonorthogonal Problems". Technometrics. 12 (1): 69–82. JSTOR 1267352.
- ISBN 978-0-471-06118-2.
- ISBN 978-0-387-22440-4.
- ^ For the choice of $\lambda$ in practice, see Khalaf, Ghadban; Shukur, Ghazi (2005). "Choosing Ridge Parameter for Regression Problems". S2CID 122983724.
- van Wieringen, Wessel N. (2015). "Lecture notes on ridge regression". arXiv:1509.09169 [stat.ME].
- Tikhonov, A. N. (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198. Archived from the original on 2005-02-27.
- ^ Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации" [On the solution of ill-posed problems and the method of regularization]. Doklady Akademii Nauk SSSR. 151: 501–504. Translated in "Solution of incorrectly formulated problems and the regularization method". Soviet Mathematics. 4: 1035–1038.
- ISBN 0-470-99124-0.
- ISBN 0-7923-3583-X. Retrieved 9 August 2018.
- ISBN 0-412-78660-5. Retrieved 9 August 2018.
- S2CID 35368397.
- ^ Hoerl, Arthur E. (1962). "Application of Ridge Analysis to Regression Problems". Chemical Engineering Progress. 58 (3): 54–59.
- Foster, Manus (1961). "An Application of the Wiener–Kolmogorov Smoothing Theory to Matrix Inversion". Journal of the Society for Industrial and Applied Mathematics. 9 (3): 387–392. doi:10.1137/0109031.
- ISSN 0040-1706.
- ^ Ng, Andrew Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance (PDF). Proc. ICML.
- ^ R.-E. Fan; K.-W. Chang; C.-J. Hsieh; X.-R. Wang; C.-J. Lin (2008). "LIBLINEAR: A library for large linear classification". Journal of Machine Learning Research. 9: 1871–1874.
- S2CID 8755408.
- ^ Lavrentiev, M. M. (1967). Some Improperly Posed Problems of Mathematical Physics. New York: Springer.
- ISBN 978-0-89871-403-6.
- ^ Hansen, P. C. "The L-curve and its use in the numerical treatment of inverse problems".
- Wahba, Grace (1990). Spline Models for Observational Data. Philadelphia: SIAM. Bibcode:1990smod.conf.....W.
- ISBN 0-89871-792-2. Retrieved 9 August 2018.
- ISBN 0-471-09077-8.
- ISBN 0-89871-550-4.
- ISBN 0-674-00560-0.
Further reading
- Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge Regression Estimators. Boca Raton: CRC Press. ISBN 0-8247-0156-9.
- Kress, Rainer (1998). "Tikhonov Regularization". Numerical Analysis. New York: Springer. pp. 86–90. ISBN 0-387-98408-9.
- Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 19.5. Linear Regularization Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
- Saleh, A. K. Md. Ehsanes; Arashi, Mohammad; Kibria, B. M. Golam (2019). Theory of Ridge Regression Estimation with Applications. New York: John Wiley & Sons. ISBN 978-1-118-64461-4.
- Taddy, Matt (2019). "Regularization". Business Data Science: Combining Machine Learning and Economics to Optimize, Automate, and Accelerate Business Decisions. New York: McGraw-Hill. pp. 69–104. ISBN 978-1-260-45277-8.