Rosenbrock function

In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms.[1] It is also known as Rosenbrock's valley or Rosenbrock's banana function.
The function is defined by

$$f(x, y) = (a - x)^2 + b(y - x^2)^2.$$

It has a global minimum at $(x, y) = (a, a^2)$, where $f(x, y) = 0$. Usually, these parameters are set such that $a = 1$ and $b = 100$. Only in the trivial case where $a = 0$ is the function symmetric and the minimum at the origin.
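A minimal sketch of this two-variable form in Python (the function name and defaults are illustrative, not from any particular library):

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """f(x, y) = (a - x)^2 + b*(y - x^2)^2, with the usual a = 1, b = 100."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

# The global minimum sits at (a, a^2), where the value is exactly 0.
assert rosenbrock(1.0, 1.0) == 0.0
# Off the curved valley y = x^2, the b-weighted term dominates:
print(rosenbrock(0.0, 0.0))   # 1.0
print(rosenbrock(1.0, 2.0))   # 100.0
```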
Multidimensional generalizations
Two variants are commonly encountered.

One is the sum of $N/2$ uncoupled 2D Rosenbrock problems, and is defined only for even $N$:

$$f(\mathbf{x}) = f(x_1, x_2, \dots, x_N) = \sum_{i=1}^{N/2} \left[ 100 \left( x_{2i-1}^2 - x_{2i} \right)^2 + \left( x_{2i-1} - 1 \right)^2 \right].$$

This variant has predictably simple solutions: each 2D sub-problem is minimized independently at $(1, 1)$, so the global minimum is the all-ones vector. A sketch of this variant with NumPy follows (the function name is illustrative):
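```python
import numpy as np

def rosenbrock_uncoupled(x):
    """Sum of N/2 independent 2-D Rosenbrock problems; requires even N."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "this variant is defined only for even N"
    x_odd, x_even = x[0::2], x[1::2]   # x_{2i-1} and x_{2i} in 1-based notation
    return np.sum(100.0 * (x_odd ** 2 - x_even) ** 2 + (x_odd - 1.0) ** 2)

# Each 2-D sub-problem attains 0 at (1, 1), so the global minimum of the
# sum is the all-ones vector:
print(rosenbrock_uncoupled([1.0, 1.0, 1.0, 1.0]))  # 0.0
```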
A second, more involved variant is

$$f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( 1 - x_i \right)^2 \right].$$

This variant has been shown to have exactly one minimum for $N = 3$ (at $(1, 1, 1)$) and exactly two minima for $4 \le N \le 7$: the global minimum at $(1, 1, \dots, 1)$ and a local minimum near $(x_1, x_2, \dots, x_N) = (-1, 1, \dots, 1)$. This result is obtained by setting the gradient of the function equal to zero and noticing that the resulting equation is a rational function of $x$. For small $N$ the polynomials can be determined exactly and Sturm's theorem can be used to determine the number of real roots, and the roots can be bounded in the region $|x_i| < 2.4$.[5] For larger $N$ this method breaks down due to the size of the coefficients involved.
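The claim about the second minimum can be probed numerically. The sketch below (using SciPy; the name rosenbrock_nd is illustrative) polishes a point near $(-1, 1, \dots, 1)$ for $N = 4$ and typically settles in the local basin rather than at the global minimum:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock_nd(x):
    """Coupled variant: sum of 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

N = 4
x0 = np.array([-1.0] + [1.0] * (N - 1))   # start near the reported local minimum
res = minimize(rosenbrock_nd, x0, method="Nelder-Mead", options={"xatol": 1e-10})
# Typically converges to the local minimum (x[0] < 0, f > 0), not to the
# global minimum at (1, 1, 1, 1) where f = 0.
print(res.x, res.fun)
```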
Stationary points
Many of the stationary points of the function exhibit a regular pattern when plotted.[5] This structure can be exploited to locate them.
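For very small $N$, the stationary points of the coupled variant can even be found symbolically by solving $\nabla f = 0$ exactly. A sketch with SymPy (illustrative, and practical only for small $N$):

```python
import sympy as sp

N = 3
x = sp.symbols(f"x0:{N}", real=True)
f = sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2 for i in range(N - 1))
grad = [sp.diff(f, xi) for xi in x]

# Solving the polynomial system grad = 0 exactly; for N = 3 the only real
# stationary point is (1, 1, 1), consistent with the single-minimum result.
print(sp.solve(grad, x, dict=True))
```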

Optimization examples


The Rosenbrock function can be efficiently optimized by adapting an appropriate coordinate system without using any gradient information and without building local approximation models (in contrast to many derivative-free optimizers). For example, optimizing the 2-dimensional Rosenbrock function by adaptive coordinate descent from the starting point $x_0 = (-3, -4)$ reaches a solution with function value $10^{-10}$ after 325 function evaluations.
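Adaptive coordinate descent itself is not bundled with common scientific Python libraries; as a rough stand-in, the sketch below uses SciPy's built-in rosen with Powell's method, another derivative-free approach that searches along an adapting set of directions (the tolerances are illustrative):

```python
import numpy as np
from scipy.optimize import minimize, rosen

res = minimize(rosen, x0=np.array([-3.0, -4.0]), method="Powell",
               options={"xtol": 1e-12, "ftol": 1e-12})
print(res.x, res.fun, res.nfev)  # near (1, 1); evaluation counts will differ
```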
Using the Nelder–Mead method from the starting point $x_0 = (-1, 1)$ with a regular initial simplex, a minimum with function value $1.36 \cdot 10^{-10}$ is found after 185 function evaluations.
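A comparable Nelder–Mead run is easy to reproduce with SciPy (exact evaluation counts depend on the initial simplex and stopping tolerances, so they need not match the figures quoted above):

```python
import numpy as np
from scipy.optimize import minimize, rosen

res = minimize(rosen, x0=np.array([-1.0, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10})
print(res.x, res.fun, res.nfev)
```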
References
- Rosenbrock, H. H. (1960). "An Automatic Method for Finding the Greatest or Least Value of a Function". The Computer Journal. 3 (3): 175–184. doi:10.1093/comjnl/3.3.175. ISSN 0010-4620.
- Simionescu, P. A. (2014). Computer-Aided Graphing and Simulation Tools for AutoCAD Users (1st ed.). Boca Raton, FL: CRC Press. ISBN 978-1-4822-5290-3.
- "Generalized Rosenbrock's function". Retrieved 2008-09-16.
- Kok, Schalk; Sandrock, Carl (2009). "Locating and Characterizing the Stationary Points of the Extended Rosenbrock Function". Evolutionary Computation. 17 (3): 437–453. doi:10.1162/evco.2009.17.3.437. PMID 19708775.