List of metaphor-based metaheuristics

Source: Wikipedia, the free encyclopedia.

[Figure: A diagram classifying the various kinds of metaheuristics]

This is a chronologically ordered list of metaphor-based metaheuristics and swarm intelligence algorithms, sorted by decade of proposal.

Algorithms

1980s-1990s

Simulated annealing (Kirkpatrick et al., 1983)

[Animation: Visualization of simulated annealing solving a three-dimensional travelling salesman problem instance on 120 points]

Simulated annealing is a probabilistic algorithm inspired by annealing, a heat-treatment method in metallurgy. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.

The analogue of the slow cooling of annealing is a slow decrease in the probability of simulated annealing accepting worse solutions as it explores the solution space. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the optimal solution.
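
The acceptance rule described above can be sketched in Python as follows. This is a minimal illustrative implementation, not taken from the cited sources; the initial temperature `t0`, the geometric `cooling` factor, and the step count are arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    """Minimize `cost` from starting point x0. Worse moves are accepted
    with probability exp(-delta / T); T decays geometrically, which is
    the 'slow cooling' analogue described in the text."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        delta = fy - fx
        # Always accept improvements; accept worse moves with prob. exp(-delta/T).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # slowly lower the probability of accepting worse moves
    return best, fbest

# Toy usage: minimize f(x) = x^2 over the integers with +/-1 moves.
random.seed(0)
best, fbest = simulated_annealing(
    cost=lambda x: x * x,
    neighbor=lambda x: x + random.choice((-1, 1)),
    x0=40,
)
```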

Ant colony optimization (ACO) (Dorigo, 1992)

The ant colony optimization algorithm is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Initially proposed by Marco Dorigo in 1992 in his PhD thesis,[2] the first algorithm aimed to find an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and many variants have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search[3] and shares some similarities with estimation of distribution algorithms.
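
A minimal sketch of ACO for the travelling salesman problem follows. It is illustrative rather than drawn from the cited sources; the parameter values (`alpha`, `beta`, evaporation rate `rho`, deposit constant `q`) are conventional but arbitrary choices.

```python
import math
import random

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Minimal ACO for the symmetric TSP on distance matrix `dist`.
    Ants pick the next city with probability proportional to
    pheromone^alpha * (1/distance)^beta; trails then evaporate at rate
    rho and each ant deposits q / tour_length on the edges it used."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on every edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = random.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):            # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:    # deposit, stronger for shorter tours
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len

# Toy usage: four cities on a unit square; the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]
random.seed(0)
tour, length = ant_colony_tsp(dist)
```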

Particle swarm optimization (PSO) (Kennedy & Eberhart, 1995)

Particle swarm optimization is a computational method that optimizes a problem by iteratively trying to improve a population of candidate solutions, here dubbed particles, moving them around the search space according to simple mathematical rules over each particle's position and velocity. Each particle's movement is influenced by its local best-known position but is also guided toward the best-known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
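
The position-and-velocity update described above can be sketched as follows. This is a hedged, minimal implementation; the inertia weight `w` and acceleration coefficients `c1`, `c2` are common textbook values, not values prescribed by the original papers.

```python
import random

def pso(f, dim, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizing f over [lo, hi]^dim. Each particle has a
    position and velocity; velocities are pulled toward the particle's
    personal best (weight c1) and the swarm's global best (weight c2),
    damped by inertia w."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:          # update the personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # ...and possibly the global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the 2-D sphere function.
random.seed(0)
gbest, gbest_val = pso(lambda p: sum(x * x for x in p), dim=2, bounds=(-5.0, 5.0))
```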

PSO is originally attributed to Kennedy, Eberhart, and Shi and was first intended to simulate social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was then simplified, and it was observed to perform optimization. The book by Kennedy and Eberhart[7] describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications was made by Poli.[8][9] A comprehensive review of theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.[10]

2000s

Harmony search (HS) (Geem, Kim & Loganathan, 2001)

Harmony search is a phenomenon-mimicking metaheuristic introduced in 2001 by Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan[11] and is inspired by the improvisation process of jazz musicians. In the HS algorithm, a set of possible solutions is randomly generated (called the harmony memory). A new solution is generated by using all the solutions in the harmony memory (rather than just two, as in genetic algorithms), and if this new solution is better than the worst solution in the harmony memory, the worst solution is replaced by the new one. The effectiveness and advantages of HS have been demonstrated in various applications such as the design of municipal water distribution networks,[12] structural design,[13] the load dispatch problem in electrical engineering,[14] multi-objective optimization,[15] rostering problems,[16] clustering,[17] and classification and feature selection.[18][19] Detailed surveys on applications of HS are available,[20][21] as is a survey of HS applications in data mining.[22]
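
The generate-and-replace loop described above can be sketched as a minimal implementation. The standard HS parameter names are used (harmony memory size `hms`, memory-considering rate `hmcr`, pitch-adjusting rate `par`, bandwidth `bw`), but the specific values chosen here are illustrative.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, n_iters=2000):
    """Minimal harmony search minimizing f over [lo, hi]^dim. Each
    variable of a new harmony is drawn from harmony memory with
    probability hmcr (then pitch-adjusted by up to +/-bw with
    probability par), otherwise drawn at random; the new harmony
    replaces the worst member of memory if it is better."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    values = [f(h) for h in memory]
    for _ in range(n_iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                x = random.choice(memory)[d]      # memory consideration
                if random.random() < par:
                    x += random.uniform(-bw, bw)  # pitch adjustment
            else:
                x = random.uniform(lo, hi)        # random selection
            new.append(min(hi, max(lo, x)))
        val = f(new)
        worst = max(range(hms), key=lambda i: values[i])
        if val < values[worst]:                   # replace the worst harmony
            memory[worst], values[worst] = new, val
    best = min(range(hms), key=lambda i: values[i])
    return memory[best], values[best]

# Toy usage: minimize the 2-D sphere function.
random.seed(0)
best, best_val = harmony_search(lambda p: sum(x * x for x in p),
                                dim=2, bounds=(-5.0, 5.0))
```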

Dennis (2015) claimed that harmony search is a special case of the evolution strategies algorithm.[23] However, Saka et al. (2016) argue that the structure of evolution strategies is different from that of harmony search.[24]

Artificial bee colony algorithm (Karaboga, 2005)

The artificial bee colony algorithm is a metaheuristic introduced by Karaboga in 2005[25] that simulates the foraging behaviour of honey bees. Its proponents credit it with good exploration and exploitation ability.

Bees algorithm (Pham, 2005)

The bees algorithm was formulated by Pham and his co-workers in 2005[26] and mimics the foraging behaviour of honey bees. The algorithm combines global explorative search with local exploitative search: a small number of artificial bees (scouts) randomly explores the solution space (environment) for solutions of high fitness (highly profitable food sources), while the bulk of the population searches (harvests) the neighbourhood of the fittest solutions, looking for the fitness optimum. A deterministic recruitment procedure which simulates the waggle dance of biological bees is used to communicate the scouts' findings to the foragers and to distribute the foragers depending on the fitness of the neighbourhoods selected for local search. Once the search in the neighbourhood of a solution stagnates, the local fitness optimum is considered found, and the site is abandoned.

Imperialist competitive algorithm (Atashpaz-Gargari & Lucas, 2007)

The imperialist competitive algorithm (ICA), like most methods in the area of evolutionary computation, does not need the gradient of the function in its optimization process. ICA can be thought of as the social counterpart of genetic algorithms (GAs): it is a mathematical model and computer simulation of human social evolution, while GAs are based on the biological evolution of species.

This algorithm starts by generating a set of random candidate solutions in the search space of the optimization problem. These random points are called the initial countries. Countries are the counterpart of chromosomes in GAs and particles in particle swarm optimization: each country is an array of values of a candidate solution of the optimization problem. The cost function of the optimization problem determines the power of each country. Based on their power, some of the best initial countries (those with the lowest cost function value) become imperialists and start taking control of other countries (called colonies), forming the initial empires.[28]

Two main operators of this algorithm are Assimilation and Revolution. Assimilation makes the colonies of each empire get closer to the imperialist state in the space of socio-political characteristics (optimization search space). Revolution brings about sudden random changes in the position of some of the countries in the search space. During assimilation and revolution, a colony might reach a better position and then have a chance to take the control of the entire empire and replace the current imperialist state of the empire.[29]

Imperialistic Competition is another part of this algorithm. All the empires try to win this game and take possession of colonies of other empires. In each step of the algorithm, based on their power, all the empires have a chance to take control of one or more of the colonies of the weakest empire.[28]

The algorithm continues with the mentioned steps (Assimilation, Revolution, Competition) until a stop condition is satisfied.

The above steps can be summarized as the below pseudocode:[30][29]

0) Define the objective function.
1) Initialization: generate random solutions in the search space and create the initial empires.
2) Assimilation: colonies move towards their imperialist states in different directions.
3) Revolution: random changes occur in the characteristics of some countries.
4) Position exchange between a colony and its imperialist: a colony with a better position than its imperialist takes control of the empire by replacing the existing imperialist.
5) Imperialistic competition: all imperialists compete to take possession of each other's colonies.
6) Elimination: powerless empires gradually lose their power and are finally eliminated.
7) If the stop condition is satisfied, stop; if not, go to 2.
8) End
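
The steps above can be sketched as a simplified Python implementation. This is a hedged illustration, not the authors' reference code: imperialistic competition and empire elimination (steps 5-6) are omitted for brevity, colonies are assigned to empires round-robin rather than by power, and the assimilation coefficient `beta` and revolution rate are arbitrary illustrative choices.

```python
import random

def ica(f, dim, bounds, n_countries=50, n_imperialists=5, n_iters=200,
        beta=2.0, rev_rate=0.1):
    """Minimal ICA sketch minimizing f over [lo, hi]^dim. The best
    initial countries become imperialists; every other country (colony)
    assimilates toward its imperialist, occasionally 'revolts' to a
    random position, and swaps roles with its imperialist when it finds
    a better position."""
    lo, hi = bounds
    countries = [[random.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_countries)]
    countries.sort(key=f)  # lowest cost = most powerful
    imperialists = countries[:n_imperialists]
    colonies = countries[n_imperialists:]
    empire_of = [c % n_imperialists for c in range(len(colonies))]  # round-robin
    for _ in range(n_iters):
        for c in range(len(colonies)):
            imp = imperialists[empire_of[c]]
            if random.random() < rev_rate:    # revolution: sudden random change
                colonies[c] = [random.uniform(lo, hi) for _ in range(dim)]
            else:                             # assimilation: move toward imperialist
                colonies[c] = [x + beta * random.random() * (ix - x)
                               for x, ix in zip(colonies[c], imp)]
            if f(colonies[c]) < f(imp):       # position exchange (step 4)
                imperialists[empire_of[c]], colonies[c] = colonies[c], imp
    best = min(imperialists + colonies, key=f)
    return best, f(best)

# Toy usage: minimize the 2-D sphere function.
random.seed(0)
best, best_val = ica(lambda p: sum(x * x for x in p), dim=2, bounds=(-5.0, 5.0))
```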

River formation dynamics (Rabanal, Rodríguez & Rubio, 2007)

River formation dynamics is based on imitating how water forms rivers by eroding the ground and depositing sediments (the drops act as the swarm). After the drops transform the landscape by increasing or decreasing the altitude of places, solutions are given in the form of paths of decreasing altitude. Decreasing gradients are constructed, and subsequent drops follow these gradients to compose new gradients and reinforce the best ones. This heuristic optimization method was proposed in 2007 by Rabanal et al.[31] The applicability of RFD to other NP-complete problems has been studied,[32] and the algorithm has been applied to fields such as routing[33] and robot navigation.[34] The main applications of RFD are surveyed in Rabanal et al. (2017).[35]

Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi, 2009)

The gravitational search algorithm (GSA) is based on Newton's law of gravity and the notion of mass interactions. Its search agents are a collection of masses: the position of each agent corresponds to a solution of the problem, and its mass is determined using a fitness function. Over time, the masses are attracted by the heaviest mass, which ideally corresponds to an optimum in the search space. The GSA can be considered a small artificial world of masses obeying the Newtonian laws of gravitation and motion.[37] A multi-objective variant of GSA, called MOGSA, was proposed by Hassanzadeh et al. in 2010.[38]

2010s

Bat algorithm (Yang, 2010)

The bat algorithm is a swarm-intelligence-based algorithm inspired by the echolocation behavior of microbats. It automatically balances exploration (long-range jumps around the global search space to avoid getting stuck around one local maximum) with exploitation (searching in more detail around known good solutions to find local maxima) by controlling the loudness and pulse-emission rates of simulated bats in the multi-dimensional search space.[39]

Spiral optimization (SPO) algorithm (Tamura & Yasuda 2011, 2016-2017)


The spiral optimization algorithm, inspired by spiral phenomena in nature, is a multipoint search algorithm that uses no gradient information about the objective function. It uses multiple spiral models that can be described as deterministic dynamical systems. As search points follow logarithmic spiral trajectories towards the common center, defined as the current best point, better solutions can be found and the common center can be updated.[40]
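
A minimal two-dimensional sketch of this update follows. It is illustrative rather than the authors' formulation: the contraction rate `r` and rotation angle `theta` are arbitrary choices, and only the 2-D rotation case is shown.

```python
import math
import random

def spiral_optimization(f, n_points=20, n_iters=200, r=0.95, theta=math.pi / 4,
                        lo=-5.0, hi=5.0):
    """Minimal 2-D spiral optimization sketch minimizing f. Every search
    point is rotated by theta about the current best point (the common
    center) and contracted toward it by factor r < 1, tracing a
    logarithmic spiral; the center moves whenever a better point is
    found."""
    pts = [(random.uniform(lo, hi), random.uniform(lo, hi))
           for _ in range(n_points)]
    center = min(pts, key=f)
    for _ in range(n_iters):
        cx, cy = center
        new_pts = []
        for x, y in pts:
            dx, dy = x - cx, y - cy
            # Rotate by theta, then contract by r: one logarithmic-spiral step.
            nx = cx + r * (dx * math.cos(theta) - dy * math.sin(theta))
            ny = cy + r * (dx * math.sin(theta) + dy * math.cos(theta))
            new_pts.append((nx, ny))
        pts = new_pts
        best = min(pts, key=f)
        if f(best) < f(center):
            center = best  # update the common center
    return center, f(center)

# Toy usage: minimize the 2-D sphere function.
random.seed(0)
center, center_val = spiral_optimization(lambda p: p[0] ** 2 + p[1] ** 2)
```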

Artificial swarm intelligence (Rosenberg, 2014)

Artificial swarm intelligence is a real-time closed-loop system of human users connected over the internet and structured in a framework modeled after natural swarms such that it evokes the group's collective wisdom as a unified emergent intelligence.[41][42] In this way, human swarms can answer questions, make predictions, reach decisions, and solve problems by collectively exploring a diverse set of options and converging on preferred solutions in synchrony. Invented by Dr. Louis Rosenberg in 2014, the ASI methodology has been noted for its ability to make accurate collective predictions that outperform the individual members of the swarm.[43] In 2016, an Artificial Swarm Intelligence group from Unanimous A.I. was challenged by a reporter to predict the winners of the Kentucky Derby; it successfully picked the first four horses, in order, beating 540 to 1 odds.[44][45]

Self-tuning metaheuristics

Self-tuning metaheuristics have emerged in recent years as a significant advancement in the field of optimization algorithms, since fine-tuning an algorithm's parameters can be a long and difficult process.[46] These algorithms differentiate themselves by their ability to autonomously adjust their parameters in response to the problem at hand, enhancing efficiency and solution quality. This self-tuning capability is particularly important in complex optimization scenarios where traditional methods may struggle because of rigid parameter settings. Along these lines, a PSO variant has been introduced that adopts fuzzy logic to automatically calculate the parameters of each particle,[47] as has flying fox optimization, a fuzzy self-tuning optimizer.[48]

The advent of self-tuning variants in metaheuristics marks a pivotal shift towards more autonomous optimization tools. These self-tuning algorithms significantly reduce the need for expert intervention in parameter tuning, a process requiring extensive domain knowledge. By leveraging fuzzy logic and other adaptive mechanisms, they can intelligently adjust their parameters in response to the problem's characteristics and search-space dynamics. This autonomy not only simplifies the optimization process but also broadens the applicability of these algorithms, making them more accessible and effective for a wider range of users and complex problems. The ability of self-tuning metaheuristics to perform effectively without careful tuning by the user represents a considerable advancement in making optimization more user-friendly and efficient.

Criticism of the metaphor methodology

While individual metaphor-inspired metaheuristics have produced remarkably effective solutions to specific problems,[49] metaphor-inspired metaheuristics in general have attracted criticism among researchers for hiding their lack of effectiveness or novelty behind elaborate metaphors.[49][50] Kenneth Sörensen noted:[51]

In recent years, the field of combinatorial optimization has witnessed a true tsunami of "novel" metaheuristic methods, most of them based on a metaphor of some natural or man-made process. The behavior of virtually any species of insects, the flow of water, musicians playing together – it seems that no idea is too far-fetched to serve as inspiration to launch yet another metaheuristic. [I] will argue that this line of research is threatening to lead the area of metaheuristics away from scientific rigor.

Sörensen and Glover stated:[52]

A large (and increasing) number of publications focuses on the development of (supposedly) new metaheuristic frameworks based on metaphors. The list of natural or man-made processes that has been used as the basis for a metaheuristic framework now includes such diverse processes as bacterial foraging, [...] frogs, salmon, vultures, termites, flies, and many others, all of which have been used to inspire a "novel" metaheuristic. [...] As a general rule, publication of papers on metaphor-based metaheuristics has been limited to second-tier journals and conferences, but some recent exceptions to this rule can be found. Sörensen (2013) states that research in this direction is fundamentally flawed. Most importantly, the author contends that the novelty of the underlying metaphor does not automatically render the resulting framework "novel". On the contrary, there is increasing evidence that very few of the metaphor-based methods are new in any interesting sense.

In response, Springer's Journal of Heuristics has updated their editorial policy to state:[53]

Proposing new paradigms is only acceptable if they contain innovative basic ideas, such as those that are embedded in classical frameworks like genetic algorithms, tabu search, and simulated annealing. [...] leapfrogs, kangaroos, all types of swarms and insects and even mine blast processes (Sörensen, 2013). If a researcher uses a metaphor to stimulate his or her own ideas about a new method, the method must nevertheless be translated into metaphor-free language, so that the strategies employed can be clearly understood, and their novelty is made clearly visible. (See items 2 and 3 below.) Metaphors are cheap and easy to come by. Their use to "window dress" a method is not acceptable.

[...] Implementations should be explained by employing standard optimization terminology, where a solution is called a "solution" and not something else related to some obscure metaphor (e.g., harmony, flies, bats, countries, etc.).

[...] The Journal of Heuristics fully endorses Sörensen's view that metaphor-based “novel” methods should not be published if they cannot demonstrate a contribution to their field. Renaming existing concepts does not count as a contribution. Even though these methods are often called “novel”, many present no new ideas, except for the occasional marginal variant of an already existing methodology. These methods should not take the journal space of truly innovative ideas and research. Since they do not use the standard optimization vocabulary, they are unnecessarily difficult to understand.

The policy of 4OR - A Quarterly Journal of Operations Research states:[54]

The emphasis on scientific rigor and on innovation implies, in particular, that the journal does not publish articles that simply propose disguised variants of known methods without adequate validation (e.g., metaheuristics that are claimed to be "effective" on the sole basis of metaphorical comparisons with natural or artificial systems and processes). New methods must be presented in metaphor-free language by establishing their relationship with classical paradigms. Their properties must be established on the basis of scientifically compelling arguments: mathematical proofs, controlled experiments, objective comparisons, etc.

Notes

  1. .
  2. ^ M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.[page needed]
  3. S2CID 63137.
  4. .
  5. .
  6. .
  7. ^ Kennedy, J.; Eberhart, R.C. (2001). Swarm Intelligence. Morgan Kaufmann. .
  8. ^ Poli, R. (2007). "An analysis of publications on particle swarm optimisation applications" (PDF). Technical Report CSM-469. Department of Computer Science, University of Essex, UK. Archived from the original (PDF) on 2011-07-16. Retrieved 2016-08-31.
  9. .
  10. .
  11. .
  12. .
  13. .
  14. .
  15. .
  16. .
  17. .
  18. .
  19. .
  20. ^ "Harmony Search Algorithm". sites.google.com. Retrieved 23 April 2022.
  21. .
  22. .
  23. .
  24. .
  25. .
  26. ^ Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.[page needed]
  27. S2CID 111315200.
  28. ^ .
  29. ^ .
  30. .
  31. .
  32. .
  33. .
  34. .
  35. .
  36. .
  37. .
  38. .
  39. .
  40. .
  41. .
  42. ^ Reese, Hope (Jan 22, 2016). "How 'artificial swarm intelligence' uses people to make better predictions than experts".
  43. S2CID 15166767.
  44. ^ Cuthbertson, Anthony (10 May 2016). "Artificial intelligence turns $20 into $11,000 in Kentucky Derby bet". Newsweek. Retrieved 23 April 2022.
  45. ^ Ohlheiser, Abby (2 June 2016). "What happened when an A.I. hive mind answered Reddit's burning politics questions". Washington Post. Retrieved 23 April 2022.
  46. ISSN 1089-778X.
  47. .
  48. .
  49. ^ a b Alexander Brownlee and John R. Woodward (2015). "Why we fell out of love with algorithms inspired by nature". The Conversation.
  50. [...] 'Evolution Strategies' even though the metaphors that inspired them were quite different. Formally describing state, representation, and operators allows genuine novelty to be distinguished from minor variation."
  51. S2CID 14042315.
  52. ^ Fred Glover and Kenneth Sörensen (ed.). "Metaheuristics". Scholarpedia.
  53. ^ "Journal of Heuristic Policies on Heuristic Search Research" (PDF). www.springer.com. Archived from the original (PDF) on 9 July 2017.
  54. ^ "4OR – incl. Option to publish open access". www.springer.com. Retrieved 23 April 2022.
