Memetic algorithm

Source: Wikipedia, the free encyclopedia.

In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm (GA) or, more generally, an evolutionary algorithm (EA). It may provide a sufficiently good solution to an optimization problem. It uses a suitable heuristic or local search technique to improve the quality of solutions generated by the EA and to reduce the likelihood of premature convergence.[1]

Memetic algorithms represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. Quite often, MAs are also referred to in the literature as Baldwinian evolutionary algorithms (EAs), Lamarckian EAs, cultural algorithms, or genetic local search.

Introduction

Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term memetic algorithm (MA) was introduced by Pablo Moscato in his technical report[2] in 1989, where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain-specific (local search) heuristics are captured within memetic algorithms, thus rendering a methodology that balances well between generality and problem specificity. This two-stage nature makes them a special case of dual-phase evolution.

In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general, converging to high-quality solutions more efficiently than their conventional evolutionary counterparts.[3]

In general, using the ideas of memetics within a computational framework is called memetic computing or memetic computation (MC).[4][5] With MC, the traits of universal Darwinism are more appropriately captured. Viewed in this perspective, MA is a more constrained notion of MC. More specifically, MA covers one area of MC, in particular dealing with areas of evolutionary algorithms that marry other deterministic refinement techniques for solving optimization problems. MC extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.

Theoretical background

The no-free-lunch theorems of optimization and search[6][7] state that all optimization strategies are equally effective with respect to the set of all optimization problems. Conversely, this means that one can expect the following: The more efficiently an algorithm solves a problem or class of problems, the less general it is and the more problem-specific knowledge it builds on. This insight leads directly to the recommendation to complement generally applicable metaheuristics with application-specific methods or heuristics,[8] which fits well with the concept of MAs.

The development of MAs

1st generation

Pablo Moscato characterized an MA as follows: "Memetic algorithms are a marriage between a population-based global search and the heuristic local search made by each of the individuals. ... The mechanisms to do local search can be to reach a local optimum or to improve (regarding the objective cost function) up to a predetermined level." And he emphasizes "I am not constraining an MA to a genetic representation."[9] Although this original definition of MA encompasses characteristics of cultural evolution (in the form of local refinement) in the search cycle, it may not qualify as a true evolving system according to universal Darwinism, since all the core principles of inheritance/memetic transmission, variation, and selection are missing. This suggests why the term MA stirred up criticisms and controversies among researchers when it was first introduced.[2] The following pseudo code corresponds to this general definition of an MA:

Pseudo code
   Procedure Memetic Algorithm
   Initialize: Generate an initial population, evaluate the individuals and assign a quality value to them;
   while Stopping conditions are not satisfied do
       Evolve a new population using stochastic search operators.
       Evaluate all individuals in the population and assign a quality value to them.
       Select the subset of individuals, Ω_il, that should undergo the individual improvement procedure.
       for each individual in Ω_il do
           Perform individual learning using meme(s) with frequency or probability of f_il, with an intensity of t_il.
           Proceed with Lamarckian or Baldwinian learning.
       end for
   end while

Lamarckian learning in this context means to update the chromosome according to the improved solution found by the individual learning step, while Baldwinian learning leaves the chromosome unchanged and uses only the improved fitness. This pseudo code leaves open which steps are based on the fitness of the individuals and which are not. In question are the evolving of the new population and the selection of the subset Ω_il.
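For illustration, the difference between the two learning modes can be stated in a few lines of Python. The sketch below is purely illustrative; local_search and fitness are hypothetical problem-specific helpers, and the fitness is assumed to be maximized:

    def apply_individual_learning(individual, fitness, local_search, lamarckian=True):
        # individual:   dict with keys "chromosome" and "fitness"
        # fitness:      objective function, higher is better (assumed)
        # local_search: returns an improved chromosome (hypothetical helper)
        improved = local_search(individual["chromosome"])
        improved_fitness = fitness(improved)
        if improved_fitness > individual["fitness"]:
            individual["fitness"] = improved_fitness      # Baldwinian: fitness only
            if lamarckian:
                individual["chromosome"] = improved       # Lamarckian: genotype as well
        return individual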

Since most MA implementations are based on EAs, the pseudo code of a corresponding representative of the first generation is also given here, following Krasnogor:[10]

Pseudo code
   Procedure Memetic Algorithm Based on an EA
   Initialization: t := 0;  // Initialization of the generation counter
                   Randomly generate an initial population P(t);
                   Compute the fitness f(p) for each individual p in P(t);
   while Stopping conditions are not satisfied do
       Selection: According to f(p), choose a subset of P(t) and store it in the mating pool P'(t);
       Offspring: Recombine and mutate the individuals of P'(t) and store them in P''(t);
       Learning: Improve P''(t) by local search or a heuristic;
       Evaluation: Compute the fitness f(p) for each individual p in P''(t);
       if Lamarckian learning then
          Update the chromosome of each improved individual p in P''(t) according to its improvement;
       fi
       New generation: Generate P(t+1) by selecting some individuals from P(t) and P''(t);
       t := t + 1;  // Increment the generation counter
   end while
   Return the best individual of P(t) as result;
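This scheme can also be stated in ordinary code. The following Python sketch is one possible reading of the pseudo code above, not a reference implementation; init, fitness, vary and local_search are hypothetical problem-specific callables, and maximization of the fitness is assumed:

    import random

    def memetic_algorithm(init, fitness, vary, local_search,
                          pop_size=50, generations=100, lamarckian=True):
        # init():          returns a random chromosome (hypothetical placeholder)
        # fitness(c):      objective value of chromosome c, to be maximized
        # vary(p1, p2):    recombines/mutates two parents into one child
        # local_search(c): returns an improved copy of chromosome c (the "meme")
        population = [init() for _ in range(pop_size)]
        scores = [fitness(p) for p in population]

        for _ in range(generations):
            def pick():
                # Selection: binary tournament on the current population
                a, b = random.sample(range(pop_size), 2)
                return population[a] if scores[a] >= scores[b] else population[b]

            offspring, off_scores = [], []
            for _ in range(pop_size):
                child = vary(pick(), pick())           # Offspring
                improved = local_search(child)         # Learning
                off_scores.append(fitness(improved))   # Evaluation of the refined solution
                if lamarckian:
                    child = improved                   # Lamarckian: write the refinement back
                offspring.append(child)                # Baldwinian: keep only the improved fitness

            # New generation: keep the best pop_size individuals of parents + offspring
            merged = sorted(zip(scores + off_scores, population + offspring),
                            key=lambda t: t[0], reverse=True)[:pop_size]
            scores = [s for s, _ in merged]
            population = [p for _, p in merged]

        best = max(range(pop_size), key=lambda i: scores[i])
        return population[best], scores[best]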

There are some alternatives for this MA scheme. For example:

  • All or some of the initial individuals may be improved by the meme(s).
  • The parents may be locally improved instead of the offspring.
  • Instead of all offspring, only a randomly selected or fitness-dependent fraction may undergo local improvement. The latter requires the evaluation of the offspring prior to the Learning step.

2nd generation

Multi-meme,[11] hyper-heuristic[12][13] and meta-Lamarckian MA[14][15] are referred to as second generation MAs, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MA, the memetic material is encoded as part of the genotype. Subsequently, the decoded meme of each respective individual/chromosome is used to perform a local refinement. The memetic material is then transmitted through a simple inheritance mechanism from parent to offspring. In hyper-heuristic and meta-Lamarckian MA, on the other hand, the pool of candidate memes compete, based on their past merits in generating local improvements, through a reward mechanism that decides which meme is selected for future local refinements. Memes with a higher reward have a greater chance of continuing to be used. For a review of second generation MAs, i.e., MAs considering multiple individual learning methods within an evolutionary system, the reader is referred to.[16]
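One simple way to realize such reward-based meme selection is a roulette-wheel choice over the accumulated rewards of the candidate memes. The Python sketch below is illustrative only and assumes maximization; the meme callables are hypothetical placeholders, and the cited second generation MAs use more elaborate credit-assignment schemes:

    import random

    def select_and_apply_meme(solution, fitness, memes, rewards, epsilon=1e-9):
        # memes:   list of local-search callables (hypothetical placeholders)
        # rewards: running reward per meme, credited by the improvement it produced
        # Roulette-wheel choice biased towards memes with higher past rewards
        total = sum(rewards) + epsilon * len(rewards)
        r, acc, chosen = random.uniform(0, total), 0.0, 0
        for i, w in enumerate(rewards):
            acc += w + epsilon
            if r <= acc:
                chosen = i
                break

        before = fitness(solution)
        improved = memes[chosen](solution)
        gain = max(0.0, fitness(improved) - before)   # maximization assumed
        rewards[chosen] += gain                       # credit the meme for its improvement
        return improved, rewards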

3rd generation

Co-evolution[17] and self-generating MAs[18] may be regarded as 3rd generation MA where all three principles satisfying the definitions of a basic evolving system have been considered. In contrast to 2nd generation MA which assumes that the memes to be used are known a priori, 3rd generation MA utilizes a rule-based local search to supplement candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.

Some design notes

The learning method/meme used has a significant impact on the improvement results, so care must be taken in deciding which meme or memes to use for a particular optimization problem.[12][16][19] The frequency and intensity of individual learning directly define the balance between evolution (exploration) and individual learning (exploitation) in the MA search, for a given fixed, limited computational budget. Clearly, more intense individual learning provides a greater chance of convergence to local optima but limits the amount of evolution that can take place without incurring excessive computational resources. Therefore, care should be taken when setting these two parameters to balance the available computational budget in achieving maximum search performance. When only a portion of the population undergoes learning, the issue of which subset of individuals to improve needs to be considered to maximize the utility of MA search. Last but not least, it has to be decided whether the respective individual should be changed by the learning success (Lamarckian learning) or not (Baldwinian learning). Thus, the following five design questions[15][19][20] must be answered, the first of which is addressed by all of the above 2nd generation representatives during an MA run, while the extended form of meta-Lamarckian learning of [15] expands this to the first four design decisions.

Selection of an individual learning method or meme to be used for a particular problem or individual

In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods. Examples include interior point methods, the conjugate gradient method, line search, and other local heuristics. Note that most of the common individual learning methods are deterministic.
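For illustration, an individual learning step for a real-valued chromosome can be as simple as a deterministic coordinate-wise search. The Python sketch below (fitness to be maximized, parameters chosen arbitrarily) merely stands in for any of the methods named above:

    def coordinate_search(x, fitness, step=0.1, sweeps=20):
        # x: list of floats (the chromosome); fitness is to be maximized.
        best = list(x)
        best_f = fitness(best)
        for _ in range(sweeps):
            improved = False
            for i in range(len(best)):
                for delta in (step, -step):
                    candidate = list(best)
                    candidate[i] += delta        # probe one coordinate in one direction
                    f = fitness(candidate)
                    if f > best_f:
                        best, best_f, improved = candidate, f, True
            if not improved:
                step *= 0.5                      # shrink the step size when no move helps
        return best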

In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) that are tailored to a specific problem of interest. Typical heuristic procedures and schemes include the k-gene exchange, edge exchange, first-improvement, and many others.
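As an example of such an edge exchange, the following Python sketch performs one first-improvement 2-opt move on a tour; the representation (a list of city indices and a 2-D distance matrix dist) is an assumption made for illustration:

    def two_opt(tour, dist):
        # tour: list of city indices; dist: 2-D distance matrix.
        # Returns a tour that is no longer than the input tour.
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if it shortens the tour
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    return tour        # first improvement: stop after one exchange
        return tour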

Determination of the individual learning frequency

One of the first issues pertinent to memetic algorithm design is to consider how often the individual learning should be applied; i.e., individual learning frequency. In one case,[19] the effect of individual learning frequency on MA search performance was considered where various configurations of the individual learning frequency at different stages of the MA search were investigated. Conversely, it was shown elsewhere[22] that it may be worthwhile to apply individual learning on every individual if the computational complexity of the individual learning is relatively low.

Selection of the individuals to which individual learning is applied

On the issue of selecting appropriate individuals among the EA population that should undergo individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land[23] extending the work to combinatorial optimization problems. Bambha et al. introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.[24]
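A very simple fitness-based variant of this decision, sketched below in Python, applies learning with a probability that rises with an individual's fitness rank. This is purely illustrative; the cited studies use more elaborate adaptive schemes:

    import random

    def choose_learners(population, scores, base_prob=0.1, top_prob=0.9):
        # The probability of undergoing individual learning rises linearly with
        # fitness rank, so better individuals are refined more often.
        order = sorted(range(len(population)), key=lambda i: scores[i])   # worst .. best
        chosen = []
        for rank, i in enumerate(order):
            p = base_prob + (top_prob - base_prob) * rank / max(1, len(order) - 1)
            if random.random() < p:
                chosen.append(i)
        return chosen   # indices of the individuals to improve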

Specification of the intensity of individual learning

Individual learning intensity, t_il, is the amount of computational budget allocated to an iteration of individual learning; i.e., the maximum computational budget allowable for individual learning to expend on improving a single solution.

Choice of Lamarckian or Baldwinian learning

It must be decided whether a found improvement should take effect only through the better fitness (Baldwinian learning) or whether the individual itself should also be adapted accordingly (Lamarckian learning). In the case of an EA, this would mean an adjustment of the genotype. This question was already controversially discussed for EAs in the 1990s, with the finding that the specific use case plays a major role.[25][26][27] The background of the debate is that genome adaptation may promote premature convergence. This risk can be effectively mitigated by other measures to better balance breadth and depth searches, such as the use of structured populations.[28]

Applications

Memetic algorithms have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also used.

Researchers have used memetic algorithms to tackle many classical combinatorial optimization problems, such as the generalized assignment problem.

More recent applications include (but are not limited to) the clustering of gene expression profiles,[45] feature/gene selection,[46][47] parameter determination for hardware fault injection,[48] and multi-class, multi-objective feature selection.[49][50]

References

  1. .
  2. ^ a b Moscato, Pablo (1989), On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Caltech Concurrent Computation Program, Technical Report 826, Pasadena, CA: California Institute of Technology
  3. ^ S2CID 173187844.
  4. .
  5. .
  6. .
  7. .
  8. .
  9. ^ Moscato, Pablo (1989), On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Caltech Concurrent Computation Program, Technical Report 826, Pasadena, CA: California Institute of Technology, pp. 19–20
  10. ^ Krasnogor, Natalio (2002). Studies on the Theory and Design Space of Memetic Algorithms (PhD). Bristol, UK: University of the West of England. p. 23.
  11. ^ Krasnogor, Natalio (1999). "Coevolution of genes and memes in memetic algorithms". Graduate Student Workshop: 371.
  12. ^ a b Kendall G. and Soubeiga E. and Cowling P. Choice function and random hyperheuristics (PDF). 4th Asia-Pacific Conference on Simulated Evolution and Learning. SEAL 2002. pp. 667–671.
  13. S2CID 3053192.
  14. .
  15. ^ .
  16. ^ .
  17. .
  18. ^ Krasnogor N. & Gustafson S. (2002). "Toward truly "memetic" memetic algorithms: discussion and proof of concepts". Advances in Nature-Inspired Computation: The PPSN VII Workshops. PEDAL (Parallel Emergent and Distributed Architectures Lab). University of Reading.
  19. ^ CiteSeerX 10.1.1.473.1370.
  20. .
  21. .
  22. .
  23. .
  24. .
  25. .
  26. , retrieved 2023-02-07
  27. .
  28. .
  29. .
  30. .
  31. .
  32. .
  33. .
  34. .
  35. .
  36. .
  37. .
  38. .
  39. .
  40. .
  41. .
  42. .
  43. .
  44. .
  45. .
  46. .
  47. ^ "Artificial Intelligence for Fault Injection Parameter Selection | Marina Krček | Hardwear.io Webinar". hardwear.io. Retrieved 2021-05-21.
  48. S2CID 2904028.
  49. .