Computational irreducibility
Computational irreducibility suggests that certain computational processes cannot be simplified, and that the only way to determine the outcome of such a process is to go through each step of its computation. It is one of the main ideas proposed by Stephen Wolfram in his 2002 book A New Kind of Science, although the concept goes back to studies from the 1980s.[1]
The idea
Wolfram states that many natural phenomena are normally computationally irreducible.[2]
Computational irreducibility explains why many natural systems are hard to predict or simulate. The Principle of Computational Equivalence implies that these systems are as computationally powerful as any designed computer.
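The claim is easiest to see in Wolfram's canonical example, the Rule 30 cellular automaton: no general closed-form shortcut for its evolution is known, so the state at step t is obtained by computing every intermediate step. Below is a minimal sketch (Python; the names are illustrative and not from the book), assuming a one-dimensional automaton on a ring of cells:

```python
# Minimal sketch (illustrative): step-by-step evolution of Rule 30.
# No general closed-form shortcut for Rule 30 is known, so reaching
# step t means computing all t updates.

def rule30_step(cells):
    """One synchronous Rule 30 update: new cell = left XOR (center OR right)."""
    n = len(cells)
    return tuple(
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # periodic boundary
        for i in range(n)
    )

if __name__ == "__main__":
    width = 63
    # Start from a single black cell in the middle, as in A New Kind of Science.
    state = tuple(1 if i == width // 2 else 0 for i in range(width))
    for t in range(16):
        print("".join("#" if c else "." for c in state))
        state = rule30_step(state)  # the only known route to step t+1
```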
Implications

Analysis

Navot Israeli and Nigel Goldenfeld found that some less complex systems behaved simply and predictably, and thus allowed approximations. More complex systems, however, were still computationally irreducible and unpredictable. It is unknown what conditions would allow complex phenomena to be described simply and predictably; a code sketch contrasting a reducible rule with an irreducible one appears at the end of this section.

Compatibilism

Marius Krumm and Markus P. Müller tie computational irreducibility to compatibilism.[3] They refine the concept via the intermediate requirement of computational sourcehood, which demands an essentially full and almost-exact representation of the features associated with the problem or process being represented, together with a full, no-shortcut computation. The approach simplifies the issue via the "no shortcuts" metaphor: as in cooking, all of the ingredients in a recipe are required, and the cooking schedule must be followed, to obtain the desired end product. This parallels the profound distinction between similarity and identity.
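By contrast with Rule 30 above, some rules are computationally reducible. A classic case is Rule 90, whose single-seed evolution is Pascal's triangle mod 2, so the state at step t follows from a closed form (via Lucas' theorem) without simulating steps 1 through t-1. The sketch below (Python; illustrative, and not the coarse-graining construction used by Israeli and Goldenfeld) checks the shortcut against the step-by-step route:

```python
# Minimal sketch (illustrative): Rule 90 is computationally reducible.
# From a single 1, the cell at offset j after t steps equals
# C(t, (t+j)/2) mod 2, computable directly via Lucas' theorem.

def rule90_step(cells):
    """One synchronous Rule 90 update: new cell = left XOR right."""
    n = len(cells)
    return tuple(cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n))

def rule90_closed_form(t, j):
    """Single-seed state at time t, offset j: 1 iff t+j is even, |j| <= t,
    and C(t, (t+j)//2) is odd (Lucas: k's bits must be a subset of t's)."""
    if abs(j) > t or (t + j) % 2 != 0:
        return 0
    k = (t + j) // 2
    return 1 if (k & ~t) == 0 else 0

if __name__ == "__main__":
    t = 20
    width = 2 * t + 3  # wide enough that the periodic boundary never matters
    state = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(t):
        state = rule90_step(state)  # the slow, step-by-step route
    shortcut = tuple(rule90_closed_form(t, i - width // 2) for i in range(width))
    assert shortcut == state        # the closed form agrees with simulation
    print("closed form matches simulation at t =", t)
```

The contrast is the point: Rule 90's additive structure admits a shortcut, while for rules like Rule 30 no such reduction is known.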
See also

External links and references
References