AI-complete
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.[2]
Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation.
History
The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory.
AI-complete problems
AI-complete problems are hypothesized to include:
- AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system)
- Bongard problems[8]
- Computer vision (and subproblems such as object recognition)[9]
- Natural language understanding (and subproblems such as text mining,[10] machine translation,[11] and word-sense disambiguation[12])
- Autonomous driving[13]
- Dealing with unexpected circumstances while solving any real-world problem, whether it is navigation or planning or the kind of reasoning done by expert systems.[citation needed]
Software brittleness
Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real-world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of the original problem context arise.
Formalization
Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterised formally. Since many AI problems have no formalisation yet, conventional complexity theory does not allow the definition of AI-completeness.
To address this problem, a complexity theory for AI has been proposed.[16] It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by computer and the other part solved by human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility which in turn allows equivalence classes to be defined.
The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair ⟨Φ_H, Φ_M⟩, where the first element represents the complexity of the human's part and the second element is the complexity of the machine's part.
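As a minimal sketch of the idea (illustrative only, not the formalism of the cited paper: the function names and the toy oracle are assumptions), the cost of a human-assisted computation can be tracked as a pair of counters, one for oracle queries charged to the human and one for steps charged to the machine:

```python
# Illustrative model of a human-assisted computation: work is split
# between a machine part and a human oracle, and the total cost is
# reported as the pair (human_cost, machine_cost).
from typing import Callable, List, Tuple


def human_assisted_run(
    items: List[str],
    human_oracle: Callable[[str], bool],
) -> Tuple[List[str], Tuple[int, int]]:
    """Keep the items the oracle accepts; return (result, (human_cost, machine_cost))."""
    human_cost = 0
    machine_cost = 0
    accepted = []
    for item in items:
        machine_cost += 1  # the machine enumerates candidates
        human_cost += 1    # each oracle query is charged to the human
        if human_oracle(item):
            accepted.append(item)
    return accepted, (human_cost, machine_cost)


# Stand-in oracle: in a real protocol this would be a person, e.g.
# confirming proposed image labels.
labels, cost = human_assisted_run(
    ["cat", "dog", "car"], human_oracle=lambda s: s.startswith("c")
)
```

Here both components of the cost pair grow linearly in the input size, since the machine makes one pass and consults the human once per item; other protocols shift more of the burden to one side or the other.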
Results
The following problems have been analysed in this framework with a human-assisted Turing machine:[16]
- Optical character recognition for printed text
- Turing test:
- for an n-sentence conversation where the oracle remembers the conversation history (persistent oracle)
- for an n-sentence conversation where the conversation history must be retransmitted
- for an n-sentence conversation where the conversation history must be retransmitted and the person takes linear time to read the query
- ESP game
- Image labelling (based on the Arthur–Merlin protocol)
- Image classification, both by the human alone and with less reliance on the human
Research
Yampolskiy[17] suggests that a problem is AI-Complete if it has two properties:
- It is in the set of AI problems (Human Oracle-solvable).
- Any AI problem can be converted into it by some polynomial time algorithm.
On the other hand, a problem is AI-Hard if and only if there is an AI-Complete problem that is polynomial-time Turing-reducible to it. This in turn implies the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some problem.
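Under the assumption that "AI" denotes the set of Human-Oracle-solvable problems and ≤_P denotes polynomial-time Turing reducibility, these definitions can be transcribed compactly (the symbols below are chosen for illustration, not taken from the paper):

```latex
% C, H, E range over problems; AI is the set of Human-Oracle-solvable
% problems; \le_P is polynomial-time Turing reducibility.
\begin{align*}
C \text{ is AI-Complete} &\iff C \in \mathrm{AI}
  \ \wedge\ \forall A \in \mathrm{AI}:\ A \le_P C,\\
H \text{ is AI-Hard} &\iff \exists C \text{ AI-Complete}:\ C \le_P H,\\
E \text{ is AI-Easy} &\iff E \in \mathrm{P}^{\mathcal{O}}
  \text{ for an oracle } \mathcal{O} \text{ for some problem.}
\end{align*}
```

The structure deliberately mirrors the NP-complete/NP-hard definitions from classical complexity theory, with the Human Oracle playing the role of the nondeterministic resource.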
Yampolskiy[18] has also proposed the Turing test as a defining feature of AI-completeness.
Groppe and Jain[19] classify problems which require artificial general intelligence to reach human-level machine performance as AI-complete, while only restricted versions of AI-complete problems can be solved by the current AI systems. For Šekrst,[20] getting a polynomial solution to AI-complete problems would not necessarily be equal to solving the issue of strong AI, while emphasizing the lack of computational complexity research being the limiting factor towards achieving artificial general intelligence.
For Kwee-Bintoro and Velez,[21] solving AI-complete problems would have strong repercussions on society.
See also
References
- ^ Shapiro, Stuart C. (1992). Artificial Intelligence Archived 2016-02-01 at the Wayback Machine In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
- ^ Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf Archived 2013-05-22 at the Wayback Machine
- ^ Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. CAPTCHA: Using Hard AI Problems for Security Archived 2016-03-04 at the Wayback Machine. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.
- ^ Mallery, John C. (1988), "Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers", The 1988 Annual Meeting of the International Studies Association., St. Louis, MO, archived from the original on 2008-02-29, retrieved 2007-04-27
- ^ Mueller, Erik T. (1987, March). Daydreaming and Computation (Technical Report CSD-870017) Archived 2020-10-30 at the Wayback Machine. PhD dissertation, University of California, Los Angeles. ("Daydreaming is but one more AI-complete problem: if we could solve any one artificial intelligence problem, we could solve all the others", p. 302)
- ^ Raymond, Eric S. (1991, March 22). Jargon File Version 2.8.1 Archived 2011-06-04 at the Wayback Machine (Definition of "AI-complete" first added to jargon file.)
- ^ Ide, N.; Veronis, J. (1998). "Introduction to the special issue on word sense disambiguation: the state of the art" (PDF). Computational Linguistics. 24 (1): 2–40. Archived (PDF) from the original on 2022-10-09.
- ^ Musk, Elon (April 14, 2022). "Elon Musk talks Twitter, Tesla and how his brain works — live at TED2022". TED (conference) (Interview). Interviewed by Chris Anderson. Vancouver. Archived from the original on December 15, 2022. Retrieved December 15, 2022.
- ^ Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley, pp. 1–5
- ^ "A Generalist Agent". www.deepmind.com. Archived from the original on 2022-08-02. Retrieved 2022-05-26.
- ^ a b Dafna Shahaf and Eyal Amir (2007) Towards a theory of AI completeness Archived 2020-11-07 at the Wayback Machine. Commonsense 2007, 8th International Symposium on Logical Formalizations of Commonsense Reasoning Archived 2021-01-19 at the Wayback Machine.
- ^ Yampolskiy, Roman (2012), "AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI" (PDF), 23rd Midwest Artificial Intelligence and Cognitive Science Conference, MAICS 2012, Cincinnati, Ohio, USA, 21-22 April 2012, retrieved 2024-04-05