AI winter
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence").[2] Roger Schank and Marvin Minsky—two leading AI researchers who experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] Three years later the billion-dollar AI industry began to collapse.
There were two major winters, approximately 1974–1980 and 1987–2000,[3] and several smaller episodes, including the following:
- 1966: failure of machine translation
- 1969: criticism of perceptrons (early, single-layer artificial neural networks)
- 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
- 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
- 1973–74: DARPA's cutbacks to academic AI research in general
- 1987: collapse of the LISP machine market
- 1988: cancellation of new spending on AI by the Strategic Computing Initiative
- 1990s: many expert systems were abandoned
- 1990s: end of the Fifth Generation computer project's original goals
Enthusiasm and optimism about AI have generally increased since its low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment, leading to the current (as of 2024) AI boom.
Early episodes
Machine translation and the ALPAC report of 1966
Natural language processing research has its roots in the early 1930s and began with work on machine translation (MT).[4] However, significant advancements and applications began to emerge after the publication of Warren Weaver's influential memorandum in 1949.[5] The memorandum generated great excitement within the research community. In the following years, notable events unfolded: IBM embarked on the development of the first machine translation system, MIT appointed its first full-time professor of machine translation, and several conferences dedicated to MT took place. The culmination came in 1954 with the public demonstration of the IBM-Georgetown machine, which garnered widespread attention in respected newspapers.[6]
As with every AI boom that has been followed by an AI winter, the media tended to exaggerate the significance of these developments. Headlines about the IBM-Georgetown experiment proclaimed phrases like "The bilingual machine", "Robot brain translates Russian into King's English",[7] and "Polyglot brainchild".[8] However, the actual demonstration involved the translation of a curated set of only 49 Russian sentences into English, with the machine's vocabulary limited to just 250 words.[6] To put things into perspective, a 2006 study by Paul Nation found that comprehending written texts requires knowing around 98% of their words, which corresponds to a vocabulary of roughly 8,000 to 9,000 word families.[9]
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. Another factor that propelled the field of mechanical translation was the interest shown by the Central Intelligence Agency (CIA). During that period, the CIA firmly believed in the importance of developing machine translation capabilities and supported such initiatives. They also recognized that this program had implications that extended beyond the interests of the CIA and the intelligence community.[6]
At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process, and there were "many predictions of imminent 'breakthroughs'".[10] However, researchers had underestimated the profound difficulty of word-sense disambiguation: to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An apocryphal example is "the spirit is willing but the flesh is weak"; translated back and forth with Russian, it became "the vodka is good but the meat is rotten".[11]
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. It concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support.[10]
Machine translation followed the same path as natural language processing more broadly, from rule-based approaches through statistical approaches to neural network approaches, which by 2023 had culminated in large language models.
The failure of single-layer neural networks in 1969
Simple networks or circuits of connected units, including Walter Pitts and Warren McCulloch's neural network for logic and Marvin Minsky's SNARC system, failed to deliver the promised results and were abandoned in the late 1950s.[12][13]
Interest in perceptrons, invented by Frank Rosenblatt, was kept alive only by the sheer force of his personality.[14] He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".[15] Mainstream research into perceptrons ended partially because the 1969 book Perceptrons by Marvin Minsky and Seymour Papert emphasized the limits of what perceptrons could do.[16] While it was already known that multilayer perceptrons are not subject to this criticism, nobody in the 1960s knew how to train one; backpropagation was still years away.[17]
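The limitation Minsky and Papert emphasized is concrete: a single-layer perceptron can only learn linearly separable functions, so it can learn AND but can never represent XOR. The following minimal sketch (an illustration written for this article, not code from any cited source) applies the classic perceptron learning rule to both targets:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule on inputs X with 0/1 targets y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred
            w += lr * err * xi  # nudge the weights toward the target
            b += lr * err
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])  # linearly separable: learnable
xor_y = np.array([0, 1, 1, 0])  # not linearly separable: never learnable

for name, y in [("AND", and_y), ("XOR", xor_y)]:
    w, b = train_perceptron(X, y)
    preds = [1 if xi @ w + b > 0 else 0 for xi in X]
    print(name, "predictions:", preds, "targets:", list(y))
# AND converges to the correct outputs after a few epochs; XOR never does,
# because no single line separates {(0,1),(1,0)} from {(0,0),(1,1)}.
```

Adding a hidden layer removes the limitation, but training such a network had to wait for backpropagation, as noted above.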
Major funding for projects using neural network approaches was difficult to find in the 1970s and early 1980s.[18] Important theoretical work continued despite the lack of funding. The "winter" of the neural network approach came to an end in the mid-1980s, when the work of John Hopfield, David Rumelhart and others revived large-scale interest.[19] Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.[15]
The setbacks of 1974
The Lighthill report
In 1973, professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives". He concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions.[20]
The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory.[21] McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning".[22]
The report led to the near-complete dismantling of AI research in the UK.[20] AI research continued at only a few universities (Edinburgh, Essex and Sussex). Research would not revive on a large scale until 1983, when Alvey (a research project of the British government) began to fund AI again from a war chest of £350 million, in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and lost Phase 2 funding.
DARPA's early 1970s funding cuts
During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with few strings attached. J. C. R. Licklider, the founding director of DARPA's computing division, believed in "funding people, not projects", and he and several successors allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon and Allen Newell) to spend the money almost any way they liked.[23]
This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research". Pure undirected research of the kind that had gone on in the 1960s would no longer be funded by DARPA; researchers now had to show that their work would soon produce useful military technology.[24]
AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[25] The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.[26]
While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI[27] and justifying DARPA's pragmatic policy.[28]
The SUR debacle
As described in Engelmore and Morgan's Blackboard Systems:[29]
In 1971, the Defense Advanced Research Projects Agency (DARPA) began an ambitious five-year experiment in speech understanding. The goals of the project were to provide recognition of utterances from a limited vocabulary in near-real time. Three organizations finally demonstrated systems at the conclusion of the project in 1976. These were Carnegie-Mellon University (CMU), who actually demonstrated two systems [HEARSAY-II and HARPY]; Bolt, Beranek and Newman (BBN); and System Development Corporation with Stanford Research Institute (SDC/SRI).
The system that came closest to satisfying the original project goals was the CMU HARPY system. The relatively high performance of the HARPY system was largely achieved through 'hard-wiring' information about possible utterances into the system's knowledge base. Although HARPY made some interesting contributions, its dependence on extensive pre-knowledge limited the applicability of the approach to other signal-understanding tasks.
DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and, in 1974, they cancelled a three million dollar a year contract.[30]
Many years later, several successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models), and the market for speech recognition systems would reach $4 billion by 2001.[31]
For a description of Hearsay-II, see "The Hearsay-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty" and "A Retrospective View of the Hearsay-II Architecture", both of which appear in Blackboard Systems.[32]
Reddy gives a review of progress in speech understanding at the end of the DARPA project in a 1976 article in Proceedings of the IEEE.[33]
Contrary view
Thomas Haigh has argued that there was no "first AI winter" in the 1970s: by measures such as research activity, publication and funding, AI continued to grow throughout the decade, and the "winter" narrative was constructed retrospectively.[34]
The setbacks of the late 1980s and early 1990s
The collapse of the LISP machine market
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success, estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and LISP Machines Inc., which built specialized computers, called LISP machines, that were optimized to process the programming language LISP, the preferred language for AI at the time.[35][36]
In 1987, three years after Minsky and Schank's prediction, the market for specialized LISP-based AI hardware collapsed. Workstations by companies like Sun Microsystems offered a powerful alternative to LISP machines, and by the late 1980s general-purpose desktop computers had become powerful enough to run LISP applications, leaving the expensive specialized machines without a market.[37][38][39]
By the early 1990s, most commercial LISP companies had failed, including Symbolics, LISP Machines Inc. and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. A small number of customer companies (that is, companies using systems written in LISP and developed on LISP machine platforms) continued to maintain systems, in some cases taking on the resulting support work themselves.[40]
Slowdown in deployment of expert systems
By the early 1990s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, and they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), falling prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[41][42]
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access.[43]
The end of the Fifth Generation project
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Its objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met; as with other AI projects, expectations had run much higher than what was actually possible.[44][45]
Strategic Computing Initiative cutbacks
In 1983, in response to the fifth generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed, the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs.
Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally", "eviscerating" SCI. Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise; in his words, DARPA should "surf", rather than "dog paddle", and he felt strongly that AI was not "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including a pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.[46]
AI winter of the 1990s and early 2000s
A survey of reports from the early 2000s suggests that AI's reputation was still poor:
- Alex Castro, quoted in The Economist, 7 June 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."[47]
- Patty Tascarella in Pittsburgh Business Times, 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[48]
- John Markoff in the New York Times, 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[49]
Many researchers in AI in the mid-2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, cognitive systems, intelligent systems or computational intelligence, to indicate that their work emphasized particular tools or was directed at a particular sub-problem. The new names also helped to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence".[50]
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[51][52] but the field was rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[53] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[54]
Current AI "spring" (early 2020s-present)
AI reached the highest levels of interest and funding in its history in the 2020s, by every possible measure, including publications,[55] patent applications,[56] total investment ($50 billion in 2022),[57] and job openings (800,000 U.S. job openings in 2022).[58] The successes of the current "AI spring" or "AI boom" are advances in language translation (in particular, Google Translate), image recognition (spurred by the ImageNet training database) as commercialized by Google Image Search, and in game-playing systems such as AlphaZero (chess), AlphaGo (go) and Watson (Jeopardy). A turning point came in 2012 when AlexNet (a deep learning network) won the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second-place winner.[59]
The 2022 release of OpenAI's AI chatbot ChatGPT, which as of January 2023 had over 100 million users,[60] reinvigorated the discussion about artificial intelligence and its effects on the world.[61][62] Google CEO Sundar Pichai has stated that AI will be the most important technology that humans create.[63]
Notes
- ^ AI Expert Newsletter: W is for Winter Archived 9 November 2013 at the Wayback Machine
- ^ a b c Crevier 1993, p. 203.
- ^ Different sources use different dates for the AI winter. Consider: (1) Howe 1994: "Lighthill's [1973] report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade ― the so-called 'AI Winter'", (2) Russell & Norvig 2003, p. 24: "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the 'AI Winter'"
- ^ "The Era of Mechanical Translation and How It Crashed (History of LLMs #1)". Turing Post. 16 June 2023. Retrieved 11 September 2023.
- ISBN 9780262120029.
- ^ a b c "The Story of AI Winters and What it Teaches Us Today (History of LLMs. Bonus)". Turing Post. 30 June 2023. Retrieved 11 September 2023.
- ^ "Electronic brain translates Russian". Chemical and Engineering News.
- ^ "Polyglot brainchild". aclanthology.org.
- ^ Nation, I. S. P. (2006). "How large a vocabulary is needed for reading and listening?". The Canadian Modern Language Review. 63 (1): 59–82.
- ^ a b John Hutchins 2005 The history of machine translation in a nutshell. Archived 13 July 2019 at the Wayback Machine
- ^ Hutchins, John. (1995). "The whisky was invisible", or Persistent myths of MT Archived 4 April 2020 at the Wayback Machine
- ^ Russell & Norvig 2003, p. 21.
- ^ McCorduck 2004, pp. 52–107
- ^ Pamela McCorduck quotes one colleague as saying, "He was a press agent's dream, a real medicine man." (McCorduck 2004, p. 105)
- ^ a b Crevier 1993, pp. 102–5
- ^ Minsky & Papert 1969.
- ^ Sejnowski, Terrence J. (2018). The Deep Learning Revolution. MIT Press. ISBN 978-0-262-03803-4.
- ^ Crevier 1993, pp. 102–105, McCorduck 2004, pp. 104–107, Russell & Norvig 2003, p. 22
- ^ Crevier 1993, pp. 214–6 and Russell & Norvig 2003, p. 25
- ^ a b Crevier 1993, p. 117, Russell & Norvig 2003, p. 22, Howe 1994 and see also Lighthill 1973
- ^ "BBC Controversy Lighthill debate 1973". BBC "Controversy" debates series. ARTIFICIAL_INTELLIGENCE-APPLICATIONS¯INSTITUTE. 1973. Retrieved 13 August 2010.
- ^ McCarthy, John (1993). "Review of the Lighthill Report". Archived from the original on 30 September 2008. Retrieved 10 September 2008.
- ^ Crevier 1993, p. 65
- ^ a b NRC 1999, under "Shift to Applied Research Increases Investment" (only the sections before 1980 apply to the current discussion).
- ^ Crevier 1993, p. 115
- ^ Crevier 1993, p. 117
- ^ Russell & Norvig 2003, p. 25
- ^ NRC 1999
- ^ Engelmore, Robert; Morgan, Tony (1988). Blackboard Systems. Addison-Wesley. ISBN 0-201-17431-6.
- ^ Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".
- ^ NRC 1999 under "Success in Speech Recognition".
- ^ Engelmore, Robert; Morgan, Tony (1988). Blackboard Systems. Addison-Wesley. pp. 25–121. ISBN 0-201-17431-6.
- ^ Reddy, Raj (April 1976). "Speech recognition by machine: a review". Proceedings of the IEEE. 64 (4): 501–531.
- ^ "There Was No 'First AI Winter' | December 2023 | Communications of the ACM".
- ^ Newquist 1994, pp. 189–201
- ^ Crevier 1993, pp. 161–2, 197–203
- ^ Brooks, Rodney. "Design of an Optimizing, Dynamically Retargetable Compiler for Common LISP" (PDF). Lucid, Inc. Archived from the original (PDF) on 20 August 2013.
- ^ Avoiding another AI Winter, James Hendler, IEEE Intelligent Systems (March/April 2008 (Vol. 23, No. 2) pp. 2–4
- ^ Crevier 1993, pp. 209–210
- ^ Newquist 1994. ISBN 978-0-9885937-1-8.
- ^ Newquist 1994, p. 296
- ^ Crevier 1993, pp. 204–208
- ^ Newquist 1994, pp. 431–434
- ^ Crevier 1993, pp. 211–212
- ^ McCorduck 2004, pp. 426–429
- ^ McCorduck 2004, pp. 430–431
- ^ Alex Castro in Are you talking to me? The Economist Technology Quarterly (7 June 2007) Archived 13 June 2008 at the Wayback Machine
- ^ Robotics firms find fundraising struggle, with venture capital shy. By Patty Tascarella. Pittsburgh Business Times (11 August 2006) Archived 26 March 2014 at the Wayback Machine
- ^ a b Markoff, John (14 October 2005). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Retrieved 30 July 2007.
- ^ Newquist 1994, p. 423
- ^ NRC 1999 under "Artificial Intelligence in the 90s"
- ^ Kurzweil 2005, p. 264.
- ^ "AI set to exceed human brain power". CNN.com (26 July 2006). Archived 3 November 2006 at the Wayback Machine.
- ^ Kurzweil 2005, p. 263.
- ^ UNESCO (2021).
- ^ "Intellectual Property and Frontier Technologies". WIPO. Archived from the original on 2 April 2022. Retrieved 30 March 2022.
- ^ DiFeliciantonio (2023).
- ^ Goswami (2023).
- ^ Christian 2020, p. 24.
- ^ The Guardian. ISSN 0261-3077. Retrieved 30 October 2023.
- ^ ZDNet.
- ^ "OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'". ABC News.
- ^ "Artificial intelligence could be our saviour, according to the CEO of Google". 24 January 2018.
References
- Christian, Brian (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company. OCLC 1233266753.
- UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.
- DiFeliciantonio, Chase (3 April 2023). "AI has already changed the world. This report shows how". San Francisco Chronicle. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Goswami, Rohan (5 April 2023). "Here's where the A.I. jobs are". CNBC. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks. ISBN 0-465-02997-3.
- Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University : a Perspective". Archived from the original on 17 August 2007. Retrieved 30 August 2007.
- Kurzweil, Ray (2005). The Singularity Is Near. Viking Press. ISBN 978-0-670-03384-3.
- Lighthill, Professor Sir James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council.
- Minsky, Marvin; Papert, Seymour (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press. ISBN 0-262-13043-2.
- McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters. ISBN 1-56881-205-1.
- NRC (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. Archived from the original on 12 January 2008. Retrieved 30 August 2007.
- Newquist, HP (1994). The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-9885937-1-8.
- Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
Further reading
- Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
- Luke Muehlhauser (September 2016). "What should we learn from past AI forecasts?". Open Philanthropy Project.
- Gursoy F and Kakadiaris IA (2023) Artificial intelligence research strategy of the United States: critical assessment and policy recommendations. Front. Big Data 6:1206139. doi: 10.3389/fdata.2023.1206139: Global trends in AI research and development are being largely influenced by the US. Such trends are very important for the field's future, especially in terms of allocating funds to avoid a second AI Winter, advance the betterment of society, and guarantee society's safe transition to the new sociotechnical paradigm. This paper examines, through a critical lens, the official AI R&D strategies of the US government in light of this urgent issue. It makes six suggestions to enhance AI research strategies in the US as well as globally.
- Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
External links
- ComputerWorld article (February 2005)
- AI Expert Newsletter (January 2005)
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- Patterns of Software- a collection of essays by Richard P. Gabriel, including several autobiographical essays
- Review of "Artificial Intelligence: A General Survey" by John McCarthy
- Other Freddy II Robot Resources. Includes a link to the 90-minute 1973 "Controversy" debate from the Royal Institution of Lighthill versus Michie, McCarthy and Gregory in response to Lighthill's report to the British government.