Statement on AI risk of extinction

Source: Wikipedia, the free encyclopedia.

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:[1][2]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, among them the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields.[1][2] Media coverage emphasized the signatures from several tech leaders;[2] some newspapers subsequently raised concerns that the statement could be motivated by public relations or regulatory capture.[3] The statement was released shortly after an open letter calling for a pause on AI experiments.

The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety. It was released with an accompanying text stating that it remains difficult to speak up about extreme risks of AI and that the statement aims to overcome this obstacle.[1] The center's CEO Dan Hendrycks stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI… not just the risk of extinction" and added, "[s]ocieties can manage multiple risks at once; it's not 'either/or' but 'yes/and.'"[4]

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this."[5] When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks."[6]

Among the well-known signatories are James Pennebaker and Ronald C. Arkin.[7]

References

  1. ^ a b c "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2023-05-30.
  2. ^ The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  3. ^ Wong, Matteo (2023-06-02). "AI Doomerism Is a Decoy". The Atlantic. Retrieved 2023-12-26.
  4. ^ Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2023-05-30.
  5. ^ "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  6. ^ "President Biden warns artificial intelligence could 'overtake human thinking'". USA TODAY. Retrieved 2023-06-03.
  7. ^ "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-03-18.