P(doom) explained

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence.[1] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[2] and Yoshua Bengio[3] began to warn publicly of the risks of AI. A 2022 survey of AI researchers, which had a 17% response rate, found that a majority of respondents believed there is at least a 10% chance that our inability to control AI will cause an existential catastrophe.[4]

Sample P(doom) values

Name | P(doom) | Notes
Dario Amodei | 10-25%[5] | CEO of Anthropic
Elon Musk | 10-20%[6] | Businessman and CEO of X, Tesla, and SpaceX
Paul Christiano | 50%[7] | Head of research at the US AI Safety Institute
Lina Khan | 15% | Chair of the Federal Trade Commission
Emmett Shear | 5-50% | Co-founder of Twitch and former interim CEO of OpenAI
Geoffrey Hinton | 10-50% | AI researcher, formerly of Google
Yoshua Bengio | 20%[8] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms
Jan Leike | 10-90%[9] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI
Vitalik Buterin | 10% | Co-founder of Ethereum
Dan Hendrycks | 80%+ | Director of the Center for AI Safety
Grady Booch | | American software engineer
Casey Newton | 5% | American technology journalist
Eliezer Yudkowsky | 95%+ | Founder of the Machine Intelligence Research Institute
Roman Yampolskiy | 99.9%[10] | Latvian computer scientist
Marc Andreessen | 0%[11] | American businessman
Yann LeCun | <0.01%[12] | Chief AI Scientist at Meta

Criticism

There has been some debate about the usefulness of P(doom) as a term, in part because it is often unclear whether a given estimate is conditional on the existence of artificial general intelligence, what time frame it assumes, and what precisely counts as "doom".[13]
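To illustrate why that ambiguity matters, the sketch below works through a purely hypothetical example (the numbers are invented, not drawn from any survey or from the table above): the same forecaster can report very different figures depending on whether the estimate is conditional on AGI being developed and on the time frame assumed.

```python
# Hypothetical illustration: the same forecaster can report very different
# "P(doom)" values depending on whether the estimate is conditional on AGI
# being built and on the assumed time frame. All numbers below are made up.

p_doom_given_agi = 0.30   # assumed P(doom | AGI is developed)
p_agi_by_2100 = 0.50      # assumed P(AGI developed by 2100)
p_agi_by_2040 = 0.20      # assumed P(AGI developed by 2040)

# Unconditional estimates via the law of total probability, assuming
# (for simplicity) negligible AI-caused doom in scenarios without AGI.
p_doom_by_2100 = p_doom_given_agi * p_agi_by_2100
p_doom_by_2040 = p_doom_given_agi * p_agi_by_2040

print(f"P(doom | AGI)   = {p_doom_given_agi:.0%}")  # 30%
print(f"P(doom) by 2100 = {p_doom_by_2100:.0%}")    # 15%
print(f"P(doom) by 2040 = {p_doom_by_2040:.0%}")    # 6%
```

Under these assumptions the conditional estimate (30%) is several times the unconditional estimate over the shorter horizon (6%), which is the kind of gap critics point to when headline figures are compared without their underlying conditions.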

In popular culture

See also

Notes and References

  1. Thomas, Sean (2024-03-04). "Are we ready for P(doom)?". The Spectator. Retrieved 2024-06-19.
  2. Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 2024-06-19.
  3. "One of the 'godfathers of AI' airs his concerns". The Economist. ISSN 0013-0613. Retrieved 2024-06-19.
  4. "2022 Expert Survey on Progress in AI" (2022-08-04). AI Impacts. Retrieved 2024-06-19.
  5. Roose, Kevin (2023-12-06). "Silicon Valley Confronts a Grim New A.I. Metric". The New York Times. ISSN 0362-4331. Retrieved 2024-06-17.
  6. Tangalakis-Lippert, Katherine. "Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway". Business Insider. Retrieved 2024-06-19.
  7. "ChatGPT creator says there's 50% chance AI ends in 'doom'" (2023-05-03). The Independent. Retrieved 2024-06-19.
  8. "It started as a dark in-joke. It could also be one of the most important questions facing humanity" (2023-07-14). ABC News. Retrieved 2024-06-18.
  9. Railey, Clint (2023-07-12). "P(doom) is AI's latest apocalypse metric. Here's how to calculate your score". Fast Company.
  10. Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. Retrieved 2024-06-18.
  11. Marantz, Andrew (2024-03-11). "Among the A.I. Doomsayers". The New Yorker. ISSN 0028-792X. Retrieved 2024-06-19.
  12. Williams, Wayne (2024-04-07). "Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees". TechRadar. Retrieved 2024-06-19.
  13. King, Isaac (2024-01-01). "Stop talking about p(doom)". LessWrong.
  14. "GUM & Ambrose Kenny-Smith are teaming up again for new collaborative album 'III Times'" (2024-05-07). DIY. Retrieved 2024-06-19.