Silicon Valley’s New Metric: The ‘p(doom)’ of A.I. Threats

As A.I. rapidly evolves, tech leaders in Silicon Valley are assessing its existential risks with ‘p(doom)’, an informal estimate of the probability of an A.I.-induced catastrophe.


Silicon Valley is buzzing with a new metric, ‘p(doom)’, shorthand for the “probability of doom”: a rough estimate of how likely artificial intelligence is to cause human extinction or another major calamity. Once an inside joke among A.I. enthusiasts, the concept has gained traction amid the rapid advances set off by ChatGPT.

Prominent figures such as Dario Amodei of Anthropic and Lina Khan of the FTC have publicly shared their p(doom) numbers, reflecting varying levels of concern about A.I. risks. The term even surfaced during OpenAI’s recent leadership transition, when interim CEO Emmett Shear’s high p(doom) raised concerns among employees.

The term originated over a decade ago on LessWrong, an online forum for the Rationalist movement, and was popularized by figures like Eliezer Yudkowsky. It reflects the broader debate in Silicon Valley between those urging caution in A.I. development and others advocating for faster progress.

Leading A.I. researchers, including Geoffrey Hinton and Yoshua Bengio, have voiced serious concerns about unregulated A.I., putting the probability of catastrophic outcomes at roughly 10 to 20 percent, figures that underscore the urgency of addressing the potential dangers.

However, p(doom) isn’t meant as a precise prediction. It serves as a shorthand for where someone stands on the A.I. risk spectrum and signals that they have thought seriously about its potential impacts. A person’s number also reveals their views on A.I. governance and safety practices, which in turn shape how high or low that number lands.

Ajeya Cotra, a senior researcher at Open Philanthropy, notes that while p(doom) offers a quick reference point, it oversimplifies the complexity of A.I. risks, which depend heavily on governance and regulatory choices. In the end it acts as a kind of Rorschach test, revealing more about our views on humanity’s capacity to manage emerging technologies than about A.I. itself.
