Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?

Published: June 21, 2025
on the channel: Doom Debates

Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.

Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to go extinct anyway via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts

**Show Notes**

Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt...

Dr. Critch’s Website: https://acritch.com/

Dr. Critch’s Twitter: @andrewcritchphd

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to the @doomdebates YouTube channel.