Top AI researchers and industry leaders have been warning that superintelligent AI may cause human extinction within the next decade. If you haven't been following all the urgent warnings, I'm here to bring you up to speed:
Human-level AI is coming soon
It’s an existential threat to humanity
The situation calls for urgent action
Listen to this 15-minute intro to get the lay of the land. Then follow these links to learn more and see how you can help:
Lethal Intelligence
https://lethalintelligence.ai/
A catalogue of AI doom explainers by notable people in the field
The Compendium
https://www.thecompendium.ai/
A longer written introduction to AI doom by Connor Leahy et al.
AGI Ruin: A List of Lethalities
https://www.lesswrong.com/posts/uMQ3c...
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity
AISafety.info
https://aisafety.info
A catalogue of AI doom arguments and responses to objections
PauseAI.info
https://pauseai.info/
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI
PauseAI Discord
/ discord
Chat with PauseAI members, see a list of projects and get involved
---
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and to build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://DoomDebates.com and to my YouTube channel: / @doomdebates