In partnership with Attio

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

AI for Impact Opportunities

Make Newsletter Magic in Just Minutes

Your readers want great content. You want growth and revenue. beehiiv gives you both. With stunning posts, a website that actually converts, and every monetization tool already baked in, beehiiv is the all-in-one platform for builders. Get started for free, no credit card required.

AI: To Doom or Not?

You know the AI safety debate has gotten intense when researchers start calculating the exact percentage chance that their own creation will end humanity. It's like watching someone build a rocket and simultaneously betting on whether it'll reach Mars or explode on the launchpad — except we're all on the rocket.

What's the Deal with P(Doom)?

P(doom) is the probability that artificial intelligence will cause human extinction or permanent disempowerment. According to a 2023 survey of AI researchers, the average estimate for AI causing human extinction within 100 years sits at 14.4% (with a median of 5%). To put that in perspective: that's roughly a one-in-seven chance, orders of magnitude more likely than being struck by lightning, let alone winning the lottery.

But here's where it gets wild: individual predictions range from Yann LeCun's optimistic <0.01% to Roman Yampolskiy's apocalyptic 99.9%. Nobel laureate Geoffrey Hinton (who literally helped invent the neural networks powering today's AI) puts it between 10% and 50%, while Anthropic CEO Dario Amodei estimates 25%. That's not a rounding error; that's experts fundamentally disagreeing about whether humanity has a coin flip's chance of survival.
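
For a sense of why the mean and the median can tell such different stories, here's a toy sketch. It is not the survey data, just the handful of public estimates quoted above run through basic summary statistics (Hinton's 10-50% range is taken at its midpoint, purely an assumption for illustration). A single near-certain-doom outlier drags the average well above the "typical" estimate, which is the same dynamic behind the 14.4%-mean, 5%-median survey result.

```python
from statistics import mean, median

# Toy illustration only: the public estimates quoted above, as probabilities.
# Hinton's 10-50% range is represented by its midpoint (an assumption).
estimates = {
    "LeCun": 0.0001,      # "<0.01%"
    "Amodei": 0.25,       # 25%
    "Hinton": 0.30,       # midpoint of 10-50%
    "Yampolskiy": 0.999,  # 99.9%
}

values = sorted(estimates.values())
print(f"mean:   {mean(values):.1%}")    # ~38.7% -- pulled up by the outlier
print(f"median: {median(values):.1%}")  # 27.5% -- the middle of the pack
```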

The Doomer Case: Why We Should Worry

  • The experts are genuinely scared: When Hinton left Google to speak freely about AI risks and Yoshua Bengio started warning about existential threats, the AI community took notice. These aren't fringe conspiracy theorists; they're the architects of modern AI.

  • Speed vastly outpaces safety: AI capabilities are advancing far faster than safety research, governance frameworks, and our ability to understand what these systems are actually doing. We're essentially driving at 100 mph while still designing the brakes.

  • Independent watchdogs back this up: Organizations like the Future of Life Institute and the Center for Humane Technology have documented how current AI development prioritizes capability over safety. The 2025 AI Safety Index shows significant gaps in safety infrastructure globally.

  • The alignment problem is unsolved: We don't know how to guarantee that superintelligent AI will share human values or follow our intentions, even when we think we've programmed it correctly.

  • It's a one-shot problem: Unlike climate change where we get feedback and can course-correct, a misaligned superintelligent AI could potentially prevent any human intervention.

The Critical Counterpoint: "If We Build It, We Will All Die" Ignores Today's Harms

Here's the uncomfortable truth about AI doomerism: while we obsess over hypothetical extinction events, AI is causing measurable harm right now.

The focus on speculative p(doom) calculations creates a dangerous blindspot:

  • Real people are losing jobs today to AI automation, with little support or transition planning

  • Algorithmic bias is denying loans, healthcare, and opportunities to marginalized communities right now

  • Misinformation and deepfakes are destabilizing democracies in real-time, not in some distant future

  • Artists and creators are watching their livelihoods evaporate as generative AI floods markets with derivative content

  • Mental health impacts from social media algorithms optimized for engagement over wellbeing affect billions daily

The doomer narrative can become a form of privilege — focusing on science fiction scenarios while ignoring the mundane but devastating ways AI is reshaping power, labor, and society. It's like worrying about a meteor strike while your house is actively on fire.

Additionally, the "if we build AGI, we all die" framing can paradoxically justify reckless speed in AI development. If extinction is supposedly inevitable anyway, why bother with careful, inclusive governance? Why address bias, explainability, or worker displacement when the "real" problem is superintelligence?

The Skeptic Case: More Reasons for Doubt

  • The prediction spread is absurd: When expert estimates range from 0.01% to 99.9%, we're clearly operating with massive uncertainty. These aren't scientific predictions — they're educated guesses at best.

  • Opportunity cost is enormous: Resources poured into preventing speculative superintelligence scenarios could address concrete AI harms affecting billions today.

  • Regulatory capture risk: Doomerism rhetoric can be weaponized by existing AI giants to push for regulations that conveniently block competitors while cementing their own dominance.

  • Historical precedent favors skepticism: From electricity to automobiles to the internet, technology pessimists have consistently overestimated catastrophic risks while underestimating adaptation and governance.

  • The AI winter possibility: We may hit fundamental limitations in current AI approaches long before reaching anything resembling dangerous superintelligence.

Where Do We Go From Here?

Perhaps the answer isn't choosing between doomer and optimist camps, but embracing a "both/and" approach: Take long-term risks seriously while urgently addressing present-day harms. Support organizations doing both — like Future of Life Institute's work on AI safety frameworks AND groups pushing for algorithmic accountability and worker protections today.

Resources to Go Deeper

📖 MIT Tech Review: The AI Doomers Feel Undeterred
🎥 YouTube Discussion on AI Existential Risk
🔬 Future of Life Institute — AI Safety Index & Research
🧠 Center for Humane Technology — Addressing Current AI Harms
📊 P(doom) Tracker — See how predictions evolve

So here's my question for you: What's YOUR p(doom)? Provide your input below.

RESOURCES & NEWS

😄 Joke of the Day
How many AI safety researchers does it take to change a lightbulb?
We can't tell you—it's an information hazard. 💡🔒

📰 News

Hackers Expose 1,100-Phone Farm Powering AI-Generated TikTok Ads
An a16z-backed operation using 1,100 phones was exposed after hackers gained control of the massive device farm that floods TikTok with covert AI-generated influencer content and advertisements, revealing the industrial-scale infrastructure behind synthetic social media manipulation.

The Real AI Arms Race: Data Center Energy Demand
Tech Policy Press argues the urgent moratorium conversation should focus on halting new data center construction rather than AI model development, as energy infrastructure requirements threaten climate goals while Big Tech's AI buildout consumes unprecedented electricity and water resources.

AI Agents Set to Transform Scientific Research in 2026
Nature's 2026 preview highlights AI agents integrating multiple large language models to conduct complex, multi-step research with minimal human oversight—potentially delivering the first consequential scientific advances made autonomously by AI, though researchers warn of serious failure modes including accidental data deletion.

Over Half of Academic Peer Reviewers Now Use AI Tools
A Frontiers survey of 1,600 researchers found more than 50% use artificial intelligence while peer reviewing manuscripts, prompting publishers to implement policies requiring human accountability, clear guidelines, and proper training as AI becomes embedded in the scientific publishing process.

MIT Technology Review Names AI Materials Discovery as Overhyped
Despite renewed interest and funding, AI-driven materials discovery remains waiting for its breakthrough moment, with researchers questioning whether machine learning can truly accelerate the identification of new compounds and substances beyond incremental improvements.

💼 Jobs, Jobs, Jobs

80,000 Hours - High-impact career opportunities board featuring positions in AI safety research, biosecurity, global health, animal welfare, and evidence-based policy at leading research institutes, effective altruism organizations, and mission-driven startups worldwide.

👤 LinkedIn Profile to Follow

Timnit Gebru - Founder, Distributed AI Research Institute (DAIR)
Computer scientist and pioneering researcher exposing algorithmic bias, co-founder of Black in AI, and leading voice challenging Big Tech on AI safety, data sovereignty, racism in AI systems, and the concentration of power in artificial intelligence development.

Also see our weekly List of Amazing People to follow on Feed Awesome.

🎧 Today's Podcast Pick

Social Change Career Podcast - "Navigating the World of Work in the Age of AI"
Check out the recording on LinkedIn—the audio episode will be live on Monday as part of over 200 episodes of the Social Change Career Podcast at pcdn.global/listen.
