
When is AGI Coming or Not? PCDN AI For Impact Newsletter, May 16, 2025

In partnership with

Unlock AI-powered productivity

HoneyBook is how independent businesses attract leads, manage clients, book meetings, sign contracts, and get paid.

Plus, HoneyBook’s AI tools summarize project details, generate email drafts, take meeting notes, predict high-value leads, and more.

Think of HoneyBook as your behind-the-scenes business partner—here to handle the admin work you need to do, so you can focus on the creative work you want to do.

AI for Impact Newsletter: Superhuman AI – Reality, Hype, or Somewhere In Between?

This post was developed by Dr. Zelizer, with plenty of prompting and research assistance from Perplexity.

I'm always listening to podcasts and reading books to stay up to date, and this week I listened to Mystery AI Hype Theater 3000 from Emily Bender and Alex Hanna on whether AI/AGI is hype or not. They're very critical, and I encourage people to take a look or listen: here

Superhuman AGI is just around the corner! We're months away from digital minds that will solve climate change, cure cancer, and finally explain why my cat stares at empty walls! ...Or are we? Perhaps AGI is a mirage that keeps appearing on the horizon while never getting closer, like those promises that "this meeting could have been an email."

Either way, the joke might be on us. As one AI researcher quipped: "The most intelligent entity in the room isn't the AI; it's the venture capitalist who funded it without understanding how it works."

Now, let's dive into what's actually happening in the AI landscape in 2025...

As the world navigates the complex landscape of artificial intelligence in mid-2025, the question of whether humanity is approaching "superhuman AI" continues to spark intense debate among researchers, technologists, and social impact practitioners. The AI Index Report 2025 from Stanford HAI notes that "AI performance on demanding benchmarks continues to improve" and that "AI is increasingly embedded in everyday life," suggesting significant advancement in the field.

The Berkeley-based AI Futures Project recently made headlines with its prediction that superhuman AI could arrive within just a few years. This perspective aligns with the accelerating capabilities witnessed in large language models and other AI systems.

Proponents of the "superhuman AI is near" perspective point to several compelling developments. Scaling laws appear to be holding, with larger models consistently outperforming their predecessors. Multimodal capabilities have expanded dramatically, with systems now able to process, understand, and generate across text, images, audio, and video simultaneously. AI systems are increasingly demonstrating emergent abilities: capabilities not explicitly programmed that appear only at scale.
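For readers who want the "scaling laws" claim made concrete, here is a minimal Python sketch of the power-law relationship reported in Kaplan et al.'s "Scaling Laws for Neural Language Models" (2020), where test loss falls roughly as L(N) = (N_c / N)^α as parameter count N grows. The constants are that paper's rough published fits; this illustrates the shape of the curve, not a prediction about any particular model.

```python
# Minimal illustration of a neural scaling law (Kaplan et al., 2020):
# test loss falls as a power law in parameter count, L(N) = (N_c / N) ** alpha.
# N_C and ALPHA are the paper's rough fits for model size; illustrative only.

N_C = 8.8e13   # fitted constant, in parameters
ALPHA = 0.076  # fitted exponent for scaling with model size

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n_params in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n_params:.0e} parameters -> predicted loss {predicted_loss(n_params):.3f}")
```

Each tenfold increase in parameters shaves a roughly constant fraction off the loss, which is why "larger models consistently outperforming their predecessors" has, so far, looked like a straight line on a log-log plot.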

From a social impact perspective, AI is already delivering meaningful benefits across domains including health, economics, and education.

Among the most prominent critics of AI hype is computational linguist Emily Bender, who famously described large language models as "stochastic parrots": systems that mimic language patterns without genuine understanding. In her recent lecture at Harvey Mudd College, Bender emphasized that when LLM outputs are correct, "that is just by chance... You might as well be asking a Magic 8 ball."

In their new book "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want," Bender and sociologist Alex Hanna argue that AI is not what it's marketed to be. They aim to strip away the hyperbole around AI and reveal why the technology is not as all-powerful as many believe. Their work highlights how the "superhuman AI" narrative can devalue human agency and autonomy.

Critics point to several fundamental limitations of current AI systems. Despite impressive performance, these systems lack true understanding of the world; they process patterns in data rather than forming genuine mental models. They remain brittle, often failing unpredictably when faced with novel situations. The Center for AI Safety has documented numerous cases of AI systems producing harmful outputs despite safety measures.

The debate around superhuman AI often obscures more immediate concerns about existing AI systems. As outlined in the AI Index Report 2025, trust remains a major challenge, with fewer people believing AI companies will safeguard their data, and persistent concerns about fairness and bias. Misinformation, particularly in the form of deepfakes, continues to pose significant risks to democratic processes.

From a social impact perspective, the superhuman AI debate raises profound questions about power, equity, and governance. If such systems are developed, who will control them? How can society ensure they benefit humanity broadly rather than concentrating power in the hands of a few? The AI Governance Alliance suggests that anticipatory governance frameworks are essential.

The social sector faces both opportunities and challenges in this landscape. Organizations like BlueDot Impact are working to bridge these divides by providing accessible AI education and resources specifically designed for social impact practitioners. Meanwhile, the Distributed AI Research Institute (DAIR) emphasizes the importance of community-centered approaches that prioritize the needs and perspectives of marginalized groups.

What do you think?

Are we witnessing the dawn of superhuman intelligence, or simply more sophisticated tools with fundamental limitations? Perhaps the most important question isn't whether AGI is possible, but rather: who benefits from our collective belief that it is?

If superintelligent AI is inevitable, shouldn't we be far more concerned about who controls it and how it's deployed? And if it's largely hype, shouldn't we redirect our attention to the very real harms and benefits of existing AI systems?

I'd love to hear your thoughts! Share your perspective and any resources you've found valuable in understanding this complex landscape. Feel free to respond to this message or fill out our super-quick newsletter feedback survey below.

In the meantime, whether AGI arrives tomorrow or never, one thing remains certain: the most important intelligence for creating positive social change isn't artificial; it's the collective human intelligence we bring to addressing our shared challenges.

Got tips, news, or feedback? Drop us a line at [email protected], respond to this message, or take 15 seconds to fill out the survey below. Your input makes this newsletter better every week.

Share your feedback on the AI for Impact Newsletter

AI for Impact Opportunities

There’s a reason 400,000 professionals read this daily.

Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.

Sponsored
Supercool – Discover supercool climate solutions that raise profits and quality of life, now scaling around the U.S. and across the globe.
Sponsored
Remote Source – The leading source of content for 50,000+ remote workers. Open jobs, relevant news, must-have products, impactful trends, professional career advice, travel ideas, and more. Work Remote, Live Free!

News & Resources

Joke of the Day
Why did the AI break up with the data scientist?
It found them too predictable.

🌍 News

  • We've Pathologized Humanity – Engaged Orgs dives into how workplace well-being tools often miss the mark, using behavioral tech to "fix" people instead of systems. A must-read for anyone deploying AI in HR or mental health contexts.

  • Flock's License Plate Network Now Tracks People – A leaked presentation shows how a surveillance company is expanding from car tracking to full-blown people-tracking tools. A major ethical red flag for AI surveillance practices.

  • AI Chatbot ‘Closure Ink’ Ghosts Its Users – An AI tool meant to provide "closure" from exes ended up ghosting its own users. What happens when AI meets emotional labor and fails?

  • Robot Chefs Are Changing South Korea's Kitchens – Automation is rapidly reshaping food service, but South Korea’s approach shows how robot chefs can complement, not replace, human staff.

💼 PCDN Career Board
🌐 PCDN Global – A global platform featuring 400+ impact jobs, fellowships, and projects in social entrepreneurship, AI for good, and more.
✨ Mission: Democratizing access to high-impact careers around the world.
🔗 Check out PCDN’s job board

🔗 LinkedIn Connection
👤 Mia Dand – Founder of Women in AI Ethics and a leading advocate for inclusive and responsible AI development. Her posts are packed with actionable insights on ethics, equity, and tech accountability.
🔗 Follow Mia on LinkedIn