In partnership with Superhuman AI

The gold standard for AI news

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day

AI for Impact Opportunities

Balancing AI's Profit, Public Good, and Regulation

Why did the AI refuse to play poker? Because it kept trying to optimize everyone's hands instead of winning!

The race to develop artificial intelligence has created one of the defining tensions of our time: can AI serve the public good while generating massive profits, and who gets to write the rules?

The answer matters enormously because AI is already reshaping healthcare, education, criminal justice, and how we consume information. Getting this balance right could determine whether AI becomes a tool for human flourishing or simply another mechanism for concentrating wealth and power.

The tension between profit and public interest isn't abstract. OpenAI's transition from nonprofit to public benefit corporation exemplifies the challenge. The company argues it needs private capital to develop transformative AI capabilities, but critics worry that shareholder returns will inevitably compromise its original mission to benefit humanity. This pattern repeats across the industry as AI companies backtrack on safety commitments when deregulation becomes politically popular. Meanwhile, AI data centers consume three to five times more power than traditional facilities, creating environmental costs that rarely appear on corporate balance sheets. The fundamental question remains: can market incentives alone drive ethical AI development, or do we need robust public oversight?

Fortunately, some organizations are working to bridge this divide. The Partnership on AI brings together over 100 organizations from industry, civil society, and academia to develop best practices for responsible AI development. Their work spans critical areas including safety-critical AI, fairness and accountability, AI's impact on labor and the economy, and media integrity. The AI for Good Global Summit, organized by the International Telecommunication Union in partnership with over 50 UN agencies, connects stakeholders to harness AI for solving global challenges while advancing standards and governance frameworks. The Forum on Information and Democracy focuses specifically on ensuring democratic control of AI in the information space, producing policy recommendations for states and companies. And research institutes like the AI Now Institute and Data & Society provide critical independent research on AI's social implications, examining how algorithmic systems affect labor, justice, privacy, and power distribution. These organizations demonstrate that thoughtful governance frameworks can emerge when diverse stakeholders collaborate rather than letting private interests set the agenda alone.

The regulatory landscape remains messy and contested. The United States currently lacks comprehensive federal AI legislation, instead relying on a patchwork of state regulations and specific federal rules addressing narrow concerns like non-consensual intimate imagery. While some view this fragmentation as stifling innovation, others argue that flexible, context-specific regulation better matches AI's heterogeneous nature than sweeping frameworks that quickly become outdated. The public utility model offers one intriguing path forward, treating large AI systems like electricity infrastructure subject to public oversight, reasonable access requirements, and transparency standards. This approach recognizes that when 500 million people use a service weekly, its reliability and fairness become matters of public concern rather than purely private business decisions. The challenge lies in designing governance that protects the public interest without crushing innovation entirely.

And why did the policymaker bring a ladder to the AI ethics meeting? Because everyone kept saying we need "oversight"! But seriously, folks, when it comes to AI governance, we actually need both the vision from above and the voices from the ground.

Find out why 100K+ engineers read The Code twice a week.

That engineer who always knows what's next? This is their secret.

Here's how you can get ahead too:

  • Sign up for The Code - a tech newsletter read by 100K+ engineers

  • Get the latest tech news, top research papers & resources

  • Become 10X more valuable

RESOURCES & NEWS

😄 Joke of the Day
Why don't AI models ever get lost? They always follow their training path! 🗺️

📰 News

Wikipedia Faces Existential Threat as AI Chatbots Drain Human Traffic
The Wikimedia Foundation reports a significant decline in human visitors to Wikipedia as readers increasingly get its content secondhand, through AI chatbots and search summaries that answer questions without sending anyone to the site, threatening the long-term sustainability of the volunteer-driven encyclopedia that powers those very AI systems.

MIT Research Reveals AI Scaling Laws May Hit Wall Within Decade
An MIT study published this week suggests the industry's bet on ever-larger models may have a shorter runway than assumed: efficiency gains in smaller models could soon narrow the performance gap with computationally expensive frontier systems, potentially democratizing AI development and eroding the advantage of companies with vast compute resources.

World's First AI-Only Scientific Conference Launches Next Week
Nature reports that Agents4Science 2025, hosted by Stanford's James Zou on October 22, will feature research papers entirely written and reviewed by AI agents, with a 16% acceptance rate comparable to prestigious journals, serving as a crucial experiment to assess whether AI can independently conduct meaningful scientific research.

Guardian Investigation: AI May Be Creating a "Permanent Underclass"
A critical analysis published October 15 argues that AI is creating a permanent underclass of workers, even as the technology itself may prove neither inevitable nor sustainable as an industry, and urges pushback against tech industry narratives that frame AI deployment as unstoppable destiny.

Comprehensive arXiv Study Maps AI Adoption Across Global NGO Sector
The first systematic literature review of AI adoption in NGOs reveals that organizations using AI applications saw fundraising increase by an average of 20%, with applications ranging from chatbots serving refugees to optimizing vaccine distribution in low-income countries, though budget constraints remain the primary barrier to widespread adoption.

💼 Jobs, Jobs, Jobs

Climatebase - Curated climate solutions job board connecting mission-driven professionals with roles at climate tech startups, clean energy companies, sustainable agriculture ventures, and environmental nonprofits working to build a decarbonized future.

👤 LinkedIn Profile to Follow

James Zou - Computer Science Professor, Stanford University
Leading researcher in human-AI collaboration and organizer of Agents4Science 2025, the world's first conference for AI-authored research, exploring how AI agents can autonomously conduct scientific discovery while addressing critical questions about research integrity and the future of human-AI teaming.

🎧 Today's Podcast Pick

Your Undivided Attention - "The Crisis That United Humanity—and Why It Matters for AI"
The Center for Humane Technology explores how the world came together in 1987 to draft the Montreal Protocol, ultimately ratified by all 198 countries, and save the ozone layer, examining what that unprecedented moment of global coordination can teach us about addressing the existential challenge of uncontrollable AI development.
