
Is the Future Bot to Bot? AI for Impact Newsletter, June 25, 2025

In partnership with

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

When Bots Talk to Bots: Protecting Human Connection and Social Impact in the AI-to-AI Internet

The next stage of the internet is arriving fast: one where software agents increasingly “talk” to each other, draft and distribute much of what we read, and even negotiate decisions before a human ever weighs in. For people and organizations working on social impact, this shift is both a multiplier and a minefield, magnifying reach while undermining the very information and relationships that change-makers rely on.

The business case is clear: Fortune Business Insights pegs the worldwide chatbot market at US$5.4 billion in 2023, on track for US$15.5 billion by 2028. Nearly a billion people already interact with chatbots every month, and next-generation “agentic” systems are coming online. MIT Technology Review notes that autonomous AI agents, tools able to plan, delegate, and execute tasks with minimal supervision, are the sector’s hottest frontier. Microsoft’s XiaoIce, for example, routinely sustains 23-turn conversations, out-chatting most humans and hinting at how easily AI-to-AI dialogue could become the default layer of digital interaction.

An internet thick with AI-generated “slop”
With generative tools now integrated into every major CMS and search-optimized workflow, the web is flooding with synthetic prose. An audit of 100,000 newly registered domains by plagiarism-detection firm Originality.AI found that 57 percent of fresh pages in 2024 were written mostly by machines. Wired calls the trend “AI-generated junk” that is polluting search results and crowding out human expertise. The burden is heaviest in lower-resource languages, and for activists, journalists, and NGOs that depend on trustworthy, context-rich information, the signal-to-noise ratio is deteriorating by the week.

Model collapse—when AIs forget reality
Reliance on synthetic text feeds a deeper technical risk: “model collapse.” In the peer-reviewed study “The Curse of Recursion: Training on Generated Data Makes Models Forget,” researchers from Oxford, Cambridge, and the University of Toronto show that when large language models learn from content produced by earlier models, quality plunges, with outputs becoming incoherent within nine recursive rounds. Aid agencies that scrape the open web for disaster assessments could soon be drawing on data that is losing its grip on reality.
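
To see why recursion is corrosive, consider a toy version of the feedback loop: fit a simple model to data, sample synthetic data from the fit, refit on those samples, and repeat. Even in this minimal setting the estimated distribution drifts and its tails erode. The Gaussian model, sample size, and generation count in the sketch below are assumptions chosen for the demo, not parameters from the study.

```python
import random
import statistics

def fit_gaussian(data):
    """Estimate mean and standard deviation from a sample."""
    return statistics.mean(data), statistics.stdev(data)

def sample_gaussian(mu, sigma, n, rng):
    """Draw n synthetic points from the fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)

# Generation 0: "real" data from a ground-truth distribution (mu=0, sigma=1).
data = sample_gaussian(0.0, 1.0, 50, rng)

for generation in range(1, 10):  # nine recursive rounds, echoing the study
    mu, sigma = fit_gaussian(data)
    # Each new "model" trains only on the previous model's output.
    data = sample_gaussian(mu, sigma, 50, rng)
    print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")
# Over repeated rounds sigma tends to drift downward and mu wanders:
# the chain gradually forgets the original distribution.
```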

Social and psychological fallout
Technology meant to “connect everyone” may be amplifying loneliness. A nine-month MIT Media Lab longitudinal study tracking 39.5 million chatbot exchanges linked heavy daily use to higher self-reported loneliness and lower offline socialization. The Montreal AI Ethics Institute warns that always-agreeable bot companions create accountability-free echo chambers, eroding the reflective dialogue social movements need to thrive.

Power concentration and digital colonialism
Almost four-fifths of all AI-chat traffic flows through a single provider: Statcounter shows ChatGPT with 79.8 percent global share, versus 11.8 percent for Perplexity and 5.1 percent for Microsoft Copilot. That dominance, combined with training sets tilted toward English and Western cultural references, is what Brookings calls “digital colonialism”: a handful of firms in the Global North defining knowledge standards for the rest of the world.

Guardrails and counter-models
Think tanks and multilaterals are scrambling to erect safeguards.

Impact-first innovation in practice
Around the world, social entrepreneurs are demonstrating that AI can serve inclusion when humans stay firmly in the loop:

  • India’s Karya pays rural workers fair wages to create high-quality, local-language data that trains more representative models while preserving 12 endangered tongues.

  • Kenya’s Ushahidi combines volunteer mapping with machine-learning classifiers so that every crisis report is algorithmically triaged but human-verified (a pattern sketched in code after this list).

  • Brazil’s NOSSAS equips activists with generative-text toolkits that automatically flag and block synthetic spam, protecting civic forums from bot-driven manipulation.
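
As a rough illustration of the human-in-the-loop pattern Ushahidi’s workflow embodies, the sketch below auto-triages only high-confidence classifications and queues everything else for a human reviewer. The `classify` heuristic, the 0.9 threshold, and the `Report` fields are hypothetical placeholders, not Ushahidi’s actual pipeline.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real deployment would tune this

@dataclass
class Report:
    text: str
    category: str | None = None
    needs_human_review: bool = True

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a real ML classifier; returns (label, confidence)."""
    # Toy keyword heuristic, purely for illustration.
    if "flood" in text.lower():
        return "flood", 0.95
    return "other", 0.40

def triage(reports: list[Report]) -> tuple[list[Report], list[Report]]:
    """Split reports into an auto-triaged queue and a human-review queue."""
    auto, review = [], []
    for report in reports:
        label, confidence = classify(report.text)
        report.category = label
        if confidence >= CONFIDENCE_THRESHOLD:
            report.needs_human_review = False
            auto.append(report)
        else:
            review.append(report)  # a person makes the final call
    return auto, review

auto, review = triage([Report("Flood near the river market"),
                       Report("Unclear disturbance reported downtown")])
print(len(auto), "auto-triaged;", len(review), "queued for human review")
```

The design choice is the point: the algorithm never has the last word on a low-confidence report, so scale comes from automation while accountability stays with people.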

Some key recommendations
1 – Champion authenticity: advocate for universal labels on AI-generated text and images, as sketched after this list (note: this post was prepared with perplexity.ai, with extensive human prompting and some editing).
2 – Invest in sovereign data: co-create local datasets with affected communities to prevent model collapse and cultural erasure.
3 – Demand human oversight: insist that any impact-sector deployment keep people in the decision loop, especially where rights or resources are on the line.
4 – Build alliances: partner with the institutes above to embed ethics reviews, bias testing, and red-team drills from day one.
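
For recommendation 1, a label can be as simple as machine-readable provenance metadata attached to published content. The sketch below shows one hypothetical format; the field names and the `label_ai_content` helper are illustrative inventions, and production systems would more likely follow an emerging standard such as C2PA.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model: str, human_edited: bool) -> dict:
    """Attach a simple machine-readable provenance label to generated text.

    Illustrative format only, not an established standard; real deployments
    would follow an emerging scheme such as C2PA manifests.
    """
    return {
        "content": text,
        "provenance": {
            "generator": model,
            "human_edited": human_edited,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_content(
    "Draft paragraph produced by a language model.",
    model="example-llm",  # hypothetical model name
    human_edited=True,
)
print(json.dumps(record, indent=2))
```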

Whether the web becomes an endless hall of chatbots echoing each other—or a multiplier for inclusive progress—will hinge on these collective choices. The technology is moving quickly; the window for shaping it to serve people and planet is open, but narrowing fast.

Social Impact Opportunities

Sponsored
Remote Source – The leading source of content for 50,000+ remote workers. Open jobs, relevant news, must-have products, impactful trends, professional career advice, travel ideas, and more. Work Remote, Live Free!

News & Resources

😄 Joke of the Day
Why did the AI refuse to share its lunch? Because it didn’t want to reveal its neural sandwich!

🌍 News

  • Meta’s Louisiana AI Data Center Sparks Energy Worries
Meta plans a $10 billion, 4-million-square-foot AI data center in Holly Ridge, Louisiana, drawing criticism as Entergy proposes three new natural gas plants to power it. Residents fear rising utility bills and environmental harm. Critics urge transparency and question whether clean-energy claims are sufficient.

  • New Privacy Book Reviews Target Surveillance in Academia
    MIT Technology Review highlights three recent books examining the rising surveillance state in higher education, urging a rethink of privacy frameworks on campuses.

  • India Uses AI & Satellites to Tackle Urban Heat Risk
    In Delhi, researchers and NGOs deploy AI models like Sunny Lives to analyze heat risk at the building level. This allows heat action plans to allocate resources—like shelters and cooling stations—more precisely.

🔗 LinkedIn Connection

Conor Grennan – A passionate advocate for ethical technology and social impact. His insights span global education, nonprofit leadership, and strategic partnerships.

💼 Career Resource

Explore impactful job opportunities on Wellfound—ideal for professionals looking to work with mission-driven AI startups and social enterprises.