AI Friend or Foe? PCDN AI For Impact Newsletter, June 13, 2025
Looking for unbiased, fact-based news? Join 1440 today.
Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.
AI: Friend or Foe? Reclaiming the Algorithmic Future for Global Social Impact
I just wrapped the audiobook of Code Dependent—Madhumita Murgia’s riveting journey through the human cost of automation—and cracked open Karen Hao’s forthcoming Empire of AI to uncover how a handful of firms are hard-wiring the rules of tomorrow. Their stories track what many of us witness daily: AI now guides emergency relief in Mozambique, powers tele-health in Bogotá, screens résumés in Mumbai, and forecasts wildfires in Sydney. The market has rocketed past US $240 billion, and roughly 378 million people interact with machine-learning systems every day. The capacity for large-scale problem-solving is breathtaking; so is the risk that yesterday’s injustices will be baked into tomorrow’s code.
The Upside: Tech That Turbo-Charges Impact
Across continents, mission-driven groups are proving that data and compassion can coexist. The UN-backed AI for Good Global Summit connects deep-tech innovators with frontline responders to tackle food security, public health, and climate resilience. DataKind pairs volunteer data scientists with nonprofits to predict homelessness hot spots, while government-funded AI Sweden offers open cloud compute so rural entrepreneurs can prototype solutions in days, not months. Nairobi’s Deep Learning Indaba accelerates African scholarship, Mexico’s C Minds – AI for Climate leverages satellite imagery to protect biodiversity, and AI for Peace researches how the technology can support peacebuilding and conflict prevention. Peer-reviewed studies show that well-designed models can cut medical misdiagnosis by 40 percent and guide emergency crews more effectively inside the golden 72-hour window after natural disasters.

The Darker Mirror: Surveillance, Profit & Exploitation
The very tools that promise opportunity also turbo-charge surveillance capitalism. The default business model—harvest everything, predict behavior, nudge for profit—now shapes “smart-city” traffic systems, welfare fraud scores, and political ad targeting. Hidden armies of low-wage annotators in Nairobi, Manila, and Caracas absorb the psychological trauma of content moderation so that platforms in wealthier markets stay “clean.” MIT researchers found facial-recognition errors for darker-skinned women up to 35 times higher than for lighter-skinned men, while lenders have overcharged Black home buyers after algorithms recycled decades of red-lining data. McKinsey warns automation could displace 300 million full-time jobs worldwide, with the Global South carrying the heaviest burden. When opaque models decide who gets a loan, a job, or bail, centuries of bias compress into milliseconds—and scale globally.
The Accountability Gap: Too Much Power, Too Little Transparency
Empire of AI lays bare how a handful of firms dominate compute, cloud contracts, and research budgets, leaving civil society in perpetual catch-up mode. Even vendors trumpeting “responsible AI” credentials have sold surveillance tech later withdrawn only after public outrage. A Capgemini survey shows 90 percent of large organizations admit at least one serious ethical lapse, yet fewer than half submit their systems to independent audit. Until we close this governance gap, mission-driven actors will keep fighting algorithmic fires they never started.
A Global Playbook for Human-Centered AI
Momentum for a different trajectory is real. Academic hubs such as Stanford’s Institute for Human-Centered AI, UC Berkeley’s BAIR Lab, and Tsinghua’s Institute for AI Industry Research are training technologists fluent in code and ethics. Multi-stakeholder networks—the Partnership on AI, Data & Society, the Responsible AI Institute, the Ada Lovelace Institute, Access Now, and Amnesty Tech—are demanding impact assessments, community oversight boards, and enforceable opt-out rights. Policy fronts—from the EU AI Act to UNESCO’s Recommendation on the Ethics of AI—are inching toward baseline standards, but they will fall short without relentless pressure from practitioners who see the day-to-day fallout of biased code.
What Comes Next
The algorithmic future is still up for grabs. If our global social-impact community mobilizes—making sure every line of code honors human dignity and every dataset reflects lived reality—AI can become the most powerful ally we’ve ever had. If we stay silent, someone else will write the code, and the story it tells about human possibility will not be ours.
We are growing rapidly, and if you like the newsletter, help us scale. Our super-easy referral tool earns you great benefits when you help others subscribe. It only takes a few seconds to support us and grow your career.
What are your thoughts on AI for Good? Hit reply to share your ideas or favorite resources, or take 15 seconds to fill out the super-quick survey below.
Got tips, news, or feedback? Drop us a line at [email protected] or simply respond to this message. Your input makes this newsletter better every week.
Share your feedback on the AI for Impact Newsletter
AI for Impact Opportunities
Start learning AI in 2025
Keeping up with AI is hard – we get it!
That’s why over 1M professionals read Superhuman AI to stay ahead.
Get daily AI news, tools, and tutorials
Learn new AI skills you can use at work in 3 mins a day
Become 10X more productive
Make your Inbox and Career Awesome for less than the cost of a cup of coffee a month.
Consider Subscribing to the PCDN Career Digest.
200+ awesome job, funding, fellowship, socent, and upskilling opps (and more) a month
Learn more via the video below or here https://pcdnglobal.beehiiv.com/c/career-campus
News & Resources
😄 Joke of the Day
Why did the AI refuse to play hide and seek?
Because good luck hiding when you’re always in the cloud!
🌐 AI & Social Impact News
FTC Urged to Investigate AI Therapy Bots
Nearly two dozen digital rights and consumer protection groups have filed a complaint with the FTC against Meta and Character.AI, alleging their therapy-themed chatbots engage in “unlicensed practice of medicine.”
Read more at 404 Media
The Rise of ‘Vibe Coding’ and the Engineering Apocalypse
Wired explores the growing trend of “vibe coding,” where AI tools are used to write code with minimal human oversight. While this approach can increase productivity, experts warn it could erode essential engineering skills, introduce hidden bugs, and threaten the reliability of critical digital infrastructure.
Read more at Wired
Greenhouse and CLEAR Join Forces Against Fake Job Applications
As AI-generated fake job applications flood tech hiring pipelines, HR tech company Greenhouse and identity verification firm CLEAR are teaming up to combat the surge.
Read more at Fast Company
Five Ways AI Can Upgrade Fatherhood
The Partnership on AI spotlights how AI can support dads, from helping manage parental leave to providing mental health resources and fostering more equitable caregiving roles.
Read more at Partnership on AI
🏢 PCDN Global Career board
Over 400 impact roles around the globe.
Explore open roles
🔗 LinkedIn Profile to Follow
Erin Reddick is the founder and CEO of ChatBlackGPT, a platform advancing responsible, culturally aware AI. She shares insights on ethical AI, diversity in tech, and the intersection of technology with social impact.
Follow Erin Reddick on LinkedIn