
Data, Data, Data, PCDN AI for Impact Newsletter, June 11, 2025

In partnership with

AI You’ll Actually Understand

Cut through the noise. The AI Report makes AI clear, practical, and useful—without needing a technical background.

Join 400,000+ professionals mastering AI in minutes a day.

Stay informed. Stay ahead.

No fluff—just results.

Data, Data, Data…

The phrase "data is the new oil" has become the rallying cry of the digital age, but honestly, this comparison doesn't capture the full picture of what's happening globally. While oil gets refined once and burned, data behaves more like a renewable resource that becomes exponentially more powerful when combined. When platforms like WeChat in China or WhatsApp in India collect user behavior data alongside payment information and social connections, they're creating digital profiles that are far more valuable than any single dataset. The real transformation happens when organizations can vacuum up massive amounts of information from every digital touchpoint - your shopping patterns in São Paulo, commute data in Lagos, or social media interactions in Jakarta - suddenly revealing insights about human behavior that transcend borders.

But this global data extraction has created a copyright and consent crisis that's playing out differently across continents. In Europe, GDPR has forced companies to be more transparent about data collection, while in many developing nations, digital rights frameworks are still catching up to the reality of AI training. When AI systems get trained on content scraped from the internet - including copyrighted articles, creative works, and personal posts - they can reproduce that protected material in unexpected ways. The legal questions are staggering: if an AI trained on Brazilian music generates a melody that sounds suspiciously like a copyrighted song, who's liable? Companies like Google DeepMind and IBM's AI research division are developing ethical AI frameworks to address these concerns, but the complexity multiplies when you consider that AI companies often can't even identify what specific content influenced a particular output.

The privacy implications extend far beyond individual concerns to entire communities and cultures. These AI systems are trained on datasets that might contain indigenous knowledge, traditional practices, or community-specific information without proper consent or attribution. There's a growing movement among digital rights advocates in countries like Kenya, Mexico, and the Philippines who are demanding that AI development respect cultural sovereignty and community ownership of knowledge. Organizations like the Distributed AI Research Institute (DAIR) are working across disciplines to establish ethical best practices in data science, conducting community-rooted AI research free from Big Tech's pervasive influence. The consent frameworks developed in Silicon Valley simply don't translate to communities where collective decision-making is the norm, or where digital literacy varies dramatically.

Perhaps most concerning is how AI systems are creating personalized echo chambers that adapt to local biases and cultural contexts. When someone in Mumbai asks a chatbot about climate change, it might emphasize different aspects than the same question asked in Miami or Manchester. This isn't necessarily malicious, but it means people worldwide are getting AI responses that reinforce their existing worldviews rather than challenging them with diverse perspectives. The Algorithmic Justice League, founded by computer scientist Joy Buolamwini in 2016, has been at the forefront of exposing these biases through research, artwork, and policy advocacy. Their work has shown how facial recognition software can fail to detect "highly melanated" faces, highlighting the urgent need for more inclusive AI development. The conversational nature of these interactions makes this particularly dangerous because users feel like they're getting objective information when they're actually receiving culturally and politically filtered responses.

The scenario that keeps technologists awake at night is an internet where chatbots primarily communicate with other chatbots. We've already seen early versions of this when AI systems generate content that other AI systems then consume and build upon. Imagine this at global scale - AI-generated news articles summarized by other AI systems, which then inform chatbot responses, creating an endless feedback loop (often called "AI slop") in which original human knowledge gets increasingly diluted. We could end up with different AI ecosystems in different regions, each developing their own digital dialects and biases, making cross-cultural understanding even more challenging.
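To make the dilution dynamic concrete, here is a minimal illustrative sketch (not from the newsletter; the fractions and the "lossy summarization" factor are assumptions chosen for illustration). It models a content pool where, each generation, most new material is AI summaries of the existing pool and only a small remainder is fresh human writing:

```python
def human_share(generations, ai_fraction=0.8, loss=0.9):
    """Toy model of the 'AI slop' feedback loop.

    Each generation, `ai_fraction` of new content is AI-derived and only
    lossily preserves the human signal already in the pool (scaled by
    `loss`), while the remaining fraction is fresh human-authored content.
    Returns the trajectory of the human-traceable share of the pool.
    All parameters are hypothetical, for illustration only.
    """
    share = 1.0  # start with an all-human content pool
    history = [share]
    for _ in range(generations):
        # AI-derived content degrades existing human signal;
        # human-authored content injects original signal at full strength.
        share = ai_fraction * loss * share + (1 - ai_fraction) * 1.0
        history.append(share)
    return history

trajectory = human_share(10)
```

Under these toy assumptions the human share declines each generation toward a floor determined by how much fresh human content keeps entering the pool - a compact way to see why shrinking human input, not AI volume alone, drives the dilution.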

For those building careers in this space, the opportunities are vast but require constant learning and ethical awareness. Whether you're interested in AI ethics research, data policy advocacy, or developing AI solutions for social good, the field demands people who understand both technical capabilities and human implications.

Staying informed in this rapidly evolving field requires following diverse voices and perspectives from around the world. Essential podcasts include the Machine Ethics Podcast, which features discourse on AI ethics with academics, authors, and business leaders, and The Radical AI Podcast, which centers marginalized voices in AI ethics discussions. The Technically Human Podcast explores what it means to be human in the age of tech through interviews with industry leaders, while The TWIML AI Podcast covers machine learning and AI with top minds in the field. For those interested in practical applications, Practical AI makes artificial intelligence accessible to everyone through discussions with technology professionals and business leaders.

Key Organizations and Resources to Follow

1. Distributed AI Research Institute (DAIR) - Founded by Timnit Gebru, this interdisciplinary institute conducts community-rooted AI research free from Big Tech influence, supporting data workers globally including the Data Labelers Association in Kenya.

2. Algorithmic Justice League - Founded by Joy Buolamwini in 2016, this digital advocacy organization uses research, artwork, and policy advocacy to increase awareness about AI bias and harms, particularly focusing on facial recognition technology's failures with darker-skinned individuals.

3. AI Now Institute - Research institute examining the social implications of artificial intelligence, focusing on bias, labor, power, and safety in AI systems.

4. Partnership on AI - Collaborative effort between major tech companies and civil society organizations to study and formulate best practices on AI technologies.

5. Future of Humanity Institute - Oxford-based research institute that studied existential risks and future technologies, including AI safety and governance (the institute wound down in 2024, but its published research remains a valuable archive).

6. Center for AI Safety - Nonprofit organization focused on reducing societal-scale risks from artificial intelligence through technical research and advocacy.

7. International Rescue Committee - Pioneering AI-driven educational chatbots and early-warning systems for climate resilience in humanitarian contexts.

8. Mercy Corps - Implementing AI for climate resilience and humanitarian aid optimization across global crisis zones.

9. Human Rights Watch - Investigating AI's impact on human rights globally and advocating for justice in algorithmic decision-making.

10. Data & Society Research Institute - Independent nonprofit research institute focused on the social and cultural issues arising from data-centric technological development.

Grow your Career and the Newsletter

We are growing rapidly, and if you like the newsletter, help us scale. Use our super easy referral tool to earn great benefits when you help others subscribe. It only takes a few seconds to support us and grow your career.

What are your thoughts on AI for Good? Hit reply to share your thoughts or favorite resources, or fill out the super quick survey below.

Got tips, news, or feedback? Drop us a line at [email protected], reply to this message, or take 15 seconds to fill out the survey below. Your input makes this newsletter better every week.

Share your feedback on the AI for Impact Newsletter


AI for Impact Opportunities

Make your Inbox and Career Awesome for less than the cost of a cup of coffee a month.

Consider Subscribing to the PCDN Career Digest.

200+ awesome job, funding, fellowship, socent, and upskilling opportunities (and more) each month

Learn more via the video below or here https://pcdnglobal.beehiiv.com/c/career-campus

Learn AI in 5 minutes a day

This is the easiest way for a busy person wanting to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

Sponsored
Remote Source: The leading source of content for 50,000+ remote workers. Open jobs, relevant news, must-have products, impactful trends, professional career advice, travel ideas, and more. Work Remote, Live Free!

News & Resources

Joke of the Day

Why don't robots ever panic during a crisis?
Because they always keep their processes in control!

🌍 News

  • 🎯 DHS Deployed Predator Drones During LA Protests, Audio Reveals
    Shocking audio confirms DHS used military surveillance drones to monitor protests in Los Angeles—highlighting the urgent need for AI surveillance oversight.
    👉 Read on 404 Media

  • 🛑 For Abuse Survivors Using Chatbots, “Delete” Doesn’t Always Mean Deleted
    Survivors using AI-powered mental health tools may have their data stored indefinitely, raising grave ethical and safety concerns.
    👉 Explore on Tech Policy Press

  • 🚖 Robotaxis in the Global South: A Very Different Ride
    As Waymo and Apollo Go expand globally, Rest of World examines how infrastructure, regulation, and trust gaps shape the adoption of autonomous vehicles.
    👉 Read more

  • 📱 Migrant Safety Apps Are Booming—But at What Cost?
    New apps offer undocumented migrants real-time raid alerts and safety tips. But many collect sensitive data without clear privacy protections.
    👉 Full story on Rest of World

LinkedIn Profile
Alix Dunn – Co-founder of The Gradient and Executive Coach for mission-driven tech leaders. A thoughtful voice on ethical tech, organizational design, and the human side of AI.
👉 Follow her on LinkedIn