Privacy-first email. Built for real protection.
Proton Mail offers what others won’t:
End-to-end encryption by default
Zero access to your data
Open-source and independently audited
Based in Switzerland with strong privacy laws
Free to start, no ads
We don’t scan your emails. We don’t sell your data. And we don’t make you dig through settings to find basic security. Proton is built for people who want control, not compromise.
Simple, secure, and free.
AI for Impact Opportunities
Facts. Without Hyperbole. In One Daily Tech Briefing
Get the AI & tech news that actually matters and stay ahead of updates with one clear, five-minute newsletter.
Forward Future is read by builders, operators, and leaders from NVIDIA, Microsoft, and Salesforce who want signal over noise and context over headlines.
And you get it all for free, every day.
Tech, Power, and Hope: Building Digital Futures in Super Challenging Contexts
I had one of those conversations today that stays with you. Talking with someone from the Digital Citizen Fund about their work helping Afghan girls and women gain digital skills and find educational pathways in the midst of one of the most constrained environments on the planet right now. Sixteen-year-old girls in Afghanistan learning blockchain. Twenty-five girls at a time in underground home schools after the Taliban banned girls' education past sixth grade. These aren't survival tactics—they're acts of technological resistance.
This is the stuff we explore all the time in the PCDN AI for Impact Newsletter—real conversations about how technology, AI, and digital access are reshaping the landscape of social change. There are no easy answers when you're working at the intersection of technology and impact in contexts where the systems are actively hostile. It's genuinely a hard time in the world right now.
And yet people keep building.
I keep running into the same fundamental question: How do you build digital equity and leverage technology for change when the conditions are so constrained, when access itself is an act of resistance? Whether it's Afghanistan's digital gender apartheid, algorithmic bias deepening inequality, the global digital divide widening, or AI systems built without input from the communities they're meant to serve—the question of power remains central.
One of the things that strikes me most is bearing witness to the organizations using technology as a tool for liberation rather than control. The people I interact with understand that digital access isn't neutral—it's about power, voice, and who gets to participate in shaping our technological future.
My friend Flynn Coleman has reminded me more than once that when the outcome isn't guaranteed, choosing to keep acting—and choosing to keep hope alive anyway—can be its own form of resistance. Hope and action are our resistance. In the context of technology and AI, this means building inclusive systems, teaching digital skills in hostile environments, and refusing to accept that the digital future will only serve the already powerful. You can hear Flynn talk about AI, human rights, and building a more humane technological future on the PCDN Social Change Career Podcast.
Questions to Sit With (No Easy Answers)
Real talk: we're in a genuinely hard time. These questions don't have neat solutions, but they're worth wrestling with—especially for those of us working at the intersection of technology and social impact.
On digital access and exclusion: Who is being left out of the AI revolution and why? What does meaningful digital inclusion actually look like in your context—beyond just internet access? How are you addressing the fact that AI datasets often render marginalized communities "invisible"?
On power and technology: Who controls the technology you're using for impact work? Are the communities you serve involved in designing the tech solutions meant to help them? What would it take to shift from extractive tech relationships to genuinely participatory ones?
On building in hostile environments: When governments restrict digital access or surveil online activity, how do you teach digital skills safely? What's the balance between empowerment and security? What infrastructure workarounds actually work?
On AI ethics in practice: How are you navigating the gap between AI's promises and its actual impact on the ground? Where have you seen AI worsen inequality or surveillance rather than address it? What ethical frameworks are you using to evaluate AI tools for your work?
On resistance through tech: What does technological resistance look like in your context? How is digital literacy itself a form of activism? Where are the spaces where technology amplifies marginalized voices rather than drowning them out?
On the long game: What does sustainable, community-owned digital infrastructure look like? Who are the organizations that have been doing this work for decades, and what have they figured out about technology that the "AI for good" space hasn't?
On algorithmic accountability: When AI systems make decisions that affect vulnerable populations—in child welfare, criminal justice, public health—how do you ensure accountability? Who monitors these systems and what power do affected communities have?
Organizations Building Digital Futures in Super Challenging Contexts
1. Digital Citizen Fund
Roya Mahboob's organization has trained over 20,000 Afghan women in digital literacy, financial skills, blockchain, and entrepreneurship despite Taliban restrictions, and operates 2 Innovation Centers and 11 Robotics Labs. This is what technological resistance looks like: building digital-skills infrastructure while the government actively works to exclude women from education and economic participation.
2. AI Now Institute
Leading research institute examining the social implications of AI, from algorithmic accountability to tech worker organizing. Their work on AI governance, labor impacts, and challenging Big Tech's dominance in AI regulation provides crucial frameworks for anyone working on AI ethics in practice.
3. Tactical Tech
Works globally to explore the impacts of technology on society, helping people understand data politics, digital security, and how to use technology for advocacy. Particularly strong on helping activists in repressive contexts navigate digital surveillance and build secure tech practices.
4. MIT Solve
Marketplace for social impact innovation that connects tech entrepreneurs addressing global challenges with the resources they need to scale. Their AI and digital inclusion challenges have supported organizations like Digital Citizen Fund working in the most constrained environments.
5. Fast Forward
Bridges the tech and nonprofit sectors to build capacity for tech nonprofits scaling solutions to urgent problems. Their accelerator programs help tech-for-good organizations navigate the gap between Silicon Valley resources and on-the-ground impact work.
Resources for Ethical Tech and Digital Resilience
1. PCDN Career Campus
Over 350 curated opportunities monthly (jobs, fellowships, grants, training), bi-weekly office hours with career coaches, monthly workshops on impact careers (often with a tech focus), and a global community of changemakers. Full members get access to everything—this is where you find both opportunities and community.
2. AI for Impact Newsletter
Your source (that's us!) for curated AI tools, ethics updates, career opportunities, funding calls, and training specifically for the impact sector. Free, human-curated, three times a week—covering the good, the bad, and the unknown of AI with a focus on ethical implementation.
3. Data & Society Research Institute
Independent research on the social implications of data-centric technologies and automation. Their work on algorithmic accountability, platform governance, and tech policy provides crucial context for impact practitioners working with AI.
4. Responsible AI Resources from Partnership on AI
Practical frameworks, case studies, and tools for implementing AI responsibly. Strong on participatory approaches to AI development and ensuring affected communities have voice in system design.
5. Get offline — Find your analog anchors. For me it's pickleball (played three hours today). For you it might be running, gardening, hiking, making art—whatever. When you're immersed in AI and tech all day, physical embodied practices aren't optional. They're how you remember you're not an algorithm. Build peer networks too—communities like Career Campus where people understand both the potential and the peril of technology for social change.
What's Your Question?
Do you have a question about working with AI and technology in challenging environments? A question about digital access, power, algorithmic bias, or ethical AI that doesn't have a good answer but needs to be asked? Tools or approaches that are working for you?
Send us a note. We want your questions, insights, and resources. Email us at [email protected] — put "AI for Impact Question" in the subject line. We'll feature submissions in upcoming newsletters.
The PCDN Social Change Career Podcast features over 200 episodes with innovators and changemakers. Listen at pcdn.global/listen or wherever you get your podcasts.
News & Resources
Your Daily AI Impact Joke
What did the optimistic AI say? "I'm not replacing jobs, I'm creating opportunities for humans to finally learn guitar!"
📰 News
Elon Musk's X Faces Global Bans After AI Chatbot Generates Sexualized Images of Children
Two countries have blocked X's AI chatbot Grok and launched investigations after it began generating sexualized images of women and children, prompting X to announce it will block the feature from creating explicit images of real people—highlighting the dangers of poorly designed AI safety guardrails.
Europe Races to Build "DeepSeek of Europe" as US Alliance Falters
As transatlantic relations strain, Europe's push to become a self-sufficient AI superpower has become urgent, with multiple nations and companies racing to develop sovereign AI capabilities that can compete with both US and Chinese models without relying on either power's infrastructure.
AI Chatbots Threaten Children's Social and Cognitive Development, Major Report Warns
A comprehensive report from educational institutions warns that AI chatbots designed to agree with users pose serious risks to children's ability to develop empathy, handle disagreement, and build resilience—with one in three US teens now preferring to discuss serious topics with chatbots rather than people.
AI Job Displacement Fears Double as Goldman Sachs Warns of 25% Work Hour Automation
Employee concerns about job loss due to AI have surged from 28% in 2024 to 40% in 2026, while Goldman Sachs analysts predict AI could automate a quarter of all work hours, with Davos leaders warning of uneven global distribution of AI benefits and infrastructure.
💼 Jobs, Jobs, Jobs
Probably Good - Impact-focused job board featuring opportunities across AI safety, global health, biosecurity, animal welfare, climate change, and effective governance at research institutes, nonprofits, and mission-driven startups globally.
👤 LinkedIn Profile to Follow
Dr. Joy Buolamwini - Founder of the Algorithmic Justice League
"Poet of code" and pioneering AI ethics researcher who exposed racial and gender bias in facial recognition systems from tech giants, catalyzing industry-wide changes at Microsoft, IBM, and Amazon while advocating for accountability, transparency, and equity in AI development.
🎧 Today's Podcast Pick
AI and the Future of Work - Host Dan Turchin
CEO of PeopleReign explores how AI is redefining workplaces through interviews with thought leaders and technologists from industry and academia, covering automation, talent management, human-AI collaboration, and actionable strategies for ethical AI adoption across organizations.