
Good AI/Bad AI, PCDN AI for Impact Newsletter, June 27, 2025

In partnership with

Stop Asking AI Questions, and Start Building Personal AI Software.

Feeling overwhelmed by AI options or stuck on basic prompts? The AI Fast Track is your 5-day roadmap to solving problems faster with next-level artificial intelligence.

This free email course cuts through the noise with practical knowledge and real-world examples delivered daily. You'll go from learning essential foundations to writing effective prompts, building powerful Artifacts, creating a personal AI assistant, and developing working software—all without coding.

Join thousands who've transformed their workflows and future-proofed their AI skills in just one week.

The Good, The Bad, and The Algorithmic: AI's Double-Edged Revolution

Why did the AI break up with the algorithm? Because it found out the algorithm was seeing other datasets!

Artificial intelligence isn't just changing how people work—it's fundamentally reshaping everything from medical diagnosis to military warfare, from creative expression to climate monitoring. But behind every breakthrough that promises to make life better lurks a shadow side that's keeping ethicists, policymakers, and concerned citizens awake at night. This is the unvarnished truth about our AI-powered future: a world where the same technology that could cure cancer might also poison our planet.

🚨 THE DARK SIDE: When Silicon Dreams Become Digital Nightmares

💀 The Environmental Catastrophe: AI's Dirty Secret

The pristine image of AI as a clean, digital solution masks an environmental disaster unfolding in server farms across the globe. Every time someone asks an AI chatbot to write a poem, generate an image, or solve a problem, they're contributing to an energy consumption crisis that's spiraling out of control.

⚡ The Energy Monster Awakens

New research estimates that AI already accounts for about 20% of global data center electricity use, a share that could rise to nearly 50% by the end of this year. The computational requirements for cutting-edge AI systems have been doubling every few months, creating an exponential curve of energy demand that's outpacing efficiency improvements.

Training GPT-3 produced approximately 552 metric tons of CO2e. To put this into perspective, training a single large AI model can consume as much electricity as the annual usage of several hundred homes.

💡 The scale is staggering: AI could consume up to 82 terawatt-hours in 2025, equivalent to Switzerland's annual power usage.

Each ChatGPT query consumes 2 to 10 times more energy than a Google search. Researchers report that "80 to 90% of the electricity consumed" by AI models today goes into inference, not training.
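To make these figures concrete, here is a rough back-of-envelope sketch. The specific inputs (a 1,287 MWh training estimate for GPT-3, roughly 10,500 kWh of electricity per US household per year, and about 3 Wh per chatbot query versus 0.3 Wh per web search) are commonly cited estimates rather than figures from this newsletter, so treat the output as an illustration, not an accounting.

```python
# Back-of-envelope check on the training and per-query figures quoted above.
# All inputs are assumed, commonly cited estimates, not measurements from this newsletter.

GPT3_TRAINING_MWH = 1_287          # estimated energy to train GPT-3, in megawatt-hours
HOUSEHOLD_KWH_PER_YEAR = 10_500    # rough average US household electricity use per year
CHATBOT_WH_PER_QUERY = 3.0         # rough estimate for one chatbot query
SEARCH_WH_PER_QUERY = 0.3          # rough estimate for one conventional web search

# How many household-years of electricity does one training run correspond to?
household_years = (GPT3_TRAINING_MWH * 1_000) / HOUSEHOLD_KWH_PER_YEAR
print(f"GPT-3 training ~ {household_years:.0f} household-years of electricity")
# -> roughly 120 homes powered for a year; larger, newer models push this
#    well into the "several hundred homes" range quoted above.

# How does a single chatbot query compare with a single web search?
ratio = CHATBOT_WH_PER_QUERY / SEARCH_WH_PER_QUERY
print(f"Per-query energy ratio ~ {ratio:.0f}x")
# -> about 10x, at the upper end of the "2 to 10 times" range quoted above.
```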

💧 Water: The Hidden Crisis

Perhaps even more concerning than energy consumption is AI's massive water footprint. Data centers can undermine climate plans by keeping fossil fuel plants online, and they raise water and energy bills while putting local water resources at risk.


☠️ Air Pollution: The Invisible Killer

The rapid advancement of AI systems raises concerns about their environmental impact, particularly their contribution to climate change. Recent reviews highlight the energy-intensive nature of AI model training and deployment, which drives greenhouse gas emissions and electronic waste.

⚠️ Environmental justice concerns: Low-income communities and communities of color disproportionately bear the health costs of AI's environmental impact, often receiving little economic benefit from data centers while suffering the health consequences.

👁️ The Surveillance State: Privacy's Final Frontier

AI-powered surveillance has evolved from science fiction dystopia to everyday reality with breathtaking speed. The same computer vision technologies that help doctors diagnose diseases are being deployed to track, monitor, and control populations.

🎭 Facial Recognition: The End of Anonymity

Facial recognition systems falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces, according to the National Institute of Standards and Technology. Among law enforcement databases in the United States, the highest error rates came in identifying Native Americans.

Three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases. For darker-skinned women, the error rates ballooned to more than 20 percent in one case and more than 34 percent in the other two.


🏛️ The AI Leviathan

AI companies and government agencies are collaboratively forging an AI-surveillance ecosystem that researchers have termed an 'AI Leviathan'. The growing dependence on enormous volumes of personal data has raised concerns about privacy breaches and put current ethical and legal frameworks under pressure.

🤖 Robo-Warriors and Digital Battlefields

Military applications of AI are moving beyond science fiction into reality with alarming speed. The integration of artificial intelligence into weapons systems is fundamentally changing warfare and raising profound ethical questions about human judgment in life-and-death decisions.

⚔️ Autonomous Weapons: Machines Making Kill Decisions

Governments worldwide are increasingly investing in autonomous weapons systems. Many are developing programs and technologies to gain advantages over adversaries, creating mounting pressure for others to follow suit.

These investments mark the early stages of an AI arms race. Like the nuclear arms race of the 20th century, this military escalation poses a threat to all humanity and is ultimately unwinnable, incentivizing speed over safety and ethics.


⚖️ The Ethics of Machine War

The Department of Defense's ethical principles for AI list "Responsible" as the primary quality. This is why a human hand must squeeze the trigger, why a human hand must click "Approve."

Outsourcing human agency and judgment to algorithms built around mathematical optimization will challenge all existing law and doctrine in fundamentally new ways.

⚖️ Algorithmic Bias: Digital Discrimination at Scale

AI systems don't just reflect human biases—they amplify and systematize them, creating discrimination at unprecedented scale and speed. The promise of objective, data-driven decision-making has given way to algorithmic systems that perpetuate and worsen existing inequalities.

🎯 The Objectivity Myth

One widely used data set for assessing facial-analysis performance was more than 77 percent male and more than 83 percent white. Joy Buolamwini, a researcher in the MIT Media Lab's Civic Media group, emphasizes the importance of understanding how these methods apply to other applications.

"The same data-centric techniques used to determine gender are also used to identify criminal suspects or unlock phones. And it's not just about computer vision."

💼 The Great Job Displacement: When Robots Come for White Collars

AI isn't just automating factory jobs—it's targeting cognitive work previously thought to be uniquely human. The pace of change is accelerating, and many workers lack the time or resources to adapt.

📈 The Social Impact

The impact of AI on citizens' lives has attracted considerable public attention, driven by generative AI models, particularly ChatGPT. Concerns about potential adverse effects range from privacy risks to escalating social inequality.

Such analysis aims to provide insights that can inform policy, regulation, and responsible development practices and foster citizen-centric AI.

✨ THE BRIGHT SIDE: AI's Incredible Potential for Good

Despite serious risks and challenges, AI demonstrates remarkable potential to solve humanity's most pressing problems. From healthcare breakthroughs to climate solutions, artificial intelligence is opening possibilities that seemed like science fiction just years ago.

🏥 Healthcare Revolution: Saving Lives at Scale

AI is transforming medical care in ways that are already saving lives and improving outcomes for millions of patients worldwide. The same pattern-recognition capabilities that raise surveillance concerns are being used to detect diseases earlier, develop treatments faster, and make healthcare more accessible.

🌱 Comparative Environmental Impact

Interestingly, AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than human counterparts. This suggests that while AI training is energy-intensive, per-use emissions can be significantly lower than human alternatives for certain tasks.

🩺 Healthcare Applications

Research emphasizes AI's role in optimizing healthcare workflows, reducing resource waste, and facilitating sustainable practices like telemedicine. Researchers also point to energy-efficient AI models, green computing practices, and renewable energy integration as potential solutions.

🌍 AI for Social Good: Global Impact

A collaborative project between APRU, ESCAP, and Google.org brought together academics and government stakeholders to develop country-specific AI governance frameworks and promote transparent AI solutions that tackle socio-economic challenges in Asia and the Pacific.


🚨 Humanitarian Applications

The Generative AI workshop at the IFRC provided a platform to explore the transformative potential of GenAI in addressing complex challenges faced by humanitarian workers and disaster-affected people globally.

The IFRC Network has been using AI to better prepare for, respond to, and learn from disasters. Examples include AI models for hazard impact forecasting, street-level imagery for damage assessment, and AI to tag operational learning.

🌾 Smart Agriculture and Food Security

Wireless sensor networks and Internet of Things devices are revolutionizing smart agriculture by increasing production, sustainability, and profitability as connectivity becomes increasingly ubiquitous.

🌟 Join the Global Conversation

The ITU AI for Good Global Summit 2025 is happening July 8–11 in Geneva. Attend online for free and connect with a global community of innovators, researchers, and practitioners working to harness AI for social impact. The summit showcases groundbreaking projects from organizations like UNICEF, WHO, and UNDP using AI to advance the Sustainable Development Goals.

📚 Resources for Understanding and Shaping AI's Future

Understanding AI ethics and governance isn't just for technologists—it's essential for anyone who wants to have a voice in how these powerful systems shape society.

🏛️ Policy and Governance Frameworks

IEEE's Ethically Aligned Design provides comprehensive guidelines for developing AI systems that align with human values and well-being, covering everything from human rights considerations to environmental sustainability.

The EU's Ethics Guidelines for Trustworthy AI are shaping AI regulation worldwide, focusing on creating systems that are lawful, ethical, and robust. These guidelines are particularly important as the European Union implements its comprehensive AI Act.

The OECD AI Principles provide an international framework for AI governance that has been adopted by over 40 countries, emphasizing inclusive growth, human-centered values, and transparency.

🔬 Research and Advocacy Organizations

The AI Now Institute, founded at NYU, conducts critical research on the social implications of AI, focusing on issues like algorithmic accountability, labor rights, and bias reduction. Their reports provide essential insights into how AI systems affect different communities.

The Algorithmic Justice League, founded by Joy Buolamwini, fights bias in AI systems and advocates for more inclusive and equitable technology development. Their work has been instrumental in raising awareness about facial recognition bias and algorithmic discrimination.

The Partnership on AI brings together major tech companies, nonprofits, and academic institutions to develop best practices for AI development and deployment, fostering collaboration between industry and civil society.

🌐 International Frameworks and Declarations

The Montreal Declaration for Responsible AI emerged from a participatory process involving researchers, civil society organizations, and citizens. It emphasizes principles like well-being, autonomy, and justice in AI development.

The Toronto Declaration specifically addresses protecting human rights in the age of artificial intelligence, with a focus on preventing discrimination and ensuring accountability in AI systems that affect people's lives.

The Asilomar AI Principles were developed by AI researchers and include 23 principles covering research issues, ethics and values, and longer-term concerns about superintelligence.

🎓 Academic and Technical Resources

MIT's Computer Science and Artificial Intelligence Laboratory conducts cutting-edge AI research while maintaining a focus on ethical considerations and social impact. Their work spans from technical AI development to policy recommendations.

Stanford's Human-Centered AI Institute focuses specifically on developing AI that augments human capabilities rather than replacing them, emphasizing the importance of keeping humans in the loop.

The Berkman Klein Center at Harvard offers a comprehensive collection of AI ethics resources, including research papers, case studies, and practical tools for implementing ethical AI practices.

🔮 The Path Forward: Choices That Will Define Our Future

The story of artificial intelligence is still being written, and the choices made today will determine whether AI becomes humanity's greatest tool for solving problems or its greatest source of new challenges. The technology itself is neither inherently good nor evil—it's a powerful amplifier of human intentions, capabilities, and biases.

🗳️ The Power of Policy and Participation

Democratic Engagement is Essential

The future of AI won't be determined solely by technologists and corporations. It requires active participation from citizens, communities, advocacy groups, and democratic institutions. The more people understand AI and engage in discussions about its development, the better equipped societies will be to make informed decisions about implementation and governance.

Policy Choices Matter

Governments around the world are grappling with how to regulate AI without stifling beneficial innovation. The European Union's AI Act, proposed legislation in the United States, and international cooperation efforts will shape how AI develops globally. These policy decisions will determine whether AI serves the public interest or primarily benefits a small number of powerful actors.

Corporate Responsibility is Crucial

Technology companies hold enormous power in determining AI's trajectory. Whether they prioritize profit over social impact, include diverse voices in development teams, consider long-term consequences, and submit to democratic oversight will significantly influence whether AI benefits everyone or exacerbates existing inequalities.

🌟 Building AI That Serves Humanity

Inclusive Development

The AI development process must include voices from affected communities, ethicists, social scientists, and domain experts beyond computer science. Diverse perspectives are essential for identifying potential harms and ensuring that AI systems work for everyone, not just the privileged few.

Transparency and Accountability

AI systems that affect people's lives must be transparent, explainable, and subject to oversight. This means moving away from "black box" algorithms toward systems that can be audited, understood, and held accountable for their decisions.

International Cooperation

AI's global nature requires coordinated responses to challenges like climate impact, job displacement, security risks, and ethical concerns. No single country or organization can address these challenges alone.

🤔 Questions for the Future

As AI continues to evolve and permeate every aspect of society, several critical questions will shape its development:

How can innovation and human rights protection work together rather than in tension?

What role should different stakeholders—governments, corporations, civil society, and individuals—play in governing AI systems that affect millions of lives?

Can technology help reduce inequality, or will current trends worsen existing disparities between the powerful and the powerless?

How can societies prepare for AI-driven economic changes while ensuring that nobody gets left behind?

What does meaningful human control over AI systems look like in practice, and how can it be maintained as systems become more sophisticated?

How can the benefits of AI be distributed more fairly across society rather than concentrated among a small number of tech companies and their shareholders?

The AI for Good movement represents one promising approach to these challenges, bringing together technologists, policymakers, and communities to ensure AI serves humanity's highest aspirations. But success isn't guaranteed—it requires sustained effort, democratic participation, and a commitment to putting human welfare above technological capability or corporate profit.

The conversation about AI's future is just getting started, and everyone has a role to play in shaping how this story unfolds. Whether AI becomes a force for human flourishing or a source of new problems depends on the choices made today by researchers, companies, policymakers, and engaged citizens around the world.

And finally, because every serious discussion about AI needs a little levity: Why did the machine learning algorithm go to therapy? It had too many deep learning issues to process!

Grow your Career and the Newsletter

We are growing rapidly, and if you like the newsletter, you can help us scale further. Use our super easy referral tool to earn great benefits when you help others subscribe. It only takes a few seconds to support us and grow your career.

What are your thoughts on AI for Good? Hit reply to share your thoughts or favorite resources, or fill out the super quick survey below.

Got tips, news, or feedback? Drop us a line at [email protected] or simply respond to this message or take 15 seconds to fill out the survey below. Your input makes this newsletter better every week.

Share your feedback on the AI for Impact Newsletter


AI for Impact Opportunities

HR is lonely. It doesn’t have to be.

The best HR advice comes from those in the trenches. That’s what this is: real-world HR insights delivered in a newsletter from Hebba Youssef, a Chief People Officer who’s been there. Practical, real strategies with a dash of humor. Because HR shouldn’t be thankless—and you shouldn’t be alone in it.

News & Resources

😄 Joke of the Day
Why did the AI go to therapy? Because it had too many layers of deep feelings!

🤖 AI Fact of the Day
Did you know? The "deep" in "deep learning" refers to the many stacked layers of artificial neurons in a neural network, an architecture loosely inspired by how neurons are organized in the brain.

🌍 News

  • U.S. ICE Deploys Facial Recognition App
    Leaked emails reveal that U.S. Immigration and Customs Enforcement is piloting a new facial recognition app to identify individuals—a move stirring privacy and civil rights concerns. 404 Media

  • AI Analyst Outperforms Human Investors
    A Stanford GSB case study reports an AI-driven analyst that consistently beat human investors in stock-picking—showing strong promise for AI in financial analysis. Stanford Insights

  • Traditional Media Under Pressure
    Analysis suggests that established media outlets are struggling to retain influence in the AI era, with many turning to AI content tools and strategic adjustments. The Liberalist

  • Amsterdam Welfare Workers vs Discriminatory AI
    A recent investigation by MIT Technology Review highlights how an Amsterdam welfare agency’s AI system led to biased outcomes—underscoring the need for robust oversight and transparency. MIT Tech Review

🎓 PCDN Career Board
Explore over 400 remote and onsite social impact roles. jobs.pcdn.global

💼 Person to follow on LinkedIn
Hana Ravit Schank
An influential voice in human-centered tech and policy. Hana is a journalist, author, and public sector innovator who isn’t afraid to speak truth to power—bringing ethical rigor to digital transformation efforts.
Connect with Hana