
Who Owns a Voice? PCDN AI for Impact Newsletter, July 23, 2025

In partnership with

AI You’ll Actually Understand

Cut through the noise. The AI Report makes AI clear, practical, and useful—without needing a technical background.

Join 400,000+ professionals mastering AI in minutes a day.

Stay informed. Stay ahead.

No fluff—just results.

The Voice Revolution Lab: AI's Double-Edged Sword in Social Impact

An investigative deep-dive into how artificial intelligence is transforming—and threatening—human communication for social good

This report was developed through human prompting and human editing with Perplexity Lab.

The era of authentic human voice may be ending faster than we think. In boardrooms across Silicon Valley, executives are making decisions that will fundamentally alter how people communicate, trust, and connect with each other. Meanwhile, in refugee camps, nonprofit offices, and accessibility centers worldwide, this same technology is quietly revolutionizing how organizations serve humanity's most vulnerable populations.

This is the paradox of AI voice technology: a tool so powerful it can preserve a dying person's voice for their children, yet so dangerous it can fabricate political speeches that could destabilize democracies. The question isn't whether this technology will reshape society—it already has. The question is whether we'll harness it for good or let it destroy the very trust that makes human collaboration possible.

Personally, I use a number of AI voice tools, and they can be amazing in the impact sector.


The Market Reality: Following the Money Trail

The numbers tell a stark story. The AI voice cloning market is projected to explode from $1.9 billion in 2023 to an estimated $25.6 billion by 2030. This isn't gradual adoption; it's a technological gold rush with profound implications for social impact work.
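To put that projection in perspective, the implied compound annual growth rate can be computed from the two endpoints alone (a quick sanity check on the figures above, not part of any cited analysis):

```python
# Implied compound annual growth rate (CAGR) of the AI voice cloning
# market, from $1.9B (2023) to a projected $25.6B (2030).
start, end, years = 1.9, 25.6, 2030 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 45% per year
```

A market compounding at roughly 45% a year is the kind of growth that outpaces any regulator working on legislative timescales.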

Follow the investment patterns and you'll see the priorities:

  • Entertainment and media: 28% of market share—prioritizing profit over people

  • Customer service automation: 22%—replacing human jobs with synthetic voices

  • Accessibility and healthcare: Only 18%—the actual human need comes third

North America dominates this market with a 40.5% share, while the regions with the greatest humanitarian needs, Latin America (5.7%) and the Middle East & Africa (3.5%), receive scraps. This geographic inequality reveals who's truly benefiting from the AI voice revolution: wealthy corporations and developed nations.

The World Economic Forum estimates that generative AI could add $182-308 billion annually to the social economy sector, yet this potential remains largely theoretical while the risks are devastatingly real.

The Bright Side: When AI Voices Actually Serve Humanity

Before we dive into the darkness, let's acknowledge where AI voice technology genuinely transforms lives—because these applications prove the technology's enormous potential when deployed ethically.

Accessibility Breakthroughs: Giving Voice to the Voiceless

Google's Project Euphonia represents AI voice cloning at its most noble. When former NFL player Tim Shaw lost his voice to ALS, Google's AI team painstakingly reconstructed his original voice from archived television interviews. This wasn't corporate PR—this was technology restoring human dignity.

The Voiceitt platform takes this further, supporting people with speech disabilities by interpreting their unique speech patterns and converting them to clear, understandable voice output. For the 7.5 million Americans with speech disabilities, this technology isn't convenience—it's liberation.

Multilingual Impact: Breaking Down Language Barriers

AsyLex, a Swiss NGO, partners with AWS to create AI-powered legal assistance for refugees. Their multilingual AI legal assistant provides 24/7 support in multiple languages, helping asylum seekers navigate complex legal systems they couldn't previously understand.

Preserving Cultural Heritage

Indigenous communities worldwide are using AI voice cloning to preserve endangered languages and cultural stories. When tribal elders pass away, their voices—and the linguistic patterns unique to their communities—can live on to teach future generations.

The Dark Laboratory: Where Ethics Go to Die

But for every inspiring use case, there are three more that should keep us awake at night. The same technology saving Tim Shaw's voice is being weaponized in ways that threaten the foundation of human trust.

The Lovo controversy exposed the industry's ethical vacuum. Voice actors Paul Skye Lehrman and Linnea Sage discovered their voices had been cloned and used for "hundreds of thousands of scripts around the world" without their knowledge or consent. Lehrman's voice became Lovo's default option for roughly two years, generating over 7 million voice-overs.

"Voice is as personal as our fingerprints," Lehrman said. "It's just such a violation of our humanity and an invasion of our privacy."

This isn't isolated. Consumer Reports found that four out of six major voice-cloning products lacked significant safeguards to prevent fraud or ensure consent. Users could "easily create" voice clones using publicly accessible audio with no verification that creators had permission.

The Deepfake Epidemic: Erosion of Truth

The statistics are alarming:

  • 845,000 imposter scams were reported in the U.S. in 2024 alone

  • 3,000% increase in deepfakes online between 2023 and 2024

  • 27% of people cannot differentiate between real and deepfake audio recordings

The "Armor" movie trailer controversy in 2025 demonstrated how quickly things can spiral. When a studio used AI to clone deceased French actor Alain Dorval's voice without proper family consent, the backlash was immediate and devastating. France's culture minister condemned the practice, calling it "digital resurrection" without proper ethical guardrails.

Corporate Exploitation: The SAG-AFTRA Betrayal

The SAG-AFTRA union's controversial deal with Replica Studios reveals how even protective institutions can be compromised. Despite months of strikes fighting AI replacement of actors, the union quietly signed agreements allowing AI voice cloning in video games.

Voice actor Steve Blum, once credited by Guinness as the most prolific in video games, said "nobody" he knew had approved the deal. The betrayal runs deeper: 78% of the union voted to ratify the agreement, but most members learned about it after the fact.

The Organizations: Heroes and Villains in the Voice Wars

The Good Actors: Fighting for Ethical AI

Mozilla Foundation leads the charge for trustworthy AI, launching Mozilla.ai with $30 million in funding specifically focused on ethical, open-source alternatives. Their "Trustworthy AI Funding Principles" explicitly prioritize transparency, inclusion, and human agency over corporate profit.

Partnership on AI provides frameworks for ethical AI development, while AI4Good (the foundation, not the ITU initiative) connects technologists with social impact organizations. Their focus on refugee assistance and disaster response demonstrates AI's potential for genuine human service.

The Problematic Players: Profit Over People

HireVue represents AI ethics gone wrong. This recruitment company uses voice analysis to evaluate job candidates, raising concerns about discrimination and privacy violations. The Electronic Privacy Information Center filed an FTC complaint challenging the company's "unfair and deceptive trade practices."

Clearview AI epitomizes surveillance capitalism. Despite claiming they hadn't used facial recognition during certain periods, internal data revealed extensive government contracts, including with agencies supervising federal crime convictions. Their voice cloning capabilities remain largely undisclosed but deeply concerning.

ElevenLabs faced backlash for the Alain Dorval incident, demonstrating how even companies with good intentions can cause harm without proper ethical frameworks. Their premature announcement of cloning a deceased actor's voice showed the industry's rushed approach to sensitive applications.

The Regulatory Vacuum: Where Oversight Fails

The FTC launched a "Voice Cloning Challenge" to address AI voice harms, but their response feels inadequate against the technology's rapid advancement. Meanwhile, SAG-AFTRA's conflicting positions—fighting AI replacement while simultaneously licensing member voices—reveal regulatory capture in real-time.

European regulators express "real concerns" about Big Tech AI dominance, but their focus remains on competition rather than ethical implementation. The UK's CMA warns about "interconnected webs" of AI partnerships among Google, Apple, Microsoft, Meta, Amazon, and Nvidia, yet meaningful intervention remains absent.

The Ethical Minefield: Questions That Keep Ethicists Awake

The Identity Question: What Makes a Voice "Yours"?

When Google cloned Tim Shaw's voice, they gave him back his identity. When Lovo cloned Lehrman's voice without consent, they stole it. The difference isn't technological—it's ethical. But current legal frameworks can't distinguish between these scenarios.

Core ethical dilemmas include:

  • Can a voice be owned like property?

  • Who controls AI voices of deceased individuals?

  • How do we balance accessibility benefits with consent requirements?

  • What happens when AI voices sound more "natural" than human voices?

The Labor Question: Whose Jobs Are We Automating?

Voice actor Joe Gaudet watched a company clone his voice after 30 video projects, then cut him out of future work by using his clone for "quick edits." "You feel like you're useless and you have no value," Gaudet said. "It's the worst feeling in the world."

The economic impact is staggering: CISAC estimates that music and audiovisual creators risk losing 24% and 21% of their revenues respectively by 2028 due to AI substitution. That's a cumulative loss of €22 billion over five years.

The Trust Question: How Do We Verify Authenticity?

Audio watermarking won't solve this crisis. Bad actors can remove watermarks, and absence of watermarks doesn't prove authenticity. We're creating a world where no voice can be trusted without technical verification—a fundamental shift in human communication.
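The asymmetry in that argument is worth spelling out. Here is a minimal sketch, purely illustrative and not any vendor's actual detection API, of what a watermark check can and cannot conclude:

```python
def classify_audio(watermark_detected: bool) -> str:
    """Illustrates the one-way logic of audio watermark checks."""
    # A detected watermark is strong evidence the clip is synthetic.
    if watermark_detected:
        return "synthetic"
    # But no watermark proves nothing: it may have been stripped by a
    # bad actor, or the generator may never have embedded one. The
    # honest verdict is "unverified", never "authentic".
    return "unverified"

print(classify_audio(True))   # synthetic
print(classify_audio(False))  # unverified
```

Watermarking, in other words, can only ever flag some fakes; it can never certify a recording as real.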

Public awareness campaigns matter more than technical solutions. But are we preparing society for a future where every audio clip requires fact-checking?

The Social Impact Laboratory: Real-World Experiments

Experiment 1: Refugee Assistance at Scale

UNHCR's chatbot experiments reveal AI voice technology's potential for humanitarian work. Their Lebanon-based Humanitarian Innovation Lab uses social media platforms to provide refugees with critical information. Early results show improved access to services, but concerns remain about data privacy and digital exclusion.

Experiment 2: Educational Equity Through Voice

Murf AI offers 20% discounts to educational institutions and nonprofits, democratizing voice technology access. But this charity pricing model reveals the underlying inequality: organizations serving the most vulnerable populations still can't afford full access to transformative technology.

Lenny Learning uses AI to support K-12 educators in mental health and life skills education. Their "AI co-creator" approach shows promise, but we must ask: are we training children to prefer AI voices over human connection?

Experiment 3: Medical Voice Preservation

Google's Project Euphonia demonstrates AI's profound potential for medical applications. But it also reveals disturbing inequalities: why did it take a famous NFL player's illness to trigger this development? What about the thousands of ALS patients without media archives?

The Economic Reality: Who Really Benefits?

Nonprofit funding for AI remains inadequate. The McGovern Foundation's $73.5 million in AI grants across 144 organizations sounds impressive until you compare it to the $25.6 billion voice cloning market. Social impact organizations are operating with pocket change while corporations extract billions.

The funding priorities reveal everything:

  • Customer service automation: Billions in investment

  • Entertainment media: Massive venture capital

  • Accessibility and healthcare: Grant-dependent scraps

This economic structure guarantees that AI voice technology will prioritize profit over people, efficiency over ethics, automation over human dignity.

The Regulatory Reckoning: What Actually Works?

Successful Models: Learning from Winners

The NO FAKES Act in Congress, supported by SAG-AFTRA, Disney, and The Recording Academy, would make unauthorized AI voice copying illegal. This represents genuine bipartisan concern about AI's implications for human identity and creative labor.

GDPR requirements in Europe treat voice data as personal data, requiring clear consent and offering deletion rights. Early evidence suggests these protections work when properly enforced.

Failed Models: Learning from Disasters

Industry self-regulation has failed completely. Company "ethics teams" often get laid off during budget cuts, while harmful products launch without meaningful oversight.

Voluntary ethical frameworks lack enforcement mechanisms. Companies can adopt impressive-sounding principles while simultaneously violating them in practice.

The Path Forward: A Framework for Ethical Implementation

Immediate Actions for Social Impact Organizations

  1. Demand transparent consent mechanisms from any AI voice vendor

  2. Require clear disclosure when synthetic voices are used

  3. Prioritize human-centered design over cost savings

  4. Build community voice into AI deployment decisions

  5. Establish clear policies about AI voice usage before implementation

Policy Recommendations That Might Actually Work

Consent Infrastructure: Require technical standards for voice consent that can't be bypassed by bad actors.

Economic Justice: Tax AI voice profits to fund displaced worker retraining and social impact applications.

Transparency Mandates: Require clear, unmistakable disclosure of synthetic voices in all applications.

Democratic Governance: Include affected communities in AI development decisions, not just corporate stakeholders.

The Provocation: What Are We Really Building?

Here's the uncomfortable truth: we're not just building voice technology. We're building a world where human authenticity becomes a luxury good, where trust requires technical verification, where the most vulnerable populations get excluded from technological benefits while bearing the risks.

The real question isn't whether AI voice technology is good or bad—it's whether we'll let corporations define its future or demand it serves human flourishing.

Every day we delay ethical action, the power imbalance grows. Tech companies are investing billions in AI voice capabilities while social impact organizations scramble for grant funding. This isn't an accident—it's a choice about whose voices matter in our automated future.

The voice revolution is happening with or without us. The only question is: will we shape it, or will it shape us?

AI for Impact Opportunities

Find out why 1M+ professionals read Superhuman AI daily.

AI won't take over the world. People who know how to use AI will.

Here's how to stay ahead with AI:

  1. Sign up for Superhuman AI. The AI newsletter read by 1M+ pros.

  2. Master AI tools, tutorials, and news in just 3 minutes a day.

  3. Become 10X more productive using AI.

News & Resources

😄 Joke of the Day
Why did the AI become a motivational speaker?
Because it already knew how to neural network!

🌍 News Highlights

  • 🧒 Teens and Deepfakes: A New Digital Threat
    Children are using AI tools to create deepfakes of each other—raising concerns about bullying, privacy, and the lack of legislative protections.
    🔗 Read more – The Markup

  • 🎵 Spotify Publishes AI-Sung Songs by Dead Artists—Without Consent
    Controversy erupts as AI-generated music mimicking deceased artists appears on Spotify, reigniting debates on digital rights and posthumous consent.
    🔗 Read more – 404 Media

  • 🤔 Will We Ever Trust AI with Big Decisions?
    From medical diagnoses to criminal sentencing, AI is taking on weighty roles—but will society ever feel truly comfortable handing over the reins?
    🔗 Read more – New Scientist

🎓 Learning Resource
Wellfound (formerly AngelList Talent) is a powerful job platform for those seeking mission-driven roles at top startups. Filter by impact, funding stage, or remote work to find your next opportunity.
🔗 Explore Wellfound Jobs

🔗 LinkedIn Connection
Tamara Kneese – Director of Climate, Technology, and Justice Program at Data & Society Research Institute
🔗 Follow Tamara on LinkedIn