Help us make better ads
Did you recently see an ad for beehiiv in a newsletter? We’re running a short brand lift survey to understand what’s actually breaking through (and what’s not).
It takes about 20 seconds, the questions are super easy, and your feedback directly helps us improve how we show up in the newsletters you read and love.
If you’ve got a few moments, we’d really appreciate your insight.
Which workshops would you most like PCDN to consider offering in 2026?
AI for Impact Opportunities
AI in HR? It’s happening now.
Deel's free 2026 trends report cuts through all the hype and lays out what HR teams can really expect. You'll learn about the shifts happening now, the skill gaps you can't ignore, and resilience strategies that aren't just buzzwords. Plus, you'll get a practical toolkit that helps you implement it all without another costly and time-consuming transformation project.
Share your feedback on the AI for Impact Newsletter
Voice and AI: How Talking Is Reshaping Impact Work
Why did the voice AI go to therapy?
Because it had too many issues to work through—literally, it processes everything.
(Yes, the jokes are still catching up with the tech.)
How Voice Shows Up in Daily Work
Many impact organizations still run on overflowing inboxes, long documents, and too many meetings. At the same time, a quiet shift is happening: more people are talking their work into existence instead of typing it.
At PCDN, voice tools have become essential to the daily workflow. Emails get drafted out loud while making coffee. Newsletter ideas get captured during walks. Meeting recaps happen immediately after calls, while the thoughts are still fresh. When working with AI tools—especially Perplexity, which has become the go-to research assistant—talking is dramatically faster than typing. A question that would take 30 seconds to type gets asked in five.
For certain types of work, voice-first has become the default. Not everything, but enough that it makes a real difference.
The Tools That Actually Help
VoiceInk* – Our Go-To Tool
VoiceInk* is the tool that gets the most use day-to-day. It runs locally on Mac, which means sensitive content—notes from coaching sessions, strategy drafts, personal reflections—does not have to leave the device.
Dictation is fast and surprisingly accurate, even with sector jargon. The screenshot showing hours saved tells the story better than any description—hundreds of hours that would otherwise have been spent staring at a cursor, trying to remember what was just thought three seconds ago.
*Affiliate link: https://tryvoiceink.com?atp=AN1TYo
Wispr Flow – Cross-Device Flexibility
Wispr Flow is helpful when work hops between devices. It runs on Mac, Windows, and phone, and lives inside the tools already in use—email, docs, chat, and AI apps. For people who think best while walking, pacing, or on the move, those thoughts land directly into the right document or message.
Monologue* – When Privacy Really Matters
Monologue* is built for people who care deeply about privacy. Audio is processed on-device, with no cloud storage and no LLM training on user data. That matters for anyone working with survivors, activists, or people in high-risk contexts where a "free" cloud service could create real harm if misused.
It also handles multilingual speech well, which fits how many impact practitioners actually talk—mixing languages, switching registers, and code-switching mid-sentence.
*Affiliate link: https://www.monologue.to/?ref=GGKMKOE
Voice as a Vital Sign: Amplifier Health and What It Could Mean
This week I came across Amplifier Health, a fascinating startup doing cutting-edge, applied work in this space.
They have built what they call a Large Acoustic Model (LAM), trained on voice data rather than text. The idea is simple but profound: your voice carries signals about your physical and mental health that can be detected through patterns in pitch, rhythm, breathing, hesitation, and phonation.
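To make that concrete, here is a minimal sketch of the kind of low-level signals such models start from: pitch and pauses pulled from a short recording using the open-source librosa library. The file name and thresholds are placeholders, and real acoustic models learn from far richer representations than this.

```python
# Minimal sketch: pull a few of the signals mentioned above (pitch, pauses)
# from a short voice sample. Uses the open-source librosa library; the file
# path and thresholds are illustrative, not Amplifier Health's pipeline.
import numpy as np
import librosa

def basic_voice_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Pitch contour: estimate the fundamental frequency frame by frame.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only

    # Hesitation proxy: what fraction of the clip is silence between phrases.
    speech_intervals = librosa.effects.split(y, top_db=30)
    speech_samples = sum(int(end - start) for start, end in speech_intervals)
    pause_ratio = 1.0 - speech_samples / len(y)

    return {
        "duration_s": round(len(y) / sr, 2),
        "mean_pitch_hz": round(float(f0.mean()), 1) if f0.size else None,
        "pitch_variability_hz": round(float(f0.std()), 1) if f0.size else None,
        "pause_ratio": round(float(pause_ratio), 3),
    }

if __name__ == "__main__":
    print(basic_voice_features("sample.wav"))  # "sample.wav" is a placeholder
```

Nothing in that output is diagnostic on its own; the research bet is that patterns across many such features, in many recordings, carry the signal.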
What a 10-Second Voice Sample Might Reveal
According to research published by the NIH and peer-reviewed studies in journals like Nature, short voice recordings can potentially flag early signs of:
Parkinson's disease – Research in Nature shows vocal tremor and changes in speech fluency often appear before traditional diagnosis, with machine learning models achieving up to 96.7% accuracy in detection
Depression and anxiety – Detectable through vocal hesitation, monotone patterns, and reduced pitch variation
Cardiovascular conditions – Changes in breathing patterns and voice quality can correlate with heart health
Respiratory diseases – Including early COVID detection through respiratory phonation changes
Cognitive decline and dementia – Vocal fluency degradation and word-finding difficulties show up in speech
Suicidality risk – Speech patterns may enable earlier mental health intervention
The NIH has committed $2 billion through its Bridge2AI program to advance voice biomarker research. The vocal biomarkers market was worth $1.9 billion in 2021 and is projected to exceed $5.1 billion by 2028.
What This Could Mean for Underserved Communities
For rural health workers in Kenya, community clinics in Appalachia, or mobile health teams in the Amazon, this technology could mean earlier detection without expensive equipment. No MRI. No specialist visit. Just a phone and a voice sample.
Maternal health programs could monitor pregnant people remotely. Mental health hotlines could flag callers at higher risk. Telehealth providers could triage more effectively.
The Warnings That Matter
Sounds cool. It also raises serious questions that anyone considering these tools needs to ask:
Who owns the voice data? When someone speaks into an app for health screening, where does that audio go? Who stores it? How long? Under what terms? Can it be sold? Can it be subpoenaed?
Is the data used to train AI models? Many health tech companies retain user data to improve their algorithms. That might mean your voice—and the health conditions it reveals—becomes part of a commercial dataset. Always read the terms carefully.
Who has access? Employers? Insurers? Government agencies? The same voice data that helps a clinic could also be used to deny coverage, justify workplace surveillance, or target vulnerable populations.
Does it work equally well across populations? Most voice AI is trained predominantly on English-speaking, Western, often male voices. Research from Mozilla shows accuracy for women, non-native speakers, people with accents, or speakers of non-dominant languages can be significantly lower. That means the tools might work best for people who already have access to care, and worst for those who need it most. (One simple way to check this is sketched at the end of this section.)
What happens when the diagnosis is wrong? False positives create anxiety and unnecessary medical interventions. False negatives mean missed conditions. Unlike a doctor's clinical judgment, an algorithm cannot explain its reasoning or weigh context.
Can people opt out meaningfully? If voice screening becomes standard in schools, workplaces, or aid programs, what happens to people who refuse? Do they lose access? Get flagged as "non-compliant"?
The potential here is real. So are the risks. Any organization exploring voice biomarkers needs to ask hard questions about consent, data governance, equity, and what happens when the tech gets it wrong. Innovation does not equal safety by default.
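On the equity question in particular, one concrete habit is to measure a voice tool's accuracy separately for different groups of speakers rather than trusting a single headline number. Here is a minimal sketch using the open-source jiwer library; the group labels and transcripts are purely illustrative.

```python
# Sketch: compare word error rate (WER) across speaker groups to surface
# accuracy gaps before rolling a voice tool out. All data here is made up.
from collections import defaultdict
from jiwer import wer

# Each record: (speaker group, human reference transcript, tool's transcript)
samples = [
    ("group_a", "we visit the clinic every tuesday", "we visit the clinic every tuesday"),
    ("group_b", "we visit the clinic every tuesday", "we visit the clinic very tuesday"),
    ("group_b", "the community meeting starts at nine", "the community meeting starts at night"),
]

by_group = defaultdict(list)
for group, reference, hypothesis in samples:
    by_group[group].append((reference, hypothesis))

for group, pairs in sorted(by_group.items()):
    references = [r for r, _ in pairs]
    hypotheses = [h for _, h in pairs]
    print(f"{group}: WER {wer(references, hypotheses):.0%} across {len(pairs)} samples")
```

If one group's error rate is consistently higher, that gap is the real finding, whatever the vendor's headline accuracy claim says.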
The Trouble with Fake Voices
The same technology that makes dictation smooth also makes fake voices more convincing. Research from Stanford's Human-Centered AI Institute warns that deepfakes now pose significant threats to public trust and security. Computer scientist Siwei Lyu notes that voice cloning has crossed the "indistinguishable threshold"—a few seconds of audio now suffice to generate a convincing clone.
Berkeley researchers found that people are poorly equipped to detect AI-powered voice clones, and have created the DeepSpeak dataset to help develop better detection techniques.
Research published in NIH journals shows that while detection methods are improving, they struggle to keep pace with generation techniques. Stanford research demonstrates that AI can detect some deepfakes by analyzing mismatches between mouth formations and phonetic sounds, but acknowledges the "cat-and-mouse game is far from over".
Some things worth building into practice:
Don't treat voice as proof of identity for money, passwords, or sensitive instructions.
Agree on a simple "check" step for anything that feels off: call back on a known number, ask a specific question only the real person would know, or require a second channel (text + call).
Talk with staff, volunteers, and community members about how these scams work, so shame doesn't keep people silent if they get targeted.
Better Starting Points for Learning
UNICEF Venture Fund – Reflections on generative AI, access, and inclusion related to the SDGs
Mozilla Common Voice – Building open voice datasets for under-resourced languages
Carnegie Endowment for International Peace – Analysis of AI's intersection with democracy and public trust
Stanford HAI Policy Brief – Preparing for deepfakes and disinformation
NIH Research on Vocal Biomarkers – Comprehensive overview of voice for health applications
Questions Worth Asking
Where could speaking instead of typing reduce burnout, not just increase output?
Who gets left out of current systems because they type slowly, have limited literacy in the "dominant" language, or are mostly mobile-based?
What is the red line for privacy—what should never pass through a cloud-based voice service?
How will the team respond when someone receives a convincing fake call or message using a cloned voice?
If considering voice biomarkers, who controls the data? Can people opt out? What happens if the algorithm gets it wrong?
Voice tech does not need to be perfect to be useful. It just needs to be used thoughtfully.
At PCDN, the shift has been gradual. A tool here. An experiment there. Then realizing one day that hundreds of hours have been saved—hours that would have been spent staring at blank pages or trying to remember what was just thought.
The screenshot from VoiceInk shows the hours reclaimed. Emails that got written while pacing the room. Newsletter drafts spoken into existence during morning walks. Research notes captured the moment an idea clicked, instead of three hours later when it had already faded.
Voice-first work feels different. More conversational. Less performative. Closer to how thinking actually happens—messy, circular, jumping between ideas, then circling back.
Tools are only as good as the questions asked about them, though. Who benefits? Who gets excluded? Where does the data go? What breaks when the tech fails? Those questions matter more than the features list.
The organizations doing meaningful work are rarely the ones with the fanciest tools. They are the ones asking hard questions about power, access, and what happens to the people on the other end of the technology.
So yes, use voice tools if they help. Save time. Capture ideas. Make work more accessible for people who struggle with text. But keep asking the uncomfortable questions. That is what separates tech that serves people from tech that just serves itself.
This post was drafted partly using VoiceInk.
RESOURCES & NEWS
🤖 Joke of the Day
Why don't AI models ever get lost? They always follow their training path!
📰 News
AI in Schools: Growing Risks May Outweigh Educational Benefits
A Brookings Institution report warns that AI poses serious threats to children's cognitive development and emotional well-being, as free AI tools accessible to under-resourced schools tend to be less reliable than premium versions, potentially widening educational inequities rather than closing them. The report recommends a "Prosper, Prepare, Protect" framework emphasizing AI literacy, robust teacher training, and systemic planning to ensure ethical implementation.
AI May Be Narrowing Scientific Exploration Despite Boosting Individual Output
An analysis of 41 million research papers reveals that while AI tools have supercharged individual scientist productivity, they may be shrinking the collective scope of scientific inquiry, raising concerns about whether AI-assisted research is driving scientists toward similar questions and methodologies rather than expanding the frontiers of discovery.
India Introduces Dual Bills to Regulate AI Ethics and Accountability
Parliament received legislation establishing a nationwide framework for ethical AI use in decision-making and surveillance systems, with proposals for a central government ethics committee to issue guidelines, monitor compliance, investigate algorithmic bias, and authorize AI deployment in sensitive applications. The bills also address legal treatment of AI-generated creative works, marking a comprehensive approach to AI governance in the world's most populous democracy.
Corporate AI Threatens Cognitive Liberty Without Legal Protections
Tech Policy Press argues that generative and agentic AI systems are designed to manipulate users without legal requirements to act in users' interests, as platforms conduct psychological experiments on populations without informed consent—practices that would be prohibited in academic research settings. The analysis calls for hard law establishing fiduciary obligations for AI agents and baseline prohibitions on mental interference.
Climate Researchers Call for Reversing AI Development Workflow
Nature published a commentary urging the scientific community to rethink AI for climate action by first identifying what climate management needs from AI, rather than asking what existing AI models can do and searching for applications afterward—a fundamental shift from technical demonstration to genuine problem-solving.
💼 Jobs, Jobs, Jobs
80,000 Hours Job Board - Curated opportunities in AI safety, global health, biosecurity, climate solutions, and policy roles at research institutes, effective altruism organizations, and high-impact nonprofits focused on solving humanity's most pressing problems.
👤 LinkedIn Profile to Follow
Aza Raskin - Co-Founder, Center for Humane Technology & Earth Species Project
Mathematician and dark matter physicist using AI to decode animal communication while reimagining humane technology to reduce social media harms—architect of Netflix's Emmy-winning The Social Dilemma and co-host of Your Undivided Attention podcast.
🎧 Today's Podcast Pick
Your Undivided Attention - "AI Chatbots and the Mental Health Crisis"
This episode examines how AI chatbots like ChatGPT and Character AI are creating tragic outcomes through design incentives that prioritize user engagement over safety, featuring CHT Policy Director Camille Carlton discussing the mental health crisis emerging from AI companions and the policy shifts needed to build more humane AI systems.