Don’t get SaaD. Get Rippling.
Remember when software made business simpler?
Today, the average company runs 100+ apps—each with its own logins, data, and headaches. HR can’t find employee info. IT fights security blind spots. Finance reconciles numbers instead of planning growth.
Our State of Software Sprawl report reveals the true cost of “Software as a Disservice” (SaaD)—and how much time, money, and sanity it’s draining from your teams.
The future of work is unified. Don’t get SaaD. Get Rippling.
AI for Impact Opportunities
When to AI and When Not to AI: A Real-Talk Guide for Changemakers
A joke to start: Why did the nonprofit director break up with the AI chatbot? It kept automating empathy and nobody believed anything it said anymore. Turns out, you can't outsource authenticity—no matter how good the algorithms are.
For those of us building solutions that actually matter, the AI question gets asked differently. It's not just about efficiency. It's about whether we're building toward justice or scaling injustice. Whether we're freeing our teams to do deeper impact work or replacing the human connections that make change possible.
The Honest Truth About Human-AI Teams
Here's something that might surprise you: research from MIT shows that human-AI combinations often perform worse than either humans or AI alone. Not because the technology doesn't work, but because we haven't figured out when to use which. The average AI-human team underperformed compared to the best human system or the best AI system operating independently.
The key insight? Combinations work when each does what they do best. When humans and AI are forced to work on tasks where they're both mediocre, disaster ensues. But when you put AI on the repetitive, data-driven work and humans on the stuff requiring judgment and context? Magic happens.
When to Deploy AI: The Impact Use Cases
Use AI for Liberation Work
The best reason to use AI is to free humans from tedious, soul-crushing tasks that distract from your actual mission. Initial research summaries? Data entry? Formatting reports? First-draft emails? These are AI's legitimate wins. Every hour your team spends wrestling with admin work is an hour they're not doing the thinking, relationship-building, and creative problem-solving only they can do.
The question to ask: Will this AI deployment give our team back time to do more strategic, human-centered work?
Use AI When You Have Expertise to Catch It Failing
AI is most useful when you already know the domain well enough to catch its mistakes. If you're analyzing program outcomes in your area of deep expertise, AI can accelerate pattern-finding. But if you're learning something new? Don't skip the productive struggle. That's where expertise lives.
Use AI for Scaling Communication (With Your Voice)
Generative AI can help personalize donor communications, translate materials into community languages, or create first drafts of social media content. The pattern here matters: AI handles the volume and personalization logic. You handle making sure it's authentic, culturally grounded, and actually reflects your organization's values. Use AI to multiply the reach of your message, not to replace your message.
Use AI When Errors Are Obvious and Low-Stakes
If something goes wrong and you'll catch it immediately with no damage done, AI can be helpful. First drafts of timelines. Initial data summaries. Early-stage brainstorming. These are safe sandboxes. Final decisions about resource allocation or community targeting? Different story entirely.
When to Say No: Your Non-Negotiables
Don't Use AI for Decisions About People's Rights or Resources
This is the hard line. When decisions affect someone's access to services, opportunities, safety, or dignity, humans must be in the loop—and thoughtfully. AI systems carry the biases of their training data and designers. They've discriminated against women in hiring, over-targeted marginalized communities in policing, and denied benefits to vulnerable people through automated systems designed "for efficiency."
For nonprofits and social impact organizations, this is where stakes are highest. Any tool that determines who gets program access, who receives aid, or how communities are prioritized must have human judgment embedded throughout.
Don't Use AI When Human Connection Is the Product
If your work is about building trust, showing up, being present—don't automate that part. CEO thank-yous. Crisis communications. Staff appreciations. Donor conversations. These are where authenticity is non-negotiable. Yes, you could use AI to generate something fast. No, you shouldn't. The efficiency you gain will be dwarfed by the trust you lose.
Don't Use AI to Skip the Learning
When you need to deeply understand something to do your work well, the friction of learning is a feature, not a bug. If you ask an AI to summarize a complex policy you'll be advising on for years, you'll never develop the nuanced judgment you need. Sometimes the struggle is the point.
Don't Use AI When You're Unclear on Failure Modes
AI fails differently than humans do. It hallucinates. It drifts. It becomes overconfident in wrong answers. It doubles down when pressured with slightly different questions. Before deploying any AI system, ask: How specifically could this go wrong? What would we miss? Who would be hurt? If you can't answer those questions, you're not ready.
Don't Use AI for "Icky" Situations
Some applications just feel wrong—and that feeling is data. Using AI to infer character from body language? Analyzing employees' tone of voice to assess trustworthiness? Automating decisions about who looks like they have leadership potential based on facial expressions? These aren't just ethically questionable; they're reputational disasters waiting to happen.
Don't Build Complex AI When Simple Automation Works
For many routine use cases, a simple decision tree is more transparent and reliable than a fancy AI system. If basic rule-based automation achieves your goal without hallucinations, bias drift, or explainability nightmares, use that instead. Generative AI introduces real costs and risks that only make sense when simpler solutions truly won't work.
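To make that concrete, here's a minimal sketch of what rule-based intake triage can look like in Python. The fields, thresholds, and route names are invented for illustration, not any real program's rules; the point is that every branch is readable and auditable, and nothing hallucinates:

```python
# A minimal rule-based intake router, sketched as an alternative to an
# LLM for routine triage. All fields, thresholds, and route names are
# hypothetical examples, not a real program's rules.

from dataclasses import dataclass

@dataclass
class Applicant:
    household_size: int
    monthly_income: float
    in_service_area: bool

def route(applicant: Applicant) -> str:
    """Return a next step; every branch is explicit and auditable."""
    if not applicant.in_service_area:
        return "refer-out"          # outside coverage: refer to a partner org
    # Hypothetical income threshold that scales with household size.
    if applicant.monthly_income <= 800 * applicant.household_size:
        return "fast-track-review"  # likely eligible: a human reviews next
    return "standard-review"        # unclear: full human review

print(route(Applicant(household_size=3, monthly_income=2100.0, in_service_area=True)))
# -> fast-track-review
```

Notice that the sketch never makes a final eligibility call; it only queues cases for human review, consistent with the hard line above about decisions affecting people's rights and resources.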
Don't Deploy AI Faster Than You Can Verify
If quality-checking the AI's output will take longer than doing the work yourself, you're not saving time—you're creating a false sense of efficiency. This is especially true for extracting numerical data, citing sources, or any work where errors hide at scale. A fast wrong answer isn't progress.
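If you do deploy AI on extraction-style work, one lightweight guard is spot-check sampling: have a human verify a random slice of the outputs and estimate the batch's error rate before trusting the rest. Here's a minimal sketch with invented function names and a made-up batch (this is generic QA practice, not a method from the sources below):

```python
# Spot-check sampling: a human verifies a random slice of AI outputs, and
# the measured error rate decides whether the whole batch can be trusted.
# Generic QA sketch; names and thresholds are invented for illustration.

import random

def spot_check(outputs, verify, sample_size=30, max_error_rate=0.02):
    """verify(item) -> True if a human confirms the AI output is correct."""
    sample = random.sample(outputs, min(sample_size, len(outputs)))
    errors = sum(1 for item in sample if not verify(item))
    rate = errors / len(sample)
    return rate, rate <= max_error_rate

# Stand-in batch of "AI-extracted" records and a stand-in human check.
extracted = [{"id": i, "value": i * 10} for i in range(200)]
rate, ship_it = spot_check(extracted, verify=lambda item: item["value"] % 10 == 0)
print(f"sampled error rate: {rate:.1%} -> ship batch: {ship_it}")
# If the sampled rate exceeds the threshold, re-verify everything (or redo
# the work yourself) rather than shipping errors at scale.
```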
The Three Reflections That Actually Matter
1. Know Your Blind Spots
Research shows we're bad at "metaknowledge": knowing what we actually know and don't know. That makes us terrible at deciding when to trust an AI and when to trust ourselves. A radiologist might override correct AI advice because they doubt themselves, or follow incorrect AI recommendations because they're exhausted.
Solution: Build feedback loops. Regularly calibrate your team's confidence against actual outcomes. Run small tests before scaling. Get comfortable with measuring how well your judgment actually works.
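One concrete way to measure how well your judgment actually works is to log each AI-trust call with a stated confidence, then record whether it proved correct. A minimal sketch in Python with invented sample data: it computes a Brier score and per-bucket hit rates (well-calibrated judgment means your "90% sure" calls are right about 90% of the time):

```python
# Minimal calibration check: compare stated confidence against outcomes.
# Each record is (confidence in [0, 1], whether the call proved correct).
# The sample data is invented for illustration.

from collections import defaultdict

records = [
    (0.9, True), (0.9, True), (0.9, False),   # calls made at "90% sure"
    (0.6, True), (0.6, False), (0.6, False),  # calls made at "60% sure"
]

# Brier score: mean squared gap between confidence and outcome (lower is better).
brier = sum((c - (1.0 if hit else 0.0)) ** 2 for c, hit in records) / len(records)
print(f"Brier score: {brier:.3f}")

# Per-bucket hit rates: calibrated judgment means each bucket's hit rate
# roughly matches the stated confidence.
buckets = defaultdict(list)
for c, hit in records:
    buckets[c].append(hit)
for c, hits in sorted(buckets.items()):
    print(f"said {c:.0%} sure -> right {sum(hits) / len(hits):.0%} of the time")
```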
2. Bias Isn't a Feature, It's a Choice
Every AI system is trained on data created by humans making decisions. Those decisions carry our histories: our discrimination, our assumptions, our blind spots. When an AI trains on decades of hiring data dominated by one demographic, it learns to replicate that pattern. When it trains on records from programs that served certain communities more than others, it will reproduce that skew.
Action: Before using any AI, ask who trained it and on what data. Ask whose perspectives are missing. Ask who might be harmed. Then ask yourself whether you're comfortable with those answers. If you're not, you shouldn't deploy it in your community.
3. Speed vs. Depth in Social Change
There's a real tension here. AI gives you speed. Social impact often requires depth—slow relationship-building, cultural understanding, consultation with communities most affected. Rushing to AI-generated solutions can mean missing critical community voices and unintended consequences that only show up over time.
Question to sit with: What are we optimizing for? Fast execution or wise execution? Usually it's not one or the other, but the balance matters.
Resources Worth Your Time
Decision Frameworks & Guides
MIT's research on human-AI collaboration patterns is genuinely useful for thinking through when combinations might work.
The practical framework from Ideafloats breaks down rule-based vs. pattern-based problems and helps you ask whether AI actually improves user experience.
Debevoise's detailed breakdown of when not to use AI gives you the hard cases—high-stakes decisions, explainability requirements, sensitivity domains.
Human-AI Collaboration Reality Check
SmythOS's analysis of human-AI collaboration challenges goes deep on metaknowledge, integration issues, and data management: the realistic obstacles you'll actually face.
Responsible AI for Nonprofits
NTEN's Responsible AI Hub has vetted resources and vendor evaluation questions built for social sector organizations specifically.
Quick Ethics Orientation
UNESCO's Recommendation on the Ethics of AI provides international standards emphasizing transparency, fairness, and human rights—good baseline thinking.
Deon's Ethics Checklist turns AI ethics into a step-by-step workflow, so it's not just abstract principles.
What This Means for Your Work
The pattern across all of this: AI is a tool that works best when it handles what humans hate and shouldn't waste energy on, while humans handle what requires wisdom, context, relationships, and judgment.
In social impact, that means AI can be transformative—but only if we're intentional. It can give your team back 20 hours a week currently spent on admin. It can accelerate your research. It can help you communicate at scale. But it should never automate the human decisions that lie at the heart of your work.
The real test: After deploying AI, do you have more time and energy for the uniquely human work of building trust, fostering belonging, and reimagining what's possible? If the answer is yes, you're using it right. If the answer is no—if you've just added more complexity and risk without freeing your team for deeper work—you've missed the point.
Startups get Intercom 90% off and Fin AI agent free for 1 year
Join Intercom’s Startup Program to receive a 90% discount, plus Fin free for 1 year.
Get a direct line to your customers with the only complete AI-first customer service solution.
It’s like having a full-time human support agent free for an entire year.
Looking for the best newsletter platform?
We switched to Beehiiv last year.
Creating our newsletters is now a joy.
They are also launching some amazing new features in mid-November.
🌱 Explode Your Growth
Our subscriber count is soaring.
Beehiiv's referral tools work wonders.
💰 Easy Monetization
Start earning with Beehiiv.
We now make over $200 a month from ads and subscriptions.
Our revenue is growing fast.
✍️ Creating is Fun Again
The process is effortless.
The design is clean and intuitive.
The AI tools are a huge help.
📊 World-Class Analytics
Track your growth.
Measure what matters.
Make smart, data-driven decisions.
🌐 Build a Beautiful Website
They have a great new website builder.
It's simple and looks amazing.
👉 Sign up with our affiliate link. Get a 1-month free trial. Plus, get 20% off for three months.
Beehiiv also has a free plan for up to 2,500 subscribers.
Using our link also helps support PCDN's work. It's a win-win.
RESOURCES & NEWS
😄 Joke of the Day
Why did the VC reject the AI startup's pitch? They said, "Your business model has no margin for error—unlike your model training!" 📉💰
📰 News
Google DeepMind Scaling AI to Transform Science—While Facing Pressure to Commercialize
Following its 2024 Nobel Prize for AlphaFold, DeepMind is racing to replicate that success across scientific disciplines with AlphaGenome (predicting non-coding DNA functions), GNoME (materials discovery), and improved weather forecasting. But internal tensions are mounting: the company is shipping commercial products almost weekly, staff are unhappy about prioritizing profit over responsible AI development, and OpenAI and Mistral are launching dedicated scientific discovery teams of their own.
White House Targets State AI Laws as Tech Investment Hits $400B Annually
The Trump administration circulated a draft executive order titled "Eliminating State Law Obstruction of National AI Policy" aimed at preempting California and Colorado's AI regulations, framing state laws as "fear-based regulatory capture" hampering U.S. "AI dominance." Meanwhile, Congress is moving to attach an AI regulatory moratorium to the year-end defense bill, signaling continued tension between industry deregulation goals and state consumer-protection efforts.
The AI Bubble Is Bigger Than Dot-Com—And Nobody Agrees If It Will Pop
Tech giants collectively invested ~$400B in AI data centers for 2025, with Morgan Stanley projecting $3 trillion by 2028, yet McKinsey reports 80% of companies using AI saw no earnings impact. Despite widespread bubble warnings from economists and even industry leaders like Sam Altman, venture funding remains robust—Lambda secured $1.5B, Luma AI $900M—creating a fundamental mismatch between investment velocity and proven ROI.
DeepSeek R1 "De-Censored" While Model Compression Breakthroughs Slash Computing Costs
Quantum physicists developed techniques to compress DeepSeek R1 while stripping out Chinese content restrictions, achieving near-equivalent performance with dramatically lower computing requirements. Simultaneously, Cerebras achieved 10x faster LLM inference using wafer-scale chips—signaling a shift toward efficiency and decentralization that could democratize AI access and challenge the current GPU-dependent dominance of frontier labs.
💼 Jobs, Jobs, Jobs
80,000 Hours - High-impact AI safety, biosecurity, climate, global health, and effective governance roles at nonprofits, research institutes, and mission-driven startups globally.
👤 LinkedIn Profile to Follow
Demis Hassabis - CEO & Co-Founder, Google DeepMind
Leading AI researcher sharing insights on responsible AI scaling, scientific breakthroughs, and advancing global development goals through practical AI applications.
🎧 Today's Podcast Pick
Tech Policy Press Podcast - "What Are the Implications if the AI Boom Turns to Bust?"
Explores whether today's $400B+ annual AI investment reflects real fundamentals or unsustainable hype, with economists debating how a market correction could reshape policy, jobs, and the narratives driving government-industry alignment.