Every, The Only Subscription You Need to Stay at the Edge of AI (partner affiliate post)
A daily newsletter on what comes next in tech. 100K+ readers.
It is one of the best places to access amazing AI tools at an affordable yearly price, including Monologue (voice-to-text for Mac), Spiral (a mind-blowing writing partner), Coral (AI email assistant for Mac), and Sparkle (Mac desktop organization).
Beyond the tools, you'll get access to excellent workshops and insights on building with and using AI effectively. Even if you don't want to sign up for the paid tier, you can subscribe to their free newsletters and podcast to stay updated on AI trends and strategies.
AI for Impact Opportunities
The Dark Side of AI: The "Human Bottleneck"
Who has the power to decide whether someone lives or dies? And what happens when that decision is being automated?
That framing comes from a book written by the current commander of Israel's elite intelligence Unit 8200, published in 2021. He argued that AI should resolve the "human bottleneck" in military targeting: the delay caused by requiring human analysis and human approval before a strike. The technology he was describing already existed. +972 Magazine's landmark investigation revealed it as "Lavender" – an AI system that generated tens of thousands of potential strike targets in Gaza, with human operators approving each one in as little as 20 seconds. In some documented cases, operators described the process as rubber-stamping whatever the machine recommended.
AI is now embedded in the kill chain across multiple active conflicts. In the opening hours of Operation Epic Fury – the joint U.S.-Israeli offensive against Iran that began on February 28, 2026 – roughly 900 strikes were conducted in twelve hours. That operational tempo is only possible when AI is doing the targeting work. When a school in Minab was destroyed and an estimated 175 children killed, Congress asked whether an algorithm had selected the target. The question remains unresolved. The Pentagon says it is investigating. No one has been held accountable. This is not because the answer is unknowable – it is because we have not built systems to answer it.
The accountability gap is structural, and it is growing. International humanitarian law requires meaningful human control over lethal force. But "meaningful" falls apart when a human approves a machine's recommendation under cognitive overload in under a minute. Legal scholars at West Point's Lieber Institute have flagged this directly: autonomous weapons are proliferating faster than any legal framework can govern them. China and Russia are investing heavily in swarming drone AI. NATO is deploying automated intelligence networks at European front lines. The Anthropic-Pentagon standoff – where Anthropic refused to allow Claude to operate in fully autonomous lethal systems – was remarkable not because a company said no, but because that "no" is currently one of the only limits that exist.
Surveillance runs alongside all of this. AI systems are being used in conflict zones to map social networks, flag behavioral patterns, and enable preemptive targeting of individuals who have not committed any act of violence. The infrastructure tested in Gaza migrates. It shows up at borders. It shows up in urban policing. It shows up in the monitoring of activists and journalists. The pattern is consistent across contexts: the technology moves fast, the oversight lags, and communities with the least power absorb the consequences.
For those working on technology, justice, and rights, the question is not whether this is our concern. It is. The question is how we engage – with honesty about what is happening, solidarity with those bearing the cost, and a clear-eyed push for the governance frameworks that don't yet exist. AI is not neutral. Neither is silence about how it is being used.
Some Key Resources
The New Republic: "Who's Deciding Where the Bombs Drop in Iran? Maybe Not Even Humans."
Lieber Institute, West Point: "Legal Accountability for AI-Driven Autonomous Weapons"
Stop Killer Robots: stopkillerrobots.org
Go from AI-overwhelmed to AI-savvy professional
AI will eliminate 300 million jobs in the next 5 years.
Yours doesn't have to be one of them.
Here's how to future-proof your career:
Join the Superhuman AI newsletter - read by 1M+ professionals
Learn AI skills in 3 mins a day
Become the AI expert on your team
Facts. Without Hyperbole. In One Daily Tech Briefing
Get the AI & tech news that actually matters and stay ahead of updates with one clear, five-minute newsletter.
Forward Future is read by builders, operators, and leaders from NVIDIA, Microsoft, and Salesforce who want signal over noise and context over headlines.
And you get it all for free, every day.
Looking for the best newsletter platform?
We switched to Beehiiv almost two years ago.
Creating our newsletters is now a joy.
🌱 Explode Your Growth
Our subscriber count is soaring.
Beehiiv's referral tools work wonders.
💰 Easy Monetization
Start earning with Beehiiv.
We now make over $300 a month from ads and subscriptions.
Our revenue is growing fast.
✍️ Creating is Fun Again
The process is effortless.
The design is clean and intuitive.
The AI tools are a huge help.
📊 World-Class Analytics
Track your growth.
Measure what matters.
Make smart, data-driven decisions.
🌐 Build a Beautiful Website
They have a great new website builder.
It's simple and looks amazing.
👉 Sign up with our affiliate link. Get a 14-day free trial. Plus, get 20% off for three months.
Beehiiv also has a free plan for up to 2,500 subscribers.
NEWS & Resources
🤖 AI 4 Impact Joke
Why did the AI quit its job at the nonprofit?
Because every time it tried to "maximize impact," it just kept recommending a bigger dataset.
📰 News
The Hidden Costs of 'Helpful' AI — Nature — Even when AI tools genuinely aid individual decision-making, they can quietly de-skill entire professions by narrowing how uncertainties and values are debated. For changemakers and educators, this is a critical warning: AI adoption without intention can hollow out the very expertise and human judgment that social impact work depends on.
The Ridiculously Nerdy Intel Bet That Could Rake in Billions — Wired — Advanced chip packaging — not just raw processing power — is quietly emerging as the decisive battlefield in the AI hardware race. Intel is betting billions that this under-the-radar manufacturing technique will be the choke point that determines who leads the next phase of AI infrastructure, with big implications for AI access and equity globally.
Five Concerns About AI Data Centers — and What to Do About Them — ITIF — A new ITIF report tackles the real impacts of AI infrastructure on electricity grids, water, and household pricing — and argues the real problem isn't AI data centers themselves, but the outdated policy frameworks governing them. The findings matter for communities and advocates pushing for equitable, sustainable AI infrastructure.
💼 Jobs, Jobs, Jobs
PCDN.Global Job Board — Whether you're recruiting your next hire or searching for your next role, the board has over 1,500 impact opportunities from around the globe. From peacebuilding to climate justice, international development to social entrepreneurship — if the role is about changing the world, it's here.
👤 LinkedIn Profile to Follow
Dr. Alondra Nelson — AI Policy, Equity & Democratic Accountability — Harold F. Linder Professor at the Institute for Advanced Study and former acting director of the White House Office of Science and Technology Policy, Dr. Nelson is the architect of the Blueprint for an AI Bill of Rights. Named to the inaugural TIME100 AI list, she brings a rare social-science lens to tech governance — especially where AI intersects with race, power, and democracy. Her ongoing argument that today's AI deregulation is really "hyper-regulation by other means" is essential reading for the impact community right now.
🎧 Today's Podcast Pick
Mystery AI Hype Theater 3000 — Hosted by linguist Emily M. Bender (University of Washington) and sociologist Alex Hanna (DAIR Institute), this is the podcast the AI hype machine doesn't want you to listen to. Each episode dissects inflated AI claims with sharp science and sharper wit — covering everything from phantom AGI breakthroughs to the political economy of Big Tech.