Become the go-to AI expert in 30 days
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
AI for Impact Opportunities
Power, little accountability: two critical AI stories from the past week
Can the man shaping our AI future be trusted?
Ronan Farrow — the investigative journalist who broke the Harvey Weinstein story — spent a year and a half reporting on Sam Altman and OpenAI. The result, published in The New Yorker, is titled: "Sam Altman May Control Our Future — Can He Be Trusted?"
It's worth naming clearly what this piece surfaces: a pattern of alleged deceptions inside one of the most powerful organizations in human history. Farrow traces the 2023 board firing and sudden reinstatement of Altman, the secret memos sent by OpenAI's chief scientist to board members, and what happens when transformative technology collides with enormous financial incentives and almost no meaningful oversight.
In his follow-up conversation with Katie Couric, Farrow makes the point that matters most: this isn't really a story about one executive. It's a story about power without accountability. About who gets to make decisions that affect all of us — and who doesn't.
If you haven't read Karen Hao's Empire of AI, add it to your list. It's one of my favorites, and it covers many of the same themes Farrow explores.
Anthropic built a model too dangerous to release. Then accidentally leaked it.
Anthropic — the company founded explicitly around AI safety — announced this week that its newest model, Claude Mythos, is being treated differently from previous releases. According to reports, Anthropic has described it as exceptionally powerful — with capabilities in cybersecurity that raise serious concerns, including potentially identifying vulnerabilities and enabling cyberattacks at scale. The full picture of what it can do, and exactly how or whether it will be released more broadly, is still emerging.
The model is being restricted to roughly 40 companies — Microsoft, Amazon, Apple, CrowdStrike, and others — for defensive security work only, under a new initiative called Project Glasswing.
Mythos was accidentally exposed in a data leak in March before Anthropic announced it. Security researchers found nearly 3,000 internal files in an unsecured data cache. Anthropic's own draft materials described it as "the most powerful AI model we have developed to date."
There's a version of this story where responsible restraint is exactly what you want to see — companies pausing before shipping. But there's another version where the harder question is: what does it mean that these capabilities exist at all? Who is making those decisions? And what oversight exists beyond the companies themselves?
So what does this mean for impact work?
The nonprofit sector, social enterprises, civil society organizations — all of you are being handed AI tools built inside systems with very different priorities. Profit. Speed. Market dominance.
Power is concentrating in a handful of companies. Transparency is shrinking. And the governance structures that should exist around this technology are still catching up — if they're catching up at all.
The people building this technology are also the people deciding whether it's safe to use. That should concern anyone working toward a more just and equitable world.
Become An AI Expert In Just 5 Minutes
If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.
This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.
Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.
Looking for the best newsletter platform?
We switched to Beehiiv almost two years ago.
Creating our newsletters is now a joy.
🌱 Explode Your Growth
Our subscriber count is soaring.
Beehiiv's referral tools work wonders.
💰 Easy Monetization
Start earning with Beehiiv.
We now make over $300 a month from ads and subscriptions.
Our revenue is growing fast.
✍️ Creating is Fun Again
The process is effortless.
The design is clean and intuitive.
The AI tools are a huge help.
📊 World-Class Analytics
Track your growth.
Measure what matters.
Make smart, data-driven decisions.
🌐 Build a Beautiful Website
They have a great new website builder.
It's simple and looks amazing.
👉 Sign up with our affiliate link. Get a 14-day free trial, plus 20% off for three months.
Beehiiv also has a free plan for up to 2,500 subscribers.
News & Resources
🤖 Your AI Impact Joke
Why did the AI startup hire an ethicist before a marketer?
Because “move fast and break things” hits differently when the thing is democracy.
News
Mustafa Suleyman says AI development won’t hit a wall anytime soon — In a new MIT Technology Review interview published April 8, Microsoft AI CEO Mustafa Suleyman argues that the current compute boom means AI progress is unlikely to stall soon, signaling that debates about social impact, labor shifts, and governance will only get more urgent as capabilities keep advancing.
Why OpenAI bought “SportsCenter for Silicon Valley” — NPR reported on April 8 that OpenAI acquired TBPN, a niche but influential Silicon Valley talk show, framing the move as an effort to shape the public narrative around AI and raising questions about what happens when major AI firms increasingly own parts of the media ecosystem covering them.
Meta’s new AI model gives Mark Zuckerberg a seat at the big kids’ table — WIRED reported April 8 that Meta launched Muse Spark, its first model since its AI reboot, and said benchmark results suggest the model is highly competitive, making Meta newly relevant in the frontier-model race.
Katrina Manson on “Project Maven” and how the U.S. is using AI in war — NPR’s April 9 coverage spotlights how Project Maven helped push AI into U.S. military operations, with reporting that AI may already be reshaping how the Pentagon fights conflicts, a reminder that AI-for-impact conversations also need to include accountability in defense and security uses.
Four Pages That Could Reshape American AI Policy — Tech Policy Press highlighted a short April 8 policy document that it says could significantly influence U.S. AI regulation, making this worth watching for anyone focused on how guardrails, industrial strategy, and public accountability may evolve.
💼 Jobs, Jobs, Jobs
Probably Good Job Board — A curated board of high-impact roles across causes like global development, climate, animal welfare, and public-interest work, with a strong fit for people looking to align career moves with measurable social impact.
👤 LinkedIn Profile to Follow
Melody Koh — Partner and Chief Product Officer at NextView Ventures. With a strong background as a product leader, Melody frequently shares insights on early-stage startups, the intersection of product management and venture capital, and how founders can build defensible, long-lasting AI companies.
📺 Great Show
The Age of A.I. — A solid watch for an impact-oriented audience because it uses accessible storytelling to show how AI connects to health, robotics, science, and human potential.