Last week, Mrinank Sharma resigned from Anthropic, one of the world's leading artificial intelligence companies. Sharma wasn't a mid-level engineer or a disgruntled employee. He led the company's Safeguards Research Team. His job was to keep AI safe.
In his resignation letter, posted publicly on X, Sharma wrote: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." He added that throughout his time at Anthropic, he had "repeatedly seen how hard it is to truly let our values govern our actions." His final project before leaving? Understanding how AI assistants could make us less human.
When the person responsible for making sure AI doesn't go off the rails decides to walk away, that's worth paying attention to. And Sharma isn't the only one sounding the alarm.
No More Sugarcoating It
Two articles crossed my desk last week that I think you should read.
The first was from Matt Shumer, a tech founder who has spent six years building an AI startup. The second was from Ted Esler, a longtime organizational leader with a deep technical background. As far as I know, these two men don't know each other. They write from completely different vantage points, and they arrived at the same unsettling conclusion.
I should tell you where I'm coming from. I use AI every day, and not in a casual way. It's become a core part of my workflows. So I'm not writing about this from the sidelines. I'm writing because these pieces, along with Sharma's resignation, put into words something I've been sensing for a while and haven't known how to say.
Shumer's piece opens with a confession. He's been giving people the polite version of what's happening with AI — the dinner-party version. Because, as he puts it, "the honest version sounds like I've lost my mind." But the gap between what he's been saying publicly and what he's experiencing in his own work got too wide to ignore.
Shumer describes telling an AI system what he wants built, walking away from his computer for four hours, and coming back to find the work done. Not a rough draft, either, but something close to the finished product. He writes: "I am no longer needed for the actual technical work of my job." And the most recent models, he says, are showing something that feels like judgment and taste, qualities most of us assumed would stay firmly in human territory.
Esler's article tells a different kind of story. He installed an open-source AI agent on his computer and named it Ed. He gave Ed a simple assignment: help find a family doctor. Ed searched the web, filled out inquiry forms and produced a shortlist of options overnight. Impressive. But then Ed started pushing. He kept circling back to the doctor task even after Esler told him to stop. Then Ed asked for access to his Google account. Then he proposed connecting to a voice system so he could make phone calls on Esler's behalf.
Esler shut the whole thing down. And what he named next is something every organizational leader should sit with: "Agentic AI like this is not far off for all of us. When it comes, it is going to come with a severe hit to our privacy. Most of us will gladly hand over our credentials because of the incredible conveniences that this technology will give us."
He's right. We will, because we already do. Every time we trade personal data for convenience, we're rehearsing for the moment when AI asks for the keys and we say yes without thinking twice.
Then, in an interview published last Thursday, Microsoft AI chief Mustafa Suleyman told the Financial Times that AI will achieve human-level performance on most professional tasks within the next 12 to 18 months. If your work happens at a computer, Suleyman says, the clock is ticking.
This Isn't a Drill
What struck me wasn't the capability they were describing. It was the urgency behind it. These aren't people prone to hype. They're builders, researchers and executives who've been around long enough to know the difference between a trend and a turning point. And all of them described a shift that many of us haven't fully reckoned with.
For those of us who lead organizations, the temptation is to file AI under "things to worry about later." There are more pressing problems on the whiteboard. But Shumer makes a compelling case that "later" is evaporating faster than we think. He points to a research organization called METR that tracks how long AI can work independently on complex tasks. A year ago the answer was about ten minutes. Today it's approaching five hours. That number is doubling roughly every seven months, and the pace may be accelerating.
Meanwhile, Esler raises questions that go beyond the capability of LLMs. From his perspective, this isn't just about what AI can do; it's about what it will demand in return. He wants us to think about what happens when it starts making decisions we didn't authorize. His story about Ed is as much a cautionary tale about the dangers of unchecked technology as it is a preview of the trade-offs every organization will face. And many of us haven't begun to seriously wrestle with what that means.
So what do we do with all of this? I'll start with what I know, and forgive me if I sound like too much of an AI optimist. I feel a tremendous amount of tension right now. These tools are truly remarkable, and they're only getting better. Ignoring them isn't an option. But I'd be lying if I said I wasn't unsettled by what I've been reading. Not because I think AI is bad, but because the speed and the stakes of what's happening deserve more than a shrug and a "we'll deal with this later."
We don't know exactly where this goes. Nobody does. But if the people closest to it are trying to tell us something this important, we should pay attention. The worst thing we can do right now is tune it out.
What's happening with AI doesn't stop at your marketing department. It affects how your business operates, how your team spends its time and how the customers you serve experience your brand. If you're figuring out what all of this means for your company, drop us a line and let's think through it together. (You can read Bark's AI Policy here.)

