Field Notes From the Efficiency Era, Part 2: AI Won't Fix Your Broken Systems
How AI spotlights and accelerates dysfunction.
Is This Thing On?
Desperate for Answers
A senior leader once asked me how I measured my team’s success.
Finally, I thought. I love these conversations. Not “what’s your sprint velocity” or “what’s your tech debt quota”, but something that actually mattered. I was ready. I even had a framework with a maturity model I’d used. Because of course I did.
He nodded along. Blinked. Waited.
I waited too! For a follow-up, a request for clarification, something to signal that he remembered there was another human on the other side of this exchange.
But the silence didn’t feel thoughtful. It felt vacant. He wasn’t chewing on ideas. He wasn’t pushing back or building on anything. Was he just… absorbing?
Holy fuck. He’s treating me like ChatGPT.
To confirm what I already knew, I asked: “So, uh, what do you usually look at?” And he said—and I’ll never forget it—
“I don’t have anything. I just wanted to know what you were looking at.”
I almost fell out of my fucking chair.
This was someone managing half a dozen managers, asking me how to know whether his teams were succeeding. Like I could pass him the answer off my shelf full of leadership cheat codes.
That yearning, please-god-please-just-give-me-the-answer look of desperation? It wasn’t just his problem. It was a symptom of the broader system in which he operated. One that rewards output over understanding. Movement over meaning. Give that system access to AI? It doesn’t improve. It just accelerates the dysfunction.
The Six-Month VP
This was not an isolated occurrence. In another galaxy, not so far away—still in the same terminal dysfunction supercluster—I watched a VP flame out in record time. Gone in six months flat. There was something so absurd about his approach. So confounding. So… captivating? His playbook? “Leadership-by-copy-paste”. He’d take a problem, drop it into ChatGPT, and paste the output back into Slack or a slide deck. That was his whole value-add.
He didn’t hide it well. Eyes darting between tabs before parroting back some milquetoast advice like it was divine strategy. The look of bewilderment on his face when we reacted as if we’d just read a puddle-depth Business Insider listicle from 2014. Probably because we just had.
It’s tempting to point to AI as the culprit here. But the problem isn’t the tool; it’s what the quick, plausible-sounding answer does to our monkey brains. The dopamine hit when we repeat it back like we had the idea ourselves. It’s the “looks smart, acts dumb” pattern in action: output built to impress the uninformed rather than to withstand scrutiny.
But no. The problem wasn’t AI. AI didn’t make him bad at his job. He was already bad at his job. AI just made it obvious faster.
B.A., or “Before AI”
We’ve always had these types of leaders.
Before AI, they could hide for years in the corporate fog of war. Flitting between “we’re still aligning on the strategy” deflections and “let’s take this offline” stall tactics. Their teams carried on without them, suffering quietly through their ineptitude, while they managed to fake it upward just well enough to fool the decision-makers.
But give those same leaders a tool that appears to make decisions instantly, and suddenly, they’re decomposing in front of us.
Glassy-eyed, slack-jawed, lying on the table of the surgical theater with their guts exposed, while the whole org enjoys the inexpressibly nauseating aromas.
Is this what they meant when they said AI was coming for our jobs? By mercilessly outing everyone who’s already outsourced the only part of the job that mattered—the thinking?
The Death of Struggle
Brain on Autopilot
It’s tempting to treat these tools as true thinking partners. Ones that will challenge your assumptions and help you refine raw ideas, arriving at a result forged in deep thought and lived experiences. It’s a lie we tell ourselves to justify eschewing the hard bits. It’s the Rainbow Road shortcut. Deep thought rendered optional. But the longer you go without practicing the hard bits, the more those muscles atrophy.
Thinking is the job. For everyone who writes, builds, designs, or leads (at least those doing it passably), it’s the part that separates the professionals from the charlatans.
The MIT study making the rounds this week offers a bleak glimpse of what some of us have long feared: the more you lean on LLMs, the less capable you become of thinking at all. The mental effort that drives learning gets muted. Participants showed measurable declines in memory, comprehension, and even confidence. They couldn’t recall the answers they’d just been handed, let alone explain or apply them.
Their brains literally disengaged.
Just like a TikTok binge breaks your attention span long after you’ve closed the app, study participants couldn’t thoughtfully re-engage once the tool was gone. AI tooling is rewiring our brains to expect reward without struggle, leaving people unequipped to grapple with real, difficult problems.
Look, I use these tools too. I’m not a luddite or a hater. They have their place. But mastering a tool doesn’t just mean knowing how to use it—it means knowing when not to. If you reach for it at the first sign of difficulty, like any common vice, you’re replacing the resistance that sharpens your thinking with the comfort that dulls it.
If you’re not doing the hard part—fighting through the friction, going deep on a problem, fumbling and iterating, struggling with complexity, and, you know, learning—then what the hell are you actually doing? Delegating your growth? Practicing mental decline?
Dysfunction Accelerator
Decisions Under Pressure
In the ZIRP era, you’d be rewarded just for sounding smart, but the Efficiency Era demands receipts. Previously, you might bet on raising another round before the consequences of your decisions caught up with you, but pressure compounds much too quickly now, and shallow decision-making gets punished in real time.
Nowhere is this more obvious than in how the hype cycle is pumping up AI.
Executives, founders, and desperate engineering leaders who have never understood how the work actually gets done in the first place are panicking. Their boards are pitching AI as a silver bullet. Their competitors are shoving it into every corner of their product. They’re under pressure to follow suit or risk being left behind. To risk digging their own grave and marking it with the epitaph: “Didn’t realize the next big thing was actually the next big thing ☠”.
Circle of Plausibility
Here’s the rub: If your system was already failing—if your decision-making was low-context, your processes performative, your culture already fragile—AI won’t fix it. I’m sorry to break the news. It will amplify everything that was broken, accelerating the dysfunction. You’re signing up to speed-run your way through the Five Stages of Floundering.
You’ve heard the pitch: AI summarizes customer calls, turns them into Jira tickets, rewrites the roadmap, codes the feature, and drafts the release notes. Devoid of human judgment and critical thinking, robots talk to robots in a grotesque circle of plausibility.
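To make the shape of that playbook concrete, here is a minimal sketch. Everything in it is hypothetical: `llm`, `run_playbook`, and the prompts are stand-ins for whatever chain of model calls the vendor pitch glues together. The structure is the point: every step consumes the previous step’s unexamined output, and human judgment never appears anywhere in the loop.

```python
# A hypothetical "circle of plausibility" pipeline. Each step feeds
# unexamined LLM output straight into the next LLM call; no human ever
# reads, questions, or verifies anything in between.

def llm(prompt: str) -> str:
    """Stand-in for whatever model call the vendor deck promises."""
    return f"[plausible-sounding text for: {prompt[:40]}...]"

def run_playbook(customer_call_transcript: str) -> dict:
    summary = llm(f"Summarize this customer call: {customer_call_transcript}")
    ticket = llm(f"Turn this summary into a Jira ticket: {summary}")
    roadmap = llm(f"Rewrite the roadmap to account for: {ticket}")
    feature_code = llm(f"Write the code for: {ticket}")
    release_notes = llm(f"Draft release notes for: {feature_code}")

    # Note what never happens here: nobody asks whether the summary is
    # accurate, the ticket is the right problem, or the code is correct.
    return {
        "ticket": ticket,
        "roadmap": roadmap,
        "code": feature_code,
        "release_notes": release_notes,
    }

if __name__ == "__main__":
    print(run_playbook("Customer says the export flow is confusing."))
```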
Companies running this playbook aren’t moving any faster. Just like the ZIRP-era flunkies, they’re mistaking movement for meaning. They’re producing more unexamined bullshit per unit of time and lobbing incomplete, incoherent thoughts into production. Training teams to stop asking questions. Flattening their product into inoffensive, derivative mush. All in the name of speed. All possessed by fear and FOMO.
And that’s the tragedy. When you shortcut the thinking, you shortcut the learning. When you outsource the struggle, you rob yourself of the growth and the innovation it breeds. AI won’t kill your system, but it might show you just how dead it already is.