The AI Panic and the AI Burnout Are the Same Problem
Matt Shumer's viral AI essay and NetworkChuck's burnout video are two sides of the same coin. The real answer isn't learning AI harder. It's hiring expertise.
Two pieces blew up this week about AI, and if you're anywhere near LinkedIn or YouTube, you probably saw at least one of them.
Matt Shumer, CEO of OthersideAI, published a 5,000-word essay called "Something Big Is Happening" that racked up over 80 million views. His argument: AI has crossed a threshold. He says he describes what he wants built in plain English, walks away from his computer for four hours, and comes back to find the work done. He compared this moment to February 2020, right before COVID changed everything. His prediction, citing Anthropic CEO Dario Amodei: 50% of entry-level white-collar jobs will be eliminated within one to five years.
Then NetworkChuck, a tech creator with over 5 million subscribers who's built his brand teaching people Linux, networking, and cloud computing, posted a video titled "I kind of hate AI... and it almost made me quit YouTube." He recorded it during a sabbatical in Japan because the stress of trying to keep up with AI was burning him out. He referenced a UC Berkeley study where researchers spent eight months embedded at a 200-person tech company and found that 62% of workers using AI tools reported burnout, anxiety, and decision paralysis. Not hypothetical future workers. Current ones. Today.
Chuck also did something interesting: he pushed back on Shumer directly. He pointed out that Shumer walked back his claims in a CNBC interview, admitting "if I had known how viral this was going to go, I would have thought about certain parts and rewritten some of the parts for sure." Chuck noted that Shumer used AI to write the essay itself. And he referenced Gary Marcus, the NYU professor who called the whole thing "weaponized hype that tells people what they want to hear, but stumbles on the facts."
Most people are treating these as opposing takes. Shumer says AI is coming for your job. Chuck says AI is already wrecking the people who try to use it. One is doom about the future. The other is doom about the present.
They're not opposing takes. They're the same problem described from two different seats in the room.
The builder sees one thing. The user sees another.
Shumer is a builder. He runs an AI company. He has engineers on staff. He spends his days inside the tools, shaping them, testing them, pushing their limits. When he says he walks away and comes back to finished work, I believe him. That's what it looks like when you've invested hundreds of hours building the workflows, the prompts, the infrastructure, the feedback loops.
What he doesn't say is how long it took to get there. He doesn't talk about the failed iterations, the hallucinated code, the outputs that looked right but broke in production. He skips straight to the highlight reel.
Chuck is a user. A sophisticated one. The guy teaches people how to build servers and deploy Kubernetes clusters. He's not tech-illiterate. And he's telling you, on camera, that he feels "like an idiot" trying to build AI workflows. That despite being one of the most technically literate creators on the internet, AI makes him feel stupid.
That's not weakness. That's honesty about what the learning curve actually looks like from the other side.
The stat that should bother you
The UC Berkeley study is worth sitting with. Researchers embedded themselves in a company for eight months and watched what happened when AI tools were adopted. The employees didn't use AI less over time. They used it more. They took on more tasks. Their roles expanded beyond their job descriptions. Product managers started writing code. Researchers started taking engineering tickets.
And by month six, 62% of them were burned out.
Not 62% of Luddites who refused to use the tools. 62% of the people who adopted them enthusiastically.
The study identified why: AI's conversational interface makes it effortless to start work. No blank page. No learning curve friction. So people started sending "one more quick prompt" before leaving their desks. The boundary between working and not-working dissolved. The productivity gains didn't translate into working less. They translated into working more, on more things, with less recovery time.
One engineer in the study put it bluntly: "You had thought that maybe, 'oh, because you could be more productive with AI, then you save some time, you can work less.' But then, really, you don't work less. You just work the same amount or even more."
That's the user experience of AI in 2026. Not the conference keynote version. The actual, lived version.
The question nobody is answering
Here's what bothers me about both of these pieces.
Shumer's advice, boiled down: spend an hour a day experimenting with paid AI tools. Use the top models. Integrate AI into your actual work.
Chuck's video, for all its raw honesty about burnout, lands in roughly the same place: you still need to figure this out, it's just going to be harder and more stressful than anyone is admitting.
Neither of them answers the obvious follow-up question.
If a 5-million-subscriber tech creator with deep skills in Linux, networking, Docker, and cloud infrastructure admits on camera that he's not good at AI and it almost made him quit his career... what is a small business owner with zero technical background supposed to do?
Seriously. A restaurant owner. A law firm partner. An HVAC company owner. A real estate agent. Someone running a landscaping business with 15 employees.
"Just spend an hour a day experimenting with AI" is advice that sounds practical until you realize these people are already working 10-hour days. They don't have an hour. They don't know which tools to experiment with. They don't know what good output looks like. They don't know how to tell when AI is hallucinating versus when it's being useful. They don't have the technical literacy to evaluate whether a workflow is working or just producing confident-sounding garbage.
The advice to "just use AI" is the 2026 version of "just learn to code." It sounds empowering. It's actually dismissive. It takes a genuine, complex specialization problem and repackages it as a personal discipline issue.
This is a specialization problem. We've solved it before.
When your pipe bursts at 2 AM, you don't go watch plumbing tutorials. You call a plumber.
When you get audited, you don't spend an hour a day experimenting with tax law. You call your accountant.
When you get sued, you don't download a legal AI tool and start prompting. You call a lawyer.
Business has always had an answer for fast-moving, complex expertise that falls outside your core competency: you hire someone who already figured it out. Someone who's made the mistakes, done the iterations, and built the systems. Someone who can translate the capability into results without requiring you to become an expert yourself.
AI is not different. It just feels different because it's new, because it touches everything, and because the people selling it have a financial interest in making you believe anyone can do it with a subscription and some curiosity.
The reality: building effective AI workflows is a skill. It takes hundreds of hours of trial and error. It requires understanding what models are good at and what they're terrible at. It demands knowing when to trust the output and when to throw it away. It means building guardrails, feedback loops, quality checks, and escalation paths.
That's not something you pick up in an hour a day between running your actual business.
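To make "guardrails, feedback loops, quality checks, and escalation paths" concrete, here is a minimal sketch of what that kind of pipeline can look like. Everything in it is hypothetical: `generate_draft` stands in for a real model call, and the quality checks are illustrative placeholders, not a production anti-slop system.

```python
# Hypothetical sketch of an AI content pipeline with a quality gate
# and an escalation path. generate_draft() stands in for a real
# model API call; the checks are illustrative, not exhaustive.

BANNED_PHRASES = {"as an ai language model", "in today's fast-paced world"}

def generate_draft(prompt: str, attempt: int) -> str:
    # Stand-in for a model call; returns canned text in this sketch.
    return f"Draft {attempt} for: {prompt}"

def passes_quality_gate(draft: str) -> bool:
    # Slop detector: reject drafts containing known filler phrases.
    text = draft.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return False
    # Minimum-substance check: reject near-empty output.
    return len(draft.split()) >= 3

def run_pipeline(prompt: str, max_attempts: int = 3) -> tuple[str, bool]:
    # Retry loop: regenerate until the gate passes or attempts run out.
    draft = ""
    for attempt in range(1, max_attempts + 1):
        draft = generate_draft(prompt, attempt)
        if passes_quality_gate(draft):
            return draft, True   # accepted automatically
    return draft, False          # escalate to a human reviewer

draft, accepted = run_pipeline("Write a post about spring HVAC maintenance")
print(accepted)
```

The point of the sketch is the shape, not the checks themselves: a generate step, an automated gate, a bounded retry loop, and a fallback to a human. Building real versions of each piece, and learning which checks actually catch bad output, is where the hundreds of hours go.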
What winning actually looks like
The people who come out of this transition well won't be the ones who panicked and tried to automate everything themselves. They won't be the ones who buried their heads and pretended AI wasn't happening. And they won't be the 62% who adopted the tools with enthusiasm and burned out six months later.
The winners will be the ones who recognized AI expertise as a specialization, hired for it, and got back to running their business.
I say this from experience, not theory. I run a marketing company where AI handles roughly 80% of the execution. Not because I read an essay and got inspired. Because I spent months building the systems, breaking them, rebuilding them, and learning what actually works versus what demos well on a stage. I've built products around this. My company's social media platform costs $20 a week and does what a full-time social media manager does, because the AI infrastructure behind it has been iterated on relentlessly.
That didn't happen in an hour a day. It happened because this is what I do all day, every day. It's my specialization.
And the honest truth is: if I ran a restaurant, a law firm, or an HVAC company, I wouldn't try to build this myself either. I'd find someone who already had, and I'd hire them. The same way I hire an accountant for my taxes despite being perfectly capable of learning tax law if I had infinite time and no business to run.
The real conversation we should be having
Shumer is right that AI capabilities are accelerating faster than most people realize. Chuck is right that the human cost of trying to keep up is real and largely unacknowledged. Gary Marcus is right that the hype cycle is distorting reality.
All three of those things can be true at the same time.
What none of them are saying clearly enough is this: the gap between AI's capability and the average person's ability to use it effectively is not shrinking. It's growing. As the tools get more powerful each month, the skill required to deploy them well increases. The people who've been building with AI daily for the past year are pulling further ahead, not waiting for everyone else to catch up.
That's not a doom scenario. It's just how specialization works. The gap between what a great accountant knows and what you know about tax law grows every year too. You don't lose sleep over it because you've already solved that problem. You hired the accountant.
AI is reaching the same inflection point. The question isn't whether you should use AI. Of course you should. The question is whether you should be the one building the workflows, engineering the prompts, debugging the outputs, and keeping up with a field that changes every week.
For most business owners, the honest answer is no. And there's nothing wrong with that. It's the same answer you'd give about plumbing, law, accounting, and every other specialization that your business depends on but that isn't your core job.
The AI panic and the AI burnout are the same problem, viewed from different angles. The solution is the same one business has used for every complex expertise challenge in history.
Stop trying to become the expert. Hire one.
See what Beacon can do
AI-powered content that sounds like you. Six platforms. Anti-slop quality gates. Built for marketers who care about results, not vanity metrics.
See Plans & Pricing