In March 2026, a software developer shared something on Hacker News that prompted 14 comments and likely a few uncomfortable internal conversations:
“80% or more of my work day is spent iterating with Claude in a way that generates so much data and so many responses that I can’t even keep up with, let alone validate everything. […] my coworkers who aren’t developers are now having to deal with developers tools so everyone can collaborate, but the mental load is huge and they don’t feel able to be honest with leadership about it.”
Notice the last sentence. Not “AI doesn’t work.” Not “we’re behind technologically.” But: employees don’t dare tell leadership.
That’s the real problem.
Technology is no longer the bottleneck
In 2024, the question was still: “Does AI work well enough to use in production?” By 2026, that question has been answered affirmatively for most office-based tasks. Code, text, analysis, data processing, customer handling — the models are there.
The new bottleneck is human capacity to absorb speed.
AI tools can now generate in minutes what previously took days. That sounds like an advantage. It is — but only if the organisation is ready to receive and act on that output. And most organisations aren’t set up for it.
The result is a new kind of stress that doesn't have a good name yet. It's not overwork in the classic sense. It's over-capacity: more information, more output, and more decision material than any single person can validate, prioritise, and act on.
This is a leadership problem
When an employee says “we can’t keep up,” there are two possible responses.
The first is to see it as an individual capacity issue. The person isn’t good enough at using AI. They need training. They need to practice more.
The second is to ask the organisational question: What are we actually requiring of our people, and what is the realistic capacity we’ve allocated for it?
The first response is the most common. The second is the right one.
When a CFO decides that AI should accelerate the reporting process, it’s not enough to implement a tool and expect the team to figure it out. Speed requires capacity at both ends. If an AI system produces three times as many report drafts in half the time, but no one has time to read and approve them — you haven’t saved anything. You’ve created a new backlog problem.
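A back-of-envelope sketch makes that backlog arithmetic concrete. The numbers below are illustrative assumptions, not measurements from any real team: a reviewer who previously kept pace now receives drafts faster than they can approve them, and the gap compounds rather than averaging out.

```python
# Illustrative sketch of backlog growth (all numbers are assumptions).
# Before AI: 2 report drafts/day produced, reviewers approve 2/day -> stable.
# After AI: output triples, but human review capacity stays the same.

drafts_per_day = 6           # assumed AI-accelerated output ("three times as many")
review_capacity_per_day = 2  # assumed unchanged human approval throughput

backlog = 0
for day in range(1, 11):
    backlog += drafts_per_day                          # new drafts arrive
    backlog -= min(backlog, review_capacity_per_day)   # reviewers clear what they can
    print(f"Day {day:2d}: unreviewed drafts = {backlog}")

# The queue grows by 4 drafts every single day (6 in, 2 out).
# Faster generation without matching review capacity doesn't save time;
# it converts the saving into an ever-growing pile of unvalidated work.
```

The specific numbers don't matter. What matters is the shape: whenever production rate exceeds validation rate, even slightly, the backlog grows linearly and never recovers on its own.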
What actually works
Organisations that succeed with AI implementation typically share three characteristics.
They start with the process, not the technology. Before implementing an AI system, they map exactly which decisions are made in the existing process, and who makes them. AI then replaces specific steps — not the entire process at once.
They design for human capacity. The goal isn’t for AI to produce as much as possible. The goal is for the organisation to handle what AI produces. That’s an important difference. An AI agent that generates 50 leads per day is useless if the sales team can realistically handle 10.
They build psychological safety in from the start. What kills AI implementations isn't technical failure. It's silent resistance: employees who work around the system, don't report errors, or don't dare tell leadership they're overwhelmed. That problem isn't solved with a better model. It's solved with leadership behaviour.
What this requires of you as a leader
The question isn’t “what can our AI system do?” It’s: “What can our organisation absorb and act on?”
That requires an honest capacity assessment. Not of the technology — of the people. What is the realistic time available? What is the decision-making capacity in the organisation? What happens if we double the volume of information — who makes the decisions, and how quickly?
It requires scaling change to the organisation’s speed, not to the technology’s speed. That might sound like a constraint. It’s the opposite: it’s the only implementation strategy that actually holds.
And it requires leaders to actively ask whether people can keep up, and to actively listen to the answer. Not as a one-off check, but as a structured part of the implementation process.
That’s typically the conversation we start with when we help organisations with AI strategy and implementation. Not “what can the technology do?” but “what is your organisation ready for — and what does it take to get it ready for more?”