The problem with your AI rollout was named in 1984
In 1984, a psychologist gave a name to what your workforce is feeling right now. Forty years later, we're still not listening.
Craig Brod coined the term technostress in his book Technostress: The Human Cost of the Computer Revolution. He defined it as a modern disease of adaptation: the psychological cost of being unable to cope with new technology in a healthy way. It was a human diagnosis. And in 1984, the technology in question was the desktop computer.
Four decades on, the technology has changed beyond recognition. The psychological response hasn't changed at all.
The Same Four Problems, With an AI Lens
Brod identified four dimensions of technostress that play out whenever a new technology disrupts how people work. Every one of them is showing up in workplaces right now, each amplified by the speed, opacity, and unpredictability of generative AI.
Anxiety around new technology.
This isn't technophobia. It's the rational response of a professional asked to work with a tool that behaves differently every time they use it. Deterministic technology, the kind we've spent decades building change programmes around, does what you tell it. AI doesn't. It guesses. The output shifts with context, with the prompt, with how you framed the question. For people trained on accuracy, repeatability, and control, that's not a feature. It's a source of constant low-grade apprehension.
Adaptation stress.
The pace of change in AI isn't linear; it's relentless. New models, new capabilities, new interfaces, often before anyone has mastered the last version. A 2025 study published in Frontiers in Artificial Intelligence found that generative AI is intensifying adaptation stress, particularly among younger professionals, who report growing concerns about whether the skills they're building today will still be relevant tomorrow. The result isn't just fatigue. It's a workforce that's running to stand still, burning cognitive energy on keeping up rather than doing the work.
Job insecurity.
This is the quiet one. It doesn't always show up in engagement surveys because people don't tend to say it out loud. But it's there, the unspoken question of whether AI will assist your role or absorb it. Every announcement about AI "replacing" tasks lands differently when you're the person doing those tasks. And the reassurance that "AI won't take your job, someone using AI will" doesn't help as much as leaders think. It just shifts the anxiety from replacement to inadequacy.
Overidentification.
This is the dimension most organisations miss entirely. When people work closely with AI, they start emulating it. Speed over reflection. Certainty over nuance. Copy-paste over reasoning. Communication becomes flatter, more transactional, more machine-like. The human qualities that make professional judgment valuable, such as context, empathy, and the ability to sit with ambiguity, get smoothed away in the effort to keep pace with the tool.
These aren't personality flaws. They're not resistance to change. They're a documented, predictable psychological response to technology that disrupts how people think, decide, and relate to their work. And right now, most AI strategies are not taking this into account at all.
The Cost Is Already Measurable
If the psychological dimensions feel abstract, the organisational cost doesn't.
A 2026 BCG study of 1,488 workers, published in Harvard Business Review as "When Using AI Leads to Brain Fry," found that the constant need to monitor, verify, and filter AI outputs is creating measurable cognitive overload. Workers in high AI-oversight roles reported 14% more mental effort, 12% more mental fatigue, and 19% greater information overload than those with low oversight.
The consequences go beyond tiredness. Workers experiencing what the researchers called "AI brain fry" were 39% more likely to report an intention to quit. They described mental fog, difficulty focusing, longer decision-making times, and a persistent "buzzing feeling" that didn't stop when the workday ended.
The roles hit hardest? Marketing (26%) and HR (19%). Marketing in particular has been among the most aggressive early adopters of generative AI, and HR is on the front line of the change whilst also trying to figure out how to use it themselves. It seems the teams using AI the most are burning out the fastest, because the human cost of working with it hasn't been accounted for.
This is technostress in its modern form. The technology is new. The psychological mechanism is exactly what Brod described in 1984.
The Gap in Your AI Rollout Is Where the Technostress Lives
Most AI strategies are built around the technology. Which tools to deploy, which processes to automate, which use cases to scale. The human side, when it's addressed at all, gets reduced to training. Run a workshop. Build a prompt library. Show people how to use the tool.
None of that touches the four dimensions Brod identified. Training doesn't resolve anxiety when the source of the anxiety is a technology that behaves unpredictably or threatens someone's value. Upskilling programmes don't address adaptation stress when the updates outpace the learning. A town hall about AI strategy doesn't quiet the fear of replacement. And no one is designing for overidentification, the slow drift toward machine-like thinking.
My big message, if you are still here, is that the gap in these polished rollouts is psychological readiness, not technical capability. You cannot run an advanced AI system on an anxious, exhausted workforce that hasn't been supported to work with it properly.
What Needs to Change
Closing the gap means treating technostress as a design constraint, not a soft-skills problem.
That starts with acknowledging that the pace of AI change is itself a source of harm to wellbeing at work, and building deliberate breathing room into rollout timelines. It means creating space for people to voice the quiet concerns: the 3am thoughts about relevance, about replacement, about whether they're falling behind. Too often we reach for the same change diagnosis we have always used and frame those concerns as resistance. Closing the gap also means putting in deliberate education and monitoring for overidentification, watching for the signs that people are outsourcing their judgment to the tool rather than using it alongside their own thinking. Leaders in particular need support in this area so that they can support their teams with coaching and feedback, and that's not something that can be trained in a lunch and learn.
At the organisational layer, executives need to give this area the attention it deserves. We need to move past the boardroom conversations and delegation of "we need to do more with AI" and accept that the cost of not getting this right will eclipse whatever savings the automation provided in the first place. A workforce that's too fatigued to think critically, too anxious to push back on bad outputs, and too overloaded to maintain the judgment that AI can't replicate is not an "augmented workforce".
Craig Brod saw this coming in 1984, when the most advanced technology in the office was a beige desktop computer. The question isn't whether technostress is real, or whether it was just a psychological fad of its day; it's why, forty years on, we're still building rollout plans that pretend it isn't. For me, the evidence is staring us straight in the face. And whilst the advancement of this technology really excites me, I cannot help but wonder what our teams will look like in 12 months' time if this is not a priority in 2026.
Sources
Original framework: Brod, C. (1984). Technostress: The Human Cost of the Computer Revolution. Addison-Wesley.
AI technostress in young professionals: "Technostress and Generative AI in the Workplace." Frontiers in Artificial Intelligence (2025).
AI brain fry: Bedard, J. et al. (2026). "When Using AI Leads to Brain Fry." Harvard Business Review / BCG.