AI angst...and the question lawyers aren't asking about AI
- Anna Wesson
- Mar 18
- 2 min read
Everyone is talking about what AI will do to your firm. Almost no one is talking about what it is already doing to your people.
The dominant fear around AI in professional services goes something like this: the technology will replace human jobs. It is repeated so often it has become accepted as fact. But new research tells a more nuanced - and more urgent - story.
People are not primarily afraid of being replaced by AI. They are afraid of being replaced by colleagues who use AI better than they do. That is a different problem, and it creates a very different dynamic inside teams.
Studies show that professionals in finance and law score among the highest on measures of AI "angst" - anxiety about what the technology means for their roles and their futures. When that anxiety has nowhere to go, people go quiet. They do not ask questions. They do not admit what they do not understand. That silence has a name: low psychological safety.
"The issues are increasingly not with the software - they are in the space between the software and the people using it."
Imagine two teams navigating an AI rollout. In the first, people worry that asking a basic question will mark them as behind. Concerns go unspoken. Anxiety accumulates. Adoption stalls. In the second, the leader has built an environment where not knowing is treated as the starting point for learning. Questions are encouraged. Mistakes are shared. People experiment, compare notes, and move forward - imperfectly but together.
The difference between those two teams is not the technology. It is the culture the leader has built around it.
Harvard Business School professor Amy Edmondson - who defined psychological safety - and researcher Jayshree Seth have turned their attention to exactly this. Their conclusion is pointed: organisations that treat AI as a purely technical challenge are missing the point. Introducing AI without attending to how people relate to it is like adding a new team member and never communicating with them. The integration fails - not because the tool is wrong, but because the human conditions for using it well were never created.
For law firm leaders, this is both a warning and an opportunity. The firms that will get the most from AI are not necessarily those with the biggest budgets or the most sophisticated tools. They are the ones whose people feel safe enough to say "I do not know how this works," to challenge an AI output that does not sit right, and to share what they are learning - including what is going wrong.
Building that culture is a leadership task, not a technology task. The question is not whether your firm has adopted AI. The question is whether your people feel safe enough to use it honestly. If the answer is uncertain, that is where the work begins.