
Human-centering AI & Organizations: What a British Mystery, the Founding Fathers, and Anthropic’s CEO can teach us about the next wave of AI use in the workplace
A season 3 episode of Beyond Paradise, a light, pleasant British crime series, kicks off with the main character asking his subordinate “Why didn’t the call come through us?” when told about a car with a body being found in a river. The call had been routed through the county’s central station instead of the local, and the detective inspector instantly wanted to know why.
What’s interesting is that the writers didn’t see fit to have their main character first ask whether his colleague knew whose body it was, or even whether he was trying to find out. Instead, it seemed most realistic to their plot to have the character ask a question about organizational boundaries. Indeed, that was the B plot of the entire episode (it was a silly one).
Humans do this all the time. As much as we want to be human-centered, the lack of values alignment across human groups like shareholders, administrators, front-line staff, and customers frequently results in all of us rallying around a sort of pretend human: the organization.
We use organizations as our proxy and our bogeymen. We talk about their values, their voices, and their strategic goals. We defend them, and we even sometimes claim to love them.
The system doesn’t work that well, as organizations are famously dysfunctional, but it has worked well enough until recently. Now we all seem to be aligning around one certainty: even though we can’t figure out how best to balance the competing values of human groups, we agree that we should certainly not center our strategic product, service, program, or system solutions around our new consumer obsession: AI.
This makes sense: we want AI to remain a tool that serves us, not a platform we serve. But as we feed a platform more and more of our disorganized thoughts and raw data, the platform could easily become The Organization: its personality, its voice, its strategic decisions.
But organizations, like AIs, are not human. They never live and they never die. They are not concerned about cancer or whether their children are happy. They do not feel any type of love. Until now, as dominating as organizations are in our lives, they have been kept in check by the lack of a single voice and point of view. In effect, organizations are perpetuated through what they offer to the public, internal and external marketing decks, performance plans, and codes of conduct.
But now, AI could take on the role of Organization. Wondering what The Org should do? Ask the AI trained on all the datasets someone could get their hands on. Wondering if The Org would accept that type of behavior from a subordinate? Ask the disembodied proxy of its founder. If organizations are now given a voice and personality through AIs, things could become very dark very quickly.
But this will not necessarily happen. At least, not in functional organizations. Consider the recent statements of Anthropic CEO Dario Amodei. He observes that “…interpretability’s actually something businesses are interested in…the ability to see inside the model is very appealing because it helps to reduce the amount of unpredictability.”*
The word “actually” here is telling. Anthropic’s founders are among the many AI boosters (Ishmael Interactive included) who have long advocated for expanded interpretability in AI models. Not because anyone is particularly on a mission to save humanity, but because the answer to responsibly integrating AI into workplaces is not to make it less human and more machine-like (that would defeat the purpose of AI), but to make it more human: more accepting of its own fallibility and more questioning of its own reasoning.
This ability to hold confidence and self-doubt at once is one of the cornerstones of humanity and a focus of Enlightenment thinkers like James Madison, Thomas Jefferson, and John Jay. To change one’s mind when presented with evidence contrary to what one previously thought is one of the delights of existence. While we cannot teach AI delight, we can teach it parameters for doubt and change. And we should.
Without both the training and the guts to push back on AIs, human actors risk ceding authority and power to non-human actors that do not—cannot—participate in the human experience but can dominate our existence if we let them. By training and empowering a workforce that questions AI responses, and by creating organizational checks on AI dependency at both the strategic and individual levels, we can avoid this massive pitfall of a wonderful modern technology. It all starts with questioning AIs on a daily basis and with requiring that AI companies permit interpretability in their models. The humans who make up organizations may not all share each other’s values, but we seem to have one thing in common: the knowledge that AI is a tool, not an oracle. We should treat it that way, actually.
*Dario Amodei, Money Talks podcast from The Economist, 31 July 2025. [00:15:04]
What we’re reading this week
Artificial intelligentsia: an interview with the boss of Anthropic, Money Talks podcast, The Economist (31 July 2025)
The Bitter Lesson [of AI Research] by Rich Sutton (19 March 2019)