
Confidence in agentic AI builds quickly. The demonstrations are compelling, with systems that sequence tasks, negotiate constraints, and adapt to feedback with remarkable fluency. But that confidence also masks a risk. Agentic AI falls short when organizations misunderstand what trust actually demands once autonomy enters the system.

“The demonstrations are genuinely impressive,” says Daniel J. Jacobs. “It feels like the future arriving.” Once these systems leave controlled pilots, they collide with operating environments shaped by legacy data, informal processes, and human relationships that rarely surface in workflow diagrams. For Jacobs, founder of the IT strategy and transformation consultancy Starkhorn, this is the point where trust becomes the real constraint.

When Automation Meets Invisible Work

The lesson became clear during a pilot Jacobs oversaw to automate a procurement workflow inside a multi-entity organization. On paper, it was a textbook use case. The process was repetitive, rule-bound, and slowed by layered approvals, supplier back-and-forth, and data reconciled manually across systems that didn’t talk to each other. Early results reinforced expectations. The agent sequenced approvals, drafted supplier responses, and reconciled records faster than any human team could, delivering speed and technical accuracy that looked like a clear win.

“What the agent missed was everything the humans had been doing without realizing it,” Jacobs says. “They were preserving relationships, interpreting signals, and exercising judgment that never appeared in the documented process.”

The issues that surfaced were cumulative. Supplier relationships became strained as responses, while factually correct, missed nuance and tone. Decisions were escalated to human managers without enough context to explain why the agent had acted as it did, forcing them to reverse-engineer its logic. Inconsistent data across systems was flagged but not resolved, pushing additional work back onto already stretched teams.

The Discomfort of Partial Success

This is where many organizations misread the moment. Agentic AI works well enough to inspire confidence, yet it occupies an uncomfortable middle ground between abandoning it and accelerating it. “The disappointment isn’t about the technology,” Jacobs says. “It’s about the gap between what we hoped for and what was actually possible.” Autonomy, he argues, is a relationship that must be built through attention, constraint, and earned trust. That reality is harder to accept than outright failure. If the system didn’t work at all, leaders could simply move on. Instead, they face a technology that works just well enough to make retreat feel wasteful, but not well enough to run unsupervised. The business case assumed speed. What the deployment actually demanded was patience.

Resistance as a Signal, Not an Obstacle

One of the most revealing dynamics in agentic AI deployments is human resistance. Some teams embrace the tools immediately, while others push back in ways that often frustrate leadership. “The pushback was protecting something real,” Jacobs says. “People’s sense of competence, their relationships, their understanding of how work actually gets done.” When an agent automates a task, it implicitly claims that the task was routine and mechanical. For someone who performed that work with skill and judgment, that claim can feel like erasure. Resistance is often a signal that leaders have underestimated the complexity of the work or overestimated what the agent can handle.

Building the Relationship Agentic AI Requires

The challenge with agentic AI isn’t technical. It’s relational. Every agent operates within a web of human relationships it can’t see, and its outputs ripple through that web in ways that are difficult to predict. Trust isn’t built by expanding scope. It’s built by investigating failures honestly, listening to the people whose work is being reshaped, and resisting the pressure to scale before the organization genuinely understands what it’s scaling. “It also requires honesty about what we don’t know,” Jacobs says. “These systems are genuinely novel. Anyone who claims certainty about how this will unfold is selling something.”

Why This Matters at the Leadership Level

What remains genuinely hard is knowing when to trust the system. Agents don’t hedge, caveat, or signal doubt the way humans do. They project confidence regardless of whether they’re right, which makes oversight not just necessary but cognitively exhausting. There is also a longer-term risk that is easy to overlook. If agents handle routine work, humans may lose the skills needed to manage exceptions. If people become reviewers of machine output rather than decision-makers, judgment can erode over time.

The pressure to deploy continues to intensify. Boards want progress, competitors are moving, and the technology itself is advancing faster than most governance frameworks can accommodate. Jacobs believes the leaders who navigate this well will not be the fastest adopters, but the ones willing to hold these tensions openly rather than pretending they’re resolved. “The systems can generate outputs,” Jacobs says. “They can’t understand the humans they affect. That responsibility remains with us.”

Follow Daniel J. Jacobs on LinkedIn
