The Word That Separates AI Leaders From AI Laggards

I recently wrote about Mayo Clinic’s leadership philosophy and the decision to trust people enough to let the answers come from them. It struck a nerve. The follow-up question I kept hearing was a practical one: what does that actually look like inside an organisation?

MIT Sloan’s research points to something specific. It starts with a word.

Most organisations have an AI governance function. Policies. Approval processes. Risk frameworks. Oversight committees.

Mayo Clinic has something different. They call it stewardship.

That single word choice, documented by Tom Davenport and Randy Bean in the MIT Sloan Management Review across multiple studies of Mayo Clinic’s AI journey, turns out to explain a great deal about why some organisations are pulling ahead on AI while others are stalling.

Governance and stewardship look similar on an org chart. In practice, they produce completely different cultures.

A governance model asks: did you get approval for this? A stewardship model asks: how can we help you do this well? One creates a compliance relationship between the AI team and the rest of the organisation. The other creates a consulting relationship — where people with use case ideas come forward, where innovation moves from the edges toward the centre, where trust flows in both directions.

At Mayo Clinic, the stewardship model meant that clinicians and researchers, the people with the deepest knowledge of the work, became the source of AI use cases, not the recipients of centrally mandated tools. MIT Sloan noted that the AI team functions as enablers: providing consulting, education, regulatory guidance, and technical support to anyone who wants to build something.

The result wasn’t chaos. It was scale. Over 200 use cases developed. Headcount grew. Mission advanced.

This is the insight most technology leaders miss. The question isn’t whether your AI strategy is technically sound. It’s whether the people responsible for AI have a governance relationship or a stewardship relationship with the rest of the organisation.

The constraint isn’t the technology. It’s the relationship between the people responsible for AI and the people being asked to use it.

That relationship is a leadership problem. And it’s solvable.

How often do you find that the real blocker isn’t the strategy — it’s the alignment behind it?

Best regards, Brian  
  
PS: I’m running a small, private session for senior technology and cyber leaders on why outcomes rise or fall based on the quality of your conversations, the trust you’ve built, the agreements you make with the people doing the work, and what gets lost in between.

If any of this is top of mind right now, this session is worth an hour of your time.

When Technical Excellence Stops Delivering Outcomes  

📅 Date: Friday, 17 April, 2026 — 10:30 AM AEST

📍 Live on Zoom

Reserve your seat here: Registration Link