Rivermind

Two banks. Similar size. Similar technology budgets. Similar strategic ambitions for AI.

Bank A has a 40-person data science team. It has deployed 23 AI models in the past three years. Twelve are still running. Of those twelve, six are being actively monitored and maintained. The business can attribute clear revenue impact to two of them.

Bank B has an 8-person AI team. It has deployed 4 AI models in three years. All four are running. All four are actively monitored. All four have documented business outcomes that the CFO can point to in the annual report.

Bank B is winning on AI. Not because of technology. Because of operating model.

Why Team Size Is the Wrong Variable

The instinct in most organisations is to respond to AI underperformance by adding capability — hiring more data scientists, acquiring better tools, establishing an AI centre of excellence. The assumption is that the bottleneck is technical capacity.

MIT Sloan Management Review's 2026 analysis of enterprise AI found that the organisations with the largest AI teams are not the ones achieving the highest returns. The differentiating factor is not team size. It is whether AI is treated as an operating model question or a technology question.

Organisations that treat AI as a technology question ask: do we have the right models? Do we have enough data? Do we have the right tools? These are not wrong questions. But they are insufficient.

Organisations that treat AI as an operating model question ask: who owns this in production? How does the output reach the person who acts on it? What happens when the model is wrong? Who decides when to retrain? How is performance measured? These questions determine whether a model creates business value or sits in a server room producing outputs that nobody reads.

The Three Operating Model Differences That Matter

1. Accountability is assigned to business outcomes, not to model performance

In organisations where AI underperforms, the AI team is typically accountable for model accuracy. They are judged on whether the model performs well technically. The business outcome — whether revenue actually increased, whether the fraud rate actually decreased, whether the compliance cost actually fell — is someone else's problem.

In organisations where AI delivers, a named individual is accountable for the business outcome that the AI is supposed to improve. That person may not be technical. They are accountable for the result, which means they care deeply about whether the model is working in production, whether the workflow integration is functioning, whether the team is acting on the outputs. They cannot pass responsibility for the outcome to the data science team.

This accountability structure changes the questions that get asked. It moves the conversation from "is the model accurate?" to "is the business metric moving?"

2. AI is integrated into existing decision processes, not added alongside them

A common failure pattern is deploying AI as an additional input to a decision that is already made through an existing process. The credit committee still meets. The risk model still runs. The AI recommendation is presented alongside these existing inputs. In practice, the decision-makers default to the process they have always used, and the AI output is noted but not acted on.

The institutions that extract value from AI restructure the decision process around the AI output. The credit model doesn't run alongside the AI recommendation — the AI output becomes a required input to the credit model. The AML investigator doesn't review the AI alert as an optional step — the AI triage determines which cases get reviewed first.

This requires a willingness to change how decisions are made, not just what information is available when they are made.

3. The Chief AI Officer reports to business leadership, not IT

MIT Sloan's 2026 benchmark survey found that 38% of large enterprises have appointed a Chief AI Officer or equivalent role, but there is little consensus on where that role should report. Reporting lines are currently split among business, technology, and transformation leadership.

The research suggests that where the CAIO reports matters significantly. When the role sits under IT, it is oriented toward infrastructure, tool selection, and technical capability. When it sits under business leadership, it is oriented toward business outcomes, operating model design, and accountability structures.

Organisations where the CAIO or equivalent reports to the CEO or COO are more likely to have the accountability structures and workflow integration that distinguish high-performing AI programmes from technically sophisticated but commercially ineffective ones.

What This Means for Regional Banks

For mid-market and regional banks, the implication is liberating. You do not need to build a 40-person data science team to compete on AI. You need to be precise about where AI creates business value, ruthless about integrating it into the workflows where decisions are actually made, and disciplined about assigning accountability for business outcomes rather than model metrics.

The ZS research on pharma and banking found that only 40% of AI pilots reach scaled deployment. The gap is not technical. It is organisational. The pilots that don't scale typically fail at the operating model questions: who owns this? How does it change how we work? Who is accountable if the business outcome doesn't materialise?

Bank B isn't winning because it has better AI. It's winning because it asked those questions before it built anything.

