
The board approved an AI governance framework eighteen months ago. A consulting firm produced a 60-page document covering principles, risk tiers, approval workflows, and oversight responsibilities. It sits in a SharePoint folder that most of the organisation doesn't know exists.

Meanwhile, three production AI systems are running without documented model cards. Two of them were deployed before the framework was written. One was built by a team that left the company. Nobody is quite sure what data any of them were trained on.

This is AI governance theatre. And as the EU AI Act's operational requirements take full effect in August 2026, the distance between governance as documentation and governance as operational reality is about to become very expensive.

The Difference Between a Policy and a System

Governance theatre is characterised by investment in the visible artifacts of governance — policies, principles, committees, frameworks — without building the operational infrastructure that makes governance real.

Real governance has specific properties. It is auditable: every model has documented training data, validation methodology, and approval history. It is monitorable: someone is actively watching model performance and receives alerts when outputs drift from expected ranges. It has teeth: models that fail governance checks are actually suspended, not quietly permitted to keep running while a remediation plan is drafted.

The gap between policy and system is usually a resourcing decision. Building genuine governance infrastructure is expensive. Writing a governance policy is not. Under pressure to demonstrate AI responsibility without incurring the cost of actually achieving it, most organisations choose the policy.

What Regulators Are Actually Looking For

The EU AI Act's requirements for high-risk AI systems are specific and operational. They include:

  • Technical documentation that must be drawn up before the system is placed on the market or put into service

  • Automatic logging of events throughout the system's lifetime

  • Human oversight measures that enable natural persons to effectively oversee the system during its operation

  • Accuracy, robustness, and cybersecurity requirements with ongoing monitoring obligations

The critical word in each of these requirements is operational. The documentation must exist before deployment. The logging must be automatic, not retrospective. The human oversight must be effective, meaning someone must actually be able to intervene when the system produces a problematic output.
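
For the logging requirement, "automatic" is the operative constraint: the record has to be produced by the serving path itself, for every inference, rather than reconstructed later. A minimal sketch of what that can look like using only the Python standard library; `predict_fn` and the event field names are illustrative assumptions:

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_event_log")

def logged_predict(predict_fn, model_id: str, model_version: str, features: dict):
    """Wrap a prediction call so every inference emits a structured event.

    Logging happens inside the call path, so it cannot be skipped or
    backfilled. In a real deployment the handler would ship these
    records to append-only storage.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "features": features,
    }
    try:
        event["output"] = predict_fn(features)
        return event["output"]
    except Exception as exc:
        event["error"] = repr(exc)
        raise
    finally:
        logger.info(json.dumps(event, default=str))
```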

A policy document that describes these requirements without implementing them does not satisfy them. Regulators examining AI systems in banking, insurance, and critical infrastructure will be looking at system architecture, deployment records, and operational logs, not governance frameworks.

Penalties under the EU AI Act reach 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations. For a mid-sized bank or insurer, that is not a theoretical risk.

The Four Gaps Most Organisations Have

Gap 1: No model inventory

Most organisations cannot produce a complete list of the AI systems currently running in production. Systems deployed as experiments that became permanent fixtures. Models maintained by vendors who were subsequently replaced. Shadow AI built by departments that needed a solution quickly.

You cannot govern what you cannot find. The first step in moving from governance theatre to real governance is establishing a model inventory that is actively maintained, not compiled retrospectively when an audit request arrives.
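
An inventory does not need to start as a platform; it needs to start as a record with a named owner. A minimal sketch of the kind of entry worth maintaining; the field names and example values are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """One production AI system: the fields an auditor asks about first."""
    model_id: str
    purpose: str
    owner: str                    # a named person, not a defunct team
    deployed_on: date
    risk_tier: str                # e.g. "high" per the organisation's tiering
    vendor: str | None = None     # None for internally built systems
    training_data_sources: list[str] = field(default_factory=list)

inventory = [
    InventoryEntry(
        model_id="fraud-screening-v2",
        purpose="Flag card transactions for manual review",
        owner="j.doe@example.com",
        deployed_on=date(2023, 4, 12),
        risk_tier="high",
        training_data_sources=["core-banking transactions 2018-2022"],
    ),
]
```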

Gap 2: No model cards

A model card documents what a model does, what data it was trained on, what its known limitations are, how it has been validated, and who approved it for deployment. Most production AI systems in enterprises do not have one.

Without model cards, organisations cannot answer the basic questions a regulator will ask: What was this trained on? How was it validated? Who decided it was safe to deploy?
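
A model card can begin as a structured record rather than a system; what matters is that the fields are mandatory and the approval is named. A minimal sketch with fields drawn from the questions above; the schema is an assumption, not an industry standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelCard:
    model_id: str
    intended_use: str
    training_data: str            # provenance, time range, known gaps
    known_limitations: list[str]
    validation_method: str        # how performance was established
    approved_by: str              # a named accountable person
    approved_on: date

    def __post_init__(self):
        # A card with an empty approval field is documentation theatre:
        # refuse to construct it rather than letting the gap persist.
        if not self.approved_by.strip():
            raise ValueError(f"{self.model_id}: approval must be named")
```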

Gap 3: No monitoring in production

Models trained on historical data will drift as the world changes. A fraud detection model trained on pre-pandemic transaction patterns behaves differently in a post-pandemic environment. A demand forecasting model calibrated before a supply chain disruption will produce systematically wrong outputs after one.

Monitoring model performance in production (tracking output distributions, comparing predictions to outcomes, detecting when a model's behaviour has changed materially) is operationally demanding. Most organisations don't do it. They find out a model has drifted when the business outcome it was supposed to improve starts getting worse.
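
One widely used way to track output distributions is the population stability index, which compares current model scores against a baseline captured at validation time. A minimal sketch assuming numpy; the 0.1 and 0.25 thresholds are common industry conventions, not regulatory figures:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Measure how far current model outputs have shifted from a baseline.

    Values near 0 mean the distributions match. By common convention,
    > 0.1 warrants investigation and > 0.25 is treated as material drift.
    """
    # Fix bin edges from the baseline so both samples are binned identically.
    edges = np.linspace(baseline.min(), baseline.max(), bins + 1)
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores

    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)

    # Clip empty bins to avoid log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)

    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```

In production this runs on a schedule against recent outputs, with the baseline frozen at validation time, and a breach alerts the model's owner rather than landing in a quarterly report.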

Gap 4: No real override mechanism

Governance frameworks typically include language about human oversight and override capabilities. In practice, many AI systems are deeply embedded in workflows in ways that make override operationally difficult. The override mechanism exists in the policy. It doesn't exist in the system.

This is particularly acute in automated decision systems. An AI that recommends loan terms, flags transactions, or determines insurance premiums needs a mechanism by which a human can review and reverse the recommendation without disrupting the downstream workflow. Building that mechanism after deployment is significantly harder than designing it in from the start.
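
Designing the override in from the start mostly means giving every automated recommendation an explicit state a human can change, and making downstream systems consume the final decision rather than the raw model output. A minimal sketch with illustrative names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class DecisionState(Enum):
    AUTO_APPLIED = "auto_applied"
    HELD_FOR_REVIEW = "held_for_review"
    OVERRIDDEN = "overridden"

@dataclass
class Decision:
    """Downstream systems read `final`, never the raw model output."""
    case_id: str
    model_recommendation: str
    final: str
    state: DecisionState
    audit_trail: list[str] = field(default_factory=list)

    def override(self, reviewer: str, new_outcome: str, reason: str):
        """A named human reverses the recommendation, with a recorded reason."""
        self.final = new_outcome
        self.state = DecisionState.OVERRIDDEN
        self.audit_trail.append(
            f"{datetime.now(timezone.utc).isoformat()} {reviewer}: "
            f"{self.model_recommendation} -> {new_outcome} ({reason})")
```

Because the workflow consumes `final`, a reversal propagates without disrupting anything downstream, which is exactly the property that is hard to retrofit.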

Moving From Theatre to Reality

The transition from governance theatre to operational governance is not primarily a technology challenge. It is a prioritisation challenge.

Organisations that have made the transition consistently describe the same sequence: they started with the model inventory, establishing what was actually running in production. They then documented the highest-risk systems first, building model cards and establishing monitoring for the models where a failure would have the most significant consequences. They built override capabilities into new deployments as a design requirement, not a retrofit.

None of this is technically complex. It is organisationally demanding because it requires someone to own it — not as a compliance function that reports to legal, but as an operational function that sits alongside the teams running AI in production.

The EU AI Act doesn't require perfect governance from day one. It requires demonstrable progress and genuine operational controls. The organisations that will struggle are not those that haven't finished building their governance infrastructure. They are the ones that have confused building a policy with building a system.
