
Designing an Enterprise AI Operating Model (2023–2025)

Thinkscoop Engineering Jun 10, 2025 13 min read

From 2023 to 2025, the most successful enterprises stopped treating AI as a side project and gave it a real operating model: clear ownership, budget, guardrails, and roadmaps.

Between 2023 and 2025 we watched AI move from experiment to expectation inside large organisations. The shift did not happen because models got marginally better - it happened because leaders started treating AI like any other strategic capability, with a real operating model behind it, not just a steering committee and a vague mandate.

The enterprises that struggled most in this period were those that tried to scale AI without first answering basic governance questions: who owns an AI system once it is live, how do new AI projects get funded and prioritised, and what happens when an AI system causes harm or makes a costly mistake? The organisations that answered those questions early moved faster and more safely than those that improvised them under pressure.

Three Layers of an AI Operating Model

  • Foundation: platform, governance, security, and evaluation shared across the company - the shared infrastructure that every AI product is built on top of
  • Domains: cross-functional squads that own AI features in a specific business area - combining product, engineering, and domain expertise
  • Portfolio: a simple process for intake, prioritisation, and funding of new AI bets - preventing random acts of AI while enabling experimentation

The enterprises that made tangible progress by 2025 were rarely the ones with the flashiest AI labs. They were the ones with boring, well-defined answers to questions like "who owns AI incidents?" and "how do AI projects get funded and staffed?" That operational clarity made everything else easier.

The Foundation Layer: What Needs to Be Shared

The foundation layer prevents every team from solving the same problems independently. Without it, organisations end up with five different evaluation frameworks, three different model access policies, and no common understanding of what governance means in practice. With it, teams can focus on domain problems rather than infrastructure.

  • A shared model access gateway with usage tracking, access controls, and cost attribution (a minimal version is sketched after this list)
  • A common evaluation framework with shared rubrics that can be adapted by domain teams
  • A governance policy that covers data usage, model selection approval, and incident response
  • Security standards for AI systems including data residency, access logging, and PII handling
  • A shared component library of reusable prompts, retrieval patterns, and agent building blocks
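
To make the first of these concrete, here is a minimal sketch of what a shared model access gateway can look like. Everything in it is illustrative: the `ModelGateway` class, the model names, and the per-token prices are assumptions for the example rather than any real provider's SDK or price list, and the actual provider call is stubbed out.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative per-1K-token prices; real figures come from your provider's price list.
COST_PER_1K_TOKENS = {"model-a": 0.005, "model-b": 0.003}

@dataclass
class UsageRecord:
    team: str          # cost-attribution key
    model: str
    tokens: int
    cost_usd: float
    timestamp: str

class ModelGateway:
    """Single entry point for model calls: checks access, tracks usage, attributes cost."""

    def __init__(self, allowed_models_by_team: dict[str, set[str]]):
        self.allowed = allowed_models_by_team
        self.ledger: list[UsageRecord] = []

    def call(self, team: str, model: str, prompt: str) -> str:
        # Access control: teams may only use models they are approved for.
        if model not in self.allowed.get(team, set()):
            raise PermissionError(f"{team} is not approved for {model}")
        response, tokens = self._invoke(model, prompt)
        cost = tokens / 1000 * COST_PER_1K_TOKENS[model]
        self.ledger.append(UsageRecord(team, model, tokens, cost,
                                       datetime.now(timezone.utc).isoformat()))
        return response

    def _invoke(self, model: str, prompt: str) -> tuple[str, int]:
        # Stub standing in for the real provider SDK call.
        return f"[{model} response]", len(prompt.split())
```

The design point is that every call flows through one place, so access policy, usage tracking, and cost attribution happen once rather than being re-implemented by each team.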

Domain Teams: Embedding AI Expertise in the Business

Central AI platform teams are necessary but not sufficient. The organisations that made AI stick in their business units embedded AI capability directly into cross-functional product teams - not as a separate AI squad reporting to IT, but as practitioners who sat with the domain team, understood its workflows, and owned the AI features it shipped.

This model - a small central platform team plus embedded domain practitioners - scaled better than either extreme (a central team that tried to build everything for everyone, or fully autonomous domain teams reinventing foundations). It distributed knowledge while maintaining coherence.

Portfolio Management: Stopping Random Acts of AI

By 2024, many large enterprises had dozens of AI pilots in flight. Without a portfolio process, these pilots competed for the same engineers, infrastructure, and data governance attention. Priorities were set by whoever had the loudest sponsor, and the AI that was actually highest value often was not the AI that got built. The enterprises that fixed this used a lightweight portfolio process:

  1. A lightweight intake form for new AI proposals: business value hypothesis, data dependencies, estimated effort, and required governance sign-offs
  2. A quarterly prioritisation meeting with representatives from business, engineering, and risk
  3. A shared scoring framework that weights impact, feasibility, and strategic alignment - not just excitement (a minimal version is sketched after this list)
  4. Clear criteria for when a pilot moves to production, gets another iteration, or gets parked
  5. A portfolio dashboard showing the status and outcomes of all active AI investments
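
As a sketch of item 3, a scoring framework can be as small as an agreed weight table and a ranking loop. The weights, proposal names, and 1–5 ratings below are all hypothetical; the point is that the weights are fixed before proposals arrive, not negotiated per pitch.

```python
# Illustrative weights, agreed up front rather than adjusted per proposal.
WEIGHTS = {"impact": 0.4, "feasibility": 0.35, "strategic_alignment": 0.25}

def score_proposal(ratings: dict[str, int]) -> float:
    """Combine 1-5 ratings into a single prioritisation score."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError(f"expected ratings for {sorted(WEIGHTS)}")
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Hypothetical quarter's intake.
proposals = {
    "invoice-triage-copilot": {"impact": 4, "feasibility": 5, "strategic_alignment": 3},
    "open-ended-research-agent": {"impact": 5, "feasibility": 2, "strategic_alignment": 3},
}

# Rank the intake: exciting but infeasible bets sink on their own.
for name, ratings in sorted(proposals.items(), key=lambda kv: -score_proposal(kv[1])):
    print(f"{name}: {score_proposal(ratings):.2f}")
```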

Budgeting for the Unsexy Parts

The single most common operating model failure we saw between 2023 and 2025 was underfunding the non-glamorous parts of AI operations. Evaluation, governance, change management, and ongoing monitoring were consistently the first things cut when project budgets got tight. They were also consistently the things whose absence caused the most expensive failures.

A useful rule of thumb

If your AI project budget does not include explicit line items for evaluation infrastructure, governance review, and change management, it is almost certainly underfunded for production success. A reasonable target: 20–30% of total project cost for these categories combined. That number will feel high until you experience the cost of not investing there. A quick sanity check for this is sketched below.
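
Here is a minimal sketch of that rule of thumb as a budget sanity check. The line-item names, budget figures, and the `unsexy_budget_check` function are assumptions for the example, not part of any standard tooling.

```python
def unsexy_budget_check(total_budget: float, line_items: dict[str, float],
                        target_range: tuple[float, float] = (0.20, 0.30)) -> str:
    """Check whether evaluation/governance/change-management spend hits the 20-30% target."""
    required = {"evaluation", "governance", "change_management"}
    missing = required - line_items.keys()
    if missing:
        return f"Underfunded: no explicit line items for {sorted(missing)}"
    share = sum(line_items[k] for k in required) / total_budget
    lo, hi = target_range
    verdict = "within target" if lo <= share <= hi else "outside target"
    return f"{share:.0%} of budget on the unsexy parts ({verdict})"

# Hypothetical project: $500k total, with explicit operational line items.
print(unsexy_budget_check(500_000, {
    "evaluation": 60_000, "governance": 40_000, "change_management": 30_000,
}))  # -> "26% of budget on the unsexy parts (within target)"
```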

Building something in this space?

We'd be happy to talk through your use case. No pitch - just an honest conversation about what's feasible.

Book a 30-minute call

Key takeaways

  • An AI programme needs an explicit operating model, not a loose steering committee
  • Central AI platform teams work best when paired with embedded product teams
  • Budgeting for evaluation, governance, and change management is non-negotiable
  • A shared intake and prioritisation process prevents random acts of AI
  • Clear success metrics turn experiments into long-term investments