
AI Change Management: What 2022–24 Taught Us About Adoption

Thinkscoop Engineering · Oct 30, 2024 · 13 min read

The success of AI initiatives between 2022 and 2024 correlated far more with change management than with model choice. Teams that felt involved adopted faster and pushed the tools further.

Across dozens of engagements from 2022–24, the same pattern emerged: technically similar AI systems saw wildly different adoption depending on how people were brought into the journey. The best systems were introduced quietly, with humility and curiosity. The worst arrived with fanfare and were quietly ignored - or worse, actively resisted by the people who were supposed to benefit from them.

This asymmetry was not about the quality of the AI. We saw mediocre models adopted enthusiastically when the rollout was handled well, and excellent models shunned when teams felt like subjects of an experiment rather than participants in a solution. Change management was the variable that mattered most - and it received the least budget and attention in almost every project we encountered.

Make Teams Co-Authors, Not Audiences

The most successful pilots started with listening. Teams were asked which parts of their job felt repetitive, risky, or frustrating - and which parts they would never want to delegate. The AI features that followed felt like answers to those conversations, not corporate experiments dropped from above. When people recognise their own words in the use-case framing, adoption is almost inevitable.

Practically, this meant spending the first two to four weeks of a project in structured interviews and observation sessions with the people who would use the system. Not just asking 'what do you want?' but watching what people actually did, where they paused, where they made mistakes, and where they expressed frustration. Those observations were worth more than any stakeholder survey.

Frame It as an Experiment, Not a Rollout

Language shapes expectations. Projects framed as 'AI rollouts' or 'digital transformation initiatives' primed people for a binary outcome: it works or it fails. Projects framed as 'experiments we are running together for the next eight weeks' primed people for iteration and honest feedback. The second framing consistently produced more candid, useful input - and lower resistance when things needed adjusting.

A simple reframe that worked

One client changed their internal communications from 'We are deploying an AI assistant to the customer service team' to 'We are running an eight-week experiment with twelve volunteers to find out how AI can reduce the most frustrating parts of customer service work.' Volunteer numbers tripled. Feedback quality doubled.

Respond Visibly to Feedback

The single most powerful driver of adoption trust was visible, fast response to feedback. When a reviewer flagged a problem on a Monday and it was acknowledged - not fixed, just acknowledged - on Wednesday, their willingness to keep submitting feedback increased dramatically. When feedback disappeared into a black hole, people stopped giving it.

  • Acknowledge every piece of substantive feedback within 48 hours, even if the response is 'we are looking at this' (a minimal tracking sketch follows this list)
  • Publish a weekly summary of feedback received and what changed as a result
  • Hold a short fortnightly show-and-tell where recent improvements driven by user feedback are demonstrated
  • Name the people whose feedback led to specific changes - recognition is powerful in small teams
  • Be honest when feedback cannot be acted on, and explain why - people respect candour more than silence
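
None of this needs heavy tooling. As a rough illustration of the 48-hour acknowledgment rule, here is a minimal Python sketch; the FeedbackItem fields and example entries are hypothetical, not a prescription for any particular tracker.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

ACK_SLA = timedelta(hours=48)  # the acknowledgment window from the list above


@dataclass
class FeedbackItem:
    # Hypothetical fields - adapt to whatever tracker the team already uses.
    author: str
    summary: str
    submitted_at: datetime
    acknowledged_at: datetime | None = None


def overdue_items(items: list[FeedbackItem], now: datetime) -> list[FeedbackItem]:
    """Return unacknowledged feedback older than the 48-hour SLA."""
    return [
        item for item in items
        if item.acknowledged_at is None and now - item.submitted_at > ACK_SLA
    ]


if __name__ == "__main__":
    items = [
        FeedbackItem("Priya", "Suggested replies miss refund edge cases",
                     submitted_at=datetime(2024, 10, 21, 9, 0)),
        FeedbackItem("Tom", "Draft summaries read too formally",
                     submitted_at=datetime(2024, 10, 28, 14, 0),
                     acknowledged_at=datetime(2024, 10, 29, 10, 0)),
    ]
    # Anything printed here is a broken promise to the team.
    for item in overdue_items(items, now=datetime(2024, 10, 30, 9, 0)):
        print(f"OVERDUE: {item.author} - {item.summary}")
```

The weekly summary in the second bullet can be generated from the same records, which keeps the published numbers honest.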

Leaders Have to Actually Use the Tools

In every engagement where a team leader personally used the AI tool and talked openly about how it helped them, adoption by the broader team was higher - often significantly. In organisations where leaders endorsed the initiative in an all-hands deck but were never seen actually using it, adoption was slow and fragile.

This is not about performative tech enthusiasm. It is about credibility. When a manager says 'I used this tool to prepare for yesterday's difficult client call and it saved me an hour', that is more persuasive than any workshop or training deck. Identify two or three senior users early and invest in making their experience excellent before a broader launch.

Address Accountability Fears Directly

The most common unspoken fear in 2022–24 AI rollouts was accountability: if the AI suggests something wrong and I act on it, whose fault is it? If my job changes because of AI, does that mean I was doing something wrong before? Teams needed explicit, policy-level answers to these questions before they could use AI tools comfortably.

  1. Publish a clear policy on AI-assisted decisions: who is accountable when an AI suggestion leads to an error
  2. Make it explicit that using AI tools well is a performance positive, not a signal of prior under-performance
  3. Provide a formal channel for reporting AI errors without fear of blame - not just a generic feedback form
  4. Define clear categories: what the AI is authorised to do autonomously vs. what always requires human sign-off (a sketch of such a matrix follows this list)
  5. Update job descriptions and performance frameworks to reflect AI-augmented work, rather than leaving a policy gap
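
To make the fourth point concrete: the autonomy vs. sign-off split can live as a small, checkable structure rather than a paragraph buried in a policy document. The sketch below is illustrative Python; the action names and their assignments are hypothetical, and the real matrix has to come out of the policy work itself.

```python
from enum import Enum


class Authority(Enum):
    AUTONOMOUS = "autonomous"        # the AI may act without review
    HUMAN_SIGNOFF = "human_signoff"  # a named person must approve first


# Hypothetical action categories for a customer-service assistant.
AUTHORITY_MATRIX = {
    "draft_reply": Authority.AUTONOMOUS,
    "summarise_ticket": Authority.AUTONOMOUS,
    "issue_refund": Authority.HUMAN_SIGNOFF,
    "close_account": Authority.HUMAN_SIGNOFF,
}


def requires_signoff(action: str) -> bool:
    # Unknown actions default to sign-off - the safe failure mode.
    return AUTHORITY_MATRIX.get(action, Authority.HUMAN_SIGNOFF) is Authority.HUMAN_SIGNOFF


assert not requires_signoff("draft_reply")
assert requires_signoff("issue_refund")
assert requires_signoff("brand_new_action")  # default-deny
```

Defaulting unknown actions to human sign-off matters: the policy gap in point five usually appears exactly where a new capability arrives before anyone has classified it.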

What Good Adoption Looked Like by 2024

By 2024, the AI programmes that had followed these principles were showing measurable differences: higher usage rates, faster iterations from feedback, lower incident rates from AI errors caught by engaged users, and - importantly - employees who were actively advocating for more AI features rather than tolerating the ones they had. That advocacy was the real signal that change management had worked.

The organisations that treated change management as an afterthought - a one-day training session tacked onto a six-month build - never achieved those outcomes. The ones that treated it as a parallel workstream from day one, funded and staffed alongside the technical work, consistently did.


Key takeaways

  • Front-line teams cared more about trust and workload than AI as a concept
  • Pilots framed as experiments generated better feedback than top-down rollouts
  • Visible response to feedback built more trust than glossy training decks
  • Leaders who personally used the tools drove higher adoption
  • Clear accountability policies reduced fear around mistakes