· Professional Applications  · 8 min read

Aim for the Pain, Not the Pride

Two counterintuitive moves that drove 97% Copilot adoption - recruiting your loudest skeptics and replacing your training program with a five-minute game.

Two counterintuitive moves that drove 97% Copilot adoption - recruiting your loudest skeptics and replacing your training program with a five-minute game.
Loading the Elevenlabs Text to Speech AudioNative Player…

I want to tell you about two decisions I made during our Copilot rollout that often make people tilt their heads when I described them. If you’ve studied change management deeply, the logic behind both of them is sound. But a lot of enterprise adoption programs, especially for AI, aren’t being built by people who’ve had a chance to study change management deeply. They’re built by people who’ve been handed a license count, a training budget, and a deadline. And from that vantage point, both of these moves can seem unintuitive at best!

We’ve pushed Microsoft 365 Copilot past 97% active adoption across our premium M365 Copilot users at Huron, nearly half of our 10,000 employees. Along the way we did a lot of things you’d expect: built curriculum, ran workshops, tracked metrics, briefed leadership. That stuff mattered. But the two moves I keep coming back to, the ones I’d repeat first if I had to start over tomorrow…

Give Your Loudest Skeptics a Front-Row Seat

When we were selecting our first wave of Copilot Champions, the obvious approach was to recruit enthusiasts. People who were already excited about AI, already experimenting on their own, already posting about it in Teams channels nobody asked for. And we did recruit some of those people. But I also went looking for skeptics.

Not passive skeptics. Not the “I’ll wait and see” crowd. I’m talking about people who had opinions, who had voiced those opinions publicly, and who carried real credibility in their teams because of their technical skill. A number of well established IT admins, in our case. People whose peers trusted their judgment precisely because they didn’t get excited about every new thing IT rolled out. Often because they helped roll all those new things out and were painfully aware of how the sausage was made.

The logic was simple, even if it felt risky at the time. If an enthusiast tells you Copilot is great, you nod politely and assume they’d say that about any shiny new tool. If a known skeptic tells you Copilot solved a real problem for them, you lean in. Skeptics have earned a kind of narrative authority that enthusiasts haven’t, specifically because they’re harder to impress. Their endorsement carries weight that no training session or executive email can manufacture. “Wait, you got Bob, to start using Copilot?! Okay, now I’ve gotta see what I’ve been missing.”

But recruiting skeptics was only half the move. The second half is the part that actually made it work. I didn’t point them at Copilot’s flashiest features. I didn’t walk them through the highlight reel of what it could do in Word or PowerPoint or Teams. In fact, they often came to me with fresh release notes in hand and explained exactly why those features still didn’t meet their expectations. Instead, I asked a version of a question I’ve learned to ask every new user: what part of your job do you hate doing?

For my fellow IT admins, the answer came fast. Writing long, carefully worded emails. The kind of cross-functional communication where you need to be diplomatic, where tone matters as much as content, where one misread sentence can derail a project relationship. IT folk are (maybe stereotypically) brilliant at systems thinking, at logic, at building things that work. Many of them will tell you openly that crafting a three-paragraph email to a non-technical stakeholder about a timeline slip feels like pulling teeth. They can do it. They’d rather just be in there right away fixing the problem instead of talking about why it is a problem in the first place.

So that’s where we aimed Copilot.

Not at their code. Not at their core competency. At the task they dreaded most on any given Tuesday. Draft this email for me. Give me the right words to prove I genuinely care about the timeline conversation and free me up to go make it better. Match the tone of the last three messages in this thread so I don’t sound like a different person suddenly showed up.

What happened next is the part I’d build an entire adoption philosophy around if I could. Once Copilot proved itself on the thing they hated, they started reaching for it on other things. Not because someone told them to. Not because a training module suggested it. Because the tool had earned a small amount of trust by solving a genuine pain point, and that trust made them curious enough to try it somewhere else. Then somewhere else. Then somewhere else. Even if it wasn’t outright trust, they were forming the muscle memory that when something annoying or troublesome popped up, they’d open Copilot to at least see if it could help. (Spoiler alert: it often did!)

The skeptic who used Copilot to draft a difficult stakeholder email on Monday was experimenting with meeting summaries by Wednesday. By the following week, they were showing a teammate. And when that teammate heard the recommendation, they heard it from someone they respected, someone who doesn’t hype things, someone who had been openly skeptical three weeks earlier.

That conversion carries more adoption force than any training program I’ve ever designed. A skeptic’s grudging “okay, this actually helped” travels through an organization faster than an enthusiast’s glowing review. I’ve watched it happen cohort after cohort.

The principle underneath all of this: don’t introduce a tool where someone is already competent and comfortable. Introduce it where they’re frustrated. Competence creates resistance because people don’t want help with the thing they’re proud of doing well. Frustration creates openness because people are actively looking for a way out of the thing they dread. Aim for the pain, not the pride.

Replace the Training Program with a Game

Our second counterintuitive move looks even stranger on paper. We built a genuinely comprehensive training program. Four levels of curriculum, fundamentals through advanced automation. Multiple live sessions every week. Recordings, quick-start guides, exercises. But the single most effective thing we did for early activation was play a game.

Here’s how it worked. You get two prompts side by side from an Agent. Both attempt the same task. One is clearly better than the other. Your job is to figure out why. That’s it. No formal program structure. No dataset to prepare. No special setup or login. Just well-written instructions and a few minutes of someone’s attention.

If you’ve ever run enterprise training, you know the unspoken contract. Employees show up because they’re told to, they absorb what they can, they leave, and then the real question begins: will any of this translate into changed behavior when they sit back down at their desk? Completion rates tell you who attended. They tell you nothing about what stuck.

The prompt game worked because it inverted that entire dynamic. Nobody was mandated to play it. There was no completion metric. People played it because it was low-stakes, a little fun, and they could do it in five minutes between meetings. The competitive instinct kicked in for some people. The puzzle-solving instinct kicked in for others. Either way, they were learning the single most important skill in AI adoption, how to talk to the tool, without anyone framing it as “learning.”

Our prompt game was the smallest possible version of dog-fooding applied to enterprise AI. Well before the current gold rush on Agents began, we built this funny little game show host right in Agent Builder with just a brief set of instructions and a deliberate decision not to ground it in knowledge that could easily go stale. We relied on the continual progression of improved model offerings to invisibly upgrade the choices and learning on offer as the technology itself improved. Evergreen, interactive, and no mind-numbing data entry or content refreshing for my team.

I often ended every training session with a Share link for the Prompt Game Agent, regardless of the primary topic being discussed. When I could tell new users that the Agent providing them hours of guided learning was itself built in Agent Builder in under ten minutes, something clicked. They went from “this is a training tool someone built for me” to “wait, I could build something like this for my own team.” That transition, from consumer to creator, is where adoption becomes self-sustaining.

The Thread Between These Two Moves

On the surface, recruiting skeptics and building a prompt game don’t seem related. One is a people strategy. The other is a learning design. But they share something fundamental: both are designed around how people actually change behavior, not how we wish they would.

We wish people would adopt new tools because the tools are objectively better. We wish a well-structured training program would reliably produce changed habits. We wish enthusiasm from leadership would cascade naturally through an organization.

None of that is guaranteed in practice. People change behavior when a tool solves a problem they personally care about, in a moment when they’re open to trying something different. Skeptics writing emails they hate and employees playing a five-minute prompt game between meetings don’t look like an adoption strategy from the outside. They look too small, too informal, too unserious.

That’s exactly why they worked. The bar was low enough that nobody needed courage to try. And once someone tries, once they feel the tool do something useful in their own hands, you’ve crossed the hardest threshold in adoption from knowledge into action. Everything after that is momentum.

Enjoying the read?

I send a short email once a month — behind-the-scenes notes, honest takes, and first word on workshops. No spam, no fluff.

Back to Blog

Related Posts

View All Posts »
The Frontier of Cowork with Copilot & Claude
Narrate

The Frontier of Cowork with Copilot & Claude

Microsoft built their most ambitious agentic feature on Anthropic's technology. The 'who copied who' debate misses the point. The real challenge is helping thousands of professionals shift from asking AI questions to directing AI work.

A Field Guide to Copilot's AI Models
Narrate

A Field Guide to Copilot's AI Models

GPT, Claude, Gemini, reasoning models - Microsoft's multi-model Copilot puts several AI engines at your fingertips. Here's what each does best and when to take the wheel.

Lab #3 - Clean Up Messy Data with 'Edit in Copilot'
Activate

Lab #3 - Clean Up Messy Data with 'Edit in Copilot'

The third hands-on lab in my community resource series gives you a deliberately wrecked spreadsheet and walks you through fixing it with Edit with Copilot - one problem at a time, then all at once. Includes a practice file you can download and break yourself.