Professional Applications · 10 min read

The Frontier of Cowork with Copilot & Claude

Microsoft built their most ambitious agentic feature on Anthropic's technology. The 'who copied who' debate misses the point. The real challenge is helping thousands of professionals shift from asking AI questions to directing AI work.


A few weeks ago I sat down at a conference booth an hour before the doors opened without a single thing to put on my TV display. What I did have was a 100MB file of Copilot analytics and hundreds of our senior leaders who very much wanted to know how their business unit was doing on the AI adoption roadmap. In under five minutes I had Claude Code build a rotating conference display: interactive slides, live usage stats pulled from the Excel sheet, branding guidelines fed in through plain language. The whole thing looked better than anything I could have hand-coded in a day.

An hour later, I used the same tool to draft a workshop deck referencing the same data but pivoting to how to boost adoption by going hands-on with the trending tools. No code involved. Just “here’s what I need to cover, here’s the tone, here are the constraints.” Claude planned the structure, built the content, and I edited from there.

I was straddling a line between developer tool and productivity assistant without really caring about the transition. That line is exactly where the Cowork conversation lives right now.

The Takes Are Missing the Point

If you’ve been following AI news this week, you’ve seen two sizable camps. Camp one says Microsoft copied Claude Cowork and slapped their brand on it. Camp two says this is the future of enterprise AI and Microsoft is leading us there.

Both camps need to adjust their lens a little, so let me take the louder one first.

Yes, Anthropic launched Claude Cowork in January. It rattled enterprise software stocks badly. I still don’t know if I was more inundated by my friends in tech or in finance scrambling to understand what the heck was happening. And yes, Microsoft’s product shares a name, a concept, and literally runs on Claude’s underlying technology. I understand the optics.

But the collaboration timeline tells a different story than the public launch dates. The $30 billion Azure compute deal happened in November 2025. Claude showed up in GitHub Copilot before that. It landed in Copilot Researcher and Copilot Studio before this year started. The trajectory has been pointing here for a while for anyone who’s been watching.

What Anthropic demonstrated with Claude Cowork was proof that the agentic model works for productivity tasks from the everyday to the business defining. What Microsoft is building with Copilot Cowork is proof that the agentic model can work inside enterprise governance boundaries. Those are different problems. For the teams I help lead into the Frontier every day, business and IT alike, the second one is the one that matters at the scale where things get complicated.

Cloud vs. Local Is a Governance Decision, Not a Feature Choice

Claude Cowork runs on your desktop. It accesses folders you point it at. It works with files on your machine, connects to services through MCP connectors, and automates browser tasks through Chrome. For individual users and small teams, this is genuinely powerful.
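For readers wondering what “connects to services through MCP connectors” looks like in practice: the desktop app reads server definitions from a local JSON config file (`claude_desktop_config.json`). A minimal sketch, with an illustrative filesystem server and a placeholder path (your own directories would go here):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents/projects"
      ]
    }
  }
}
```

Each entry launches a local MCP server the agent can call, and the filesystem server only exposes the directories you explicitly list. That per-user, folder-level scoping is exactly the individual-judgment security model discussed in this section.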

But I’ve spent years managing AI adoption for nearly 10,000 users at a consulting firm. I can tell you exactly what happens when a tool that touches files, emails, and workflows runs locally on individual machines without a centralized governance layer.

You get shadow IT. Not the malicious kind. The well-intentioned kind, which is actually harder to manage. People swapping tips and building automations that touch client data without realizing it. Agents configured to read email threads with sensitive project details. Workflows that work beautifully for one person and create compliance headaches when someone on their team tries to replicate them.

Copilot Cowork sidesteps this by running in the cloud, inside the Microsoft 365 tenant. Identity, permissions, and compliance policies apply by default. Actions are auditable. The Work IQ layer means the agent has context about your work, your colleagues, your files, all within the security boundaries your organization already maintains.

When we built our governance framework for Copilot at Huron, the whole philosophy was bowling alley bumpers. Boundaries that are largely invisible to the person experimenting. Sturdy enough that nothing breaks if someone drifts toward an edge. Visible enough that everyone feels safe knowing they’re there. A local-first agent with folder-level sandboxing and individual judgment as the security model is essentially asking every employee to build their own bowling alley. Some will do it well. Most won’t know they need to.

Both Products Are Right About Different Things

I want to be careful here, because there’s a version of this argument that dismisses Claude Cowork, and that would be dishonest and, bluntly, boring.

The grassroots energy that Anthropic’s desktop tool unleashes is real. When someone can point Claude at a messy downloads folder and say “organize this,” or feed it receipt screenshots and get a formatted expense report, the activation energy for trying AI drops to almost zero. No IT ticket. No training prerequisite. No waiting for someone like me to provision access.

That’s the same energy that drove our earliest Copilot adoption. Our best early wins came from people who started experimenting before we had 10% of our current program in place. The people who drove demand weren’t waiting for permission, they just appreciated a knowledgeable ride-along. They tried one small thing, got a result, and told someone about it over coffee. Claude Cowork enables that kind of discovery beautifully at the individual level.

The question is what happens at the organizational level. An individual experimenting with local file automation is learning. A thousand individuals doing the same thing simultaneously, across client engagements and internal projects, with no shared visibility into what’s being accessed or automated? That’s a governance horror movie dressed up as innovation.

Microsoft’s answer is less exciting (to those adventurous explorers). Build the agentic capability into the existing enterprise trust layer. Let the security, compliance, and identity systems that organizations already run do their jobs. It’s the answer that lets me sleep at night as the person responsible for this at scale. Anthropic’s answer is more freeing. Let people discover what’s possible without gatekeepers. It’s the answer that creates the excitement adoption programs need to get off the ground.

The best outcome is probably both, in sequence: let individuals discover agentic AI through tools like Claude Cowork, then give organizations a governed path to operationalize what people learned. Microsoft and Anthropic seem to understand this, given that they built the thing together. I know I still have Claude Cowork plugging away on my own personal hardware and projects with no plans to leave it behind.

The Multi-Model Confirmation

There’s a second story here that I’ve been excitedly watching play out. Microsoft’s flagship agentic feature runs on Claude. Not GPT. Not an OpenAI reasoning model. Anthropic’s technology.

I wrote in October about the shift to multi-model Copilot. I wrote in January about which models do what and when to reach for each one. Both of those posts argued that Microsoft was building an ambient fabric of intelligence rather than a single-vendor AI product. Copilot Cowork is the strongest proof yet.

Think about what Microsoft just did. They took a competitor’s technology, from a company that triggered a massive selloff in the SaaS market Microsoft dominates, integrated it into their most important productivity platform, and positioned it as the engine behind their most ambitious new capability. That takes confidence in your platform strategy and a clear-eyed assessment of which model is best for the job.

For the rest of us, the implication is practical: the model layer is becoming genuinely interchangeable. The value is in the platform, the data graph, the security perimeter, and the workflow integration. If Microsoft will build their headline feature on a competitor’s model because it’s the right tool for agentic reasoning, then the “which AI vendor should we pick” debate is already outdated. The better question is which platform gives you the governance you need with access to the best models as they emerge.

Also, I see you hiding back there in GitHub Copilot, Gemini. Come join the party already, we know it’s just a matter of time at this point.

The Part I Keep Coming Back To

Here’s what occupies my planning as an org-scale implementer instead of just an excited user.

Most of the people I support are spread across a wide range of learning, with very few of them feeling confident that they’re “on the cutting edge.” We spent two years getting nearly 10,000 knowledge workers comfortable with the new ways of thinking, powerful patterns, and potential pitfalls of standard conversational AI, and with the idea that typing a clear description of what you want is a professional skill worth developing. Many of them are still building that muscle.

Now the interaction model is shifting underneath them. Copilot Cowork moves from “ask a question, get an answer” to “describe an outcome, approve a plan, let the agent work across applications while you do something else.” The skills that matter start looking less like prompt crafting and more like management: comfort with delegation, tolerance for imperfect intermediate steps, judgment about when to intervene and when to let it run. For many, it dredges up the “AI is replacing me” concerns we’ve been carefully addressing.

I flagged this in my first newsletter edition. Prompt engineering as a standalone discipline may have a shorter shelf life than we expected. Rather, it becomes the cornerstone of an even larger multiplier. The people who adapt fastest to agentic capabilities will be the ones with instincts for directing work rather than performing every step of it. That’s a different training challenge than “here’s how to write a good prompt,” and most adoption programs haven’t started building for it yet.

The sequencing problem is real. You can’t introduce a more capable tool to people who haven’t yet come to trust the basic one. If someone is still skeptical that Copilot can draft a decent meeting summary, asking them to hand off a multi-step product launch workflow is going to land badly. The capability is ready. The human readiness has to catch up.

This is why the Research Preview approach makes sense to me. Rolling Copilot Cowork out through the Frontier program first, to organizations that have already invested in adoption infrastructure, means the first users will be people whose teams have context for what this tool is and what it requires of them. That’s a better foundation than a consumer launch followed by enterprise cleanup.

What I’m Watching

We are absolutely moving from the era of chatting to the era of doing. Copilot Cowork makes that concrete. Time flies by in AI!

But the shift from assistance to execution changes more than what AI can do. It changes what we need to be good at. It changes what governance has to cover. It changes the stories people tell themselves about their relationship with the tools they use every day.

Getting the technology right is the easier half. Microsoft and Anthropic are clearly working on it, and the early architecture looks solid.

Getting the human side right, helping people transition from “I ask AI questions” to “I direct AI work,” is the challenge nobody has solved at scale yet. The organizations that approach it as a change problem, not a feature announcement, will be the ones where agentic AI delivers on what it promises.

I’m in the Frontier program and watching for Copilot Cowork access. When I get hands-on time, I’ll write about what the experience actually feels like from inside a deployment. Until then, I’m paying less attention to “who built what first” and more attention to what happens when this capability meets a few thousand professionals who didn’t ask for another shift in how they work.

That’s always where the real story gets written.

