
The Death of Thought Leadership (And What Replaces It)

John Winsor argues AI has made it trivially easy to perform expertise without possessing it. He's right - but his solution still centers the outsider.

What I Found

John Winsor, an executive fellow at Harvard Business School and six-time author, published a piece in Harvard Business Review this week arguing that traditional thought leadership is dying. His replacement term is “thought doership,” and his core claim is that AI has made it trivially easy to perform expertise without possessing it. The result is a flood of polished, confident content that sounds authoritative but was never forged through experience. Organizations that keep hiring keynote speakers and buying frameworks end up stuck, because the people advising them have never actually built anything in the domain they’re advising on.

Any Curate post (like this one) that references something behind a paywall will be a bit longer for the sake of anyone who isn’t able to access the original material for whatever reason.

Has AI Ended Thought Leadership? — Harvard Business Review

Why It Matters

Winsor’s point is good, and it matches what I’ve been watching play out in the AI adoption space specifically.

Over the past two years, I’ve sat through conference talks, read industry reports, and watched LinkedIn turn into a firehose of AI adoption advice from people who have never deployed anything to an actual workforce. The tells Winsor identifies are exactly the ones I’ve learned to filter for: no scar tissue (every story is a win), altitude lock (comfortable with macro trends but unable to explain the difference between an agent and an agentic workflow to an analyst a week into their first project), and a rate of expertise accumulation that doesn’t match any plausible timeline of real work. You can absolutely have an AI adoption framework ready to share three months into learning the ropes… but speaking from personal experience, it’s less “hot take” and more “active fire suppression at all times.”

Where Winsor really lands, for me, is his distinction between knowing and navigating. A thought leader can narrate the future of AI in the workplace eloquently. A practitioner can tell you what happened when they tried to onboard 300 users in a month, which assumptions broke by week three, and what they changed as a result. Both kinds of knowledge have value, but organizations are drowning in the first kind while starving for the second, and that imbalance is driving a ton of the failure stories that are so prevalent today.

His observation about the “faux-expert pipeline” also connects to something I’ve been thinking about since I wrote about AI slop last year. The same dynamics that make it easy to generate low-quality AI content make it easy to generate low-quality AI thought leadership. A few hours with a good LLM and you can produce a passable adoption playbook that hits every keyword, cites the right McKinsey reports, and sounds like someone who’s been in the trenches. The output reads well… it just wasn’t earned and it won’t hold up. I’ve got the smoldering plan wreckage to show for that, a couple of times over. The plans all look lovely printed out and put in a folio, though.

The Tension

Here’s where Winsor and I start to diverge. His solution is aimed squarely at executives hiring external advisors. Stop booking keynote speakers, he says, and start hiring “thought doers” who embed with your team for eight-week sprints, co-own experiments, and stay through the build. That’s good advice if you have a budget for outside operators. But it sidesteps a bigger question: what about the people already inside your organization who are doing the work every day, the ones who will continue that work for months and years regardless of how long a “thought doer” works alongside them?

The most valuable AI adoption knowledge I’ve encountered hasn’t come from external advisors or embedded consultants. (Though I know some great ones and absolutely recommend them as the right tool at the right time.) It’s come from the internal operators who built something, watched it succeed or fail in real conditions, and then took the time to share what they learned. The project manager who figured out that meeting summaries were the gateway drug for skeptics. The team lead who discovered that hours saved from stand-ups spread faster than training decks. The educator who redesigned their onboarding after watching the third cohort hit the same wall.

These people aren’t thought leaders. They aren’t really “thought doers” in Winsor’s framing either, because his definition still centers the outsider who comes in to help. The internal practitioner who narrates their own experience is something else: an operator who also reflects. Someone who builds and then makes sense of what they built, publicly, so other people can learn from it.

My Takeaway

What Winsor identifies as the problem, I’ve been trying to build the alternative to.

When I started writing publicly about our Copilot deployment, it wasn’t because I wanted to be a thought leader. It was because I kept hitting problems that nobody in my immediate network was talking about, because they hadn’t been close enough to the work to encounter them, or were too busy solving them to spend time dissecting them. The gap between “AI will transform the enterprise” and “here’s what actually happened when we tried” was enormous. And the only way to close it was for the people doing the work to start talking about it, “scar tissue” and all.

I’ve been revisiting a lot of our early days on this topic over in my LinkedIn WE:AI newsletter, if you haven’t caught it yet, because the problem Winsor names at the industry level plays out in a very specific way when you zoom in on how organizations actually try to get people using these tools.

The future won’t belong to the people who describe it best or even to the external operators who embed for a sprint. It’ll belong to the practitioners who build, reflect, and share, from inside the work, on repeat. That’s not thought leadership. It’s something quieter and harder, and it’s what the field actually needs.
