Reflection & Growth · 7 min read

Article Scrapbook, Vol 1

A curated collection of notes, reflections, and connections from five recent articles that helped frame my AI perspective as we close out 2025.

AI Slop - How Every Media Revolution Breeds Rubbish and Art

https://www.scientificamerican.com/article/ai-slop-how-every-media-revolution-breeds-rubbish-and-art/

  • Key Thesis: “AI slop” is not a new problem. It’s a predictable pattern that follows every major media revolution (e.g., printing press, internet).

  • The Pattern: First, the new tool “floods the zone” with a ton of low-quality “rubbish.”

  • The Payoff: Eventually, the “art” (i.e., the high-value, transformative use) emerges from that excess.

  • Team Takeaway: We shouldn’t get discouraged by the sheer volume of low-quality AI content. Our job is to ignore the “slop” and focus on finding (or creating) the high-value, emergent uses.

This caught my eye because I wrote about exactly this tension in my “Stories, Technology, and the Human Experience” post. We’re at that inflection point where “we have to exercise our own knowledge and restraint to avoid contributing to the growing tidal wave of ‘AI slop’ drowning out so many genuine stories.” The challenge isn’t just creating better AI outputs - it’s developing organizational judgment about what’s worth creating in the first place. It’s a good reminder, though, that there are a lot of lessons to be learned in the act of weathering a new means of creation, and some of them may very well shape the foundations of ‘good’ work tomorrow.

That doesn’t absolve deliberate abuse of AI tools, though, nor does it solve how we push through a new form of media creation that doesn’t just lower the barrier to entry but practically drops the cost of creation to zero on a global scale.

One Line to Remember: “The ordinary act of making at scale will always include waste. But with work and luck, it’ll also produce the seeds of the next thing.”

For AI Productivity Gains, Let Team Leaders Write the Rules

https://sloanreview.mit.edu/article/for-ai-productivity-gains-let-team-leaders-write-the-rules/

  • The Problem: There’s a major gap between corporate AI policies and what teams actually need to get work done.

  • The Thesis: A hybrid governance model is essential.

    1. Corporate (Top-Down): Sets the overall, broad AI guidelines (e.g., security, ethics).

    2. Team Leaders (Bottom-Up): Must be empowered to create their own specific rules for their teams.

  • The “Why”: Team leaders are the only ones with the “local context” and understanding of daily risks. They are the ones who can apply judgment and figure out how to actually integrate the tools to get productivity gains.

The balance between user innovation and extensive governance is an ever-present conversation, especially as we dive deeply into the confluence of automation and AI with agentic capabilities. I saved this article in particular for its nuanced stance that no one-size-fits-all policy works, and that the teams who are building directly should be empowered to write their own rules.

Our vision for governance continues to be a set of high-level safety rails that are largely invisible to the average user who just wants to learn about AI and improve their own personal productivity. The rails should be just visible enough that everyone feels safe, knowing they won’t get in trouble for ‘breaking something’ and that there’s clear guidance on who to contact when they’re naturally ready for a higher level of permissions.


5 Critical Skills Leaders Need in the Age of AI

https://hbr.org/2025/10/5-critical-skills-leaders-need-in-the-age-of-ai

  1. Adaptability
  2. Strategic Data Acumen (not just analysis, but knowing what questions to ask)
  3. Empathy
  4. Ethical Judgment (e.g., managing algorithmic bias)
  5. Continuous Learning

A personal pick aimed squarely at my own role leading a genAI team at Huron Consulting. It’s full of great examples of AI adoption from leaders across hugely successful companies, from the usual suspects like Amazon and Microsoft to PepsiCo and Russell Reynolds.

All 5 items are never-ending journeys of practice and growth, but I’m absolutely calling out #2 for myself in 2026 as I grow not only my understanding of the colossal scope of the work done at Huron but also my ability to convert that knowledge into shareable data for analysis and action! I’d love recommendations on more learning materials here if anybody has personal favorites.


AI Is Changing the Structure of Consulting Firms

https://hbr.org/2025/09/ai-is-changing-the-structure-of-consulting-firms

This is a critical read. It uses consulting as a case study, but it’s a potential blueprint for all knowledge work.

  • Central Idea: AI is automating the traditional “junior-level” tasks (research, data modeling, analysis).

  • The Effect: This automation breaks the traditional “pyramid” structure (many juniors at the bottom, few partners at the top).

  • The New Model: The “Obelisk.” This structure is leaner, taller, and has fewer layers. It’s built around three core human roles:

    1. AI Facilitators: Early-career people who are fluent in the AI tools, data pipelines, and workflows.
    2. Engagement Architects: Experienced managers who define the problem, interpret the AI’s output, and translate it into actionable strategy.
    3. Client Leaders: Senior execs who manage the high-level relationships and firm strategy.

Now I do want to call out that I don’t necessarily think this is the way forward, as it inherently leans toward significant job loss with real human impacts. But it is a model many companies will pursue, and not without merit.

I think there’s a more nuanced opportunity here that I plan to write more on at a later date. In short, there’s a lot of value in companies returning to an older model of investing in junior talent as a show of good faith in building careers, lowering the risk on both sides of the equation. Hiring fresh-start employees and training them through the act of implementing AI solutions for the business would be an amazing accelerator, threading best-practice knowledge throughout the entire enterprise while reducing the risk of cold-hiring for the business itself.


One Year of Agentic AI: Six Lessons from the People Doing the Work

https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work

This one is the most practical. It’s based on 50+ actual agentic AI builds, so it’s not just theory.

  • Lesson 1: Focus on the WORKFLOW, not the agent. This is the main takeaway. The real value is in fundamentally changing the entire process, not just plugging in a cool tool.
  • Lesson 2: Agents aren’t always the answer. We need to be selective. They’re best for complex, “multistep decision-making” where inputs are highly variable, not for simple tasks that rules-based automation can handle.
  • Lesson 3: Stop “AI Slop.” This is the key risk. To build trust and avoid low-quality output, we must treat agents like “new employees.” More notes on this item below.
  • Lesson 4: Verify every STEP, not just the outcome. As agents scale, we have to build in monitoring at each step to catch errors early, not just at the end.
  • Lesson 5: The best use case is the REUSE case. We should identify recurring tasks and build reusable agent components, not endless one-off agents.
  • Lesson 6: Humans are still essential. Our role shifts to oversight, compliance, handling edge cases, and applying judgment.

A great article from McKinsey on working with Agentic AI, but with a few obvious biases. First and foremost, a lot of this advice is focused on top-down agent creation and application. Lesson 3 in particular is very helpful when applied to company-wide agents provided to employees as tools to do their jobs. It doesn’t land as well with the vision of teams equipped to learn and explore on their own, driving personal productivity in a grassroots fashion.

What’s here is good, but it’s a solid reminder of why this is just one piece in a multi-faceted reading list on AI in 2025.

Closing Thoughts

These five articles are a handful of highlights from the hundreds I’ve read this year: the ones that stuck, the ones that reframed something I thought I understood or called out a tension I’d been feeling but couldn’t quite name.

2025 has been wild for AI - breakthroughs and setbacks, hype cycles and genuine innovations, predictions that aged like milk and quiet shifts that changed everything. This collection is just my personal scrapbook of the year’s chaos: the debates about slop and art, the practical mess of governance, the very human question of what happens to careers when workflows transform, and the practitioners figuring it out in real time.

Your scrapbook probably looks different! We’re all navigating the same storm but from different boats, picking up different lessons along the way.

What made it into yours?
