Showing, Instead of Telling
An Experiment in Scaling Expertise
Last week I kept IOL brief because I was building.
I am building. Every day. Experimenting. Running things, seeing what they surface, learning from what they miss. Most of it stays inside the Lab. This week, I’m letting some of it out.
What I’m sharing today isn’t a commercial product. It’s one of my daily experiments. Lightweight, live, and instructive whether it works or not. In this case, it’s The Insider’s Guide to Innovation at Microsoft made interactive. Readers of the book will recognize what I’m doing — developing in the open, a practice shared in the Visual Studio Code story. What gets seen gets tested. What gets tested gets sharper.
A New Experience of/for Expertise
I recently audited the assets of the Lab, tangible and intangible, and found that our intangibles vastly outnumbered our tangibles. That didn’t surprise me philosophically, but it nudged me practically. Intangible assets can be all sorts of things that are hard to capture: emergent, subconscious, instinctual, ephemeral. Expertise itself often lives in responses that fire before you know you’re doing it. For our use case, scaling expertise, it’s the tacit abilities that matter most.
It’s not what experts know that’s hardest to transfer. It’s how they see.
A behavioral scientist doesn’t ask “what’s the stakeholder readiness?” as a checklist item. The question isn’t applied to the situation — it’s the lens through which the situation becomes legible to them in the first place. The expertise isn’t in the answer. It’s in the perception that makes the right question visible before any answer is attempted.
I call this expert perception. Cognitive scientists have studied it under other names for decades. Gary Klein found that experienced firefighters almost never compare options — they recognize the situation and simulate forward.¹ Angus Fletcher, one of my influences, argues that this kind of intelligence is narrative before it’s logical — experts enter problems through causal speculation, not correlation.² The expertise isn’t in the analysis. It’s in what they notice before analysis begins.
Historically, that kind of perception hasn’t been scalable. The Lab’s hypothesis: AI, architected the right way, changes that.
The Experiment
I built an innovation coach. A diagnostic tool with the 77 innovation frameworks from The Insider’s Guide to Innovation at Microsoft embedded in its architecture, powered by a stripped-down version of CORTX — Regenerous Labs’ behavioral intelligence engine.
I ran the same problem statement — a challenge the Lab is facing in recruiting experts — through Innovation Coach and through several leading LLMs.
The LLM output was genuinely useful. Competent, structured, actionable. It returned a smart strategy. It identified an economic misalignment correctly, proposed three viable models, and gave tactical next steps. I think a reader would find it helpful.
Innovation Coach did something very different. It didn’t answer the question I asked. It questioned my question and then answered with an analysis and strategy that addressed what I actually needed.
It surfaced this: our pitch was analytical — a direct mismatch for what it identified as a reactive audience. Analytical framing activates more resistance, not less. The medium we used was undercutting the message, which is why it failed.
It also surfaced a critical assumption we had been making — that the right people would join our ecosystem once the financial model was right. And it showed me why that assumption was almost certainly wrong.
The LLM answered my question and gave me a reasonable path forward. Innovation Coach questioned my problem, reframed it, and put me on another path altogether.
What Did I Learn?
Innovation Coach runs on the same underlying AI models I test against. What’s different is the architecture. I did not just embed the knowledge of a few experts and our book. I layered multiple lenses at different apertures inside the diagnostic sequence that runs before looking for answers. Diverge-Converge-Synthesize in software.
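To make the idea of a layered diagnostic sequence concrete, here is a toy sketch of Diverge-Converge-Synthesize in software. This is not the CORTX architecture; the Lens class, the diagnose function, and the example lenses are names I am inventing purely for illustration, and a real system would call a model where this sketch just pairs strings.

```python
from dataclasses import dataclass

@dataclass
class Lens:
    """One perceptual frame applied to a problem before any answer is attempted."""
    name: str
    question: str

    def apply(self, problem: str) -> str:
        # A real lens would query a model with this framing question;
        # the sketch simply pairs the question with the problem statement.
        return f"[{self.name}] {self.question} :: {problem}"

def diverge(problem: str, lenses: list[Lens]) -> list[str]:
    # Diverge: view the problem through every lens independently.
    return [lens.apply(problem) for lens in lenses]

def converge(observations: list[str], keep=lambda obs: True) -> list[str]:
    # Converge: filter to the observations that challenge the framing.
    # (Toy predicate keeps everything; a real system would score them.)
    return [obs for obs in observations if keep(obs)]

def synthesize(observations: list[str]) -> str:
    # Synthesize: fold the surviving observations into a reframed problem.
    return " | ".join(observations)

def diagnose(problem: str, lenses: list[Lens]) -> str:
    # The diagnostic sequence runs before any answer is generated.
    return synthesize(converge(diverge(problem, lenses)))

lenses = [
    Lens("audience", "Who is the audience, and what cognitive mode are they in?"),
    Lens("assumption", "What assumption does this framing smuggle in?"),
]
reframed = diagnose("Recruiting experts to the ecosystem", lenses)
```

The point of the sketch is the ordering: perception layers run first, and only their synthesized output reaches the answering step.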
So what did I learn? Or, more accurately, what do I think I learned?
In this experiment and others I’ve learned how a layered architecture can augment the human-in-the-loop to not only analyze like an expert, but see like one. And that evidence calls back to the protection paradox I wrote about before: the experts who scale their perception — who embed it, systematize it, make it accessible — will matter more, not less, as AI’s abilities expand. The experts who hoard their abilities will find, eventually, that AI can approximate what they were protecting well enough for most potential clients.
That’s the existential crisis.
Try it
Innovation Coach is live at www.regenerouslabs.com/innovationcoach
Give it a real problem. Then give the same problem to your favorite LLM. See what each one sees — and what each misses.
Tell me what you find!
Connections to The Insider’s Guide to Innovation at Microsoft
Developing in the Open — VS Code’s transparent GitHub development as a learning accelerator; what gets seen gets tested
The 77 Innovation Frameworks — The book’s full framework library serves as the Innovation Coach’s action library
Language as Strategic Tool — “Expert perception” as new vocabulary that changes what we can see and scale
B2Me Journey — The Innovation Coach diagnoses cognitive mode first, consistent with the book’s emotional-before-cognitive principle
Sources
¹ Gary Klein, Sources of Power: How People Make Decisions (MIT Press, 1998). Klein’s Recognition-Primed Decision model, developed from fieldwork with firefighters and military commanders, showed that experienced decision-makers recognize situations as familiar types and simulate one response forward rather than comparing options. gary-klein.com/rpd
² Angus Fletcher, Storythinking: The New Science of Narrative Intelligence (Columbia University Press, 2023). Fletcher argues that narrative cognition — causal speculation rather than correlational reasoning — is a distinct mode of intelligence that precedes and shapes logical analysis. cup.columbia.edu
Also read: A. Mark Williams et al., “Expertise and the Interaction between Different Perceptual-Cognitive Skills: Implications for Testing and Training,” Frontiers in Psychology 7 (2016). Research on perceptual-cognitive expertise demonstrates that experts process environmental information through structured perceptual frameworks that shape anticipation and decision-making before conscious analysis begins. frontiersin.org
A note on how this piece was made: This piece was created with the help of AI — specifically Claude, Perplexity, and a team of expert personas built by Regenerous Labs. Direction, judgment, and final decisions by me. Say it ugly, build it better. Onward!