Distilling leadership wisdom


This month, we’re going to explore a practical technique for learning from leaders you’ll never have access to: distilling their thinking into an AI role you can query on demand.

Throughout history, those who’ve succeeded the most have often had something in common: access to the best people, whether as mentors, advisors, or sounding boards. That access has always been scarce and unevenly distributed. But now there’s an opportunity to democratise it: you can create your own coaches and advisors from some of the smartest minds out there, using nothing more than their interviews, podcasts, and talks.

You’ve probably experienced this: you read an interview with someone whose thinking you admire, the insights resonate deeply, and for a few days you see your work differently, but then the ideas slip away, dissolved into the background of the day-to-day. This technique is about making that wisdom stick, not by memorising it, but by creating something you can return to whenever you need a different perspective on a problem.

To be clear, this isn’t about impersonation, and it’s certainly not about outsourcing your judgment to someone else’s thinking. It’s about having a different set of questions available when you’re stuck: the questions that someone you admire might ask, applied to your own context.

It also opens the door to something else: trying out your thinking with people you’d find challenging to work with, without the friction of actually being in the room with them.

We’ll start with how to choose who to distil, then cover how to gather the raw material from interviews and podcasts, and walk through the step-by-step process for turning it into something usable. Finally, we’ll look at how to build and use the role in practice, and I’ll hint at how the same technique works in reverse: extracting your own thinking from your own body of work.

If you’d like to dig deeper, here are some related articles from the archive:

  • Councils of agents: group thinking with LLMs explores a lighter-weight version of this idea: using AI to simulate multiple perspectives in a single conversation.
  • Use it or lose it covers the importance of not outsourcing your thinking entirely to AI, which is relevant here: this technique augments your judgment, it doesn’t replace it.
  • Coaching is a reminder of what coaching looked like before AI: the push and pull of directive guidance versus helping someone work through their own problems. What we’re building here is a version of that, on demand.

I’ll be honest: this started as a bit of fun. I was curious whether I could make an AI channel the thinking of a founder I admire, and the initial experiment was more novelty than anything serious. But, actually, it turned out to be genuinely useful.

Having those questions available when I’m stuck on a decision, or when I want to stress-test an idea, has changed how I work through problems. What began as an experiment became a tool I actually reach for.

Let’s get going. Or, as Jeff would say: it’s Day 1.

Choosing who to distil

Not every leader is a good candidate for this. The technique works best when three conditions are met: there’s enough source material to work with (a few hours of interviews, ideally more), you genuinely want to apply their thinking to your own context (not just consume their content passively), and their frameworks are somewhat transferable (they think in principles, not just anecdotes).

Founders with extensive podcast appearances are often good candidates, as are authors who’ve done multiple interviews about their books, and executives who speak at conferences. The pattern to look for is someone who’s been asked similar questions from different angles, which surfaces their underlying principles rather than rehearsed soundbites. You want enough material to triangulate on how they actually think.

To give you a sense of what works: Claire Hughes Johnson has Scaling People plus hours of podcast interviews on Lenny’s Podcast and First Round Review, and her thinking is structured enough to extract clear frameworks. Charlie Munger’s “latticework of mental models” is perhaps the most codifiable leadership thinking available, spread across shareholder letters and his Acquired interview. Naval Ravikant has been interviewed extensively on Tim Ferriss and The Knowledge Project, and his ideas on decision-making apply to almost any domain.

They’re working in completely different worlds, but the pattern is the same: there’s plenty of material to work with, their thinking is rooted in principles rather than anecdotes, and what they’ve learned travels beyond their specific context.

Gathering the raw material

The simplest approach is YouTube. Most podcast interviews end up there, and YouTube has a built-in transcript feature: click the three dots below the video, select “Show transcript”, and you get the full text. I copied each transcript into its own file, one per interview, which made it easier to process them separately before synthesising across sources.

If you want to go deeper, there are more elaborate options. Some podcasts publish their own transcripts: Tim Ferriss has an archive of over 800 episodes, and Lex Fridman publishes full transcripts on his site. Aggregators like Tapesearch let you search across millions of podcast transcripts to find every appearance by a specific person. But for most purposes, YouTube and twenty or thirty minutes of copy-paste will get you everything you need.
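If you’re gathering more than a handful of interviews, you can automate even that step. Here’s a minimal sketch in Python using the youtube-transcript-api package (the video IDs are placeholders, and the library’s interface may differ slightly between versions):

from youtube_transcript_api import YouTubeTranscriptApi

# One entry per interview; take the IDs from the YouTube URLs (hypothetical here).
VIDEO_IDS = {
    "interview-lennys-podcast": "VIDEO_ID_1",
    "interview-first-round": "VIDEO_ID_2",
}

for name, video_id in VIDEO_IDS.items():
    # Each entry is a dict with 'text', 'start', and 'duration' keys.
    entries = YouTubeTranscriptApi.get_transcript(video_id)
    with open(f"{name}.txt", "w") as f:
        f.write("\n".join(entry["text"] for entry in entries))

This keeps one file per interview, matching the approach above, and as a bonus the library hands you the text without the timestamps you’d get from copy-paste.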

Here’s what a raw YouTube transcript typically looks like when you copy it:

0:00
today I want to talk about something that
0:02
I think is really important which is the
0:04
idea of working backwards and so you
0:07
know at Amazon we we famously start with
0:10
the customer and work backwards from
0:12
there and I think that's that's
0:14
something that a lot of companies say
0:16
but don't actually do right they they
0:18
sort of pay lip service to customer
0:20
obsession but then when push comes to
0:22
shove they optimise for something else

It’s not pretty: timestamps interrupt every few seconds, there’s no punctuation, and sentences break mid-thought across lines. Don’t worry about cleaning this up yourself: we’ll do that next. What matters is capturing the full conversation.
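That said, if a transcript is long enough to strain a context window, you can strip the timestamp lines mechanically before handing it over. A short sketch, assuming the copy-paste format shown above:

import re
import sys

# Timestamp lines in a pasted YouTube transcript look like "0:00" or "1:02:33".
TIMESTAMP = re.compile(r"^\d+:\d{2}(:\d{2})?$")

with open(sys.argv[1]) as f:
    lines = [line.strip() for line in f]

# Drop the timestamp lines and rejoin the spoken text into one block.
print(" ".join(line for line in lines if line and not TIMESTAMP.match(line)))

Run it as python strip_timestamps.py transcript.txt > clean.txt; the punctuation and speaker labels still come later, in the distillation step.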

The distillation process

The work now is to turn those messy transcripts into something structured: cleaned up, synthesised across sources, and organised into principles you can actually use. The good news is that this can happen in a single conversation. Feed your transcripts to an AI with a prompt like this:

I have transcripts from several interviews with [name]. Please:

1. Clean up each transcript: remove timestamps, fix punctuation, and mark who's speaking (interviewer vs [name])
2. Extract the key principles, mental models, and decision-making frameworks that [name] articulates
3. Identify patterns that recur across multiple interviews — these are likely their core beliefs
4. Organise the output by domain (e.g., decision-making, people, product, strategy)
5. Include direct quotes where they're particularly memorable
6. Output everything as a structured principles document I can reference later

The transcripts are below.

Within minutes, you’ll have a working principles document. Resist the temptation to have the AI research further or expand on each principle: the value is in capturing their thinking from their words, not generic material found online about the same topics. I’ve found it useful to keep the distillation specific: it makes the role you create from it more pointed and useful as a coach.
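If you’d rather run the distillation as a script than paste everything into a chat window, here’s a minimal sketch using the Anthropic Python SDK (the model name, file layout, and token limit are all assumptions; any capable model works):

from pathlib import Path
import anthropic

prompt = Path("distillation-prompt.txt").read_text()  # the prompt above, saved to a file
transcripts = sorted(Path("transcripts").glob("*.txt"))  # one file per interview

# Stitch the prompt and every transcript into a single message.
body = prompt + "\n\n" + "\n\n---\n\n".join(
    f"Transcript: {p.stem}\n\n{p.read_text()}" for p in transcripts
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: use whichever model you prefer
    max_tokens=8000,
    messages=[{"role": "user", "content": body}],
)
Path("principles.md").write_text(response.content[0].text)

The output file becomes the principles document that the next section builds on.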

Building the role

Once you have a principles document, you can turn it into a role: a structured definition that tells the AI how to apply those principles to your questions. If you want to see how this fits into a larger system of roles, I covered my full daily driver setup in the April subscriber edition, but here’s the core structure you need.

A role typically includes:

  • Description: Who this persona represents and why you’re using it
  • Core questions: The questions this person tends to ask when evaluating ideas
  • Mental models: The frameworks they apply
  • Tone: How it should behave (direct, challenging, supportive, etc.)
  • Reference: A pointer to your principles document so the AI has the full context

Here’s a short generic example to give you a flavour of what to aim for:

Founder Lens

Description: Applies the mental models of [leader name] to my current context. Not pretending to be them, but using their frameworks to surface angles I might miss.

Core questions:

- Is this derived from first principles or copied from others?
- Does this require courage? If not, is it ambitious enough?
- What would someone who genuinely cares do here?
- What are the unstated assumptions?

Mental models:

- First principles thinking starts from atomic building blocks
- Change your opinion when you get better information
- The best ideas often feel uncomfortable

Tone: Direct, challenging, focused on clarity over comfort. Asks questions rather than giving answers. Pushes for specificity.

Reference: See context/founder-lens/principles.md

If you’re using Claude Code, this role definition goes in your CLAUDE.md file, and the principles document lives as a separate file in your project. I keep mine in a context/ folder with subfolders for each role.
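To make that concrete, here’s the kind of layout I mean (the names are illustrative; use whatever structure your tool expects):

your-project/
├── CLAUDE.md                  # role definitions, including Founder Lens
└── context/
    └── founder-lens/
        └── principles.md      # the distilled principles document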

Activating a role just means asking for it: “founder lens on this decision” or “what would [name] think about this approach?” The AI reads the role definition and the principles document, then applies that perspective to your question. Other tools will have their own way of handling system prompts and context files, but the structure is the same: a role definition that points to a richer source of material.

Using it in practice

In use, it feels less like a chatbot pretending to be someone and more like having their guidance available when you’re stuck. It often surfaces an angle you wouldn’t have considered on your own, because the person you’ve distilled sees the world differently to you. The value isn’t in the answers, it’s in the reframing: the questions force you to articulate assumptions you hadn’t examined, or to defend choices you’d been making on autopilot.

I find it most useful before committing to a significant decision, even when my own team has reached consensus, because I want to stress-test my thinking before it becomes expensive to change course. It’s also valuable when I’ve been circling the same problem for too long and need a way to break out of my own perspective.

With that in mind, it’s worth considering whether there are founders or notable people whose perspective runs contrary to yours. Including them in this exercise can surface blind spots that a like-minded advisor never would.

Here’s what this might look like in practice. Imagine you’ve distilled Elon Musk’s thinking into a founder lens, and you’re using it to stress-test a timeline your team has given you:

You: Founder lens on this: my team says migrating our payment system to the new provider will take six months. I’ve pushed back but they’re confident that’s realistic.

[Founder]: What’s actually in those six months? Break it down for me. Not the project plan — the actual work. If you had to ship something in six weeks instead, what would you cut?

You: We’d probably skip the automated rollback tooling and do manual rollbacks if needed. And we’d run the systems in parallel for longer rather than building a full reconciliation dashboard upfront.

[Founder]: So the core migration is six weeks, and the other four and a half months are de-risking. Is that de-risking based on something that’s actually happened before, or is it fear of something that might happen?

You: Honestly, it’s fear. We’ve never had a payment migration go wrong, but the consequences would be severe.

[Founder]: Severe how? Quantify it. What’s the actual blast radius if you migrate and something breaks? Are we talking about lost revenue, angry customers, regulatory problems? And what’s the cost of moving slowly — four months of carrying two systems, delayed features, team context-switching?

You: I hadn’t thought about the cost of delay that concretely.

[Founder]: That’s the question. You’re insuring against a risk you haven’t priced, with a premium you haven’t calculated. Maybe the six months is right. But you should know what you’re buying with that time.

The lens didn’t tell you what to do, but it made you defend your reasoning to a different set of priorities.

The mirror

The same technique works in reverse. Instead of encoding someone else’s thinking into a role, you can extract your own: feed your writing, decisions, and documentation to an AI, and ask it to identify your patterns. The output is a principles document that captures how you actually think, not how you imagine you think.

I recently ran this process on my own writing: several years of newsletter articles, fed through the same extraction steps. The output was a set of principles organised by how often they recur. Some I expected to find: “constraints are superpowers” shows up repeatedly, as does the conviction that managers should stay technical. Others surprised me: I hadn’t realised how consistently I return to the idea that autonomy and ownership are non-negotiable, or how often I’ve argued recently for being in the details rather than delegating everything, a marked change from my early years in management.

The value isn’t just in having the list. Your previous writing, whether internal memos or public articles, tends to represent your best self: considered, articulate, thoughtful. The messy day-to-day often pulls you away from that. Having your principles explicit means you can check whether you’re still aligned to what you actually believe, or whether you’ve drifted without noticing.

This is another reason to write, even if only to yourself. Writing has always been a tool for thinking, but now it’s also a tool for self-knowledge: the more you write, the more material you have to analyse, and the clearer the picture of who you actually are. This month’s subscriber edition goes deeper into the inward-facing version of this technique: how to gather your own material, what prompts to use, and what to do with the principles once you have them.

Your turn

The technique is straightforward enough to try this week. So why not give it a go?

  • Find your candidate. Look back through the podcasts and interviews you’ve read, listened to, or watched recently, and see whether anyone you found particularly inspiring could become a go-to coach for your day-to-day.
  • Gather the material. Find two or three interviews with them and run them through the distillation process: clean the transcripts, extract principles, and organise them by domain.
  • Build a simple role. Create a role definition with a handful of their core questions and mental models, and don’t worry about getting it perfect on the first pass.
  • Try it on a real decision. Pick something where you’d value a different perspective and see what surfaces.
  • Iterate. Refine the role based on what’s useful and what isn’t: the first version is never the final one.

Wrapping up

And remember: this isn’t about hero worship, and it’s certainly not about replacing your judgment with theirs. It’s about making the wisdom of people you admire more accessible to yourself, on demand, when you need a different perspective. The insights you read in interviews don’t have to fade anymore; they can become tools you reach for.

Until next time.

Who will be the senior engineers of 2035?


This month, we’re going to explore a question that feels particularly pertinent right now: where are the senior engineers of the future going to come from?

After years of post-Covid layoffs, hiring has slowed across the board as companies wait to see how AI efficiency gains and the economy play out. Unfortunately, juniors are caught up in that slowdown: it’s a hard time to be graduating with a computer science degree.

Meanwhile, AI is absorbing the small changes and bug fixes that used to be perfect training tasks for junior engineers, and the managers who traditionally developed early-career talent are stretched thin or being cut entirely. Viewed purely through a profit-and-loss lens, there’s some short-term rationale to this. However, the long-term consequences are worrying.

We’ll start by looking at the pipeline we used to have: how senior engineers traditionally emerged through years of mistakes, mentorship, and low-stakes learning. Then we’ll examine what’s replacing it, and hypothesise whether AI will actually fill the gap. We’ll explore three possible scenarios for 2035, and finish with what this means depending on where you sit in the industry.

If you’d like to dig deeper, here are some related articles from the archive:

  • Coaching explores how managers develop people at different experience levels, and why the approach needs to change as someone grows.
  • Delegation creates career progression looks at how handing over tasks is an act of trust and an opportunity to learn.
  • Use it or lose it covers the risk of skill atrophy in the AI era, and why deliberate practice still matters.

Let’s explore.

The programmer’s path

For decades, the path was well-worn: you got hired as a junior, paired with someone more experienced, and were given tasks that existed as much for your development as for the work itself.

As part of your ramp-up into real-world programming, you’d follow this loop: write some code (often pairing with others), submit a pull request, get feedback that made you rethink your approach, fix it, and learn something. Over time, through repetition and correction, you built judgment: one of the key skills that separates senior engineers from those with less experience.

Learning with other people was not the only path. Some of the most influential figures in computing taught themselves to code before they were adults, tinkering with whatever hardware they could access; learning through curiosity and obsession rather than formal training. I was no different (although I do not claim to be influential): I learned to code on our family computer, building websites with HTML and simple tools in Visual Basic.

Yet, although they may have begun alone, many autodidacts eventually joined or created teams and companies, and there they learned a different set of lessons: how to collaborate, how to navigate trade-offs, how to build things that outgrow any individual’s contribution. These are lessons that are very hard to learn in formal education or on your own.

However, what both paths shared was this: you had to do the work yourself and with others, and the doing was the point.

What’s replacing it

The traditional pipeline is breaking down in several places at once. Hiring freezes, driven by post-pandemic correction and uncertainty about AI’s impact on headcount, mean fewer junior roles: entry-level tech postings have dropped 67% since 2022, and a Stanford study found that employment for software developers aged 22-25 has declined nearly 20% from its late 2022 peak.

A Harvard study found the effect is even sharper in firms actively adopting AI: junior employment fell 7.7% relative to non-adopters within six quarters. The managers who used to mentor early-career engineers are being cut or stretched across larger teams. And the tasks that once served as training ground, such as the small bug fixes and incremental features, are increasingly being handed to AI tools instead.

For leadership purely concerned with cost, the logic is compelling: AI will keep improving, so we’ll need fewer humans, so why invest in growing them? A LeadDev survey found that 54% of engineering leaders plan to hire fewer juniors, reasoning that AI enables seniors to handle more.

On a balance sheet, the short-term economics make sense. But there’s an incredibly important question that needs answering: what happens to the skills that juniors used to develop along the way, if AI is absorbing the tasks that built them and there are fewer juniors doing the work in the first place?

The answer matters because much of the judgment of an experienced craftsperson comes through being in the details, making mistakes, and learning from them.

Consider judgment under ambiguity, organisational navigation, the instinct for when a shortcut will come back to bite you and when it’s the right call; these aren’t skills you acquire by reading documentation or even prompting AI for answers, but through years of making mistakes in environments where the stakes were low enough to fail safely.

There’s a term that comes to mind for how one progresses from junior to senior: scar tissue. The scars come from shipping something that broke in production and staying up to fix it, from proposing an architecture that didn’t scale and having to rebuild it, from navigating a difficult stakeholder relationship and learning, the hard way, what actually works.

AI can answer questions the way revising for an exam helps you memorise answers, but it can’t give you the scars you can later apply to new problems.

So what happens if we continue down this path? There are a number of possibilities that could all prove true to some degree. Analysing them, and where they could lead, is one of the ways we can steer towards a future that makes sense for the industry.

Three possible futures

The talent crunch. We’ve seen a version of this before. During the pandemic hiring boom of 2021-2022, tech job postings more than doubled and salaries hit record highs. Top candidates fielded multiple offers; poaching was rampant. Companies that weren’t seen as desirable places to work found themselves outbid by those that were.

Now imagine that dynamic, but worse. Senior engineers don’t appear from nowhere: they’re the juniors you hired five or ten years ago who learned, failed, recovered, and grew.

Cut the pipeline today, and the shortage doesn’t show up immediately. Instead it shows up in 2035, when the industry finds itself desperate for engineers with the scar tissue that comes from years of real-world experience. When the critical system goes down at 3am, there’s no one left who knows how it works. The parallels to other industries that neglected their pipelines, such as nursing and skilled trades, become impossible to ignore.

The bifurcation. Instead of a shortage, a split emerges. On one side: vibe coders who move fast, shipping features by orchestrating AI tools, comfortable with velocity but shallow on fundamentals. On the other: engineers who understand how things actually work, but who are increasingly rare and expensive.

The middle disappears. A HackerRank advisory board described this as a “hollowed-out career ladder”: seniors at the top, AI handling the grunt work, and very few people learning in between. The traditional path from junior to mid-level to senior breaks down because the rungs in the middle are gone.

This pattern shows up elsewhere. Economists have documented job polarisation for decades: automation eliminates middle-skill routine work while high-skill and low-skill jobs grow. The same dynamic appears in wealth distribution, where the middle class has steadily shrunk across developed economies. In retail, analysts call it the barbell economy: luxury and discount thrive while the mid-market hollows out. Software engineering may be next.

The tinkerer’s precedent. There’s a more optimistic reading, grounded in computing history. Formal computer science education didn’t exist until 1962, when Purdue established the first department. The first undergraduate degree followed in 1967. Yet computing advanced for decades before that, driven by the aforementioned autodidacts who taught themselves from manuals, blueprints, and tinkering: messing around for curiosity’s sake.

Each new abstraction layer was supposed to be the end of entry-level programming. Before software existed at all, “computers” were humans doing calculations by hand. Then came machine code, then assembly, then BASIC: created at Dartmouth for liberal arts students, it came pre-installed on home computers and enabled a generation of bedroom coders.

The browser brought JavaScript, which democratised web development. Cloud computing abstracted away infrastructure. Each time, instead of eliminating entry points, the new layer created different ones. And after each step up, few wanted to go back: would you want to write everything in assembly today? I thought not.

If AI follows the same pattern, we could see a world where everyone becomes part product manager, part engineer, part designer. Building software gets faster, and the bottleneck shifts elsewhere: to sales, to go-to-market, to the hard work of finding customers and convincing them to pay. The constraint moves, but it doesn’t disappear.

There’s an alternative version of this future: the number of software engineers a company needs seriously dwindles, teams of fifty become teams of five, and average company size shrinks. The engineers who thrive are the entrepreneurial ones, those who can ship whole products rather than just features. Everyone else competes for fewer and fewer seats.

What makes this scenario different from the first two is that it still offers a path. Each earlier abstraction layer required you to understand programming logic; you were just expressing it at a higher level. AI potentially abstracts away programming itself: describe what you want and let the model figure out how. That could mean the ultimate democratisation, or the elimination of the ladder entirely. But if history is any guide, new entry points will emerge, even if we can’t yet see what they look like.

What this means for you

Which scenario plays out, and in what combination, depends partly on choices being made right now: not just by industry leaders, but by individuals at every level, including yourself.

We still have years ahead in which these effects will compound, and that means years in which deliberate choices can steer us towards a better future than the one we might waltz into without thinking. Here’s what you can do depending on where you sit.

If you’re a senior engineer, your expertise is becoming scarcer, not more common. This is an extremely strong position to be in, especially if you’re investing heavily in AI-first engineering skills. The scar tissue you’ve accumulated, the instinct for where the edge cases hide, the ability to debug systems you’ve never seen before: these take years to develop and can’t be downloaded. Three things you can do:

  • Mentor actively. Find a junior and invest in their growth, even if nobody’s asking you to. This is actually more fun than it used to be: you’re exploring AI tooling together, learning alongside each other rather than just handing down wisdom.
  • Make your knowledge visible. Use AI to generate excellent documentation and diagrams, and use it to level up the knowledge you transfer to the rest of the organisation. The tribal knowledge in your head is worth far more when it’s shared, and the barrier to doing so has never been lower.
  • Invest in engineering-led initiatives. The increased velocity you get through AI has to go somewhere. Instead of just shipping more features, use that time to work on the problems that often get deprioritised: performance, latency, resilience, and the kind of deep technical work that builds lasting competitive advantage.

If you’re a manager, the juniors you develop aren’t a cost centre; they’re a strategic bet. Every engineer you grow into a capable mid-level or senior is one you won’t have to poach from a competitor when the talent crunch bites. Remember the 2021-2022 hiring war? It could look mild by comparison. Three things you can do:

  • Make the case for junior hiring. Frame it as risk mitigation, not charity. Show leadership the cost of senior attrition, the salary inflation in the market, and what happens when institutional knowledge walks out the door. Juniors are cheaper to hire and, with good mentorship, can become your most loyal senior engineers.
  • Rethink what training looks like. The tasks that used to be training ground are being absorbed by AI. So create new ones: pair juniors with seniors on complex problems, give them ownership of small but real projects, let them lead incident retrospectives. OpenAI is experimenting with a “super senior + super junior” model for exactly this reason. The goal is scar tissue, not busywork.
  • Invest in your own internal tooling. AI has made building custom tools almost as easy as building a spreadsheet. Instead of waiting for engineering to prioritise your visibility needs or settling for off-the-shelf software that doesn’t quite fit, build the tools yourself. Whether it’s a planning dashboard, a bragdoc generator, or a way to track knowledge distribution across the team, the friction between “I wish I had a tool for this” and actually having it has never been lower. I covered this in detail in Just build the tools yourself.

If you’re early in your career, the path is harder than it used to be, but it’s not closed. The key is to seek out the experiences that build judgment, even when the system isn’t handing them to you. Three things you can do:

  • Seek scar tissue deliberately. Volunteer for on-call rotations. Take the messy migration project nobody else wants. When something breaks, be the one who stays to understand why. None of this is new: the tooling changes and the level of abstraction shifts, but the engineers who learn the most have always been the ones actively seeking scar tissue.
  • Don’t outsource your understanding. AI tools are incredible accelerators, but they can also become a crutch. When the model gives you an answer, take the time to understand why it works. Read the documentation. Trace the code path. The goal isn’t to reject AI; it’s to use it without losing the ability to think for yourself. I wrote about this in Use it or lose it.
  • Connect your work to the business. The engineers who get trusted with bigger problems are the ones who understand why those problems matter. Learn what your company’s metrics are, how your team contributes to them, and what keeps your leadership up at night. Then be proactive about getting closer to that work. Ask to be included in architecture discussions, request to shadow incident response, and when you get feedback, ask “why does this matter?” not just “what should I change?” Technical skill gets you in the door; business context gets you to the table.

If you’re a senior leader planning to spend more on AI than on people, you’re making a bet, whether you’ve articulated it or not. The bet is that AI will improve faster than your talent base depreciates: that you can cut junior hiring today and still have the senior engineers you need when it matters. That may be true. But if it isn’t, you won’t know until it’s too late to fix. Three things you can do:

  • Stress-test your assumptions. What happens if AI progress plateaus for a few years? What happens if your most experienced engineers leave? What does your team look like in 2035 if you hire no juniors between now and then? Run the scenarios. The answers might surprise you. Why not use AI to do the modelling?
  • Treat junior hiring as R&D, not overhead. The return on investment isn’t immediate, but it’s real. Every junior you develop into a senior is institutional knowledge you don’t lose when someone resigns. And some of the best junior talent right now, especially those who are AI-first, are incredible at their jobs, hungry to learn, and waiting for the right opportunity to come along. Frame junior hiring as investment, not cost, in your planning.
  • Measure your knowledge concentration. How many people on your team can debug your most critical systems? What’s your bus factor on key services? If the answer is “one or two,” you have a fragile organisation, regardless of how productive AI makes those individuals. Track knowledge distribution the way you track uptime (see the sketch after this list).
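As a starting point, here’s a rough way to approximate bus factor from git history in Python (the service paths are hypothetical, and commit counts are only a proxy for who actually understands the code):

import subprocess
from collections import Counter

def author_counts(path):
    """Count commits per author touching `path`, via git log."""
    out = subprocess.run(
        ["git", "log", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

def bus_factor(path, coverage=0.5):
    """Smallest number of authors accounting for `coverage` of commits."""
    counts = author_counts(path)
    total = sum(counts.values())
    running, authors = 0, 0
    for _, commits in counts.most_common():
        running += commits
        authors += 1
        if running >= coverage * total:
            return authors
    return authors

# Flag services where one or two people dominate the history.
for service in ["services/payments", "services/auth"]:  # hypothetical paths
    print(service, bus_factor(service))

It won’t replace talking to your team, but it’s a quick way to spot which systems depend on a single person.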

Wrapping up

Who will be the senior engineers of 2035? We don’t know yet. We’re running an experiment on the industry’s talent pipeline, and the results won’t be in for years.

But here’s what we do know: the outcome isn’t predetermined. The senior engineers of 2035 are being made right now, in the decisions about who gets hired, who gets mentored, and who gets the chance to fail safely.

Until next time.