Over the last few months, I’ve been writing about a number of ways that AI, specifically LLMs, is changing how the role of management works. If you haven’t read the previous posts, here they are, in descending order of publication:
- New advice for aspiring managers, which covers how the cultural shift in our industry is changing the way we think about management and how aspiring managers should adapt to that change.
- A bag of worries: tackling overwhelm with LLMs, which is a technique I’ve been using to help me manage my own never-ending to-do list by offloading some of the cognitive load of prioritization to an LLM.
- A weekly mind meld, which uses some LLM assistance to communicate weekly with my department.
- LLMs: an operator’s view, the original post in this series, which covers some of the cultural change addressed in the first post in this list, and how code review and hiring are also changing.
I’m always trying to find new ways to use LLMs because it’s both useful and a lot of fun. One of the unsurprising elements of being a CTO is that the buck stops with me when it comes to decisions about my engineering department, so I’ve been leaning more on LLMs to help me evolve my thinking, challenge my assumptions, and make better decisions.
I thought I’d write up some of the ways that I’ve been doing that in the hope that it might help other engineering leaders to do the same.
Effectively, I now think of LLMs as a co-processor for my brain. It isn’t always correct or even trustworthy, but in practice it always puts momentum behind my thinking, and often helps me to see things from a different perspective.
Here’s what we’ll cover in this post:
- Prompting: using LLMs to help me think through problems.
- Pair prompting: using LLMs to help me work on solutions with a human partner.
- Deep research: challenging myself to look deeper into a problem than I might otherwise do.
- Contrarian thinking: using LLMs to help me explore alternative perspectives and challenge my assumptions.
- The executive assistant: breaking through organizational stalemate by having the LLM tell me what to do.
- The coach and sounding board: using LLMs to help me think through my own feelings and reactions to situations.
Prompting
Although it may seem beyond obvious in 2025, I still believe that many leaders are not using even simple prompting to help them think through problems.
I believe that some of this comes from habit: they’ve spent potentially decades thinking through problems on their own or in front of a blank page, so the new habit of injecting momentum into their thinking with an LLM is not yet second nature.
I also think that some of it comes from a lack of understanding of how to use LLMs effectively. If you are reading this and you think that applies to you, then Andrej Karpathy’s How to use LLMs video is a great overview with plenty of examples, including how the “zip file of the internet” works.
The way I trained my own habit of using simple prompting more was to have my LLM open in a thin browser window (about one-sixth width) on the left-hand side of my screen so that it was always visible. That forced me to use it. I’ve seen others do similar things, like even having a post-it note that says “what would AI do?” to remind them to use it.
Now that most LLMs can search the internet for real-time information (at least at the time of writing), there’s a ton of value in simple prompting: LLMs are getting good enough to act as an infinitely patient partner, searching the web and summarizing what they find.
I won’t enumerate every single way that you might use simple prompting to help your thinking, but I will highlight what I’ve learned from it. This is an observation about myself rather than something grounded in proven neuroscience; however, I notice that I have a bias toward the following behaviors:
- My brain tends to be fast, solution-oriented, and optimizes for speed of thought. This means that I often jump to conclusions, and I can miss important details. When using LLMs, I find that I can both slow down my thinking by writing and also open myself to other options since LLMs will often suggest alternatives that I might not have thought of, even if I don’t agree with them.
- The act of writing reveals more layers of depth than making a snap judgement. When I take the time to articulate my thoughts, I often uncover nuances and complexities that I hadn’t considered before. These can come both from myself and from the LLM, especially when asking it to be contrarian, as we’ll see later.
- Because LLMs greatly reduce the friction of research, I find myself going deeper into topics than I would have otherwise. This is a great way to avoid the Dunning-Kruger effect where I might otherwise have thought I knew enough about a topic to make a decision, but in reality I didn’t. A simple additional prompt to the LLM to “tell me more”, “what else should I know?” or “what do other people say online about this?” can help me to avoid that trap.
Just remember that you get out what you put in, so the more effort you put into your prompts, the more you’ll get out of the LLM. If you ask a simple question, you’ll get a simple answer. If you ask a complex question, you’ll get a complex answer.
If you’re struggling to get good answers from your LLM, then check out one of my previous posts, which shows you how to turn existing detailed prompting guides into reusable Gems/GPTs/equivalents to help you get better results.
In short, keep your LLM open on your screen at all times, and every time you need to think through a problem or answer a non-trivial question, ask it for help. I found that within a couple of days, this became second nature and improved my thinking and decision-making significantly.
For example:
You are an AI assistant acting as a highly experienced Technical Advisor to a Chief Technology Officer (CTO). Your primary goal is to efficiently process and synthesize technical communications from engineering teams.
Your task is to analyze raw Slack message threads from engineers, identify key technical discussions, problems, and proposed solutions.
For each provided Slack message thread, perform the following steps:
1. **Summarize the Core Technical Issue:** Provide a concise summary (1-2 sentences) of the central technical problem or discussion point being addressed in the thread. Focus on the 'what' and 'why' from an engineering perspective.
2. **Identify Root Causes/Key Factors (if discernible):** Based on the discussion, extract any explicitly mentioned or clearly implied root causes, contributing factors, or critical dependencies. List these as bullet points.
3. **Extract Proposed Solutions/Action Items:** Identify all suggested solutions, workarounds, or next steps discussed by the engineers. List these clearly. If multiple solutions are debated, briefly note the pros and cons discussed for each.
4. **Recommend a Strategic Solution (as a CTO's Advisor):** Based on the technical context and common industry best practices, recommend the most viable or impactful solution from the proposed options, or suggest a new, high-level strategic direction if the existing ones are insufficient. Justify your recommendation briefly, considering factors like technical feasibility, potential impact on system stability, resource allocation, and alignment with business objectives. If immediate action is required, highlight it.
Ensure your output is structured clearly, prioritizing actionable insights and high-level summaries for quick review by a CTO. Maintain a professional, analytical, and solution-oriented tone.
[INSERT SLACK MESSAGE THREAD HERE]
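If you find yourself reusing a template like the one above, it can help to fill it programmatically rather than pasting by hand. Here’s a minimal sketch in Python; the condensed template text, placeholder marker, and function name are my own choices for illustration, not part of the original prompt:

```python
# Condensed version of the CTO advisor template above, with the same
# placeholder marker used as a substitution point.
ADVISOR_TEMPLATE = """You are an AI assistant acting as a highly experienced \
Technical Advisor to a Chief Technology Officer (CTO). Analyze the raw Slack \
message thread below: summarize the core technical issue, identify root \
causes, extract proposed solutions, and recommend a strategic solution.

[INSERT SLACK MESSAGE THREAD HERE]"""


def build_advisor_prompt(thread: str) -> str:
    """Substitute a raw Slack thread into the advisor template."""
    return ADVISOR_TEMPLATE.replace(
        "[INSERT SLACK MESSAGE THREAD HERE]", thread.strip()
    )
```

The returned string can then be pasted into any chat interface or sent through an API call.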
Pair prompting
The next step in prompting is involving another human. This is a technique that I call “pair prompting”, and it can be done both synchronously and asynchronously. The asynchronous approach I specifically learned about on a podcast episode of Stratechery, as Ben Thompson uses that technique with his research assistant.
Pair prompting is just like pair programming, but with an LLM as the third member of the team. The idea is that you and your partner can use the LLM to help you both think through a problem together, and it can help you to see things from a different perspective.
If you’re doing this synchronously then you can do it over a video call or in person. However, the utility of the LLM really shines in asynchronous pair prompting, and Ben Thompson’s approach is a great example of that. He mentioned that prior to using LLMs, he would ask his research assistant to explore a topic and then write up a summary of their findings. The input and output of that process would be documented: the instruction and the resulting document.
Now, with LLMs, since a link to the prompting session can be shared for collaboration, not only are the input and output documented, but the entire thinking process is documented as well.
What this means is that if you’ve tasked one of your teams to go and explore how a feature should be built, by using pair prompting, or even just sharing the link to the LLM session, you can see the entire thought process that they went through, add your own thoughts to it, or see whether any of the assumptions that they made were incorrect.
This is a highly underused technique. It exploits the fact that every call to an LLM includes the entire context of the conversation, which means you can see the whole thought process that your team went through, rather than just the final output.
Next time you need to think through a solution with a colleague, try using an LLM as a conduit for your discussion. You get the input, the output, and importantly, all of the thinking in between.
Here’s a prompt that could form a template to start a pair prompting session:
You, as the AI assistant, will facilitate a pair prompting session between a Chief Technology Officer (CTO) and a Senior Engineer. The primary objective of this session is to collaboratively research and outline architectural considerations, technology stacks, and key features essential for building a scalable and robust chat application.
Throughout our discussion, after each significant point or decision, please provide a concise "Session Summary" that captures the essence of our findings, decisions, and the reasoning behind them. This will serve as a living document of our thought process for others to review.
Each "Session Summary" should include the following sections:
Key Discussion Points: Main topics covered.
Decisions Made: Conclusions or choices reached (if any).
Rationale: The reasoning or justification for these decisions.
Next Steps/Open Questions: What needs to be explored next or what questions remain.
Let's begin by exploring the fundamental architectural patterns suitable for chat applications. Please outline common patterns like client-server, peer-to-peer, and hybrid models, and highlight their respective pros and cons in the context of real-time communication.
Remember, this is a collaborative research session. Your role is to guide our discussion, provide comprehensive information, and facilitate a clear, documented thought process.
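If you want those session summaries to accumulate into the living document the prompt describes, it can help to capture each one in a consistent structure. A minimal sketch, where the class and field names are my own convention mirroring the four sections above:

```python
from dataclasses import dataclass


@dataclass
class SessionSummary:
    """One 'Session Summary' from a pair prompting session."""

    key_discussion_points: list[str]
    decisions_made: list[str]
    rationale: list[str]
    next_steps: list[str]

    def to_markdown(self) -> str:
        """Render the four sections as a markdown fragment."""
        sections = [
            ("Key Discussion Points", self.key_discussion_points),
            ("Decisions Made", self.decisions_made),
            ("Rationale", self.rationale),
            ("Next Steps/Open Questions", self.next_steps),
        ]
        lines: list[str] = []
        for title, items in sections:
            lines.append(f"**{title}:**")
            lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)
```

Appending each summary’s markdown to a shared document gives colleagues the input, the output, and the thinking in between.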
Deep research
From talking to people in my network, the deep research functionality of LLMs is still underused. I think this is partially because deep research reports take a long time to generate, which means you need a good prompt to start with, and the output can be highly verbose, although this is definitely improving with time.
My own predominant use of deep research as a CTO has been to explore what’s going on in the wider world and industry so that I can synthesize that information into something I can apply to my own organization.
All of these deep research usages replace situations where I would have spent a ton of time searching the internet and reading articles. In a way, it does for my thinking what trusted review sites do for product research: something like What Car? for my brain.
I looked through my own LLM history at the things I’ve used deep research for, and they include:
- Seeking opinions on the best way to organize out-of-hours support, including what the legal implications could be in different countries for employment contracts and additional hours worked.
- Getting a deep dive into what competitors may be doing in particular areas, and seeing whether a product idea that we have is already being done by someone else.
- Getting ideas on how to improve our incident response process, including what other companies do and how they handle incidents, and software recommendations.
- Exploring potential new markets or customer segments for our products, including what the competitive landscape looks like and what the key trends are.
All of these are things that would have taken me hours to do thoroughly, and deep research allows me to fire off a request, go and do something else, and then read it when it’s ready.
Similar to needing to push oneself into the habit of using simple prompting regularly, using deep research will also take some time and effort to fit into your own routine. We’ve never had research assistants before, so we need to train ourselves to remember to use them often and to use them effectively.
Contrarian thinking
We all have biases, and we all have blind spots. LLMs are great at helping us actively explore them and to challenge our assumptions.
I like to employ an LLM as a contrarian thinker whenever I need to make a controversial decision or ensure that what I am thinking is not just a confirmation of my own biases.
This works for pretty much anything, and it helps me feel more confident in my decisions, especially when I know that I have considered the alternatives.
Here’s a prompt that might help you to get started with contrarian thinking:
As a Chief Technology Officer, I need a rigorous evaluation of my proposed strategies. Act as a highly critical and contrarian advisor, whose primary objective is to identify flaws, challenge assumptions, and present alternative viewpoints to any argument I present. Your responses should be structured as follows:
Re-state my argument concisely: Confirm your understanding of the core point I am making.
Identify underlying assumptions: Pinpoint any unstated assumptions that my argument rests upon.
Present the contrarian view: Offer a directly opposing or alternative perspective, supported by logical reasoning or potential counter-evidence.
Highlight potential risks or weaknesses: Detail any vulnerabilities, oversights, or negative consequences that my argument might entail.
Propose alternative solutions or considerations: Suggest different approaches or factors I might not have considered.
Maintain a professional yet assertive tone. Do not agree with my statements unless you have exhausted all possible contrarian angles. Focus on constructive criticism to foster robust decision-making. My argument is: [INSERT YOUR ARGUMENT HERE]
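If you use an API rather than a chat window, a prompt like this splits naturally into a system message (the standing contrarian instructions) and a user message (the argument under review). A sketch using the common role/content message convention that most chat APIs accept; the instructions here are condensed from the prompt above:

```python
# Standing contrarian instructions, condensed from the full prompt.
CONTRARIAN_SYSTEM = (
    "Act as a highly critical and contrarian advisor to a CTO. Re-state the "
    "argument, identify its underlying assumptions, present the contrarian "
    "view, highlight risks and weaknesses, and propose alternatives. Do not "
    "agree until you have exhausted all possible contrarian angles."
)


def contrarian_messages(argument: str) -> list[dict[str, str]]:
    """Build a two-message chat payload: instructions plus argument."""
    return [
        {"role": "system", "content": CONTRARIAN_SYSTEM},
        {"role": "user", "content": f"My argument is: {argument}"},
    ]
```

Keeping the instructions in the system message means you can reuse them verbatim across every decision you want stress-tested.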
Give it a go whenever you have to decide something big or controversial. It really helps.
The executive assistant
In the same way I’ve never had a research assistant, I’ve never had an executive assistant — the sort of person who can help me to manage my time, my priorities, and my tasks.
I am definitely guilty of not taking the time to critically look at everything I need to do in my day, and using an LLM as an executive assistant can help me to break through that organizational stalemate.
There’s also an odd cognitive trick in being told how to structure your day rather than structuring it yourself: it makes you take the plan more seriously. I think this is because it feels like an external authority is telling you what to do, rather than you just making it up yourself.
If you’re going through particularly busy periods, then taking a few minutes to write to the LLM about what’s on your plate and asking it to help you prioritize can be a great way to get some clarity and focus.
For example:
I am an Engineering Manager. Here's a raw list of my tasks for today, along with my calendar:
Meetings: 9 AM - Standup, 10 AM - 1:1 with Alex, 11:30 AM - Product Sync, 3 PM - Architecture Review.
Tasks: Review PR for new payment gateway, Draft Q3 OKRs for my team, Prepare feedback for Alex's performance review, Respond to urgent security vulnerability email, Research distributed tracing tools, Update project status for stakeholders, Block out focus time for coding (if possible).
Context: The security vulnerability is critical and needs a response by end of day. Alex's 1:1 is important for career development.
Given this, help me structure my morning (8 AM - 12 PM) to maximize productivity. Prioritize tasks, suggest blocks for focused work, and identify any conflicts or areas where I might need to delegate or defer. Also, recommend a single most important task for me to tackle first thing.
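Because the calendar and task list change daily, a prompt like this is easy to regenerate each morning from structured data. A hypothetical sketch; the function and parameter names are my own, and the wording is condensed from the prompt above:

```python
def build_ea_prompt(meetings: list[str], tasks: list[str], context: str) -> str:
    """Assemble the executive assistant prompt from today's inputs."""
    return "\n".join(
        [
            "I am an Engineering Manager. Here's a raw list of my tasks "
            "for today, along with my calendar:",
            "Meetings: " + ", ".join(meetings) + ".",
            "Tasks: " + ", ".join(tasks) + ".",
            f"Context: {context}",
            "Given this, help me structure my morning (8 AM - 12 PM): "
            "prioritize tasks, suggest blocks for focused work, flag "
            "conflicts or things to delegate, and recommend the single "
            "most important task to tackle first.",
        ]
    )
```

Pulling `meetings` from your calendar export and `tasks` from your to-do app turns this into a one-line morning ritual.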
Yes, there are tools that can do this for you, but you can go a long way with just a simple prompt.
Similarly, even though tools like Granola exist, you can use a plain prompt to help digest what you should do after meetings given your notes or a recorded transcript.
Here are my raw notes from a 90-minute architecture brainstorming session. Please read through them and extract:
Key decisions made (e.g., 'We will use Kafka for async messaging').
Clear action items, including who is responsible and a rough due date if mentioned (e.g., 'Sarah to research Kafka cluster sizing by Friday').
Open questions or topics that need further discussion.
Any technical risks identified.
Notes:
scribbled details about API Gateway options
discussion on database sharding strategy - leaning towards range sharding but some concerns about hot spots
Sarah mentioned looking into Kafka for event bus, seems promising, maybe by end of week?
need to decide on observability stack - Prometheus + Grafana or DataDog?
John brought up data consistency issues with eventual consistency model
agreed to use GraphQL for public API
next meeting for deep dive on database sharding next Tuesday
urgent: security review of data encryption at rest - assigned to platform team
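If you do this regularly, it can help to ask the LLM to reply in JSON and validate the reply before trusting it. A sketch where the four keys mirror the sections requested in the prompt above, though the key names themselves are my own convention:

```python
import json

# One key per section requested in the meeting-notes prompt.
EXPECTED_KEYS = {"decisions", "action_items", "open_questions", "risks"}


def parse_meeting_summary(raw_reply: str) -> dict:
    """Parse the LLM's JSON reply and check all four sections exist."""
    data = json.loads(raw_reply)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"LLM reply missing sections: {sorted(missing)}")
    return data
```

A quick validation step like this catches truncated or malformed replies before the action items get lost.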
This will help you to get the most out of your meetings and ensure that you don’t forget anything important.
The coach and sounding board
Finally, sometimes we all need some support. LLMs can be a great sounding board for your own thoughts, feelings, and reactions to situations.
For example:
I've just been informed that a significant re-organization is coming to our department in the next 4-6 weeks, and some of my team members will be transitioning to new teams or roles. I want to ensure I lead my team through this change as effectively and empathetically as possible, minimizing disruption and maintaining morale.
Act as my personal executive coach. Ask me probing questions that help me anticipate potential challenges and formulate a proactive strategy for communication and support before the reorg is officially announced. Focus on:
- My initial communication strategy to the team (pre-announcement and post-announcement).
- How I can best prepare to address team members' anxieties, uncertainties, and potential resistance.
- Specific actions I can take to support those transitioning and those remaining.
- My own emotional preparedness for leading through this period of change.
Start by asking me the first question, and I will respond.
The key here is specifying up front how you want the LLM to coach you, and structuring the conversation as a two-way dialogue. Feel free to adapt the prompt to your own needs at a given time.
Summary
I still believe that we are underusing LLMs in our day-to-day work as managers and leaders. This is partially because they require some upfront effort and creativity to get the best out of them.
I’ve been trying hard to overcome these creativity blocks to find ways to improve my thinking, reasoning and effectiveness as a leader, and I hope that some of the techniques I’ve shared here will help you to do the same.
If you’ve got any of your own cool ideas and prompts, I’d love to see them.