A weekly mind meld


Leaders can often find it hard to build deep trust and alignment with their teams, especially if those teams are quite big, or if the leader in question is quite senior. Doing regular skip-skip-skip levels is out of the question, attending every meeting is impossible, and the power dynamics of the org can make it hard for staff to really get to know you one-on-one.

You need a solution that works for you and your team, makes the most efficient use of everyone’s time, and is archival, searchable, and shareable. The good news is that this solution already exists, and it predates slideshows, videos, and, come to think of it, even the internet. It’s called writing.

Since starting my new CTO role, I’ve been sharing a weekly update with my team. I think about it as a mind meld, which Wiktionary describes as follows:

From the Star Trek franchise, where the term was first used in 1966 for a telepathic ability possessed by the alien race of Vulcans to share thoughts and feelings with another individual.

It’s how I continually open up my thoughts to the team with a long-term goal to reduce any mental alignment gap between us. I like to think that the more I share, the more they can understand what I believe is important and why, and the more that my style of working and thinking can propagate through the team.

There are a few rules and guidelines I follow when writing these mind melds. They should:

  • Take no longer than 60 minutes to write.
  • Be no longer than 1,500 words.
  • Be sent out on a Friday afternoon as a way to close the week.
  • Have a conversational tone and high ease of reading, similar to how I write this newsletter.
  • Mix general updates with praise and feedback on things we can do better.
  • Be sent to the entire team in a way that anyone in the company can also read it.

I will call out that, because I have done a ton of writing and English is my first language, I can write fairly quickly compared to people who don’t write as often. However, writing is a skill that can be learned and improved over time, so don’t let that stop you from trying.

I’ll spend the rest of the article going over my process for collecting the information I want to share, how I structure it, and then give you some hypothetical examples.

Collecting information

The first step is to continually collect information throughout the week. I wrote back in January about my daily system for how I capture notes and tasks using Logseq, but everyone uses something different.

The key is to engage with your daily activities in a way that keeps your weekly update in mind. What I mean by this is that you are always on the lookout for:

  • Direct experiences that you have had that would be valuable to share with the team. This could be anything from conversations with customers to shareable summaries of closed-door meetings such as executive reviews.
  • Events that can be celebrated, such as a big project shipping, a long-standing bug being resolved, or performance improvements that have been rolled out.
  • Things that could be improved, such as an incident that happened, an inefficient process that is causing friction, or data that highlights a problem that needs to be fixed (e.g. a drop in performance or an unexpected increase in infrastructure costs).
  • Events that are happening in the near future that you want to remind people about.

For me, as I go about my week, I’ll tag things in my notes with a weekly-update tag. This allows me to quickly search for them all later and use that as a starting point for my writing. I spend most of my week hammering out notes, so adding a tag is a super simple way to aggregate a week’s worth of information on a Friday.
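
If your notes live in plain text files rather than Logseq, the same trick is easy to reproduce. Here’s a minimal sketch in Python, assuming a hypothetical notes/ directory of Markdown files and a #weekly-update tag (adjust both to your own setup):

    # Gather every line tagged #weekly-update from the week's notes.
    # Assumes one Markdown file per day in a notes/ directory (hypothetical layout).
    from pathlib import Path

    TAG = "#weekly-update"

    def collect_tagged_lines(notes_dir: str = "notes") -> list[str]:
        tagged = []
        for path in sorted(Path(notes_dir).glob("*.md")):
            for line in path.read_text().splitlines():
                if TAG in line:
                    # Keep the file name so you can jump back for context.
                    tagged.append(f"{path.name}: {line.strip()}")
        return tagged

    if __name__ == "__main__":
        print("\n".join(collect_tagged_lines()))

Run it on a Friday and you have the raw material for the update in one place.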

Structuring the mind meld

So, we get to Friday, and I’ve blocked out some time to write my update. I’ve got my tagged notes to use as a starting point, but the update needs some structure.

Here’s the rough structure that I hang my updates on:

  • Intro: A short paragraph that sets the tone for the update. Assuming nothing big and serious has happened, I keep it light and observational. For example, the other week I was in Helsinki for our exec committee meeting, so a paragraph about the trip and the meme around the Finnish weather was a good way to start.
  • General updates: I batch together any general one-line updates that I want to share. This covers anything from welcoming new hires, to upcoming events, to general company and engineering news. It’s important stuff that doesn’t need its own section.
  • The main event: I’ll pick the one topic that I think is the most important thing to share. For example, if we are making a major change of some kind, this is where I’ll go into detail about it. Similarly, if I’ve had a key observation that I think is worth exploring more deeply, this is the place. In the following section, I’ll cover some examples of what I mean by this. Typically I’ll try to hang the main event on key principles or values that I want to reinforce. It usually takes up the most space, maybe between 300 and 500 words.
  • The sideshows: This is where I cover the other, less important topics that I want to share, maybe a few hundred words each. This could be a summary of a recent incident, a shout-out to a team or individual, or discussion around a process that we are trying to improve.
  • The wrap-up: I finish with a short paragraph that closes out the update. This is usually a call to action, such as asking for feedback or thoughts via comments or on Slack. If the tone of the update is light, I might share some articles or podcasts that I found interesting during the week.

As outlined before, I try to keep the entire update to around 1,500 words. This is a good length for people to read in one sitting, and I can hammer it out in one pass in around an hour.

Once I’m done, and before I send it, I’ll do a quick proofread to check for typos and grammar. I’ll also get an LLM to scrutinize it, to see if there are ways I can improve the content.

Here’s an example of a prompt I might use:

You are an expert editor. Please analyze the following draft of my weekly update that I sent as CTO to my department.

First, conduct a thorough proofread for grammar, spelling, and clarity. Then, evaluate the content for completeness, conciseness, and strategic alignment with our company's goals. Suggest specific revisions to improve the update's impact, including:

- Identifying any missing information that should be included.
- Rephrasing sentences for better clarity and flow.
- Ensuring the update is concise and avoids unnecessary jargon.
- Highlighting key achievements and their impact on the company's objectives.
- Suggesting ways to make the update more engaging for the intended audience (e.g., using visuals, summarizing key takeaways).
- Applying ‘Chain of Thought’ prompting to break down complex updates into step-by-step reasoning for clarity.
- If applicable, using ‘Tree of Thoughts’ to explore alternative ways to frame certain updates for maximum impact.

Here is the draft of my weekly update:

[insert draft here]

If you’d like help generating prompts this detailed, a neat trick is to take the PDF whitepaper on Prompt Engineering as input to your own custom GPT or Gemini Gem, and then create a reusable tool that uses it to improve whatever prompt you give it.

For example, the prompt text for this reusable tool could be:

You are a tool to generate excellent prompts that will greatly improve output compared to what is given as input. You will use the attached book on Prompt Engineering to formulate these prompts and you will return the improved prompt along with your reasoning for why it is better.

You are always helping a CTO do their job, so frame the prompts as such.

I find myself using this trick all the time now, and it works really well. I don’t copy the output verbatim because I like to keep my own writing style and voice, but it does help me make a number of edits and improvements to the text.

An example mind meld

Instructions aside, it’s probably best to just show you an example of a weekly mind meld. Here’s a hypothetical one following the structure above. I’ve made it up, but it should give you a good idea of what I mean.

The April 2025 mind meld

Hey team,

I’m back home after spending a week visiting our London office. It was nice to be away from the screen for a few days and spend time with many of you who I don’t get to see as often. I even managed to get out for a couple of evening walks in the unusually warm weather.

There are a few short updates that are worth sharing before we go any further:

  • A huge welcome to our five (!) new starters this week: Alice, Bob, Charlie, Dave, and Eve. It’s great to have you here with us, and as you get your dev environments set up, please DM me if you have any issues: we’ve put a lot of effort into improving cold starts recently, but I know there can still be some hiccups which we will continue to work on.
  • Congratulations to the infrastructure team for completing the migration of our final legacy database. We are now 100% on our new database platform, which is not only faster, but also much cheaper and more scalable.
  • A reminder that we have our quarterly all hands next week. The invite is in your calendar, and you can submit questions for the Q&A via the link in the invite.

Speed of decision making

There is an important topic that I wanted to cover this week: speed of decision making. We’ve been growing a lot recently, and as we do, we need to be hyper-aware of our rate of progress.

As I talked to many of you in person, the main complaint that I heard was that we are moving too slowly. This wasn’t just in one area, but across all teams and projects. I think this is a symptom of our growth, and it’s something that we need to address as a team.

Here’s something that I would like us all to try: if you are blocked on progress in any way, shape or form, and it has been more than 24 hours, please escalate it to me. I will then work with you to unblock it. This could be anything from a decision that needs to be made, to a resource that you need, or even just a conversation that you need to have with someone.

This might seem like an unscalable solution, but I want to make it super clear that we cannot afford to slow down at this stage of our growth. The longer we take to make decisions, the more ground that our competitors will gain on us. As such, I will be prioritizing any escalation that I receive, and I will work with you to get it resolved as quickly as possible.

Honestly, nothing is too small. Just DM me and we will fix it. Trust me.

Incident response

I wanted to bring attention to the new process that we are starting around incident response. We’ve been working hard to improve our incident response process, and I think we are finally getting to a place where we can start to see some real improvements.

We have implemented new rotas, new tools, and new processes to help us respond to incidents faster and more effectively. I know that this is a big change for many of you since we have dramatically expanded our on-call rota, but I think it is a necessary step to take. For far too long we have been relying on a small number of people to respond to incidents, and this has led to burnout and frustration.

I want to thank everyone who was involved in turning around the incident that happened earlier this week. I was scrolling through the Slack messages and I was impressed by the organization, communication, and speed of response from many of you who hadn’t done this before. The RCA that was done the following day was also very insightful, and we’re already getting to work on the action items that were raised.

Let’s keep at this and build our muscles in this area.

Thoughts for the weekend

I’m aware that there is a public holiday coming up in the UK, so a number of us have an extra day off. If you get a bit of free time, here are some cool things to check out:

  • Have a play with the latest Gemini model, 2.5 Flash. I’ve been very impressed by Google’s latest models, and if you’ve only been using ChatGPT for a while, you might be surprised by how Gemini compares.
  • I really enjoyed the latest Acquired podcast on Epic Systems. If you’re outside of the US, you might not have heard of them, but they are an impressive and curious company — just search for pictures of their campus in Wisconsin.
  • And lastly, if you really just want to get away from the screen and do nothing for three days, that is totally fine too.

Have a great weekend.

And that’s a wrap

I’ve been enjoying writing weekly to my team so far. Even though writing isn’t video or audio, I still think it’s the most scalable way to communicate with a large team, and one that allows me to keep pushing forward on the things that I believe are important, whilst keeping the team aligned and informed.

I hope this article has given you some ideas on how to do the same. I’d love to hear how you communicate with your own teams.

LLMs: An Operator’s View


Many of you who read my articles are operators of some kind.

You may run one or many teams, or even a whole company. And, even if you are not a manager by definition, you may wield a great deal of influence over directions and decisions.

In the midst of the current LLM explosion, we as operators face:

  • A blistering pace of improvement in the capabilities of LLMs. New models and products are being released at a rate that is hard to keep up with.
  • Immense noise and hype online making all sorts of claims, good and bad, about what the future holds.
  • An expectation from our companies to go full-on with “AI”, which typically means LLMs, both in developer tooling and in customer-facing products. AI is the new data is the new cloud.
  • Echoes in the industry that we are all now overstaffed as a result of productivity gains: that everyone should do more with less, and that AI is the answer to that.

Note: this article is not a technical overview of how to build products with LLMs. Instead, the intent is to touch upon what leadership should do from the perspective of the productivity of teams and organizations, and consequently how we should think about spending our budgets to make that happen. There are plenty of hot takes out there on AI. This is not intended to be one of them.

What we’ll cover related to LLMs is:

  • The (real) rising floor of developer productivity.
  • The changing size of organizations.
  • The increasing importance of code reviews.
  • The changing nature of interviews and identifying talent in short spaces of time.

The intent is that this should provoke thought and discussion, and will hopefully help you think about how to allocate your budget and focus in the coming months and years.

The Floor Is Rising

With Copilot, Cursor, Cline, and other LLM-based developer tools, the floor of developer productivity is rising.

At the time of writing in 2025, I believe even the most AI-skeptical developers are now seeing the productivity gains that LLMs can provide. Yes, several years ago the promise and the hype far outweighed any consistent proof of benefits, but in a post-GPT-4 world, LLMs have become an integral part of the developer’s toolkit, even if just for fast research or rubber ducking rather than agentic pair programming.

I don’t know many developers that would give them up now, myself included. I go too fast with them to go back to the old way of doing things.

If for some reason (!) you haven’t fully leaned into LLM-assisted coding yet, the benefits are plentiful:

  • LLMs are fantastic for getting over the cold start problem of a new idea. You can go from nothing to a throwaway prototype in no time at all, starting with a vague prompt of what you want and iterating on it. There are numerous “vibe coding” projects that are generating some serious revenue.
  • You can use a prompt to sketch out whole architecture ideas, with back-of-the-envelope calculations and tradeoffs (see the example prompt after this list).
  • Copilot-style autocompletion is now very good and unlocks the next step in your thought process.
  • Agent-based tools like Cursor or Copilot Chat, when kept under control, can be a great way to get a lot of boilerplate code written quickly.
  • Writing tests, and therefore driving up code coverage, is now much easier. LLMs can write tests for you, and agent-based tools can execute the red-green cycle for you as you go.
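
For example, an architecture-sketching prompt might look something like this (a made-up example, adapt it to your own context):

You are a pragmatic staff engineer. I want to add near-real-time notifications to a web application with roughly 100,000 daily active users. Sketch two or three candidate architectures, give back-of-the-envelope estimates for throughput and storage, and list the tradeoffs of each. Finish with a recommendation and the main risks.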

If you haven’t yet spent an afternoon or evening with Cursor, then please, please, please make time. It is incredible how fast you can go from a blank page to a fully functioning hobby project.

So in terms of the Gartner hype cycle, we are clearly on the Slope of Enlightenment. The tools are getting better, and they are getting better fast. It is unclear how far the Bitter Lesson will take us, and predictions currently range from being on the cusp of the Plateau of Productivity to full-blown AGI, but it is clear that an organization that does not embrace LLMs will be left behind by its competitors.

As an operator, up-skilling your team to use these tools is now essential. Securing the necessary budget to give everyone access to the Pro tiers of ChatGPT, Cursor, or whatever tools represent the best fit for your team is a table stakes activity. And yes, this does mean that your budget will increase, but the productivity gains from an existing team will more than make up for it. Trade the cost of hiring new people for the cost of acquiring tooling.

You should also take the adoption of this tooling seriously. It is not just a case of giving everyone subscriptions and hoping for the best. You need to invest time and effort into training your team on how to use these tools effectively.

  • Run a survey to see what tools your team is already using and how they are using them. As part of the survey, identify which of your engineers are already fully ingrained in the new LLM workflows and which are not.
  • Identify champions based on the previous point, have them run training sessions, and overindex on pair programming with those who are less familiar with (or more skeptical of) the tools.
  • Promote a culture of sharing best practices and tips for using LLMs. Get your champions to lean in and share their workflows and processes with the rest of the team. Videos work wonders here.
  • Track the usage of AI tools over time as you adopt them. For example, Cursor offers team analytics, and you can see how many lines of code are being generated and accepted. Use this as part of the feedback loop to see how your team is progressing. Is usage increasing or decreasing? Why?
  • Cross-reference the usage data with other metrics you are collecting. For example, how is the average number of commits to the codebase changing as tool usage increases? What about the number of incidents or reported bugs? What’s happening with your DORA metrics as a result? (See the sketch after this list for one way to start.)
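
To make that last point concrete, here’s a minimal sketch of the kind of cross-referencing I mean, in Python. It assumes two hypothetical CSV exports, cursor_usage.csv (week, accepted_lines) and commits.csv (week, commit_count); substitute whatever your tools actually export:

    # Join weekly AI-tool usage with commit counts to eyeball the trend.
    import csv

    def load(path: str, value_field: str) -> dict[str, int]:
        with open(path, newline="") as f:
            return {row["week"]: int(row[value_field]) for row in csv.DictReader(f)}

    usage = load("cursor_usage.csv", "accepted_lines")
    commits = load("commits.csv", "commit_count")

    # Only report weeks present in both exports.
    for week in sorted(usage.keys() & commits.keys()):
        print(f"{week}: {usage[week]:>7} accepted lines, {commits[week]:>4} commits")

This won’t prove causation, but it gives you a weekly feedback loop to discuss with the team.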

Focus on showing that the tools are making a difference, and this too can be motivation to bring skeptical engineers on board.

Organization Sizes Are Changing

Given that the way that we create software has changed, there is another operator’s consideration: the size of your organization.

Layoffs have been rife since the end of ZIRP. Overlapping this period has been the rise of LLMs, and the two have often been conflated: organizations haven’t shrunk purely because of AI efficiency gains, nor purely because of the macroeconomic environment; the two are becoming somewhat intertwined, if you believe what these companies are saying.

However, it is true that, from a company-operator’s perspective, the hard-to-quantify but definitely real productivity gains from LLMs allow you to do more with less.

And amidst a tricky economic environment, instead of staying the same size and increasing output, the trend in many organizations has been to reduce headcount and combine this with AI tooling to (sort of) maintain the same level of output.

If you think about it, many of the world’s largest companies are (or were) staffed to pre-LLM productivity levels off the back of ZIRP, and you could argue that there has consequently been an exchange: a large chunk of money that used to pay salaries for a smaller chunk of money that pays for tokens and subscriptions.

One could even argue, especially at large companies, that if all developers could go, let’s say, twice as fast with the new tooling, then other bottlenecks would appear that would limit the speed of progress anyway, so less really is more.

These bottlenecks may already exist: the sometimes glacial speed of making decisions, the amount of change and new features that your users can stomach at once, the time it takes to go through cycles of shipping and learning and iterating and so on.

Many companies already block the speed of their own progress in ways other than the number of developers they have. Making those developers faster may not actually help them ship more features; in fact, it may make things worse.

Maybe you work for a company like this.

Going back to the operator’s perspective, if you currently work for a small or medium-sized company, a good idea would be to focus your attention on giving everyone access to the right tools and training to become more productive before you go on another hiring spree. Get everyone coding like they should be coding in 2025 first, assess and prove the productivity gains, get your tooling in place, and then look at hiring more people.

And remember that tooling goes beyond developers: we’re talking about all employees. A pro subscription to ChatGPT is just as useful for a marketer’s efficiency as it is for a developer. Giving each employee in a 50-person company a ChatGPT Pro subscription is still cheaper than hiring a senior developer or two. Think about macro efficiency gains across the whole organization, not just in engineering.

Reviews Are More Important Than Ever

The flip side of the productivity gains is that more code is being written, and, most importantly, not all of it has been as carefully thought through as handcrafted code.

If you’ve used Cursor without specifically prompting it to slow down, go step by step, and ask for your input frequently, you’ve likely seen it go off and blast out hundreds of lines of code that are hard to keep track of.

Now, this is great for getting a prototype up and running, but it is not so great for production code: the code generation starts to go faster than you can meaningfully comprehend it as a human, and bugs can be introduced that are hard to spot. In the best case, the code can be messy or unoptimized. In the worst case, it can be full of security holes that could seriously compromise your organization.
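
One way to make the slowing-down the default, rather than something you remember to type each time, is to put the instructions into the project itself. A minimal sketch, assuming you use Cursor’s .cursorrules project file (adapt the wording to taste):

    Work in small, reviewable steps.
    Before writing code, outline your plan and wait for confirmation.
    Do not change more than one module at a time without asking.
    Call out any tradeoffs or assumptions as you go.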

As such, with code being produced faster, it is more important than ever as an operator to ensure you have a strong review process in place. If your most senior engineers were getting a half-arsed rubber-stamp thumbs-up from their peers (not advised, but it happens), you now need to ensure that all code is scrutinized, as its origins are less clear.

You could:

  • Make it clear to your organization that even though LLMs can generate lots of code almost instantly, human reviewers can only digest so much. Keep PRs small, commits clear, and code easy to read (see the sketch after this list for one way to nudge this).
  • Increase the number of required reviewers on your PRs. For example, go from one reviewer to two. You could also have engineers flag their own PRs that have heavy LLM usage to call out that they need extra scrutiny.
  • Give people a refresher on security best practices (shock horror!) so they can be better aware of when LLMs are generating code that is insecure.
  • Make improvements in your incident postmortem process to ensure that you are learning from your mistakes. Share any production issues that stemmed from overlooked generated code widely across the organization so that everyone can learn from them.
  • Investigate AI tools such as DeepCode by Snyk or Graphite’s Diamond that could help detect issues in code before it is even reviewed by a human.
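
On the “keep PRs small” point, a cheap guardrail is a CI check that fails when a PR grows beyond a line budget. Here’s a minimal sketch in Python; it assumes a hypothetical CI setup where the PR branch is checked out and origin/main has been fetched:

    # Fail the build when a PR changes more lines than the budget allows.
    import subprocess
    import sys

    MAX_CHANGED_LINES = 500  # arbitrary budget, tune it for your team

    def changed_lines() -> int:
        # --numstat prints "added<TAB>deleted<TAB>path" per file.
        out = subprocess.run(
            ["git", "diff", "--numstat", "origin/main...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for line in out.splitlines():
            added, deleted, _path = line.split("\t")
            if added != "-":  # binary files report "-"
                total += int(added) + int(deleted)
        return total

    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            sys.exit(f"PR changes {n} lines (budget is {MAX_CHANGED_LINES}). Split it up.")
        print(f"PR size OK: {n} lines changed.")

Treat the number as a conversation starter rather than a hard law; some changes (lockfiles, generated code) legitimately blow the budget.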

Am I Even Interviewing You? Or Your LLM?

The typical tech interview process for individual contributors, which involves some combination of coding challenges, whiteboarding, and system design, has had another curveball thrown at it by LLMs.

When interviewing remotely, we may have previously been concerned about candidates using a search engine to look up answers, but now we have to consider that they might be side-channeling all of the questions to an LLM.

If you are an interviewer, how can you tell whether, just off to the side of the Google Meet window, there is another browser window with a prompt open? By the time you have described the system design specification, the candidate could easily have typed it into the prompt and gotten an incredibly detailed and plausible answer back.

And hey, don’t just take my word for it, try it: open Grok and type “I am doing a system design interview. Help me with it. I have to design Instagram from scratch. Give me back of the envelope calculations and follow the structure of the ByteByteGo book.”

Scary, huh?

If your candidates are good at positioning windows on their screen and keeping their eye movement under control, you might not even notice they are doing it. How are we meant to get good signal from candidates now that we can’t figure out if we’re talking to them or a prompt?

If you want to test a candidate completely without LLM assistance, you could ask them to share their entire screen so you can see what is going on. However, this feels invasive. Alternatively, a lighter-touch version is to have the interviewer share their screen and tackle problems together via pair programming and high-bandwidth conversation, where it would be hard for the candidate to type away into a prompt and pass off the answer as their own.

Alternatively, you could go in the complete opposite direction: accept that LLMs are now part of the job, and like the rest of the article, embrace them.

For example, if you want to hire people who can tackle large and ambiguous problems quickly with LLMs, get them to demonstrate these skills in the interview. This is similar to open-book exams at school: you can use whatever resources you want, but you have to demonstrate that you know how to use them and that you can think critically about the answers they give you.

The choice is yours as an interviewer: either allow LLMs or don’t, but be explicit about it ahead of the interview so that the candidate knows what to expect. If you do allow LLMs, you should also be clear about the rules of conduct in the interview: are they allowed to use them for everything? Are they allowed to use them for some things? What are the boundaries? Don’t make them guess.

Regardless of which way you go, you’ll need to adapt your interview process to ensure that you are getting the right signal.

  • Having candidates solve LeetCode problems is not going to work. LLMs can easily dump code for doubly linked lists and binary trees, and annotate the answers with all of the big-O complexities attached.
  • Instead, questions that you ask should be sufficiently ambiguous that part of the interview is figuring out the specific requirements of the problem and what the code or system should do. Doing this in a conversational manner is a great way to see how the candidate thinks, and if you’ve not allowed LLMs to be used, it should be obvious through long periods of silence if they’re trying to bend the rules.
  • Spot-test their knowledge: the interviewer should be able to interrogate components of the candidate’s solution as they go along, asking questions about the design and implementation that highlight whether the candidate actually has knowledge here, or at least is able to think about their solution critically and from first principles. For example, if they think a cache should be implemented, ask them why, and what the tradeoffs are. Ask for examples of caches they have used before and how they worked. Pick a point in the solution and go fully down the rabbit hole with them. Think latency, throughput, and failure modes. Answers should be fairly instantaneous if they know their stuff. (A tiny example of this kind of spot-test follows this list.)
  • If candidates come to a solution quickly, see if there are alternative ways in which they could have approached the problem. For experienced engineers, it should be possible to have a conversation about the tradeoffs of different approaches to the one they have taken. If they’ve used a batch processing system, ask about how a streaming system could look. If they’ve written code that is synchronous, ask them how they would make it asynchronous, and so on. Probe deeper.
  • Use methods of collaboration that LLMs are not good at. For example, a shared whiteboard is a great way to think about problems together, interactively, proving that you are really working together with the candidate in the same way that you would if they were a new hire.
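
To make the cache spot-test concrete, this is the level of depth I mean. If a candidate proposes a cache, asking them to sketch an LRU eviction policy live, something like the minimal Python below, and then talking through its failure modes (thread safety, TTLs, memory bounds) tells you quickly whether the knowledge is real:

    # Minimal LRU cache: a classic spot-test artifact to reason about live.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.items: OrderedDict = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict the least recently used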

Design your interview process to find the kinds of candidates that you really want to work with. If you’re looking for people that are great at using LLMs, then have your interview process find these people. Be open about it.

If instead you value candidates who are great at coding or designing solutions unassisted, despite the tools we all now have available, that’s also fine, but be open about that too. Let them know way ahead of time that this is how it’s going to be. You can’t have it both ways, and you need to design your process accordingly to get the right signal.

And That’s a Wrap

If you haven’t already, you need to start bringing your team(s) into the present day. Software development isn’t just changing, it has changed, and if you haven’t been adapting already, you’re getting left behind. This isn’t just important for your company, but it’s also incredibly important for your employees: you owe them access to the best tools available to do their jobs.

Happy prompting.