A masterclass in engineering leadership from Carta, Stripe, Uber, and Calm | Will Larson (CTO at Carta)
Episode 112

Will Larson is the CTO at Carta, an ownership and equity management platform that raised at a $7.4b valuation in 2021. Prior to joining Carta, Will was CTO at Calm, founded Stripe's Foundation Engineering org, and led Uber’s Platform Engineering people and strategy. Will also writes extensively about engineering leadership, and has authored two books in this area: Staff Engineer, and An Elegant Puzzle.

Timestamps:

(00:00) Introduction

(03:03) The nuances of taking lessons from old companies

(14:28) The value of writing down engineering principles

(17:03) How to structure a strategy document

(18:48) The 2 parts of any engineering strategy

(21:08) Advice for turning strategy into action

(23:44) Carta's unique "navigator" model

(24:50) The Hidden Variable Problem

(29:59) Explaining, measuring, and optimizing velocity

(35:28) Useful metrics for engineering orgs

(39:08) The balance between micromanagement and understanding details

(43:03) Management anti-patterns

(45:49) How to execute policies whilst managing their exceptions

(47:56) What an excellent engineering executive looks like

(53:53) How Will has evolved as an engineering executive

(56:56) How to communicate with executives

(63:18) Things that derail meetings

(66:10) How to approach presentation feedback

(67:30) A bad sign when working with direct reports

(69:13) Advice for growing as an early-career engineer

(71:11) Will's model for developing engineering teams

(74:33) Sources of inspiration for Will's views on engineering management

Brett: Thanks so much for joining.

Will: Super excited to be here.

Brett: Maybe an interesting place to start would be... I'm really interested in, now that you've worked across so many different companies and so many different engineering organizations: how have you picked apart global rules or frameworks that tend to work across engineering teams, versus context-specific ways of running a high-functioning engineering org? If those two things are at all different?

Will: The number one way that I see new engineering leaders struggle when they come to a company is they just assume the context from their previous company applies as is. And when you get new leaders who come in and really make a mess of things, it's always because they assumed the context from their earlier company, maybe a much larger company like a Google that has tens of thousands of engineers, or maybe a much smaller company, like a startup with only 10 engineers, and they just assume that context actually applies to their new environment.

What I try to do instead is test ideas and see what the reaction is before I actually implement them. When I was at Calm, Calm had about 25 engineers when I joined and about 100 when I left. And when I joined, we were going through a migration from our monolith to a number of services. I canceled that migration. I said, hey, this isn't working.

When I came into Carta, we were going through a similar kind of migration, and my first urge was, hey, I should cancel this here too. It worked great at Calm. But instead of doing that, I tested the idea. I talked to different folks, particularly folks who thought it was an exceptionally good idea to be migrating out.

I just tested the idea with them, kind of mining for conflict. And that's the biggest thing I'd say there: things are not the same across companies, but you can figure out pretty quickly what is by finding something controversial and testing for conflict. The wrong way to do this is to actually implement the conflict-heavy thing.

The right way to do it is just to go have a bunch of conversations. That's how you'll figure out what's real at your new company, rather than just assuming that what was true at your previous company applies here as well.

Brett: In that example, how do you pick apart the bias and tendency that people have, which is to keep doing the thing that they're already doing?

Will: It's tricky, right? Because people start doing things for good reasons, and then continue doing them because of what you just said, where there's a momentum. Once you start doing something, particularly something hard and large, changing course takes at least as much activation energy as starting down the course you were on.

So I try to do a couple of things. The first is that sometimes the underlying context or beliefs about something are no longer true, but people continue going down the path because they just haven't stepped back. So first, understand: why did we make the decision to move from a monolith to services to begin with?

And one challenge at companies is they usually don't write down why they made the most important decisions. You would assume it's the opposite, but if you go into a lot of companies, the small decisions are documented, but the really important ones, like why did we go into this business? Why are we shutting down this business line?

Why are we doing this services migration that's going to take, you know, five years? They literally aren't written down anywhere. So you have to go do some archaeology to figure out the rationale. And from that, you can figure out: are those things still true? But the second thing is, sometimes the decision making is still good.

The problem is still real, and the approach is still valid; it's just the execution that's not working. Those are the harder places, where you have to have a bit of a challenging conversation with folks: hey, you're three years in, and this is just not working. The easiest way to get a sense of what is real and not real is looking at examples from other companies: hey, how long did it take Airbnb? How long did it take Dropbox? How long did it take the startups you worked at to accomplish the same task? Just benchmarking against that.

It doesn't give you the perfect truth, but it gives you a reasonable guess. Is this just not working for us for some reason, if we're taking three times longer and we still haven't gotten any meaningful complexity moved out of our monolith?

Hey, there's something that's fundamentally not working for us, and we don't have to understand exactly what it is to say that, against the benchmarks, we're doing an exceptionally poor job, and then go dig into that. So I find benchmarks are really useful, particularly as a new leader coming into a job without a ton of context.

They're also helpful for the team, which might not have many measurement points to compare what they're doing versus what others have done. And this is where I think, as senior leaders, we can actually help the team, even when we're still building up our context at a new company.

Brett: Can you share some more examples of when you moved between companies and were testing priors, or double-checking the things you thought might apply, versus the areas where you had to update your own thinking or models?

Will: Yeah, I think the best one is the Uber to Stripe transition. I came into Uber in 2014, working on infrastructure, and one of the challenges we had was that teams were spinning up new services, new packages of functional code, almost every week.

And we were supposed to help them provision these new services. So from the very first week I started, what I realized is my team was further behind at the end of every week than when they started that week. My first week there, we were two weeks behind; my tenth week there, we were four weeks behind, and we were just getting worse.

So we went down this path of creating self-service provisioning and automation tooling, and I think we did a remarkable job of making it possible for anyone at the company to spin up a new service themselves, and we felt really good about that. We were at the forefront of a movement of many companies doing that.

Then I came to Stripe and I thought, we have the same problem, teams are feeling stuck, we should employ the same solution. But I had just started at Stripe, so I wanted to talk to a bunch of the senior engineers about the idea: hey, here's what we did at Uber. I think we should do it here too.

And one of the most senior engineers, Nelson, who I think really highly of, absolutely hated the idea. I kept trying to pitch him on it, kept trying to convince him: hey, here's how it worked for us. And I could never actually get him to think it was a good idea. And I remember having one conversation with Nelson.

I asked, hey, what could I tell you about how we'd approach this such that you would feel comfortable with us testing the idea? His response was: there is nothing you could tell me about any way you'd roll this out that would make me feel comfortable. At first, I thought, this is a really unreasonable person.

But later, as I dug into it, I came to appreciate that he was right. The context of how Stripe did technology really was pretty different than Uber's. One of the core undocumented strategies at Stripe was: we use a Ruby monolith. Everything was done in the Ruby programming language, with a few minor exceptions.

And the way I wanted to approach this problem just wasn't aligned with that. It took me a little while, and I could have just said, hey, this guy is totally unreasonable, I'm going to ignore him. But I would have missed the key learning, which is that I was the one who was missing context, and I needed to actually refine my approach, not just ignore this person who had significantly more context than I did.

Brett: What was your process as you sort of chased that thread down that helped change your mind?

Will: So a lot of companies have undocumented beliefs about how things work, and I think both Uber and Stripe didn't really document their beliefs about technology management and how broad you want your technology investment to be. Do you want to have one programming language for everything? Or do you want to pick the best programming language for each problem that you work on? Uber was much more in the latter camp of supporting many things; Stripe was much more one tool for all problems: you only have a hammer, everything's a nail. In terms of coming to understand that, to me it really came back to this conviction I had going in that we should move to a more self-service provisioning approach.

I just kept testing it with people, and a lot of people, I would say, did get convinced that my approach was worth trying. But again, there were one or two people, the aforementioned Nelson among them, who I could just never convince. And as an authority figure coming in, it's easy to look at those folks as holdouts, people who are resistant to change, people who are just stubborn, but it's usually those folks who have the most valuable context. Until I could represent his point of view as clearly as I could represent my own, I didn't move forward. And in that case, when I could actually do that, I came to appreciate that his point of view was actually a better fit for the problems at hand.

So really making sure that you can represent opposing points of view at least as well as you can represent your own is, as an executive, something that's quite important, and the most valuable tool I have for these situations. You can usually get buy-in from other executives way more easily than you can from the people with the most context around a given problem, because they're the people who really understand the details. You can't lie to them; they know the truth, and they're the most valuable folks.

Brett: Is there a story that comes to mind that's sort of the opposite? Where you had to convince a holdout or a small team that was enamored with their current solution, and it ended up being the optimal outcome?

Will: You know, usually it is true. One of the advantages as you get deeper in your career is you've just seen the same sorts of problems over and over. So you're pattern matching, and pattern matching is dangerous because you can be wrong, but it's also really fast and pretty effective, because usually you're not wrong when you've done the same thing four times.

So when I came into Calm, we were in the midst of a services migration, like almost every company. We were a year in, and it just wasn't going anywhere. I talked to the engineering teams working on it, and most of them said, hey, we agree, this is just not going anywhere, we should stop doing this.

A couple of folks said, absolutely not, we refuse to stop doing this. We must keep doing this. This is the most important thing for the business. And I said, look, the business doesn't even know we're doing this. It's not like this is a business requirement from the CEO or something.

He doesn't care, so why are we doing this? And as we kept digging in, the answer was ultimately: we think this is the right way to motivate ourselves and the other engineers, to adopt new technology, to do interesting technology work. And I said, hey, that's a valid point of view.

You could have the perspective that the goal of companies is to motivate engineers by doing interesting technical work. It's not how I want the company to work, though, and that's not how this company is going to work while under my leadership. A couple of them ultimately decided to leave to do something else, because they felt so strongly that the role of the company was to provide interesting work for them.

As opposed to the role being for them to help the company accomplish things on behalf of the business, on behalf of the user. And that's okay. Sometimes there's just a values mismatch, where it's not actually that I don't understand the truth or they don't understand the truth. It's: hey, we have different values, and sometimes those values can't coexist.

And that's okay. There's a company for everyone out there; it doesn't have to be your company. By trying to be the company for everyone, you end up being the company for no one, right? It becomes so diffuse, so confusing about what you stand for, that you don't stand for anything.

Brett: As you were sharing some of these insights, you mentioned this idea of writing down engineering principles, or the way that you think about these decisions. Can you talk a little bit more about the role of writing these things down?

Will: At every company you go to, engineers will say things like, there's no product strategy. How come we don't even have a product strategy? And then you go to the product managers and they'll say things like, there's no business strategy. How do we even function without a written business strategy?

And then you go back to the engineers and they're like, there's not even a technology strategy written down. What sort of company are we? So there's this pervasive belief that there's no strategy anywhere, but I've really come to believe that strategy is everywhere. It's just rarely written, right?

The challenge is that unwritten strategy can be really effective, but it's hard to use. It's hard to refine: you'll update the strategy, but people have different understandings of how it was updated. It's really hard to bring new hires on. So you have the senior leaders who come in and just assume their last company's strategy is your strategy, because you don't have your strategy written down anywhere.

It's hard to explain confusing points where people interpret the nuances differently, and you can't improve the clarity of it because it's not written down anywhere. You have to have a bunch of one-on-one conversations, or do a kind of case law, where it's: oh, well, in this case we did this, and in that case we did that, which means the law must be this. But that's super confusing, right? Versus just writing it down.

And I think one of the biggest things you can do to improve the quality of strategy in the industry, not just engineering or product, but all strategy, is just committing to write it down, because once you have it written, people can disagree. They can iterate. They can write a revision that changes the approach. Unwritten strategy is so hard to engage with. And yeah, one of my latest beliefs is that if you just write things down, no matter how bad they are, your strategy will improve almost overnight. But if you don't write it down, it's incredibly challenging to improve your strategy.

Brett: I get the idea that almost anything written down is better than nothing written. But if somebody wanted to do an excellent job of turning that set of amorphous ideas or strategies into something written, what should the document actually look like?

Will: There are two aspects to this. There's structurally, what should the document look like? And then there's how you create that document. So starting with the first: structurally, what should this document look like?

Richard Rumelt wrote Good Strategy Bad Strategy, and I think particularly the first third of that book is excellent advice about how to structure strategy.

Rumelt has three different ideas that come into strategy. One, there's a diagnosis: what is the reality? What is the state of play today? Two, what are the guiding policies? How are you going to interpret that diagnosis into an approach to deal with the problems at hand? And then three is coherent action. Rumelt's biggest concern about strategy is that he thinks a lot of strategy is just spoken words that don't actually get implemented or used anywhere. So he has this idea of coherent action, which is that a strategy is only real if you are specifically doing something today, based on the guiding policies, to make it real. So that's structurally a great definition. Just thinking through examples super quickly:

A diagnosis could be something like: hey, it's 2023 and funding is really hard to get. We have $10 million in the bank. We spent about $2.5 million last year. Our revenue is steady, but it's not growing. That could be your diagnosis.

Then your guiding policies could be something like: as long as we keep our costs flat year over year and maintain the current revenue, we have four years of runway, and we're comfortable with that. So we plan to spend $2.5 million, and we will aggressively prevent any initiative that's going to cause us to spend more money. Those could be the guiding policies.

And then a coherent action could be: we have a monthly meeting with finance to review every expenditure, and if we start seeing an increase in the curve, we'll immediately spot-fix to get back on track. So that would be a very simple strategy for dealing with your costs and your current assets.

But you can imagine much more sophisticated strategies as well.
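To make the arithmetic in that example concrete, here is a minimal sketch of the runway check described above. The figures are the hypothetical ones from the diagnosis, not real company numbers, and the function names are illustrative only.

```python
# Minimal sketch of the runway math in the example above.
# Hypothetical figures from the diagnosis: $10M in the bank, ~$2.5M of
# spend last year, revenue steady but not growing.
def years_of_runway(cash_on_hand: float, annual_net_burn: float) -> float:
    """Years until the cash runs out at the current net burn rate."""
    if annual_net_burn <= 0:
        return float("inf")  # revenue covers spend, so runway is unbounded
    return cash_on_hand / annual_net_burn


def monthly_finance_review(cash_on_hand: float, trailing_annual_spend: float,
                           target_years: float = 4.0) -> str:
    """The 'coherent action': flag any spend increase that breaks the runway target."""
    runway = years_of_runway(cash_on_hand, trailing_annual_spend)
    if runway < target_years:
        return f"Runway {runway:.1f}y is below the {target_years:.0f}y target: spot-fix spending now."
    return f"Runway {runway:.1f}y meets the {target_years:.0f}y target."


print(monthly_finance_review(10_000_000, 2_500_000))
# -> "Runway 4.0y meets the 4y target."
```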

Brett: Can you give an example of an engineering strategy that you thought was particularly well done from one of your past roles? I think it's oftentimes a little more clear what a business strategy or product strategy looks like, and it can be a little more murky when it comes to excellent engineering strategy.

Will: I think that's true. And really, engineering strategy largely comes down to two core ideas. There's resource allocation: how are you investing the headcount and the budget that you have? And then, how do you make decisions as they come up?

The first one is fairly straightforward: how much are you putting into infrastructure, into developer tooling, into the different business lines you're supporting? That's a mix of headcount, vendors, and so on. Within those buckets it can also be which projects you're prioritizing, but resource allocation is the first piece.

The second piece is how you make the important decisions that will come up. When there's a hiring decision, or when people are asking for more headcount, how do we make that sort of decision? When there's a technology doc that proposes we use a new set of technologies we don't currently support, how do we evaluate that decision? Good technology strategy, or good engineering strategy, has to solve both of these problems. They're not super hard; I actually think the hard part is enforcing them and being consistent with them yourself. But one challenge of the last decade is that resource allocation was largely dodged by many companies, because they just decided to spend more and hire more folks. So oftentimes the real constraint was hiring velocity on a given team and your ability to get recruiting aligned with your hiring managers, rather than budget. In 2023, budgets are fixed for a lot of companies that were previously open-ended.

And that's forcing a level of learning that a lot of engineering leadership got to dodge for the past decade. But certainly the second component, decision making, has been equally hard the entire time. I think it was even harder in the last decade than it is now, simply because you had so many resources to allocate against these problems. It meant that a lot of projects were started, a lot of things were staffed, that just can't be staffed in a more constrained budget environment. So I think it's easier in some ways now, and harder in others.

Brett: On that point about enforcing the strategy or frameworks or policies, what are your thoughts on what that looks like when you're doing it well?

Will: So a lot of companies have one of two different models, right? Well, maybe three models. One model is teams just do what they want, and if they do something really problematic, it escalates up to the executive. When they start using a totally new cloud vendor or a new programming language, it will escalate up to you and then you can pass judgment on it.

This is good because it empowers teams to move quickly and to pick their own tools. There's a good amount of literature about how this can make teams move quickly by having control over their tools. It's bad as well, because it means you have no ability to make concentrated technology investments, and you usually find out a little bit too late. But pros and cons: in a growth moment, where you have a ton of budget and you want to optimize for one-in-a-hundred wins, bottoms-up investments can be great.

Then the other model is typically that you either have an executive, a CTO or a chief architect or someone like that, who makes most of these decisions top-down, or you have a community of your staff-plus engineers who are doing this as an architecture group, a technical advisory committee, or whatever you call it. All of these are a little bit tricky. I think bottoms-up is super messy, and messy is good, but messy is hard to manage. Chief architects and CTOs are pretty inefficient in terms of actually getting enough context into their heads to make a good decision on any given point. And then almost every architecture group turns into a committee, where they have to balance their different perspectives, or they're just slow and inelegant.

It's rarely true that a committee gets to the best decision, but it does at least bring all the context into the room. Something I've been trying at Carta, and increasingly believe in, is something we're calling navigators: basically having one person per area, roughly a business unit, who is the CTO from a technology perspective for that area and gets to be the final decider for that area's technology decisions.

Then we have some written strategy that informs how those folks are supposed to make decisions, but they can make exceptions if they want to. And they're accountable to me for the exceptions they make, to make sure they're actually going the direction I want them to. This gets us out of consensus mode and to a point where they're individually accountable for each of the decisions they're making.

Brett: Can you share a little bit more about the idea of navigators? What's an example of what that looks like in the context of Carta?

Will: Yeah, so Carta has a handful of different business units. One of them does fund administration, and in fund administration we have one navigator. This is roughly 60 engineers working on this area. For example, we've started rolling out Kafka as a messaging system in many parts of Carta, and the navigator in fund administration wanted to minimize the role of Kafka for them.

They didn't think it was the right tool for them to use, so they didn't use it. That was something we talked about a little bit; I was curious to understand the rationale. But it's the sort of place where I want to empower the folks with the most context on the ground about how the tech stack is working to actually make the decisions that make sense.

Other navigators have gone the opposite direction and are leaning really heavily into Kafka and the events approach. And that's great too. It's just about making sure the people with the most context to make a high-quality decision in each area are able to do it, rather than trying to roll out a low-context uniform policy across everyone.

Brett: When you think about the topic of running an excellent engineering org, have you spent a lot of time thinking about the hidden variable problem? In any team that is producing a great output, it's hard to actually have your arms around all the different inputs or variables that are driving it. And I find that because of that, getting to ground truth on why things work, or why a team is high functioning or not, can be a bit confusing. An example would be this idea that most teams are too unfocused, so everybody says we need to focus more, but maybe it has nothing to do with focus; it's just a poorly executing team. My take is that there's a lot of poor execution that people think is a focus problem, but it's an execution problem. Or this team is really high functioning and you think it's because of X, Y, or Z, but actually they just have one person on the team with really fantastic intuition or instincts.

And so you can quickly take the wrong lessons from these incredibly complex organisms that are coming together to build software. I'm curious if you have any reflections on that idea across the different teams and companies you've worked at.

Will: First, I think this is a real problem, right? I talk to CTOs and heads of engineering at companies, and the biggest challenge they have right now is that their CEOs are saying we need to drive engineering velocity up, we need to ship more value more quickly. And for exactly the reason you're stating, no one has a whole lot of conviction that we can simply drive velocity up.

There's no lever. For sales, I think this is not totally true either, but if you talk to people, there's this theory that in sales you could just add more people and more sales happen. I think that's an overly simplistic model of how sales works. There are a lot of people who are just orders of magnitude better at sales and get sales done that no one else would have gotten done.

Salespeople are not fungible, replaceable individuals; they're uniquely talented people. But for whatever reason, we really want to look at sales as a numbers game, right? You add more people, there's an average performance. And people look at recruiting the same way: oh, we do five hires per quarter per recruiter.

So we add two more recruiters and we'll hit our hiring targets. Okay, great, perfect. In engineering, there are just really no measures that I find super helpful here. There are teams that have one individual who is pulling a ton of weight. There are teams that have one individual who is getting in the way, and they just can't move forward.

That one individual keeps creating conflict, and just removing that individual causes them to speed up quite a bit. But this is a really unsatisfactory answer. So you think about a lot of engineering leaders who are managing a team of a couple hundred people, and that team might be $50, $75, $100 million of salary a year.

And then they say something like, well, engineering is kind of like art, you just can't predict how it's going to work. And you're sitting there as the CFO, the CEO, or the board, and you hear that and you're like, this guy's an idiot. They're telling me this is art, but I'm spending $100 million on this art each year.

That's not reassuring to me. So how do you actually get deeper than that? The thing I've really been thinking about is diving deep. I think one of the lessons in management over the last decade has been that managers are people leaders, that getting into the details is micromanagement, that it is bad.

I think this is an anti-pattern, because it creates disengaged, context-free leadership, and leadership can be so much more than that, right? So the biggest thing I've been working on in my own leadership practice has been really diving into the details. Not on everything, but figuring out what's the most important area this month, this quarter, for me to understand that will help the business move forward, and drilling all the way into the details. What are we doing? Why are we doing it? What's the data? Where is the actual source of the data? Data lies in a number of ways, right? There's this idea of, can people just pull their own data?

The reason people can't pull their own data is that data lies to them in many different ways, and you have to build an intuition around debugging how the data is lying to you. But drilling all the way into it has been the biggest thing I've found to figure out where the real key movers are.

That doesn't make it easier to recreate them, but it does at least help you understand who the people are you should be listening to, whose judgment you should be relying on. But again, this is helpful for understanding how the system works; it doesn't actually help you in the conversation with the CEO or the board, where they want a lever that's going to drive up productivity.

It's tricky. In a lot of these cases, the way to drive up productivity is to reduce the burden on a small number of really high performers. But if those performers ever leave, your productivity is going to drop massively as well. So it's sort of a high-risk, high-reward strategy, and again, it doesn't really work at scale.

So I definitely think this is a real gap for engineering as a practice right now.

Brett: What's the advice you tend to give to your friends who are CTOs who are coming to you and sort of saying exactly that?

Will: So I think there are two different problems. There's how do you improve execution, and then there's how do you address your CEO who's telling you to improve execution. I think these should be the same problem, but I find they're a little bit different. Starting with the second one first: in terms of engaging with the CEO who wants to drive engineering velocity, I think you just have to start measuring something and giving it to them. And often when people start measuring, there's always a concern, which is: hey, this measurement is imperfect, this measurement is flawed. And that's true for all measurements. So let's talk about story points in a sprint.

And everyone you talk to about story points in a sprint is going to say this is a terrible way to measure velocity. It's a waste of time. And that's true, but if your CEO disagrees with that, that's actually interesting, right? Because it means that they haven't built an intuition about how these things work.

And so you can help them build an intuition by actually measuring story points, reporting them, and then getting into the details and showing how it's not a very useful measure to actually show velocity. People want metrics to show reality, to show the truth, but half of metrics is showing the truth. The other half is educating people to inform their mental model about how the truth works. So when you think about measuring for your CEO or your board, I would encourage people to worry less about measures that offend their own mental model and think more about measures that would inform the mental models of their board, their CEO, et cetera.

That, I think, is the most important goal when you measure upwards: how do you inform their mental model and help them learn about the real substance of the work being done? These measures are going to change every quarter for the rest of time at your company.

In terms of what you should actually do yourself, this is where it's a little bit different. Once you're dealing with this optimization problem, there are real measures that are useful. They're just not useful for understanding how fast the company is going; they're useful for understanding where you should invest to go faster.

So there are two different problems here: the upward-facing problem of how fast are we going, and the downward problem of how do we invest to go faster. And there are a lot of optimization metrics coming out of DORA; there's Accelerate, a great book with a bunch of metrics in it. The more recent SPACE framework does some similar but slightly different analysis as well.

These metrics are really good because they help you understand where problems are, and you can then invest to improve. But they're not going to increase engineering velocity from 100 to 120. They're simply going to say: deployments are going pretty slowly right now, we can make an investment to improve deployment.

And we have confidence from the survey data that companies that deploy quickly tend to be more productive. So it's a little bit unreassuring for a CEO who wants full clarity. But think about your job as an executive as identifying opportunities to make bets and then being accountable for the quality of those bets afterwards.

I think that things like SPACE, DORA's Accelerate framework, et cetera, are really powerful tools for picking the right bets and then determining if they worked.

Brett: Can you give some examples of this sort of context and alignment problem with a CEO? 

Will: So one thing we're seeing more of across the industry is folks saying: hey, I want to understand how many pull requests we're doing, and the size of those pull requests. This is something you see at Twitter, where post-Elon Musk there's this emphasis on just the volume of work being done. One challenge there, to your hidden variable point, is: what about people who are providing indirect value?

People who are coaching others or improving the quality of others' PRs, versus folks who are actually contributing a significant amount of code. Also, there's no direct correlation between volume of code and impact on the product, right? Sometimes more code is in fact worse. So one of the problems I've dealt with was a request from a CEO to understand the volume of software, the lines of code, being written by the different engineers.

This is tricky because it's super nuanced. Within each pull request there are the lines of code, sure, but some of them are bad. Some of the lines of code are actually external libraries being added. Some of these are mostly tests, which are very valuable but can be very verbose.

Some of these get reverted later. So often when people push for specific things, again going back to this idea of building their mental model, it's: hey, absolutely, we'll get this data and we'll talk through it. But then let's drill into it and understand where the mental model breaks.

Let's understand the cases where this is actually a bad proxy. I think where executives, and certainly me in the past, get in trouble is when they push back: this is a terrible way to measure. Instead of saying, hey, let's start measuring here and drill in until we understand the limitations of measuring this way.

And then we can have a nuanced conversation about it. Trying to force people out of the details is never a good way to actually build alignment. You want to pull them into the details and educate them there. This is something I've had to do a couple of different times. Originally I'd be like, hey, this is wrong.

We don't do this. How does this help anyone? But it just looks like I'm unwilling to actually engage with what the CEO needs. I've come to appreciate that the most valuable thing you can do as an executive is educate the other execs on how engineering works, not protect engineering from visibility from anyone else.

Brett: You mentioned this at a high level, but I'd be interested in some other examples of metrics you tend to rely on, or find particularly useful, in the context of running an engineering org.

Will: Yeah, let's talk through a few. The first one that I think is interesting, because it's controversial for many engineers, is reliability. At Stripe, we had a reliability SLA that we extended to enterprise customers, in the sense that there was actually a financial consequence to us

if we dipped below a certain level of uptime. I think it was four nines of API availability; it might have been four and a half nines by the end. I forget the exact details at this point, but we actually had a commitment to our customers, and we tracked uptime. Then we wanted to go a step deeper: how do we get a little bit more nuanced on this?
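As a quick aside on the arithmetic of "nines," here is a rough sketch of what those availability levels translate to in allowed downtime per year. This is generic math, not the actual terms of Stripe's SLA.

```python
# Rough sketch: translate "N nines" of availability into allowed downtime.
# Generic arithmetic only; not the actual terms of Stripe's SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes_per_year(nines: float) -> float:
    availability = 1 - 10 ** (-nines)  # e.g. 4 nines -> 0.9999
    return MINUTES_PER_YEAR * (1 - availability)

for nines in (3, 4, 4.5):
    minutes = allowed_downtime_minutes_per_year(nines)
    print(f"{nines} nines -> about {minutes:.0f} minutes of downtime per year")
# 3 nines -> ~526 minutes, 4 nines -> ~53 minutes, 4.5 nines -> ~17 minutes
```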

One idea was: should we actually set targets for the number of production incidents we have? We had different levels of incidents, high severity, medium severity, low severity, and there was an internal debate that having targets around the number of incidents isn't actually very helpful, because then you start trying to hide incidents.

You start trying to do things that don't make sense. Maybe it's more useful to have a target around the customer impact of incidents, and then you can go all the way down from there. So we still started with a high-level target against the number of incidents by certain severities, particularly reducing the number of high-severity incidents.

And people were frustrated by that. That's okay. Again, it comes back to this idea that measurements aren't perfect. The goal is to improve the mental model of how companies and leaders think about these areas, and then we can move to more refined measurements over time. So that was one where there was a lot of conflict, but I would encourage people not to worry so much about the perfection of metrics, and really worry about what's something useful that can start refining the mental model of everyone at the company working together.

So starting with the number of incidents: is that the perfect thing to measure? It's not. But is it useful? Yes. Is it a great platform to refine the company's mental model of how to measure reliability? Absolutely, yes. You can be ideologically pure and say we shouldn't measure something, but all that does is slow down the learning of the organization around you; it doesn't actually help you accomplish your goal. So reliability, and I think uptime, is a simple, useful one to measure. Then a number of things around the development process: build velocity, time to run tests, testability. These are all useful measures that I think companies should have.

When you drop below a target, it usually means you want to invest a little bit more in improving those areas. Those are, I think, the most useful ones. Something I've started adding as well is just the number of ships: the number of large things that we've shipped as an organization.

And again, it's a controversial thing to measure, because people will say, okay, these two projects are different sizes, so we shouldn't count them the same. And I actually don't even care about that. It's just about holding yourself accountable to: how frequently are we shipping something?

And do those things actually have an impact when we ship them? Then you get a sense over time of whether that number is going up or down, and if it's going down, that's a great thing to diagnose: figure out why it's going down. But I've really gotten much more relaxed, as I've gotten deeper into my career, about measuring things that give directional signal I can then drill into.

Earlier in my career I was very pure about measuring the perfect thing. I just found that, one, it took forever, and two, you can't guide people to the perfect thing when they don't have the context for why the earlier, simpler measures don't make sense. So I'm much less of a purist on this topic than I once was.

Brett: I wanted to go back to something you mentioned a few minutes ago, which is this idea of anti-patterns in engineering. One example you gave is the confusion between micromanagement and, as an executive, being close to the details, and how those two things can get conflated. Maybe you can pick that apart a little bit, and then we can talk about some of the other anti-patterns you've noticed.

Will: Typically when you first start managing, you're coming from a functional expertise role; maybe you were the staff engineer or the tech lead for a team, and then all of a sudden you become the manager for that team. So you're in an interesting place where you have the most technical context about everything the team is doing, and you're also the people manager.

So you now have the authority to set work, to tell people what to do. In that moment, the advice people get almost constantly is: get out of the details, step away from the code, it's going to cause you to manage the details too much, it's going to make you a bad manager. Your job is to think about the people, the direction, and the stakeholders, not to worry about the code. I think for some people this is really good advice. Some people are so specific in the way they think about things and the way they want things to work that they can't tolerate people doing it any other way.

If you're that person, then stepping out of the details to let your team actually make decisions makes sense. First, though, that person is hard to be managed by as well, so I'm not sure that's the right archetype for a manager on average. But second, it's really hard to be a good manager if you don't know what the team is doing in the details.

So this is a place where advice that's well-meaning for some folks is, I think, over-applied, and sometimes used by a team trying to push you out of the details: hey, giving us direction about the technology is actually micromanagement. That can be true, but it's often not true. We take that kernel of an idea, that we were taught early in our management careers to get out of the details.

And if you keep that kernel with you as you get more and more senior, you can start thinking of yourself just as a resource allocator: all you do is have a budget and allocate that budget across different teams, then check in on the quality of that allocation periodically. That's certainly an important part of management, but it's not the entirety of management. I think a lot of managers and executives get caught up in it, thinking that when they're managing an entire organization, their job is just to make sure that the resources are allocated and that people feel supported.

These are important parts of the job, but they're by no means what an extremely high-functioning executive looks like, because an extremely high-functioning executive understands the domain they're operating in at some level of detail. They understand what the levers are that could really transform the business.

They understand the sorts of business outcomes we need to accomplish, and the time frame for them, to actually make a more valuable business and to serve our users better. As you get too far out of the details, you just become a bureaucrat. And I think a lot of executives are taught to become bureaucrats rather than actually engaging in the details.

So instead: you shouldn't micromanage every detail, and you probably shouldn't use the term micromanagement, but it's often valuable to develop a set of different leadership styles that you can pick from. In cases where there is no clear path forward, or where the folks who have the context around the path forward are violently in disagreement, it pays to get into the details and make the decision yourself.

Then it's not picking one person or picking another. It's not, oh, I'm taking the product perspective or the engineering perspective. It's actually building your own conviction around what you need to do, and making the decision there. It shouldn't be your default management style as an executive, but it's certainly a necessary management style to be an impactful executive. So this is where I think the simple idea of, hey, don't micromanage, has a useful kernel, but people over-apply it and undermine themselves as executives as a result.

Brett: What are some other examples of these types of anti-patterns, or areas where people tend to get tripped up?

Will: We talked about some in terms of measurement. There are a lot of measurement anti-patterns around holding out for the perfect measurement, and so measuring nothing instead of measuring something a little bit imperfect but useful. We already talked about those in detail. I think really, anytime you apply a rule too universally, it turns into an anti-pattern.

For example, something I believe pretty strongly is that teams should generally be one manager to six to eight engineers. I think this is a really useful sizing mechanism. I even have this idea of organizational math, where if each team of six to eight engineers has one manager, and each manager of managers manages something like five managers,

then you can recursively apply that up to figure out the number of managers you need for the ideal setup of your organization. I think this is actually really useful, but where you can get into trouble is applying it too rigidly. These are just quick rules to help you think about an organization and what the ideal ratios are; they're not perfect.
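As an illustration of that organizational math, here is a minimal sketch that recursively applies those rough ratios. The spans of control (roughly seven engineers per manager, five managers per manager of managers) are the rules of thumb from the conversation, not hard requirements, and the function name is made up for the example.

```python
# Minimal sketch of the "organizational math" described above: given rough
# spans of control, estimate how many managers each layer of the org needs.
import math

def managers_per_layer(engineers: int, eng_per_manager: int = 7,
                       mgrs_per_manager: int = 5) -> list[int]:
    """Manager headcount per layer, bottom-up, ending with a single leader."""
    layers = []
    count = math.ceil(engineers / eng_per_manager)  # line managers
    while count > 1:
        layers.append(count)
        count = math.ceil(count / mgrs_per_manager)  # managers of managers
    layers.append(count)  # the single person at the top
    return layers

print(managers_per_layer(200))  # -> [29, 6, 2, 1] for ~200 engineers
```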

So it doesn't mean you can never have a team with two engineers and one manager. I think whenever people get too structured and rigorous with any rule, they run into trouble. That's one reason why every policy I write now has a last line, which is: exceptions approved by the CTO. Because the reality is, in the details, sometimes our general policies don't make sense.

And we have to have the openness to think through whether a different approach might make sense in a very specific case. Another anti-pattern I see a lot is related to the strategy question we started with, which is executives coming in, assuming their previous company's context applies locally, and just rolling out what they're familiar with.

And then, if that doesn't work, often doubling down, maybe bringing in senior leaders they've worked with before who are comfortable working the way that worked for them previously. But this is just unnecessarily painful. Is it not easier to change your leadership style and your approach a little bit than to change the entire company around you?

It's almost always easier to change yourself than to change the entire company. But a lot of executives are so fixed on doing it their way that they can't adapt themselves at all. To me, one of the most predictive traits for whether an executive will last more than a year at a company is whether they come in and start with: how can I change myself to fit, versus, how can I force my entire organization to change to fit working with me?

It's hard to move organizations and you shouldn't do it if you don't really need to.

Brett: There's kind of a tension here. I think most rules are about scaling intuition, this idea of creating some sort of shared consciousness.

And then, to your point, a lot of times, maybe the vast majority of times, you can have a policy that works. But you're trying to create this high-quality mechanism around people being curious and looking for the exceptions.

So I'd be interested: do you have any other thoughts on the implications of the fact that everything can't be boiled down into a simple rules engine?

Will: I think figuring out where the exceptions are handled is really important, and doing that explicitly. We talked about the navigators before, and I generally believe that decisions are inconsistent across large groups of engineers; you need to have someone who can interpret the strategy consistently for each area.

And to your point, judgment matters; different people will do it differently. You want to actually pick the people you have confidence in to make the right decisions, stay aligned with you, and be successful. So navigators is not: we'll focus on a consensus-driven mechanism to do this.

It's: I trust Shauna, and Shauna is going to do it for this area, because she's the person I have conviction in. So it's really about picking the people you have conviction in, routing decisions there, and being clear about what sort of decisions should route to Shauna, versus route to Stephanie, versus escalate up to me for discussion. To me, just being really explicit about who is approved to exercise judgment goes a long way. And again, this is a bit at odds with the pure management view, which is that companies should be resilient to the departure of any individual, that people should not be load-bearing, that there should be redundancy.

But with judgment, to your point, you don't just have two of the same person hanging around somewhere. You only have one of them. How do you make good use of them? And if that person were to leave your company or move to a different role, you have to find another person with good judgment, but there's no way to just have two of them.

Organizations, even very large organizations, are really the culmination of a few individuals' judgment, and there's just no way to escape that, even if you have a huge budget and try to.

Brett: You've talked about this in various ways, but I'd be interested in hearing you more concretely articulate what an excellent engineering executive looks like. Or the classic question of: what's the difference between the top 1 percent, 10 percent, and 50 percent, given everything you've figured out thus far?

Will: I think my perspective continues to evolve a little bit, but one of the core challenges of being an engineering executive is that you have at least three different, opposing hats you have to wear. They're not wholly in opposition, but they're subtly different.

You have to be a member of the executive team and think about how you actually lead the business forward. Sometimes that means making decisions that are quote-unquote bad for engineering overall: that could be reducing the budget for engineering, that could be not doing promotions in engineering to hit a financial target, that could be moving to a business unit model where there is less engineering voice in the room on certain topics, et cetera.

Then there's engineering management leadership, where you're actually trying to figure out a structure and a set of processes and policies to run the organization overall. And then there's the actual engineer: representing technical excellence and execution as well.

And I find a lot of executives are sort of lean a little bit too heavily on one of these dimensions. For example, there are many engineering leaders out there who are functionally like the head of engineering management. But aren't really the head of like engineering, by which I mean, they don't represent the senior IC, individual contributors, very, very well. They really look at it shortly from a management lens. And sometimes you'll get folks who are kind of very engineering management and kind of engineer centric, but don't think about the business perspective at all. 

And so I think to be like a top tier executive, you have to be willing and able to put on each of these hats, and to take off each of these hats as necessary and only where the appropriate hat, even if it kind of puts you at odds with kind of the other components of, of what you're doing. And that would be like, the 1st and most important piece of advice I have on this. Certainly there are other layers of this.

Like, I think, you know, understanding the domain, like, quite deeply, like, the product, the users. Not thinking about it from constraints, but thinking about from the product perspective first without the constraints and then layering constraints in second. Keeping deep curiosity about how other functions work, not having like a, kind of an arrogance to, you know, engineering supremacy were kind of QA is a second tier or sales is a second tier, but really thinking of all the functions as part of like an organization working together.

There's a lot of details, but I'd say the first and foremost thing is keeping those three teams in mind: the executive team, the engineering management team, and then the engineering team. Putting those hats on and off, and keeping the ability to do that, is probably the most important thing for a top tier executive in my mind.

Brett: So if a CEO of, I don't know, a 500 person software business came to you and said, I'm trying to build my own intuition around whether our CTO is good or great, you know, you might sort of explain sort of these three sub dimensions. What else might you tell them that would help them sort of figure out, uh, the performance of this person or, sort of how to benchmark them?

Will: It's a good question. I mean, really, where I would start would be talking to the peers, the senior managers, and the senior engineers separately. And I think you'll get very different signal from each of those groups. But stepping back to think about the CEO's problem: typically, when a CEO of a 500 person company feels their CTO is struggling,

they're probably running into one of two different things. Either the individual CTO is causing friction with other people on the team for some reason, like their peers. Or two, the shipping velocity, the way that they're able to ship work, is just not meeting their expectations.

And the first one is pretty easy to debug just as a general manager; there's nothing special there for the CTO versus the others. Shipping goes back to some of the engineering velocity metrics we've talked about before: you can immediately tell shipping is not happening at the pace you expect.

But you can't quite figure out why that's true until you dig deeper. And so my advice in that case would be actually drilling in to get a sense of why people think shipping is not going quickly and coming up with conviction on why that is. As you dig into shipping at most startups, if you go to engineering, engineering is going to start with one of two things, maybe three. Either: we're understaffed, we just don't have enough staffing. Okay, sure. That's not a super helpful diagnosis, but you can say that.

Two, they might say something like, we have so much technical debt that we are stuck. And then you can force that into, like, what's a specific project, to improve the technical debt, and why haven't we actually resourced that project? Are we not even thinking about that? Have we actively decided not to?

Or third, they're going to point fingers at another function. Like, oh, product is screwing us over. Oh, business development is screwing us over. And for each of these, it's like, okay, you think product is the problem, then go drill in there. And I think if you really drill in two or three times, you'll probably get a pretty convincing answer about whether your executive is doing something useful, or sometimes there's another executive or another function that's struggling. But I really think it's just drilling into the details: getting three or four diagnoses from different folks involved and then, okay, they said product has no roadmap.

Let's go look at the roadmap. And then if you think the roadmap's terrible, then hey, maybe there is a product problem. But if the roadmap is there and it's pretty good, and they're still pointing at the roadmap, you probably have an engineering executive problem. And so I think just drilling in is the only way to really answer these problems.

Anything else is going to get you a kind of consensus view, but you don't need a consensus view on executive performance. You as a CEO need to have a conviction view on whether they're actually good or not.

Brett: When you think about your own role as an engineering executive, what are the big ways that you've changed or evolved? And so, if I were to watch you running a team, 5 or 7 years ago, and watch you running a team last week, what would be some of the key tenets that are similar? And then where are there sort of meaningful departures?

Will: I think, like, the biggest departure for me is that I used to think a lot about, so there's this, like, graphic from a long time ago about kind of the, uh, managers who are umbrellas and kind of protect their team from interruptions and managers who kind of direct problems at their team. And I used to be a big believer in the umbrella concept, where it's like, Hey, we should protect our team from the reality and problems around them so that they can focus.

But as you work with more senior teams, and also just as you work generally, I've come to believe that protecting your team from reality really just makes things worse for them long term. And so I try to just throw them into the messy details much more than I did in the past. In the past, I used to think, oh, I'm protecting them, I'm keeping them happy, I'm energizing them. But now I think I'm lying to them if I don't just give them the details. Then they can come to terms with the messy truth as it happens, versus having a huge compressed chunk of truth to deal with, like in a year when they realize that all these things have been happening.

So I try to buffer information way less than I used to, and that means that even if it's disappointing for folks, they get to process it bit by bit rather than dealing with a huge ocean of mess all at one given moment. So that's one thing I've changed quite a bit. Another thing is I've tried to get much more intentional about being the manager for both the managers but also for the engineers.

And so, at Carta I mentioned the navigators, but having groups that I'm actually spending time with, doing offsites with, meeting with regularly, who represent different perspectives, rather than thinking of myself as running an organization that has engineers on the leaf nodes or something like that. How do I actually make sure that they're fully represented, and in my head not just when they're escalating, but all the time? Those are two things that I've changed a little bit over the last five years or so.

In terms of keeping things the same: still a big believer in communication, a big believer in written communication. A big believer in bringing folks into the room so that they can represent themselves rather than having really small decision groups; how do we have large decision groups where you hold people accountable for showing up in thoughtful and effective ways? A big believer in having the different conflicting stakeholders in a room to work things out live, rather than doing it asynchronously or managing information so that I disappoint individuals a little bit less than they would be if they actually got the truth. I think getting to shared conviction around reality is still something that I believed in then and believe in quite a bit now. I'm sure there are others, but that's probably a good starting point.

Brett: What, what's some of the, the sort of subtle things you've picked apart as it relates to communication?

Will: So, there's a lot of different formats for communication. One of the things I've always envied is that people who go to, like, a McKinsey or something like that get trained formally on how to communicate. There's the Pyramid Principle by Barbara Minto, which I don't know if they use anymore, but historically this is how they trained people to communicate in presentations and written documents, et cetera. But in tech, we don't really train people. We don't train people about anything.

In particular, we don't train them about communication, right? And you know, it's a little bit harder to train people how to write software in a concise, predictable way, but for communication, I think there are just a few different templates.

We can actually teach people those, and they get them a surprisingly long way. And so a frequent problem I see folks running into is that communicating to executives versus communicating to peers or non-executives is pretty different. If you went to a standard university or high school or whatnot, there's a structured way of presenting things: you start at the beginning, you work your way through, and you get to the conclusion at the end, with all the supporting evidence presented as you go.

But with executives, you really have to do the opposite, right? You have to get them to the conclusion immediately. If they agree with you, they don't care about supporting evidence, because they can already come up with it in their head. And so I think a ton of people get really surprised when executives hate their presentations and keep skipping over the entirety of it.

But when you present to execs, you should start with the conclusion, and then if they already agree with you, stop presenting. There's no reason to keep presenting. And a lot of people just want to keep going, and they lose their listeners immediately. So that's one. Another piece, though, is sometimes you start doing this presentation to executives, and they come back with this really specific idea where, in your head, you're thinking, this idea is terrible. Why are they suggesting this? This makes literally no sense.

And then I see people often kind of get into, like, conflict with executives who are communicating these ideas. This doesn't really accomplish that much. And so the thing is, um, executives who are responding to a presentation often have a ton of context that you don't have.

They also don't understand your presentation super well, because they have a lot less detail about what you're working on. So almost always when you get these really offhand, random ideas from executives, there's a kernel of insight there that you can extract by asking them more detailed questions.

But if you fight them on it, you're like, hey, this is a bad idea. You know, the reality is it is a bad idea. Everyone already knows it's a bad idea. But you're not there to prove the executive wrong. The question is: okay, they have a key idea that they're trying to get across. How do I pull that idea out?

And that's the key thing. It's not that you need to agree with them or disagree with them. It's: how do you actually understand the piece of context they have that you're missing? And I think this is an interesting piece of advice, because people hate this advice. I call this extracting the kernel.

So when an executive gives you feedback that you don't understand, try to like pull out the insight from it. Because people are like, Hey, like the execs should just learn to communicate better. Like, why is it my responsibility to communicate effectively with them? They're an executive. They're paid more than I am.

They should communicate better to me. And it's like, Hey, that's true. They should be better. But like sitting in your chair, telling them to communicate better is like not a solution, right? That doesn't solve anything. That's not a way to move forward. That's a way to like push responsibility on them and to stay stuck.

And so it is true. Maybe it's not fair that executives are allowed to communicate poorly. But if you actually want to be a high performer and improve, like, you have to take the initiative based on the actions you're capable of taking. And what you can do is extract the kernel. What you can't do is, like, magically make the executive a better communicator.

And often I'd say, what people think good communication is, and what's possible when you're seeing six different presentations a day with, like, two minutes to prepare before each one, are pretty different things. It's just hard to have the full context in your head. And so the exec often has the choice of communicating an idea that's confusing but has an insight, or saying nothing at all.

And so I really push people to value this sort of feedback more and then find a way to pull the kernel out, rather than focusing on the exec communicating poorly. You're right, they are, but that's probably the best you're going to get.

Brett: I think sort of one of the ideas that you're getting at that's really tricky is figuring out the right altitude to fly at: what's too little context and what's too much context? And the closer somebody is to your day to day, what you're communicating to them would obviously be different. And so do you have any thoughts on figuring out how to fly at the right altitude? Or if you are communicating to execs or communicating up, what doing that well tends to look like?

Will: I think that the best folks presenting to executives start super zoomed out and then zoom in to their specific problem over the course of, like, 30 seconds. You know: in 2023, our focus has been on driving more revenue; within that, we expect 70 percent of that revenue to come from business line A.

The biggest project to actually move that revenue forward for business line A is this project X that we're presenting on, which is currently blocked on Y. Then you have the full zoom in, from oh, they have no context, to the level that you're at. You've brought them to the altitude you're trying to operate at, and then you can go into the proposal.

And the substantiating details behind the proposal as well. I think people who can do this, and do it in literally 15, 20, 30 seconds tops, are super effective at bringing teams along with them. People who want to take a couple minutes to do it just lose the listeners. So you have to do it really, really quickly.

And you can practice getting fast at it. And, you know, the execs have a lot of the context, so you don't need to go into great detail. But a little bit of framing just to agree on the altitude you're at is super helpful. Like, why are we even talking about this? You have to give them the answer to that question quickly.

I think that's really helpful. Two, almost every exec has a couple of people who they're highly aligned with, who are a little bit more available than they are. Maybe it's their executive assistant, maybe it's a chief of staff, maybe it's a senior manager they work with closely. And just test with them. It's like, okay, what's the right altitude? I want to come in at this altitude. And then that person can almost always be like, wrong altitude, do this instead of that. So I just think testing a little bit with folks who have context, but aren't the exec themselves, is the easiest way to calibrate.

Certainly after you've been working with a given exec for a while, you don't need to keep doing that.

Brett: Other than just the opposite of what you shared, you know, in the thousands of meetings that you've been into that are similar to this, over the past decade plus, what are the patterns of where things tend to go badly? Or meetings that tend to get derailed in certain ways, can you boil it down to a handful of things that tend to always happen?

Will: Oh, yeah, there's a lot of these. So one, if you go in with total conviction, like the decision's already made, and you're not willing to tolerate feedback, those meetings always go badly, because execs think they're there to give feedback. Two, I think some people want to engage with every piece of feedback immediately. Usually you can give a one sentence answer. It's like, hey, why are we doing X instead of Y? It's like, oh, because Y is 60 basis points and X is 12 basis points. So you can give a quick one, but you can't go a lot deeper than that. You have to be like, hey, okay, we need to spend more time with this, we'll come back to you with more materials on this point. Got it. Noted. And then we're going to move forward. So you have to manage the attendees a little bit. And you have to be open to executives just totally derailing. Sometimes you try to manage the attendees, like, oh, we're parking-lotting that, but we're going to come back to you.

And they'll be like, absolutely not, we're going to drill in on this. And you just have to accept that, in that case, you're going to drill in, and you can't get to the actual presentation you want if they refuse to. So I think it's just being open: you have a plan for how you want the meeting to go.

But sometimes it's not going to go well, it's not going to go how you want it to, and that's okay. You have to be able to think on your feet a little bit as it goes awry.

There's this idea of the three Amazon answers, and I never worked at Amazon, so I don't know how real this is, but the three Amazon answers are something like: a piece of data, I'll get back to you later today with the answer, and then there's a third one I forget. But they're all very concise and clear; there was no maybe, no kind of long winded story. It was point, point, point. Yeah, I think the biggest thing is when people go in and they're just not willing to get derailed, that's when it goes really off the rails.

I think something that people underestimate is that executives see so many presentations that they can usually consume slides directly. Like, if you give me the slides, I can work through your deck in probably 2 to 3 minutes, versus you taking 45 minutes to present this thing to me, and I already know the questions I want to talk to you about. So for people who really want to run to a schedule, it's just hard for the exec to stay engaged when they're going through so many presentations: they can work through the content in 2 to 3 minutes, and then they have to spend 45 minutes just listening as you rehash it slowly. Man, people try to own the pace rather than letting the listeners own the pace.

I think that's another one: usually people aren't rude to them, but they just kind of get distracted and they're, like, on their phones or something. If you want a high quality exec meeting, you have to let the exec own the pace.

Brett: When you're getting feedback on whatever you're presenting, do you generally find it's ideal to debate those points in the meeting? Or is there sort of a range of situations? One is you just write down the feedback and loop back on it, and the other is you could spend 20 or 30 minutes kind of debating or prosecuting it in real time.

Will: I think there's two different types of feedback. There's feedback that makes no sense, and you need to extract the kernel in those cases; I think asking some questions, but then taking some time to think it through and come back. There's also feedback where people do have shared context, they just disagree.

And I think that second case is really valuable to spend time on, because then you have the context in the room; you just don't agree on what you should do. And that's where talking through it live is incredibly valuable. But in cases where people are introducing new context and don't really have a lot of shared understanding, you just don't have enough information across the group to make a good decision there.

So for those, I think it's: come back with the context and then try to have the decision. For the latter, I do think talking things through live, when people do have the shared context and they just disagree, is incredibly valuable, and it's almost impossible to get to the end of these things in a chat on Slack or something.

You really do have to talk it out live, at least for important decisions.

Brett: It's sort of an adjacent question. If I were to watch the types of topics and things that you work on with engineering executives that report to you directly, do you find that there's sort of an 80/20 rule, where most of the things are 1, 2, and 3 and you just kind of keep going back to the same types of things? Or does it tend to be very long tail oriented, and you can't boil it down to, you know, the script you keep coming back to 80 percent of the time, or something like that?

Will: So, I would say for executives, or for my direct reports where we're working together well, we're rarely coming back to the same topics. I'd say it's usually a bad sign in my one on ones if we are working through the same things repeatedly over time. It's fine if something comes back up, like, six weeks later: you know, hey, we're having a performance problem with this person, and we're doing this strategy right now.

Six weeks later it's like, hey, here's the outcome of that, what should we do next? That's great, totally makes sense. But if every week it's more of a status-y situation, or we aren't actually working through problems, that's usually not a great sign in my opinion. I think the best one on ones for senior folks are actually solving problems together, or discussing how we might solve a problem together.

So that's my strong preference. I do think this is a little bit more true for senior folks, and maybe a little bit less true for folks who are earlier in their career, where it's a little bit more about helping them build fundamentals. But I really think the most interesting work that two senior folks can do together is working on a problem together. That's where you get not generic direction, but specific, real alignment, and learning from each other about how you try to solve something.

Brett: Can you expand a little bit on sort of the early-career building of fundamentals? In the context of engineering, what does that actually look like?

Will: So engineering is a bit of an apprenticeship model, where you learn by seeing other people do things and by getting feedback. Obviously none of us have worked in, like, a blacksmith shop, but you know, you try to build something, then get feedback: hey, that's misshapen, here's how to do it next time. And you just keep doing it and you get feedback each time you do it. You also see other people who are really good at doing this, the more senior engineers around you, and you get feedback from them. And that's how you learn. Like, we kind of joked earlier, this isn't an industry that teaches you how to do things.

It's an industry that gives you opportunities to learn how to do things, and this is largely through this apprenticeship model. I think this is also true for managers working with engineers: how do we engage with different folks on the team, how do we plan, how do we create space? How do we have fruitful conflict rather than negative conflict, conflict that actually builds trust rather than diminishes it?

How do we work through priorities? How do we actually create space in the roadmap for testing or quality investment if people are saying we need to move faster? Things like that. And so I think it's just continuing to think about the fundamentals and helping steer people back into them.

And helping people recognize when they're skipping something they need to actually spend some time on. A lot of folks, you know, we're in an industry where you can be a senior engineer in four years in some cases, but what does that even mean, to be senior, if it takes four years to get there?

But then two, it means people skip a lot of important skills. You can't develop everything you need to be an effective senior engineer in four years working at one company; you need to see different views on these different problems to really develop mastery. And something you as a leader can do is help people recognize where they have gaps to go back and spend time on, not because they're bad, just because there's always room to develop ourselves.

Brett: Do you do explicit training in the context of building engineering teams, or is most of it sort of this apprenticeship model where you're training by kind of working through problems with people?

Will: I do both. For really senior folks, typically it's much more the apprenticeship model, because I think the problems they're having are usually very specific and precise problems with, like, this individual, that individual, this business priority, and the intersection of those three.

I do trainings for folks who are not quite as senior. And this is almost my only way to get access to that group in batch. I can't do developmental one on ones with that entire group; there are just too many of them. Even at Calm, there were still a decent number, and to actually meet with them frequently and have full context on what they're doing was just super time consuming.

So I do trainings, often picking an important topic. And this is not just for managers; I think for the kind of staff-plus engineers too. What are the important capabilities, and then thinking through a discussion around those and some things for them to try. I do think a mix is good.

And some people I find love trainings and some people I find hate trainings and that's okay. That's why having a mix of both is so valuable.

Brett: Of the different trainings you do, are there a few that tend to have the biggest impact?

Will: I don't think I know the right answer to that. I think usually the trainings I focus on are reactions to what's going poorly. So, for example, a training I've done a number of times is helping hiring managers build conviction about hiring folks. A frequent challenge for hiring managers is that their team never actually aligns on any candidate: for every candidate, some people are like, yeah, pretty good, and some people are like, oh, they didn't like that the candidate used JavaScript instead of Java or something. You're like, well, I don't know if that actually matters. And so early managers in particular often get thrown off; if anyone on the team is skeptical, they just don't know what to do.

And they just kind of give up. So I have a training that I've done a number of times on helping managers understand that their job is to build conviction, not to take the average of everyone else's conviction, and that they should override their team if they have conviction and they don't think the concerns that are raised are meaningful.

And some of those concerns are meaningful: if it appears the candidate has significant bias in some direction or another, that's meaningful, and you should listen. But you also have to know that some interviewers just aren't calibrated, and you have to be willing to override them to actually get to an answer.

So that's one I think is really important, because I've seen so many managers feel uncomfortable with that. For more senior managers, another one has been team composition: how do you actually divide your teams? A lot of senior managers, when they first get into that role, have a lot of small little teams, but small teams are not resilient teams.

If you lose one person, the entire team kind of breaks. So how do you think about having larger teams, and how you position people within those teams, to deal with that? So that's another one, just thinking about composition, because this is a group that's never had to think about composition before.

Another one, for kind of staff-plus engineers, is about building consensus. How do you actually do that? And why is that part of the job, rather than something that managers do? How does that actually work? Those are three that I think I've seen work pretty well when I've given them.

Brett: Maybe a place we could end with is, where have you found the most inspiration on these topics? Are there specific people who have shaped your worldviews? And what are the specific things they imparted on you? 

Will: I'm a big learn-by-doing sort of person. I think advice is always just so abstract, and it's in the details that you really learn. So I'd say the biggest lessons for me are really from the places where I have done things poorly, and that has forced me to introspect a little bit and understand the gaps in my work. But as I've gotten further, I've also gotten better at being able to learn the lessons from others who have made similar mistakes before me.

And so my network, I have a learning circle of CTOs and VPs of engineering that meets every other week and does a shared learning format, and that's been super valuable for me. But there's no one book, no one blog or one person for me. It's consuming widely and comprehensively.

Like, I've read many different management and engineering books, I read many blogs, I listen to many folks speak. And it's kind of triangulating through all of that and building an encyclopedic set of data points about how things have been addressed elsewhere, which I've been able to merge with my own experience and learn from.

And, you know, earlier you mentioned that the details matter, and that is, unfortunately, true. The details do matter. But for me, it's having conviction in my own experiences, reflecting on where I've screwed up in my experiences, and then having this broad set of data points from others' experiences; that has really been the best way to navigate my career.

Brett: How do you run the biweekly learning circle?

Will: It was a little bit more structured when I started, but the structure has largely stayed the same. Each week we have between six and 10 folks show up, a typical week six or seven. We start out with an around-the-world where each person has about one minute to say what they're focused on this week.

And what's the topic they'd like to talk about? The focus-on-this-week part is just a kind of share-out, a little bit of emotional connection. Like, hey, I'm dealing with this problem, I'm getting screwed over by this project, or whatnot. Or maybe they actually hired someone they're really excited about.

It can be good news too, although often it's a little bit more the challenges that are top of mind. And then, what's the topic that would be helpful? So, you know, we talked about engineering velocity in this chat, and engineering velocity, and CEOs pushing for engineering velocity, is a frequent topic that comes up.

Friction between product and engineering, particularly product leaders and engineering leaders, is a topic that comes up quite a bit. The job market is something that's come up quite a bit over the last year, as it's just been an incredibly strange job market for folks, even in executive roles.

I think there are people in there who are not getting traction after almost a year of searching, despite having held many senior roles before. And then it's kind of like, when do you calibrate your expectations up or down based on that?

All sorts of things come up. But I've actually found this to be a pretty good format. So we collect the different topics we want to discuss, then I run the group and I pick: hey, this week we're going to do a speed round on these three topics, we'll spend five minutes total just quickly sharing thoughts about those three topics, and then we'll spend 20 minutes on this topic and this topic.

So it's actually worked pretty well for us so far.

Brett: And how long have you run that?

Will: So this group is, I guess, three or three and a half years old, something like that. So I've been running it for a while. And I think one of the interesting things is people kind of come in and out, where typically I add another four people every six months, or people's schedules change and the time doesn't work out for them. A number of people have dropped out of exec work to work in less senior roles, or to just take a sabbatical or whatnot. So I find adding four-ish people every half keeps us at the right number of folks. And it's just nice to keep seeing new people with new problems and different perspectives, which keeps it fresh as well.

Brett: Awesome. Well, thank you so much for joining us.

Will: No, thank you. And, you know, really, really fantastic set of topics.