From product roadmapping to sprint planning: How to ship software at scale — Snir Kodesh
Episode 66

Today’s episode is with Snir Kodesh, Head of Engineering at Retool, which is a development platform for building custom business tools. Before joining Retool, Snir spent six years as a Senior Director of Engineering at Lyft.

In our conversation, we cover some of the biggest differences between leading engineering teams for a consumer product versus an enterprise platform — and the things that are consistent across both orgs.

First, Snir pulls back the curtain on the software development cycle, starting with setting the product roadmap while balancing a diverse set of customer needs. He outlines who’s in the room to represent product, engineering and design, and what those meetings actually look and sound like.

Next, he dives into how engineering actually starts taking that product roadmap and turning it into a plan of action using the "try, do, consider" framework. He makes the case for leaning on QBRs instead of OKRs, explains why scope creep gets a bad rap, and shares his advice for getting better at estimating how long a feature will actually take to complete.

Finally, we zoom out and cover his essential advice for engineering leaders — especially folks who are scaling quickly from leading a small team to a much bigger one.

You can follow Snir on Twitter at @snirkodesh

You can email us questions directly at [email protected] or follow us on Twitter.

Brett Berson: Well, thank you so much for joining us.

Snir Kodesh: Thank you for having me. Really excited to be here.

Brett Berson: So I thought we could start off by talking maybe more broadly about your experience as an engineering leader. I was curious if you could compare and contrast the last two roles that you've had. Previous to Retool, you were at Lyft, and now you're at Retool, so maybe use that as a jumping-off point.

Snir Kodesh: Yeah, that sounds great. Exactly as you said: from Lyft to Retool, from consumer to enterprise, and from marketplace to enterprise. I think one thing that really stands out to me is that if you think about the product surface area, and the product's happy path, that was really clear at Lyft, right?

You have somebody come in, open the app, look around, hopefully convert at every step of the funnel, request a ride, and ultimately get dropped off at their destination. That happy path is really hard at Retool, specifically because it's an enterprise platform; there are infinitely many permutations of what you can do with the platform.

You can introduce your own code, which is a whole other set of branches that are very hard to anticipate. And so from an engineering standpoint, observability and operations are really hard when you open up those infinitely many permutations. That's certainly one massive difference.

And it's probably true of most consumer-versus-enterprise-platform dynamics and switches. I would say the challenges on the engineering side were also quite different. At Lyft, I was working on the marketplace side, building some of those large distributed systems that try to clear hundreds of thousands of drivers and tens of millions of rides: many, many orders of magnitude more volume than you would see on the enterprise SaaS side. Processing that data really efficiently, finding globally optimal solutions, and doing all of that in sub two seconds was really, really challenging.

It's a large data processing and compute challenge. Retool, from a technical standpoint, is very different. Again, you have those infinitely many branches and paths of what you can build, and you're creating a client-side tool that can support that branch factor in a secure, scalable way.

And again, all of that lives client side. At Lyft, we were all server side and able to take advantage of some pretty compelling computing power. That's a massive difference on the engineering side as well.

Brett Berson: So one of the things you just shared was this idea of a consumer product that has a clear happy path versus an enterprise product like Retool that doesn't have those same properties. I'm really interested: what does that ultimately mean for engineering and engineering management?

Snir Kodesh: Yeah, it's a really great question. For us at Retool, again, what's compelling is that if you look at the customer segments, customers come in all sorts of shapes and sizes, and we want to build not just for the customers we have today or the ones pushing our frontier (those are obviously so critical and so valuable), but also for where we see Retool going.

And that is a massive difference. On the happy-path side, in general, if you have a very structured happy path and you believe your entire customer segment is going to walk through that path, you don't really need to think about the counterfactual, the things you're not investing in today.

And so for us, a lot of the management side is trying to anticipate all the failure modes. In particular, one of the things we're thinking a lot about today is that frontier. We talk internally and externally about Retool as a low-floor, high-ceiling product, and I think that's absolutely true, but as you push the frontier, as you push the ceiling, things get harder and harder.

You have to go and explore escape hatches, and we have to be thoughtful about how we build those escape hatches for you. Realistically, we're not going to be able to anticipate every conceivable permutation or path, but we want to have an outlet for you to be able to achieve those goals.

So a very concrete example: let's say you're a customer and you're looking to build a heart rate monitor component, and we don't have a heart rate monitor component in our default library today because we didn't think to build one. We still want you to have a pathway to be successful in Retool.

And the way we do that is by exposing external APIs that allow you to build those custom components. Taking that example and extrapolating, it's really about how we try to imagine all the conceivable places where an escape hatch might be needed, and then give our customer, our prospective customer, the tools to operate in that escape-hatch world. And in doing so, we actually get a lot of really great data, right?

To use that heart rate monitor example: maybe we find multiple customers are building custom components for heart rate monitors, and so we can pull that in and provide first-class support for it. Or maybe that gives a lot of creative thought to what we could do even more broadly.

Maybe we want to build a component marketplace, using our APIs, that can be used cross-customer. So again, I would say the biggest thing on the engineering side is anticipating the places where the platform will not serve customers today, and building escape hatches in those moments,

if not explicitly pushing the frontier out ourselves.
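The escape-hatch pattern described here can be sketched generically. The snippet below is an illustrative sketch only, not Retool's actual API; every name in it is hypothetical. It shows a component registry that falls back to customer-registered custom components (like the heart rate monitor example) when no first-class component exists, and tracks which custom components are in use so popular ones could later be promoted to first-class support.

```typescript
// Hypothetical sketch of the "escape hatch" pattern described above.
// None of these names come from Retool's real API; they only illustrate the idea.

type Props = Record<string, unknown>;

interface Component {
  name: string;
  render: (props: Props) => string;
}

class ComponentRegistry {
  private builtIns = new Map<string, Component>();
  private custom = new Map<string, Component>();

  registerBuiltIn(c: Component): void {
    this.builtIns.set(c.name, c);
  }

  // The escape hatch: customers supply components the platform never anticipated.
  registerCustom(c: Component): void {
    this.custom.set(c.name, c);
  }

  // First-class components win; otherwise fall back to the customer's own.
  resolve(name: string): Component {
    const c = this.builtIns.get(name) ?? this.custom.get(name);
    if (!c) throw new Error(`No component named "${name}"`);
    return c;
  }

  // Signal for the platform team: which custom components exist in the wild?
  customComponentNames(): string[] {
    return [...this.custom.keys()];
  }
}

const registry = new ComponentRegistry();
registry.registerBuiltIn({ name: "table", render: () => "<table></table>" });

// No built-in heart rate monitor exists, so a customer registers their own:
registry.registerCustom({
  name: "heart-rate-monitor",
  render: (props) => `<div class="hrm">${props.bpm} bpm</div>`,
});

console.log(registry.resolve("heart-rate-monitor").render({ bpm: 72 }));
```

If several customers end up registering the same kind of custom component, something like `customComponentNames()` is the signal that could justify promoting it into the default library, which is the feedback loop described above.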

Brett Berson: So, so what does that mean in terms of the way you organize the team or you prioritize work?

Snir Kodesh: Yeah. Well, from a prioritization standpoint, we believe that we are building a universal platform that will serve everyone we have today and everyone in the future. To be very concrete about how we organize: we don't have a flagship team, or a team on the engineering side that is dedicated to one or two specific customers.

And the reason for that is that we believe there's a downside there: the tyranny of the minority, for example. You'll have one or two customers that really prevent you from achieving a global optimum. And so the organization, maybe counterintuitively, is not oriented around specific customers,

in terms of an n-of-one or n-of-two customer segment. That being said, we do think our customers come in very distinct shapes and sizes, where our enterprise customers face very distinct and different challenges than our self-serve customer base. Specifically, as you become more and more entrenched and more successful with Retool, you start to integrate it into your software development life cycle.

That's not something you necessarily do on day zero or day one of using the platform. So specifically: how do we source control Retool apps within a larger company, within a more successful customer base? That's a really important challenge for a very specific segment of our customer base.

That is something we will absolutely orient around, but we believe that it's a cluster of customers and a general use case that is generalizable. And so we'll orient our engineering team in that direction, but again, maybe unintuitively or surprisingly, it's not a dedicated team, a professional services team.

We believe that has some negative cultural effects and is a detractor from achieving our broader goals.

Brett Berson: Can you talk in a little bit more detail about how you balance enterprise versus SMB customer needs?

I think one of the interesting things, when you're building much more of a platform product versus a piece of vertical SaaS, or a piece of SaaS software that's sold to salespeople or marketers or finance folks, is that product roadmapping at times is more complicated.

And I think figuring out how you get someone into the product is significantly harder because the jobs to be done are so much more broad. Particularly if you look at one of your startup customers that's 10 people versus a hundred-thousand-person company that's using Retool, it just seems incredibly hard to figure out how to build product in that context.

Snir Kodesh: Yeah, so I would say first, you're a hundred percent right. Using that software development life cycle example, or what Retool at scale looks like from an operational standpoint, there's this really interesting inflection point that happens.

That 10-person team versus that hundred-thousand-person company: the latter probably has a central front-end platform or internal-tools platform team that they're using. And so we would work really closely with them, which is just distinctly a different challenge and a different customer than that edge team, or that engineer who's looking to operate more efficiently at the internal-tool level.

There is just one point I'll make, which is that there is a universal truth: the core product experience, in other words building internal tools and these internal-tool front ends incredibly quickly, is universal. And any lift we make to simplify onboarding into Retool, or to extend the frontier and allow you to build more intricate, more performant apps,

and I mean performant in the sense that they perform faster, they load faster, that is the tide that raises all boats. And so that is, I would say, the core of our roadmap at any given point in time. And then the place where these segments differentiate is really, again, how they operationalize Retool.

And so that does become a meaningful part of our roadmap, in a way that a consumer company doesn't necessarily have to think about that branch factor. We do definitionally have to allocate a meaningful part of our engineering team, of our head space, of our product roadmap, toward understanding what Retool looks like at scale, for that hundred-thousand-user base.

But I would say it's almost like a one-way door, in the sense that anything that is good for that small 10-person startup is going to be good for that hundred-thousand-person company, but the inverse is not necessarily true. And so it's just additive:

additive cognitive load, additive thoughtfulness, and a very different operating model at scale that we have to plan for and budget for. And that's also what makes it exciting, because it falls, for me, into one of those good-problems-to-have buckets, right? You could take a hundred-thousand-person company and have Retool deployed to one or two teams.

And that would be fine, but that's not nearly as exciting as seeing it deployed wall to wall. And when it is deployed site-wide, a whole host of different problems come to bear. So again, it's an exciting problem to have, because it means you've permeated throughout the org; you've become the cultural norm of how tools are built.

And, you know, as the saying goes, the problems don't go away, they just evolve. Well, now your problems have evolved: how to manage it at scale, how to deploy it, how to version it, how to collaborate and get 10-plus builders within an app concurrently.

A lot of really compelling problems come to pass there. But that, again, is just exciting; that pushes our frontier.

Brett Berson: Sort of on this thread, how do product and engineering fit together in the context of a product like Retool that's highly technical? Maybe less so comparing and contrasting consumer versus enterprise: is there a unique way the two work together at Retool that might be different than if you were building a generic SaaS app, or something sold to a less technical audience?

Snir Kodesh: Yeah, I would say it's just definitionally more collaborative, right? When we're talking about building usage analytics, where the product is effectively a data warehouse, or about source control, where the product is how you build your branching abstraction in a way that's consistent with the status quo of writing code natively, more traditionally, these are both technical challenges and really compelling product challenges. You need to understand the state of the art from a prior life, in terms of doing it natively, in order to really deliver the value.

And so what I see is a deeply involved engineering function. This was both a selection criterion for me in coming to Retool, and it's a value that I really believe in: engineering should be deeply involved in, and thoughtful about, what the product experience is like. Product and product managers absolutely should, and can, take the lead on that. They can be the canary in the coal mine to some degree, the first line of contact coalescing a lot of the input. But to have a strong interface, where the product role sets the definition exclusively and engineering shows up and goes and builds, I think is actually a detractor for us and something that would create a worse product overall.

And so there's a lot of product consideration. A really good example of something being actively worked on right now is extending our debug tools. If you think about it, that is such a technical product to build, both in terms of the challenges we're trying to solve and bringing that into the closed system that is Retool, but also in terms of the meta question: what are we really trying to solve here?

You kind of have to understand it from first principles, and it's a developer's problem, an engineer's problem. Debug tools, and how to debug software effectively, is an engineer's problem. So I see a really collaborative environment, and I see that extend all the way out to customer engagement, where we'll have engineers in QBRs with customers, sitting in and discussing the challenges, walking through our thought process, explaining what's coming up, explaining parts of our roadmap.

So I would say product definitely takes the lead, but it's certainly more collaborative, and a sort of weaker, looser interface than a strong one where someone would be "reaching over the net" if they got involved in somebody else's work stream. That's certainly not the case.

Brett Berson: So can you go a few levels deeper and maybe tell the story or explain how products are built? What does the planning process look like? How might product and engineering work together on a given product? What kind of cycle are you on? Who's involved?

Snir Kodesh: So we plan, I think abstractly, to the half, but then we obviously recalibrate and bring it into the quarter. So I think we're pretty traditional on that front; it's the same sort of constructs you might expect in terms of plans. We recently, and this is sort of interesting from a norms perspective, shifted away from OKRs into more of a QBR format.

And what I mean by that is: our product and engineering teams are more subjectively thinking about their space and their progress, as opposed to trying to come up with an objective metric that, frankly, isn't all that objective. But the inputs are maybe what you'd expect.

We look a lot at customer sentiment, at customer signal. We lean heavily on our go-to-market teams, our success and sales teams, to help inform and provide inputs into what we're doing. And we blend that with a bottom-up process from the team. The team is actively building in the product, which I think is always a critical part of any company, right?

Enterprise, consumer, it doesn't really matter: that use-your-own-product culture, that dogfooding culture, I think is really important. And so that generates a set of ideas, and we blend it with a little bit of top-down direction in terms of the must-haves.

And that's a little bit of a regret-minimization framework, right? If we ended the year today, what would we be most regretful of? These are pretty clear opportunities that really leap out to us. Some of them are just true north: stability and performance, like we talked about.

But some of them are a lot more oriented toward the product itself. Today, I would say we could be doing a better job at building really complex apps, like I mentioned. There are escape hatches, and there are ways of getting it done today, but I think we aspire to do better, and we believe it's a strategic must.

And so investing in that is maybe not something that would come from the teams; in this case it did, but maybe it wouldn't, and so we would push it. Or, talking about source control: the branch logic and the collaboration on branches is something we've decided, as a broader team, is just really critical.

It's something, again, that we hear a lot from customers about, but it also aligns with our intuition. And this actually takes me to an interesting place as I'm talking this out with you, which is: I've been in orgs where data is everything.

Data is the religion, right? You really look at data in order to explain or express where you should be making your bets, putting your chips, where things are right or where things are wrong. On the one hand, I think that's wonderful; being data-driven is really important. But I think data presents a very nefarious vulnerability, which is:

if you don't know what to measure, you don't think to instrument, and you don't necessarily know where to look, it could actually lead you astray, because your biggest opportunity might be in a place you just didn't think to instrument. And so what's really interesting about Retool specifically, but I think enterprise SaaS more broadly, is that you don't have the same dynamics; that bug-and-feature nature of data doesn't play out the same way.

You don't have the ability to sit and look at every nook and cranny. Statistical significance is very difficult, because you're not necessarily talking about millions of customers on a daily basis, especially as you're getting off the ground; maybe at scale. And so you have to use a lot of that customer sentiment, a lot of that intuition, a lot of that belief in where you're going, and a little bit more of that subjective signal about what is missing.

But that really is the product planning process, in a very roundabout way. I don't know if I quite answered your question, but it's a mix of inbound signal coming from customers, a mix of bottom-up thinking, and a mix of top-down direction.

Brett Berson: And then how does that get brought together, and what is the planning and execution cadence?

Who's involved in creating the roadmap, and what's the cadence of sprints? Is there an annual planning and theme component? Is it quarterly? Does it tend to be a little bit more bottoms-up, and so on?

Snir Kodesh: Yeah, there is an annual theme. I would say the annual themes are really intended to be broad, in the sense that there's a lot of room for interpretation, but they certainly remove certain options from consideration. So, for example, when we talk about stability, it's like, well, that's somewhat obvious.

But stability is one of those things that is really subjective in terms of whether you've reached your goal or not. And so having it stated is actually important, because it signals organizationally that indeed we have not, at least not to our liking. And we set a really high bar there.

In terms of the team planning process, again, that is mostly quarterly. In terms of who owns it, it is certainly the product function, but in very close collaboration with engineering and design. So I would say the three-headed beast of the PM, the EM, and the design lead come together to really put that plan together. As you'd expect, one of the hardest things is being in a resource-constrained environment, in the sense that our eyes are bigger than our stomachs at times.

We really see so much potential of where to put our chips, but sometimes we have to wake up and come back to reality in terms of what we're actually able to deliver in any given quarter. And that's a continuous learning process, right? Because the teams themselves are evolving.

Let's say every team is doubling every year: you're continuously recalibrating what your capacity is and what you can actually take on, and that's certainly a big learning muscle right now. But yeah, quarterly at the team level. And then part of the reason we moved to the QBR, which I think is interesting in the context of that quarterly planning, is we realized there are many projects that just cannot be constrained to a quarter.

I think this is one of those explicit downsides of quarterly OKRs: you bias toward items that you can fit, deliver, and demonstrate impact on within a quarter, and that's not always the strategic thing to do. So we want to have the space to take on six-month, long-lived efforts.

Obviously we push for milestones; we want to see incremental validation. But we want to be able to take on not just six months; we've had some work streams that have been going on for a year, which personally sometimes makes me a little bit nervous and a little bit paranoid, because it's just not how I'm wired. But I think it's actually been really good for the organization.

It's the only way to get some of these bigger projects off the ground. And this isn't said enough: I think back on some of the bigger efforts I've been a part of, and none of them has been less than nine months of work. Sometimes it just takes time, right?

It takes time to do these things really well, to be really thoughtful, and to engage with customers along the way. There are things that can be done to de-risk the progress and make sure you're not just sitting on an egg that will never hatch, but we want the space for our teams to be able to take on that long-term thinking as well.

Brett Berson: Is there anything you do, from a ritual or process perspective, for these specific long-build projects that increases the probability they don't turn into multi-year things that never ship, or that you don't ship something after 14 months that's not the right thing?

Snir Kodesh: Yeah, I don't know if it's quite on the ritualistic front, but we definitely embrace more qualitative surveying, and we really do try to get out to a customer segment early. I can share some of what we're building: it really is an expansion of the internal-tool space, moving into areas that are not strictly on the front end.

As we're building these products, which have been in development for over a year, it's been a ton of hard work. There have been a couple of rotations through that time. No hard pivots, which is exciting to say, but certainly some solid 45-degree rotations in terms of what we're trying to do.

And most of that is predicated on getting it into the hands of eager customers earlier, and really taking that prototype, that MVP, that design system to engineers, even though they are really complex builds. It's about finding the customer that is going to be okay with some roughness, in spite of our overall brand, and in spite of the fact that we are Retool and people have come to expect a certain level of quality from us. Somebody who's willing to embrace it and say, "Yeah, I'm going to go play with your mobile product, and I know it's going to be potentially janky at times, or the build's going to break on occasion, because you guys are moving fast on it. But I'm excited to trial it."

So we use a handful of customers, anywhere between 50 and a hundred, over the course of that year, and we refresh the pool. That's the other factor there, because if you don't refresh the pool, you can get a little bit into the echo chamber and confirmation bias: the people that have stuck with you and seen the progress over time get more bullish, but that's not necessarily how somebody new will think of it.

So those are the two things I think we've done: we've refreshed the customer segment, and we've gotten customers in very early, as early as three months into the initial development process, even though these have been year-plus builds.

Brett Berson: So you were talking about this idea of QBRs versus OKRs. Can you explain that with a little bit more precision, so I understand what you mean by that?

Snir Kodesh: Yeah, absolutely. And I would say, I think that none of these are intended to be prescriptions. I'll explain the definition, and then I'll come back to what I mean by that. The QBR, the quarterly business review, is more of a subjective and directional assessment.

Today, we have our teams just generally assess their progress. So if you have a very self-critical team, the scores skew low. I consider myself a self-critical person; if it were me, I'm always going to come in scoring low, hell or high water, even on work that somebody else thinks is objectively going great.

I see the opportunity in it, and that is an explicit downside. But the QBR is really about: what could have gone better? What went well? What are your plans on a going-forward basis? How do you mitigate the things that didn't go well and continue doing the things that did go well? And then a little bit of a pre-mortem.

If we forecast a quarter out, three months, or a half out, six months, what are the things you believe are going to be in the "what didn't go well" column? The intended exercise there, obviously, is to force people to think about it, and by extension, hopefully plan and mitigate the risk.

The OKR process is generally more well understood: a high-level objective, and then the key results. And part of the QBR, intrinsically, is that the objective still needs to be named, right? If anybody's going to talk about what they're going to build, it still has to be thematically grouped and really be the local priority for that team, if not the global priority for the company. And so the good thing about the QBR is that it allows a little bit more open-ended thinking, where you don't just have to anchor on KRs that may or may not be achievable on a quarterly cadence.

Um, I think the bad part is that, directionally, if a team is going to be running the subjective process for, let's say, six months to a year, I would be concerned. I think sprinkling it in is a good thing to do, and historically we have had sort of the OKR process. Um, but the sprinkling in is a good thing because it allows teams to think bigger picture.

It allows teams to take a more subjective, but in a good way, audit of where they stand, instead of just being scored on metrics that may or may not be the right metrics. Um, now, if you're using the QBR process because you aren't measuring, or there's no observability, or you don't have a sense of what success looks like from a quantitative standpoint,

I think that's the downside. And I think that's where, over the long run, you can't just have this QBR process, and why for us it will always be a little bit of a portfolio, or a mix, or a blending of the two approaches.

Brett Berson: And who's involved in that specific meeting? You mentioned some of the pieces, but can you walk through how the meeting is structured?

Snir Kodesh: Yeah, absolutely. So, who's involved: it's sort of the core team, uh, and I don't mean the entire team. I think, as most folks have discovered, we've found that smaller meetings are actually more effective and allow for more directness and more transparency, which I just think obviously leads to better outcomes.

Um, and that's really hard, right? At the end of the day, there is sort of an altitude difference. Uh, let me walk you through who is in the room and it'll make a little bit more sense. So: our co-founders, David and Anthony, myself, ACA, our head of product, and Ryan, our head of design, so design, engineering, and product.

Uh, and then from the core team itself: the product manager, the engineering manager, and then the design lead or design manager, as appropriate. Uh, and occasionally we will also include, actually, not occasionally, we will always include the TL, in terms of helping to complement the skill set in the room.

Uh, my point about altitude is just that if you look at the core team and the leadership team, we operate at different levels. And so we want to be respectful, and sometimes we don't know, and we want to be vulnerable as well. Um, you know, I think vulnerability is easy for us to do in a large group, but that sort of directness or that critique isn't always the simple thing to do in a larger group.

Uh, I think that's probably a muscle that I could personally get better at. Um, and so the smaller group really helps in that context. In terms of how the meeting is structured: tactically, it's sort of the open read. We find that a silent read-through is really effective, actively leaving comments in the doc so that there's a written record and there's less room for interpretation. Um, and then we really just get into it, and we try and understand where the team is coming from.

I think the goal of that particular meeting, frankly, is to walk away with this sentiment of: they've got it. The team's got it, and we've got it. And, thinking back on some specific cases, that doesn't necessarily mean that it's all sunshine and rainbows, right?

We ask the teams to score themselves on a scale of one to five, one being quite poor and five being sort of exceptional. Um, you know, I'm glad to say no team has scored themselves a five. I think that, again, is sort of the pursuit of perfection, and I don't necessarily think perfect is the goal. But we've walked out of meetings with a team that has scored, let's say, 1.5 or two, and felt confidence about the next three months.

And for us, it really is about establishing that record. And, you know, if we see two of those in a row, that'll be cause for concern; we have yet to see that. Um, the score is somewhat important to us; the overall plan and assessment is really important to us. I think, again, because we've got the team leads and then the company leads for those equivalent functions.

There can be some cases where there's sort of misalignment: you know, we will believe there's a certain root cause for over- or under-performance, and the team will have a different one. And so getting to alignment and reconciling that is certainly really helpful and useful.

Um, you know, one thing that came up recently in one of our QBRs is how engineering onboards onto Retool, because it really is a very complex system and requires a lot of sort of global state. Um, and that was a great flag that extends beyond any particular team, a sort of company-wide or engineering-wide need, and a really great insight to come out of the team.

Brett Berson: And then, after this sort of pre-read, in the conversation that happens, one of the things is you sort of have a self-assessment. Is most of that time spent discussing what's in the doc, the progress, and where people are going, in organic discussion? Or is there a series of more structured prompts or themes that tend to emerge across all the QBRs you've been involved in?

Snir Kodesh: Yeah. Um, we try not to. So we actually try to use the asynchronous time, and while we're leaving questions, the team is sort of frenetically responding to our questions, which I think is a very efficient use of time. Right? I think one of the funny things about a pre-read is that you've got basically half the room staring at a screen that they've already authored, which is pretty inefficient.

And so I actually like our slight deviation against that, which is that the quiet period of reading is actually an interactive time, because we're engaging: you know, myself, the head of design, the head of product, and our co-founders are engaging with the doc and leaving comments.

Uh, but the team is actively responding and reacting. And so if there's a little bit of a back and forth in the threads, that will certainly get promoted into a broader discussion. Uh, but for the most part, we actually spend a lot of the time either asking general, inquisitive questions about the overall strategy and what motivated it, or identifying and suggesting, working in the try, do, consider framework: asking the teams to consider or try things that maybe they didn't previously think of.

Um, and so this came up recently: we were in a QBR for one of our new product lines, and the question around integration came up, right? How do you integrate, or do you want to integrate, a new product line with an existing one? And there's a lot of reasons to go in either direction, but it was a really interesting conversation, because there was a point of disagreement, right?

And sort of one side really wanted to push for some integration, to demonstrate that we are being incredibly thoughtful about how these connective pieces fit together. Uh, and the other side was like, we absolutely see the value of integration, but let's do it in a first-class way.

Let's make sure we do an exceptional job with it, and for the time being, there's standalone value. Um, I like to make a metaphor, a shout-out to the AWS service suite, which is me being sort of very bullish on AWS as well. You look at the AWS service suite and you've got a lot of standalone products, and some really great interaction effects.

And I think we think of the Retool ecosystem in very much the same way. And the question is: do you need to integrate day zero, or can you do that later, in an exceptional way?

Brett Berson: So maybe on a related note, what have you figured out as it relates to estimating?

Snir Kodesh: Mm-hmm.

Brett Berson: And I think it's kind of an interesting topic: you have a product or set of products, you have teams, and you oftentimes have a lot of dependencies in terms of what you're building. And when things aren't going according to the timeline you established, you have an interesting question of: is it an estimation problem or is it an execution problem? But I'm just curious.

Are there things that you've figured out, or ways in which you've evolved your thinking, as it relates to helping individual engineers or product people or engineering teams create more predictability, or increase the probability that whatever their estimate is turns out to be accurate?

Snir Kodesh: Yeah, it's a really good question. And one of my funny stories about estimation: I remember, earlier in my career, I was struggling with this at sort of an individual level, and my mentor at the time was like, well, just take your estimate and multiply it by two. And I always found that really, really funny, because that's not an accurate estimate.

That's just buffer, right? All that is is sort of admitting that there is no good estimation to be done, and so I'm gonna cheat and just give myself buffer. You know, I'm gonna run my mile the first time at a 10-minute pace, and then I'll run an eight-minute mile and feel really good about it.

Um, and so I always found that two-X suggestion kind of funny. One of the things that we did recently, because, yes, just to be direct: yes, we struggle, as I think most engineers or most engineering disciplines do. Um, I think again about the bigger projects that I ran at Lyft.

And I think the initial estimates were like six months, and it took like a year and a half. Um, and we definitely see the same thing at Retool. One of the exercises that I like to run is almost like a bisect for estimation. Okay, so we're not estimating well at the month cadence or at the quarter cadence. Can we estimate well at the week cadence? Can we estimate well at the day cadence? I think often you'll sort of see where the mis-estimation arises.

Um, and certainly there's no silver bullet. Things that I've seen are absolutely the sort of execution risk and the unknown unknowns cropping up, especially as you take on large work. You see a little bit less of the unknown unknowns when you're going zero to one. So these new product lines, for example, have seen less unknown unknowns in terms of the technical risk of working with an existing ecosystem. But they certainly have unknown unknowns, or, you know, we miss on identifying a risk vector that rears its ugly head later in the process and really blows things up.

So that's certainly one risk vector. Uh, I think the other thing is scope creep, which gets a little bit of a bad rap. There's certainly the scope creep of: we scope a project to be X, and now we're gonna make it X plus Y plus Z, because along the way we've just decided X isn't good enough.

Um, and that's one form of it, but there's another scope creep that I think is actually more admissible. We have this really great construct from one of our teams; it's called a bug bash. A product that is almost ready to ship will be presented to the team, and the team will just hammer it.

We'll hammer it for two-plus hours. And this is additive to product reviews and all the other operational processes through which we'll try and establish a quality bar. But the cool thing about a bug bash is that, beyond smoke testing and integration testing and unit testing, the owner of that project will often come back with new insights from people, right?

They've been so focused on working on the product that things feel intuitive to them when they aren't. Or they've come to accept clunky or, you know, sharp edges of the product as it's currently being built. Um, or maybe they're just using the same fixed test suite to interact with it.

And somebody brings something new. So we saw this with debug tools, where a different app generated a whole host of different errors, some of which were not managed all that well in the product. Um, so the bug bash is really great, I think, because it actually ensures a certain rigorous quality bar. But it is one of those things

that's like, okay, well, it's a non-deterministic meeting. I'm gonna come out of it with either, you know, zero minutes of incremental work or like two months of incremental work. Um, and that is another real challenge. Uh, I think for us organizationally, we sort of treat these different estimation root causes with different fervor.

Right? So I think mis-estimations that come as a result of the bug bash, in my mind, are more than admissible, right? That's actually just us holding a really high quality bar and really deciding what done means. Um, for mis-estimations that come out of unknown unknowns, we've certainly gotten better there, in terms of moving more thought earlier in the process, which is sort of the intuitive thing to do.

Um, I've certainly seen projects in my time that get kicked off, and I don't just mean at Retool, I mean globally, without an RFC, without a spec. Or if there is a spec, it's sort of a spec that checks the box but didn't really involve a lot of upfront thought. Um, and I think it's hard, because as an engineer, especially one who really cares about the business and the product, you wanna get going, you wanna write the code.

I still remember, for me personally, if I was going through a two-week spec process, I felt really unproductive. And it's a little bit of a reframing, or reprogramming, because the benefit of those two weeks can actually be many months on the back end. Uh, and so that's the other thing that we've done: move a lot more thinking and a lot more thought upfront into the process.

Brett Berson: Kind of continuing to build on this thread: when you look at a given engineering team, so let's say an engineering manager and his or her direct reports, how do you define excellent performance?

Snir Kodesh: Yeah. Uh, for me, it comes down to impact, and then, with a little more nuance there, sort of impact done the right way. Right? That's more on the cultural side: are people highly engaged in driving toward that impact, engagement being a function of how motivated they are?

Do they understand the work? Is there a high degree of autonomy and respect? Um, and I think it's really all of those factors coming together: a team that has large purview, a lot of autonomy, understands the motivation for the work, takes the customer need and the customer pain very seriously, and then drives toward impact. That is sort of it. Uh, I don't believe you can measure high-performing or highly productive teams with something like a lines-of-code metric; most of the purely measurable things that are completely objective can also be gamed.

Um, and so they aren't perfect indicators, and it really comes down to impact. And so, maybe connecting the dots here, if I'm reading the question the right way in terms of, well, what do you do when estimations are amiss and things are delayed: I think there's just a subjective evaluation perspective that does have to come into play.

Right? Did the project take a new shape partway through? If you just naively evaluate a project on its day-one estimate, well, if the entire guts of what it meant to ship that product had changed, it's not really fair to hold it to that. And the question is, how much could we have anticipated?

Um, so, you know, again, connecting those two dots: some of the better-performing teams that I've been on, maybe counterintuitively, didn't always hit their deadlines, right? To me, it's less about the deadline and more about the artifact that comes out the other end. Um, and, again, did we have to roll back that code?

Um, you know, when it hits, does it really hit, and does it stand some reasonable test of time? Do customers love it? Does it have the impact? Does it get the usage? And if not, why? Where along the way did we get it wrong? Did we misunderstand the signal? Right, there's lots of examples of projects that I've seen where we got the letter of the law

right, but not the spirit. And these are asks specifically from customers, where they'll say something like, well, we really want, I'll use an example, a local development environment. It's like, okay, great: we go and build that to the best of our understanding, but we don't ask the sort of five whys, or the five "what stops you today"s.

And you quickly realize it's about, you know, configuration sprawl and credential sharing and a whole bunch of other needs that don't necessarily get met when the product and project first lands. Um, and so for me, it really is the impact; not just the impact alone, but the impact combined with how the work got done.

And that, again, is more of those engineering cultural values that are just really important to me. Not just the traditional ones, operational excellence, quality, stability, but also the autonomy, the thoughtfulness, the business-mindedness, and how the team responded to the market.

Brett Berson: Do you think, as you've grown in your career, you've embraced intuition and subjectivity more than maybe at the beginning? And I'm asking because I think this idea of the role of intuition and judgment, subjective versus objective, is an interesting one, and it touches some of the different parts of what you've been talking about, all the way back to the comment around some of the issues of being too data-driven.

And I'm curious if you sort of think about that, or are there ways in which you behave as a team that maybe avoid some of the problems when you lean more on intuition, or more on things that are subjective? I'm curious if anything pops to mind for you.

Snir Kodesh: Yeah. I mean, I will say one quick thought, which is, I do think there are plenty of examples where we set the constraints ahead of time and they are data-driven. Right? So I'm thinking about one of our new product lines, and the plan at first was a very large sort of breadth play: many, many customers.

And we refocused that. We said, actually, we think that fewer customers, but deeper integrations and more penetration into a given org, is more important for this particular product. Um, and we really set those in terms of raw numbers. Now, again, as we set the raw numbers, in my head I was thinking, you know, it doesn't have to literally be these numbers.

It just has to be directionally in this vein. Um, and so, again, it's sort of a non-binary assessment of achievement, right? Where, let's say, just throwing out some numbers (this wasn't actually the case for this product), we would've said, hey, we'd like to see a hundred customers with upwards of 50 end users per app.

Well, if it would've been 80 customers, or if it would've been 30 end users, it would've been directional. What we didn't wanna see is thousands of customers with one to two end users; that would not have achieved success or impact in our mind. So there are cases where we can be, and we have been, and we are, a lot more prescriptive.

I think my point, which, you're right, is pretty consistent, is that you have to marry the two. If you over-index for data and have no subjectivity and no intuition, and you just focus on data, frankly, that's comforting. And going back to your original question, I think the answer is yes: over my career, I have come to embrace intuition more.

Um, and the reason for that is because I think, in the early innings, data is inarguable, right? And again, I don't believe in this in the slightest, but if you gauge engineering productivity on lines of code, well, that's awfully convenient, right? Because you have some sort of north-star metric that you just can't argue with.

And it doesn't matter whether somebody edited a README 50,000 times. 50,000 lines of code? Great, you're successful, right? And again, this is where data and subjectivity collide, because obviously I don't think that's the right north-star metric, and I don't think editing a README 50,000 times would be anybody's definition of success. But in the early innings, it's comforting to have that.

And you feel like you're wearing some amount of armor, because you have this data. Whereas when you just rely on intuition, it's a little bit more vulnerable; you're a little more exposed. Um, you have to have really good, strong opinions, at least loosely held, and you have to have reasoning that is explainable and justifiable for why you think that way.

Um, so that's on the point of me personally and the sort of evolution over time. Uh, I do think you have to marry that with being your own harshest critic. I think this is where I am sort of short-term critical, and maybe even short-term pessimistic, but long-term very optimistic.

Uh, and I think that's an important balance to have, because in the absence of data, there's a different failure mode, which is: everything just looks great, and you just constantly redefine what success looks like, and you never achieve greatness, right? You sort of constantly justify flaws. You know, this is a story from way back when for me, but on one of the teams that I was on, we were doing a standup, and we had this culture of sort of clapping when something would ship.

Uh, and, you know, I can't remember how we got here, but someone shipped a bug fix for a bug they introduced, and we clapped. And my friend was like, are we seriously going to clap for an error that we ourselves introduced? That was again many, many years ago, but it was an eye-opening moment for me, because we do have to be a little bit more self-critical than that.

We have to push ourselves to higher heights than just saying, oh, code went out, let's applaud for it, even if really it was a lot of self-harm along the way. Um, and so I use that example to say: there has to be critique. There has to be a bias for wanting to do better along the way in order for that subjective environment to work.

Uh, and then in terms of marrying different forms of subjectivity and building alignment (because you have a very good point that my impression of the world isn't necessarily the same as anybody else's), that is where the directional data helps. Right? And that is where, if we're working on something for performance, we are going to set a millisecond constraint on it.

Um, and sort of load time will be defined on the order of milliseconds. A product launch will be defined by customer engagement and by, you know, daily active users. And so we're not walking away and just saying, hey, this is sort of an art form; we'll just know it when we see it.

Um, we try to set those directional fence posts, but there's just more nuance to it, is I guess what I'm trying to say.

Brett Berson: Something you mentioned a few minutes ago that I just wanted you to explain was this try, do, consider framework that I think you all use, maybe in the context of a QBR. What's sort of the explainer there?

Snir Kodesh: Yeah. Uh, so try, do, consider is really nice. It's just a very good structured way for a team to understand where we, as sort of a leadership team, are coming from. Um, you know, do is obviously the strongest version of that, which is like: we'll discuss it and debate it, and that's fine, but we sort of have, I would argue, a strong opinion strongly held.

Um, and so if we have a perspective, then we will just ask the team to do something. Um, and obviously they can say no, and they can explain why, and we can engage in that. Um, but more often than not, I think the dos prevail. Consider is way more open-ended, and is actually more of an ask and a question, right?

It sometimes comes in the form of "did you consider" or "how are you weighing", right? It's more exploratory, really trying to understand how the team is thinking about a certain theme or topic or product. Um, and try is an ask, right? We're not at the do point.

It, it maybe is a, uh, loosely held opinion. Uh, and, and probably not the strongest opinion either, um, where, uh, we, we like them to try it on for size and see if it works. Uh, but again, it's because we are vulnerably unsure, um, of, of our perspective. Um, and so that, that, that sort of tried to consider a good example was actually, uh, the one that I, that I just used with respect to the, the usage numbers, right, where the team, the way this came about is there was a, a sort of table in one of these docs showing a ton of customers, but sort of one or two end users.

Uh, and we basically said, hey, this isn't it, right? The do is: change the acceptance criteria, and change the exit criteria to be: if it means fewer customers, that's fine, but what we really wanna see is depth, not breadth.

Brett Berson: I'm curious, I assume you get lots of heads of engineering or VPs of engineering reaching out as they're growing their teams, and you share notes or ask questions, those types of things. And so I'd be interested: when you think about folks that are going from a small team, you know, 10 or 15 engineers, and they're gonna be growing into a 30-, 50-, a-hundred-person engineering team, what are the types of problems that generally emerge? And what are the types of advice that you tend to give on a recurring basis for those types of problems?

Snir Kodesh: Yeah. Uh, actually, I just got off a call last week with someone that fits the bill, and the thing that comes up the most, the most dominant thing, and frankly the thing that I also struggled a lot with personally, is sort of delegation. Uh, and I think it's particularly difficult in the engineering discipline, because for so long you're either directly or indirectly quantified or qualified by, you know, the code that you ship. It's very hands-on, and it's collaborative, you work with other people, but at the end of the day it's also an individualistic activity.

You sort of do it, and it needs to get done. And I think when companies scale really quickly, just in the natural process of scaling, there's less documentation, and there's a lot of domain knowledge that is held by individuals. Um, for leaders in particular, and especially ones from the earlier days who were also presumably writing a decent amount of code, handing that off is really hard, not because of some sort of ego, but just because, more often than not, the fastest path to getting something done is them doing it.

And that's been true for the entire lifetime of the company up to that point in time. Um, and again, I experienced this and had to face my demons many years ago to sort of crest it, and at times I still struggle with it. Uh, but this notion of being really thoughtful about what you can delegate, and the other side of the coin, which fits into subject number two, hiring: finding the people that you actually feel conviction and confidence that you can delegate to, is far and away the most dominant thing that I share with folks.

You know, people are primed to want to do it themselves, because that's been the currency. They are also the most effective person to do it, and that creates just a lot of built-up bias toward doing it. But you can't scale that way, and you find that out sooner rather than later. Uh, so I think that's one. I think the second thing is the org structure, and finding leverage through the organization.

Uh, and so part of that is delegation, right? But part of that is also just hiring the right people. I think in those early days you're, very logically, hiring ICs, and hopefully you're doing a good job hiring ICs that actually wear many hats, are very close to the business, and have good communication primitives as well, whether written or verbal or both.

Um, but I think very quickly you want to think about the org structure and how it operates effectively. I think controlled chaos is very good, um, but you quickly get into uncontrolled chaos, where people are just running anywhere and everywhere, and you get a lot of extremely local priorities that are set that don't actually true up to any sort of positive global outcome. Right?

If everybody can set what they work on and there's no structure around it, you're not necessarily selecting for the best things for the business. Uh, and so, on that point, I really encourage people to find the unicorn. I think what a lot of founders will default to is a really strong IC who has, like, an interest in management, for example.

And that's a very dangerous failure mode. While that person should absolutely go through that transition and get into management, they're not necessarily the right person to take you from 10 to 50. Um, in fact, more often than not, they are not. And that can be because of cultural norms, or because they're learning how to performance-manage, or there's just a lot of learnings along the way.

They don't necessarily have a strong philosophical backbone for, for how they think about org structures. Um, so that's the second thing I'll always encourage. And, and I, I try to, um, advise, uh, founders to not try and do the, the two-for-one special, where, um, they're thinking they're getting a, a great org leader, um, out of a sort of 1%, uh, technical leader.

Uh, and, and sometimes they can, which is great, right? And there are those unicorns out there, and those are awesome. Um, but this is sort of the, the flaw of, you know, everybody thinking they're above average. Like, the odds that every company that thinks they have that actually has that is, is quite low.

Um, and so that, that's the other piece, which is: really think about the org scale, think about where you want to go, where you're trying to go, and, you know, how can you get somebody today that can, um, be sort of comfortable at that state? Um, less variability around, like, oh, well, they also have to perfectly grow with the company at every step along the way.

Brett Berson: On the first topic that you mentioned, on delegation, what, what advice do you give folks other than it's going to be an issue for you? Are there, or I guess, maybe put a different way, if you look at your own journey to become better at delegating,

Snir Kodesh: yeah.

Brett Berson: were there unlocks along the way or, um, what, what did you do to get from where you are today versus where you were before?

Snir Kodesh: My favorite thing is the counterfactual. Maybe this just works really well with engineers. Again, we, we sort of have this sort of, like, logic, branch-factor thingy. Um, I, I like to do the counterfactual. Uh, this is something actually that's worked with, with folks on my team today, which is: if you could go back in time and, at different junctures, right,

do something different, maybe it's hire somebody instead of spending your time coding, hire somebody who could have helped accelerate that over the long run, right? Um, would you have? And I, I think that's one thing: most people, when they look back, they're able to be objective, but looking forward, for some odd reason,

it's very hard for us, right? Where in the heat of battle, in the moment, you know, wartime, we just need to get shit done. Uh, and, and it's harder to be objective. But when you look back and you ask yourself, at those, those sort of critical inflection points, would you rather have done what you did, or rather done something different? Maybe that's hire a manager to take some managerial load off your plate.

Maybe that's hire an IC to take some of the execution load off your plate. Um, maybe that's hire someone to own the product side, right, to help complement and, and jam with you on, on sort of product strategy, and have more, uh, dedicated time for that, and more sort of institutional time. Those counterfactual questions, where I, where I, you know, sit with someone and, and we look back, like 90% of the time,

they're like, yep, absolutely. And so then the, the follow-up question is, like, okay, well, you felt that way then. Can't you see how, if you go down, like, can't you see how the current state will lead you to the exact same counterfactual consideration on present day, right? Like, forecast the future: looking back on today, aren't you gonna feel the same way?

And that changes behavior. Um, and so it, it depends, like, which pressure point we're talking about, whether it's, um, technical execution, or, or systems and architecture, uh, or hiring, or org design. Like, that is really sort of person by person, company by company, because the, the growth curves are, are different depending on the company.

But that exercise has always worked. I shouldn't say always, but, like, 90% of the time it works really well.

Brett Berson: Great place to end. Thanks so much for spending all this time with us. This was great.

Snir Kodesh: Thank you. Thank you for having me. This was a lot of fun.