Building a highly-technical enterprise product? Essential advice for product leaders — Nate Stewart of Cockroach Labs
Episode 58




Today’s episode is with Nate Stewart, CPO of Cockroach Labs, the creator of the database CockroachDB.


In today’s conversation, we cover his essential advice for building a highly-technical product. He sketches out how the Cockroach team decided on the specific use case for its database product. Nate explains the steps the team took to reach conviction on their go-forward plan — which meant saying no to a lot of customers who didn’t align with the product roadmap. Nate dives into the tactical ways to avoid taking on too many customer commitments, which he calls tech debt for product teams.


Next, Nate dives into his advice for approaching design partnerships, especially when handling more conservative enterprise clients. He explains the different types of design partners, and why you should have all of those represented in the early days of your startup.


Finally, we wrap up with his advice for other product leaders, including how to create a rock-solid partnership with a CEO as the first head of product, and how he solicits honest feedback across the executive team.


You can follow Nate on Twitter at @Nate_Stewart


You can email us questions directly at [email protected] or follow us on Twitter at twitter.com/firstround and twitter.com/brettberson

Brett Berson: Well, thanks so much for joining us.

Nate Stewart: Brett, thanks so much for having me. Really excited to be here.

Brett Berson: So I wanted to start by talking about some of the intricacies or differences when you think about building products in an open source or developer infrastructure context versus more traditional SaaS. You've had experience with a variety of different shapes of products, and most recently you've spent a tremendous amount of time building and scaling enterprise infrastructure with an open source component.

So to start a little more open-ended: maybe you could share some of the things you've figured out as you've been building and scaling products in this enterprise infrastructure space.

Nate Stewart: The good news is that a lot of the principles for building a product organization are the same across both of those. You're still putting in place the people, processes, and culture to take market inputs and ultimately create a roadmap that should create value over the long term. But when you start drilling into what it means to build a capability in an infrastructure company, the biggest difference that I've seen is that you're not just designing for human users. You're also taking machine users into account.

So as a specific example: early on, when I was at Cockroach Labs, this was around 2018, there was a big trend around unstructured data, and we were adding support for unstructured data so people could store it within Cockroach and use that to power machine learning experiences. We understood the requirements of the feature: how to write this data, how to read this data, how to edit the data. But when we actually released the v1, we didn't see the reception that we expected.
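For context, the unstructured data support described here corresponds to CockroachDB's JSONB column type. A minimal sketch of that kind of usage, with hypothetical table and column names:

```sql
-- A JSONB column stores unstructured documents alongside relational data.
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    payload JSONB
);

-- An inverted index makes containment queries on the JSON fast.
CREATE INVERTED INDEX events_payload_idx ON events (payload);

-- Pull a field out of every document matching a JSON predicate.
SELECT payload->>'user_id'
FROM events
WHERE payload @> '{"type": "click"}';
```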

And as we started to drill in, we saw something pretty interesting. In a traditional B2B SaaS company, your features might be used once or twice a day as part of some core workflow. In an infrastructure company, your feature may be used a hundred thousand times a second.

So you really need to start thinking about the performance profile of the capabilities that you're building, because that can be the difference between having product-market fit or not. You have to think about how your features interact with the underlying cloud infrastructure, because if they're not using resources efficiently, then at a hundred thousand uses per second things can get cost-prohibitive.

And that, again, can be the difference between having product-market fit and not. So having a broader view of how the features exist, both for humans and machines, is one of the big differences in working at the infrastructure layer.

Brett Berson: So on that point about thinking about the performance profile, or how it interacts with the underlying cloud infrastructure: how does that actually express itself when it gets into the nitty-gritty of building a product or product strategy? How does that reality show up?

Nate Stewart: Part of the challenge of building a database from scratch is the way that your customers' behavior changes over time. Early on at Cockroach, we were just trying to get the first users, the first customers, and they were building small applications, not really using Cockroach for what it was designed for: these scalable, data-intensive applications. What we started seeing is that we were able to understand the initial needs of people kicking the tires, but once they started getting into production, once they started scaling and going from supporting a hundred users to a thousand users to a million users, we started seeing where the performance issues could cause a high operational burden for the user. So we started thinking about what it means to test the edges of our product, or test the performance of our product in different scenarios, and how to use that to differentiate versus our competitors. As a specific example, one of the biggest decisions that we made early on at Cockroach Labs was very simple:

what types of queries do we want to make fast? When I joined, we had an option. We could build a database that was more like an analytics system, like Snowflake: very good at running large aggregations, doing math and computations across many, many rows. Or we could build a database that was very performant when it came to writing and reading very small amounts of data, but in a way that's consistent and everything ties out. So that was one of the big decision points: what types of use cases do we want to be performant for? Ultimately, the big decision that I made was to focus on those transactional use cases, because the thing that made CockroachDB unique, and still makes CockroachDB unique, is the resilience.

If your analytics database goes down for two seconds, that's okay. You press refresh, you run the analytics again. If the database that is powering your payments processing goes down for two seconds, you can lose millions of dollars. So that starts to show how you can build a strategy around the core capabilities.

And that's how it surfaced for thinking about performance and cost and things like that.
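To make the transactional-versus-analytical split concrete, here is a minimal sketch of the two query shapes being contrasted, with a hypothetical accounts table:

```sql
-- Transactional (OLTP): read and write a few rows, consistently.
-- This is the shape CockroachDB chose to make fast.
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
UPDATE accounts SET balance = balance + 50 WHERE id = 2;
COMMIT;

-- Analytical (OLAP): aggregate across many, many rows.
-- This is the Snowflake-style workload the team chose not to chase.
SELECT region, sum(balance) AS total
FROM accounts
GROUP BY region;
```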

Brett Berson: And so with that anecdote, what was the process that you went through to land on that opinionated stance? Was it organic and highly judgment-based, or was it more rigorous?

Nate Stewart: So I joined as the first head of product, and one of the things that I had to do was put in place the processes for making decisions around what to build: what are the types of capabilities that we should say yes to, and what are the things that we should avoid? And when you have a database, I mean, databases work in every industry; they work in many different use cases. You need some way of saying no, some way of filtering these requests. So I was looking at: what is the first part of a decision tree that we could use to filter out things that aren't going to be in our strategic interests over the long term? And this is something that I backed into.

I looked at different ways of segmenting the database market, but the one that became the clearest and most natural decision point was just: what are the types of queries that people want to run? Are they analytical? Are they transactional? Just putting that line in the sand, and getting alignment with the founders and the revenue team that we were going to be a transactional database, helped us not only say no to a lot of things that would push us toward being, say, a competitor to Snowflake, but also start to say yes to things that could build on our strengths over a very long period of time.

And we're starting to see the results of that with our recent customer growth. 

Brett Berson: Did you find, as a leadership team, or you along with the founding team, that the decision was difficult to make? Or did it become quite clear and wasn't particularly controversial?

Nate Stewart: Oh, it was very controversial.

There's a lot written about startup strategy and execution, and how you use OKRs to go from strategy to activity. But there's not a lot of literature around why strategies fail in the real world. And there were some real-world pressures that were making it difficult to hold the line here.

The first is that it was very early days when we made this decision. This was pre-revenue. And as soon as we said, hey, we are going to be a transactional database, of course we had large customers saying, we'll give you your first $100,000 or $200,000 if only you build these analytical capabilities. You're talking about your first revenue.

That's very hard to say no to at that stage of a startup. So there was enormous pressure there. And as we got a little bit further along, there were some other pressures on the strategy. For example, recognizing that there were different possible strategies that would be successful in a market as large as ours, for our competitors that were starting in a similar space.

When we decided to focus on mission-critical transactional workloads, with a focus on keeping data close to users, other feasible strategies opened up for our competitors. And so you start to see customers that you say no to going into the arms of your competitors. That can be demoralizing if you don't have conviction, and if you're not able to show how the strategy that you've outlined is ultimately going to create success, both in the short term and over the long term as well. So those are some of the things that made it difficult to stick to that strategy of transactional workloads as opposed to something else.

Brett Berson: On that point, other than what you've outlined in terms of the early process that you put in place, or the process that you followed to land on the transactional use case, was there anything else that you did to give you or the team conviction? Or to make it okay to see revenue go to competitors?

Nate Stewart: Yeah, there was a big storytelling element here. I started with some basic MBA-style reports around just market size: what's happening in the analytics space versus what's happening in the transactional space. And what I initially said was, lean on our differentiators.

We can build a great analytics database, but we're not as differentiated there. Our secret sauce is in keeping your data online, up and running, and close to users. That's something that you don't really get if you're focusing just on analytics. So there was a storytelling element there, but we also started to pair that with some real design partners.

These weren't people who were ready to pay us now, but I could use their interest to show a possible future. So for example, we had a customer that was doing a significant amount of payment processing, and all of that processing was happening in a single data center. If that data center went down for any reason, whether operator error or an earthquake or a power surge, their business would be completely offline.

They had to get outside of a single region. They had to build a resilient system, because it was only a matter of time before that happened. And when you start to see how many opportunities there are like that, and when you start to see the conviction of these customers that are trying to move off of legacy databases, you start to get a picture for how big this market is, not just academically.

You can feel it viscerally. And that was a big part of the storytelling to build conviction around the transactional use case.

Brett Berson: Anything else that you did with those design partner relationships, in terms of how you structured them or how they fit into this process of really trying to own this specific transactional use case?

Nate Stewart: Yeah. You asked at the beginning of the podcast what some of the differences are in the infrastructure space, and what I learned was that people who buy enterprise infrastructure tend to be more conservative. By definition, the infrastructure is what supports some core business process.

So if your infrastructure is unstable, if your infrastructure goes down, your core business is unstable, or your business can go down. And we found that it wasn't as easy as I would have expected to get people to give feedback on beta software. There was a huge engineering investment required to kick the tires of a new feature that wasn't fully baked.

And it really wasn't worth it to our customers. They just wanted the product to work, and they wanted it to work very well. We also found that when our customers were rolling out or purchasing the software, they often didn't want to run the latest and greatest. They wanted to be six months behind or a year behind.

So a lot of the feedback that we got from our customers was based off of decisions that we had made a year or two years ago. It was really driving through the rearview mirror. In a world like that, I ended up splitting our design partnerships into three big categories. We had strategic design partners, where we really talked about where we thought the database market was going:

what were some of the macro problems that people were experiencing, and the pains that their organizations were going through? The second portion of the design partnership is around the actual roadmap: the capabilities that we want to add to the database, and getting buy-in around that. For example, it was clear over time that if people are installing a mission-critical database, they want it to plug into the rest of the organization,

ideally sending updates to other systems in real time. So we talked about adding a real-time data streaming capability. There was a lot of excitement around that, and that helped us de-risk the capability. But then you have the last part of the design partnership, which is giving feedback on the particular feature: getting engineers to use it, try it, kick the tires, see the performance profile.

That's where we ran into more difficulty. And that really led us to a mode of spending more time dogfooding: figuring out ways to use the database like our customers would, to power our own business processes. That's one of the last ways that we could de-risk the product.
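As an aside, the real-time data streaming capability mentioned here shipped in CockroachDB as changefeeds. A minimal sketch of the idea, assuming a hypothetical orders table and Kafka endpoint:

```sql
-- Changefeeds depend on rangefeeds, which must be enabled
-- on self-hosted clusters.
SET CLUSTER SETTING kv.rangefeed.enabled = true;

-- Stream every change to the orders table into a Kafka topic,
-- so downstream systems see updates in near real time.
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://kafka.internal:9092'
  WITH updated, resolved;
```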

Brett Berson: Was there anything else about the way that you structured the design partnerships in those early days that you think is extensible? I guess, put another way: if somebody was starting another infrastructure company and they said, hey Nate, we're just getting going on this design partnership process

as we build out our early product, what are some things to keep in mind to make that successful?

Nate Stewart: The first is just recognizing that there are different levels of eliminating or reducing risk: at the strategic level, are you going in the right direction; then at the roadmap level; and then at the feature level. And before getting design partners at the feature level, which is the most difficult part in an infrastructure company, it's about identifying the companies that have the most pain.

So for example, I talked about the customer that absolutely had to get out of the single region. They were willing to invest a couple of engineers to make sure that what we built would help them on that journey to their cloud-native architecture, their modern architecture. So that was a good design partner. But I can't emphasize enough the importance of eating your own dog food.

There are great examples that came out of Apple's design process. How do you build a product when you can't roll it out to millions of people early on and get early feedback? How do you build a product in secret? The way they do that is by living in the product, which is another way of dogfooding it.

And so that's definitely something that I would bake into the culture of the product and engineering teams very early on.

Brett Berson: And on that point, was that easy to do, given the nature of your business? Or did you create some fake part of the business, or do something so that you could fully utilize the product internally?

Nate Stewart: That's a great question. It was very uncomfortable, because in many cases, early on, it slowed down some of our product development and some of our core business processes. There was a point when I insisted that all of the product analytics that we do be done on CockroachDB.

And I know, I just said CockroachDB is not an analytical database, but it was a way for us to see what that SQL interface was like: how easy is it to get data into CockroachDB, to take data out of CockroachDB? Eventually, once we were confident that CockroachDB could handle basic queries, we didn't spend so much time running analytics on Cockroach.

We ended up going to an analytics database. But we did use CockroachDB for keeping track of all of our user telemetry from around the world. So if anyone deploys a database and they have telemetry turned on, that is phoning home to a centralized CockroachDB database that is handling very high writes per second.

And that is helping us test the core product. Then we can see, hey, how do we upgrade this database without losing our user telemetry? How do we stream updates from what is now our telemetry system of record to an analytics system, which is a core use case? So initially, some of those dogfooding opportunities were a little contrived and, quite frankly, painful. Eventually we found opportunities to plug CockroachDB in in more natural places.

And it's paid huge dividends.
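As a rough sketch of that telemetry dogfooding pattern, assuming a hypothetical schema: the phone-home workload is many small writes per second, exactly the transactional shape the product was optimized for.

```sql
-- Hypothetical phone-home table: every running cluster with
-- telemetry enabled writes small rows here, at high volume.
CREATE TABLE cluster_telemetry (
    cluster_id  UUID NOT NULL,
    reported_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    version     STRING NOT NULL,
    metrics     JSONB,
    PRIMARY KEY (cluster_id, reported_at)
);

INSERT INTO cluster_telemetry (cluster_id, version, metrics)
VALUES (gen_random_uuid(), 'v19.2.1', '{"nodes": 5, "qps": 1200}');
```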

Brett Berson: One of the interesting things that you mentioned a second ago was the idea that strategy can fail in the real world. And you gave one example when you were talking about crafting strategy in the early days around this specific use case. But I'm curious, if you zoom out a little bit more, are there other areas where you've found that strategies tend to fail, or ways that you think about that translation layer between, say, a five-page memo and what ends up happening in the real world?

Nate Stewart: Yeah. Our strategy has evolved several times over the last five years. And one of the biggest failure modes that I've observed, in other companies as well, is coming up with a strategy without taking into account the capabilities of the organization to execute it. You hear a lot of people talk about strategy,

and you hear a lot of people talk about execution. But what skills do you have that make this particular strategy a good fit for your team? You have a couple of options here. You can have an honest accounting of your capabilities, given your strategy, and figure out how to close that gap. Or you can say, hey, this is the team we have:

how do we figure out a strategy that can make this team successful? There's a give and take here. And this is something that we experienced pretty directly as we started moving into more of a SaaS product and delivering CockroachDB as a service, where our customers don't run the database themselves; we run the database for them.

When we realized that, hey, over time we're seeing that there are way more developers than people who have the capability to operate a database, and this macro trend is pushing more people to offload operations to vendors, it became increasingly clear that we needed to figure out how to offer CockroachDB as a service.

That was a big part of our strategy. But then we had to say, okay, what needs to be true for this strategy to be successful? And this is something that we had to figure out the hard way, by the way. The first thing we realized is that, okay, our customers are having trouble hiring these operators, but now we have to hire them.

And these operators, these site reliability engineers, they're not going to run just one database. They're going to run thousands and thousands of databases. How do we find these people? How do we interview these people? How do we integrate them into our culture and make them successful, now that this needs to be a core competency?

That was a huge capability that we had to build. Another one, which you may be able to guess, is that we are in the business of handling mission-critical data: payment data, ledger data. If we're asking people to trust us with that data, we need to build a substantial security skill set. We need to figure out how to do network security and data security, and there's all manner of certifications that we need to be able to pull off.

How do you hire security experts on the product side and on the engineering side who can do that? So those are some of the things that we realized we had to build if we were going to be successful moving into becoming a cloud company.

Brett Berson: Another part that you hit on in this topic of early product strategy was the role of competition. It's something that I'm kind of endlessly fascinated with. My own point of view is that the idea that competition doesn't matter, that you should focus on your customer, build for your customer, and everything works out, is actually quite damaging advice,

particularly if you're operating in a market where there are other players, which definitionally is most markets. So I'm curious, how do you think about competition? How has that changed over time, and how does that fit into this product strategy thread?

Nate Stewart: Yeah, it's a good point. I also initially thought that competition didn't matter: just focus on the customers, work through first principles, and you'll end up in a good place. But when it comes to actually landing an enterprise sale, or looking at a real evaluation process, competitors are involved, and it's useful to understand

where your competitors are strong and where you're strong, and how to talk about those differentiators in a way that favors you. The place that I've spent the most time thinking about this is when it comes to strategy, because we have a class of competitors that have made fundamentally different strategic decisions than us.

As an example, Amazon has a great database, Amazon Aurora. Rather than make the entire database cloud-native, they have just made the lowest level cloud-native, and they've taken the execution layer from an existing database. What this means for them strategically is that they have a much simpler migration story:

it is very simple to take an existing database and run it on Amazon Aurora. The thing that is in our favor is that those databases aren't cloud-native. You don't get the same resilience characteristics, you can't move your data anywhere in the world, and you can't scale to the same level that you can with CockroachDB. That is a fundamental trade-off.

So this is helpful as we think about the investments that we make and where we can invest in our differentiators. But it's also helpful when we're arming our sales team to talk about the real contrast between us and our competitors, and to position us to win in the ways that our customers find valuable.

Brett Berson: Maybe we could build on this and talk a little bit more about how you actually build product. Maybe we could talk about what you figured out in the early days and what it looks like today: how you actually run the product team, how you set direction, what intervals you're building in.

What does that look like, maybe in a step-by-step fashion? If at the end of the day you have a new product release, what's everything that happens before that?

Nate Stewart: So for some background, I lead the product team at Cockroach Labs, and that includes product management, product design, education, and our data and analytics function, as well as product operations. The way that I think about building product is as a system.

I call it the product operating system. You have a product team; for now, let's just focus on product management. You have product managers, and what they're doing is taking inputs from the market, externally: what are our customers saying, and to your point, what are our competitors doing?

They're taking these external inputs, but they're also taking internal inputs. What are we hearing from our sales team? What are our pre-sales folks talking about versus our post-sales? What are the trends there? And they're taking all of this information, looking at it through the lens of a strategy, and then figuring out the ways that we want to change or enhance the product that will further our interests,

that'll move us closer to our mission. So in this process of taking this input and turning it into this output, the question is: what actually makes it onto the roadmap? What are the questions that PMs need to ask themselves? The big one, outside of strategy, is: how deeply do we understand the problem?

This is huge for us, because we're in a space where it's very easy to look at another database, let's say Oracle, and just do what Oracle did. But that was based off of the world 30 years ago. If you truly understand the problem now, you can come up with a much, much better solution. And so we really put a big premium on that.

And this is especially important when you're working with engineering teams that may have a closer, tighter intuition about the customer than the PMs. If the PMs can frame the problem right, all sorts of innovation can come from the engineering experts. So framing the problem is huge. Then there's also your standard ROI:

does this make sense to build, given everything we know? Risk-adjusted, is this a good investment, given the engineering cost? And then, crucially, there's the opportunity cost element. This is all baked into our product process. If you add something to the roadmap, to the set of things you want to deliver over a period of time, something else is going to fall off.

And are you okay with that trade-off? So those are some of the things that the PMs are asking themselves. And then what they're doing is partnering with engineers. We go through a standard agile process; we have epics, we have user stories, so there's not much to talk about there. But the big question is: as these features, as these updates start to come to fruition, how do we de-risk them?

That goes through the design partnerships that I talked about, acceptance testing, living in the product, dogfooding. These are all things that help the PMs build the right things. And then we roll the product out. The challenge for us as an infrastructure company is that we only do major releases twice a year.

So we have to figure out, given the risk of the feature: is this something that we can put into a current release, because we're very, very sure it won't break anything? That helps us get it into more customers' hands faster, but it also limits the scope of what we can do. Or are we okay waiting six months?

So there's that calculus, but that's a big part of the process. And then as these updates go into the market, we're seeing the product analytics, understanding what's working and what's not working. Again, that's another form of input that helps us understand where to iterate. And so this cycle just keeps going.

At this point, as a product leader, I don't have to understand what feature a particular product manager is building. As long as the system is working, as long as we're getting high-fidelity inputs, and we have this feedback mechanism, and we're correctly plugged into the rest of the organization,

I know we're going to have a good outcome at the other end.

Brett Berson: So how does that process fit into annual plans? Do you start with a set of themes or areas that gets filtered down through the line-level PM? What does the annual cadence look like, and how does this tactical process fit into it?

Nate Stewart: So I mentioned that all of this happens in the context of a strategy. If we drill in there, the way this ties into our annual planning is that we have a strategy that updates very infrequently. Maybe we'll make tweaks once a year, but it is fairly static. We codified that strategy into OKRs,

and those OKRs are what ultimately set the roadmap. So if we want to become the developer's database of choice, and we're looking at new serverless starts or new database starts, top of funnel, as one of the leading indicators, there's a PM that is responsible for thinking about that entire funnel.

Moving those key results is one of the things they're looking at when they're setting the roadmap: what are the epics, what are the user stories, what are the enhancements they're going to do that are most likely to move the needle there? And then we have monthly business reviews, things that we call product councils, where the PM responsible for a particular product area or a set of user journeys

will sit down with a cross-functional group of stakeholders to talk about how things are going. At the beginning of the month, we were trying to move this particular conversion rate; here are the activities that we took to do it; here's how it's playing out; here's what we think the next steps are. Sales engineering, is this aligned with what you're seeing?

What are we missing? Marketing, how are you feeling about this? So this is a cross-functional meeting that happens once a month, and it's a chance to reflect and really understand what the next steps are. It's a course-correction and information-sharing mechanism that we have on a monthly basis, in the context of this broader annual plan.

Brett Berson: Anything else you can share on that format of product councils? I think that's a really interesting idea. And I'm curious to learn maybe if somebody wants to implement a similar idea at their company, what are the things they need to know about how it actually functions?

Nate Stewart: Yeah. The way that we run our product councils is primarily through docs and a read meeting. What happens is the product manager for a group has a set of OKRs that are scored, and rather than just creating a deck or providing a scorecard for how things are going, they write a six-page narrative around what's working and what's not working.

They do this at the end of the month; maybe it takes three or four days to write. Then the meeting is fairly small, let's say seven to 10 people. The doc is released at the start of the meeting, and in an hour-long meeting, you take maybe 20 or 25 minutes for everyone to read through the doc

and leave comments. We use Google Docs, so you can respond to comments while everyone's silently reading. All the while, the product manager is pulling out the main themes, the main questions or discussion points. At the end of the read period, the PM summarizes the discussion, and then there's a free-flowing conversation around where we go from here.

So the PM leads the conversation, but there's input from every single part, or most parts, of the organization. The last part of the product council structure is figuring out who needs to be in the room. We found that these meetings are very interesting; a lot of people want to join. But the more people you have in the room, the fewer people are willing to raise their hand and actually say something.

So there's a balance. We do make sure we have representation from pre-sales, post-sales support, engineering, product management, design, and marketing. That's the main group, and of course education as well. But it's really: what's the cross-section of the organization that can give you the most relevant feedback to move the product forward?

Brett Berson: Very cool. Flipping back up to annual strategy, and then how that cascades down: who's involved in that, or is it generally more top-down from leadership? And what is the output? If you think about, say, 2020, the product strategy or the strategy of the company, how specific and/or high-level is that strategy for the year?

Nate Stewart: The strategy is fairly high-level. We believe that right now Oracle has the lion's share of the database market, and that hasn't changed for some time. But because of this once-in-a-generation shift to the cloud, people are starting to take another look at their stack, and we believe that there is this movement of workloads from Oracle to the cloud.

And we believe that if we can build a database best suited to running in cloud environments, we'll be able to win more than our fair share of those workloads as this transition happens. Then we start to talk about, okay, what are the drivers? What are the big elements of the strategy that will help us take advantage of this moment?

And this is only, let's say, a three- or four-page doc. From there, the big deliverable is the OKRs, and this is a top-down and bottom-up process. The executive team will take a first pass at saying, okay, here are the three big objectives and here are the four ways that we're measuring progress against each one. But then they'll work with their direct reports, and those direct reports with theirs, to say, hey, how are you seeing this driver?

What do you think the levers are that your team has, let's say as the disaster recovery team? How do you think you can support this mission? And ultimately, after this top-down, bottom-up process, we end up with an OKR tree, which is what we measure the company against on a quarterly or, in some cases, month-by-month basis.

Brett Berson: When you post-mortem products or features that you've shipped that haven't worked out or resonated with customers, is there a thread that tends to tie those errors together?

Nate Stewart: Yeah. It comes down to the point I was making around what makes infrastructure companies unique. It's rare that we'll build something that there's no demand for. What's more likely to happen is that we didn't stick the landing, because we weren't able to get the feedback required to actually pull it off.

This happens a lot with some of our finance customers. We'll deliver some capability and they'll say, oh, we can't use this because we don't have the right permissions, and we don't want this certain class of employee or this certain role to have access to this, so this feature is completely unusable.

And by the way, we're only updating once a quarter, or once a year, so see you again in 12 months. We see that happen for some of the more enterprisey features if we don't have the right level of design partnership; that's a threat. And I think that's fairly common with enterprise infrastructure.

Again, I talked earlier about some of the ways that we de-risk that, but one of the common failure modes is just not getting people to actually use the feature so that it can be exercised in real-world environments. The other place that can get tricky is that sometimes you have a big gap between when someone evaluates a critical piece of enterprise infrastructure and when they move it into production.

So: hey, this feature looks great, we've tested it with this synthetic workload, we've tested it with our friends and family, and now we're ready to move it into production. And three months later, it's time to upgrade that feature, or it's time to do some of what they call day-two operations, which is just administrative tasks on a running feature.

And they realize something's missing: oh, we didn't have the right observability, we didn't have the right metrics, we didn't have the right auditing. Pump the brakes. So that's a big theme that we see as well. Part of it is just the way that enterprises roll out infrastructure:

it is very slow, it is very cautious, and that's just something that we have to build and plan for.

Brett Berson: Another thing that you shared was the importance of really good problem definition when you kick off feature development at the product team level. I'd be interested: what does great problem definition look like? And is there an example that comes to mind from the last year or two, maybe from a product council session or something else, where you thought: when it comes to great problem definition, this was an example of that?

Nate Stewart: Yeah. There's a lot written in the PM literature around how to define problems, but in my experience, the best problems are the ones that don't lead the engineers to some solution. One of my favorite examples of this at Cockroach was a very high-level problem definition, which was just: hey, there's a certain class of queries that are just too slow.

Okay, what's too slow? They're 10 times slower than our competitors'. That's not, in the classic PM sense, the most crisp definition, but it shined a light on a category of solution that no PM was thinking about. The normal way to solve this would be, okay, we need this to get 10% faster, or let's look at ways to improve the efficiency of the system.

But the way the problem was defined, some of the more senior engineers said, oh, I've seen this problem before. The way you would typically solve this is with what's called a cost-based optimizer, which basically looks at a query, looks at some statistics about the database, and, on the database side, rewrites the query to be more efficient.

So now you're not talking about 10% more efficient; you're talking about 10 times more efficient. And this was a two-way conversation. We started saying, okay, where does this problem show up? It doesn't just show up when you're running in a single data center.

It also shows up when you're running in multiple regions. How can we take that into account when figuring out the optimal plan for the database? And so this is where the novelty came in. Because rather than build the same type of cost-based optimizer that Amazon's databases have, or that Oracle has, we said, let's build it from scratch.

What are the costs that we should take into account today? It's not just what's in the database; it's where the data is physically located in the world. That's part of the cost. And how do we find plans that minimize network latency, so that even in a globally distributed environment, we can get you fast results?

So just by teeing up the problem, not over-constraining it, and bringing engineers into the solution, we got to a much better place than we would have using other methods.
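For a rough illustration of what a cost-based optimizer does from the SQL side, here is a CockroachDB-style sketch with a hypothetical orders table; the optimizer uses table statistics to choose a plan rather than executing the query as written:

```sql
-- Collect statistics the optimizer can use to estimate costs
-- (CockroachDB also gathers these automatically).
CREATE STATISTICS order_stats FROM orders;

-- Ask the optimizer how it plans to run the query. With good
-- statistics it can choose, say, an index lookup over a full
-- table scan; in a multi-region cluster, the cost model can
-- also prefer nearby replicas to minimize network latency.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```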

Brett Berson: One of the things that you noted was that, given the nature of the infrastructure product you're building, you're on a twice-a-year release cycle. The other thing that's really clear is that, unlike a lot of tools, reliability is one of the most critical features.

I would think those two things combined create issues around driving a sense of urgency or fast-paced execution. And I'd be interested, and maybe you inherently disagree with that, which is great as well, but is there anything you do in terms of the way that you run or build product that keeps iteration speed really high, that keeps urgency really high, when the very nature of the product means you kind of want to be slow and not break things?

Nate Stewart: Yeah, that's super interesting, because you're right. The opposite of a good principle is also a good principle, right? So move fast and break things is a good principle, and move slowly and honor your customers' data is fine too. But I don't think there's necessarily a trade-off between reliability and urgency.

I think it forces you to think about testing and quality and all the different ways that, at a technical level, you can reduce risk and qualify releases. But to me, that's completely decoupled from urgency, because if you're shipping twice a year and you're about to miss a release, if that feature doesn't get in, you're going to have to wait another six months. There's a huge sense of urgency to get that feature in by that date.

So I don't think they're quite mutually exclusive. But as we move to more of a SaaS company, we are taking a hard look at what needs to be true around this release cadence for our customers that are running our cloud product. What happens if we release on a monthly basis? Is that okay?

How do we give customers the same predictability and trust they have in our product while also iterating a little bit faster? But I would decouple iteration speed from urgency, for the reason I just mentioned.

Brett Berson: Sort of on this theme, and the same as when we were talking about the idea of product councils, I'm always curious to learn more about rituals or habits or mantras that product leaders have implemented in their culture or in the way that they do things. So I'm curious: is there anything that bubbles up that maybe you do that's a little bit weird, or maybe a set of documents that you're really religious about that powers a lot of your work, that you might be able to walk through in some way?

Nate Stewart: The thing that we do that's a little bit weird is more on the process side, and it's our relationship with committing features to our customers. We make it very, very difficult to do that, to the point where when new product managers come in, when new sellers come in, they're caught off guard by our unwillingness to commit to building something for a particular customer.

And this has been huge for us. I got to this point of view the hard way, because I had operated in environments where we made too many customer commitments. This is important because, you know, engineers have technical debt: shortcuts they take to accelerate some feature, but they pay the tax on it for a very long time.

Customer commits are tech debt for product teams. You can say, hey, we know we don't have product-market fit yet, but if you pay us revenue now, we will commit to building features A, B, and C. The problem is that in making those commitments, you limit your flexibility. And what's more valuable to a startup

than the ability to change course quickly? They're just completely incompatible, in my opinion. Moreover, in those early days, if you get an exciting customer that's a little bit off strategy and you make a commit, now you're committed to doing something that you know is off roadmap and off strategy.

The opportunity cost of that is huge. So we have instilled in the product team, and in the company as a whole, that we're not making customer commits. And if we do make them, it's going to be like getting a bill through Congress: very, very difficult. In the five years that I've been at Cockroach Labs, we've made five or six customer commitments.

And that flexibility has been huge as we've navigated the changing database landscape.

Brett Berson: I'm curious, if you've only made those commitments a few times, what's the thread that ties them together? Or what are the conditions that would allow you to be willing to break one of your rules?

Nate Stewart: The first is that it has to be something that we were planning to build anyway, or, after we learn about the problem that comes with the commit, something that is better for the company than what we were otherwise planning to do. So the customer commits that we make are roadmap accelerators; they aren't distractions.

That's the first thing. The second thing is that for us to make a customer commitment, we have to understand what the solution looks like, from a customer point of view but also from a technical point of view. The engineering team are key partners with product management in saying yes, we will accept this commitment, or no, we will not.

So that's another angle. And then there also has to be a massive reward at the other end. We have to know that if we build this thing, this will become not just a customer, but also someone who will deploy and grow their usage of CockroachDB. So those are some of the things that have to be in place. But at a macro level,

the request for customer commits is a very interesting indicator, because it's telling you that you don't have product-market fit. The sales team feels they need to stretch. And then the next question is: are they stretching in ways that serve you as a business, or is there an enablement problem?

Do they not know how to position the product, or do they not see pipeline in the direction or in the types of customers that you would expect? So it's a chance to look at the entire go-to-market function and see if there's some misalignment there.

Brett Berson: Is the reason that so many enterprise companies are addicted to these customer commitments just that anytime there's revenue on the line, and you can do X to get X dollars in the short term, it tends to drive overall product strategy and behavior? Or are there other reasons why it's so common?

Nate Stewart: I think that hits the nail on the head. It's very attractive, especially in the early days. If you can get your first $500K or $1 million customer through customer commitments, that can materially change your run rate. The other thing with customer commitments, going back to the lack of agility, is that they're very nicely aligned with sales incentives, which tend to be more short-term focused.

What does this mean for someone's quarter, or what does this mean for someone's year? And they may not have to deal with the longer-term impacts of that. So there is something that makes commits more attractive to teams in the enterprise space with large sales teams, and that's something you need to be mindful of.

Brett Berson: Switching gears slightly: in what ways have you focused on growing or learning or evolving as a product person and product leader? You've been on this amazing five-year journey. You started off as the first head of product and have grown in pretty profound ways, not to mention how hard it is for a first head of product to stick.

I feel like with the first head of product, or head of sales, whatever, most founders and CEOs go through like seven people to find the person. So you defied the laws of hiring product people. I'm curious, when you reflect on your own journey, what did you have to figure out, or in what ways have you grown or evolved or learned the most, that has really fueled the success you've had at Cockroach over the last five years?

Nate Stewart: When I think about the big decision points, or the ways that I have scaled with the organization, a lot of it has come down to self-reflection: having an honest internal dialogue around where I'm doing well and where I'm not. If someone joins a new company and they report to the CEO as the functional expert, it's unlikely that the CEO is going to be able to coach them on how to do well in that function.

That's why they were hired to be the functional expert. So you need to be, I don't want to say self-critical, but you need to really take a hard look at where you're succeeding and where you're not, and do that on a fairly regular basis: insisting on getting feedback from the CEO, creating informal peer networks with other people who are running product teams at similar stages. What are the challenges that they're going through?

If someone was promoted or demoted, what happened? Is there a way to get around that? One of the things that I did very early on was have a conversation with some of our investors around failure modes for heads of product. What are the things that I should be looking at now to get ahead of? What are the skills that I need to build to be successful along every step of the way?

One of the biggest things that I've done is really understanding that I need to check my ego. I can be wrong in some cases, and if I'm wrong, not fighting to the death over it, but accepting, hey, I made a mistake; this isn't the right strategy, this isn't the right approach,

let's move in the right direction. And this is huge, because when I joined Cockroach, it was an open source company. Then we started commercializing; that's a different business. Then we started moving into managed services; that's a different business. And now we're in this product-led growth mode.

So being able to be flexible, to either understand when things are changing or know when to drive the change yourself, those are all things that have been very helpful to me over the last five years.

Brett Berson: I'm curious, if you can remember back to when you talked to early investors in the business about these head-of-product failure modes, were there other things that came up other than this idea of managing your own ego?

Nate Stewart: Yeah. It's figuring out roles and responsibilities with the CEO. Like, why were you hired? If you have a visionary CEO and that person is looking for an operator product leader, don't try to be the visionary product leader and fight with the CEO on the vision. So it's figuring out what that better-together story with your CEO is, and how you can help that person

achieve their vision, achieve that mission: provide back pressure, provide a sounding board, and hold up a mirror in some cases where things may not be working. But really understand what role you need to play to support the entire business.

Brett Berson: On that insight around managing your own ego as an exec: other than keeping it in mind, how do you actually manage your own ego as the company evolves and changes? Are there rituals, or things that you reflect on, or things that you've leaned on?

Nate Stewart: Yeah, one of the things that's been very helpful to me is a book that's often quoted, Thinking, Fast and Slow, and realizing what it feels like, the physical sensation I get, when I'm about to make an error or about to let some emotion or some logic error take over. Getting really attuned to that has been very helpful. But so has actively seeking out feedback and really putting yourself in a place to be vulnerable, which is very difficult to do when you're on the executive team.

It's hard to go in front of a room of your colleagues, in front of that quote-unquote first team, and say, hey, I've fallen short in these areas, here's what I'm doing about it, but I welcome other perspectives on how we might overcome this together. That's been very helpful.

I also ask for candid feedback from my colleagues. I ask how things are going. We even just hired this awesome chief of staff, and I said, hey, you are the CEO's proxy: where am I failing right now, given your vantage point? So I'm always looking for feedback, and I do my best to incorporate it.

Brett Berson: Are there ways that you've found to ask for feedback that increase the chances that you get something that's useful, clear, and maybe reflective of reality?

Nate Stewart: What I've found is that it's not so much about ways of asking for feedback; it's how you show up when you receive that feedback. If you ask for feedback and you get it and you're defensive, it's very unlikely that you're going to get the same level of feedback as you could if you really took it and acted on it. The other thing that I've seen, because again, I do this a lot, is that people have different ways that they prefer to give feedback. In some cases, it may be over a phone call. Other people may want to take a moment and write things down. So if we have a shared Google doc for our meeting agenda ahead of time, I'll say, hey, here are the areas where I'm looking for feedback; if you have a moment, I'd love to get your thoughts, and we can talk about it in person.

So understanding people's communication norms is also helpful in getting actionable feedback.

Brett Berson: And is there anything that you've done to make changes based on feedback that you get as a product leader? I think it's a little bit underappreciated, in some cases, how hard it is to get high-quality feedback, but also how hard it is to really internalize and take action on that feedback. I don't know if there's anything that you've done to hold yourself accountable

when you receive high-quality feedback. Or maybe there's a story from a couple of years ago about high-quality feedback you've gotten and how you ended up taking action on it?

Nate Stewart: The feedback that is most memorable to me is around getting outside of my comfort zone on the pace of development and the speed at which we're moving the business. I've been in several board meetings, before I was a board member, where I was feeling good about the progress, and it's like, yeah, compared to your comps, you're actually not moving that fast.

I was like, oh, that sucks. Let's take another look. I thought I was moving as fast as possible. What am I missing? Is it how I'm communicating it? Or are we taking some fundamentally worse approach than our peers? Can we justify this delta? Sometimes we can, sometimes we can't. But those are the pieces of feedback that most resonate with me, because

part of the brand that I try to show is that, hey, I am a grinder. I am going to get things done. I'm going to run through a wall in order to accomplish some task. And hearing someone say, ah, you're not moving as fast as your peers, or someone else may have done it differently or more effectively or faster,

it's like, all right, maybe I'm not at the world-class level I aspire to be. Let me understand that delta. So any type of feedback around that really resonates with me, and I pay close attention to it.

Brett Berson: And with that specific feedback, when you ended up digging in, what did you figure out around that feedback or benchmarking around speed?

Nate Stewart: There was an internal PR opportunity that we were missing. At the beginning of our conversation, we talked about capability building: how do we build this operator capability? And we weren't building it fast enough, full stop. As a PM leader, I had to figure out how to tell the story about where we need to go, but also why this is a shrinking window.

What I ended up doing in this particular case was create a visualization of when different infrastructure companies were started and how many years it took them to offer a SaaS product. It was just this bar chart: look at where we're pacing. If you look at this chart, we're not pacing to the same level as our peers. And everyone at Cockroach Labs wants to build something world-changing.

They want to build this once-in-a-generation company; they don't want to see themselves in third or fourth place on this crucial transition. So figuring out how you can use communication to drive excitement and drive energy around a transition was one of the things that I did, in partnership with my PM team, to get more urgency around building that operator capability, which ultimately also helped us build our cloud product faster.

Brett Berson: Awesome. Well, thank you so much for spending this time with us.

Nate Stewart: Thanks so much for having me, Brett. Really appreciate it.