Ankur Goyal is the founder and CEO of Braintrust, an end-to-end platform for building AI apps. Before that, he founded Impira, a data management platform that was acquired by Figma, where he went on to lead the AI team. Ankur kickstarted his career when he dropped out of college to join the founding team at SingleStore (formerly MemSQL), a formative experience that shaped his views on building for high-bar users.
In today’s episode, we discuss:
• Ankur’s early lessons on quality from MemSQL
• How frustration with evals at Figma led to Braintrust
• Why they delayed go-to-market (on purpose)
• How to find product-market fit in a new market
• Why building great software comes from a place of “paranoia”
• And much more…
Referenced:
• Airtable: https://www.airtable.com/
• Adam Prout: https://www.linkedin.com/in/adam-prout-0b347630/
• Braintrust: https://www.braintrust.dev/
• Bryan Helmig: https://www.linkedin.com/in/bryanhelmig/
• Coda: https://coda.io/
• Databricks: https://www.databricks.com/
• David Kossnick: https://www.linkedin.com/in/davidkossnick/
• Figma: https://www.figma.com/
• Goldman Sachs: https://www.goldmansachs.com/
• Kris Rasmussen: https://www.linkedin.com/in/kristopherrasmussen/
• Manu Goyal: https://www.linkedin.com/in/mngyl/
• MemSQL: https://www.singlestore.com/ (now SingleStore)
• Nikita Shamgunov: https://www.linkedin.com/in/nikitashamgunov/
• OpenAI: https://openai.com/
• Snowflake: https://www.snowflake.com/
• Zapier: https://zapier.com/
Where to find Ankur:
• LinkedIn: https://www.linkedin.com/in/ankrgyl/
• Twitter/X: https://x.com/ankrgyl
Where to find Brett:
• LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/
• Twitter/X: https://twitter.com/brettberson
Where to find First Round Capital:
• Website: https://firstround.com/
• First Round Review: https://review.firstround.com/
• Twitter/X: https://twitter.com/firstround
• YouTube: https://www.youtube.com/@FirstRoundCapital
• This podcast on all platforms: https://review.firstround.com/podcast
Timestamps
(02:02) Dropping out of college to join MemSQL
(02:24) Key lessons from MemSQL
(05:54) How to build quality software
(08:51) The trick to recruiting well
(12:03) Founding Impira and selling to Figma
(19:45) How Braintrust was born
(25:33) Why good founders are paranoid
(28:08) How to recognize a real market opportunity
(33:37) The biggest mistake at Impira
(35:15) Inside Braintrust’s first six months
(40:57) How AI is reshaping Braintrust’s future
(42:32) The evolution of their prompt playground
(46:53) Fighting to stay mission-driven
(52:45) Make big bets, with extreme clarity
(57:00) The cultural choices that shaped Braintrust
(58:49) Hiring mistakes they won’t repeat
(1:03:07) What PMF really looks like
Brett: Well, thanks for joining.
Ankur: Thanks for having me.
Brett: I wanted to start by learning about how you got into technology in the first place. We actually met very briefly when you were at what was called MemSQL at the time, which then became SingleStore, and you were working out of an apartment. But I don't even know what brought you up to that point.
Ankur: Growing up I wanted to be a doctor just like my parents and then,
Brett: Where did you grow up?
Ankur: Pittsburgh, mostly. But if I tell you too much, then you'll know the secret answers to my, you know, password questions. I wanted to be a doctor, like my parents.
And senior year of high school I took two classes, AP Bio and linear algebra, and I absolutely hated AP Bio and completely fell in love with linear algebra.
And I had, you know, a midlife crisis in high school and realized that I really don't wanna be a doctor, I'm not into this biology thing, and I really like linear algebra. One of the things that was fun about the class that I took is we learned about the PageRank algorithm, which was very novel at the time.
And I implemented it in Lisp, and that was super fun, just to experience what that was like. And so, luckily, I grew up in Pittsburgh, and just kind of because I was in Pittsburgh, I applied to CMU, applied to this program, and luckily I got in. So it worked out really well for me to just kind of switch gears.
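He's describing implementing PageRank as a class exercise. As a rough illustration of what that involves, here's a toy power-iteration version in Python (his was in Lisp; the graph, damping factor, and iteration count here are invented for the example):

```python
# Toy PageRank via power iteration on a tiny made-up link graph.
# The graph, damping=0.85, and iteration count are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

On this made-up three-page graph, "c" ends up ranked highest, since both other pages link to it, and the ranks sum to 1.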
My parents weren't super happy, but I think they've come around to it since. I started studying CS, and then, again trying to be like a good Indian kid, I interned at Microsoft and I did research with a professor, and I hated both of them. So I had my second existential crisis and decided to move to San Francisco that summer, and I met Nikita and Eric.
It was them and two other people, and I realized, oh my God, I can learn way more from these people about what customers are actually doing in the industry and how to build really hard technology than I can back at school. And so I dropped out of undergrad and joined MemSQL then, and I've never looked back.
Brett: So you mentioned this a little bit, but when you reflect back on that first chapter at MemSQL, what are the most tangible things that you learned there, and the types of things that are still part of the way you think about building and running companies?
Ankur: Yeah, I'll tell you a few things that I learned and also some things I took for granted. Some of the most important things I learned: one, I think you have to be extremely paranoid about quality when you're building software. And what I mean is, it's actually super, super hard to build a product that actually works and that people actually use.
It's pretty easy, and now I would say even easier, to build a prototype. But the craft of actually taking something from 95% to 99% or a hundred percent is very challenging, and not something they teach you in school. And there was one guy at MemSQL named Adam, and he was a pretty senior database engineer from Microsoft.
And anytime someone broke a test or anytime a customer ran into an issue, he wouldn't brush it off. He would pay attention, and he'd find the relevant engineer and kind of force them to pay attention to it. And I think being extremely paranoid and that diligent about quality is not only important, it's existential.
So that was probably one of the biggest learnings for me. I remember when we were selling a big deal at Goldman, the managing director pulled me aside. I was like 23 at the time, and I had no idea what I was doing. And he was like, hey, we're about to make a really big investment in your software, and if it crashes, that group of people right there might lose their jobs.
And so, you know, we're really excited, but I hope you guys are taking this seriously, 'cause this is a really big deal for us. And again, that's not something they teach you in school: just experiencing how important the software that you produce actually is, and the responsibility that comes with it.
I think it reshapes how you think about building software and delivering it to customers. Recruiting, I learned a lot about. And I think one of the most interesting things is that recruiting really, really good engineers is not a transactional process. And I think that sounds very obvious to anyone who's tried or read about recruiting, but until you experience it, it's very hard.
We'd sometimes spend years recruiting someone, you know, getting coffee with them every quarter, writing down a list of all the people that you really want to hire, and maybe they're unavailable, so making sure to ping them just to hang out every once in a while. I think that kind of stuff is really important. And to anyone who's aspiring to start a company, one of the things I always tell them is: if you're at a company that is really good at recruiting (for example, I was at Figma recently, and I think they're really exceptional at it),
try to spend as much time as you can actually working on recruiting, 'cause that's one of the most important and challenging things when you're starting a company. And then, the things I took for granted: that was my first job out of college, and in college I was constantly around developers and CS people.
And at MemSQL I was selling to engineers and customers who were very technical, and I completely didn't realize how important that was to me. You know, at Impira, which I started after MemSQL, we actually sold mostly to non-technical users. And I found that really intellectually stimulating for a while, but at some point I realized I don't have as much in common with our customers as I did at MemSQL.
And it was very hard for me to relate to them and think intuitively about what they wanted out of our product. At MemSQL I had that, and I totally took it for granted. So that's one of the things that I look back on and really kind of cherish.
Brett: How do you build high quality software?
Ankur: I think you have to pay attention to literally every possible sign that the software is not working. And I think the first step is actually just believing that when you type something or build a feature, it's not gonna work. When you hire a relatively new engineer, or someone who is quite junior, they don't have that imprinted in their brain yet.
They might type something, send a PR, test it a few times, and then say, oh yeah, this thing probably works. But the really good senior people have strong intuition. And at a certain point you can kind of predict, based on what you're building or how pervasive the feature is, whether or not it's gonna have quality to begin with.
We're starting to see this, by the way, with AI people too. Totally separate topic, but really good AI people are able to intuit what the quality curve will look like for a new feature. So that's really important. And then the second thing is you have to take feedback really seriously. It's 6:00 PM on a Friday, and a user is like, hey, I had to click this button twice instead of once to actually close this window. Or: I ran an experiment in Braintrust, and it worked 19 out of the 20 times, but one out of the 20 times I got a 504 error.
At 6:00 PM on a Friday, I think for most humans it's human nature to be like, eh, I don't know if that's such a big deal. It looks like they were able to work around it, or maybe they screwed something up. Maybe their laptop is broken. You know, maybe they're using an old version of Chrome, whatever.
But the really good people laser in on all of those details, and they take them all very seriously, and they use them as an opportunity to justify the hypothesis that something's likely broken. And I think you just have to be really radical about that.
Brett: Why is that important?
Ankur: Well, I think that product quality is probably one of the most important factors for a customer, whether it's a consumer or an enterprise user, in deciding whether to bet their precious time and energy on your product.
One of the things that Kris, who's Figma's CTO, used to say a lot, which I really thought was insightful, is that Figma is a professional product for professional designers, and our users are betting their careers on the quality of our software. And so that's the bar that we need to hold ourselves to.
And I think, if you aspire to build a product that is actually valuable to the users who are using it, it has to have an insane amount of quality. And then there are, you know, more rudimentary things to think about. If you want to scale to a lot of users without having a lot of humans behind the scenes answering support tickets, fixing bugs, et cetera, quality helps you do that.
So you're literally more efficient. And then of course, if you want to expand within an organization, if your product is low quality, it's gonna be harder for more teams to adopt it. The better the quality is, the more people will want to throw things into the product.
Brett: How would you teach someone how to recruit well, or how would you explain what excellent recruiting looks like, outside of just that it's not transactional, that it's oftentimes months or years of authentic relationship building? What else would you say on the topic?
Ankur: I think recruiting is one of those sort of trades that you need to learn by working as an apprentice with someone who's really good at it. For me, that was Nikita. Nikita was exceptional at recruiting very, very unique talent that can work on relational databases and, for some reason, wants to do that at a startup.
And a lot of my preconceived intuitions about how to do it well were just wrong. Kind of a silly thing, but I remember once almost begging a candidate to join us, 'cause we needed his skillset so badly, and Nikita let me do it, but I think he knew that it was the wrong thing to do.
And it definitely didn't end up working out. I think we left a poor impression on the candidate about the viability of the company, and so on. But one is, I think you should just try to find someone who is successful at recruiting and work with them. The second thing, I think, is that you have to sort of convince yourself that it's gonna take time.
It's similar in sales as well. If your product isn't selling for some reason, then it's almost always pipeline, and if you try to invest in pipeline, it takes time. And so I think, zooming out, if you're a founder and maybe you're starting a company and you're like, yeah, I gotta get three or four people working on it, I think accepting that it might take you a year or two to find those three or four people, and trying to design around that, is a little bit better than trying to rush into it.
And so you can almost take that as like a rule that may not satisfy your intuition, but you should just accept as true. And if you do, it might work out better for you. And then the third thing, this is something that Elad is actually really bullish on as well, and we implemented really early at Braintrust, but I think it's actually important to have a rigorous and challenging interview, even for people that you know are good.
Manu, my brother, was the first person to join Braintrust, and he got interviewed like crazy as well. And I think it's important, partly because, if you think of recruiting as a relationship thing and a long-term thing, sometimes you convince yourself that someone's really good, or you might overlook someone else, just based on how much time you've spent with them.
And if you have a really hard interview (and I mean hard in an accurate, appropriately challenging way, not a sort of bullying way), then I think it allows you to separate the act of relationship building and getting someone to the table from maintaining and building a high-performance team internally.
I've messed that up a bunch of times, by the way, by ignoring interview feedback or back-channel feedback, or whatever it is. And I think probably 90% of the time I've regretted it. So I think that's a really important element of it. And at Braintrust, for example, I think Manu and I probably spent three or four days early on just working on the interview questions that we still use for the technical interview.
I think it's just something that's really important.
Brett: So what's the founding story of your first company?
Ankur: So after five and a half years at MemSQL, I kind of asked myself this question every six months or so: am I learning a lot? And for quite a while, five and a half years, I felt like the answer was emphatically yes. And at some point, the answer started to become no.
And it's through no fault of anyone there. I love the team and, as you can probably tell, I still think really highly of them and stay in touch with them. But I wanted to push myself to do something harder. And I reflect on that and think, that was great, but I honestly don't think that's a good reason to start a company.
And now I try to discourage people from using that framework, but it doesn't always work. That's what led me to leave. I took some time off. Actually, Elad gave me this great advice. I met him right around then, and he said, you should take six months off, because the next time you have this opportunity, you might not be able to.
You know, you might be married, have kids, all this stuff. We don't have kids yet, but it turns out that on the last day of the six months off, I met Elena, who's now my wife. And this time, while starting Braintrust, I had zero days off between my previous job and Braintrust. So as usual, I think Elad was right.
So yeah, I took six months off, kind of reset, and then I did something which I honestly don't think any founder should do. We're talking about product-market fit; this is almost a surefire way to prevent yourself from finding product-market fit. But I sat down and I said, okay, I wanna start a company.
I understand this type of technology really well; let me figure out what kind of company to start. You and I were kind of chatting about this earlier, and one of the things that I didn't realize from my time at MemSQL, I think partly because we revered venture capitalists and their thinking and choosing abilities so much, is how easy it would be to raise money. And specifically, I didn't realize that raising money was no indication of the quality of your idea, or of product-market fit, or of your chances of finding product-market fit.
So I kind of fell into all of those traps. And I thought, at this point, five years before ChatGPT and all this other stuff came out and AI actually started working, I thought: okay, great, AI now works (it definitely did not), and how people use data in the enterprise is gonna completely change because of AI, and databases are going to evolve to be AI-centric query tools.
And that was the idea behind Impira early on.
Brett: Share more about the couple of mistakes you made. Why did you make them?
Ankur: As an engineer, you are very used to solving problems that are in your control. And by that I mean, what comes by your desk is potentially a very ambiguous, technically challenging, or maybe impossible problem to solve. But the difference between you having the problem and solving it is literally typing keystrokes. A 10,000-line PR is a large change, which (well, nowadays maybe it's not many keystrokes, but even back then) might be a hundred thousand keystrokes or something. And if you knew exactly the keystrokes to type, that's not that much time, a few hours, to just type them. So this is entirely within your control. And I think in some ways that's great. As a founder, you don't feel that you're burdened by whether or not something could be solved. You feel like, if you do wanna solve something, you can solve it. But there's a key thing that you fail to learn.
And I think that sales or go-to-market founders often have this intuition earlier than engineering founders do: markets, and what people actually want, are in many ways out of your control. And I just had no appreciation for that. So my bias going into Impira was: if I come up with something that is technically challenging to solve and seems like a good idea, then people will want it.
And I think the reality is you need to really understand, and time, what people want and don't have, and couple that with what you could potentially build. But I was just completely ignorant of the fact that the idea of a market, and what it might actually want, exists. And, you know, partly you could say the MemSQL thing didn't help.
Like, databases are a big market. They're well understood. And Eric and Nikita figured out the timing thing, I think, quite well. So I took that for granted.
Brett: And so what did the first year of that company look like?
Ankur: We went through a bunch of hypotheses. And another trap that we fell into, and I think a lot of founders do, because founders are often charismatic people: you can convince a few people, especially ones you meet through your network, to try out your thing. And so we built an amorphous tool for querying images and videos, and then eventually documents, with AI.
And we actually got people using it and paying us for it. But the problem is that they were primarily paying us out of interest in us, and to have us help them solve problems. There was very little repeatability across what they were asking for. I mentioned being paranoid about software quality; I think in the world of early product building, you actually need to be ultra paranoid about repeatability. And you need to assume, you know, give yourself a little bit of credit and assume that you're somewhat charismatic or entertaining or whatever it is,
and say: hey, this person seems interested in my product, but I actually wanna be really paranoid about that and assume that, despite the fact that they're interested, they're interested for the wrong reason. And challenge yourself to really think about that. I didn't do any of that stuff. So people expressed interest in our product, and we'd just go, oh yeah, sure, we can modify the product to do exactly what you need.
And over time that accumulated into a bunch of customers, you know, a few million dollars in ARR, but very little repeatability across what they were doing. So when we tried to bring on a sales team and scale that, their ability to scale the success from those early users into the n-plus-first user was very challenging.
Brett: It's such an interesting, subtle thing that you're highlighting. We sort of call this weak product-market fit, right? In the sense that you have customers, and they're paying you. If you had no customers and no one was paying you, the world is telling you you're doing something wrong.
Here, it's kind of working. There are some customers, but there's very little repeatability. That's one of the biggest traps, I think, because you can spend years pushing down that path, as opposed to if nobody cares. I'd almost rather have no customers.
How does this ladder into what ended up becoming Braintrust?
Ankur: Yeah. So Braintrust is, in many ways, very different than Impira. I wasn't thinking about starting a company. I was leading the AI team at Figma, and I was fresh out of an acquisition and fairly tired, and, you know, a lot of other things. But what happened is, at Impira, when we started using language models, we ran into this problem around evals.
We had customers that were using us for invoices and customers that were using us for bank statements. And we might update the model, or update some prompts, and improve the invoices but make the bank statements worse. And obviously that didn't fly with our financial customers, so we had to get really good at actually measuring these things during the development process, and not letting customers find out about them, again in the spirit of building a product that doesn't suck.
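The core of the problem he's describing is measuring quality per segment, so an "improvement" can't silently regress one customer's use case. A minimal sketch of that idea in Python (the exact-match scorer, the stand-in "models", and the dataset are all hypothetical, not Braintrust's actual API):

```python
# Per-category eval for a document-extraction model. The "models" here are
# just lookup tables and the scorer is exact match; all of it is an
# illustrative stand-in, not real Braintrust code.

def exact_match(predicted, expected):
    return 1.0 if predicted == expected else 0.0

def evaluate(model, dataset):
    """Return the average score per document category."""
    totals, counts = {}, {}
    for example in dataset:
        score = exact_match(model(example["input"]), example["expected"])
        cat = example["category"]
        totals[cat] = totals.get(cat, 0.0) + score
        counts[cat] = counts.get(cat, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

dataset = [
    {"category": "invoice", "input": "inv-1", "expected": "100"},
    {"category": "invoice", "input": "inv-2", "expected": "250"},
    {"category": "bank_statement", "input": "stmt-1", "expected": "900"},
]

# The "old model" gets one invoice wrong; the "new model" fixes invoices
# but breaks bank statements.
old_model = {"inv-1": "100", "inv-2": "wrong", "stmt-1": "900"}.get
new_model = {"inv-1": "100", "inv-2": "250", "stmt-1": "wrong"}.get

old_scores = evaluate(old_model, dataset)
new_scores = evaluate(new_model, dataset)

# Flag any category that got worse, before a customer finds out.
regressions = [c for c in old_scores if new_scores[c] < old_scores[c]]
```

The point of the sketch: a single aggregate score (2 of 3 correct, before and after) would hide the swap entirely; only the per-category breakdown shows that bank statements broke.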
And at Figma we had exactly the same issues. I think you all actually just released an article about how Figma thinks about evals; a lot of the foundation for that was sort of set at that point. Basically exactly the same problems, and we built a lot of the same internal tooling. I was talking to Elad, who, speaking of non-transactional recruiters, is the one person who would just ping me, and we'd chat every week or two and just hang out, talk about random stuff. And at some point we were talking about parallels between the two companies.
And I was like, yeah, this eval thing is really annoying. I hate it. I had to build internal tooling twice. And he was like, huh, that's really interesting. You built the same tooling at Impira and Figma, and, you know, when you were doing Impira, you were like one of three people working on AI stuff.
Now a lot of companies care about that. And the fact that you had to build the same tooling twice maybe means that other people are gonna have to build it too.
And so we actually thought about incubating this as kind of a side project, and we talked to a bunch of companies. One of the things we did was, we were very skeptical that this would be a good idea, but we wrote down the conditions that would have to be true for it to be a good idea.
So we actually wrote down a list of like 50 companies that were somewhat ahead of the curve, working on AI stuff at that point, companies like Notion, Airtable, Zapier, Coda, Instacart, all of whom were some of our first customers. And we wrote down the company, the person who was leading AI stuff, and the founders.
And we were like, okay, let's talk to a bunch of these people. Obviously Elad's network is incredible; mine is okay as well. So we were able to get in front of all these people and just do an open-ended user interview. And we wrote down a bunch of what we hoped were non-leading questions about the biggest challenges they were having.
And we wanted to just assess this hypothesis, and it was fun. I feel like we were almost antagonistic towards the idea at first, 'cause we didn't wanna believe that it was a real idea. Maybe Elad did; I certainly didn't. And so we asked a bunch of people these questions, and there were some really fun learnings.
Like, one of them, probably born of envy from building a closed-source tool at MemSQL: I immediately assumed that people wanted an open-source solution to this problem. So one of our questions early on was, would you want to consume this as an open-source thing or not?
And we actually heard early on, from people, an allergic no reaction to that. They were like, hey, we're using open-source stuff and it's super brittle and it breaks all the time, and evals suck and observability sucks, and we just don't wanna deal with this. So we don't want to use an open-source thing.
We just want to use something that we can install, and it works. And I thought that was really interesting, 'cause if we hadn't done that, my bias would've been that we should make this open source. But we learned, pretty consistently, that people didn't want that. And we talked to a few people.
And, first of all, everyone said exactly the same thing: evals were the main problem they were having, which is what I experienced at Figma as well. And the second thing was actually triggered by Zapier. Bryan, the CTO, sent us an email after the user interview, and he was like, hey guys, I know you were just doing a user interview, but, dot dot dot, are you actually fixing this?
Because it's becoming a really serious problem for us, and we need something to help. I just created a shared Slack channel; we'll just tell you what to build. And if you recall, Zapier was one of the first companies, they've been very ahead of the curve the whole time, so they were one of the first companies to actually ship something.
And it was just fun. We built a prototype, and five days later the Zapier team started using it. And the prototype was terrible. It was so ugly, it barely ran. I won't even bore you with the gory details, but people were using it. And I remember early on, engineers at Zapier would take screenshots of parts of the UI and annotate them with, I wish I could do this,
I wish I could do that. You probably know this very well, but that just doesn't happen very often. David, who's now at Figma, by the way, but was at Coda at the time, was another person who jumped on using the product. So Coda started using us.
They were complaining about the same things, asking for the same solutions. I think the one thing they asked us for that was a little bit different was a prompt playground. It had been the most requested thing until we shipped it, and as soon as we built it, everyone started using it.
So we started to see those dynamics. And, you know, I wasn't looking to start a company, but this just doesn't happen very often. And it felt like an amazing kernel of something that could be really great. The other thing is, I just really enjoy spending time with Elad.
And that summer I was hanging out with Manu, who's my brother. We were at a wedding in Switzerland, kind of joking about how we don't get to spend as much time together as we'd like, and when we do, all we talk about is computer science. And whether it's my parents or whoever's around us when we're doing that, it's quite irritating to them, because that's also time that Manu and I are spending with other people, and they don't want to talk about computer science.
So we were like, you know (Manu had worked at Neuro for, I think, six years at that point), well, we're not spending enough time together, and when we spend time together, we like talking about computer science, so we should probably just work together on whatever any of us does next.
And so that kind of lined up, and that was the kernel of Braintrust.
Brett: As you were explaining the first couple months of working on what then became Braintrust, one of the interesting things is it sounds like you were default skeptical, that in a lot of ways you were looking for reasons not to start the company, which seems like the opposite of your first company, and I think it's the opposite of a lot of Silicon Valley wisdom.
Great founders have this vision for the future, and they'll do anything to make it happen, and they're unwavering in their conviction for whatever the idea is. Maybe you could reflect a little bit on the value that you found in being default skeptical, that maybe has helped the company work so well.
Ankur: I think every good founder, and honestly every good investor that I know, has this strange dichotomy between being paranoid, skeptical, negative, pessimistic, whatever, and hugely optimistic. I certainly experience this, and I wouldn't say I've perfected the formula for myself, but it's evolved over time.
And I think there are certain things that are outside of your control that you should be skeptical about until you have scientific evidence that a hypothesis is true or false, and there are certain things that are in your control that I think you should be very optimistic about and have a lot of conviction about.
In the case of Braintrust, I was very skeptical about the market. I was very skeptical about whether people would think about evals. Honestly, I was pretty skeptical of myself. I had just come through an acquisition, and it wasn't the best acquisition in the world.
You know, I was stressed and tired. I remember I was talking to Elad at one point, like, should we hire a CEO early on, or find a co-founder to be a CEO? And he was like, no, you've already been through the pain of doing this. I think you should do it.
Like, okay.
Brett: Easy for him to say.
Ankur: Yeah,
yeah, yeah. But that's just an example. I didn't really have any self-confidence that I was a magnificent CEO; I still wouldn't put that label on myself. But at the same time, there were certain things I was very confident of. One of them is building software that involves crunching data. I also had a lot of belief in the quality and value of the network that Elad and I had in that tech community. I think it's just about what you have conviction in.
Brett: What has building Braintrust thus far taught you about what makes a good market?
Ankur: You know, there's a few things. Early on, at both Impira and MemSQL, I think there were times when our go-to-market was way more sophisticated than our product. And by that I mean we had great sales talent, we had a sales process, we had all these website metrics, and if anyone at all came to our front door, we'd be on top of that opportunity.
At Braintrust, in the spirit of skepticism, I wanted to build a terrible go-to-market motion early on, and make Braintrust only successful if the product was so good that, despite us being grossly incompetent at selling and marketing it, it would somehow take off.
That meant that when the first few people started using the product, we didn't talk to them at all about pricing. I didn't talk to anyone about pricing until they started putting us in production and said, hey, I have this thing in production now, there's liability; I need to do business with some kind of commercial entity to be able to continue working with you. Can we please pay you?
It's kinda interesting: we started in August, and by November this had happened with the first three customers that we had. It's sort of continued from there. Now we are building a really world-class sales and marketing team, so it's a little different, but that foundation established a really high bar for the product. It also made weird things happen. Most of last year we just had random companies, including household names, coming to Braintrust. Maybe they were already using the product or signing up for it, their friend was using it, they heard about it on Twitter. I don't post that much on Twitter; it just sort of happens. People just came to our front door and started using the product, and we exceeded every sales target we could come up with throughout the entire year. But it was all organically inbound, and people always wanted the same thing.
At MemSQL and Impira, I remember when I would meet a cool customer, I would spend a few days before the meeting preparing a demo that was extremely specific to them. So if you're talking to Instacart, you might prepare a grocery app demo or something. At Braintrust, first of all, we just had so many of these that we couldn't do that, and second, I would just show up and show them some generic use case that was what everyone was doing. And people would literally point at the screen and say, I need exactly that, now.
That was a very, very different experience than Impira and MemSQL. I have spent almost no energy trying to convince anyone that they need our product, and I think that is a pretty clear sign of product-market fit.
Brett: One of the most important points I think you're making is that the problem is large, urgent, and repeatable across customers. So that's one setup. But is there anything else, in terms of what makes a good market, that comes to mind?
Ankur: Yeah, I think one of the things we benefit from is that our customers and our people really enjoy talking about the problem that we're solving, rather than the solution, or this button or that button. A good example in recent memory is a company like dbt. A lot of the early energy around dbt was, I'm a data engineer or data analyst, and I'm trying to figure out how to set up this warehouse thing at all. Who do I talk to? What are the watering holes where I can find other people experiencing this problem? And, oh yeah, great, there's this cool thing called dbt, which people use and which sort of helps you with that.
But it wasn't really about the tool; it was about the problem, which is wrangling data into this data warehouse thing, a hugely valuable outcome. In the same way, without us having to force or ask them, people are just interested in solving the problem around evals.
They find it intellectually fascinating on their own. I think that leads to a lot of organic knowledge sharing, a lot of people wanting to talk to us, but also to each other, about evals. And that creates a very rich community, where if you have success with some customers, it efficiently translates into success with other customers.
A lot of people will read a blog post that Simon and I wrote about how Notion uses Braintrust. Maybe they read it in March just to learn how they should be doing evals, and then in July or so they hit us up and say, hey, we read the blog post, we sort of implemented this, but now we're ready to take it to the next level.
I think that's a pretty exciting property of the market that we're in. And I think it's an important one.
Brett: You talked about this kind of janky prototype you built before you fully committed to Braintrust.
What was the path between that product and what you thought was a first saleable product that you were willing to put your name behind? How did you figure out where to begin, given how expansive the problem space obviously is, which gives you a basically infinite roadmap?
Ankur: Yeah. One of the mistakes we made early at Impira was protecting our name, staying quiet until we felt like we had enough traction, and we did that for too long. If you consider two failure modes, one is that you die in obscurity, and the other is that you're overhyped and then fail. I actually don't know what most people would say, but I think there's only one right answer, which is that it's much better to be overhyped and then fail, obviously within the realms of morality and integrity. I learned that I don't wanna be an obscure product that no one cares about. I'd rather put my name out there immediately, start to create some buzz around what we're doing, and get that started.
So we started in August and launched publicly in September. TechCrunch covered us, we got a bunch of Twitter posts from our friends, and we got the engine going. I think that was the right move.
You know, the fact that people were already using the product, and these were very high-taste engineers at companies like Zapier, to me that was enough. Yeah, our product sucks, it can improve. The UI was terrible for a long time; I think we've really improved that over the last year. But it was providing value to these people, so why wouldn't it provide value to other people?
Brett: when you look at that first six months, the prototype that you built, when you were working on it part-time, all you did was just make it better and better day after day.
Ankur: Yeah. That's all I'm still doing, other than these two hours here today. That is my entire life.
Brett: Did you do that just by filing off all the rough edges, or were there really important phase shifts in the product in those first six months?
Ankur: Yeah.
So for the first six months we built very little interesting technology. And I remember talking to, I won't name any names, but VCs and such, and they'd ask, hey, what are you building here that's durable? What's your moat? What's gonna stop people who are now writing AI code from just rebuilding Braintrust? How come customers aren't gonna build this internally? And I just didn't care, 'cause I was talking to customers and they didn't care. So why would that matter?
What started to happen last summer, really interestingly, is that Notion, who was way ahead of everyone in terms of real adoption of their AI products, really pushing the boundaries of the latest models, and who also had more people working on AI, was searching and trying to dig through the logs they had in Braintrust in a fairly unique way. Normally when you're working with observability products, you do very structured queries, like user ID equals Brett or whatever.
In AI, primarily because there's so much text floating around, you have these enormous rows. Every span in Braintrust land, which is like a row of something that you'd log in Braintrust, is 50 kilobytes on average; in traditional observability it's 900 bytes. So you have these very fat rows with lots of text and big prompts, and people, understandably, just wanna search for stuff.
I'll get to the end of the story in a second, but there's a bunch of stuff about users in Japan that the folks at Notion ended up discovering because they were able to search for things that were very hard to find before. In the spirit of moving quickly, we were using a medley of preexisting database technologies to make Braintrust work.
But none of the things we were using were actually able to solve this full-text search problem. Notion was already struggling with it at their current scale and growing like crazy. So we thought, oh my God, if we don't really think about 100x this scale and how to handle it, we're not gonna be able to service one of our most important customers. We really had to figure out a solution. What's interesting is we kept kicking the can and collecting information, and at some point we really understood the use case. Manu and I had learned a lot of lessons about how not to do this the dumb way.
So we first built a benchmark, and we made the benchmark about a hundred times more stressful than what Notion does. We tried all the stuff we were using, like Postgres and ClickHouse, and we also benchmarked Snowflake and Redshift and so on. Then we benchmarked a few other, more interesting libraries and technologies.
There's one library called Tantivy, which is a Rust reimplementation of the Lucene search index, a very popular and well-regarded search technology; Lucene is also the technology behind Elasticsearch. And that was the only piece of code capable of handling this 100x use case.
From a search standpoint, nothing else even came close. It's understandable, but staggering to me, how bad relational databases are at full-text search. They're just awful. So we saw this, and we're faced with this problem, and we're like, okay. We're a year into the company at this point.
Our philosophy up to now had been not to think about moats, not to say, hey, let's build this big piece of technology and that's our thing. Let's just focus on what customers want. But we just didn't think there was any way to help Notion, and now other people were starting to hit this as well, without building a real piece of technology around this.
So we broke ground in October, with three of us working on it: myself, Manu, and another engineer, Austin, who was one of the first few engineers at Impira and before that was a physics PhD. Super sharp, really nice person. All three of us knew each other really well; I've known Manu for 30 years and Austin for the better part of a decade. We shipped it to Notion in January, and then we shipped it to everyone else in February. That was pretty awesome. First, it was awesome to work with a customer who pushed us in this interesting way to build novel tech, and then helped us ship the early versions of it and find issues.
It's not often that you find customers who you're able to partner with in this really cool way. But second, it doesn't usually take just two and a half months to build a database. One, we knew what we were doing; all three of us had been working on this kind of technology for quite some time.
And two, we had such a precise understanding of the workload we needed to make work. This thing is called Brainstorm. If you try to use it as a data warehouse replacement, you're not gonna have a good time. If you use it to solve what Braintrust needs to solve, it's extremely purpose-built and very good at it.
We basically had no margin for error, but we were also extremely confident in exactly what we needed to make work well, 'cause we had a year of accumulated experience actually suffering through trying to solve the problem without Brainstorm.
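The gap Ankur describes between relational scans and a purpose-built search index comes down to the inverted index at the heart of Lucene and Tantivy: a query only touches the postings for its tokens instead of re-reading every fat 50 KB row. As a hedged illustration only (a toy Python sketch of the general idea, not Brainstorm's or Tantivy's actual design; the log spans below are made up):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the set of doc ids containing it, so a query
    touches only the postings lists for its tokens, not every row."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return doc ids containing every query token (AND semantics)."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    result = set(index.get(tokens[0], set()))
    for token in tokens[1:]:
        result &= index.get(token, set())
    return result

def naive_scan(docs, query):
    """The 'relational LIKE' analogue: re-tokenize every row per query."""
    tokens = query.lower().split()
    return {doc_id for doc_id, text in docs.items()
            if all(t in text.lower().split() for t in tokens)}

# Hypothetical log spans, loosely modeled on the Japan anecdote.
spans = {
    1: "user reported timeout in Japan region",
    2: "prompt completed successfully",
    3: "timeout retry for Japan user",
}
index = build_inverted_index(spans)
```

Both paths return the same answer; the difference is that the index does work proportional to the matching postings, while the scan does work proportional to the total text, which is why purpose-built search engines pull ahead at 100x scale.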
Brett: Did you think at all about how fluid what's happening in AI is today? Were you going to solve an important problem that would no longer be important in two years?
Ankur: Oh, for sure. I think there are maybe two things I would offer there. The first is that I have the benefit of having worked on AI a lot longer than the ChatGPT big bang, if you will. There are a number of things we used to do back in the day, like evals, that we still do.
So a very simple framework is that at Braintrust we roughly build the same set of solutions to the same set of problems that we previously experienced at Impira and that people still experience today. That's a reasonably good proxy for the types of problems people will experience tomorrow.
It's not perfect, but it's actually served us really well. The other thing we do is try to be very explicit about what we think are long-term bets and what we think are short-term bets, and we don't reject the short-term bets. A good example: aside from knowing the volume and shape of the data, I think it's a pretty obvious thing to believe that people are gonna log more AI-related data and that prompts are gonna continue to get bigger.
So investing in technology that makes that easy, scalable, and cheap for customers is probably gonna continue to be relevant. Maybe something that's less obvious: we built this prompt playground early on in Braintrust, and I was actually not a big fan of the idea.
As a software engineer, I like using Cursor, using my IDE or Vim or whatever, and having everything source-controlled. But we noticed that there are a lot of product people and support people, and we even have some doctors, writing prompts in the playground, who obviously prefer that experience to using an IDE and a command line.
And we've actually seen a lot of developers really embrace the playground and wanna use it as well, because it's just a much faster way of iterating on AI in the product; now you can run evals directly in the playground too. We heard from a bunch of people early on, and it felt very uncomfortable when they asked for this, but people would say, I really want your playground to be like an IDE for working on prompts and AI features more generally.
How do I do RAG in the playground? How do I do version control in the playground? I want it to be a full IDE experience. Again, this made me quite uncomfortable until I literally saw how people used it. We started to realize that we can't think of the playground as a half-assed extension in the product that you just use to test stuff.
It can't be ephemeral. So our playground is actually stateful: if you write something and then refresh the page or share it with a colleague, it's still there, and it has Figma-style multiplayer collaboration as well. We realized that people really do want a serious, professional tool; maybe not every bell and whistle of an IDE, but something that's as good as an IDE for actually working on prompts and testing them.
So we made the explicit decision, I think about eight months ago, to treat the playground like a real IDE and engineer it with that quality bar in mind. We just shipped an improvement which we didn't publicize; from an external standpoint, we've framed it as fixing a big bug, or a series of bugs.
We spent a couple of months re-engineering the React state in the playground, because we had accumulated a lot of technical debt from iterating on things that keep changing, like model parameters and so on. We learned from that and said, okay, let's zoom out and make this significantly more robust.
We shipped that, and I think that foundation will last us for a long time. We also know that's an important bet. At the same time, there are things that are changing constantly. The word agent means something different today than it did yesterday, and it'll probably mean something different tomorrow.
I think it's very important that when people are topically zoomed into whatever the AI du jour is, they feel like Braintrust is an accessible place to explore it. The durability with which we engineer some of those things is different. We have cookbooks, which are tutorials that make it very easy to try these things out.
We're constantly iterating on what we highlight or feature in the UI versus all the things that we collect. Certain things become more relevant over time. For example, we've collected cached token counts for a long time, but actually thinking about the impact of cached tokens on pricing has become a very topical issue for a lot of users recently.
So we've put in some effort, very specific to that problem, to make the experience of tracking cached tokens and looking at the pricing implications very easy and front and center. But that may not be something people care about in six months, if there's yet another dramatic change in how models actually work.
I'll give you an example. Take the pattern OpenAI started introducing, which others are considering but maybe aren't as far along on yet, which is running tools in the background of models. Let's say that becomes a big thing and there are new pricing considerations for it.
That might be the thing someone really cares about, and when they go to our monitoring page, that might be the first thing they wanna see. So I think we just think about these as different buckets.
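The cached-token pricing point above can be made concrete with a small sketch. The per-million-token rates below are made-up placeholders, not any provider's real prices; the point is only the shape of the calculation, where the cached portion of the input is billed at a discount:

```python
def request_cost(input_tokens, cached_tokens, output_tokens,
                 input_price, cached_price, output_price):
    """Cost of one model call in dollars. `cached_tokens` is the portion
    of `input_tokens` served from the prompt cache at a discounted rate.
    All prices are per million tokens."""
    uncached = input_tokens - cached_tokens
    return (uncached * input_price
            + cached_tokens * cached_price
            + output_tokens * output_price) / 1_000_000

# Hypothetical rates: $3/M input, $0.30/M cached input, $15/M output.
# A 50k-token prompt with no cache hits vs. 45k tokens served from cache.
full = request_cost(50_000, 0, 1_000, 3.0, 0.30, 15.0)
mostly_cached = request_cost(50_000, 45_000, 1_000, 3.0, 0.30, 15.0)
```

With these assumed rates, the mostly-cached call costs roughly a quarter of the uncached one, which is why surfacing cached token counts next to pricing matters to users with big, repeated prompts.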
Brett: Is there anything interesting to share about your first few customers that were not at the cutting edge of Silicon Valley, as you started to scale up?
Ankur: For sure. Probably the most interesting thing, and we've done this quite differently than we did before, is that we've been very true to what our product is. We say our product is for product engineering teams that are trying to incorporate AI into their core products and services, and that description excluded a lot of traditional enterprises early on. We were okay with that. Our hypothesis made sense to us: we think that eventually the way the Notions and Zapiers of the world are thinking about how to use AI in their core products, and the way the humans working on this think about engineering, that will eventually be you as well.
And if it's not you right now, we totally get it, but we're gonna stay focused on this core ICP. What's interesting is that a lot of those enterprises, many of whom we just stayed in touch with and talked about the problem with, because it's mutually valuable, have actually come around now. They're like, oh yeah, hey, now we're actually building this into our external-facing app, or we're building it into the internal tool that all of our analysts use, or something like that.
And we do actually want to build the way those companies are building; we find it to be the right way. So we've been somewhat firm about what we think the right workflow is for AI engineering, and it's been interesting to see traditional enterprises subscribe to that over time. What's obviously different about those companies is that there are a bunch of security and deployment quirks. So guess what? Braintrust now has amazing support for Azure. We didn't have that before; not a lot of the cool tech companies are using Azure yet.
Another example is that a lot of these companies are using PDFs. For a lot of the cool Silicon Valley companies, their multimodal workloads are mostly images, and nowadays videos; a lot of the more traditional companies have a lot of PDFs. So our support for PDFs is now dramatically better than it was before, informed by these users.
But I think those are things that feel very comfortable within the type of repeatable product that we're trying to build. On the other hand, if we had polled those customers and they'd said, yeah, Braintrust seems great, but we don't actually do any AI engineering, we're actually training our own LLMs, or we think we're only going to solve the fine-tuning problem, how do you solve fine-tuning and custom inference? That would be a very different transformation of, or pull away from, the core product we're building than what we're experiencing.
Brett: One of the things that I think you all have done really well as this category has emerged is really tie yourselves and the companies that are building at the cutting edge together. and a lot of that is they are the ones that are always tweeting about the great stuff that you're doing. Obviously, I assume a part of that is that they love the team and love the product that you've built.
Is there anything else that has gone on that has allowed you to associate so closely with those companies? Or have you thought a lot about: if we wanna win the category and build the trusted brand, the number one way is to get all the people at the forefront to say that you're the best? Any thoughts on that?
Ankur: Honestly, I haven't overthought it. I feel very grateful that people talk about the company that way. But I think that at the end of the day, all those companies that say nice things about us, we've been through some hard stuff together over time. They've broken our product.
We break all of their products too; we use the alpha versions of all of those companies' AI products. We have lunch and dinner together. Many of the relationships are longer than the lifetime of Braintrust, and I think there's genuine mutual friendship and trust that we've built with each other over time.
And it is, in a very simple and literal way, gratifying to share and see people that you care about succeed. For example, if you hire a new investor at First Round, or we hire a new employee, and they're doing a really good job, as a human you feel a sense of pride sharing that with the world.
Actually, I think the culture you've built here is a really great example of that, with newer investors. And I think that if you're really focused on a shared set of problems that you all care about, and you work really hard together and push through those challenges, then on the other side of it you have the opportunity to create that mutual interest and goodwill around the product.
It's definitely not something we try to manufacture. I almost sometimes wish I did more of that, but it's just not in my personal nature. I feel like my purpose in many ways is to work for the customers that we have the opportunity to work with, and to try really, really hard to deliver a great product to them, and just hope that the act of doing that, and of caring about the success of their projects and careers beyond the keystrokes they type into our product, translates into commercial success. I might be wrong, but I'd rather die trying that way than do it any other way.
Brett: Maybe, as we wrap up and you zoom out on these first few years of company building, are there any other things that you think have been causal to the company's success that we haven't talked about, or key decisions you've made that have turned out to be correct, that other people who are getting going might find some inspiration in?
Ankur: Being extremely clear about what bets you're taking is just very important. We've talked a lot about how we bet early on our customers, meaning we were very specific about who we worked with, and we bet that if we essentially listened to exactly what they told us to do, that would translate to what other people are doing.
And that is something you could either choose to be very skeptical about or choose to bet on. In this case, we bet on it, and I feel like that gave us permission to have a lot of conviction in it and not question it constantly. On the other hand, we were very skeptical about whether this company should exist in the first place, very skeptical about whether we should build our own database technology at some point, and a number of other things.
I think it's good to be explicitly skeptical about those things as well. If you're very clear about which bets you're willing to have conviction on and which things you want to be more scientific about, not only does it help focus your own thinking, but it's also really helpful when you're recruiting.
When I talk to a candidate, they'll ask, what are the biggest risks of Braintrust? And I can tell them what I don't think the risks are: if data crunching or quality of UI is important to the success of this category, I think we're gonna win.
However, if AI is overhyped, if what the cool tech companies are doing is not the right pattern for building AI software, if there's some crazy change, there are a number of things one could name, those are our bets. And I think that makes it quite easy for candidates to decide whether it's a set of bets they believe in or not.
I also think that's important because sometimes we have a tendency to recruit people by convincing them to join, and then they do, and then they have a different set of values, or there's a bet that you believe in that they don't, and it doesn't end up working out. We've had very little of that at Braintrust, 'cause everyone has their cards on the table; everyone knows what they're getting into. We do these work trials, so you literally see all the issues that people have with the product, and either that motivates you or it doesn't. I think that really helps.
The other thing I'd say is that every first-time founder, I feel like, is either a sociopath or has empathy. The empathetic ones tend to listen to a lot of the things people ask of them, whether it's their time, or modifying the culture, or prioritizing things that are not actually critical to the success of the business. I don't think I'm a sociopath, and so at Impira I feel like we built a culture that almost regressed to the mean.
At Braintrust we've been much more specific about what our culture is and how we operate. For some people it just doesn't work. We have one meeting per week as a company, and that's it. If you're an engineer, designer, or product manager and you find it really important to meet frequently to accomplish your work, this is not a place you're going to enjoy.
On the other hand, if you're a designer who loves to code, or a product manager who likes to prototype, or an engineer who cares about product problems and doesn't wanna be told what to do all the time and just goes and builds stuff, it feels like an amazing place to work. I think it's very clearly a great place to work for those people, and not a great place for people who have a different but totally valid style of working.
It's not a mediocre place for both kinds of people to work, and that's been very important. For me personally, I can't work in meetings. I can participate in them, but they're not where work happens for me. So I cut off meetings at noon every day; I don't take any meetings after noon.
I could never have imagined doing that before, but it's been incredibly valuable, for us at Braintrust and for me personally, to just be able to focus in the afternoon. Stuff like that, I think, is very important.
Brett: When you think about those choices that you've made in terms of how the company behaves, is it just a reflection of how you like to work and it's encoded in the company, or
Ankur: I think that's the seed of it. When you have a really clear perspective early on, you start to attract people who share the fundamental values but maybe care about different things. I'll give you an example. I work every weekend, and I have since I joined MemSQL.
It almost just feels weird to me that I wouldn't work every weekend. At first, when I was a manager at like 22, I didn't understand why other people didn't like to work on weekends. I've learned over time that everyone's different and people recharge differently. So if you took an overly radical perspective, taking my personal style of working and forcing everyone to function that way, everyone would be working on weekends all the time. And I don't think that works. I mean, some companies have done it successfully, and I have nothing negative to say about those companies.
But personally, my perspective is that even if you're hiring a pretty good, well-rounded, skilled group of people that's only 40 or 50 people, it's very challenging to get everyone to work on weekends. So we don't; that's not been part of our culture, even since the beginning, even though it's something I personally do.
That doesn't mean people don't share the same values I do around hard work, independence, and autonomy. It's just an implementation detail specific to how I work that doesn't need to be part of the culture.
Brett: So how do you choose which of your tendencies get encoded in the company and which don't? Your personal preference is to end meetings at noon and then focus on the work, for example, and that seems to have become a company-wide behavior: work doesn't happen in meetings, we're not going to be a meeting-heavy company, and if you like meetings, go somewhere else. What's your own process for figuring out what gets encoded in Braintrust versus what's optional, or something you do on your own time?
Ankur: Yeah. The sad answer is trial and error. Someone like Brian Chesky, an amazing culture steward whose writing on culture I've benefited from, might have a really great answer to this question. But for me, it's trial and error.
I've mishired and misfired for a very long time now, maybe longer than I'd be willing to admit, working in the Valley, and I've just been learning about these things. If I rewind to when I was at MemSQL and you asked the same question, what would I have written down as the cultural stuff?
I don't know. Working on weekends might actually have been on the list. But I've learned that there are a lot of people who are insanely competent, whom I enjoy working with, and who are very productive, who just don't do that.
Brett: What is misfiring?
Ankur: I think the obvious answer is firing too late.
But there are definitely cases where you fire the wrong person.
Brett: How do you know if you fire the wrong person?
Ankur: You can't always know, but I'll give you an obvious example. Sometimes you hire people, and this especially happens at startups cross-functionally. I'm not going to name departments because I don't want to alienate any particular group, but in certain departments, when you hire the first or second person, they tend to conflict with people in the other groups.
What often happens, and this is usually in large part your fault as a leader, is that you start to get these factions that conflict with each other. That's very natural, in some sense unavoidable, and something you have to work through. But what might happen is that one person starts underperforming, and ultimately, whether they quit or you let them go, by the end of it they're underperforming.
And maybe they were performing well before. By the way, people sometimes call this "scaling out of a startup": this person was good for the early days but not later. I don't know how much I believe that. But long story short, someone's not performing well, you end up parting ways with them, and then you realize that the person who was the contributor to the drama isn't actually doing that well either.
Maybe they pick up a new conflict, maybe they stop performing well, or maybe they just quit after some time because they're not happy. Then you have to do this mental ablation study: if I had removed this person first, would the other person have scaled as the startup grew, or continued to be productive?
Again, you can't know for sure, but those are the kinds of situations where I think I've felt it.
Brett: How do you define product-market fit?
Ankur: I feel like you guys coined that original framing with Superhuman that really took off, the one where some percentage of people, 40%, say you can't take this away. And I think this is another Elad-ism: when you feel it, you know it, and if you haven't felt it, you can't understand what it is, and therefore it's hard to feel. I actually think that's probably the truest representation of product-market fit. At Braintrust, we can't stop people from using the product, and I almost find that annoying sometimes. Like, oh my God, our infra is on fire.
Can you please just not shove all this stuff into Braintrust? Can you run fewer experiments? And then you zoom out and think about it, and you're like, oh wait, it's actually a very good thing that people are trying to do this much with the product all the time.
To me, that is the clearest signal of it.
Brett: People just can't get enough of the product?
Ankur: Yeah. Despite you, or despite the quality of the product, or whatever it is, people are magnetically pulled into using it constantly. Whether it's existing users or new ones, they almost can't help themselves.
And part of this, by the way, is not convincing people to use the product.
If you have to convince people, it's super unlikely that you have product-market fit. Whereas if, without any convincing, people just find their way to the product and then convince themselves very quickly, that's a pretty clear sign.
Brett: To wrap up, and it may be somebody you've already talked about: when you think about getting a company to product-market fit, and early company building, who's taught you the most, and was there a particular tangible thing they imparted to you?
Ankur: Honestly, there are two people I'd point to, and they were the first two people who invested in Braintrust and really helped us. The first is Alana. She grew up learning a lot about sales from her dad and then worked in product.
She's super technical, so if you're a nerd, you can communicate with her easily; it doesn't feel like you're talking to someone who's too salesy. But she deeply understands the importance of getting the right people, both customers and candidates, around a company to make it successful.
And she has very high taste for those things as well. I just don't think that comes naturally to a lot of people who have the skills or gifts to create a great product. The meta point here is that if you can create a great product, or you're very technical, you need to almost surrender, make yourself vulnerable, to people who are wired a little differently, who are wired around people or markets, and benefit from making them a really important part of your company.
The way Alana works with companies is so oriented around the people she gets around the company early on, and that's a very unnatural thing, especially for people like me who are super nerdy. So I think Alana is really exceptional. And then, I've probably mentioned him 45 times already, but the person I've learned the most from is definitely Elad.
At this point, I'd say it's impossible to describe how much I've learned from Elad. Literally everything, from "you should take six months off before trying to start a company" (which is how I ended up meeting my future wife) all the way through being super skeptical about company ideas.
One of the things Elad told me early on, which I found fascinating, is that companies are successful because of things and despite things. So if you see a company that's super successful, it's not always useful to emulate everything it does. In fact, you should be very skeptical of a lot of what a company does, because it may never have had to get good at those things to be a good company.
A classic example is when a company without product-market fit hires salespeople from extremely successful product-led companies and expects them to be good in that environment. That's not right; you actually want someone who worked at a company without product-market fit.
A great example of this is Ron from Databricks. I was talking to him before we hired Brian, our sales leader, and Brian would say the same thing. Brian was at a company with very little product-market fit before he went to Grafana; Ron was at a company without much product-market fit before he went to Databricks.
In both cases, it was like going from something very hard to sell to easy mode. But that's very counterintuitive, right? Databricks and Grafana didn't hire the person who did all the cool PLG stuff and expect them to do all the cool PLG stuff again. There are so many little things like that I've learned from Elad. If you have the opportunity to work with Elad, or Brett, or someone who's been around long enough to pick up a bunch of anecdotes and counterintuitive learnings, and you build a relationship where you trust them to the point that you suspend disbelief, it's immensely valuable.
The last thing I'll say is this: we talked a little earlier about the market for Braintrust. I was very skeptical of the market opportunity, partly because I didn't really want to admit to myself that it existed, but also just thinking about CI/CD and all this other stuff. Elad, though, was actually really bullish about it.
And I'd known Elad for seven years or so at that point. Every time Elad said, "Hey, I really believe in this thing," and it didn't make sense to me and I didn't listen, literally every time, I was wrong. So at this point I've rewired myself to think: if Elad is pushing on something and it doesn't make sense to me intuitively, Elad is probably right. And that's not something that happens overnight, right?
You have to know each other and build that kind of rapport and relationship.
Brett: Great place to end. Thanks for spending the time.
Ankur: Thanks for having me.