The Bard blueprint | Creating value, shipping fast, and advancing AI ethically | Jack Krawczyk (Google)
Episode 113


Jack Krawczyk is a Senior Director of Product at Google, building Bard. Bard is Google’s collaborative, conversational, and experimental AI tool that’s bridging the gap between humans and bots, while addressing ethical considerations around AI. After joining the project in 2020, Jack helped ship Bard in less than four years. Bard sources information directly from the web, and now enables users to inquire about and summarize YouTube videos.

In today’s episode, we discuss:



(00:00) Introduction

(02:17) Bard’s origin story

(03:54) Deciding on the application of Bard

(05:59) The ethical considerations around building Bard

(10:19) Why Bard launched to the public so early

(13:30) Risk-taking at big companies versus smaller ones

(16:20) Bard’s early user research

(21:21) Bard versus ChatGPT

(25:01) The cultural and product principles behind Bard

(30:56) Insight into Bard’s impressive development speed

(35:17) Deciding when to ship Bard

(41:41) Why Bard is different from other products Jack has built

(46:30) Evaluating Bard’s original spec

(48:02) Insight into Bard’s product roadmap

(56:00) The toughest challenges Bard has faced

(57:50) What’s special about team-building at Bard

(62:54) Addressing Bard’s negative press

(67:49) Advice for aspiring LLM companies

(69:15) Advice for non-LLM companies

(71:05) The biggest barriers to advancing AI

(75:45) How product people can use or build with AI

(77:24) How AI is changing product leadership

(79:20) People who had an outsized impact on Jack

[00:00:00] Brett: All right. Are you excited?

[00:00:02] Jack: I have never been more excited for an interview than I have this one right

[00:00:06] Brett: you really do look excited. Yeah.

[00:00:07] Jack: I'm smizing.

[00:00:08] Brett: I'll ask Bard to interpret that. Well, maybe we'll start with sort of the founding story of Bard and sort of how it came together, and use that as a jumping off point.

[00:00:17] Jack: My journey of being one of the initial members of the Bard team was when I first got to Google in 2020, we were experimenting with a technology that was then called Meena, one of the first language models that was available in the market. Meena became LaMDA. And what we were experimenting with was how do we get this very convincing-sounding text agent to enter into Google Assistant.

[00:00:47] And so people have been saying "Hey Google, X, Y, Z" for quite some time, and we were amazed by this technology, but concerned about its tendency to make things up, what people refer to as hallucinations. And as we were experimenting with the technology, it became clear this isn't a simple question-and-answer-oriented technology.

[00:01:12] It really is a way to help ideas come to life, to really help exploratory ideas manifest. It takes your imagination and expands it into something. And so in late 2022, it became clear to us that building this as a standalone web experience to chat with a language model directly was the right thing to do, and hence the beginning of Bard.

[00:01:38] Brett: What's interesting about sort of the enabling technology of Bard is it's one of those things that I could imagine you could do almost anything with. It could be woven into any product, which obviously, I'm sure, is something that's happening as well.

[00:01:50] But like, how did you actually figure out that this was the form factor that you wanted to lean into?

[00:01:56] Jack: So there are two ways to approach using language models, and they both stem from having a deep appreciation and respect and understanding for the technology. The byproduct of translating language to language, in terms of interpretation and understanding, guides you to a path of, how can I manifest ideas? And some of those ideas may get manifested like

[00:02:22] a specific task. Help me write this email. Help me write the initial draft of this document. And that's where you see language models inside of Google starting to manifest inside of products like Gmail and Google Docs, through a product that we call Duet AI. The challenge when you put a language model into an existing product is there's going to be a natural constraint to that.

[00:02:46] If all of a sudden I'm in Google Docs, and I want to ask for vacation ideas that I want to explore, or I'm trying to figure out what are some ways that I could debug this code that I have, things that you may come across over the span of a day, you have to start switching tools.

[00:03:05] By giving direct access to a language model, in addition to building language models inside of existing products, you get a more unconstrained environment where you can test that. And so that's where you've seen the approach of taking both direct access to a language model, Bard, and building language models into existing products, Duet AI and Search Generative Experience at Google, really start to take shape.

[00:03:26] Brett: What was the process to figure out what the first version of Bard would look and feel like? It's tricky because you could be working on the first version of that product for two or three years, let alone the fact that you did it so quickly. So what did that process look like?

[00:03:42] Jack: The first version of any product, you have to enjoy using yourself. I don't care whether you're a startup that's building a record label in your pocket. One of the first things that was ever uploaded to United Masters when I was working on it was artwork that I helped create with a colleague of mine, whose song it was.

[00:04:05] And we just went through the path of releasing music ourselves. Working with Bard is a very similar topic, which is the first prompt that got put in was by our team, trying to see how it feels, how it works. And the hardest part wasn't figuring out how it would respond in the sort of happy path. It's the, wow, these things can really get it wrong.

[00:04:32] So there are things that are obvious, or seemingly obvious, in early versions: you don't want a nuanced answer toward, how do I commit a crime? Look, to get a very proper, nuanced answer for that is going to be really hard. There's research that takes place. You have to take a step toward what we call adversarial safety risks and say no.

[00:04:52] The harder part are the inadvertent safety risks, things that aren't intentionally adversarial but that can make your product go in a vector that you don't want to go. How much Tylenol should I give to my kid when they have a fever? Very well-intentioned question, but you want to make sure that the answer that you provide is as tuned toward not hallucinating as possible.

[00:05:19] And so for early versions, and even to now, we think the risk of hallucination in that case is too high to get incorrect. And so we elect to do something that may frustrate one of our users, and we say, I'm a language model, I can't help with that. And that's the balance that you need to strike. And so we work through both testing and our own policies inside the company to determine what is the tolerable amount of responsibility risk to apply.

[00:05:50] Brett: What does the process look like to figure out all the various versions of things that, at this point, you sort of don't want to generate a response for, or don't want to generate certain types of responses to?

[00:06:02] Jack: The research around responsibility is changing quite frequently, but one of the things that's really emerged over the last one, two, three years has been this concept of red teaming: the idea that you have a tuned version of a language model that's ready to be used, and you want to make sure that you didn't miss anything beyond your initial policy guidelines.

[00:06:27] So similar to most products that talk about what their intended use is, Bard has that as well. But we get teams to work on it, from both inside and outside of the company, to ask questions, to prompt it, to try to guide it to say something. And what that helps us do is identify, oh, there's this corner

[00:06:50] that we may not have expected. And so one of those things might be like, give me so-and-so's address, and ways around it. So in the early days, which feels crazy to talk about the early days of language models, given we've barely hit a year of them being truly commercially available, you would see a lot of this topic and conversation around jailbreaking.

[00:07:16] How can I get the model to ignore its rules? And this wasn't just Bard. This was across the board. How can I get it to do anything? And those sorts of tasks became absolutely critical. And it's a bit of a pendulum, because you want to give people creative freedom. We talk about Bard as an insanely complex technology that solves a very simple task that's profound in our lives, which is bringing your ideas to life.

[00:07:43] Ultimately, its job is not to give you an answer. It's to give you a possible answer and allow you to bring an idea to life. And so it's a tricky thing to find where that right balance is struck.

[00:07:58] Brett: When you look at the end-to-end process to ship the first version of Bard, what allowed you to do it so quickly?

[00:08:07] Jack: Quick is a very relative term. The first version of the Transformer was written about in 2017. We had versions of Meena. The challenge around shipping technology like language models is that it creates very compelling text that's very confident, but very wrong. We have terms for people like that in our world; I don't know what age rating you have, but, you know, BS-ing. It's an interesting technology to roll out, because for so much of our lives we're used to computers being literal instruction followers, and now that they're possibility generators, you have to strike that line. So we had versions that I was testing in 2020, but finding that right interface, and also finding the right

[00:08:58] rhetoric out in the world to be accepting of this new technology, was absolutely critical. And so when you think about previous years' discussions of what are the risks, what are the fears of AI, these are critical steps in what we call the alignment process in AI: how do you get this technology to align with human values?

[00:09:20] And so the technology has been ready for quite a while. The manifestation takes more than just getting the technology there. It takes getting signals from your user research, from your market research, to suggest people are willing to engage with this convincing-sounding technology.

[00:09:37] Brett: What then were the indications that you were at this pivot point, that it did make sense to put it out into the world?

[00:09:44] Jack: I have to give credit to ChatGPT going out into the world and showing people there's more than just the risk of this technology. There's excitement that exists. The T in ChatGPT comes from the Transformer, which we've talked about quite a bit. Part of what's so exciting around this time is, I've been in Silicon Valley for 16 years now, and I'm feeling that excitement of, it's not just one company doing things that are amazing.

[00:10:15] There's a whole cohort of people that are doing great work, that are learning from one another, that are publishing their research, that are putting products out there whose sole job is to get feedback on how to make them more helpful. It's part of why we talk about Bard as an experiment at this point. We don't think that this technology is yet ready to be a fully baked product. It's getting closer every day, but this feeling, this aura of, hey, there's excitement, there are people that want to build, there are people that want to use it, there are people that are sharing their stories of how it's helping them. And those stories are critical for normalizing this technology, getting it to be more helpful in people's lives, getting it to be something that inverses from three quarters of the world still hasn't used a language-model-based technology, to three quarters of the world has used that technology.

[00:11:06] And that's part of the fun that this transition moment we're in, of launching these language models to the world, presents us.

[00:11:14] Brett: What did this whole experience sort of teach you about risk-taking in the context of a very large, very valuable company? And I think it's particularly interesting because you've worked at very small companies, and you've obviously worked at Google now multiple times.

[00:11:30] Jack: There's this amazing thing about Google specifically. After I left my first stint, I was in search of it, and I was so happy that it was still true when I came back. So my first day at Google was in September of 2007, and one of the core principles of the company is focus on the user and all else will follow.

[00:11:52] And what that manifests is a culture of curiosity, this sense of, what can we learn about the world that helps us achieve our mission? I've worked at many companies across Silicon Valley, and I've never been at one where I can recite the mission statement: organize the world's information and make it universally accessible and useful.

[00:12:21] That curiosity guides us: we're out to achieve something bigger than the current products that people have to use today, whether it's ones that we've built or somebody else has built. But we genuinely believe that humanity will be in a better place when you have access to this information at your fingertips.

[00:12:44] And that has been true on Bard, and it's true across Google, that there is this curiosity of, wait, this fundamental truth of computing that has been true my entire life is no longer true. This truth that computers exist to do things for you. Give me the distance between point A and point B. All right, here's the answer.

[00:13:06] Calculate this column of numbers. All right, here's the answer. Now following instructions is not computing's sole existence. Computing can now do things with you. And so the curiosity that comes in, it's almost like, you know, the risk of course is critical, in that trust is the most important thing that we have as a company.

[00:13:26] There are six products across Google that have more than 2 billion users, but it's a bigger risk to not understand this paradigm shift, this fundamental shift in computing's capability. And so my approach to risk, or our approach to risk, is: if you don't pursue these fundamental shifts,

[00:13:49] you're not going to meet the peak relevance of your users. And so it's not just Bard; it's always been true at Google. And I guess it hasn't changed my perspective on risk. It reinforces the importance of curiosity to me.

[00:14:04] Brett: You mentioned this a few minutes ago, and you share this interesting reality for me about market timing, why it's so hard to figure out when something will work. It's kind of this unsolved problem: most really interesting technologies have this layer cake dynamic,

[00:14:25] where you have so many of these enabling technologies that always have to stack on top of one another to ultimately create something really compelling. And so if Vision Pro ends up being super successful, it's multiple decades of this entire stack, this layer cake, that will have ultimately

[00:14:43] come together. And you kind of have to get the timing of every single part of the stack right in order to predict when the thing will happen, which I think is basically almost impossible to do in some sort of repeatable way. But the story I tell myself about Bard is it's very similar, right? We have this layer cake of technology that goes all the way back to the start of the internet, because if you didn't have all this information on the internet, sort of none of this would work.

[00:15:05] But it feels like there was something that you all put together when you decided you wanted to do Bard, that the pace of execution and the ability to get a product from dogfood and out into the world felt, and maybe you'll disagree, there's a lot of examples, but felt like it was meaningfully faster than other great products that you all have built, and felt more small-company-like than very large technology company.

[00:15:32] And if some of that is true, I'm really interested in what was the setup or conditions that allowed that, for a relatively small group of people that obviously built on top of the layers that have been created for a long period of time. But when you got started, it felt like from kickoff to product in customers' hands was meaningfully faster than other products.

[00:15:53] And if that's the case, I'm really curious to hear more about the how.

[00:15:57] Jack: There's something that has to be said for the importance of passion. The first team that was assembled to build Bard had not only a deep familiarity with building products inside of Google and working with language model technology, understanding its opportunities and capabilities, but more importantly, understanding its limitations.

[00:16:25] But the thing that fueled it was passion of, oh my goodness, we're starting to see in some of our initial tests here, some of the initial interviews of people using it. As you start to build something, you can start to solve something in people's lives that they haven't had a solution for before. So early on in the days of Bard, we interviewed one of these trusted testers, and they tell us the story of...

[00:16:56] Brett: What's a trusted tester?

[00:16:58] Jack: A trusted tester is part of a panel of people that we give pre-release access to,

[00:17:04] to ensure, A, the safety of our product, that's kind of the baseline, but B, the helpfulness. Are we meeting the needs and expectations? And these testers help give us very explicit feedback. It's a form of scaled user research. And one of these research participants tells us, I'm a neurodivergent person.

[00:17:27] And for the first time, I can communicate with people in a way that I feel is more effective. And they go on to say, I have been diagnosed as autistic, and so I can tell when somebody sends an email to me that they're frustrated. There's clear language that indicates that they're frustrated, but I've struggled my entire life to reply to that email in a way that won't further their frustration.

[00:17:56] And then I create a problem for myself. So what I've started to do is ask Bard: here's how I'm thinking of responding to this email. How do you think the person will respond? And if it tells me that they'll be frustrated, I ask for tips, et cetera. And all of a sudden, it started filling in the gaps for this person.

[00:18:13] And when you feel that as a product developer, that you are solving a real problem for someone, a problem that you may not have yourself, if that doesn't inspire you and you're not off to the races, then the fact that that need is going to get filled by someone else just gnaws away at you. And so I get the great opportunity to talk to people using Bard not only here in the United States, but from around the world.

[00:18:40] And these things come up again and again and again. And I would be lying to you if I said it wasn't intoxicating to feel like you can help people in new ways that you've never been able to as a technologist. And that passion is not just me. It's my colleagues on the team. And so we're able to move quickly because we're inspired by the way that this technology has the opportunity to help people.

[00:19:04] Brett: Given the fact that

[00:19:06] ChatGPT was launched to the public before Bard came out, how did you think about what the different contours of the product you wanted to build were? And in what ways might it be similar or different? There's always this kind of interesting dynamic, and Google and a bunch of other companies have had amazing success building related products that end up solving a lot of the issues.

[00:19:29] In some way, the founding story of Google is that, right? Search engines, and versions of that, were around for a long time, and then Google made the one that ultimately won. Gmail is another example. Webmail was around, and this idea of infinite storage was sort of that key thing. I think infinite storage and search were those two key things that ended up building sort of this extraordinary product.

[00:19:53] And so I'm curious, given that ChatGPT and some other similar products were already out there, how does that go into the way that you think about building the first version of a product?

[00:20:02] Jack: Whether it's the first version or a later version, you of course have to be mindful of the alternative choices that people have in terms of the technology that they use. But the candid answer is we obsess almost entirely about, what is the unique perspective that we can bring? What are those North Star principles that we're using to deliver technology of value? And, of course, we understand that people are going to compare Bard to other language model technologies out there.

[00:20:35] But what we've really focused on, from the very beginning, is talking about Bard as a creative collaborator. It's a collaborative AI experiment that allows you to bring your ideas, your needs, and your curiosity to life. And that North Star is really what fuels us and what leads us to be extremely passionate about the things that you see us highlight about using it.

[00:20:58] We don't see it as an answer generator; we see it as a possibility generator. What does that mean? Well, it means that one of the features that you see at launch is multiple drafts of a response. We don't think that this is going to be a definitive answer to your question. But let's say you're exploring, how might I break bad news to my friends that I'm not going to come to a dinner party? We're going to present to you three different answers.

[00:21:24] And we're also not aloof to the fact that some people, and it turns out it's not the majority use case, but some people are going to use it to try to find information. Well, we know that we ground the information on search results. What are some alternative search results that we can provide? These are just a couple of examples; anchoring on our first principles in our design is by far the most important thing.

[00:21:50] And seeing the adoption that you get from those sorts of features allows you to get that feedback. And it's almost like a human version of reinforcement learning. You put something out there, you give a possibility to a user. Some things you do work, other things don't. You learn from both of them, and that's what you use, whether it's the first version or the most recent version. Like today, before I came here, we just talked about Bard being able to respond in real time.

[00:22:18] Other language models have been able to do that, but we have elected to present results piecemeal, because it allows us to do a certain amount of filtering. Now we get that out to our users. And it's fascinating what we've learned along the way: there's a large disconnect between people that have a preference for a UI that gives you the whole answer versus a piecemeal answer.

[00:22:38] And so we're able to roll it out as a user choice rather than saying you can only have it one way or the other.

[00:22:45] Brett: Maybe you can talk more about what was the process to develop the principles for Bard and maybe what some of them were that sort of guided the process of building the product.

[00:22:55] Jack: There's the product principles, and then there's the way the culture forms in terms of the way that you build the product. The product principles are a whole lot easier. You work closely with your partners in research. You work with your engineering team, your design team, your partners across the company, and you form a couple of pithy things that guide you.

[00:23:18] So I talk about, it's a creative collaboration with AI. That's been a clear principle from the beginning, because we know the technology; it helps bring your ideas to life. Well, how do we make sure that we are manifesting that in what we do? Those come from a knowledge of the technology as well as an empathy for the user and the research that you do, and they're grounded in what the technology can do.

[00:23:43] The how you operate as a team, that is where executive leadership is absolutely key. And we have an incredible GM that oversees both Google Assistant and Bard, Sissie Hsiao. And she started demonstrating the behavior from the very beginnings of Bard, doing things like deeply reading the research papers. If there's a document that's been written, if there are slides that have been created, she's read it, she's gone through it.

[00:24:14] And so what starts happening is all of the leads on the team are like, well, that's the way that I'm going to show up in my meetings. I will have done my homework. I'm going to ask you pointed questions about what happened on page seven, when you wrote this sentence. That sort of leadership style, especially in the beginning, creates that culture of not just what you do, but what you don't accept.

[00:24:39] And so we've started to create these norms and these sayings. I think, especially in a lot of larger companies, you'll have people say things like, oh, I'll get back to you by the end of the week. Oh, I'll get that for you tomorrow. And early on, we started saying things like, why tomorrow? Why not today?

[00:24:56] Because what that ends up forcing isn't this uncomfortable clash of, how dare you make something I'm asking for not a priority. It's, hey, share your priorities. Let's discuss whether the other things that we're working on are more important than this. And that push toward, hey, I feel like this is really important,

[00:25:17] why can't you get it done, starts to manifest into your earlier question of how we can move more quickly. It's a lack of apathy toward delay.

[00:25:29] Brett: And was that done implicitly? Like, a small number of the folks that were leading this project just started to behave and communicate in that way? Or was there some more explicit dynamic in terms of the specific ways that, on this team, you wanted to behave?

[00:25:50] Jack: It was modeled behavior by leaders working on the product, on the experiment, early on. It was this feeling of, when somebody new would join the team, we have this saying: welcome to Bard, your job is make a great Bard. And I think a lot of teams default, not just in large companies but in small companies, to, give me your exact roles and responsibilities; be very prescriptive about where one swim lane ends and the other begins.

[00:26:20] And when you're working on a rapidly evolving technology, that creates a form of false precision. And so what we started to do was build this energy around, you know, a one team, one dream vibe. Like, I was committing code the night before we launched our dogfood version of Bard inside the company.

[00:26:44] Not because I feel like, oh, it's an engineer's job to commit that. It's like, we're all busy, and look, it wasn't the most intense code in the world. It was changing a couple of lines of copy and updating some if statements. But to be able to do that, to get in there, to share that story of, well, I'm committing code because we need to get this done.

[00:27:04] I never hold that over the team's head, but it just sort of demonstrates that we're all going to commit to whatever it takes to make a successful Bard. And we're several months in now, and I just love that as our team has grown and expanded, that culture still remains.

[00:27:22] Brett: What are some of the rituals or sort of operating cadence of the team that you think is a part of what's made things work?

[00:27:31] Jack: I think, as we do reviews, giving the leadership team the ability to sit in on things that their particular team may not be involved in has been a really great window into sharing ideas. One thing that I really appreciate is everybody coming to a meeting with their homework done. It's sad that that's weird, and that's not unique to Google.

[00:27:55] I've worked at 15-person companies where that's the case, and you just kind of show up. If you treat a meeting as, our goal is to have as few of these as possible, and the way that we do that is by doing our work, you get a whole lot more productive of an environment. And I feel like that's easier said than done, but the way that you reinforce it is: if you are in the C-suite of your company, make sure you're doing your work before you come to that meeting.

[00:28:24] It seems super simple, but it's amazing to me how many people don't demonstrate that. And again, culture becomes as much what you aspire to as what you don't tolerate. Don't tolerate people saying, oh, I haven't read that yet. If you have...

[00:28:40] Brett: I think we kind of keep coming back to this topic of how you ship a product like this relatively quickly. And I think some of the things you've been talking about are a lot of the sort of implicit parts of the culture that happened to develop on this specific team. But I wanted to come back to it, because my experience in studying so many large companies is that this is very hard to do.

[00:29:07] You know, even though the enabling technologies have been around and were sort of painstakingly built over a long period of time, my sense is that from when you originally decided that you were going to go launch this experiment to when users started to use it, relative to other large companies, it was an unbelievably short amount of time.

[00:29:27] And so I was hoping maybe you could pick that apart in a little bit more detail, or kind of explain: what are some of the things that you think contributed to that speed? Or maybe you can also contextualize it and explain the different steps along the way that made it happen.

[00:29:39] Jack: I would say the biggest difference of launching Bard versus other projects that I've had the opportunity of being a part of in a large company setting is,

[00:29:50] you know, on day one, the biggest gap to creating the best choice for people in the world is getting real-world feedback. When there are alternatives that are out in the market, not just ChatGPT, there are plenty of others that were out there, you are creating a larger challenge for yourself to create a good experience.

[00:30:13] And so the hard conversation to have is, at what point do you present an experience that you know is not going to give the greatest answer at its start? And I look at where Bard was on March 21st of this year, when we launched in the US and the UK in a waitlist-only format, versus where we are today.

[00:30:40] And so much of that progress was because we put it in people's hands; we had a passionate group of people that were willing to give us the feedback. And of course there were a lot of people out in the world that were clamoring, oh, it's an inferior product, it's inferior this, it's inferior that. The ability to have the conviction that it will get better, like, you have to go through a trough of despair.

[00:31:09] That's what enabled us to move quickly. If you spend all your time trying to build the world's greatest product, the world's going to move past you. And that's something that's really hard, whether you're Google or any other large company. It's hard to know that you have this hill to climb.

[00:31:31] And so it took the conviction of a team to say, how do we minimize that time? The reason we move quickly is a zeal for curiosity about how we make this technology helpful. So we put it out there, and look, I'm not going to pretend we didn't take licks from people criticizing. But criticism is part of the game. The thing we have been unrelenting on is learning from people using the technology, and so speed is again enabled by that passion, and it allows you to go through the difficult times.

[00:32:05] And now the conversation is much less around why there is such a gap between these two, and more around what's the right way to think about using one technology over the other. That's not to say there aren't alternatives out there that sometimes produce better answers than Bard, certainly there are, and there are times when Bard produces a better answer. Now it's a different dynamic, where the question is how you build into a world where three quarters of the world still hasn't used this technology.

[00:32:36] It's a different approach to solving the problem than it was, say, six months ago, but we've now instilled that culture of move quickly, learn, be curious, see what can happen. That, I think, is what sustains the speed: get through the pain as quickly as you can so you can learn, be curious, and create value for people who have never experienced it.

[00:33:01] Brett: What was the dynamic to ultimately land on the date in early spring when it ultimately shipped?

[00:33:09] Jack: The amazing thing about working on a product like Bard is that genuinely feels like three lifetimes ago. When I reflect back on that time, the question was: at what point will we create something in the U.S. and U.K. that will, for the most part, generate great experiences? We knew that code, for example, was something people were using language models for, but we didn't think ours was good enough.

[00:33:36] And so, at some point, we made the decision to say, look, code's not ready. Code launched a couple of weeks after we did. But we needed to get this out into the world and start experiencing what quote-unquote real-world prompt streams would look like. And so I don't know that it was a specific date so much as it was:

[00:33:59] We need to start learning quickly, and we need to hit a certain quality threshold that we are going to deliver while minimizing the safety risks, which meant spending a lot of time going through red teaming, setting the policies, et cetera.

[00:34:13] Brett: Was it easy to get people aligned with this idea that we're going to put something out into the world that's not perfect?

[00:34:21] Jack: No.

[00:34:22] Brett: What was that process like? And I would think, again, you have this dynamic where in this case there are a couple of other products out in the world, and you knew that whenever Google puts a product out, everyone's going to scrutinize it.

[00:34:39] And so there would be this tendency of: we need to put out something that's going to blow people away, and anything short of that can't leave the building. That could have been two years of work. And you obviously did the opposite. So how did you create alignment around biasing toward putting something out that's not perfect?

[00:34:59] Jack: The part that made it easy was that Google is such a deeply technical and research-oriented company that the people who were deeply familiar with the technology understood that, through tactics like reinforcement learning from human feedback, you're able to rapidly improve the product experience.

[00:35:21] And so that actually took the sting off. Yes, there are very high expectations, but even at a company as large as Google, working on the product team, one of the things I've really started to embrace is that people care about what you deliver, but the magnitude at which people are talking about you in your head is probably always larger than the reality of what exists. And so I think a lot of the problems we put out into the world are self-imposed challenges: oh, people are not going to like X, Y, or Z.

[00:36:03] And so the thing I like, even now working on this project, is that most people around the world have heard of language models, but the vast majority of them haven't used them, because they haven't been delivered value. And at the end of the day, working at a company like Google, you know that people will start to use a product

[00:36:25] once you've developed their trust that it provides value. And that was the key to reinforcing: why do we call it an experiment? Well, to set expectations. Would I have loved to come out and say, this is the be-all end-all language model that's going to change the world? I would have loved to have said that, but the reality is the technology is not there yet.

[00:36:45] None of these technologies are. Most language model technologies today still talk about being a research preview because it's not there yet. It's crazy to think we're still in the research phase of this technology. And so, yeah, it's hard to deliver, especially when you're so used to building in a world where you have hundreds of millions of users of a product, or billions of users, and to start at zero

[00:37:14] is extremely humbling.

[00:37:16] Brett: Before you even started building what's now known as Bard, were the key stakeholders all aligned that we're going to put this thing out in the world in short order, and the reason is that humans are going to make it better? Or did you sort that out part of the way through, in a back-and-forth dance over what level of fit and finish or perfection needed to exist in this product to get it to tens of thousands and then millions of people?

[00:37:42] Jack: Well, I can only give the perspective I had working on Google Assistant, which is that we were struggling to get this built into Google Assistant because there is such a precise line that you have to walk with users, like the expectation of hallucination when you're asking what's the time, what's the weather. Which, happy to report, Bard no longer hallucinates on: time and weather.

[00:38:06] We've figured that out. Thank you. Getting to that point of, hey, what's the best way to get this out there? Oh, a standalone experience. You can imagine there are pockets of the rest of the company trying to figure out how we can make this technology as helpful as possible. But that's where it really got back to a demonstration of:

[00:38:27] What do we believe this technology can do? Why do we believe a standalone experiment is key? And effectively building a pitch. I don't like comparing starting a new product inside a large company to building a startup; it is nothing like building a startup. But there is one key element in common, which is that at the inception of an idea, the precision with which you present your hypothesis

[00:38:55] is the most critical step. And our hypothesis was: we believe a standalone web experiment that allows you to collaborate with AI will encourage people to try something new. I think that's what gave the idea legs. And to be clear, it's not just me that worked on this. There was a whole team that made this idea come to life.

[00:39:21] And that precision of hypothesis was what enabled it.

[00:39:25] Brett: In what way did you find building this product quite similar to a lot of the products you've built in your career? And what were the new bits, the things that felt like you were starting all over again?

[00:39:42] Jack: It had probably been a couple of years since I myself had written a requirements document. As a product leader for so long, the job was to help cultivate and craft an environment of product managers who are able to build and create and develop great technology.

[00:40:03] It had been a while. And then as you start getting into writing a requirements document, you start to think, oh, what are the capabilities of this? And the thing that really jumped out to me was that normally the way you would write a requirement is: as a user, I can do X, Y, or Z. And you kind of write out those requirements.

[00:40:23] And then it was this realization of: wait, that's too limiting of the technology. This is an open-ended technology. You can say, I'm going to build a great email draft generator, but people are all of a sudden going to write something in there like, help me explore a hundred creative things to do with a piece of construction paper and feathers for me and my kid.

[00:40:47] And you're just like, well, that wasn't part of the spec, but the technology is capable of doing it. And so developing a respect for the technology, I think, was key. Starting with a deeper understanding of the capabilities of this technology was very different from the usual focus on the problem that people have, where you build a solution for it.

[00:41:11] It was much more open-ended, and so that part was very uncomfortable. The familiar things were: how do you convey, in five slides or less, in three slides or less, a very precise view of what you're trying to deliver and learn? And I think the combination of those two things has made this past year-plus of my life probably the most professionally rewarding I've ever had, because I'm relearning what it takes to do product management.

[00:41:44] Like, how do you manage a product where you don't always know what it's going to do or say when you use it, in a world where it has the opportunity to unlock productivity, unlock creativity, provide companionship in a way? It's kind of wild.

[00:42:04] Brett: On the point about having to get your arms around what the technology could do, what did that look like?

[00:42:12] Jack: So one of the things that came out is, they're called large language models, not large math models. And one of the things people were doing very early on, in that phase of language model adoption, was getting it to say what's one plus one. And the language model would respond with three, and people would say, ha ha ha, silly language model.

[00:42:34] And you could take one of two approaches. There's the classic approach, which is: oh, well, the technology is a language model. It looks across the corpus of things that have been written, and there's actually a lot of text around the world that says one plus one is three. You know, the whole is greater than the sum of its parts, whatever.

[00:42:52] Or you can just embrace it: wait, that is a limitation of this technology. These things are really bad at doing math because they write compelling text, but they're really good at translating natural language to code. And so what would happen if we try to answer this question by asking the model, when there's something math-related: if you were to write code to solve this problem, how would you do it?

[00:43:17] And it's capable of doing that. And then, if you were to execute that code, what answer would you get? And that created, I think it was in month two or three of the experiment, what we call implicit code execution. The way you start solving math and some logic-oriented problems is to implicitly ask the model, under the hood, to write and execute code.

[00:43:41] I don't think the me of five years ago would have been able to figure out how to do that. It just would have been, well, that's a limitation of the technology. Let's call it a day.
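The implicit code execution pattern Jack describes can be sketched roughly like this. This is a minimal illustration, not Bard's actual implementation: the model call is a stub, and the routing heuristic and function names are assumptions made for the example.

```python
# Sketch of "implicit code execution": when a prompt looks math-like, ask the
# model to write code for the question, run that code, and return the computed
# result instead of the model's free-text guess.
import re

def looks_mathy(prompt: str) -> bool:
    # Crude heuristic: digits joined by an arithmetic operator suggest a math task.
    return bool(re.search(r"\d\s*[\+\-\*/]\s*\d", prompt))

def model_write_code(prompt: str) -> str:
    # Stand-in for the LLM. A real model would translate the natural-language
    # question into a small program; here we fake it for simple arithmetic.
    expr = re.search(r"(\d+\s*[\+\-\*/]\s*\d+)", prompt).group(1)
    return f"result = {expr}"

def answer(prompt: str) -> str:
    if looks_mathy(prompt):
        code = model_write_code(prompt)
        scope = {}
        exec(code, {}, scope)  # a real system would sandbox this step
        return str(scope["result"])
    return "(fall back to a plain language-model response)"

print(answer("what's 1 + 1?"))  # -> 2, computed by executing code, not by text prediction
```

The point of the design is that the routing is invisible to the user: the math is answered by code the model wrote, not by next-token prediction over text where "one plus one is three" genuinely appears.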

[00:43:50] Brett: Before you actually wrote the original spec, did you end up exploring and tinkering for a long time to figure that out? And what did the actual spec look like, given that it's so open-ended?

[00:44:01] Jack: Most of the original spec was around how you would evaluate the product. I know you love tactics on this program, so let me tell you one tactic.

[00:44:12] Brett: I'm very excited for this.

[00:44:13] Jack: When you're building in language models, your eval is your product requirements document. It's a probability-based system.

[00:44:21] And so the way you construct what you will evaluate it against effectively dictates what product is going to be built. And of course there are classic product requirements: how do I provide transparency in terms of how your data and information are going to be used relative to privacy, et cetera.

[00:44:37] There's nothing new there compared to what I've experienced. You just have to go through and be extremely clear: what did you see, how is the information used, et cetera? What are the feedback rating mechanisms that we provide? Thumbs up, thumbs down, share, et cetera. But when it gets into whether this model is ready or not, you have to construct the set of prompts you want to test it against.

[00:45:02] This isn't like a search query that people are putting in, like what's the weather on days that are pretty sunny but the temperature is 40 degrees. It's: I'm looking for a great activity to do with my kids on this, that, and the third. It was hard to write the first eval, and we definitely got it wrong in the first steps.

[00:45:23] And so, going through the iterations of the first version, you get to this balance of: what are people actually using your product for, and what are the aspirational aspects of what you want them to use your product for? And you use that as your guide. I know it's not just us. I've had the chance to speak with many people working on language model products, and that eval construction is absolutely key.
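As a rough illustration of the eval-as-requirements-document tactic, here is what a tiny eval harness might look like. Everything here is invented for the sketch: the stub model, the prompts, and the substring pass criteria. A real eval set would be far larger and scored with much richer rubrics than substring checks.

```python
# Sketch of "your eval is your product requirements document": each case
# encodes a requirement of the form "given this prompt, a good answer must...",
# and the pass rate is the "is this model ready?" number.
def model(prompt: str) -> str:
    # Stand-in for the model under evaluation.
    canned = {
        "what's 1 + 1?": "1 + 1 is 2.",
        "draft a polite email declining a meeting": "Thanks for the invite, but I can't make it.",
    }
    return canned.get(prompt, "I'm not sure yet.")

eval_set = [
    {"prompt": "what's 1 + 1?", "passes": lambda out: "2" in out},
    {"prompt": "draft a polite email declining a meeting",
     "passes": lambda out: "thanks" in out.lower()},
]

def run_eval(model_fn, cases):
    # Score each case against its requirement and report the pass rate.
    results = [case["passes"](model_fn(case["prompt"])) for case in cases]
    return sum(results) / len(results)

print(run_eval(model, eval_set))  # -> 1.0 for this stub model
```

The design point Jack makes is that in a probabilistic system this set of cases, not a list of "as a user, I can do X" statements, is what actually dictates the product that gets built.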

[00:45:46] Brett: How are you now building a product roadmap for a product like this?

[00:45:50] Jack: It's almost a night-and-day difference from before you launch your first version to where we are now, which is a mix of understanding where people have challenges using the product and what the broader aspiration is. I talked about this computing paradigm being one where it

[00:46:11] helps do things with you, helps you develop your ideas, clarify your thoughts, brainstorm. We have a hypothesis about what happens when you combine doing things with you with doing things for you. And so at the beginning of October we announced Assistant with Bard. It's kind of fun to be able to take my previous job and current job and bring them together.

[00:46:33] And the hypothesis there is: what happens when you combine doing things with you and doing things for you? We're going to learn a ton there. And so, to your question of how we develop the product roadmap: it's about which hypotheses we believe are the most key to getting into that space where three quarters of the world still does not use this technology.

[00:46:56] It's not an awareness problem. It's about finding the right value to get them to use the technology. And so we obsess in our product roadmaps over how to make this technology more helpful. Of course, you don't ignore the one quarter of the world that has used the technology: you make it better, you improve your quality, you get your evals to be more precise, et cetera.

[00:47:16] But it's that clear presentation of hypotheses that informs why people are not using this yet.

[00:47:23] Brett: But in the case of that idea of exploring what it would be like if the product did things for the user versus just collaborating with the user, what is the process by which you end up focusing on that versus 30 other things? Or if there's a series of hypotheses, how would you force-rank or decide which hypothesis you want to test and validate first?

[00:47:45] Jack: This is where having research as part of your core product development process is key. You've got to generate insights about what your most passionate users are advocating for, and about people who have heard of your product but haven't used it because it doesn't do X. It's not a very precise equation either way, but you're doing opportunity analysis in each of these cases, combined with knowledge of what the technology can do.

[00:48:15] And so what we're trying to do, to determine what to build next, is ask what we believe is going to have the most impact in the shortest amount of time. There are open-ended things and elements of research that of course we're looking into as a company. Things that are unsolved research problems will remain unsolved research problems.

[00:48:37] Things that are more clear, even if they take six months to do, but where you think, oh, this will create a cohort of X number of people who haven't used the technology, that's what you use to balance.

[00:48:49] Brett: Do you spend more effort trying to figure out what's going to get the next million users, or, for the users you have, how to deepen engagement and get them to spend more time using the product?

[00:49:03] Jack: Oftentimes, at this phase of where we are in the technology, those two things aren't wildly dissimilar. Take the topic of hallucination, for example.

[00:49:13] There will be some people in the world who tell you hallucination is exclusively a bug: these things should absolutely be factually accurate. That's a really simplified version of the problem when you consider a question like, is the sky blue? At a very simple glance,

[00:49:31] I mean, how would you answer that question? Is the sky blue, Brett?

[00:49:33] Brett: I mean, right now it's literally blue.

[00:49:35] Jack: Yeah. Context matters. If it's cloudy, the sky is gray. And depending on whether the sun is setting, it's actually not going to be blue; it's going to create shades and hues of pink and orange based off of light diffusion and dispersion. I'm just focusing on hallucination as one of the problem areas that we think about and develop a roadmap around.

[00:49:58] There are ways to approach the problem. We know there's a large cohort of people who don't use language models because they fear that they make things up. Providing context to an answer is actually the more important part. And so we recently launched something that we call the V2 of our Google It button: when you receive a response on Bard, and it happens to be something you want a factual answer around, we go sentence by sentence and find out, is there content from around the internet that agrees or disagrees with this?

[00:50:32] And our approach to it is that we're not quote-unquote solving hallucination. We're solving unintended hallucination, which would be something like, what's the distance between point A and point B, or what's one plus one. That should always be two. And so we've got to think through contextualizing the problem, ensuring

[00:50:56] you are addressing the reasons people either don't come back to your product or aren't using your product, by fully embracing the problem. And so I would love to get to a point where, when somebody asks, is the sky blue, it would say: depending on the context, yes, but here are some more reasons why.

[00:51:16] And there are always going to be some nuances in the answer that we want to make sure can be corroborated across the internet.
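A sentence-by-sentence corroboration check of the kind Jack describes might look, in heavily simplified form, like this. The search function is a stub over a two-sentence corpus, and the keyword-overlap matching is a placeholder assumption standing in for whatever semantic comparison and real search index the actual system uses.

```python
# Sketch of sentence-level corroboration: split a response into sentences,
# look each one up, and label it as supported or unsupported by the corpus.
import re

CORPUS = [
    "The sky appears blue because of Rayleigh scattering.",
    "At sunset the sky can turn pink and orange.",
]

def search_snippets(sentence: str) -> list:
    # Stand-in for a web search: match on content words (length > 3 letters).
    words = [w for w in re.findall(r"[a-z]+", sentence.lower()) if len(w) > 3]
    return [s for s in CORPUS if any(w in s.lower() for w in words)]

def corroborate(response: str) -> list:
    # Split on sentence-ending punctuation and label each sentence.
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    return [(s, "supported" if search_snippets(s) else "unsupported")
            for s in sentences]

for sentence, verdict in corroborate("The sky is blue. Clouds are made of cheese."):
    print(verdict, "-", sentence)
```

Even in this toy form, the structure shows the UX idea: the check decorates individual sentences with agree/disagree evidence rather than passing a single verdict on the whole response.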

[00:51:22] Brett: I think this ties back to some of the product principles you were talking about earlier. But what about some of the specific parts of the user experience that are somewhat different: the idea of sharing multiple drafts, the decision not to have the product originally generate real-time, character-by-character responses?

What's the story behind some of those decisions, and could you bring them to life a little bit?

[00:51:46] Jack: That anchors in obsessing over what you want to be able to produce and the principles that you adhere to. So one of the challenging things early on about producing a response is that we believe very deeply in copyright protection, especially as it relates to the output of responses.

[00:52:08] And so from a probability perspective, when you have millions upon millions of these responses every single day, the chance of a sentence being written that's in copyrighted material is, I wouldn't say large, but it's not zero. And so you want to be able to run a check so you can make the statement:

[00:52:32] this technology upholds all the right copyright protections that exist in an output. And one of the things we started to see in some of these outputs, as you start to stream them out, is that even though it was a function of probability and not regurgitation of copyrighted material, you would get a response, and then at the end of the response you would have to take the response down because it contained something that was copyrighted.

[00:52:58] And figuring out how to do that in real time was really tricky. So the question became: would you rather get somebody an answer faster, or uphold your principle on copyright? And of course we chose the path of copyright. But what inadvertently ended up happening, from an internal perspective, is that people want to stream, they want information as quickly as possible, yet you learn there's a cohort of people who are just like, no, I don't want to see it stream.

[00:53:25] A not-insignificant number of people will tell you: I actually just like that it gives me the whole answer when it's ready. It makes me think that it's thinking, that it's being thoughtful about the response it provides. And I don't think we ever would have had that insight had we not had those two clashing principles and needed to determine which one to land on.
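The buffer-then-check tradeoff described above could be sketched like this. The streaming generator, the blocklist, and the checker are all stand-ins invented for this example; a real copyright check would match against large corpora of protected text, not a toy list.

```python
# Sketch of the stream-vs-buffer decision: instead of streaming tokens as they
# arrive, collect the full response, run a policy check (copyright here) on
# the finished text, and only then show it to the user.
def generate_tokens():
    # Stand-in for a streaming model response.
    yield from ["Here ", "is ", "a ", "complete ", "answer."]

BLOCKLIST = ["some protected passage"]  # illustrative placeholder only

def violates_copyright(text: str) -> bool:
    return any(passage in text for passage in BLOCKLIST)

def respond() -> str:
    # Buffering trades latency for the ability to enforce policy on the whole
    # response, rather than retracting text the user has already seen.
    full = "".join(generate_tokens())
    if violates_copyright(full):
        return "(response withheld)"
    return full

print(respond())
```

The design choice is exactly the one in the anecdote: checking the complete answer costs response time, but avoids the worse experience of streaming text and then taking it down at the end.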

[00:53:44] Brett: Are there a few really gnarly problems that the team had to crack to ultimately create the experience that's Bard today?

[00:53:52] Jack: Gosh, it's hard to pick one. When you're working in a world of consistent ambiguity, it's more the things that come up that you don't expect. How do you respond when there's breaking information and you start to present information that's incorrect? We knew it was a challenge, and some things are more sensitive than others.

[00:54:14] And I think that's a problem we're still trying to crack. You'll see things that get a lot of coverage in popular culture right now, and there are so many things happening in and around the world. We're making the decision to basically say: it feels like a higher risk to respond with something that is not a fair summary than to say, I can't help with that yet.

[00:54:42] And that's tough. The model is capable of producing a response to nearly everything you ask of it; presenting it in a way that makes it clear, hey, this is presenting a possibility and not a definitive answer, is the trickier part. And so you're striking that balance of how to produce something of value, something that maximizes helpfulness and minimizes harm.

[00:55:08] You can never fully eliminate risk, but you can try to minimize it as much as possible. I don't think that's a problem that will ever get solved. We're just trying to get smarter about how you balance that bold approach of giving people these helpful capabilities with the responsible need to not give people a response that could potentially upset them.

[00:55:33] Brett: Something we haven't yet talked about is the people part of building Bard. And I'm curious: were there differential things that you looked for as the team started to grow and you started to build the team for Bard, relative to all the talented people who work at Google?

[00:55:53] Jack: Bias to action is the thing that has been uncompromising and unrelenting.

[00:56:00] And I think as people advance in their careers, they have different approaches toward a bias to action. There's a lot of completeness in thought that's really key; there's always a focus toward ensuring that we've thought through problems deeply. And there are some roles that focus on covering as many corner cases as possible.

[00:56:24] Think of every edge case you can, and sort of maximize the corners and edges you can anticipate. At times, when you have products that cover 2 billion people, you need that mentality. But when you're in a world where the corners you're coming up against can be very, very sharp but low in number,

[00:56:48] you've got to find the right balance: when do you move on? When do you bias toward taking action and taking on risk? And so the thing that's been amazing about building this inside of Google is you have people who have come from all walks of life, all walks of their career.

[00:57:05] You have people who have worked in a startup before. You have people who have the desire to work in startups. And really what you have to evaluate for, when you're bringing people onto the team, is how much they will value learning responsibly as quickly as possible, and not coming in saying, my job is to be the contrarian.

[00:57:24] That job is actually very important on many teams, but at the phase of development we're in, being contrarian on everything may not be the most helpful thing for moving us forward. So that's some of the balance that we have to find.

[00:57:36] Brett: How do you figure out if somebody overexpresses that trait in the interview process?

[00:57:41] Jack: I think you need to deeply believe why somebody wants to work in this field of ambiguity. You can teach and coach a lot of things, but intrinsic motivation, I think, is one of the things that's critical in working on this kind of pioneering technology. And so you learn really quickly which folks are less driven by a deep understanding of what it means for this technology to be delivered to maximize helpfulness.

[00:58:15] The people who appreciate and respect the opportunity, but also want to ensure that the way we minimize harm is done in a holistic and thoughtful way, rise very quickly to the surface compared to folks who are just trying to be part of a quote-unquote gold rush. That happens when you get into the deep reasons:

[00:58:39] why does this matter to you? Why does this job matter to you as a person? You of course need to balance against that: you don't want completely homogeneous teams, and you don't want teams that are only going to think about the opportunity and just casually throw away the risk. But you do want people who care about it deeply. And when you ask what motivates you, what motivates you to work on this at a personal level, it's remarkable how few people will give you a well-thought-through and constructed answer.

[00:59:10] Brett: And you found that that's very positively correlated with the people who bias toward action in the context of this experiment?

[00:59:17] Jack: Absolutely.

[00:59:18] Brett: And so have you found that you could take somebody who didn't move particularly quickly working on another product at Google, and in the context of this product and this team they're night and day on that dimension?

[00:59:30] Jack: I've seen people who have been part of teams at Google where it's taken two years to ship something. And when you hear two years, you might have that sort of, oh my goodness, reaction. But there's just a lot of existing-user work you have to go through. You have to ensure that if it's, say, a change in terms of service, it's deeply understood.

[00:59:54] These things are anchored in the respect we have for the opportunity and the respect we have for the user. Those people come to a project like Bard with a mindset of: let me help you anticipate what could slow you down in the future. They're not coming at it from a perspective of, OMG, I can't believe I worked on a thing that took two years to ship.

[01:00:16] It's: how do we make decisions now so that, when we fast forward 18 months from now, we won't be stuck in a position where it takes two years because we hadn't thought through some of those problems early on? And the people who have experienced pain in various regards are the ones who have been by far the most successful.

[01:00:38] Brett: Right after launch, there was publicly, and in the press, I think a lot of negativity around Bard and a lot of critiques. And then it felt like just six or eight weeks later, as people started to use the product and the product got better and better, people started to change their minds.

[01:00:54] And then there was the opposite, basically: a flurry of really positive feedback. Maybe other than telling the team, hey, let's just focus on what we're building here and block everything else out, was there anything else you were doing to manage those few months, particularly because at the beginning it was quite negative?

[01:01:14] Jack: Those moments can be make or break for a team, and I'm just so happy it was make. The way we got through that negativity was having cheerleaders inside the company. There are these meme boards inside of Google, and there was somebody who worked at Google who said, you know, hey, we like to poke fun, but just know we're all behind you.

[01:01:40] And I'm welling up a little bit thinking about it, because the reason I came back to Google after nine years away was this feeling that it is a large company with this collective desire to achieve the mission. And so there was support inside the company, and that goes a really long way. Everyone has to experience an us-versus-them, and inside the company there was that feeling of: hey, we're behind you.

[01:02:14] We may give you a hard time internally, and I will tell you, some of the hardest time and the harshest criticism I've received has been from my colleagues and teammates inside the company. But it's because there's this shared desire and passion toward the culture of curiosity. And the other thing is, it was very clear early on that it did strike a chord with people. Public narratives are one thing, but that focus on the user, and all else will follow, is what got us through. Like hearing the story of a plumber who was one of our research participants, sharing with me how they were able to connect with customers. He said, I've always felt like customers think I am ripping them off.

[01:03:07] Like, you've got a broken water heater, it's going to be 10 grand, and they're going to feel really bad. And this was one of the first users I spoke to about Bard, saying: I just log on and I tell it, here's the context of why I'm replacing the water heater; help me explain it to my customer in normal language.

[01:03:26] And now he has less annoyed customers. That was an amazing, amazing thing. Or the person trying to talk to their piano teacher, because they're an English-as-a-second-language speaker, trying to find a way to say, hey, this is actually what I want for my kid, but I'm struggling to find the words. We're helping parents connect with their teachers on something that matters to them.

[01:03:48] That stuff makes the public negativity go away. And especially when you're a company like Google that carries so much expectation, that is part of the opportunity. It's what's driven us to say, hey, we know we need to meet that bar. We know the bar is higher, but we're delivering value. Let's find more people to deliver that value to.

[01:04:11] And it's just been kind of amazing to go through that flurry of emotions.

[01:04:15] Brett: Did you spend a bunch of time trying to get those stories out to the team on a continual basis?

[01:04:22] Jack: One thing that has tactically been very different on this project compared to other ones I've worked on at Google is that there's not an email culture so much as there is a Google Chat culture. Other companies might use Slack for that sort of purpose. But every time there's something interesting, we share it.

[01:04:42] So our user researchers will share, hey, look at this interesting insight that we found, and people glom onto it and ask more questions. By having these communities where we're able to share and exchange ideas, the information just propagates faster.

[01:05:05] One thing that was challenging in my previous engagements at Google was needing to spend hours a day going through my inbox, and a chat-oriented setup just tactically really enables us. I'm sure for startups that sounds second nature, but you've got listeners that are at large companies.

[01:05:23] I just want to give them hope: your company and organization can transform into a world where chat-oriented work can move quickly.

[01:05:33] Brett: If folks from other companies reach out to you and say, hey, we're trying to do this really hard new product, and maybe it's not even AI or LLM related, what are the most important ideas from this last year-plus of working on Bard that would inform the type of advice you would give them, to hopefully increase the chances that their experiment or product goes well?

[01:06:00] Jack: The first thing is the eval is the PRD. We talked about that earlier, but before you even write down what the look, shape, and feel is like, think about how you're going to evaluate this, because that's going to inform the look, shape, and feel. Two is, if you are working at an established company, get ahead of the fact that your answers are probabilistic.

[01:06:22] It will get things wrong. Be thoughtful and mindful of not just how you build that into your product, but how you set the expectations. We will get things wrong. However, in our case, we have a commitment to trust and safety, a commitment to ensuring that we have policies in place that prevent, or at least minimize, unintended behaviors.

[01:06:45] And then we have processes in place that handle it when that happens. That expectation setting, along with getting ahead of how to manage a probabilistic product, are the two things I keep going back to in my advice.
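[Editor's note] The "eval is the PRD" idea can be sketched in code. Everything below, the case format, the keyword grader, and the toy model, is an illustrative assumption rather than Bard's actual tooling; the point is that the desired behaviors are written down, and graded, before the product takes shape.

```python
# A minimal sketch of "the eval is the PRD": write the evaluation
# cases before the product exists, then grade any candidate against
# them. All names here are hypothetical, for illustration only.

def keyword_check(answer: str, required: list[str]) -> bool:
    """Pass if the answer mentions every required concept."""
    return all(k.lower() in answer.lower() for k in required)

# The "PRD": desired behaviors, written down first.
eval_cases = [
    {"prompt": "Explain why my water heater costs $10k to replace",
     "required": ["cost", "labor"]},
    {"prompt": "Draft a note to my kid's piano teacher",
     "required": ["piano"]},
]

def run_eval(model, cases) -> float:
    """Return the fraction of cases the model passes."""
    passed = sum(
        keyword_check(model(c["prompt"]), c["required"]) for c in cases
    )
    return passed / len(cases)

# A stand-in "model" so the harness is runnable end to end.
def toy_model(prompt: str) -> str:
    if "water heater" in prompt:
        return "The cost covers parts and labor for the replacement."
    return "Here is a note for the piano teacher."

score = run_eval(toy_model, eval_cases)
print(score)  # fraction of desired behaviors the candidate satisfies
```

Any real graders would be richer than keyword matching, but the shape is the same: the eval cases, not the UI spec, define what "done" means.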

[01:06:59] Brett: What about not just building something LLM-oriented, but some of the important ideas around doing a new zero-to-one build in the context of a larger company?

[01:07:12] Jack: Conviction is probably the most important thing to get you there, because the natural tendency is to ask, why launch it? Launching something new into the world is hard. Nobody had ever heard of a Bard before this year, and it is a lot of extra work. So have conviction about why it matters. I think it would have been really easy to just try to slap it into an existing product, but that conviction: this is going to be hard. It's going to be hard because it is something new. It's going to be hard because it is going to require work to close the quality gap with the alternatives out there. But this is why I believe it matters: collaboration directly with AI is going to be a new way that people interact, and we have to learn

[01:08:05] How to make it the most helpful for people, and we have to learn now. I could give all the tactics in the world, but if, in your heart of hearts, you don't believe it, then, again, you know the risk profile of doing that versus being a founder. I'm not going to liken it to that: you still have a steady paycheck, and the company is well funded, so you're not becoming a founder of a product. But you are adopting a founder mentality.

[01:08:32] Like, you are putting your reputation at risk, which I think, from an inside-a-large-company perspective, is the most critical capital that exists for getting things done. And you've got to be willing to risk it.

[01:08:49] Brett: Let's talk about LLMs specifically. What do you think are the things that seem unsolved but are going to be solved relatively quickly? And which problems seem like we'll have them nailed down in short order, but might actually take a lot more time?

[01:09:10] Jack: I think making it easier to access feels like a function of just iterating, getting the inference to move faster. I think you're going to see the cost of inference go down. Inference is the cost to actually serve and run the model. I think you're going to see those things go down in relatively short order.

[01:09:31] So the technology will be easier to put in people's hands, the cost is going to go down, et cetera. We've already seen it over the span of this year. The harder parts are going to be the aspects that might seem straightforward, like it not having the memory of a goldfish. You've got a context window, whether it's 8,000 tokens, 32,000 tokens, even 100,000 tokens.

[01:09:57] If these things are going to truly become your personal assistant, consider that your personal assistant has access to more than a hundred thousand tokens of information that they've processed and thought about, that they use to understand you. And so the research that's taking place around memory and recall, I think, is really fascinating.
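[Editor's note] The context-window limit Jack describes can be made concrete with a small sketch. The four-characters-per-token heuristic and the budget figure below are assumptions for illustration; real tokenizers count differently, but the failure mode is the same: once the budget is hit, older turns simply vanish.

```python
# A rough sketch of a fixed context window: a token budget means
# older conversation turns fall out of "memory". The token estimate
# is a crude assumption, not any particular model's tokenizer.

def rough_token_count(text: str) -> int:
    # Heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def fit_to_budget(turns: list[str], budget: int) -> list[str]:
    """Keep the most recent turns whose total tokens fit the budget."""
    kept, used = [], 0
    for turn in reversed(turns):
        cost = rough_token_count(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "some earlier discussion " * 50
           for i in range(40)]
window = fit_to_budget(history, budget=2000)
# Everything before the window is simply gone, which is why long-term
# memory needs something beyond an ever-bigger context.
print(len(window), "of", len(history), "turns survive")
```

This is why the memory-and-recall research he mentions matters: truncation is a stopgap, not an assistant that actually knows you.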

[01:10:18] And it's not just memory. If you're going to create human-like interactions that relate to memory, I'm fascinated by so many things. As humans, our memory is imperfect. If I had a perfect memory of who I was 10 years ago, I don't think I would have progressed as a human, because my imperfect memory of the things that I was good at 10 years ago, and the things that I needed to develop, probably got me to where I am now.

[01:10:45] But if you're constantly focused on remembering the things that you did wrong and not taking away the lessons, then we might stall our own personal progress. And so I am beyond fascinated and intrigued by the research taking place around memory and being able to reference things from previous conversations.

[01:11:05] But I think it's going to take way longer than we expect.

[01:11:09] Brett: Why is that so challenging?

[01:11:11] Jack: Just think about even, you know, read all of my emails and summarize all my emails from the past year. That's way more than 100,000 tokens. You need to be able to understand what to pay attention to, how to make that make sense. And so, looking at this as an indexing problem, personal indices are really, really hard to create because they're all so different. And at its core, this technology is finding

[01:11:42] patterns that are similar in common elements of text. But your personal index is very different from mine. The things that are important to you are very different from the things that are important to me. And so figuring out how to find those common patterns is going to be a lot tougher, especially as it relates to personal rather than global information, where there's a whole variety of different responsibility measures that you need to take. When you're training on the corpus of publicly available information, you can find conversational patterns. When you're building an understanding of my emails, my documents, my style, my tonality, then you're almost at the point where everybody has a unique fingerprint.
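[Editor's note] The personal-indexing problem Jack describes, finding patterns that are similar across elements of text, can be sketched with a toy retriever. Real systems use learned embeddings; the bag-of-words cosine similarity and the sample corpus here are stand-in assumptions so the example stays self-contained.

```python
# A toy sketch of retrieval over a personal corpus: rank documents
# by similarity to a query. Bag-of-words cosine similarity stands in
# for the learned embeddings a production system would use.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(corpus: list[str], query: str, k: int = 1) -> list[str]:
    """Return the k corpus items most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(corpus,
                    key=lambda doc: cosine(vectorize(doc), qv),
                    reverse=True)
    return ranked[:k]

# A hypothetical "personal index": the same query would rank
# completely differently against someone else's documents.
my_emails = [
    "invoice for the water heater replacement job",
    "piano lesson schedule for next month",
    "notes from the product review meeting",
]
print(search(my_emails, "water heater invoice"))
```

The hard part he points to is exactly what this toy elides: deciding what in a private, one-of-a-kind corpus is worth attending to at all.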

[01:12:25] And so it's going to be a while to crack the code of truly understanding what makes things personal. I think some people refer to that as AGI. Like, you and I have known each other for a very long period of time. Our ability to connect and communicate is predicated on the whatever 12 years that we've known one another.

[01:12:47] And we can reference shared experiences in a way that, I hope for the sake of our friendship, is much closer and more productive than with a random person that you meet on the street.

[01:12:58] Brett: It really is. It really is. Maybe one of the last couple things we could chat about. You touched on this a little when you were sharing how to think about building these products in the context of an existing company. How should product people who have not been building with this technology

[01:13:15] think about the process of learning or building context and knowledge if they're starting from more of a cold start, or if they've been curious about it but aren't deep in the technology?

[01:13:28] Jack: I think Andrew Ng has some of the greatest courses about this technology in general. They're available online. Just consume those; that would be the best primer. The other thing is, read the research. I spent the first couple of months really digging in. I wouldn't let a week go by where I didn't read one of the seminal papers.

[01:13:52] So, understanding A Neural Conversational Model, which was one of the early predecessors to the eventual transformer paper, Attention Is All You Need. Just every week, read one of those things; you're going to learn something. And then use a language model to test your understanding of it.

[01:14:11] What are some of the things that came up in the Attention Is All You Need paper? Why is a transformer different from a diffusion model? Explain it to me using an analogy that makes sense to me. You'll be amazed at how quickly you can learn the intricacies of this technology by just doing that work.

[01:14:29] Brett: Do you think the type of product people that will be successful building with these new primitives are the same type of product people that have been successful building all sorts of other stuff over the past decade, or are there kind of new win conditions now?

[01:14:44] Jack: The thing that won't change about a great product leader is being able to be precise in articulating their hypotheses about what they're trying to learn, and translating insights and lessons into the feature set that you see. I think every great product leader needs to be a deep product marketer at their core.

[01:15:08] I don't think that changes. What I do think changes is: how do you do that while balancing the more probabilistic nature of the product? We've seen this technology mostly in the consumer discussion, but so many of the startups right now are in the enterprise space, where there's a very different tolerance for probability and imprecision.

[01:15:39] And so finding the way of harnessing that as a feature rather than a bug, I think, is going to be challenging, especially when one of the perceived shortcomings of your product is actually the core of what makes it function. I don't know that there's been a technology in the past where that's the case.

[01:16:00] And I'm specifically referring to the hallucination bit. If you ask someone, how will humans get to Mars? If you're going at it from the path of, enterprises aren't buying this because it hallucinates, so I'm going to build the most non-hallucinated answer, what you're going to get is: how will humans get to Mars?

[01:16:21] We don't know. Versus embracing the hallucination: how can I ground on the theories that exist? There are no proven cases of getting there, but there may be these possible outcomes. I grounded them in these three theories that are currently unproven, but something you may want to consider. And you transition that into: look, this thing isn't going to give you a precise answer, but it's going to give you a possibility.

[01:16:50] If you can find a way to harness that and build the guardrails around probability-oriented products, then it's going to be an incredibly fortuitous paradigm shift.

[01:17:03] Brett: Maybe we could wrap up where I often like to wrap up, which is, for you specifically, when you think about the folks that you've worked closely with over the past 15-plus years, who are the folks that have had some sort of outsized impact on you? And what are the one or two or three things that are now a big part of your way of working or part of your toolkit?

[01:17:28] Jack: This is the part where I give people flowers.

[01:17:30] Brett: Exactly.

[01:17:31] Jack: Okay. I do believe deeply in giving people their flowers, because I don't think we share enough gratitude. One person, and I just feel fortunate because I get to work with her every single day, is our GM, Sissie Hsiao.

[01:17:48] Brett: This is smart. From a promotion path standpoint. No, very clever.

[01:17:52] Jack: When you build a certain level of trust with someone, the way in which she's able to deliver feedback is, I'm going to give you the most direct answer to how you're doing, as quickly as possible, so that you can move quicker, because I care about you doing well. We did a presentation early on to some of the executive team around some numbers.

[01:18:18] And she goes, next time you do that, I want you to sit in a room for an hour and practice. Just practice for an hour, so that you can have more clear, crisp, and concise answers. It's rare to have someone give you that level of feedback: yeah, the meeting went over well, but that contribution wasn't great.

[01:18:38] I think you can do great. Here's what I would recommend that you do next time. That directness from a position of care is key. I guess people call that radical candor. She's one of the only people that I've seen execute it extremely well. So that's flower number one.

[01:18:56] Flower number two I'm going to give to the CEO of UnitedMasters, Steve Stoute. Steve said something to me.

[01:19:05] That I've thought about for a really long time, which is

[01:19:08] even if you're really good at seeing around corners, if you don't bring people with you, you might know the answer of where you need to go, but if you haven't brought people along on the journey of how you got there, because they're not experts in what you do, because they may see things a different way, because they don't understand, you're never going to get people to really follow and admire what you're doing.

[01:19:37] And I see this archetype a ton around founders who are brilliant and understand something so deeply and want to see their idea come to life. But if they haven't shared the assumptions along the way, if they haven't shared, here are the alternatives that I considered, here's why I picked this corner, it's worthless and you're not going to get anywhere.

[01:20:00] So that, that's flowers number two. Can I give you flowers?

[01:20:05] Brett: You should give it to someone else. I mean, unless you have physical flowers right now.

[01:20:09] Jack: No, I want to give you the flowers. I've admired, I've watched you since your first week of living in San Francisco, being at First Round Capital. The conviction that you've had for your entire journey here, the First Round network, the importance of doing this thing that we're doing. Even though your tactics have changed over the years, you've been unrelenting in wanting to solve and drive and deliver that.

[01:20:38] And I've deeply valued our friendship for so many years, because you've pushed me in certain regards. To watch you do this, to see what this has become, even though it's episode number 110 or whatever it is. Thank you for waiting so long.

[01:20:55] Brett: we wanna make sure we got the

[01:20:56] Jack: I'm glad, yeah, I'm glad you perfected this.

[01:20:57] I'm also glad that only people named Jack get video interviews like this. So shout out to Jack Altman, who also did a video interview. But you've had that conviction for 12 years and it's manifested in different ways, and it's really remarkable to see what you've built.

[01:21:13] Brett: That's very sweet of you to say. And what a perfect place to end the conversation. All right. That was really fun. Thanks for joining us.

[01:21:21] Jack: Thanks for having me.