Sam Schillace is the CVP and Deputy CTO at Microsoft. Before Microsoft, Sam held prominent engineering roles at Google and Box. He has also founded six startups, including Writely, which was acquired by Google and became Google Docs.
–
In today’s episode, we discuss:
- Sam’s advice for future engineers
- What’s next for AI
- How to develop technical taste
- The importance of asking “what if” questions
- Lessons on market timing
- Scaling a software company in 2024
–
Referenced:
- Amazon: https://amazon.com
- Box: https://www.box.com/
- Elon Musk: https://twitter.com/elonmusk
- Google Docs: https://docs.google.com
- Itzhak Perlman: https://itzhakperlman.com/
- Microsoft: https://www.microsoft.com
- Netflix: https://www.netflix.com
- Tesla: https://www.tesla.com/
- The Innovator’s Dilemma: https://www.amazon.com.au/Innovators-Dilemma-Clayton-M-Christensen/dp/0062060244
- TurboTax: https://turbotax.intuit.com/
- Uber: https://www.uber.com/
- Walmart: https://www.walmart.com/
- Workday: https://www.workday.com/
- Writely: https://techcrunch.com/2005/08/31/writely-process-words-with-your-browser/
–
Where to find Sam Schillace:
- LinkedIn: https://www.linkedin.com/in/schillace/
- Newsletter: https://sundaylettersfromsam.substack.com/
- Twitter/X: https://twitter.com/sschillace
–
Where to find Brett Berson:
- LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/
- Twitter/X: https://twitter.com/brettberson
–
Where to find First Round Capital:
- Website: https://firstround.com/
- First Round Review: https://review.firstround.com/
- Twitter/X: https://twitter.com/firstround
- YouTube: https://www.youtube.com/@FirstRoundCapital
- This podcast on all platforms: https://review.firstround.com/podcast
–
Timestamps:
(00:00) Introduction
(02:54) Lessons on market timing
(07:30) Developing technical taste
(09:51) Asking “what if” questions
(14:03) Building Google Docs
(19:32) The decline of Google apps
(20:57) The Innovator’s Dilemma facing Microsoft
(22:53) The differences between Google and Microsoft
(24:42) How to build a winning product
(27:46) Becoming an optimist
(29:12) Why engineering teams aren’t smaller
(32:00) Sam’s prediction about AI
(34:11) Capturing the value of AI
(37:43) How you should think about AI
(45:33) Advice for future engineers
(48:18) What makes a great engineer
(49:45) One thing the best engineers do
(51:37) Microsoft’s new leverage
(56:01) Scaling software in 2024
(59:50) The future of AI across several sectors
(1:04:28) What Sam and a violinist have in common
Brett: Maybe we can kick it off. I'm curious: what has the last 30 years of building software taught you about market timing and technology?
Sam: The one thing that I've learned that's like an indelible lesson is that the things that are worth doing are really uncomfortable. The right time to do something is when you have that feeling in the pit of your stomach that's like, oh, this is a great idea, and it's going to suck to build this because nothing's ready yet and there are all these problems. We had that problem with G Docs for sure. When we did Writely originally, JavaScript was there, but it was buggy and not standardized, and there weren't any frameworks, and there weren't any tools, and the cloud didn't really exist. The mistake people make the most is, you know, they'll be like, well, that's a cool idea, but it's really hard; let's wait three years till it's easier.
When it's easier, there'll be a million people doing it. And there's path dependencies in there and stuff like that. So I see that lesson all over the place right now with AI, honestly, there's plenty of legitimate criticisms about what does and doesn't work right now, but like, none of those really seem to matter that much to me.
Like, they all just seem like engineering problems to be solved. and you can kind of see them evolving over time. Last year, the issue was it's expensive, it's slow, token windows aren't that big, whatever, whatever, you know, over time hallucinations are still a problem,
but the grounding problem is getting dealt with slowly. And you can see the prize; it's just really hard to get there right now. This is a great time to engage with a hard problem and just grind through it. If you wait till there's jQuery or whatever, you're not going to invent Google Docs; somebody else is going to get there.
Brett: How do you make sure you're not too early? Or do you think that's an overrated idea, and what you're saying is your job is to make the future happen faster?
Sam: It wouldn't have worked. People did try to do it in '94. We were not the first people to have that idea; many people had that idea. We just got the timing right, so timing does totally matter. I remember looking at the browser penetration of JavaScript.
When we were starting out, asking, does this actually make any sense? 80 percent of the installed browsers out there had a sufficiently high level of JavaScript working. I was like, all right, it's not the whole market, but it's enough for some critical mass.
Part of it is like, you have to be able to look at requirements up and down the entire stack of what you're doing and feel like, either the problems are solvable or there's enough in the environment to be critical mass, to get your thing going.
It can be the case that the market isn't fully mature, but there are enough customers to keep you going, that will grow fast enough to keep you alive, and you can kind of grow into the market, which I think happens. I mean, more than a billion people use G Docs today. There weren't a billion people doing anything in the browser at all when we started, right? So that market grew around us.
So that, that's a possibility. Sometimes you have to have some measure of technical taste and judgment about the technology. Right. I think a lot of things that fail for being too early fail because you know, the technology doesn't play and, you know, the founders made a bad bet, which happens. I think I'd rather fail on the early side than the late side. Failing in the early side, you at least have a chance to like course correct, right? Because if you don't spend all your resources and burn out completely, you can keep going and maybe the world catches up to you or maybe you make the world catch up to you. Failing to live by being too late, somebody else got there. You can't get that back. This is one of my things I've seen lately is there's not really much of a prize for being pessimistic and right about these things.
Okay, you get a ribbon. Who cares? The prize is for being optimistic and right. That points you in the direction of: try things as soon as it's feasible and see how far you get. And you'll have a lot of mistakes because you'll be early on some stuff, but that's okay.
You just like learn to manage it.
Brett: Why do you think that is?
Sam: I'm definitely a pessimist. I've been fighting against it; I'm trying to consciously be optimistic. I read a lot about AI, and I read all the people who are very critical about how AI is or isn't working, and how hard things like autonomous driving are, and all the problems with LLMs. I don't disagree with any of it.
But I'm making this conscious choice to not care, because I think there's probably a path through there. What do I get for being pessimistic and right?
Brett: You really can't go short, right? You can be pessimistic on a specific company, be correct, and capitalize tremendously, but you can't really do that in technology.
Sam: Yeah. You can't go short on a technological revolution, right? You can't short solar or some crazy shit like that. It tends to be wrong. We tend to figure stuff out over time. At best, you're right for a little while, and eventually the world figures it out.
People were super negative about solar tech 10, 15 years ago. But the learning curve on solar has been relatively constant, if not accelerating a little bit, for the last 30 years. Photovoltaic solar is like a thousandth the price it was 35 years ago.
It's incredible how much it's come down. If you were skeptical about that 20 years ago, you just weren't reading the chart very well. It wasn't practical yet, but you could see the curve; you could tell where it was going. So, I don't know. What's the point of being pessimistic? Go do something else for a living.
Brett: You mentioned the topic of needing technical taste. What do you mean by technical taste? Is it a sensibility that you find a lot of people develop?
Sam: Technical judgment may be one way to say it, but I do think there's an aspect of taste to it. Any of us who have done some kind of technology, hardware, software, whatever, for a long period of time, you kind of collect a bag of heuristics; you kind of know what does work and what doesn't work. Technical taste is, how well have you consolidated that set of experiences and heuristics into judgment that you can apply accurately when you see new things? I've got all kinds of weird things in my head. At one point, I figured out that most of the successful software projects I've seen have the same feeling of being slightly out of control, but not too much of a disaster, like chasing a three-year-old down a slight incline in a park: you're not really in control of the situation, but it's also not crazy out of control either. That's a little particle of technical taste, where I can look at a team and look at the code base and see how they're executing.
And I can tell whether they're failing to the overly architected or under-architected side of things by that feel. You go talk to a team and they're panicking, and they don't have good answers to some questions, and they're sort of afraid of their code base. That's a bad technical smell, right?
You know they're in trouble. And you can sometimes do that for larger technologies. I don't want to get into the whole Elon drama, but when I go look at the engineering that's being done at somewhere like SpaceX, that engineering makes a lot of sense to me.
It's a clearly articulated set of principles that's iterative. They understand what they're optimizing for, which is cost to get a pound into orbit, and they're chipping away at all the things. To me, that has good technical taste.
Their results speak for themselves. Now, there's stuff in there that I think was kind of nuts at the time. It's amazing that they pulled off the autonomous landings on the barges and stuff. To your earlier question of, is it too early?
I don't know what it took for them to get confidence that they could do that. It took four or five crash landings before they got it to work even the first time. That's a lot of believing in your technical judgment.
I've never really been involved with them at all, but from afar, it looks like that organization has very good judgment about how they approach technical problems: deciding which things they're optimizing for and which things they're ignoring, and how the tech is going to work. They're not distracted by all kinds of extraneous stuff that they could be distracted by. They're just doing the thing that they set out to do really well, and it's working really well. That, to me, feels like good technical taste.
Brett: A lot of the conversation in building software businesses over the last 10 years, maybe up until this latest inflection point in large language models, has been that most companies tend to be organized more around market risk: what does the customer need, and am I building the correct thing?
At least in startup land, it tended to be a little bit more at the app layer, or even at some of the enabling infrastructure layer. A lot of it wasn't, I think, as organized around incredibly hard technical problems, where there's really more technical risk than market risk. I feel like throughout your career you've worked up and down the technology stack, and you're getting at this with this really nice metaphor, that you want a team to feel slightly out of control.
Are there other things that you can kind of share with what does a high functioning team look like?
Sam: There are these two questions that you can ask. This is more in the context of disruptive innovation, but I think it's true here as well.
One is, why not? And the other is, what if? When we get confronted by novel problems and novel ideas, it tends to bother our ego, and we get defensive. And if you're defensive about something, you know, there's some disruption in the world, like AI, and you hate AI.
It makes you feel insecure. Humans are really good at telling stories, so you either tell a why-not story, why this thing will not work, or you tell a what-if story: what if this thing works? Let me go look at it and think about possibilities. It's called growth mindset, or open versus closed thinking, or whatever.
But most people react with the why not, and some people react with the what if, and you see lots of different patterns in disruptive innovation because of that. I think good technical teams tend to have the what-if mentality about technical problems. So you listen for the tone of the argument. Is everybody arguing about why this is not solvable, and why it's a bad problem, and why your solution is crappy? Or is the team arguing more in an improv mode, a yes-and mode, right?
Where they're talking about what if: well, what if we tried this? What if we tried that? What if we solved it this way? What if we got around the constraint that way? That's a healthier technical team; you kind of see both. I think unhealthy teams usually have people who are less focused on actually wanting to do the engineering and solve the problems, and more focused on things like their status, or they have bad cases of imposter syndrome.
And so they're insecure, and so they're constantly trying to be right. On good teams, you have people who are just a little bit more humble and a little bit more open-minded, and the team cooperates and collaborates on finding its way to the solutions. There are always technical details of whether what the team is producing is good or not, but that depends on what domain they're in. Judging whether their work product is good depends on what they're doing and your familiarity with it.
But I think that cultural thing about open versus closed, growth mindset versus closed mindset, that's probably the hallmark of a good team. I've got a team right now at Microsoft that's a sort of experimental prototyping team. The whole thing is like a jazz ensemble.
It's really wild. We don't really ship products. We're trying to explore the space and understand some deep things about how agents work, how LLMs work, the limitations of programming with them. I don't build AI models, but I consume them and try to build things with them.
Brett: How do you know if a team like that's being successful?
Sam: Well, it depends on what they're trying to do. This particular team is one of the hardest things I've done from that perspective, ever, because our job is to really indirectly influence the whole thinking of the company. I tell them all the time: your work product is insight.
We're just trying to run ahead of the avalanche and figure out where things are going, so the giant product teams behind us don't have to learn all that stuff with thousands of people sitting around and making mistakes. We can feed them insights so that they're more effective about where they point their attention, which has worked pretty well.
I kind of measure it in terms of things that I can say internally faster and better than anyone else can. Are we digging out insights? And it's interesting, because for a while, I felt we were a solid six months ahead of pretty much everybody.
Of late, people are catching up to us, I think because not as much has changed in the foundation models this year so far. Things have changed in the industry: now we have three foundation models instead of one, and the large token contexts are happening, and things like that.
It hasn't been quite as much of a step function. So I think people are catching up to the stuff we're doing, which is good. A lot of the insights that we had are now getting to be widespread, which is great.
Brett: How do you know if you're making appropriate progress?
Sam: What you want more than anything else is to be able to get to some proof points quickly. Everybody says this: you want cheap, fast experiments. You want to know something quickly. It's much better to hack something together or make some limited thing in a week or a month than it is to spend 12 months on some elaborate architecture that you're pretty sure will solve the problem when you're done with it.
But you have nothing in between, and that's always the thing. You could point at G Docs maybe as an example of this: it was three or four years to take it from the initial idea to the more-or-less final form. It got rewritten; both the front end and the back end have been rewritten multiple times.
Right? It's kind of funny, because I have the very first GDoc in my account. But what does "very first" mean? It's on different servers with a different back end and a different front end; the whole thing is different. The answer to your question of how you know you're making progress is: can you get to something, some proof point, fairly quickly? That's what we're doing with this team. We have these north stars, we do sprints. We try to get to stuff every week or two, even though we're spending a lot of time kind of wandering through this wilderness.
We're always trying to put our hands on stuff and play with things and not be theoretical. So usually you can do that: you break down the engineering. I think that's even true for hardware. I don't know this, but I suspect it, since we were using SpaceX as an example.
I know they constructed their motors first, right? I remember them doing engine firings; they figured out how to build the Merlin engine, and there was all the testing on that before they even built rockets. There, I think you can test the components and kind of have a sense that you're getting through your problems.
It's harder with software, because things are so interconnected and you often do have to build stuff. There are plenty of techniques for stubbing parts of your world out so that you can have a mock of some part of an app and have an understanding that it basically works, or be a pretend computer to see if the rest of the service works. You can do stuff like that; that kind of is the cheap experiment. The thing that successful teams always do is learn quickly. They're open-minded, and they tend to structure themselves around getting signal quickly, no matter what they're doing. Even if you know you're going to be working on something for two years, you try to get signal quickly and regularly as you progress through it somehow.
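[Editor's note: the stub-and-mock technique Sam describes can be sketched in a few lines. The `DocumentService` and its storage layer below are invented purely for illustration; the point is that swapping a slow or unfinished dependency for a fake gives you cheap, fast signal on the rest of the code.]

```python
# A minimal sketch of "stubbing parts of your world out": replace a
# dependency with a fake so the rest of the service can be exercised
# cheaply, with no servers and no network.

from unittest.mock import Mock

class DocumentService:
    """Toy service that depends on some storage backend."""
    def __init__(self, storage):
        self.storage = storage

    def word_count(self, doc_id):
        # In real life this would be a slow or remote call.
        text = self.storage.load(doc_id)
        return len(text.split())

# Stand a Mock in for the real storage backend.
storage = Mock()
storage.load.return_value = "the quick brown fox"

service = DocumentService(storage)
assert service.word_count("doc-1") == 4
storage.load.assert_called_once_with("doc-1")
```

The mock both answers the call and records how it was called, so the experiment verifies behavior on both sides of the seam.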
Brett: In the case of Google Docs, the idea was this in-the-cloud collaborative editor?
Sam: You know, this is one of the other kind of interesting things; it dovetails with what you're asking about progress. Everything looks logical when you look back. You hack your way through a thicket and you look back and there's a path behind you, but you didn't have a path when you were hacking your way through the thicket.
I'm not quite that smart. The genesis of Writely was experimentation. It was me reading about JavaScript and thinking it was kind of interesting, reading about contentEditable, which was a capability the browsers had, and having some experience with word processors and thinking, well, this is kind of funny.
I wondered if we could write a word processor in the browser with this new interpreted language. And so we did that. It felt really good, really fast; it was great to work on stuff. But then we immediately started stepping on each other's changes, because we didn't have the collaboration working.
We were just last-in-wins. And so we had to go do that. That was a nightmare to do, because the browsers have different representations of what a document is that aren't even the same tree, so you can't even do text or tree diffs. You have to do this weird, logical diffing thing. This was before operational transform, which is what we eventually settled on, was around.
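[Editor's note: for readers unfamiliar with operational transform, here is a toy sketch of the core idea. It handles only concurrent inserts; real OT, as used in collaborative editors, also deals with deletes, rich text, and server ordering, and every name below is invented for illustration.]

```python
# Operational transform in miniature: two users make concurrent inserts
# against the same base text, and each side transforms the remote
# operation against its local one so both replicas converge.

def transform_insert(op, other, op_wins_ties):
    """Shift insert `op` to account for a concurrent insert `other`.

    Each op is a (position, text) pair. `op_wins_ties` breaks the case
    where both inserts target the same position (e.g. by site id).
    """
    pos, text = op
    other_pos, other_text = other
    if pos < other_pos or (pos == other_pos and op_wins_ties):
        return (pos, text)                # op lands first: unchanged
    return (pos + len(other_text), text)  # shifted right past `other`

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

base = "hello world"
a = (5, ",")    # user A inserts "," after "hello"
b = (0, "oh ")  # user B concurrently prepends "oh "

# Site A applies its own op, then B's op transformed against it.
doc_a = apply_insert(apply_insert(base, a), transform_insert(b, a, False))
# Site B applies its own op, then A's op transformed against it.
doc_b = apply_insert(apply_insert(base, b), transform_insert(a, b, True))
assert doc_a == doc_b == "oh hello, world"
```

Both replicas end up identical even though the operations arrived in different orders, which is the property last-in-wins lacks.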
So we kind of had to hack our way into that and try to figure out how to scale our stuff up. We had none of the backend support that you have now in the cloud. Like we literally had three servers in Texas. And so scaling was starting to get to be a problem when we went to Google.
And then a bunch of the rest of the stuff that we did: we had to change the language to Java, which was really fun. It's fun to have a hundred thousand compiler errors. That's neat. We moved into their data storage.
Eventually the backend got rewritten and moved onto Spanner. We merged with the spreadsheet guys. By then we started to realize, oh, there's a full office suite here. Let's go build the PowerPoint thing. Let's take this file system thing that was very primitive and start pushing it in the direction of the G Drive stuff.
Then Google's enterprise stuff started to happen, so we had to go figure that out, and scale happened, and new functions, new features. Just lots and lots of work over time. You couldn't really do all the rendering you needed to do in the browser the way we were doing it.
So we wound up having to write a new rendering engine on top of the canvas tag and divs. The New York team eventually did a proper rendering engine, which we didn't need originally and couldn't have written originally, because those pieces of the browser didn't exist. Two years later, the browsers had kind of caught up with us, and so we could rewrite it properly.
And then you got features like rulers and stuff like that. It was just a long climb, with lots of moving pieces around us. The foundation we were standing on was moving: the JavaScript VMs got developed and got better over time. So that was going on.
Some standardization was happening, but that's just the nature of the beast.
Brett: Maybe you could talk at a slightly more granular level: how did you figure out what to do, in what order?
Sam: One of the early things: we made this deal, this bargain with the user, which was, we will not fight the war of features that everybody had been fighting and failing against Microsoft.
So we're not going to implement everything, but what we'll give you is this really stripped-down thing that's really convenient in certain ways. And to a large degree, we won that bet. People were willing to trade a lot of the features for convenience and speed. But there was one feature that we kept getting asked about in the early days, which I didn't understand, which was word count.
And I was just like, why the hell? That was number 900 on my list of things to add back in. Finally, I realized: oh, a bunch of our early adopters are students, and they need to know how many words are in their essays. So that's actually a really important feature for adoption.
So we put it in, because it was easy to do. To be honest, I'm a little bit sad at the state of G Docs these days, because I think they're losing sight of that idea. They are kind of turning it into Office and making it very complicated to use again.
And I find lots of things broken in it when I use it. Most of Google's apps, I think, are getting very heavyweight and broken now, which is kind of sad to see.
Brett: Why do you think that is?
Sam: I think it's just kind of a natural life cycle. Google's philosophy, the reason they have all those apps, is this business theory that the more internet usage there is, the more searches there will be, and the more ad revenue there will be. That's a valid theory, but it's kind of hard to hold that thought in your mind for 25 years. And so as organizations mature, people kind of forget stuff.
Things like the Gmail client start to get neglected, and nobody's really paying attention to how well it's being written, managed, and maintained. That particular client drives me nuts. There are 10 things in it every day that I hit that are easy, low-hanging fruit that nobody seems to be paying attention to.
So I should hit up my contacts and see if I can get them to fix it. But it's hard to do, right? You're in this context where they're running these really large services now, and so every move you make is a big deal. It gets really hard to move quickly on these giant services because they're giant. They do crazy things to the infrastructure.
If you're not careful. Google can handle it, they've got big infrastructure, but it is possible to break stuff without thinking about it. Something that you would do on your desktop with one app that doesn't matter, matters when there's a billion of them hitting your servers tomorrow.
Brett: It's interesting. It sounds like the Google Docs story is really the classic concept of disruptive innovation, where you start with a vastly simpler product that's vastly better in a specific dimension, potentially serving a different or slightly different customer, in a way that boxes in the incumbent.
It's very hard to take Word and instantly compete with Google Docs if you want to compete on that specific dimension.
Sam: It's weird being at Microsoft and talking about this now, because I talked about this before I was there. So none of this is insider or anything; you can see this from the outside. Microsoft has the classic parts of the innovator's dilemma, right?
They have a big business attached to desktop applications that they can't really disrupt, because it's a big revenue stream, so it's hard to pivot. And I think that was the interesting thing that we did. The best thing you can do to a competitor, if it's possible, is change the terms of engagement, right?
Don't go fight a battle with somebody on their battleground if you can help it; change the rules. That's always the best way to disrupt some incumbent somewhere. And we kind of saw that in the early days of G Docs when we went to Google. It took a couple of years before Microsoft could get their heads together on what their response was going to be, the Office 365 response that they eventually came up with. It took them a while to wrap their heads around it, because they knew they didn't want to go all in on the cloud and blow their existing business up.
But they also knew they had to pivot into subscriptions, which they've done, and they had to do something in cloud but keep the desktop healthy, and figure out where that balance point was. That was hard. That was, I think, a very challenging process.
I think Amazon kind of did the same thing to Walmart, right? They changed the rules of engagement. Walmart's thing was, we're the most technically savvy retailer; our retail stores are better than anybody's. And Amazon was like, what the fuck is a retail store?
We'll just have giant warehouses and skip the hard part. Walmart was never going to drop all their giant stores on the floor and go directly after Amazon, but they had to respond somehow, and they've kind of managed to do it. But that's taken a long time for Walmart to come back from that challenge, right?
it's really hard to do that.
Brett: In the case of Google Docs versus Office, it also feels, and I haven't studied this closely, that Google Docs was actually far less disruptive than one would imagine. Meaning that both suites ended up winning and doing very well.
It's not like Microsoft Office is Blockbuster.
Sam: It's complicated.
There's a lot going on, I think. Google is very fundamentally a consumer company, and Microsoft is very fundamentally an enterprise company. They both want to be more of what the other one does, but it's hard to do both of those things.
Right. So they're very much purely who they are. It was funny, because after I was at Google, I went to Box as the head of engineering. And so I would be in rooms with CIOs all the time, and they would just tear Google down and then remember who I was and apologize to me, because there was so little trust in the enterprise world. Google hadn't been clear about wanting to do that.
What that meant: building SLAs, having relationships, being fully on the hook for somebody's business. So I don't think Google fully won the enterprise space from Microsoft; I think Microsoft got to hold on to it pretty well.
But at the same time, now that I'm at Microsoft (sorry, Microsoft folks who are listening), everyone who knows my history with G Docs will ask me to go fix the sharing features in Office. And I've given that team a lot of feedback, and they're doing a bunch of stuff to fix it.
But they have a different design point that they're bound to: they're trying to keep the cloud very strongly round-trip compatible with the desktop. That's a design point Google does not care about, but Microsoft does care about.
Half of the world does care about it, and half of the world doesn't. And so there's kind of room for both solutions, and I think it's hard for either company to fully become what the other one is and take the whole market. So I think we're getting close to stasis on this, actually.
Brett: What does it look like to ship or build software at both companies, at the team level, at the org level? Does it feel very similar, very different?
Sam: I think with consumer, you really have to earn your keep with a consumer every day. One of the things that I say is like, people are lazy, right?
Like we're all lazy; no one cares. If you're building stuff in the consumer space, no one cares about what you're building. You care, your mom cares, whatever, but no one really cares. People use your stuff if it helps them; they don't use your stuff if it doesn't help them. They'll drop you in a second if something solves a problem better than your thing, better enough. That means you're very, very focused on user metrics, and you're very, very focused on user sentiment and performance, and you build your engineering teams and your infrastructure the same way.
On the enterprise side, you have long-term relationships. You're not selling to the end user, you're selling to the CIOs. They're fickle too, in their way, but they're not lazy the same way a consumer is lazy. They care very much about getting their business to function, and they care very much about solving this problem.
And unlike a consumer switching to a new product, it's very much not trivial for a CIO to decide to yank a solution out and move to a different one. And so there's a different dynamic there. It's much more about trust and longer-term relationships. It means that there's less of an emphasis on end-user experience, I think, in the enterprise space, because once you're over the line where the CIO says, okay, that's on the checklist of things that your software does, anything above that doesn't really matter very much. The users will use it regardless, as long as your software isn't so terrible that all of the company's end users are revolting and the CIO is going to lose his job if he doesn't find a better solution. As long as it's not at that level of terrible, the pressure falls off. It never really falls off with consumer.
And I think that drives a lot of the difference between the two.
Brett: What about technical taste and sensibilities? Does that feel similar between companies or different?
Sam: In enterprise, you can usually draw a very straight line between some amount of energy that you're putting in on the engineering side and some amount of revenue, and so you can make a decision: okay, this isn't the most efficient thing to do, but I'll put, like, 10 engineers on this, and it'll be net margin positive because the sales team will deliver, and that's fine. In the consumer world, Google spends billions of dollars a year on Maps.
Why do they do that? Well, because it's connected to search through local search, and it makes search more valuable. But that's much less directly connected. And so if you're one of the, I don't know how many people are on it now, a thousand people or something working on Maps, how important is your feature?
Like, how much energy should go into that? If I'm the manager of this team, there's always more to do. Should I make the team bigger? How do I justify that? It's hard to make the connection. Within the consumer world, you tend to think more about leverage: how can I build big horizontal infrastructure that's very efficient, so I don't have to do things twice? Because in some sense, every engineering unit of energy is waste in the consumer world, right? It's all kind of deadweight.
Whereas in the enterprise world, it's much more just cost of doing business, and it's more measurable. So, I mean, not zero pressure, but it's a different kind of pressure.
And I think consumer companies tend to be much more focused on the cost of the infrastructure. Cause they have to be.
Brett: You talked a second ago about this idea of why not versus what if from a team orientation or culture perspective. How does it foot with sort of your original kind of belief that most engineers tend to be pessimistic?
Sam: I mean, I think it's hard. This is why I talk about it. It's hard for engineers; it's hard for me. We tend to be focused on negatives and problems, and we tend to drift out of the what-if mindset pretty easily. I just think that's the mentality, the kind of personality type that goes into engineering to begin with.
So, like, personally, over the years, I've been more skeptical. I've gotten satisfaction from being skeptical about things: oh, the iPhone, the glass breaks a lot, ha ha, that's never gonna work.
Or, you know, I've said all kinds of stuff like that that's just turned out to be stupid and useless and wrong. When the AI, you know, the LLMs started to show up, I was like, this is fine. Let me just decide I'm going to be optimistic here and just roll with it and see where it goes.
I've definitely been disappointed by some things, but generally speaking, I don't regret it. The last year and a half has probably been the most fun I've had in the industry for a very long time. I'm happy to have made the choice to be optimistic, even if at the end of the day some of the problems are insurmountable.
It has been really fun engineering and product design. I think it's just a matter of mentality, a matter of choice almost more than anything else, for engineers. Engineers tend to be this personality type that looks for problems.
that's kind of the fundamental nature of both entrepreneurs and engineers. It's very easy to let that drift into a negative mindset if you're not careful.
Brett: Why are most engineering teams not smaller today? When you think about the last 30 years of technology, one of the things that's been so spectacular is all of these layers of abstraction. Building anything has gotten drastically more efficient. But then you look at all the at-scale products. It just came to mind when you mentioned Google Maps; I wouldn't be surprised if there's more than a thousand people working on it.
Sam: I mean, I think there's a bunch of different reasons for that. One of them is: when's the last time you solved a problem in your life by taking something away? It's hard to do. We humans tend to solve problems by adding, not by subtracting. And so organizationally,
every manager, every VP wants to add people over time. Second of all, why is 280, or 101, still crowded full of cars? We keep adding lanes and the traffic doesn't get thinned out. We all know this, right? But we have larger ambitions now as we get larger capabilities. And so the work expands to fill the team no matter what. Look at the stuff we're doing. Maps is an incredible product, just to pick on that.
The stuff that it does is just magical. If you showed that to somebody 20 years ago, they wouldn't believe it. You know, go anywhere in the world. I was in Japan recently, and they've got the entire country very reliably in their database. The directions are perfect.
They're to the minute on the subway. They just keep expanding and expanding what they're doing with it. I suspect that's going to happen with generative AI too. I mean, anyone who's listening to this, don't pay too much attention, because it's probably half wrong, but I think there's a decent chance it goes like this. Well, you can already see some effects. I'm a terrible programmer because I'm out of shape and I haven't really been writing code for a while, but I've been doing a little bit of coding, just in Jupyter notebooks and stuff.
You know, one of the miserable things when you come back out of a hiatus on coding is you've got to figure out how the tools work and get your environment set up. And so you've got to go bug some senior person for a while, and it's just gross. I didn't have to do that this time. I just used GPT-4 and it helped me out.
I'm much more effective now. you know, there's this leveling effect, people getting brought up to speed. So like programmers are definitely going to get more effective. We're going to get more agentic programming tools. So you're going to get higher and higher leverage.
But I don't think it's gonna make the teams smaller. I think it's just gonna make the ambitions bigger. You know, Sam Altman predicted that at some point we'll see a billion-dollar solo entrepreneur. I think that's possible. The teams may get bigger, they may get a little bit smaller, they may stay the same size, but you'll just do more with the same teams, because we can. Think about what a team could get done in 1973.
It wasn't much by modern standards. You barely even had compilers back then. You had to feel everything out; you didn't have good documentation on anything. Computers were slow and crappy, and you could barely get something like VisiCalc on its feet in the late seventies. We could probably sit down with ChatGPT right now and get a VisiCalc clone written in Python by the end of this podcast, right? Might be a little bit of a stretch, but not much of one. The capabilities are huge now.
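To make that concrete (this sketch is mine, not from the conversation): the skeleton of a VisiCalc-style sheet really is only a few lines of Python. The cell format and the `evaluate` helper here are invented for illustration.

```python
import re

# A toy, VisiCalc-style sheet: a dict of cells where a value starting
# with "=" is a formula that may reference other cells (A1, B2, ...).
def evaluate(sheet, ref, seen=None):
    """Resolve a cell reference, recursively evaluating any formulas."""
    seen = seen or set()
    if ref in seen:  # guard against circular references
        raise ValueError(f"circular reference at {ref}")
    value = sheet[ref]
    if isinstance(value, str) and value.startswith("="):
        expr = value[1:]
        # Substitute each referenced cell with its evaluated value.
        for cell in re.findall(r"[A-Z]+\d+", expr):
            expr = expr.replace(cell, str(evaluate(sheet, cell, seen | {ref})))
        return eval(expr)  # fine for a toy; never eval untrusted input
    return value

sheet = {"A1": 3, "A2": 4, "B1": "=A1*A2", "B2": "=B1+10"}
print(evaluate(sheet, "B2"))  # → 22
```

A real clone would need a grid UI and safer expression parsing, but the core recalculation idea fits in a screenful.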
And so I think we just, we just do more interesting things. I have the suspicion that the nature of applications is going to change radically in the next five to 10 years.
And so I think, just in the same way that the internet made the distribution of information essentially free, generative AI is going to make the creation and distribution of pixels free. Right now, if you put a pixel in front of somebody, it costs a lot.
There's high friction to it. And the easiest way to see this is images. Two years ago, if you wanted a picture of a cat riding a bike, you had to go invent Photoshop, write Photoshop, learn Photoshop, and use Photoshop. It's a bunch of friction and effort to get that picture.
Now you just go ask DALL-E or Stable Diffusion or whatever to generate that picture, and it happens pretty quickly. And so those pixels are very free now, very cheap now. We've got all these other pixels we deliver right now, like static applications and business reports and, you know, the written word and all this stuff.
And to varying degrees, they're becoming much cheaper to deliver to the end user. Some things, like language, are pretty cheap, and some things aren't. But I think eventually what's going to happen is applications are going to become very fluid and on-demand, and documents too are going to become very fluid and on-demand and customized.
And so I think it's going to feel weird to us, just the way it feels weird if you tell a kid that, oh yeah, TV used to be three channels, and you had to be in your house at 7:30 if you wanted to watch your show on that one channel at that one time. That feels really anachronistic and sad to us now.
I think this world of, like, a team of engineers goes and builds an app that's the same pixels for everybody, and they change it slowly, and you can't really customize it or interact with it or tell it what your intention is, that's going to feel sad and anachronistic and slow in not very long. And I think the same thing with documents: why do we have this long, linear document full of text?
Why can't I just go to the document and tell it to draw me a picture that summarizes what's going on inside it? Which you can start to do now with various solutions. So, you know, why are we imitating typewriters with billions of dollars of GPUs?
That's just weird. In not very long, it's going to be like, oh, remember when you couldn't just tell the computer what you wanted, and you just kind of got what somebody else built? And if it was crappy, then it was crappy. And, you know.
So I think that's, you know, that's going to change. And I don't know what teams will do when that change happens. But I think just like with the internet, you know, there are lots of businesses that didn't really consciously understand that they were predicated on the friction of moving information around. I think there are lots of businesses out there that don't fully understand that they're predicated on the expense of creating pixels.
And so a similar disruption is in the works.
Brett: If the future unfolds in that way, who is most likely to capture most of the value in this transition?
Sam: It's the same question that you had to go ask in, you know, 1998 about the internet. Right? And nobody had a good answer. I mean, a couple of people had good answers, and they're all worth 200 billion dollars now. You kind of try to do this stuff from a mixture of historical precedent and first principles. So where are the network effects? What's defensible? What's likely to be valuable? I keep coming back to personal data.
The thing that's probably valuable is an experience that's personalized to you and that is trustworthy, so I trust it to not leak data out that I don't want it to leak, and I have ideas about how that might work. Then you're going to spend a lot of time with it.
And if that experience, that agent or assistant or whatever it is, can create custom experiences for you on demand, it's like having the hundred best executives in the world. That's where you're going to spend all your time. Imagine if you were one of those people who was worth 200 billion or whatever: if you could hire the hundred best experts in any domain anywhere in the world, and they all worked together really well, and you had a chief of staff that managed them, and there's never bullshit and politics and whatever, what are you doing when you're looking at a screen?
You're not doing a whole lot of futzing around with expense reports and checking emails. You're giving intention to your amazing team of people, consuming work product, and maybe entertaining yourself and communicating with your friends. So that's kind of where that durable value will wind up being, I think, over time.
It's kind of the holders of the data, the companies that we can trust to hold that data and trust to give us the right answers, that are probably where the value is. I don't know what I think about the base model, the foundation model wars. I think that's a really interesting question now, how that plays out, whether they commoditize or not.
I was sort of asking this question of: all right, is this like web servers, or is it mail clients, or is it search engines? What's the analogy for these things? Do one or two of them eventually win
because there's some scale effect, or is it a commodity, and it's transient, and they just sort of are web servers? We cared a lot about the web server market for a little while, but eventually it wasn't really where the action was; it was higher up the stack. So I don't pretend to know the answer.
We're gonna go build a whole bunch of really sophisticated models and we'll see. But I think there are some challenges that people have not figured out how to solve yet. Autonomy being one of them, how do you have an agent that can go do long running complicated tasks on your behalf?
In a way that's safe, in a way that's reliable and robust? We don't see a lot of that. We're seeing early light on a few things in the developer domain, the Devin project and stuff like that. But if you look closely at what's been going on for the last year and a half, you always see the human holding the hand of the model.
The human's always doing the metacognition, always doing the planning. You know, if few-shot training works, then it's the human deciding to do few-shot training and coming up with the examples and stuff like that. We still haven't quite gotten to the value, where things are really, really helpful and high leverage.
I think whoever cracks that open, there's a lot of value there. But that's a hard problem. That's one of the hard engineering problems right now: trying to figure out how to get these agents to be somewhat independent and self-correcting, and do their system-two thinking on their own reliably.
If you want a mouthful, I was amusing myself by making this as hard to understand as possible. The easy way to understand this is when I say: you should think with the model, but plan with code, because the models are stochastic and fuzzy. And so if you want something reliable, you should build kind of a scaffold around it in code, something executable.
And then you just force the model to do the thinking bits rather than having it do the planning bits. That works pretty well. And I've written some very high-leverage things that way.
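A minimal sketch of that "plan with code, think with the model" pattern (my illustration, not Sam's code): the plan is ordinary deterministic code, and the model is consulted only for the fuzzy judgment steps. Here `ask_model` is a stub standing in for a real LLM API call.

```python
# "Plan with code, think with the model": control flow (the plan) is
# executable code the model cannot reorder; the model handles only the
# fuzzy judgments. ask_model is a stub; swap in a real client to use it.

def ask_model(prompt: str) -> str:
    # Canned answers keyed on words in the prompt, purely for illustration.
    canned = {"sentiment": "negative", "summary": "shipping was slow"}
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "unknown"

def triage_ticket(ticket: str) -> dict:
    # The scaffold: explicit, testable steps written in code.
    sentiment = ask_model(f"Classify the sentiment of this ticket: {ticket}")
    result = {"ticket": ticket, "sentiment": sentiment}
    if sentiment == "negative":  # deterministic branching lives in code
        result["summary"] = ask_model(f"Write a one-line summary: {ticket}")
        result["route"] = "escalate"
    else:
        result["route"] = "standard-queue"
    return result

print(triage_ticket("My package took three weeks to arrive."))
```

The point of the design: the branching, retries, and sequencing stay reliable because they're code, while the stochastic model only ever answers narrow questions.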
Brett: So if you think about your career: you have the dawn of the internet, then you have cloud, then you have mobile, and now you have the dawn of this next shift in AI and LLMs. When you think about what you were doing in 2012 or 2015, was it enhanced by reflecting on your journey in the 90s as the internet was really taking off?
Or, in what ways do you use the past to increase the chance you do the correct thing in the present?
Sam: We get thrown into these situations repeatedly as technologists. You just mentioned a bunch of these big paradigm shifts, and there are little ones too, all the time, that you're looking at. Whether you're an executive, or an engineer plotting your course through your career, or an entrepreneur, or a venture capitalist, all of us involved with technology have to constantly make these judgment calls about whether this particular new technology or technical shift is going to work out, or how it's going to work out, or the question you just asked of where the winners are going to be. And those are very, very complicated problems in a bunch of ways. It's always challenging to imagine a new world.
Not everybody, but most people have trouble doing that. It's really hard to fill the blanks in. Some people are good at it, and I think you know their names, because they're very wealthy. Unless you're superhuman and you can really see around all the curves inherently, in which case you're probably not listening to this podcast, the only strategy I have for us mortals is a mixture of: try to derive durable first principles about things as much as you can, and then look to history for analogs, but try not to be overly attached to them. You asked whether it's good or bad to do that, and I think the answer is honestly both. You can come up with all kinds of examples where people drew an analogy to an earlier technology that was just wrong.
You know, it seemed right at the time. There was no way to know that it was wrong, but it turned out to be wrong and kind of led you down a bad path. I don't know what you do about that, other than just kind of constantly updating and trying to self-correct and challenge assumptions and be careful about them,
right? Because I think analogies are really powerful, but leverage always works in both directions. The one analogy that I've heard a lot recently, which I like but also worry about a little bit, is, you know, people say the first industrial revolution was the first time we genuinely had a surplus of energy beyond the human body, or a little bit of, like, wind and water power.
all of a sudden we had this huge surplus of physical energy with electricity and steam and coal and all these other things that we could start to direct. And we spent a lot of time figuring out how to direct that energy. And the result is this much wealthier modern world.
This moment is the first time we've really had a surplus of cognitive energy outside of our brains. And I think that's, like, half true. I think we've actually been building up a surplus of cognitive energy in limited ways for the last 30 years. Your life does not work very well without a bunch of CPUs involved.
Those CPUs are doing stuff. It's not stuff that we would think of as, you know, thinking, but it's stuff that a human would have to do if the machines weren't doing it, right? We have been building up some cognitive surplus, and we're now shifting to this realm of the semantic, almost, right?
Like, the machines can do reasoning about very complicated things that have meaning attached to them or have nuance attached to them, imperfectly and incompletely for sure, but they're beginning to do some of that reasoning. And so, in some domains, we're getting more and more of this cognitive surplus.
And so people kind of start to extend that analogy to say, well, in the early industrial revolution, we just replaced steam engines with electric motors, and it took us a while to figure out how to reorient the factories so that they were actually more efficient.
I like that analogy. I don't know if it's true or not. I hope it's true. The businesses need to reorient; somebody will figure out a better business model. You can kind of think about why that might be true if you want to. Like, why do we have org charts?
Why do we have hierarchy? Well, the CEO can't be everywhere all at once, right? It's just too hard. So you have a hierarchy of communication and control and stuff. Now the CEO kind of can be everywhere all at once. You make a copy of the CEO in an LLM: you put a bunch of their writing and thoughts and direction in a large prompt, and that thing can be everywhere.
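A toy illustration of that "copy of the CEO in a large prompt" idea (the function, names, and prompt format here are all made up for illustration, not anything Sam describes actually using):

```python
# Pack a leader's writing and priorities into a system prompt that every
# team's assistant could share, so the "CEO" can be everywhere at once.

def build_persona_prompt(name, writings, priorities):
    excerpts = "\n".join(f"- {w}" for w in writings)
    goals = "\n".join(f"{i}. {p}" for i, p in enumerate(priorities, 1))
    return (
        f"You are an assistant that answers as {name} would.\n"
        f"Ground every answer in these writings:\n{excerpts}\n"
        f"Current priorities, in order:\n{goals}"
    )

prompt = build_persona_prompt(
    "the CEO",
    ["Ship small, ship often.", "Customers before internal process."],
    ["Win enterprise trust", "Cut meeting load in half"],
)
# `prompt` would be sent as the system message of a chat-completion call.
print(prompt)
```

In practice you'd retrieve relevant excerpts rather than inline everything, but the shape of the idea is just this: direction as data, copied into every conversation.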
And maybe you don't need as much of a hierarchy. And maybe, because the LLMs can understand language, well, we've all been in meetings that get boring, where you wish you could leave, but it's kind of socially awkward. Maybe you start to have an AI directing your meetings, and it excuses you or calls you back.
And maybe meetings get totally shifted around, and our projects get shifted around, because you have these superhuman organizers that are very patient and can track a lot of details. And so maybe a bunch of the structures we have in corporations and organizations aren't necessary anymore, and they get kind of redone.
You know, maybe that's the equivalent of us, figuring out how to do a mass production factory line, in the age of electricity, maybe there's mass production factory lines for white collar work that you couldn't have done before because white collar workers aren't patient enough for it.
But now we can have LLMs do all the stupid, crunchy, automatable parts, and we can completely redesign how we do this stuff. I think those are interesting questions to ask, at least, illuminated by that historical analog. I guess that analogy is why I'm bullish. I see challenges to be overcome with the LLM technology.
And we're not even talking about other AI technology; the AI-for-science stuff is amazing these days. Even seeing those challenges to be overcome, I think there's enough reason to believe that this is transformative at this point that it's silly to believe anything else. It's taking a while to get there, and we'll have fits and starts, and people will adopt too early and get their fingers burned and back up and then get FOMO and come back in. There's noise. There's a lot of mutation going on in the space right now. I think it's pretty clear that there's something really valuable here.
Brett: If you pause on that idea of first principles and then looking at history for analogs, can you maybe talk a little bit more about how that shows up in how you're reasoning through what is happening in AI, or where we're going to go?
Sam: I look at a lot of the criticism of LLMs, and I think that while it's valid, I also see the industry pulling the technology forward very rapidly. And that, to me, is an analogy. We used to get this criticism of Google Docs and the cloud. It's hard to imagine now, because it's so established 18 years later or whatever, but at the time, the idea of the cloud was really radical, and I got all kinds of challenge and skepticism. People were like, I'd never trust that. And I would be like, fine, give me your laptop.
You can have mine. And now I get to erase anything on yours, and you get to erase anything on mine. Nobody would ever do that. All their stuff was on their laptop, but my stuff was all in the cloud. That kind of skepticism feels very familiar to me. I could see the value then, and enough people in the industry could see the value then, that we pulled the tech forward and solved a bunch of these hard problems.
That feels very analogous to what's going on with LLM-based AI now. Yeah, there's a whole bunch of problems with them. They're a really incomplete technology. We've got a lot of scaling to do. We've got a lot of optimizing to do. We've got a lot of actual hard technical problems to solve.
Yeah. There's some in there that maybe aren't solvable or that we don't know how to solve yet. But I think we have pretty good, you know, line of sight to most of them and good reasons to believe that we can solve all of them. That's a good analogy of just the industry is going to pull all this forward, and a lot of these problems are going to get solved.
And I'm less concerned about the criticism right now, because it just feels very familiar to me. If you go walking around today saying the cloud's a bad idea, no one's ever going to do stuff in the cloud, well, let me introduce you to my two billion friends who think otherwise. It's just silly. So I think we're going to get to exactly the same place with some form of LLM-based AI. Whether it's fully autonomous agents or something in between, whether it's a full disruption of how work and organizations work or something more incremental, we'll find out. We had huge ambitions for the internet when it showed up, some of which happened, some of which didn't, and some things we didn't know to have ambition for happened.
I think we're going to get the same thing with AI.
Brett: If you're a 22 year old coming out of college with a CS degree and you're really ambitious, what should that person go do?
Sam: I have a lot of thoughts on this, because I'm actually writing a book that addresses it. I see a lot of pessimism in the 22-year-olds coming out of college and a lot more optimism in the old poops like me, which I think is upside down. Those guys should be kicking my butt, and some of them are, for sure. I tell them all the time, I wish I were 22 coming out of college, because there are so many more toys to play with now and so much more to do and so many more possibilities. Not all of them are AI. There's all kinds of interesting stuff going on all over the sciences. We're starting to figure out what you can do with intermittent cheap power in lots of different domains. We're figuring out a lot of different stuff with biology. The intersection of AI and science, both with material science and biological science, is fascinating. Mostly what I would tell people is: be curious, learn a lot, play a lot.
I think you learn by doing experiments and playing with stuff. I think the generation coming out of college right now struggles a little bit with play, because of what it takes to come out of an elite university. Just to be honest, I went to the University of Michigan, which is, you know, considered a very elite university now, but I ass-clowned my way into it.
I was not a good student in, high school. I was smart enough to pass a bunch of AP tests, which kind of got their attention. But like, I didn't do sports. I didn't do any extracurricular stuff. it was kind of my default school because I lived in the area. Now, if you go to Michigan, you have led a very curated life to get there.
And so, you are used to thinking in this mode of like, everything has to matter. I have to be doing everything for a reason. And the truth is you get real value from fucking around, for lack of a better word, for playing with things. Making messes and experimenting and following your nose and doing stuff that you don't know why you're doing it because it's just interesting to you.
So that's one thing I would coach people of that age coming out of college on: try to cultivate a sense of playfulness and messiness and experimentation as much as you can. And then I think the other thing that's really important, and it kind of dovetails with this, is you have to learn to use the tools.
And I think that has to do with being able to do critical thinking and reasoning. LLMs are these large multidimensional spaces, and so you're directing their attention when you ask them a question or write a prompt. And if you don't know enough to ask an intelligent question, you won't get an intelligent answer.
You will direct the LLM to the part of the space where the dumb questions are, and you'll get a dumb answer more often than not. And so I think it's important to have enough basic understanding and skills to be able to interact helpfully with the machines. But those kind of go together, because I think playing and exploring and messing around with stuff, asking good questions, and getting educated about what does work and what doesn't work.
And being curious and being discontented are all kind of part of the same puzzle. I think the people who use LLMs effectively have that sense: they're not passively asking a question or banging on the cage of the thing till it does what they want. They treat it as a tool, not as an oracle.
If you're coming out of college today, I envy you because I wish I had the next 30 years to work on tech.
Have that sense of possibility that what if versus why not, go get educated, go play with the tools, go make messes, go explore.
Brett: When you think of the most talented engineers that you work with today, what are their characteristics, and do you think in five years they will be the same?
Sam: Some characteristics never change. I think this kind of goes with the technical taste thing. Sometimes you meet people and you're just like, yeah, there's something about that person; they just think lucidly in a certain way about engineering problems.
They tend to be, I think, people who are simplifiers more than complexifiers. I think that's a constant with good engineers. Many people have said this, and I think it's very true. Good engineers are lucid. They can look at a problem, reduce it to its minimal parts, think their way through it very carefully, and simplify it.
That's always a good characteristic. People get a little bit overly attached to actual technical skills, though. I remember when I was in my 20s, I had started off in Pascal.
And I was really worried because C seemed to be taking over and I wasn't a C programmer. I needed to learn C. It turns out to be like a totally stupid thing to worry about. I wound up writing code in like 10 languages over the course of my career. that's not the interesting part of being an engineer.
The interesting part was being able to think about technical problems, simplify them, break them down, and work your way through them. If you can pound out a lot of Python or Rust or whatever it is you're good at, that's good for you, but that's not really what makes you a good engineer. Being a good engineer is about seeing problems clearly, minimizing the wasted effort, and really solving the minimal thing.
that's kind of, I think that's pretty eternal.
Brett: what do you think is the most likely things that will change in great engineers or great engineering teams?
Sam: The use of tools is going to matter. Like, I think we're going to get to a place where you just can't compete if you're not using modern tools. And I saw that a little bit happening in earlier stages of my career were early waves of engineers are all just kind of hackers.
Everybody had their weird idiosyncratic environments and you kind of just ground it out. Then IDEs showed up, and frameworks showed up, and all this stack that we're used to showed up, largely when the cloud started to arrive. And there were engineers who really struggled with making that transition and kept trying to do stuff by hand.
But in the modern day, you can't do stuff all on your own. You have to use the stack underneath you. And I think the same is going to happen around LLM-based coding pretty soon. You can still sit down and write a program with no assistance if you want to, but it's going to start to feel very anachronistic. You're going to be less and less competitive as an engineer if you can't make good use of tools.
And that doesn't mean just, you know, I've got a copilot sitting next to me writing code while I write stuff. It's being very thoughtful about how I'm directing something to help me structure the program, help me think through the program, build pieces of it, automate the grunge away, automate the testing, right?
I think good engineers are going to start to climb this ladder of capabilities with that tool, just the way they did with the last series of tools and frameworks. That's probably going to change how it manifests in terms of ambition: it may be the case that very small teams can do very large projects, or, like we were talking about before, it may be that we just get really ambitious about what we try to do with the same size teams.
Which is kind of where I would put my money. I think we are always ambitious. I think it's very rare for somebody to say, well, I can do the same thing I've been doing, I'm just going to do it with half as many people, and just have stasis, because your competitors are going to say, yeah, well, I can just do something amazing and eat your lunch.
So I think we're constantly going to push for more capabilities with the same teams. I don't know. We'll see.
Brett: Aside from the type of work that you're involved with in tinkering and exploring these new building blocks in AI, how different or similar would it look if I were to watch one of the teams you're working closely with today, building software and leveraging these tools, even if they're in a phase that will be considered primitive in five or 10 years?
Where is the leverage or impact today, or yesterday, for you and your teams?
Sam: There's an interesting piece of leverage that's emerged in one of the teams I've been working on. We built this system I nicknamed the Infinite Chatbot. It's a RAG-based chat system, very much like GPTs are; we just built it earlier, last year. It accumulates memories in a vector database, and you can have very long-running conversations with it.
Conversations over many months that are very coherent. And we have this idea that these chatbots should be treated like documents. I say "bots are docs" a lot, meaning that they have individual names, and you can edit all the pieces of them, save them, share them, delete them, copy them, and so on.
So we have these multiple chatbot things that are used for different purposes. And one of the things we found really useful is brainstorming technical designs with them, for a number of reasons. First of all, the lead engineer and I both have kind of ADD personalities, so we're scattered in how we think through stuff, and these RAG-based systems are actually pretty good at pulling that scattered thought into a more coherent whole. And they're, of course, infinitely patient, so you can have these crazy, wild conversations with them, and they never get frustrated or bored the way a person would.
They're good brainstorming partners. You can sit down and brainstorm through, say, I want to do this technical design in the following way, and kind of go back and forth. And one of the interesting things we did is use Mermaid markdown, which is a graph-drawing markdown, because the model knows how to express diagrams in it.
So at the end of a design, we can say, okay, cool, draw me the state machine or the flowchart or whatever for that, and it'll generate the Mermaid graph. That's really valuable. It's like having a neutral third-party architect on the team that you can go to and talk with.
It's the equivalent of a very senior, very experienced engineer. You can ask it these interesting questions and get good ideas and debates out of it. So that's been a surprising thing we've gotten out of this. There's some other stuff I want to do that we haven't quite gotten to.
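Sam doesn't go into implementation details here, but the memory loop he describes, accumulating every turn in a vector store and then retrieving the most relevant memories to ground the next response, can be sketched roughly like this. Everything below is illustrative: a toy bag-of-words embedding stands in for a real embedding model, and all the names are made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InfiniteChatbot:
    """Stores every conversation turn as a memory; retrieves the most
    relevant ones to build context for the next model call."""
    def __init__(self, top_k=3):
        self.memories = []  # list of (embedding, text) pairs
        self.top_k = top_k

    def remember(self, text):
        self.memories.append((embed(text), text))

    def build_prompt(self, query):
        # Rank stored memories by similarity to the query and keep the top k.
        q = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(q, m[0]), reverse=True)
        return {"context": [text for _, text in ranked[:self.top_k]], "query": query}

bot = InfiniteChatbot(top_k=2)
bot.remember("We decided the parser should be a recursive descent design.")
bot.remember("Lunch orders: two burritos.")
bot.remember("The parser error messages should include line numbers.")
prompt = bot.build_prompt("remind me what we decided about the parser")
print(prompt["context"])
```

In a real system, the retrieved context would be prepended to the prompt for the model call; the retrieval step is what keeps a months-long conversation coherent without resending the entire history.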
We haven't quite gotten to project management yet. That's another one of those things where I think you need a bit more dependability than you're getting right now, but it's an obvious next thing to do. We'll have chat rooms with these RAG chatbots in them, and they'll interact with us and track things.
We're just on the cusp of getting them to do project management and things like that, which is pretty cool. I want to get them to do more complex coding tasks too, and we've been working on that a little, although not as much, because lots of other people are working on that and we don't want to be redundant.
Brett: How do you know you'll be redundant?
Sam: Oh, we don't. We try to work on stuff that isn't redundant, but I know there are lots and lots of people looking at that domain of how you get something to really go do a long-running, complex coding task on its own, without assistance.
I think it's a hard problem because things that work well with LLMs have a shape almost like a cryptographic trapdoor: it's hard work for the human to create the work, but easy for the human to validate it. Those are the things that tend to have high value.
That's challenging in the coding domain, because it's hard to validate code without reading all of it. It's easy in a domain like generating images, because it's hard for me to generate an image but very easy for me to validate whether it's a good image.
So that's a nice bit of leverage from the model, and I think that's why it's a common use case. It's harder to validate things like project management or large chunks of code. Using an LLM as a coding copilot is relatively easy; it's not super high leverage, but it's some leverage, right?
Where you can say, okay, go write this function right now. And I think what's happening a lot is the humans are glancing at the function to see if it's obviously wrong, so they're doing a lightweight validation of it. If they were doing full validation, it might not be as much return on effort.
So the thing we're trying to figure out is: what is the user interface for letting people watch an agent try to run a longer-term, more complex set of tasks, where I can just look at the nodes in the graph and poke at the ones that are yellow or red, rather than having to read everything, follow along, and hold its hand?
And can you reliably get it to ask for help when it needs help? Can you learn to trust it? Things like that.
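The "poke at the yellow or red nodes" interface Sam describes isn't a shipped product, but the underlying idea, a task graph where only unvalidated or failed nodes demand human attention, is easy to sketch. All the names, statuses, and tasks here are hypothetical:

```python
from enum import Enum

class Status(Enum):
    GREEN = "ok"          # validated automatically; no human review needed
    YELLOW = "uncertain"  # agent flagged low confidence; human should glance at it
    RED = "failed"        # agent is stuck and asking for help

class TaskNode:
    """One step in an agent's long-running plan."""
    def __init__(self, name, status, detail=""):
        self.name, self.status, self.detail = name, status, detail

def needs_attention(graph):
    """Return only the nodes a human should poke at, so they don't
    have to read everything and hold the agent's hand."""
    return [n for n in graph if n.status is not Status.GREEN]

# A hypothetical snapshot of one agent run.
run = [
    TaskNode("write unit tests", Status.GREEN),
    TaskNode("refactor parser", Status.YELLOW, "low confidence: API changed"),
    TaskNode("update docs", Status.RED, "asking for help: doc format unknown"),
]
for node in needs_attention(run):
    print(f"{node.name}: {node.status.value} ({node.detail})")
```

The design point is the filter: the human's review effort scales with the number of flagged nodes, not with the total size of the task graph, which is what makes the validation side of the trapdoor cheap.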
Brett: What do you think is the correct thing to do if you're running an at-scale software business today, given this sea change that's happening? Say you're running TurboTax, or you're running Workday. Given that you've seen so many shifts in technology, and I think there are a lot of risks to over-analogizing, as you were saying, are there ideas you would impart to people running these at-scale pieces of software?
Sam: You can probably make bad mistakes by overreacting, for sure. You don't want to throw your whole business away until you really understand where things are going. But I think the worst thing you can do is get stuck in that "why not" mentality.
I think the thing to do if you're running a big business is just have some honest, uncomfortable conversations about, well, what if this or that aspect of AI plays out over the next five years? What if hallucinations get perfectly solved, and jailbreaks get solved, and we've got these superhumanly smart, 10-million-token context window things that are cost-effective? Okay, what does my tax business look like in that case? What happens? Can I dump the entire tax code in and it does your taxes for you? Is it that extreme, or is there something in between? Okay, if it's that extreme, what do I do about that?
What's my fundamental value proposition to the end user? Okay, so how do I deliver that value proposition in this world? Maybe I'm delivering more of it. Maybe I'm delivering the same amount, but cheaper, because the user doesn't want any more of it. That's the line of reasoning I think you have to go through. Whatever it is that's in the way of this disrupting your business now, assume that it gets solved. What does your business look like after that? And then, what do you do? What is the thing your business is about?
What value are you delivering, and how do you deliver it? To go back to the Walmart and Amazon example for a second, Amazon sort of forced that question on Walmart. I think Walmart had one idea of what its true value was, which is that they're just the cheapest thing for everything. That's not a bad value proposition, particularly for a retailer.
But I think Amazon changed the definition of what cheap meant: they added in this dimension of time and convenience, and Walmart had to respond to that. It's not enough if Walmart is maybe 10 cents less for some item that I want.
Because, you know, it's half an hour away and I've got to go wait in line, while Amazon is just click a link. So now Amazon is the cheaper solution in some sense. You could have gone through that exercise if you were the head of Walmart and said, okay, what do users really care about?
They care about getting their thing in the most effective way. And okay, the Internet is making this very effective; I need to have some kind of response to that.
And you can say, okay, I'm going to start taking advantage of the infrastructure I have, start adding this value to my business, and respond in this way. I think there'll be a lot of that pattern happening. You probably don't throw your business away overnight, because it takes a long time for things to actually change.
But it may be that the shape has to change to actually deliver the thing. You may not be delivering the value you think you're delivering; your business may really be about something else. This will reveal that.
Brett: The thing that always complicates it is that you have to take action based on a forecast, and it can't possibly be totally accurate, particularly with tectonic changes. You can have a bunch of small projects or experiments or whatnot, but in a paradigm shift you have to push the chips in or be more aggressive. I mean, remember Lux and all those companies that blew up? They were really hot for a period of time, the on-demand valet thing. Pick whatever example; that certainly didn't materialize.
Sam: I feel like I can pick on those specifically. With Netflix versus Blockbuster, the value proposition was very clear: they delivered that product more effectively, delivered more of it, more quickly. And they had scale effects at their back.
So the bigger they got, the more effective they got, and the cheaper it got. That felt like a pretty clear flywheel. Things like Lux never had positive unit economics, ever, even in theory, and they didn't have a scale effect. They didn't get cheaper the bigger they got.
They were just being pumped with venture money. That was not a particularly thoughtful bet. Actually, that's like taking the Internet and theorizing, oh, the Internet implies this delivery thing, but not thinking about all the engineering pieces in the middle: how do you actually make this cost-effective?
Do you get an advantage from the platform? That one was just a bad decision. It's not totally clear yet how far we're going to get, or how fast, with self-driving. Even these 20-watt supercomputers we carry around in our heads crash a lot, so it's not clear you can put a computer into a car that can drive it all the time, in all conditions, better than a human. That's a technical bet. But the strategy there, I think, is the strategy people are pursuing: try to learn as much as you can, as fast as you can. That's always the fundamental skill. Holding everything else constant, and not killing people, try to learn as fast as you possibly can. Structure yourself around learning and experimentation, and structure your organization around absorbing those lessons and reacting to them. If you're Uber, or if you're Tesla, the theory of self-driving makes a bunch of sense, because there's a scale effect based on data and miles driven; it'll get better over time. If it is the case that practice makes it better, which it seems like it probably will be, then whoever gets to full self-driving that's trustworthy at scale first is going to have a very durable advantage, because they're going to get more and more miles driven on them.
Because why would you drive the less trustworthy one if you had a choice? I think there's a little bit of that going on in medicine too. Once we get to superhuman diagnosticians with AI, why would you go to the worse doctor when there's a perfect AI doctor out there?
Brett: So then what do you think you should do if you're a pediatrician?
Sam: I think there are plenty of practices this will come to late. But if I were a physician today, I would probably start to look for ways I could use AI to de-bias myself and improve my clinical decision-making. That's a hard process, right? There are licensing and regulatory issues that get in the way of it.
Everyone's biased. Everyone makes mistakes. Everyone misses stuff. And if you're a physician, any physician, that has a significant impact on a human being. So you want to be understanding of and educated in those tools when they're ready, and when they're better. We already have isolated diagnostic tools that are better than humans.
There's breast cancer detection and things like that, which are better than humans at spotting it from a radiology slide. If I were practicing, I would at least want to start to understand: when can I enhance my decision-making with that kind of tool?
When is it ready? What are we going to do? There was a study I saw a while ago that compared an AI-only system, a physician only, and an AI plus a physician. The AI plus physician did worse than the AI only; the physician kind of dragged the AI down because they didn't trust it.
I think the point of it is, if the AI is more accurate than you are and you disagree with it about something, you're going to want to dismiss it, because you're like, oh, it doesn't know what it's talking about.
Like, that's clearly not a cancer slide or whatever. That's the natural "why not" story to ask there. It takes a lot of pushing your ego down to be in that situation and trust it and feel like, oh, crap, I made a mistake.
Like, if I hadn't had the AI helping me, that person wouldn't have been diagnosed. That's a bad feeling, and it's hard to admit. But I think that's going to be part of the transformation of work: all of us getting comfortable. And I see analogies to that as well. I remember people being very negative about PCs, very negative about phones, and very negative about the Internet.
And all of those things are very incorporated into our daily life now. The world is wealthier and, for the most part, safer and healthier, because we have all this surplus physical power we can do things with. I think the same thing is probably true of cognitive power. More kids will get educated, and those kids will do more interesting things. People who are maybe not the most skilled practitioners of their craft will get boosted up: weak programmers will become better programmers, and small, overwhelmed teams will be able to get stuff done that they couldn't get done before.
People will get access to diagnostic tools that catch diseases sooner and give them better medical advice, and doctors will get a lot of leverage on their time, where they can handle more patients and help more people with the same amount of energy.
There are lots of things you can imagine being pretty awesome. On top of everything else, there are challenges with demographics everywhere. We don't really have enough people to support the system in the next generation; we're not making enough new people. So it's useful to have some of that offloaded to AI. Let the humans do the valuable work and let the machines do the boring stuff.
Brett: Or maybe the inverse, but hopefully not.
Sam: I don't think the inverse. One of the nice things about AI is it doesn't have emotions, right?
It doesn't get bored or tired. It just goes forever. It's the most patient explainer, teacher, and laborer ever. So let it do the grungy, boring stuff and we'll do the fun stuff.
Brett: Maybe to wrap up: what would you like someone else to say about you if I asked them and your name came up?
Sam: Somebody asked Itzhak Perlman why he was still practicing violin every day at 92, and he said, well, I think I'm finally getting better. I like that vibe. I feel like I never really fully grew up.
I think I have a pretty flexible brain, and I like to play and I like to learn. That's the thing I hope I hold on to as long as possible, and I hope I build useful things. I have to say, it's really humbling.
It's 18 years now since we did Google Docs. I keep expecting somebody to tell me to stop talking about it because it's embarrassing, but it's been humbling and gratifying how that's grown every year, and how people love that product. There's a lot of value and a lot of use in it for a lot of people.
I like building things that people love to use. That's a really nice feeling. That's my best answer.
Brett: Nice place to end. Thank you for such a fun and wide-ranging conversation. I really enjoyed it.