How goal-setting and planning is different for AI products | Anastasis Germanidis (Co-Founder & CTO at Runway)
Episode 111


Anastasis Germanidis is the Co-Founder & CTO at Runway, an applied AI research company shaping the next era of art, entertainment, and human creativity. Runway has raised $237m and was one of Time Magazine’s “100 most influential companies” in 2023. Runway has been a persistent viral sensation in recent years, and is behind many of the most famous AI demos online.


Timestamps:

(00:00) Introduction

(03:23) The unique story of how Runway's co-founders met

(08:27) The origins of Runway

(09:28) Forming the initial product

(13:55) Turning Runway into a company

(14:41) Approach to initial market segments

(18:53) Early-adopters

(21:20) The limitations of being “customer-driven”

(25:54) Forming a vocal community

(27:08) Fostering community

(29:05) The progression of Runway's tech and use-cases

(33:08) How they picked users for early release

(34:00) Expanding past the first 100 users of Gen-2

(35:33) Runway’s approach to safety and content moderation

(36:44) Balancing product development and research development

(43:51) Runway's org structure

(45:08) Goal-setting amidst constant change in AI

(46:50) Why Runway doesn't plan very far ahead

(50:26) Advice to early-stage AI founders

(53:11) Will AI replace video editors?

(55:04) When Runway had the most momentum

(56:49) Anastasis' #1 piece of advice

Todd: Anastasis, welcome to the show. 

Anastasis: Thank you, Todd. Glad to be here.

Todd: Runway has some really impressive customers in TV and film and video editing. And I know that you and your co-founders all have a shared background in film. So can you share a little bit about what you were doing before you started Runway and kind of how you all met?

Anastasis: Yeah, absolutely. Something we like to say at Runway is that Runway was started by three failed filmmakers. We were all involved in film one way or another, but each of the co-founders had a different kind of hybrid career. My personal story is that I was primarily an engineer, but also an artist, and I was balancing both: working at startups doing backend engineering, distributed systems engineering, and then ML engineering, while at the same time developing my own artwork. The artwork was interactive, so technology was involved in that process as well.

But the two worlds were pretty separate. The three of us co-founders met in art school, which is not the place where AI companies usually start. We met in a program at NYU focused on the intersection of art and technology, and we started making small projects together.

We were primarily exploring ways in which machine learning could be applied to creative use cases. At that point deep learning was still fairly nascent; there were just starting to be interesting applications, primarily convolutional neural networks, for art and creativity.

And so we just started building small tools and giving them to artists, designers, and filmmakers, to see what they did with them.

Todd: Did all three of you have both machine learning backgrounds, where you were writing code, and artistic and creative backgrounds?

Anastasis: My co-founders are Cris and Alejandro. Cris had a mix of artistic, business, and innovation background, and Alejandro had primarily a design background. One way to describe the program we met in is that it's where artists go to become engineers and engineers go to become artists.

I came more from the engineering side, and Cris and Alejandro came more from the art side, moving into technology. That's where we intersected and started working together.

Todd: Very cool. Okay, so what were some of the early projects that you all worked on together?

Anastasis: I remember one very well. To give some context: around 2016, 2017, some early generative models were coming out in the image domain. One of them had been released as an open-source model by NVIDIA; it was a generative model trained on driving footage.

It was trained primarily on a self-driving-car dataset, and the main idea was that you could take a very high-level map of a scene and then generate an almost street-view-like scene based on the map you provided. We decided to take this rather utilitarian use case in a more creative direction.

So we built a very simple drawing tool that let you create your own street view of sorts, and immediately people started making all kinds of weird things with it. They would draw giant pedestrians and tiny cars, or a scene where it was raining street signs.

It was initial evidence that you could take a lot of those models and research, which had been developed for utilitarian purposes rather than creative use cases, and, if you viewed them through the right lens and applied an artistic or creative-tools way of thinking to them, build really compelling visual results.

Especially if you get them to the right folks, people with artistic motivation and a creative background. At a high level, that's the approach we took later on in building Runway.

Todd: Was the goal to make art, or to make technology? For those projects you were doing, what was the thing driving you to do them?

Anastasis: The main goal was to explore the potential of this technology in an open-ended fashion. We were still at school at the time, and the purpose of our projects was just: this is a new medium, let's explore what makes sense in terms of new interfaces and tools around this technology.

Because it was so early for those models to be used in that context, there were no principles for how you build products or tools around them, so a lot of the initial goal was exploration. And something we were seeing was that once artists understood how to use those tools, they would make amazing things with them, but the barrier to entry was really high.

So just providing that translation layer, between this technology and these AI models on one side, and the vocabulary of creative tools and design tools that artists were more familiar with on the other, had a lot of potential for us, because it unlocked this potential energy of creativity.

Todd: Okay. And then what was the moment when you said, hey, we want to turn one of these projects into a product, into the thing that would become Runway? When did that happen?

Anastasis: Towards the end of the program, we were pretty convinced that afterwards we wanted to be involved in building tools at this intersection of ML and creativity, but we didn't quite know the shape of that. What we had in mind initially was to start a creative studio where we'd work with different clients and help them incorporate machine learning into their art projects, museum projects, or film projects, for example.

That would have been more of a consultancy. But it turned out that Runway, which was originally my co-founder Cris's thesis project, had gotten some initial interest and traction, so we decided to focus on that instead of our plans to build the creative studio.

Todd: So what did that project look like at the very beginning?

Anastasis: Yeah. The first version of Runway was, I would say, a simple wrapper around Docker, the containerization technology. One of the biggest problems when using those models early on was just running them at all. Today, the frameworks people use for machine learning have centralized quite a lot, around PyTorch and a few specific choices.

But at that point the ecosystem was much more fragmented, so figuring out how to actually run those models was a really time-consuming process. The first version of Runway was essentially a model hub that allowed you to run a variety of machine learning models via Docker and use them very intuitively.

For example, you could run a pose-estimation model and build a project around gestural interactions just by taking the pose data, say, controlling virtual objects via pose. It was a very easy way to use those models in a variety of creative projects, through an interface that was simplified and hid a lot of the technical details of the models.
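
To make that concrete, the pattern is roughly the one sketched below: the model runs inside a container and exposes a simple endpoint, and the creative tool only deals with inputs and outputs. The endpoint URL, port, and JSON shape here are hypothetical illustrations, not Runway's actual protocol.

```python
# Minimal sketch of the "model hub" idea: an ML model runs in a Docker
# container and exposes a simple HTTP endpoint; the creative tool just
# sends frames and consumes results. Endpoint, port, and response shape
# are hypothetical, not Runway's real interface.
import requests

def estimate_pose(image_bytes: bytes) -> list[dict]:
    """Send one video frame to a containerized pose-estimation model."""
    resp = requests.post(
        "http://localhost:8501/predict",  # hypothetical container endpoint
        files={"image": image_bytes},
        timeout=10,
    )
    resp.raise_for_status()
    # Hypothetical response: one dict of named keypoints per detected person.
    return resp.json()["poses"]

with open("frame.jpg", "rb") as f:
    poses = estimate_pose(f.read())

# Drive something creative with the pose data, e.g. move a virtual object
# to follow the first person's right wrist.
if poses:
    x, y = poses[0]["right_wrist"]
    print(f"move sprite to ({x}, {y})")
```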

Todd: I mean, is that kind of similar to some of the model hubs that have gotten popular in the last couple of years?

Anastasis: Yeah, in some ways it was an idea early for its time. Hugging Face Spaces or Replicate are maybe more modern versions of that same model-hub idea. But for us, from the beginning, even though we were taking this model-hub approach,

we were very specifically targeting creative use cases, the creative domain, and visual applications, rather than broadly all kinds of models, like classification or regression models. So it was still a model hub, but very specific to creative use cases and creative tools.

Todd: And how did that original idea morph into what Runway is now?

Anastasis: Through a lot of iteration and learning. I can walk you through some of the steps we took along the way. We released the first version of Runway in January of 2019, and it was a model hub around open-source models for a variety of primarily visual tasks.

Things like stylization, pose estimation, or segmentation, and people could use them in all kinds of interesting ways. What we realized very early on was that people could experiment and prototype really interesting things with them, but the moment they really wanted to put something into production or build real projects with it, there was still a gap between what open-source models could do and the specific performance people needed for their own project. That might have been because, say, a visual artist had a specific style of illustration that wasn't what the model was good at generating, and so forth.

So one of the biggest initial milestones was providing a way for people to build and train their own generative models. Essentially, they could upload an image dataset, which could be anything: visual artists would use their own illustrations, filmmakers would train a model on their storyboards, architects would train a model on architectural designs. And then they would generate images that were similar to the dataset they had trained on.

The first insight there was that the ability to customize and build your own models was really compelling for people. That was the first inflection point, where we saw much greater adoption of the tool.
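
As a rough modern stand-in for that "train on your own images" workflow (Runway's 2019 training pipeline is not public and predates today's libraries, likely using different model families entirely; this sketch uses the diffusers library purely for illustration), fine-tuning a small unconditional image generator on a user-supplied folder might look like this:

```python
# Illustrative sketch only: fine-tune a small unconditional diffusion model
# on a user-supplied image folder so it generates images in that style.
# This is a stand-in for the general "train on your own data" idea,
# not Runway's actual pipeline.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from diffusers import UNet2DModel, DDPMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"

# The user's dataset: any folder of images (illustrations, storyboards,
# architectural designs). ImageFolder expects one subdirectory of images.
tfm = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
loader = DataLoader(datasets.ImageFolder("user_dataset/", transform=tfm),
                    batch_size=16, shuffle=True)

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3).to(device)
scheduler = DDPMScheduler(num_train_timesteps=1000)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, _ in loader:
        images = images.to(device)
        noise = torch.randn_like(images)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (images.shape[0],), device=device)
        noisy = scheduler.add_noise(images, noise, t)
        # Standard denoising objective: the network predicts the noise.
        loss = F.mse_loss(model(noisy, t).sample, noise)
        loss.backward()
        opt.step()
        opt.zero_grad()
```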

Todd: And that was very novel at the time. I'm guessing there was nothing else like that out there.

Anastasis: Yeah, "generative AI" was not a commonly used term. And when we talked to investors and a lot of people in the tech world, it was a bit of a strange concept. To some degree that was justified, because the fidelity and quality of those models' results were not what they are today.

But even early on, it was clear there was something really compelling about being able to generate visual content using those models.

Todd: Cool. Okay. And so when did you decide to make this into a company?

Anastasis: That was a few months after we graduated from grad school. It was clear to us that this was what we wanted to do. First of all, we really enjoyed working with each other. There was a sense that, because we collaborated so well, we were going to figure it out; there would be a lot of iteration and learning, and things might change a lot, but we wanted to work together in this field of creativity and ML.

That was the main motivation for building the company. And also just the belief that, given the rate of progress in the field, these methods, generative models and AI techniques, would really be adopted at mass scale within the industry.

Timing those things is always difficult, but we knew there would be a point where that would happen. So we incorporated in late 2018 and launched the first version in January of 2019.

Todd: And initially, did you have a sense of, hey, we're building this for video editors, or photographers, or a certain segment of the market? Or were you just generally interested in what AI could do for creative tools?

Anastasis: The way we approached the product development process was: how do we learn as much as possible in a short period of time? So the early version of Runway was fairly broad. You could use it for a variety of different creative workflows; it wasn't specifically limited to video or filmmaking.

The video focus emerged from how we saw people using the platform. Even when working with image-specific models, they would end up processing every frame towards building a final video. So they were repurposing models that weren't really meant to be used in a video context as part of a video creation or filmmaking workflow.

Todd: So basically you're saying people were using the early version of Runway to edit frame by frame, because what they actually wanted was to create or edit a video, even though the tool itself was frame-by-frame at that point?

Anastasis: Exactly. And initially it wasn't really working that well frame by frame, but we saw people doing it repeatedly, so we realized there might be something there. Rather than building generic tools that could work with any model,

let's build more video-specific tools. That became the next iteration of Runway.

Todd: Okay. So the 2019 version launched and was fairly generic in terms of the use cases it supported. When did the stronger focus on video start to happen?

Anastasis: Mid-2020. We did release some model updates and some minor features around video, but the big shift towards video tools happened when we released a tool called Green Screen. Green Screen is what's called, in video editing, a rotoscoping tool, and rotoscoping is the process of segmenting out a subject in a video.

So removing the background from a subject, or vice versa. It's a very common thing VFX artists do, and one of the most time-consuming and least pleasant aspects of video editing, because with traditional video editing tools you go frame by frame, annotating the subject and correcting mistakes.

It's a very, very time-consuming process, but a necessary one: if you want to apply any effect to only one area of the video, this is the first step towards all those different visual effects. It's done in pretty much every VFX production.
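
The core idea, stripped of Runway's actual (proprietary, interactive) model, can be sketched with an off-the-shelf segmentation network run over every frame to produce a matte video; here torchvision's DeepLabV3 stands in as the model:

```python
# Sketch: automatic rotoscoping with an off-the-shelf segmentation model.
# torchvision's DeepLabV3 is used as a stand-in; Runway's Green Screen
# model is proprietary and interactive, so this shows only the core idea.
import cv2
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision import transforms

model = deeplabv3_resnet50(pretrained=True).eval()
prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
PERSON = 15  # "person" class index in the VOC label set

cap = cv2.VideoCapture("input.mp4")
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        out = model(prep(rgb).unsqueeze(0))["out"][0]
    # Hard per-pixel decision: every pixel is foreground or background.
    mask = (out.argmax(0) == PERSON).numpy().astype(np.uint8) * 255
    if writer is None:
        h, w = mask.shape
        writer = cv2.VideoWriter("matte.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 cap.get(cv2.CAP_PROP_FPS), (w, h), False)
    writer.write(mask)
cap.release()
writer.release()
```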

As part of building Green Screen, another realization was that we needed to start doing this research and development in-house; we couldn't rely on existing models that were out there, or on minor fine-tunings of existing models. That was another big change: Runway transforming into a more research-focused company.

When we released Green Screen, we saw a really big shift in how the product was used. Before, it was primarily a prototyping tool: it was used a lot by universities as a way to teach what's possible with machine learning, and by innovation and R&D groups to experiment with machine learning.

With Green Screen, it was the first time the product was really used for production, for something really critical to people's workflows, and used by folks who weren't intentionally setting out to use AI. It wasn't "I want to find the best tool that uses AI techniques."

It was "I just want to find the best tool for the job." That translated into big growth in the user base and in revenue, but it also really made us understand the difference between building tools that can actually be used in production and tools that are only used for prototyping.

Todd: So it sounds like you were already getting some attention and usage prior to Green Screen, in more academic use cases; it was already kind of popular, right? And then Green Screen made it very popular. Is that how you think about it?

Anastasis: We had gotten a lot of adoption in universities; that was the number one driver of growth. In its first year, Runway was already being used in many different art schools. Some of the top art schools in the world were using Runway in the classes introducing art students to new technologies and AI.

That was the initial use case we found: with the first version of Runway it was very easy to try out 30 different models and understand the scope and space of possibilities, and then train your own model on something specific. Students would train their own models to build projects around.

There were some really compelling use cases there, and it turned out that that version of Runway was in some ways ahead of its time, given that generative models are a very core piece of the product today.

Todd: That makes sense to me, that the early versions were adopted by students who were interested in the AI side of it themselves and wanted to tinker with it and see what it could do. And then Green Screen came along and all of a sudden it was useful for professionals,

professionals who didn't necessarily care about the AI behind it, but it did the job for them. How did that affect you as a company? Did it encourage you to start building for the professional market?

Anastasis: The learning there was that those techniques could have a huge amount of value for professionals today. So the next step was releasing more tools that combined video-editing tooling and functionality with the magic of AI and AI-assisted workflows.

The next tool we released was called Inpainting, and it allowed you to remove objects or small mistakes from a video, which was also a very tedious part of a VFX workflow. So we started trying to solve those specific, really tedious aspects of video editing and VFX, and at the same time we started building a more fully featured creative toolkit, where not every piece of functionality needed to be powered by ML. The AI functionality was what drove a lot of folks into the product, because it allowed them to do things a lot faster.

But once they were users of Runway, they could perform a lot of the other aspects of editing there. We had a video-editing product with a timeline, an effects stack, the things you'd expect from a non-linear video editor. So the ML tools were what brought people to the platform, but they stayed because they could do a lot of the rest of video editing inside Runway.

Todd: Okay. And how big was the company at this point when you started to add all of these other features on top of green screen?

Anastasis: We were between 10 and 20 people at that point, so still fairly small. Throughout the life of the company, we've tried to remain lean and grow very intentionally. At every stage, we've probably been smaller than we should be according to industry benchmarks.

But that's been an intentional choice on our end.

Todd: And when you were leaning into these video editing and VFX use cases, were you getting a lot of feedback from customers and talking to folks in the industry, or were you just following your own gut? The three of you all had experience in filmmaking, so how customer-driven was it versus how intuition-driven?

Anastasis: This has been a really recurring question in how we build product, and our thinking around it has changed quite a bit. When we were building the video editing tools and those earlier ML products, we really believed in starting from the customer problem, the user problem, and working backwards, incorporating the technology where it was needed.

That sounds like good practice in general, something all companies should do. But what we realized later on was that because AI models and AI research were advancing so quickly, it was hard for people to know all the ways the technology could be turned into products and deliver a lot of value.

So initially our thinking really followed traditional product development and its usual inputs: sitting with customers, doing a lot of user interviews, which we still do; they're still a very critical part of the process of developing products.

But what we realized as we kept building was that starting from research, from what had just become possible, and forming hypotheses about the ways we could build really strong and valuable tools around it, was a development process that ended up working better for us in many ways.

Todd: Interesting. Okay. And so how do you think about that today? 

Anastasis: Today there's the long-term view, the North Star we're working towards, which informs our research roadmap. And then there are the day-to-day or week-by-week updates we make to the product, and those more incremental updates are really informed by what we hear from users.

Thankfully, we have a very vocal, engaged user base, so at any given point we know the top five things people want to see improved. One example: for Gen-2, our text-to-video model, we're currently releasing the ability to control the camera movement inside a video, which has been the top feature request ever since we released the model.

Those things come from our customers and users. But then there's the long-term roadmap, which informs the larger investments we make in research, investments that might take half a year of pure R&D work that doesn't end up in the product. Those are things like: we want to eventually make it possible to use generative models to build an entire feature-length film.

And we want to solve all the problems along the way to get there. That's not something that can be tackled in a very incremental way. You can definitely have milestones in that research, but you need to make the investment and go through a process of trial and error on the research side, rather than always getting reality checks on whether the approach is working and having to deploy every piece into the product.

So those are two different, in some ways contradictory, viewpoints, and we try to figure out how to balance them.

Todd: So, Anastasis, you mentioned that you have a very vocal community. How did that start to take shape? In 2020, 2021, who were the users and customers who were really excited about the product and helping you push it forward? What kind of people were they?

Anastasis: When we first started out, we essentially started from the art community and the art-technology community. In certain ways that was an advantage, because there were no dedicated AI tools for that community; a lot of AI tools were built by ML engineers for ML engineers.

So from the very beginning we had a very active, at the time small, community of artists and technologists. Some of them went to the same grad school as us, some to similar programs, but the whole community of creative technologists is quite small, and everybody knows each other.

That was really nice, because as we released each new feature, we were basically getting real-time feedback. Even early on, when the user base was small and not a lot of people cared about what we were doing, the people inside this small community really saw Runway as the only tool addressing the needs they had.

Todd: Is that still kind of your main vocal user base?

Anastasis: The user base has expanded quite dramatically in the past few years. Many Fortune 100 companies use Runway, and we have everyone from individual creators to small teams of creators to large companies, media companies, and ad agencies.

But there's also this emerging movement of AI filmmakers and AI content creators that has allowed us to essentially co-create the models and the product with them. When we first released Gen-2, our text-to-video model, we rolled out access to a very small group of a hundred or so creators on Discord.

Those creators were very familiar with all the usual limitations of these models; they're very exploratory and know how to identify all the ways the models can be used. That constant feedback, collaboration, and communication with the community has been really critical for fine-tuning the models, and for understanding, as we build the products and tools around them, how to highlight their strengths and guide people towards good results. That early feedback is critical, because it's such an early space; it's almost a new medium that's emerging around AI video and AI film.

We really try to grow not just the product and the models, but the community, and to showcase to the outside world what's possible with these models. We organized an AI film festival early this year, showcasing creators and artists using AI to make short films.

And we're doing that again next year. Just last weekend, we had a 48-hour film festival where thousands of folks had to create a film using Runway within 48 hours, and we saw a lot of amazing things come out of it.

Todd: So I'm curious, you've mentioned the rollout of Gen-2 a couple of times. When you rolled out Gen-1, what were the initial things you saw people do with it that really got you excited? And the same question for Gen-2, which I assume is a lot more advanced. I'm curious what those first hundred or so users were doing when Gen-1 came around, and then when Gen-2 came around.

Anastasis: We released Gen-1 in January of this year. Gen-1, for context, is a video-to-video model: you have an initial video, maybe one you shot with a camera, and you can transform it into another video based on a text prompt. For example, you can shoot a video, describe a different style for it, say a claymation style, and generate a transformed video that maintains the structure of the input video but adopts the style or content described by the prompt. What was interesting when rolling it out was that there were a lot of ways of using the model that we hadn't anticipated. For example, we saw a lot of folks creating really complex scenes and settings using very simple materials like cardboard. They would build a set out of cardboard in their living room, shoot it with their camera, and that would become a castle or a spaceship.

It was really compelling how you could essentially create those big productions from your living room with just your phone camera, so we saw a lot of really compelling use cases around that. People were creating amazing results with very simple materials, or even with just untextured 3D models: Gen-1 could take a mesh of, say, a building and transform it into a full photorealistic rendition of that building.

So what Gen-1 allowed was this level of control and flexibility, combined with the creativity of the model itself. Gen-2 is a slightly different model: it's text-to-video. You're no longer providing an input video; you just provide a text description of a scene and get the final result.

The advantage is that you don't even need a driving video that you're transforming into another video. And immediately when we released Gen-2, there was this really wide spectrum of use cases we hadn't quite anticipated. We saw a large number of people making short films with Gen-2, making commercials for products that don't exist, working in a really wide variety of animated styles, and making primarily more narrative content. With Gen-1 we had been seeing really compelling few-second shots, but with Gen-2 we started seeing larger narratives emerge, because you could just describe each individual scene; you didn't need to find a physical-world analog for each of its components. Right now we're seeing people combine Gen-1 and Gen-2 in interesting ways.

They can use Gen-2 for a lot of the footage in the final output, where you don't need very precise control. But when you have, say, a character speaking or emoting in certain ways, where you need that fidelity and level of expressiveness, you can use Gen-1 for just those pieces.

So I think both models currently have their place in the workflow, and we're seeing a lot of interesting ways in which they're used in conjunction.
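
As a purely hypothetical illustration of that combined workflow (every function name and parameter below is invented for the sketch; none of this is Runway's actual API), the two call shapes differ only in whether a driving video is supplied:

```python
# Hypothetical sketch of combining a text-to-video model (Gen-2-style)
# with a video-to-video model (Gen-1-style). These are placeholder
# functions, not Runway's real API; they only illustrate the call shapes.

def text_to_video(prompt: str) -> str:
    """Gen-2-style: scene fully described by text; returns a clip path."""
    return "generated_clip.mp4"  # placeholder for a real backend

def video_to_video(driving_video: str, prompt: str) -> str:
    """Gen-1-style: keeps the structure/motion of driving_video,
    restyles it according to the prompt."""
    return "restyled_clip.mp4"  # placeholder for a real backend

# Establishing shots: no precise control needed, text-to-video suffices.
shot_1 = text_to_video("wide shot of a rainy medieval castle at dusk")

# A performance shot: film the actor yourself, then restyle it, so the
# exact facial motion and expressiveness are preserved.
shot_2 = video_to_video("actor_closeup.mp4",
                        "the same person as a claymation character")

final_cut = [shot_1, shot_2]
```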

Todd: I think it's interesting that when you rolled out Gen-2 this year, you started with just 100 users, and it was 100 users you talked to on Discord. How do you choose who's in that set of 100? Is it a variety of different kinds of creators, or is there a specific type you focus on?

Anastasis: In the beginning it was primarily Runway users we already knew from early iterations of the product, people we had already seen do amazing things. Some of the artists had submitted and shown films at the film festival. So a lot of that initial group was people whose work we had already seen, who were already making compelling, interesting things with AI.

We grew from there. A lot of amazing creators emerged as the community expanded, people for whom Gen-2 was the first model they really started making projects with. But the initial group was people we had seen using Runway extensively before, or making really interesting work with AI using previous models, image models or other tools.

Todd: Okay, and then how do you decide when it's ready to go beyond 100 people? Is it a quality thing? Is it a user feedback thing? Is it a performance and latency and cost thing? How do you think about that?

Anastasis: It's a mix of all of those things, but primarily quality. Even in the early Discord days, we had a mechanism for providing feedback on whether a result was satisfactory or not, and we relentlessly tracked what percentage of videos people found satisfactory, results that met their expectations or were aligned with the description they provided.

That became the quantitative measure we tracked to decide whether the model was ready for a wider rollout. Of course, there would also need to be product and UI work to build the right front end around those models and go beyond the Discord limitations.
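
A readiness metric like that can be as simple as a running share of thumbs-up feedback per model checkpoint. A minimal sketch, where the feedback event shape and the threshold are assumptions for illustration, not Runway's actual telemetry:

```python
# Sketch: track the share of generations users rate as satisfactory,
# per model version, as a rollout-readiness signal. The event format
# and threshold are illustrative assumptions.
from collections import defaultdict

counts = defaultdict(lambda: {"good": 0, "total": 0})

def record_feedback(model_version: str, satisfactory: bool) -> None:
    counts[model_version]["total"] += 1
    if satisfactory:
        counts[model_version]["good"] += 1

def satisfaction_rate(model_version: str) -> float:
    c = counts[model_version]
    return c["good"] / c["total"] if c["total"] else 0.0

# Example: gate the next rollout stage on a threshold.
record_feedback("gen2-v0.3", True)
record_feedback("gen2-v0.3", False)
READY_THRESHOLD = 0.7  # illustrative value, not Runway's actual bar
if satisfaction_rate("gen2-v0.3") >= READY_THRESHOLD:
    print("expand rollout: 100 -> 1,000 users")
```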

As we rolled out the model, we went from 100 to 1,000 to 10,000 users before we released the web and mobile versions of Gen-2. Along the way we also got a much better sense, for capacity planning, of whether our infrastructure was ready to deploy this to everyone who uses Runway. So there were a lot of those considerations.

The other part was the safety considerations: making sure we had the right measures in place for content moderation, so that once we rolled it out to a much larger set of users, it wouldn't be used in malicious ways.

Todd: How do you think about safety and content moderation? What's your approach?

Anastasis: Everything you generate within Runway goes through content moderation models, both text moderation and visual moderation; it's almost a multimodal problem. The way we operate on the alignment and safety team is to think ahead: assume the technology will keep getting more photorealistic, and ask where this model will be in six months.

Then we start laying the groundwork for that. We started building those content moderation methods earlier in the year, even with Gen-1, which is a fairly limited model that's difficult to really use in malicious ways. But even then we wanted to roll out those models and methods, so that as the models improved we wouldn't need to do a lot of catch-up on content moderation; we'd be well prepared.
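
Conceptually, a multimodal moderation gate looks something like the sketch below: the prompt goes through a text classifier before any compute is spent, and sampled output frames go through a visual classifier before delivery. The classifier functions here are placeholders, not Runway's actual models or policy.

```python
# Conceptual sketch of a multimodal moderation gate. The two classifier
# functions are hypothetical placeholders standing in for real text and
# image moderation models; this is not Runway's implementation.

def text_is_unsafe(prompt: str) -> bool:
    return False  # placeholder: wire up a real text-moderation model

def frame_is_unsafe(frame) -> bool:
    return False  # placeholder: wire up a real visual-moderation model

def generate_with_moderation(prompt: str, generate_video, sample_frames):
    # Gate 1: reject disallowed prompts before spending any compute.
    if text_is_unsafe(prompt):
        raise ValueError("prompt rejected by content policy")

    video = generate_video(prompt)

    # Gate 2: check a sample of output frames, since unsafe content can
    # appear even from a benign-looking prompt.
    if any(frame_is_unsafe(f) for f in sample_frames(video, every_n=10)):
        raise ValueError("generated video rejected by content policy")
    return video
```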

Todd: Okay, I want to switch gears a little bit, Anastasis. One of the things I've heard you talk about that I think is really interesting is that Runway is different from a normal tech company: a normal tech company is mostly focused on product development, whereas at Runway you have product development and a very large research component to what you do.

And so I'm curious, how did you think about kind of organizing both the research side of things and the product development side of things early on? And then how has that kind of changed over the years?

Anastasis: There have been a lot of learnings in how we combine research with more traditional product development at Runway. The biggest early learning was that initially we had similar schedules for developing new research and developing new product, so we expected to compile results from a research project in the span of a few weeks.

And if those results weren't there, we moved on to the next project. We found that was a very difficult timeline to follow if you actually want to execute more ambitious projects on the research side.

Todd: What was an example of that?

Anastasis: One concrete example was when we first released Green Screen, which was merely a segmentation model. That essentially means that for every pixel of a frame, it tells you what's in the foreground and what's in the background, with no space for anything in between. But in practice you really want to solve this problem with a matting method, because you have things like hair: intermediate, semi-transparent regions in the frame.

We initially wanted to solve that, to upgrade our segmentation model to a matting model, because matting produces better results when you segment out the subject and blend it with another background. And we initially tried to tackle the project on that one-week timeline.
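
The difference is easiest to see at the compositing step: a segmentation mask makes every pixel all-foreground or all-background, while a matte assigns each pixel an alpha in [0, 1] and composites C = αF + (1 − α)B. A minimal NumPy illustration (the arrays here are random stand-ins; in practice the mask and alpha come from a model and the frames from the video):

```python
# Binary segmentation vs. alpha matting at the compositing step.
import numpy as np

h, w = 720, 1280
fg = np.random.rand(h, w, 3)   # stand-in foreground frame (the subject)
bg = np.random.rand(h, w, 3)   # stand-in replacement background

# Segmentation: hard 0/1 mask -> jagged edges around hair and soft edges.
mask = (np.random.rand(h, w) > 0.5).astype(np.float32)[..., None]
comp_seg = mask * fg + (1 - mask) * bg

# Matting: continuous alpha in [0, 1] -> semi-transparent pixels blend
# smoothly, which is exactly what hair and motion blur need.
alpha = np.clip(np.random.rand(h, w), 0, 1)[..., None]
comp_matte = alpha * fg + (1 - alpha) * bg
```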

The approach was: let's try out this idea, and if it doesn't work, we move on to the next idea. But we kept hitting a wall where we weren't getting the improvements we wanted, or we were solving the problem in a very limited way. When we finally had an updated version of the model, it was barely better than the previous version.

There was really no justification for rolling it out. So we made the intentional choice to completely separate the roadmap of the matting research project from the regular product development process. And there were a lot of improvements we committed to in the product itself that didn't depend on a better model:

just a lot of workflow improvements, making sure users annotate frames properly, guiding the user along the way. So instead of having product wait for a specific research update, we let the research run. Of course there's always time-boxing, you don't want it to run for years, but we basically give the research project more breathing room.

And when the research project is ready, we bring in the rest of product, engineering, and design to incorporate the update into the product. We've found that minimizes the dependencies between the two.

Todd: So for a research project like matting, you must have known it was a tractable, solvable problem on some time horizon, right? Otherwise you wouldn't have invested in it. How do you decide: okay, this is going to take a long time, but we want to invest in it from a research perspective because we think it's going to be really important?

How do you decide whether an idea qualifies for that or not?

Anastasis: I wish I could say it's a fully calculated decision. You never have enough data to have absolute confidence that something is solvable. But you can look at the literature to see, broadly, the level of performance of other research in the field, what to expect, and develop some baselines that tell you: we can at least get this level of performance.

That's usually how it starts. We don't want to pursue projects where there's zero precedent at all in the research world; we want evidence that the approach could work, even if the results of existing methods are not quite where they need to be.

But we make a reasonable guess that the problem is solvable within a specific timeframe and then invest resources in it. We sometimes don't get it right, but I think we've refined our intuition for how to probabilistically assess whether something will succeed.

Now that the research team is larger than it was at that point, we can also tolerate more projects with a lower probability of success, because we can run more projects in parallel. So we always have some high-confidence updates to the models and our tools, and at the same time more ambitious, high-risk, high-reward projects.

But that required a slightly larger team; when we were a couple of researchers, we really had to have high confidence that something would work.

Todd: How large is the research team relative to the product engineering team? Is it like half the size or along those lines?

Anastasis: It's about one third at the moment; research is one third the size of product, design, and engineering.

Todd: And do you try to keep them in sync at all? What if there's a product feature that relies heavily on a brand-new research breakthrough? How do you keep them in sync, or do you really let them run independently of one another?

Anastasis: Some aspects run independently, like those more long-term research projects, but we generally try to have research collaborate very closely with product and engineering. That can take the form of research consulting with folks in product and engineering on how best to build interfaces and tools around the models.

And even though research projects might operate on different timelines, there's still very close collaboration between the two teams on deploying those models and updates. They don't really operate in silos; they operate together. It's just that we always have some pure research projects and some projects that are essentially research being productionized.

One other aspect of the collaboration that I think has been really critical for us: beyond product, we also have a creative team within Runway, and the creative team has two roles. One is creating content for communication, for our brand and the way we announce updates to the product and so forth.

The other function, which is very critical, is what we call a workflow-architect function: continuously using the models and the tools internally and providing feedback to the research team and the product team about what's working and what's not.

So researchers are in constant communication with the creative team, and they work closely together to develop new models: getting feedback on whether the results are there, whether the controls make sense, all those things. Research is very plugged in to the rest of the org.

It's just that we don't expect everything research works on to deliver results within a very short timeframe.

Todd: How do you manage these teams at the executive level? Is there a head of research, a head of product engineering, a head of creative? How does it all map organizationally?

Anastasis: Roughly, our engineering org is split between research, backend, and frontend teams, and then we have a creative team that's separate from that. Right now I manage the research team, and the other engineering functions have dedicated managers.

That might change in the future as we scale. But one thing that has been really critical for us as a startup is that even leadership keeps some level of individual contribution, some technical, hands-on aspect to the way we work, and stays involved in projects. Otherwise, when the research and the technology are changing so quickly, it's easy to lose track and lose context on what's possible. So even the engineering leadership is very involved, maybe not contributing on the critical path, but really having hands-on involvement in different projects.

Todd: So it sounds like all of the engineering leaders are still in the code?

Anastasis: Yeah, they try to be, to different extents. I think it's really important for building a fast-moving organization to have leaders who have context on what we're working on.

Todd: Cool. And then a last question in this section: how do you do goal-setting across these different groups? Do you plan goals on a monthly basis, an annual basis? And again, how much do you try to keep them in sync across the different teams?

Anastasis: We have quarterly goals, where we set a few themes for a given quarter and then define specific projects we want to tackle. Gen-2, for example, was one large effort that we tackled at the beginning of the year. Another example of a quarterly goal we set last year was to build 30 "magic tools" in a single quarter, and we went ahead and built those tools. We try to make those goals fairly ambitious, but we don't try to reach a really high level of precision about how every single tool and feature is going to look, because we've realized that things change a lot during the development process.

That's especially the case for AI products. It's really difficult to have a waterfall process for developing these things: you can design an interface around an AI model and write a detailed spec for it, but the moment you actually try to build it, you discover the limitations of the model, or you discover the model can do something you didn't expect.

So we try to be very iterative in how we develop the actual tools and products, while keeping a high-level, ambitious goal that we want to accomplish.

Todd: Is there anything you wish you had done or changed sooner in terms of how you've evolved your team and your engineering organization?

Anastasis: What's been interesting is that, in a lot of ways, when companies evolve to the next stage, with more revenue and more customers, they traditionally add more process: more detailed specs, more robust timelines. For us it's actually been the opposite. In some ways, we had the most detailed product roadmap we've ever had at Runway when we were 10 people, with a five-year vision of how we would build a futuristic video-editing product. And it turned out we learned so much during the first quarter that that vision no longer made sense. So a lot of our product development process has shifted towards making fewer assumptions: having very ambitious goals, but really leaving space for learning while developing the thing, and being open to surprise, open to the possibility that in the middle of building a product or a research project, a paper comes out that makes the project outdated. In that case, if you've built up a lot of hope around that one project, and a lot of commitment from different stakeholders that we really have to make this work, you're not going to make the decision to leave the project behind in favor of the new method that outperforms your approach. So sunk-cost fallacy is a big thing we try to avoid, and culturally we've learned to embrace change and be excited by change.

And not to worry too much about having spent a lot of time on something that turned out not to be the right direction. We try to celebrate that, because otherwise people get too committed to an idea that might not be the direction things are going.

Todd: That's really interesting. It's interesting to me that at the beginning you had a very specific five-year vision, and you learned over time that you have to be open to surprise. It's kind of the opposite direction from the one most companies go in, as you said. How does that work, though, when you have external customers, professional customers? Don't they want to know what's coming?

Anastasis: To some extent they do, but primarily what they want is for you to solve their problems well. And when you're not so committed to your roadmap, you can sit back and really try to understand the customer's pain points; very often you're able to develop solutions for them that might not fit your high-level plan of where you need to go, but that really solve the problems they have. Part of our job when working with larger enterprises is not just to sell the product and get folks using Runway; it's also to translate for them, to showcase what's possible with machine learning and generative models, how they can change their creative workflows so they can work much faster and accomplish more with these methods.

A lot of the work we do is partly showing them the product, but also developing customized solutions for them, fine-tuning models for them, figuring out how best to serve them. Ultimately that's what they're most interested in, more than understanding the five-year plan of Runway. As long as you build trust that you're going to solve the problems they have, that, I think, is the most critical thing.

Todd: I wonder if there's also some advice in here for founders who are just getting started in AI. Because some of the things you've mentioned are that the field changes so quickly, right? Every week or every month there are new techniques and new innovations.

What advice would you give a founder who is starting out now and building a company in AI, about how specific their vision should be, how specific their product idea should be, versus embracing continuous change?

Anastasis: I think it's a balance between being very specific about who you want to serve, having a clear vision of where you want to arrive in the future, versus having a very specific idea of how your product should look: how you'd structure pricing and all those other details around it.

For us, we never changed our focus on creators, on creating new kinds of storytelling tools and new kinds of creative tools; that has been a very consistent theme. And it has been the driver of a lot of the changes we made, whenever we realized that a given product direction did not accomplish that goal in the way we were hoping.

So be very clear about the specific area of focus: the market, the community, or the set of customers you want to solve for, rather than being very specific about using this particular ML model or that framework.

You need to be a bit stubborn about the general vision of what you're building, but at the same time very flexible about the specific tools and the specific product.

Todd: That makes sense. So how do you think about the vision of Runway now? Do you have a North Star, a five-year vision these days, that you're working towards?

Anastasis: Our vision continues to be to invent new kinds of storytelling tools and to introduce generative models into creative workflows at the biggest possible scale. I think we're still very early in terms of the impact these models can have on the industry. There's a lot of hype and conversation around them, but in terms of concrete adoption, we still have a lot of room to grow, and a lot to figure out in terms of the tools and interfaces around these models.

I think we're going to move from focusing a lot on the models and the foundations to focusing on the workflows, the tools, the UIs, and the specific products built around AI models. That's going to be a lot of the focus for the foreseeable future, for the coming years, more than the underlying infrastructure and foundation models.

Todd: Anastasis, this has been great. I want to wrap up with a few questions that I think will be really interesting to founders, people building in this space, and everybody paying attention to it. When you think of the future of video editing, how much do you think will be done by AI versus humans?

Anastasis: When presented with that distinction, I generally try to challenge it. What we're building is a way to augment the capabilities of humans, the capabilities of creators, rather than to replace their workflows or the creative process.

The way we think about content creation, making art, making movies, is that there are a lot of aspects of the process that are particularly tedious, where there's not a lot of creative decision-making, and we can use AI models and techniques to accelerate those and make them less tedious. We think of our tools as multipliers of human creativity. So it's less "you type one prompt and get a full feature-length film," and more "you can now explore ideas for your film at a much faster pace."

You're able to see different possibilities without spending a lot of resources or a lot of time. That's the approach we've taken with our tools: augmenting human creativity rather than replacing it. And we still see today that the best things that come out of Runway, and the people who get the most value out of it, are people with a really clear vision and very specific ideas they want to express.

Runway is not currently going to help you come up with ideas. It's just going to help you build those ideas faster and iterate over more ideas.

Todd: Looking back, at what point in the company building process did you feel like you had the most momentum? And what do you think made that moment in time unique?

Anastasis: A few moments come to mind, but one in particular was what I mentioned earlier, building 30 magic tools in one quarter. We set that goal, I believe, early in September of last year. We had been working on research for generative models for a while, in parallel with building video tools, and those two efforts had not yet met in the middle. We were making a lot of progress on the generative side; the results were getting better and better, but they weren't yet production-level quality, not yet able to be translated into really useful tools. Then there was an inflection point last summer where we saw that change: those models could become really useful, and you could build a lot of robust functionality around them.

So we set this very ambitious goal of building 30 tools, and that really motivated the team to move at a pace we had never moved at before in terms of iteration speed, and to think not just about how we build those tools one by one, but about how we change our infrastructure and development process to support building tools faster: integrating new models faster, building new user interfaces faster, running design and user interviews and getting feedback from users faster.

So not just move faster and work more, but move smarter, and think about the infrastructure that would enable us to iterate very quickly on new tools.

Todd: And then finally, we've talked about advice to founders, future founders, but what is some advice you find yourself giving out kind of over and over again?

Anastasis: Something I've realized as we've evolved this company across many different stages, many different products, many different iterations: at each point, we'd have a big release and we'd think this was it, this was what Runway would be forever.

We'd just keep growing that specific product idea or that specific tool. And that never ended up being the case; it always evolved into something else. One broad mental model I've developed is that very often, when people describe their product strategy or the way they do company building, there's one specific idea for how the company will differentiate itself from other companies. If you look at the AI space, maybe you think the model is the moat, or the community is the moat, or the specific audience you're building for is the moat. What we've tried very intentionally at Runway is to shift strategies.

At different stages of the company, we assume the strategy that made sense to grow us to this point is not the strategy that makes sense to grow us to the next point. For example, with video generation models, we assume there's going to be a point in the future when everyone has a photorealistic video generation model.

We don't expect that to be the differentiating aspect of Runway forever. It's something that has really enabled us to grow a robust community and to learn a lot in a very short period of time. But if the conditions change, if the research changes, if the technology changes, we can shift to a new strategy; the company's identity is not tied to one specific strategy. That's one area where I think a lot of companies differ: they think of themselves as having a very clear story about why they're different from others, but I think the real differentiation comes from internal processes and internal culture, how you navigate adversity, how you learn from what you're building, and how you evolve the company.

Todd: Anastasis, congrats on all your success so far with Runway, and thank you for being here. We really appreciate it.

Anastasis: Thank you. It was great to chat.