Parag Agrawal is the co-founder and CEO of Parallel, a startup building search infrastructure for the web’s second user: AIs. Before launching Parallel, Parag spent over a decade at Twitter, where he served as CTO and later CEO during a period of intense transformation, as well as public scrutiny.
In this episode, Parag shares what he learned from his time at Twitter, why the web must evolve to serve AI at massive scale, how Parallel is tackling “deep research” challenges by prioritizing accuracy over speed, and the design choices that make their APIs uniquely agent-friendly.
In today’s episode, we discuss:
- Why Parallel designs for AI as the primary customer
- Lessons from 11 years at Twitter and applying them to a startup
- Potential business models to keep the web open for AI
- Hiring philosophy: balancing high potential and experienced talent
- The evolving role of engineers in an AI-assisted world
- Why “agents” are finally becoming useful in production
- And much more…
References:
- Clay: https://www.clay.com/
- Index Ventures: https://www.indexventures.com/
- Josh Kopelman: https://www.linkedin.com/in/jkopelman/
- KLA: https://www.kla.com/
- OpenAI: https://openai.com/
- Parallel: https://parallel.ai/
- Patrick Collison: https://www.linkedin.com/in/patrickcollison/
- Stripe: https://stripe.com/
Where to find Parag:
- LinkedIn: https://www.linkedin.com/in/paragagr/
- X/Twitter: https://x.com/paraga
Where to find Todd:
- LinkedIn: https://www.linkedin.com/in/toddj0/
- X/Twitter: https://x.com/tjack
Where to find First Round Capital:
- Website: https://firstround.com/
- First Round Review: https://review.firstround.com/
- X/Twitter: https://twitter.com/firstround
- YouTube: https://www.youtube.com/@FirstRoundCapital
- This podcast on all platforms: https://review.firstround.com/podcast
Timestamps:
(1:26) Founding Parallel with an AI-first mission
(3:23) From Twitter CTO/CEO to startup founder
(6:20) What the AI era spells for companies
(7:58) The CEO to founder pipeline
(11:18) Reflections on Twitter’s transformation
(17:48) How Parallel was born
(22:31) Early use cases for Parallel
(31:42) How has Parallel’s ICP changed?
(34:37) AI’s impact on competitor dynamics
(36:06) When should founders launch?
(37:43) Parag’s fundraising framework
(40:14) Building a high-impact engineering team
(44:49) Counterproductive uses of AI
(47:35) How will the software engineer role evolve?
(49:10) How are Parallel’s customers using AI?
(53:27) Defining agents in 2025
(55:02) Parallel’s long-term vision
(1:03:43) Parag’s growth as a founder
Parag: I don't tend to enjoy public attention too much. During the Twitter era, I couldn't say very much. Now I want to harness public attention towards the future I want to build.
Todd: Hey everyone, it's Todd Jackson. I'm a partner at First Round. For today's episode, I'm excited to sit down with Parag Agrawal. He's the founder and CEO of Parallel, an infrastructure startup building a new web for AI agents.
Parag: We believe that AIs will use the web a thousand x, million x, more than humans ever have.
Todd: I first met Parag when we worked together back at Twitter, where he spent over a decade rising from engineer, to CTO, to CEO, before selling the company to Elon Musk.
Parag: I was entertaining some calls about taking on jobs that might have on paper felt like big jobs. I just couldn't join a mission.
Todd: Now, he's a founder building in the post-ChatGPT era. In our conversation, we dig into what he's learned from scaling Twitter and leading it through its most dramatic chapter, as well as what he's had to unlearn as he builds Parallel. We also explore what is and isn't working in production, where agents are headed, and why he believes the next generation of the web will look fundamentally different.
Parag: Our customer is an AI. We are going to design an API that is excruciatingly slow.
Todd: Let's dive in. Parag, welcome to the show.
Parag: Thanks for having me.
Todd: We're going to dig into a bunch of different topics today, but just to make sure everyone has the starting context, could you start by explaining at a high level what your new company, Parallel, does?
Parag: At Parallel, we are obsessed with the open web. We believe that AIs will use the web a thousand X, a million X more than humans ever have. As a result, the web will need to transform to support that usage. We've been building the best tools that AIs can use to access content on the web. In fact, the products we've shipped recently are the only APIs that hit human, and even higher-than-human, performance on a variety of deep research benchmarks. That's a key milestone we've been working towards and finally hit.
Todd: You were at Twitter for 11 years, until late 2022, and there was just a little bit of publicity around that. You started Parallel and raised your seed round in 2024. The company's still less than 18 months old, and you've been pretty quietly building in stealth up until now. Is that right?
Parag: That's right. You know about our seed round because you were with us through that journey. We've been building away. We've had products out with customers, and we've been working with some forward-looking customers, whether they are private early-stage companies or slightly more established companies. It's been amazing to work with a bunch of customers who are ahead of the curve, as we see them. They've essentially shaped us and our product, and that's why we've been quiet, but now we think the product is there.
Todd: You and I actually met each other 10 years ago way back before that in 2014 at Twitter. One of the big projects that we took on together was how to introduce ranking into the home timeline at Twitter, which was hard. But I bring that up because I think it's so interesting to compare that to what you are working on now, because you spent a decade-plus being responsible for all these big, at scale, technical decisions at Twitter, dealing with all the constraints of 10-year-old infrastructure. And then you got this opportunity to start completely fresh in a post-ChatGPT world with a clean sheet of paper. What was that transition like for you?
Parag: When you're working with an established product that has product-market fit, like Twitter, versus a zero-to-one journey, every way of working feels different to me. When you're working with a team of 10 versus a team of 1,000, every practice you have as a leader, in my opinion, must be different. When you're working at Twitter in 2019 with legacy infrastructure at scale versus at a startup in 2025 with 20 people all in person, everything you do is different. In some sense, this startup is like a journey in unlearning everything I might've learned before.
Todd: What are some examples of that? It could be the pre-AI versus the post-AI. It could be just the size, the number of people. It could be incremental changes on an existing thing versus a brand new thing. What are some of the specific ways?
Parag: I think all of it... Really, think of team. Twitter was famously remote-friendly. At Parallel, we are all in-person in one office five days a week, and I think you get to do that at small scale. I think you must do that. At least I must do that for who I am, for this moment, zero to one. If you think of pre-AI versus post-AI, I actually think you have to think about product very differently. At Twitter, if you asked for my framework on how to build product, it was like, "Okay, what do people need? What can be built?" You find something in the intersection, you try to ship that, and you have to track technology change at some rate in order to find that constantly. People need everything with AI. That's almost obvious. Now, you really have to zoom into what works and what can work and where it is going, and you have to look a step ahead and build towards that. It's a very different art. Everything is now stochastic. You built deterministic systems earlier, and now you're really thinking about building and communicating stochastic systems. It's very, very different in my mind in terms of how you think about product.
Todd: And systems that are getting better so quickly. The leaps that are being made every 6 months, 12 months is amazing to me, but it's interesting when it comes to product. How do you design the product in such a way to say, "Okay, the models are where they are right now, but in six months they're going to be much different and better"? How does it influence the way you think about thinking ahead on product?
Parag: We've made two opinionated stances when it relates to that. One thing we did early on was we took the counter approach of saying our customer is an AI. And one of the things that led us to was: we are going to design an API that is excruciatingly slow. Humans have no patience on the web. When we were building Twitter, we thought, "Okay, if you don't serve the timeline within a second, you can lose people." You can see it in your numbers. You can see it in your metrics, so you've got to figure out how to do the best thing possible in a second.
Todd: Performance is incredibly important when the user's sitting there, waiting for the query.
Parag: If it's an agent doing something in the background, which could be running a workflow when there's no human waiting for it, or if it is someone trying to collect a large amount of data and fill it into a database and you're doing 10,000 operations, a million operations, the latency of each operation doesn't matter. If you relax that constraint, you get to do more. Now, today we all call it deep research because that's better branding than slow. With agents as our customers, we want to leap towards automating end-to-end, and so that helped us build differently. We took a leap of faith that it's going to happen. Even if the models that day weren't good enough, we bet on models getting better to support that.
Todd: On top of all the things that were happening at Twitter and setting the drama aside, you went through some very big, what I would call, altitude shifts, the transition from CTO to CEO. I'm curious, just what was that like for you and what did you learn from that experience that may have equipped you to be a better founder in some ways?
Parag: One of the things when I was taking on the CEO role was, until then, I really thought of my job as being versatile to the needs of the company and shaping myself to be most effective, which doesn't mean that you don't get to change the company around you or shape what the company should be. One of the big shifts I went through taking on the CEO role was I decided that the most productive way to operate would be to shape the company around me, take on the founder role for the company, because I think there is no other way. If you try to fit yourself to an organization, a company, a plan that exists, I don't think you do justice to the role. When I stepped in, I was like, "I'm going to shape the company to what I believe and who I am." That was a mindset I took on, taking that role. Really, that manifested as me having the conviction in that moment to change the company, to change how Twitter did things. I had been there for a while.
Todd: You must have done that as the CTO as well. I'm assuming you had so much scope.
Parag: I don't think it's the same. I think it's very, very different. I think I had a lot of agency and a lot of influence, but I never... Perhaps it was my failing to not embrace it, to say, "I can change anything here." I tried and was often frustrated. One of the things I did as soon as I took the role is, I think, we changed the structure of the company. We changed the leadership team substantially within a month. I was about to meaningfully change how many people were at Twitter and what our roadmap and priorities were. Now, we didn't get a chance to see it all through, but I think it's really important. It's different to embrace shaping something to you versus shaping yourself to something.
Todd: How did it feel to have so much public attention trained on what was going on inside the company? What was that experience like for you and is there anything that you took away from it?
Parag: I don't tend to enjoy public attention too much, but I think it's a very valuable thing. As a founder now, I actually think to drive the impact I want to drive, to shape the world the way I want to shape it, you have to use public attention to find the people who want to join you in your mission, to find customers, to evangelize the future you want to live in, and to find partners. In some sense, public attention is really important. Now, what was unfortunate about the public attention I did receive during the Twitter era was that, in that moment, I couldn't say very much, and in that moment I was going through what felt like a zero-sum game rather than the current moment, where it feels like an extreme positive-sum game. Now, I want to harness public attention towards the future I want to build, but I didn't get to do that then.
Todd: Yeah, because I was watching it from the outside at that point. Just knowing you and working with you, there was stuff being said about you, but you had to remain pretty quiet. I mean, is there anything that you want to say now?
Parag: Lots, but I think one thing I'll say... I think people assumed or might have assumed that, as an insider who had been at the company for 10 years and takes on the role of CEO, I was going to continue going in the same direction. The reality is that my goal was to make Twitter what I think it could've been and should've been. That required a massive transformation of the company, the people, the product, and really all of that was coming together. One weird anecdote, I think, out there but not understood is that three days after we signed the deal to sell the company, which was a Monday... Thursday was our earnings call, and there was a whole plan to really trim the company down by about 20, 25% going into that earnings call, because I think Twitter needed to change and adapt to the current moment and go from being just a great product to being a great product and a great business. That was the change, I think, that was coming, which would allow you to innovate on product in a real way. I didn't get to see that through, and I wish more people got that.
Todd: You didn't get the chance to do that, but now that you've been an observer of what's happened to Twitter, now X, over the last couple of years, what is your take?
Parag: I was obsessed with this idea, with what we used to call internally Project Saturn at the time, which is: how do you take this whole problem around content moderation and actually harness the power of Twitter and the community and the users to do it themselves? Now, it's idealistic, but I think that's in the realm of possible. And as we've seen the team continue to work on what we used to call Birdwatch, and is now called Community Notes, I think it's great. It's leveling up to the top ideals of being transparent, being open, and enabling people to contribute to Twitter in new, unique ways to make the entire experience better. If you have to think of what happened, well, it's something I really believed in, and I'm so glad it continues to be something where I believe others are now even trying to replicate it. I'm so glad that continues to happen. It's so amazing.
Todd: Was there one or two things that you took away from the 11 years spent at Twitter, either as a leader or just in terms of the ethos of the company and of the user base that you brought to Parallel that came with you?
Parag: I think the core ethos at Twitter, the why for me was... The words we used to use was public conversation, but really what it was to me was everyone can, in a permissionless way, get access to everyone else and their thoughts and their ideas, including the best people who are changing the world, who you want to connect with directly, and to get to them unfiltered, and to build upon them unfiltered. Instead of being in closed, permissioned, invite-only, limited access environments, there's this massive democratization of access to people. You could literally DM anyone. It's still amazing. I think that creates magic when it works and it causes all kinds of problems that need solving around people misbehaving, but I think it's worth solving those problems because of all the positive it creates. I think that's the same ethos we are taking at Parallel, which I think we want the web to be open and we want it to be permissionless and we want it to be a free market and we want everyone to have access to everything including AIs. We want AIs to have access to the entire web and that's the common thread that motivates Parallel, which pulls from Twitter.
Todd: I'm curious, this decision to do this... You were leading a 7,000-person public company. There's a million things you could have done after that and you decided, "I want to start at the very beginning from zero as a founder of a brand new thing." How did you decide that that was the thing you wanted to do?
Parag: I was open-minded in the beginning, but I think I was just sitting there writing a bunch of code, having the time of my life, curious about the world, so greenfield. I was entertaining some calls about taking on jobs that might have, on paper, felt like big jobs, but I think I'd [inaudible 00:16:35] Twitter long enough to have drunk the Kool-Aid that I was serving, to believe in this notion of openness and direct connection and public information and public data. As for joining a mission, I just couldn't join a mission. I think a couple months in, I knew that I would start something. I don't think you even know the story, but I ran into Josh somewhere around that time. By then, I was set on starting a company. I had no idea when and how to go about doing it. Josh gave me this piece of advice that stuck with me: "Just don't be in a rush. This is going to be 10-plus years of your life, so just don't be in a rush. Don't jump on the first thing."
Todd: As an engineer, as a founder, you've developed these muscles on how to build, and you are very good at building, but what you only do once or twice is pick the thing to work on. I think one of the luxuries that we have as VCs is we get to look at a lot of things and we get to pick many things, so the picking muscle gets developed. When, as a founder, you choose the thing that you're going to work on, that sets the boundaries and constraints for whether the company does well over the next 10 years of your life. It's critical. Let's talk about that. This idea that you picked, I think, is a phenomenally interesting, expansive idea. How did this idea come to you?
Parag: I was actually looking to do something in healthcare. I was hacking away at agents for myself to research healthcare. I found myself, over time, spending more time on the future of the web and just obsessing about this idea in some abstract form, almost living in this science fiction of, "Oh, every decision we made at Twitter, we used to think about this service having a 100-millisecond SLA because we're thinking about an end customer on a mobile device that had about a second, and this was in the stack." And I started really living in this notion of, "All of that's going to change completely. All of the infrastructure that we built and thought about is going to look completely different. Every business model we thought about is going to look completely different." It really started as this science fiction of: the primary consumer on the web is now going to be an agent. You change that one assumption. Greenfield, what does this world look like? I would just obsessively think about this, and I didn't actually think I would start something in this space, because it's just like science fiction. I would just talk to people about it. At some point, it started becoming like, "Okay, so if this could happen, there's a good and a bad way it could happen. I want the good way. I have some views. What can I do?" It took me months of research, understanding, talking to people, exploring paths around this idea to gain conviction to one day being like, "I'm going to do this."
Todd: Help us just understand that a little bit better. The idea is very interesting that the web, as we know it, was designed for human users with links and navigation and pages and page loads and all of that stuff. And if you're an AI, a lot of that stuff, it's not defined for you basically. It can consume data a lot faster. You don't need to see one page at a time, things like that. Give us a little bit more of that. What are the different assumptions that you make or patterns that you build for when you are building for an AI as the user instead of a human as the user?
Parag: Humans, as I frame it, operate in a very narrow band. We have a second or two of patience. We under-specify what we're looking for. We'll either implicitly just click on an app and expect the app to figure out what we want and show it to us, like Twitter, or we'll type an incomplete set of keywords, or we'll click on a link to navigate our way to somewhere. We hope for the best, and we go on this random walk to get stuff done. We're also only able to consume a few decisions at a time: maybe 10 click targets, two to three pages of content. Most of the infrastructure on the web is just trying to figure out what we might be trying to do and guess and point us in the right direction. With AI, I think we get to invert a lot of this, and actually that inversion is something we're starting to see. AIs can specify what they're trying to solve. In fact, to do evals, you need to know what the end goal is to be able to evaluate. To train an RL system, you need to know what the end goal is and what the reward is. When you start doing that, you can start communicating that down. That's a completely different interface for the underlying infrastructure, concretely: you no longer have to guess what this user might be trying to do. You're no longer stuck with producing a consistent format in your answer, or a set quantity in your answer, or a one-second latency SLA on your answer. Sometimes you'll want things in 10 minutes, and sometimes you'll want a hundred documents, and sometimes you'll want a one-word answer as an AI making a call to you. So the problem space really expands. Now, within this expanded problem, after a bunch of time with potential customers, we made an opinionated set of narrow decisions around which small slice of this would matter first. That's where we took this. Humans are different from AIs, they're different in all of these ways, the query language will expand. But now, what's the most interesting slice of this that we should go attack?
We went after the slow, end-to-end automation of human work: measurable, repetitive, eval-friendly work.
Todd: Let's talk about a little of that. What are some of the initial use cases or customer scenarios where you felt like this kind of architecture could be really powerful?
Parag: The first few use cases we went after were this notion of repetitive work you might outsource to a BPO or KPO out there. That was an accidental discovery in some sense, talking to some customers who are doing this and they're like, "Oh, if your system can outperform this, that would be cool." I said, "Actually, we might be able to. Let's try." And then we realized that AI performance, when you have a repetitive, well-specified work, improves. Human performance, when you have to train 50 people to do repetitive work, declines.
Todd: These are people doing desk research or...
Parag: Yeah, desk research, or some workflow on the public web on insurance claims processing or underwriting, or figuring out some data set based on something governments published on how much to tax, what entity, how to be compliant with what, pulling financial data out of the web to create pure data sets. All kinds of this work happens. We're obsessed with the web, but we wanted to add value. We picked these as the first things that we would start working on, really pushing the limits of what's possible in a system on delivering quality. We decided we would care only about accuracy and quality in our system. We would not care about latency. We made that decision because this is the space we wanted to go play in first.
Todd: I think it's really exciting. If I have the numbers correct, you now have more than 100 paying customers at Parallel. Tell us about some of the use cases. What are they using the APIs for?
Parag: No, it's been really exciting to work with more and more customers, and we've really scaled the number of customers we've been working with, both in terms of numbers but also in terms of the breadth of use cases that we now serve. On one end, we serve these extremely long-running deep research use cases which would otherwise be done by a human. On the other extreme, we might serve as a search tool that a coding agent might call and get an answer from within three seconds, to look up documentation, to figure out how to course-correct when it might need the live web. We have a breadth of usage. The other amazing thing to see as we've been building this is how it's been scaling. We now serve millions and millions of requests per day. Our cheapest request on our system costs under half a cent, and the most expensive requests through our system today can cost $10. This is not something I could have imagined: the breadth of customers, the scale of use going into this year. It's been amazing to have all of that come together in this moment.
Todd: It's interesting you named a bunch of different use cases across all of your customers. Is there an example or an anecdote that sticks out in your head about a customer that you worked with early on where they really pushed you and helped shape what Parallel would be?
Parag: Yeah. I think in the early days, when we were working on one of these insurance use cases, really the bar was: we had a data set which was being generated as a result of human operations. We got that data set, it was like ground truth, and the question was, "Could an AI system, this past year, compete with that, or drive productivity, or entirely replace it?" As we worked through it, we were scoring low, and then we had to go dig into the data set. Turns out, ground truth is never ground truth. One of the things we totally changed in our worldview was that ground truth is never ground truth, and that we actually have to do comparative analysis of quality and really reframe every customer conversation we have around, "Let's compare two alternatives." We found creative ways of doing it. It's not complicated. Two ways of solving a problem will agree in some cases, and you don't need to look at those. Where they disagree, you take a small sample and you grade which one you prefer on that sample of 5 or 10, and then you know roughly which way is better. We started from that one customer journey. We went through this bespoke process in the design partnership, but then we learned this very simple pattern that has really worked with the customers who are most selective on quality bar and are working on these human operations, scaled human operations, that they want to automate.
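The agree/disagree pattern Parag describes can be sketched in a few lines of Python. This is an illustrative sketch, not Parallel's actual tooling; the function names, the dictionary shape, and the default sample size of 10 are assumptions made for the example. The `grade` callback stands in for a human (or LLM) preference judgment.

```python
import random

def compare_systems(answers_a, answers_b, grade, sample_size=10, seed=0):
    """Compare two systems' answers to the same set of queries.

    Where the systems agree, no review is needed. Where they disagree,
    grade a small random sample to estimate which system is better.
    `grade(a, b)` returns "a" or "b" for the preferred answer.
    """
    pairs = list(zip(answers_a, answers_b))
    disagreements = [(a, b) for a, b in pairs if a != b]
    agree_rate = 1 - len(disagreements) / len(pairs)
    if not disagreements:
        return {"agree_rate": agree_rate, "preferred": "tie"}
    # Grade only a small sample of the disagreements (the "5 or 10").
    rng = random.Random(seed)
    sample = rng.sample(disagreements, min(sample_size, len(disagreements)))
    wins_a = sum(1 for a, b in sample if grade(a, b) == "a")
    if wins_a * 2 > len(sample):
        preferred = "a"
    elif wins_a * 2 < len(sample):
        preferred = "b"
    else:
        preferred = "tie"
    return {"agree_rate": agree_rate, "preferred": preferred}
```

The appeal of this design is that grading effort scales with the disagreement rate, not the data set size: if two systems agree on 95% of rows, you only ever look at a handful of the remaining 5%.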
Todd: I'd love to hear maybe about some of these examples. One of the customers that I know of, because they're also a first-round company, is Clay. We love it, obviously, when companies become customers of each other. Tell me about their use case for Parallel and what they were trying to do and what maybe some of their alternatives would've been.
Parag: You make it seem like a coincidence. You all connected us with Clay. That's one of the hugely valuable things: to discover people who are forward-looking, people who are truly innovating. They were one of the early people who worked on building benchmarks with us. They were ahead of the curve in bringing AI to a lot of people, making it an amazing experience, a really powerful experience. They took a bet on working with us very early on, and they built benchmarks with us, collaborated with us, pushed our technology, and hopefully got some benefit in pushing their product forward. As early partnerships are, I think it was really co-creation, I would say. But if you think about repetitive web research, it gives people extreme leverage in understanding every prospective or current customer, everything about them, prioritizing across them, and all kinds of things that people want to do at scale. We were able to partner with them to push the limits of accuracy that you can achieve in doing this work, because it matters for their use cases. And in that sense, it was a really great relationship and partnership, because they've really pushed us by being a demanding, sophisticated customer.
Todd: What are the APIs that Parallel provides to them that they're leveraging?
Parag: Our Task API, which is just an API that allows you to get structured enrichments back. It allows you to specify a compute budget, effectively, and we just produce answers. Now, they make a really AI-friendly API accessible to end customers. There's a system on top which orchestrates on top of our API so that people can type small, under-specified prompts. They have pattern-matched across their customer base, and they know what it means in terms of the output someone might want, and they specify those queries down to our system.
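The shape of a call like this can be sketched as a request payload: a fully specified natural-language task, the structured output you want back, and a compute budget, with no latency constraint. This is a hypothetical illustration of the idea, not Parallel's documented API; the field names (`task`, `output_schema`, `compute_budget`) and the budget values are assumptions for the example.

```python
def build_task_request(task, output_schema, compute_budget="standard"):
    """Build a payload for a task-style enrichment API (illustrative).

    Unlike a human-facing search box, the caller states the end goal
    explicitly, declares the structure it wants back, and chooses how
    much compute to spend rather than how fast to answer.
    """
    return {
        "task": task,                     # fully specified natural-language goal
        "output_schema": output_schema,   # shape of the structured enrichment
        "compute_budget": compute_budget, # hypothetical tiers: "lite"/"standard"/"deep"
    }

# An orchestrating AI, not a human, would typically author this request.
payload = build_task_request(
    task="Find the current CEO and employee count of the company at example.com",
    output_schema={"ceo": "string", "employee_count": "integer"},
    compute_budget="deep",
)
```

The point of the sketch is the inversion from earlier in the conversation: the query language carries the end goal and the desired output shape, so the API never has to guess what the caller wants.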
Todd: Their end users are basically writing stuff in natural language. Clay has like an AI orchestrator routing system.
Parag: Exactly. And an AI.
Todd: So the AI is what's querying...
Parag: And an AI is querying us.
Todd: Yeah.
Parag: That's what I mean by forward-looking customers. We wanted our customer to be an AI. Here, we have an AI writing queries to our system. Our system is built for natural language, but humans, as I said earlier, under-specify things. You can use AIs to help humans not under-specify things and to write the queries. We don't particularly optimize our APIs to be human-friendly, because we expect our customers to be able to do that.
Todd: So they were basically saying, "These are the benchmarks we need you to hit," or how did that work?
Parag: No, they had built real benchmarks on the products they wanted their customers to have and the accuracy they wanted to have. They've also built a lot of AI technology that can actually solve for those benchmarks, but they were open-minded to working with us if, by working together, we could do better or could provide options to their customers. Clay is all about providing many, many different ways for people to do things. We ended up being one of the options they're able to provide to their customers and their customers' users in very creative ways.
Todd: I mean, Clay sounds like one of the perfect initial customers in terms of how forward-thinking they are, the amount of volume that they have, really having high demands on your system, which always I think is a great design partner to work with. Are they the ideal customer or who is the ideal customer now that you're going while you're looking to expand?
Parag: I think now we have a very broad set of customers we serve. Clay is one of the ideal patterns, in that scaled, repetitive web data is really critical and accuracy matters. But on the other extreme, we have some coding agents using some of our APIs, where we show up in maybe 5% of prompts, when they need to look up some recent documentation. It's an agent really calling a tool. Instead of using our end-to-end Task API, you'd use a search API as a tool. We have customers doing claims processing in insurance and running a workflow where there are still humans doing some QA on top. Some of the BPOs and KPOs we mentioned earlier are now our customers too. We're building a very horizontal product, and we now have a variety of different customers whose needs really converge. The API that we're trying to design, we want it to be able to serve this very diverse set of customers that otherwise wouldn't fit a single ICP.
Todd: For those customers, for the data that they're trying to gather and make available to their AIs, what are the alternatives?
Parag: BPOs. Let's get humans to create extremely high-quality data sets. Our value prop there is that we actually compete with them on quality. In the other case, the alternative we often encounter is some sort of build-your-own, or just plain OpenAI; they recently shipped search. In both cases, our primary value prop is quality, quality, quality, and the secondary value prop is price performance. The reason we care so much about price performance goes back to the vision that AIs should use the web a lot and create a lot of value. I think the only way you unlock new and newer use cases is taking something that can be done and incentivizing people to do it at 100X.
Todd: And removing the friction around, that basically by having extreme performance for low cost. That makes sense. This gets into a question actually that I'm sure you thought of this as you were thinking about this idea is just potential competition and how this market shapes up. One way of thinking about this is what do you think is the territory of the hyperscalers or the major labs versus what is the opportunity that's available to a new startup when it comes to this kind of next generation of AI infra?
Parag: I don't think too much about the competitive dynamics, for two reasons. One, I think we're going to live in the most positive future for the next several years. Even if you can't fully articulate what it is, the opportunity set for people to mobilize and make an impact is extremely high. Hyperscalers can't move fast enough to capture the opportunity, can't build fast enough. The labs are amazing, they move really, really fast, but they're also slow relative to how fast we can move. There are so many places to innovate. We see that from our customers every day. You do, of course, have to think on a daily basis about the choices your customers have and why they must partner with you, collaborate with you, buy from you. In that sense, we really think about it, but that's really a product decision about how you differentiate your product and stay ahead of the curve. We think about that obsessively.
Todd: We said this before that you had been in stealth for a while. You've been mostly heads down, building, selling, building, selling, but now you're starting to make a little bit more noise with this launch. I believe you recently launched some self-serve options so that people could just come to Parallel and find the things they want. What advice would you have for other AI founders who are figuring out, "When do I launch? When do I open up the product more?" How did you think about that?
Parag: One piece of advice I got starting this company... I spoke with Patrick Collison at Stripe about building an API business. He said, "Your first API design will be wrong. Just know that." I don't know if this was intended, but I took that to mean keeping the customer set small and intimate, so that when we find the API is wrong and limiting what we can build, we're able to migrate to a next version of the API even if it's backward incompatible. We really wanted to maintain that flexibility, so we made a very intentional decision to limit the number of customers who would have the API while we didn't feel we had thought it through. We didn't want to design the API in a lab, but we also didn't want to be stuck supporting an API forever, so we made that intentional choice. The other thing we did was be very selective about customers. We wanted to work with customers that we love working with and that love working with us, because that's what drives you in terms of energy.
Todd: Another thing I wanted to ask you about briefly is fundraising. We're seeing founders raise these really big rounds but also building smaller teams than years past. What advice maybe would you share with a founder of an AI company? Particularly everything's moving so fast these days, there's so much flashiness. How did you think about fundraising and what's the maybe advice you'd give?
Parag: Maybe two different dimensions. One, who to work with. I had one very simple rubric: just the people I would spend time with. If they said something to me and I dramatically disagreed with them, would I feel sleepless that night trying to reconcile why? People I feel that way with, I want around me. The coalition we have with Khosla Ventures, with Index Ventures, with you and Josh working with us from First Round Capital is kind of interesting. You all come at our company and our vision with slightly different lenses, and the advice I get from each of you is slightly different. I have respect for all of these people. I expect to make up my own mind and ignore advice a lot of the time, because it's not right for me in that moment. But it's very material when you sometimes get a piece of advice and it changes how you think about something, where you go. That's the real value I see in the choice of investor. To me, building the company is binary. Either we are successful or we are not. There's just nothing in the middle. I asked myself the question: I should raise the amount up to which I can argue I have a reason for believing my binary odds increase. Once I can't make that case, I shouldn't raise more than that. If you can raise an infinite amount and... We were fortunate in that I got to raise what I thought was, based on my plans for the next few years, the point of diminishing returns on binary success. In fact, I don't even think it's diminishing; at some point, there are negative returns. That's the decision-making framework we used. There's no science. All intuition. It's like, "Okay, if we raised X million dollars versus Y million dollars, do I think there are scenarios I can imagine where we are way more likely to be successful? I can't. Let's raise Y million dollars. What if it is Z million dollars? Can I imagine that? Not really. Let's just raise Y."
Todd: And what about when it comes to team building? I remember when the seed round happened, you already had five or six really, really good engineers lined up ready to go. When you're hiring engineers at Parallel now, given how productive engineers can be in 2025, what do you look for that's maybe different from what you would've looked for five or six years ago?
Parag: Of course at Twitter I got to know and work with some amazing engineers. At the same time, there's a bunch of extremely high-potential, amazing young engineers around. I really wanted this company to bring the best of both together. One of my hiring philosophies at Parallel is that you want to maximize how much alpha you can get through hiring. What's the upside? You've got to bet on potential, not been-there-done-that. Now, you can't put together an entire team of 20 people just by betting on potential and get stuff done, so I hire a team. It's not an individual choice. Everyone on the team is either someone we're taking a risk on, betting on potential, or someone we're hiring because they're particularly good at taking these extremely high-potential people and channeling them toward a mission, giving them room to be creative and bring alpha while aligning them toward a singular outcome for your customers. The second thing that's really important, if you embrace taking risk on hiring, is that the system only works if you also embrace being decent at firing. I don't know how to be good at hiring without making mistakes, especially when you're trying to bet on upside. You have to learn, set expectations, and work in a world where you're good at firing. If you put these together, you get an amazing high-performing team which can build fast and build for this current moment. That's the philosophy we've really embraced.
Todd: It sounds to me like you've got... I don't know if it's half the folks who are very high potential, maybe earlier in their careers, maybe a little more chaotic, and then some very senior folks who are kind of the steady hands who have seen a lot in the field.
Parag: But I think there's more to it. It's not just about seniority. It's about people who are good at bringing the best out of high-alpha people and have a track record of doing that well.
Todd: More broadly, how do you see this changing? When it comes to org design, when it comes to the number of engineers you need to build an incredible product, where do you think we're going with all of this?
Parag: There are two forces that pull in two different directions. One, you don't need a lot of... We don't have any hierarchy right now. We don't talk about roles, we don't have teams, we don't have hierarchy. We're very chaotic.
Todd: There's no subteams within the 22 people?
Parag: There are no formal subteams. People work in small project groups, but those rotate. Now, of course, people take ownership of systems they've built, but there is a lot of fluidity. There are no formal team structures or managers or any such thing so far. That, I think, you get to do at this scale. I don't think it scales to double the size, but we get to do this for now. The force that pulls you toward hiring more people is that the possibilities are endless: the customer needs, the products you could build for which there is demand. Once you're working with some customers in some problem spaces, you see opportunity everywhere. Everything feels within reach. That pulls you in the direction of having more people. That's why I don't know which way it'll converge over time, but I do think you should definitely expect more to be done by fewer people. There are obviously ways of working with AIs that are counterproductive, and I've done this personally a lot, but systematically, if you hire the right people and they have good judgment, you get the good out of AI.
Todd: What's a counterproductive way of working with AI that you see?
Parag: Oh, personally, sometimes there are some things which I could engineer myself and do better, but if I truly embrace and start enjoying vibe coding in our code base, I might personally take longer and the result is poorer. It's a learned skill, in my opinion. You can really, in aggregate, waste time and productivity, distract others with crazy code reviews or bugs they're fixing downstream because you don't understand what you're doing. There are bad ways of working with AI. I think it's a learned skill to be productive with AI in a real production system. I've personally, and I know my team sees this, vibe coded stuff without a deep, full understanding of what I was doing, and my team had to clean up behind me. There are bad ways of working with it, but I do think by and large people figure out the good ways, especially in teams. There's a lot of code-review learning, or experiential learning and sharing, that happens. Setting up the code base to be more productive for AIs, and understanding what works and what doesn't from your experience and others' experiences, really gets you dialed into the right pattern. So you do expect more productivity, and I think it'll only increase from there.
Todd: Is there anything intentional that you guys had to do to enable the code base to be maximally productive with the size of team and the amount of AI code gen that you're using?
Parag: Very tactical things. We have this massive crawler and index infrastructure, which is not a standard, small front-end code base or Python scripts but a large infrastructure project. Even running tests on it takes a while, so we had to hook up and give AI access to our CI to be able to trigger things, and do small things of that kind to make the code base friendlier for our agents to use. We use it a lot for code review, and we've given extreme experimentation license to every engineer. We don't standardize how people do things. Everyone gets to do their own thing. Some people are more experimental than others, and they experiment with different ways of doing things. That's the approach we've taken. Everything's going to change all the time. We want to keep exploring constantly rather than pick our lane and stick to it.
Todd: Where do you net out in this debate around code gen? I think it's something that people have very different opinions on. In terms of what is the role of the software engineer in 2025, what does it look like in a couple of years?
Parag: I think a few things will continue to matter: what to build, not necessarily how to build it, and taste. Those things remain the role. I think good software engineers have always had that, and that will remain. The most opinionated stance I have on this topic is that our APIs are designed to be declarative. What I mean by declarative is: talk about what you want the output to be, but don't tell me how. I think increasingly that's the direction we'll end up in. The code we write today is the how. The conversations we have with our AI agents and code agents are essentially trying to communicate, through a combination of feedback and "don't do this" and how, what we're actually trying to do. But we're not yet agreeing on it, articulating it, or storing it in the repo.
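To make the declarative idea above concrete, here is a minimal sketch of what such a request might look like: the caller describes the desired output shape and intent, with no procedure. All field names here are hypothetical illustrations, not Parallel's real API.

```python
# Hypothetical sketch of a "declarative" task request: the caller specifies
# WHAT the output should look like (an intent plus an output schema), not
# HOW to produce it. Every field name here is invented for illustration.

def build_task_request(intent: str, output_schema: dict) -> dict:
    """Package a declarative task: desired output, no procedure."""
    return {
        "intent": intent,          # what we want, in natural language
        "output": output_schema,   # the shape of an acceptable answer
        # Note: no "steps", "sources", or "strategy" keys.
        # Deciding how to fulfill the request is left to the system.
    }

request = build_task_request(
    intent="Find the founding year and headquarters city of a company",
    output_schema={
        "founding_year": "integer",
        "hq_city": "string",
        "evidence_url": "string",
    },
)
```

The contrast with an imperative API is that nothing in the request prescribes which pages to fetch or in what order; the system retains the freedom to change its "how" without breaking callers.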
Todd: Okay. I mean, one of the things that we really like to think about and talk about is companies that are putting AI into production today in truly valuable ways, because people I think make all kinds of predictions. Your customers, I think by definition, are trying to push the limits of technology. So you have this unique window into what these elite teams are actually doing today in 2025. How are your customers using AI in ways that are working or doing something that's just really interesting that you think others could learn from?
Parag: I think one of the very hard problems that we don't focus on, but our customers are exceptional at, is communicating to end humans how to collaborate and partner with AIs. Expectation-setting, creating an iteration playground for people to iterate with the AI, communicating that the answers aren't always right in a way that makes sense to the customer, communicating what's happening under the hood as the agent goes out and does work. Some of the elite AI teams are really good at having opinionated ways of doing that, in different modalities for different customers. That's a common thread we see. The other common thread we see with our customers, which is probably slightly different from the question you're asking, is that they're operating in this extreme founder mode. What I mean by that is some of our customers are established companies, even a BPO. The common thread across all of them that I can pick out is they're super forward-looking, taking risk on their own business, innovating, and operating in founder mode. It's kind of interesting to see, but to me, that's how I look at where we will truly co-innovate together.
Todd: On the flip side, what mistakes do you see companies making when they try to put AI into production? What is failing? What is not working?
Parag: I think we at Parallel fail to be great at helping everyone measure quality. Evals are a famously hard problem. You can build all kinds of general-purpose evals, or evals for your product, or evals for a distribution of use cases, but each customer scenario is different, and the eval is actually constantly changing. We haven't yet dialed in how we can help people understand quality, and communicate it from us to our customers and from them to their customers, when that is the case. There is still a lot to innovate on in order to understand when AI works and when it doesn't, when to use it and when not to. I think that's partly our burden to solve, and I don't think we have nailed it yet.
Todd: Do you have ideas on what that might look like?
Parag: We want to automate evals, in some sense, and estimate their outputs. I think the estimation problem is easier than the actual evaluation problem. We do some of that internally to identify cases where we didn't do a good job, and we've built regression tests internally. None of those things are yet working well enough for us to establish a customer contract around them, but we'd like to. You want to be able to establish a customer contract around your quality that is relevant and material to your customers.
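The estimation idea Parag describes, judging overall quality without exhaustively grading every output, could be sketched roughly as follows. This is an illustrative toy, not Parallel's actual method; the grader and the numbers are invented.

```python
# Toy sketch of "estimating eval outputs": instead of grading every answer
# (expensive), grade a random sample and extrapolate overall accuracy with
# a simple normal-approximation margin of error. Purely illustrative.
import math
import random

def estimate_accuracy(outputs, grade, sample_size, seed=0):
    """Grade a random sample of outputs and extrapolate overall accuracy.

    Returns (estimated_accuracy, approximate 95% margin of error).
    """
    rng = random.Random(seed)  # fixed seed makes the estimate reproducible
    sample = rng.sample(outputs, min(sample_size, len(outputs)))
    correct = sum(1 for o in sample if grade(o))
    p = correct / len(sample)
    margin = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, margin

# A stand-in grader: pretend roughly 90% of 1,000 outputs are correct.
outputs = list(range(1000))
grade = lambda o: o % 10 != 0
estimate, margin = estimate_accuracy(outputs, grade, sample_size=200)
```

Grading 200 of 1,000 outputs yields an estimate near the true 90% accuracy at a fifth of the grading cost, which is the sense in which estimation is cheaper than full evaluation.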
Todd: Yeah, it's true. There's really no standard way of measuring quality if you're a customer expecting some level of quality out of the AI you're using.
Parag: I think if we unlock that, you will actually unlock a large number of use cases and actual customers who might today be slightly skeptical, who might have been burned by trying things that didn't work, who might want a higher-quality SLA but don't even know how to measure it or ask for it or know when it's there or not.
Todd: The word agent, I think, is one of the most overused words I hear, at least. What are agents actually doing that's truly useful that you see day to day, and what do you think that looks like two years from now?
Parag: I used to joke that "agent" is just a word for something that is slow, but really I think there's now a much clearer definition of agents, and I'm actually starting to embrace the term. It's when you allow the AI to have agency to make decisions about how it wants to solve a problem, give it access to a collection of tools to do that, don't constrain it too much, and allow for many different paths toward an end state. I think that's a reasonable definition of agent. The obvious thing is that agents doing small, contained coding projects work. They don't always work, but they work a lot more than I thought they would two years ago. Our agents, if I may, work surprisingly well in many, many use cases, and they're not too different in terms of reliability from what you might expect of a coding agent. That sort of deep web research, doing work over the open web, those use cases are working.
Todd: What do you think the Parallel APIs do exceptionally well today? And then what are the things that you're thinking about where you want to invest in and improve the Parallel of 2026, 2027?
Parag: Parallel APIs are really good at the highest-quality web research. The second thing we do really well is attribution, down to the specific sources from which the web research was done, in a very fine-grained, granular way. That allows us to build a differentiator, which is confidence scoring for all of our output elements. Our system is calibrated to know when we might have gotten some answers wrong. That's been a core thing for us, and it's a core need for a lot of our customers, because they want to trust the AI, and we want to measure the performance of the AI and figure out where to allocate more compute and retrieval spend as we do research. For all of these things, knowing when you're more likely to be wrong is really important. We've spent a lot of time on that, and it also shows up as a value prop for customers.
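One way the per-element confidence scoring described above could be consumed is sketched below: each field carries its own sources and a calibrated confidence, so a caller can route low-confidence answers to more compute or human QA. The response shape, field names, and threshold are invented for illustration; this is not Parallel's actual response format.

```python
# Illustrative sketch (not Parallel's real API): every output field carries
# source attribution and a confidence score, so callers can decide which
# answers to trust and which to send for extra compute or human review.

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff, tuned per use case

def fields_needing_review(result: dict, threshold: float = CONFIDENCE_THRESHOLD) -> list:
    """Return names of fields whose confidence falls below the threshold."""
    return [
        name for name, field in result["fields"].items()
        if field["confidence"] < threshold
    ]

result = {
    "fields": {
        "founding_year": {"value": 2023, "confidence": 0.97,
                          "sources": ["https://example.com/about"]},
        "employee_count": {"value": 40, "confidence": 0.55,
                           "sources": ["https://example.com/blog"]},
    }
}

print(fields_needing_review(result))  # → ['employee_count']
```

The design point is that confidence is attached per field rather than per response, which is what makes selective re-research or selective human QA possible.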
Todd: Continuing on the agent thread, what do you think the modern agentic stack looks like and why are web tools such an essential part of that?
Parag: I think the term knowledge work is overused, but it's useful to think about. If you think about work, and an agent either doing it for you or helping you be more productive, there is an expectation that, I believe, every piece of software we use, whether or not it's an agent, must use some form of an advanced AI like an LLM or some multimodal version of it. If you're doing that, it's almost silly not to have access to the web. The expectation very soon, for every piece of software we interact with, is going to be that it has access to an LLM and it has access to the web. That's why we want to be the web tool. Now if you zoom into agents, think of any agent inside a work environment, leaving aside personal agents for a moment. It must have access to all kinds of internal tools and data in that enterprise. It must have access to the web. It perhaps must have access, because these models are so good at coding, to a sandbox where it can write and run code. You can come up with a few other tools it needs, but I think those are the top three: access to the web, access to internal data, access to a code sandbox. Then you can have a tool number four and a tool number five, but we're building the web one.
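The three-tool stack Parag outlines (web, internal data, code sandbox) could be wired up minimally as below. The tool functions are stand-ins, not real integrations; the point is only that the agent loop dispatches by tool name and stays agnostic about how each tool works.

```python
# Minimal sketch of the three-tool agent stack described above. Each tool
# function is a placeholder for a real integration (a web search API, an
# enterprise data connector, a sandboxed interpreter).

def web_search(query: str) -> str:
    return f"web results for: {query}"        # stand-in for a web tool

def internal_lookup(query: str) -> str:
    return f"internal records for: {query}"   # stand-in for enterprise data

def run_in_sandbox(code: str) -> str:
    return f"sandboxed run of: {code}"        # stand-in for a code sandbox

# The agent's tool registry: the "top three" from the conversation.
TOOLS = {
    "web": web_search,
    "internal": internal_lookup,
    "sandbox": run_in_sandbox,
}

def dispatch(tool_name: str, argument: str) -> str:
    """The agent picks a tool by name; the loop doesn't know the 'how'."""
    return TOOLS[tool_name](argument)

print(dispatch("web", "recent documentation"))
```

Adding a "tool number four" is just another registry entry, which is why the stack stays open-ended while the top three remain the core.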
Todd: And when you think about agents interacting with the web at scale, what are some of the hard problems that most people haven't even realized yet?
Parag: So many hard problems up and down the stack. One, there is a hard problem on data access. How do we incentive-align the business model and the economics of this future web where there are agents? We only deal with open data, and I think there is an existential risk that the web might close up more and more, unless we figure out and solve these problems in a way that incentivizes this data to remain open. It must be optimal for most people putting their content, their ideas, their products, or whatever it is they publish on the web out there, that they are able to add maximal value and actually sustain their business goals, whatever they might be, economically. It must be better for their goals to be open rather than paywalled or closed. It's not obvious that that will happen. That's why we want to show up, and we're going to pave the path for that being the way the web ends up: incentives to remain open through open markets rather than incentives to close up shop.
Todd: You have all of this very valuable proprietary data that understandably the producers of that data want to monetize in some way. Do you have an idea for how this all gets sorted out in the coming years?
Parag: I can talk about a few properties and reasons for believing. One property is that I believe we'll add so much value through AIs. Going back to the theme of positive sum, there's going to be so much surplus that we have to embrace a world where we are able to share and drive collaboration instead of creating [inaudible 01:00:03]. When the pie is getting larger, you can actually figure out ways of getting people to collaborate. We have that moment today, where it is obvious to everyone that the pie can get large. We need to take that mindset. That's one reason for believing. Another interesting property: when you think about information, what ads did really well was differential pricing at scale and efficiency. What does that mean? It means that most ad-supported businesses, no matter how you look at it, their queries and users, are widely accessible to everyone, extremely subsidized, and free. But they make money from a small fraction of users, which allows them to be accessible to a wide, wide swath of users. They lose money on a large number of users, but that's how the business works. It's differential pricing at the end of the day. I think we need to bring that property into getting people to publish on the web. Very simple example: if you consume someone's content through some really expensive model and spend $4 in GPU to get some work done at First Round, versus my retired dad just reading the news through it, I'd like for you to subsidize that; the two of you shouldn't have to pay the exact same for the same content. And if we built market mechanics, which were open and allowed us to have the context...
Todd: So the same content being used by different people who have different thresholds of valuing that content.
Parag: Yeah. We need differential pricing and we need scalable market mechanics to figure this out. Part of the reason we do so much attribution down to sources is to be able to do that math, to be able to understand feasibility. We really want to create open solutions where everyone has an incentive to remain open. We'll all be beneficiaries of a web that is open for AI to use.
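The differential-pricing arithmetic discussed above can be sketched as a toy model: the same content costs different amounts depending on the economic value of the use, so high-value AI research runs subsidize casual reading. All numbers and parameter names are invented for illustration.

```python
# Toy model of differential pricing for the same piece of content: price
# scales with the value of the use. Not a real pricing scheme; the base
# cost and multipliers are made up to show the shape of the idea.

def price_for_access(base_cost_usd: float, value_multiplier: float) -> float:
    """Charge in proportion to how valuable this use of the content is."""
    return round(base_cost_usd * value_multiplier, 4)

base = 0.001  # hypothetical base cost to serve one article

# A $4-GPU research run pays hundreds of times the base cost...
expensive_research = price_for_access(base, value_multiplier=400)
# ...while a casual human reader pays nothing, subsidized by the former.
casual_reader = price_for_access(base, value_multiplier=0.0)
```

Source-level attribution is what would make a scheme like this feasible at all, since the multiplier has to be computed per content source per use.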
Todd: Incredibly interesting idea. How do you get started on something like this? Is this something where Parallel would go find certain corpuses of data that you thought were super interesting but that you'd have to pay for as a way of starting to figure this out?
Parag: Yes, but also even when we didn't have to pay for it, we would like to incentivize people proactively sharing it with us and to recognize their contribution and pay for it. We want to truly co-build this open marketplace and it'll require us to find partners and find innovators, find people who will take some risk as we learn the math together and share in the vision of a future open web and are able to see it that way. I think if we pull the right partners from the AI world and the web publishing side, I think it's possible.
Todd: Parag, let's maybe wrap up on a couple personal questions. What are some of the topics that you're going really deep on now personally and geeking out on?
Parag: I'm geeking out on the web all the time, but beyond that, the future web index and how it will get built. I read a lot of research papers and talk to a lot of AI researchers. We'll build a research team, and I think there is a very interesting set of things to be done there. That's the current geek obsession. It's material to my company.
Todd: Are there interesting papers that you've read or interesting new thoughts that you've had?
Parag: My worldview is currently shaping into convergence between recommendation systems and search systems and database systems. I think they're all going to start looking like the same thing.
Todd: And what are the things that you're thinking about, again, personally in terms of leveling up as an incredible founder CEO?
Parag: I used to say this thing at Twitter, I don't know if you ever heard it, but I only want to do jobs that I wouldn't hire myself for. I do best when I'm slightly uncomfortable, insecure, and don't feel up to it. That's part of why I wanted to be a founder, because it's the kind of job that makes you feel that way. The great thing is it makes you feel that way no matter where you are in the founding journey. You're constantly underqualified for what your aspiration, ambition, and company need from you. Today, if I have to think about where I need to really level up, it's in simplifying what we're doing, communicating what we're doing, and expanding the coalition we're building, toward the open web, toward the set of customers. Our coalition is our customers, and there is just so much to be done there. Every day, I feel like I got a third of what I wanted done despite having so many AIs at my disposal.
Todd: Well, Parag, thank you. This has been an incredibly interesting conversation. I enjoyed it immensely. Thank you for being with us.