Don’t have a UX research team? Jane Davis’ tips from Zoom, Zapier & Dropbox to get you started
Episode 33


Today’s episode is with Jane Davis, the Director of UX Research and UX Writing at Zoom. She previously led UX Research and Content Design at Zapier, and managed the growth research team at Dropbox.


Throughout the episode, Jane tackles the thorniest customer development questions and walks us through the end-to-end research process in incredible detail, covering everything from clarifying your goals and asking the right questions, to selecting participants and synthesizing insights.    

We start by going through how she applies her playbook in the early-stage startup context — when you’re shipping the first version of your product and don’t yet have the resources to invest in a full research team. We also dig into challenges such as deeply understanding the problem you’re solving, taking on a competitive or a greenfield market, and figuring out willingness to pay.

We also get into best practices for prototyping and iterating, as well as some of the common roadblocks startups face later on, including how to build for multiple users and what to do when people aren’t excited about your product or using it frequently.

Whether you’re talking to potential customers before you start a company, or are looking to get better feedback from your current users, there are tons of insights in here for founders, product-builders, and design folks alike.

Here’s the book Jane mentioned in the episode: Just Enough Research by Erika Hall.

We also recommend checking out Jane’s recent article: What’s the point of a UX research team?

You can email us questions directly at [email protected] or follow us on Twitter @firstround and @brettberson

Jane Davis:

When I start hearing a theme, I write it down, partially because writing it down makes me more cognizant of the fact that it is appearing in my head. And then, the next thing I ask myself is, what would need to be true of the next several interviews for me to feel like this wasn't a theme? And I will, oftentimes, seek out information that goes against what I've heard to pressure test it. So, if I think I'm hearing a theme around recruiting being the most difficult part of a research project, I will deliberately go in the opposite direction, where I will say, "Tell me what the easiest part of recruiting is," or, "What's the most difficult part of your research process?" so that I'm giving people the opportunity to give me an answer that I'm not expecting.

Brett Berson:

Welcome to In Depth, a new show that surfaces the tactical advice founders and startup leaders need to grow their teams, their companies and themselves. I'm Brett Berson, a partner at First Round, and we're a venture capital firm that helps startups like Notion, Roblox, Uber and Square tackle company-building firsts.

Through over 400 interviews on the Review, we've shared standout company-building advice, the kind that comes from those willing to skip the talking points and go deeper into not just what to do, but how to do it. With our new podcast, In Depth, you can listen in to these deeper conversations every single week. Learn more and subscribe today at firstround.com.

For today's episode, I'm really excited to be joined by Jane Davis. Jane has worked at an incredible stack of startups, and has tons of experience at different company stages. These days, you'll find her at Zoom, where she's the Director of UX Research and UX Writing.

Previously, she headed up UX Research and Content Design at Zapier, and managed the growth research team at Dropbox. I'm going to start off by admitting that I really nerded out in this conversation, and used it as an opportunity to ask her all of the tricky questions I've always had about customer development. And Jane's responses are spot on, giving a great window into the day-to-day work she does and the best practices she leans on.

Throughout our conversation, Jane walks us through the end-to-end research process, from clarifying your goals and asking the right questions, to selecting participants and synthesizing insights that can inform product building. We also cover the different phases of building. We start by going through how she applies her playbook in the early stage startup context, when you're shipping the first version of your product and you don't yet have the resources to invest in a full-time research team.

We also dig into challenges such as deeply understanding the problem you're solving, taking on a competitive or greenfield market, and figuring out willingness to pay. We then get into best practices for prototyping and iterating, as well as some of the common roadblocks startup leaders face later on, including how to build for multiple users and what to do when people aren't excited about your product or using it frequently.

What I loved about our conversation is that, even though Jane gets into some big concepts like confirmation bias or stated and revealed preferences, she also does a spectacular job of diving into the brass tacks of how to approach this work. She shares tons of tactical pointers, from the specific go-to questions she always asks customers, to why you should always pay for transcripts of your user interviews.

Whether you're talking to potential customers before you start a company or are looking to get better feedback from your current users, there's tons of insights in here for founders, product builders, and design folks alike. I hope you enjoy this episode, and now my conversation with Jane.

Thank you so much for joining us.

Jane Davis:

Oh, thanks. It's great to talk to you today.

Brett Berson:

I thought we could start by maybe going in reverse company size order, and start with some of the things you've learned and the applicable ideas and frameworks that fit really early stage, and then we can grow through company maturity.

And one of the reasons I'm excited to talk about these topics with you is, you've operated at so many different scales.

Jane Davis:

I wind up all over the place, as it turns out.

Brett Berson:

On the early stage side, I thought a really interesting place to start might be when you're spending time with founders and CEOs of five person or 10 person or 15 person companies, and maybe they're about to ship their first version of the product or they have their first 50 customers, and they're not going to build a user research function at nine people, but they want to infuse a lot of the things that you do, oftentimes, at scale into a really early stage company, in terms of talking to customers, taking that information, organizing it in a way that can inform where they go, and ordering that information as well. I was interested to talk about, if you're teaching a really small company how to take some of these ideas and implement them, what are the types of things that you would be explaining to them?

Jane Davis:

Sure. I think, a lot of the time, when I'm working with companies that are very early stage, I really want them to focus on a deep understanding of a problem. One of the things that can make that difficult with startups is that you oftentimes are starting from a solution. You're like, "I have a great idea for solving a problem." And that can be true, but the thing that's most interesting to me there is always whether you've identified a problem in need of a solution. That is the much harder part, to me, and the part that offers a lot of really rich opportunity.

And so, stepping back from discussing concepts, from putting solution ideas in front of potential customers, and really just deeply understanding the actual problem you're trying to solve, and the people for whom you're trying to solve it. It can be really difficult when you've got, what feels like, a really strong idea in your head to step back and say, "Let's actually move away from what has felt like progress, in the form of coming up with this idea, and actually spend a lot of our time, really, just deeply understanding the space we're trying to operate in. And understanding, not just the specific problem that we think we're solving, but everything that goes into the rest of it, as well."

So, a great example here is thinking about the workflows that surround a single solution. I have a friend who was putting together a FinTech tool, where they were really looking at this one specific part of a workflow, and they had what they felt like was a really strong solution for it, and they were really having a lot of trouble getting traction with customers. And after talking to them, the thing that kept coming up was that, people weren't looking for just that one part of the workflow to be solved, what they really wanted was a much more end to end solution, so that they weren't constantly transferring from tool to tool to tool. And so, really understanding what it is people are actually trying to accomplish and the problems that you're trying to solve for them, rather than saying, "Hey, we think we've got the solution right here."

I like that approach. One, because it helps you avoid confirmation bias. But two, it actually sets you up for later growth and innovation in the same space, where if you're really deeply understanding the problem and you do have a strong solution for it, then you can also understand what the next opportunities are in this space. So, you can say, "Okay, we're going to build this one solution first, but we know that there are opportunities in these three other places that are related to the solution we're building, and we have these built-in growth opportunities for our company and these built-in product expansions or other lines of business that we could be pursuing, if we're successful with this initial one, or an opportunity to pivot."

And so, I think of this kind of research as, what we call, digging the well. It's very much about creating this repository of insights that you can draw for years and years and years, beyond just that initial round of customer visits or user interviews where you can say to yourself, "If we really deeply understand the problem space, then we don't have to invest in understanding another small piece of it later."

That's the number one thing I really try to get early stage companies to focus on.

Brett Berson:

Can you share the how behind that? So, you talk to a founder and they do this classic thing where they jump into the solution and they tell you how excited they are about it, and you explain to them and you're like, "You need to be deeply rooted in the problem and the customer." And they're like, "Great, totally agree." And then, you give them your playbook to the how behind that. Can you explain what that looks like?

Jane Davis:

Yeah, absolutely. So, the first thing is that, it never ever looks like showing a solution to a potential customer, so even if you do want to do solution validation, I never do it as part of this particular round of research.

The playbook I try and get founders to run is very much start from absolute zero, start from assuming you know nothing about how this person is doing their job or about this person's life, and really just try to get the most detailed picture you can of how they get things done. And so, for very, very early stage startups and founders, I think one of the most fruitful things can be to just try to understand all of the workflows that go into someone's job and how they get their work done. Or if you're moving into the consumer space, how they spend their time outside of work, because you never know what kind of insights you're going to uncover. And so, leaving it broad gives you the opportunity to find out things that are relevant to you, that you wouldn't have thought to ask.

As much as possible, I advise founders to keep it incredibly broad, and then dig in. And so, my go-to question in my playbook is always, tell me what you did yesterday. And I will have people just walk me through their day, from start to finish, and we can go for an hour or longer just digging into that. And so, they'll be like, "I picked up my phone and just glanced through my email." And then, we can start asking about, what did you do when you were glancing through your email? Did you respond to anything? Is that something you do every single day? And then, just digging into all of those different moments.

And I always have a set of things that I want to walk out of the interview knowing. I usually have three to five bullet points, and those are the things that I want to understand. So, it could be something as specific as, have you ever paid for custom content on any of the platforms you're on, like Facebook stickers or a Discord Nitro pack or things like that. Or it could be as broad as, do you ever share files with people outside of your organization, and digging into how they do that.

And ideally, from there, you can form a relationship, so that you can say, "This was so helpful. We'd love to come back when we've got some more ideas about how we might solve some of these problems for you, and show them to you," so that you aren't having to re-recruit over and over again to talk to people.

I really do focus on these, what can feel like meandering or indirect path interviews, because they give you the chance to get the full picture of the problems someone is trying to solve during the course of their day, and it also lets the participant lead. So, rather than saying, "Oh, tell me about all of the problems you have related to this topic," they might not even think of the thing you're trying to do as a problem. And that's quite telling as well: if somebody isn't bringing that up as, "Oh, this is a real pain point for me," then it's time to examine whether you're even looking at the right problem space. Will someone pay for something if it doesn't feel like a pain point? That's always an important question.

Brett Berson:

Are there other questions that you find, particularly, illuminating, or is it all about the follow-up? So, if somebody says, "I log into Salesforce." And then, you say, "Well, why did you log into Salesforce?" Is that the primary way? Are there other types of questions or ways you might use that time to generate the most useful insights?

Jane Davis:

It really depends on where the company is. But a few questions that I particularly enjoy asking to uncover opportunities, one is, tell me about the last time you thought there's got to be a better way to do this. I really like that question because it gets people thinking about, what is the last time that my technology let me down or that I encountered a problem in my life that I thought should be solved, and it wasn't. Those are really interesting opportunities. So, I always like to say, "When is the last time you thought something should be possible, and it wasn't?"

I think the other generic ones that I like are, tell me the last time you had a really good experience. And I don't say with technology, I literally just say, "Tell me the last time you had a really good experience." Because it will tell you about the things that they value. And that's another important one to know is, what types of things do people value in their experiences and their interactions? Because that can inform a lot of the design decisions that you wind up making, and where you wind up investing your resources, in terms of what you build and how you build things.

So, those are two that I kind of find myself going back to over and over again. Also, when I'm trying to understand more specific workflows with people, I'm a big fan of the magic wand question, which is, if you could wave a magic wand and change one thing about how you're doing this today, what would it be? And that's another one that gets you at more targeted opportunities, as well as the things that people most care about solving in the way they get work done.

Brett Berson:

If we were going to focus on more specific questions, let's take a fictitious example, like you and me are going to go create a new piece of software to help user researchers do their job better. Something broad like that. And you're starting to do this very early customer understanding work, how would you set that time up? Are there specific areas that you would go outside of these core questions that you outlined?

Jane Davis:

I always base my research on, what are the decisions we need to make, based on what we find out? And that's the genesis of those. What are the three bullet points we're going to walk out of here with?

And so, for building a new user research tool, the top things I would want to know are, how are the people getting research done today? What parts of that are working well? And what parts of that are not working well? And so, I would start by saying, "All right, walk me through the last research project you did." And then, they would say, "Oh, well, I set it up, I started by pulling an email list of our participants." And I'd say, "Oh, all right. Well, what tools did you use for that? Tell me a little bit about how that went. Were there any problems? Did you encounter any stumbling blocks?"

And so, with that one, I really like to make them either workflow- or project-based, because you can start digging in, in that way, so that it's a little bit more targeted, but it's still a storytelling experience. And so, every step of the way can be like, "What I'm hearing you say is you normally start by pulling an email list and putting it into an Excel file. Tell me what you do next." Slowly creating the story and feeling out the way they feel about each point in it every time.

By the end of that, ideally, I should have a full picture of their research project workflow. From the very beginning, how do you get research projects? Where do the ideas come from? How do you know you need to kick one off? All the way through, what do you do with your findings once you've finished the research project? What happens with them, then?

And then, oftentimes, I will even have sketched out that narrative arc as we're talking and I'll say, "So, if you had to circle the part in this project that was the most difficult, tell me which part that is, and why." I do the same thing with, which part of this was easiest? Which part of this required the least effort? Why was that?

And then, I will get at the wish questions, as I think of them, which are always like, what do you wish you could do right now, that you can't? Or tell me about your ideal experience. And that's not because I'm trying to get their solution ideas, but because, again, I'm trying to get at what they're trying to accomplish, and what they value most, and what is most difficult.

I have a general rule that I don't listen to any solution ideas that come out of UX Research interviews, which is probably a controversial thing to admit on a podcast like this. But really, I try to instead ask why they are coming up with that solution idea. So, I always ask, "Oh, well, what makes you say you would want a tool that does that? What is it about that idea that would be so helpful to you?" And really, digging in anytime someone is working on that level of solution, to get at the underlying problem.

Brett Berson:

Why do you discard the solutions? Is it the belief that people just are good at identifying their own problems but not the right solution?

Jane Davis:

No, it's because I don't want to anchor to any given solution when I am in that exploration phase. When I say I discard them, what I mean isn't I forget that I ever heard it, it's that I, specifically, set it aside until I am done fully understanding the problem.

I don't include them in research write-ups, I don't really include those things in my readouts or anything like that, it's just I set them in a parking lot where it's like, here are some of the ways in which people have suggested that these problems get solved. But right now, what we're really trying to do is understand the problems to be solved themselves. And at that point, solutions can act as a distraction.

Brett Berson:

Do you often have to sort through when people are unintentionally lying to themselves or to you? Is that part of the way that you kind of root around in these conversations? An example that comes to mind for me is, you're talking to someone and you're asking them about their Netflix consumption, then you ask them what are the types of things that you like to watch? And they're telling you history documentaries, and then if you actually look at what they watch, it's Too Hot to Handle or something else, like the classic revealed preference idea.

I would just assume, there are so many biases... Not even a malicious desire to lie to you, but that you kind of have to sort through, in some way.

Jane Davis:

Yeah, absolutely. And that's where, I think, really, the absolute best possible relationship to have in an early stage company is analytics and qualitative research. That is my one true pairing, and that's because you can understand both those things. And I try never to discount the revealed preference versus the stated preference, because the stated preference tells you a lot about a person as well.

And so, understanding how a person wants to view themselves is actually incredibly valuable, because it tells you a lot about how you want to present them back to themselves in your product. I really value those moments where people are telling you that best version of themselves, because in many ways, a lot of opportunity, as we are designing and building products, is to help people feel like that version of themselves.

And so, when someone is saying, "This is the kind of person I want to be seen as." Even if you want to make sure that you still give them the opportunity and the ability to watch Too Hot to Handle, presenting them with historical documentaries as part of their front page on Netflix makes good sense because it fits with how they want to project themselves onto the world. And if you can mirror that feeling back to them with the product, while still satisfying their actual preferences, what an incredible opportunity.

Brett Berson:

I also don't want people to think that all I've been doing is watching the last two seasons of Too Hot to Handle.

Jane Davis:

No judgment if you have. It's been a long year.

Brett Berson:

It's been a long pandemic, so if you are looking for something on the corners of the internet that'll really suck you in... Both seasons are pretty interesting. You can also learn a lot about people, as sort of an active anthropology. Anyway.

One of the areas we didn't touch on in this process is actually selecting the people that you're going to talk to or study. And I'm curious to get your thoughts on how to approach that. And maybe we could go back to our fictitious user research product. Obviously, there's something very different between talking to an individual contributor user researcher and talking to someone who has a team, or a team of teams, an early stage company, a late stage company, someone that's earlier in their user research career or later, or people that aren't user researchers but do some of that work in their job, maybe a product manager.

And so, how do you think about that process of having divergent conversations with divergent ideal customer profiles, and how that might evolve over time?

Jane Davis:

Yeah, that's a great one. And I think one of the things that I see a lot of startups struggle with is, the tension between casting a wide net for product market fit versus selecting your market first. And obviously, there are advantages to both of them.

When we're thinking about our fictitious user research startup, I think the question there is... There are a few initial questions that the company could ask itself. We could say to ourselves, do we believe there's an opportunity in helping career researchers inside companies do their job? Do we believe there's an opportunity in helping non-researchers conduct research, and sort of segmenting our opportunities out that way?

And I'm a big believer in secondary research and lit review and market research, as much as possible, and kind of leveraging work that's already been done. And so, if you can find materials that help you size each of those different opportunities super broadly, you're like, "Oh, there are a lot of other startups that are already operating in this one market, but this other market is underpenetrated and there's no tool that people are saying is really satisfying this need."

So, starting from that kind of secondary research and seeing if you can narrow down your potential markets at all saves you a lot of time, because ideally, once you've narrowed that down, you want to talk to quite a few people from each of those different potential markets and those different segments. And so, if you can narrow it down, at least, even with just some best guesses, and say, "Okay, we're going to prioritize our markets this way, we're going to prioritize non-researchers, and then we're going to prioritize freelancers or people who run consulting firms, and then we're going to prioritize in-house researchers at businesses of this size, because we think the opportunity is greatest in those three, and they're all very different." So, the more you can make some difficult decisions upfront, you, at least, are scoping your research effectively and making it easier for yourself to go out and deeply understand something.

For all that I talked about, how important it is to, really, broadly, understand the problem space, the thing that makes that digestible and useful is when you've already identified a group of people with something in common, that you want to work with.

From there, I'd probably say, "All right, let's talk to six or seven different folks who are non-researchers that conduct research, and let's understand how they're doing it today. What tools exist? What problems do they have? Is this enough of a problem that we think they might be willing to pay for a tool to do it?" We could go into a complete tangent on monetization strategies and trying to identify monetization opportunities early stage, but for the purposes of this, let's just assume we want people to subscribe or otherwise pay for our service directly. And so, if we can create those segments, and then deeply understand each of those segments' different needs and opportunities, that lets us say, "Oh, wow. Okay. We thought that there was an opportunity in this market, but it turns out they're doing fine with the tools they already have." Or, when we started looking at this market that we thought was our third-best opportunity, that's actually where things are turning out to be most interesting.

Brett Berson:

So, once you have these conversations, and ideally you've done them in a high quality way, what is the process to synthesize that into insights that can actually drive, at a macro level, what you're building, or maybe the work you're doing on a new potential feature within an existing product?

Jane Davis:

Now that I am further along in my career, I play extremely fast and loose with synthesis and it mostly takes place in my head, that is not an endorsement of that way of doing things, by any means, it's just by way of saying there's no one process that will work for every single person. But the way that I mentor people in doing it, and the way I always recommend doing it is, get transcripts. Whatever else you do to save money in your early stages as a startup, please, please, please record and transcribe your interviews with users and customers and people, so that you can go back and identify and pull out these common threads. It just makes everything so much easier.

And so, when I am personally doing a more rigorous synthesis process, I will have the key things that I wanted to learn from each of the interviews, and I'll usually set up a document that just has headings, and then I will pull out all of the information from the transcript and just copy and paste it over into those sections. And then, once I've sectioned everything like that, I'll go through each of the sections and look for common themes. In recruiting, a pain point might be waiting on someone else to pull an email list. Or in post-project, it might be figuring out where to put the research findings so that someone else can find them six months later. And then, I will just put all of the things from the transcript that I heard into those different themes or stages, and look for common threads.
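If you wanted to make that sectioning-and-theming step a bit more mechanical, here's a minimal sketch of one way it could be scripted, assuming you've hand-tagged transcript excerpts with a workflow stage and a candidate theme as you read; the participant IDs, stages, themes, and quotes below are invented for illustration, not taken from any real project.

```python
# Illustrative only: group hand-tagged transcript excerpts by workflow stage,
# then count how many interviews each candidate theme showed up in.
from collections import defaultdict

# Each excerpt is (interview_id, stage, theme, quote), tagged by hand while
# reading the transcripts; everything below is made-up example data.
excerpts = [
    ("p1", "recruiting", "waiting on email lists", "I lose a week waiting on the list pull."),
    ("p2", "recruiting", "waiting on email lists", "Someone else owns the CRM export."),
    ("p3", "post-project", "findability of old research", "Nobody can find my report six months later."),
]

by_stage = defaultdict(list)
for interview_id, stage, theme, quote in excerpts:
    by_stage[stage].append((interview_id, theme, quote))

for stage, items in by_stage.items():
    print(f"\n## {stage}")
    theme_counts = defaultdict(set)
    for interview_id, theme, quote in items:
        theme_counts[theme].add(interview_id)
        print(f"- [{interview_id}] {quote}")
    for theme, ids in theme_counts.items():
        print(f"  theme: {theme} (heard in {len(ids)} interview(s))")
```

A theme that only one participant mentions stays visible but low-count, which is exactly the "might be a theme, but not validated yet" state Jane describes below.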

And if I'm not finding any common threads, that's actually a really interesting thing to note, as well. It means that, either I haven't segmented my market as effectively as I thought, so I'm not finding people with anything in common, or it means that we're not turning up any major opportunities because nobody has these same pain points.

Humans are so good at finding patterns. It's one of my favorite things about people: we are really, really good at finding patterns, often to our own detriment. We find patterns where we shouldn't necessarily. But I think if you've gone through three interviews and you start thinking to yourself, "I think I'm starting to hear a theme around this topic," what I will oftentimes do is create an ongoing synthesis document where I start putting those "might be a theme, but I haven't validated it yet" kind of things. And so, I will write all of those down and just be keeping an eye out for them when I start doing my more formal synthesis.

For example, I've been doing some research on custom content, and how and when people use custom emojis and things like that, and I've only done two interviews, but I've already heard a few things that, I think, I'm probably going to wind up hearing as ongoing themes. I actually keep an erasable whiteboard on my desk, and I just have them written down on there, so that I can go back and include those in my synthesis process.

Brett Berson:

Is there anything you do to make sure that you don't create some level of confirmation bias, where you start rooting around and hoping and pushing other people to say similar things?

Jane Davis:

As much as possible, when I start hearing themes, I don't chase them. I don't change my interview process at all. I ask the same questions and keep the same structure, even as information is coming in. And this is really the tactic I would use for these very early stage, very broad interviews. Obviously, there are situations where you would want to use what people are telling you from interviews, like when you're doing rapid iterative evaluative testing and updating prototypes in between sessions and things like that.

So, the first thing I do is, I don't change my interview approach at all, even as I hear themes starting to come out. The second thing I do is, when I start hearing a theme, I write it down, partially because writing it down makes me more cognizant of the fact that it is appearing in my head. And then, the next thing I ask myself is, what would need to be true of the next several interviews, for me to feel like this wasn't a theme? And I will, oftentimes, seek out information that goes against what I've heard to pressure test it.

So, if I think I'm hearing a theme around recruiting being the most difficult part of a research project, I will deliberately go in the opposite direction, where I will say, "Tell me what the easiest part of recruiting is," or, "What's the most difficult part of your research process?" so that I'm giving people the opportunity to give me an answer that I'm not expecting.

Brett Berson:

So, once you start to collect these themes, what does that process look like then to translate it into product?

Jane Davis:

That's my favorite part, and it is one of the reasons that I love working in-house with companies, because the best possible way to do this is to have your product and design and engineering folks with you throughout the course of the research, so that all of you are working together to identify these themes and come up with ideas for what they mean for product.

Now, that is not always feasible or even desirable, depending on how quickly your company is trying to move. So, the number one thing that, I think, helps with this is, starting by deeply understanding a team's roadmap, and what they're trying to accomplish, and what decisions they're trying to make. And so, the first thing I ask teams when they come to me with research ideas or when I'm trying to help them create a research roadmap is really, what decisions do you want to be able to make in the next three months? What decisions do you want to be able to make in the next six months? And what decisions do you want to be able to make in the next year?

And that's really going to inform both how we design the research, but also how I think about the recommendations I make, and how I bring the information back to the team. So, I try very hard, as a researcher, to understand the realm of the possible when it comes to what we are doing in the product side of things. So, that means, not just how much time do we have between getting the research findings and trying to put something out into the world, but also what are our technical limitations? How constrained should my recommendations be? Are we going to build on mobile? Should I be thinking about multi-platform? Or are we going to try and keep it to a single platform to start?

And so, really understanding the actual nitty-gritty of the things the product team is worrying about day-to-day, informs a lot of how I come back with recommendations. Usually, I will come back with, here are the big opportunities that exist in this particular set of workflows. For our fictitious startup, it would be, our number one opportunity is helping people manage the recruiting process. The second-biggest pain point is helping people organize and find old research that's been done.

Really, I like to call those out very high level like, here are problems we could solve for people. And then, in my findings, I will start saying, here are some of the ways that we might go about solving these problems. But whenever possible, I like to get the team into the room and say, "All right, we heard that they want us to solve this specific problem. What are all the ways we could solve this problem for them?" And really, just opening it up so that everybody is working through the solution space together. And I think it is so vital to have all of the functions in the room, for that discussion, because each of them is responsible for a different set of constraints and a different set of parameters. But as much as possible, I do like to be part of that, so that I can help add nuance from the findings where there is opportunity for it.

Brett Berson:

How do you, then, as you're going down maybe this prototyping and building phase, how does that customer work fit into iterating on a new product?

Jane Davis:

It will really depend on what we have set out to deliver in the first phase. One of the things that can really shortcut the iteration process, a way you can avoid having to do a new round of research every single time you come up with a new prototype or a change to your prototype, is to take the time, at the very outset, to create a set of design principles and a set of heuristics for evaluating updates. Jakob Nielsen has a great set, and Abby Covert also has a really strong set of heuristics for evaluating product. As for design principles, those I think teams should create for themselves, because they also reflect the values of a company and how a company wants to appear to its users and to the world, the promises it wants to make to people.

But those two tools, a set of heuristics and a set of design principles, mean that you have an agreed upon set of things that you are trying to accomplish with every single iteration. And so, rather than usability testing every iteration, if you have those things, the team can do that evaluation process by itself. It can go through, and it can just be working through it, and it can take a new change to the workflow and say, "Oh, does this meet our standards for clarity? Does this meet our standards for findability? Does this meet our standards for demonstrating kindness?" if that's going to be one of your design principles, or something.

Brett Berson:

For those that aren't as familiar, can you explain heuristics or give some more examples of those?

Jane Davis:

Yeah, absolutely. Heuristic actually means enabling someone to learn something themselves. So, heuristic evaluation is something that enables you to evaluate your own product, rather than having someone else evaluate it. So, it's self-discovery.

A lot of them are like, "Is this accessible? Is this clear? Is this concise?" And the heuristic evaluation also contains a definition of what clear means in that context, like, can this be understood by someone at a third-grade reading level?

So, it's important to have very specific definitions for your heuristics, so that you are all evaluating with the same standard. But basically, it's like a grading rubric, where you're like, "Okay, they met these four criteria and they missed these five criteria, and so they get a four out of nine."
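To make the rubric idea concrete, here's a small hypothetical example of scoring a single iteration against a handful of heuristics; the heuristic names, definitions, and pass/fail results are invented for illustration, not drawn from Nielsen's or Covert's actual sets.

```python
# Hypothetical grading rubric: each heuristic has a concrete definition so
# everyone on the team scores against the same standard.
heuristics = {
    "clear": "Understandable at roughly a third-grade reading level",
    "accessible": "Meets the team's accessibility checklist",
    "concise": "No step requires reading more than two sentences",
    "findable": "Key actions reachable within two clicks of the home screen",
}

# One evaluation of a single iteration (True = criterion met); made-up results.
evaluation = {"clear": True, "accessible": True, "concise": False, "findable": True}

met = sum(evaluation.values())
print(f"Score: {met} out of {len(heuristics)}")
for name, definition in heuristics.items():
    status = "met" if evaluation.get(name) else "missed"
    print(f"- {name} ({status}): {definition}")
```

The point isn't the arithmetic, it's that the definitions are written down, so the team can grade its own iterations without running a usability test on every change.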

Brett Berson:

Moving in a slightly different direction, I'm curious how this kind of customer work, these customer conversations and interviews, looks similar or different when you're talking to a user of your product.

Jane Davis:

That really depends on what you're hoping to get out of the conversation.

There are a few different reasons to talk to... I mean, there are a lot of reasons to talk to users of your product. Companies should be doing that on a fairly regular cadence, just to check in and see how they're doing on meeting user needs. But it really depends on the specific purpose. There are a few, broadly speaking, big situations for this. One is, you have an existing product and you are interested in expanding your product offering. So, you either want to identify a new product opportunity or you want to build some new functionality into your existing product. And in that case, you really want to understand where your product is situated in their broader work, so how it fits into their life, what problems it's solving for them, what problems are nearby that problem, that you could potentially solve, or what problems are related?

An example from my time at Dropbox was, when we were looking at people who were frequently sharing with folks outside of their organization, because we knew that people were already doing their own work inside Dropbox and working with Dropbox inside their organization, but we kept hearing pain points around sharing with people outside organizations. And so, anytime something is a pain point, that also means it's an opportunity. And in that case, you can get a little bit more targeted like, tell me about how you used Dropbox today, or tell me about how you use Zoom today. How often are you using it? Walk me through the last meeting you had, and who it was with? And all of that.

There are also times when you really just want to do a check-in and make sure that your product is actually meeting the needs that you think it is. Jobs-to-be-done research is probably the top example of that, where you're understanding the key value that people get from your product, the thing that's actually making them put their credit card into your form. That's more about the purchasing journey, what led you to actually buy this product. And so, it starts from the moment they heard about the product.

Brett Berson:

I'm curious if you have any specific advice, in terms of this work for companies with an early product that people like but aren't fanatical about, how you might get to a root cause. An example would be, we do customer calls all the time when we're looking at investment opportunities. And sometimes, you talk to customers about something and they're like, "This is a game changer. I can't imagine life without this." Other times, it's like, "Yeah, it's good. I use it sometimes. It's helpful." And I think there's lots and lots of companies and products that have built something that people like and find value in, but they haven't cut through or they haven't connected or delivered value, in a way, for the customer that they are evangelical about.

Jane Davis:

That is such an interesting place to be. The first thing I do in that situation is really try to understand the root cause. If this is solving a problem reasonably well for people, why aren't people that excited about it? And there are some products that just aren't that exciting, so some of it might just be a factor of like, "I don't want a product to be exciting, I just want to set it up and forget about it. I'm never going to rave about it, but also I'm going to be deeply invested in it."

So, I think the thing I always try to separate out there is, are people not enthusiastic about it because it meets a need they have, but it's just not that exciting a problem to have solved where you're like, "Oh, I just want this to run in the background, but I'm not going to be wild about it."? Network attached storage is probably a good example here where I'm like, "I need my backups, but I'm never going to be like, I am wild about this backup service."

And so, understanding, is it just the market that they're playing in? Is it the actual type of solution? Or is it how they have solved the problem? And I think a lot of companies wind up in this space, because they have partially solved a problem. That, to me, is the most dangerous thing.

And so, if I am talking to a company who achieves some success but doesn't have that super solid following, I will immediately try and understand if it's because they're solving only part of a problem for people, because that's when people will leave you.

Brett Berson:

And how do you figure that out?

Jane Davis:

Usually, I will try to do customer interviews along the lines of like, "Well, tell me what this product is doing for you. Tell me about how you use it. What other tools do you use in conjunction with it?" I will also ask about what it's replaced for them. And the most common thing there is that it's not replacing any tool, it's replacing something they used to do manually. It's the first thing of its kind that they're using, and that's why they started using it in the first place. I will ask, "What made you start using it? Did it replace anything? If you could wave a magic wand, what's one other thing it would do for you?" Because that kind of starts getting you at the other part of the problem that the product isn't solving. And then, what's one thing that you wish you knew about this product?

A thing that I used to hear, all the time, when I was doing research about Dropbox was, I know it can do so much more, but I don't know how to find out what. And that's actually this incredibly common thing. It's been a common thread when I've done research for other companies, when I've helped advise startups that are starting out. A lot of the lack of enthusiasm turns out to be a lack of education and a lack of awareness.

And so, if a product isn't solving an entire problem for someone, chances are, it's not because you didn't build that part of the product, it's because you haven't helped people get fully onboarded, you haven't delivered all of the value that they're trying to get from it.

Brett Berson:

Are there any other patterns you've seen when you're trying to diagnose a product that maybe someone uses once every other day, where when you think, if a customer really loves this and finds full value, they're using it five times a day. I'm particularly interested in the idea of expanding or increasing the value you're delivering, versus maybe decreasing the friction to get that value.

Jane Davis:

Yeah. This is a really interesting one because I think one of the problems I see a lot of companies having is conflating frequency with value. Having worked in growth for a long time, one of the key metrics that product teams frequently set is how frequently someone is engaging with the product.

But one thing I've noticed is, depending on the problem you're trying to solve for someone, frequent engagement could actually mean that they're not getting the value they want to, from your product. Zapier, for example, is an automation product. So, arguably, if someone is engaging with Zapier on a daily or hourly basis, that might mean that they're building more Zaps, but it also might mean that they're having trouble getting it set up in a way that is delivering the value they're looking for.

So, I usually start from, how do we actually measure and understand value delivered to users? And to do that... It's going to be weird, Brett, but most of my solutions to these problems are going to be research.

Brett Berson:

When you're a hammer, everything starts to look like a nail.

Jane Davis:

It's true. I hope that becomes my nickname in the research community, is the hammer.

Brett Berson:

Someone's coming to you and they're having an issue with their significant other, they're thinking about breaking up. I can see how you could slot user research right into that.

Jane Davis:

100%, everything's a research problem, if you look at it, just a little bit.

Brett Berson:

Because all problems are people problems.

Jane Davis:

Exactly. Yes. And my whole job is people. In this situation, I think this is actually one of those where having robust analytics can be an incredibly useful jumping-off point, because it depends on how much longitudinal data you're able to get. If your product launched two months ago, you're probably not going to be able to do really useful time-based modeling, but you can, at least, say, what are the things people are doing over and over again in our product? Or where are people spending most of their time? And then, using that to start understanding why they're doing it.

So, if someone is logging into your product three times a day, are they doing that because they're actually getting value from the product, or are they doing that because they're trying to get the product set up and they haven't been able to? Or are they doing that because they just accidentally keep closing their browser window or something?

One of the questions that I ask in interviews when trying to explore the value someone gets from a product is, how would you do this if you didn't have this product? Walk me through what you would do if this product never existed. Because that's going to tell you a lot about what they would actually miss about you, and what your product is doing for them, and the areas where you are delivering value.

Brett Berson:

Does this work change if you're operating in a market with lots of existing solutions, versus this is kind of a greenfield type market? One example would be, you're building a new messaging app for iPhone versus you're building a teleportation device.

Jane Davis:

First, I would say, there's probably a credible reason that you thought to yourself, we can build a messaging app for iPhone and be successful. And so, the first question there is, what is your hypothesis? Why do you believe that this is a good idea? That hypothesis should be, we believe we can deliver some kind of unique value that is either a smoother experience or... There are any number of reasons.

So, I think with the crowded market, you start from a more narrow hypothesis, where you are starting in comparison to something else, where you're like, "We want to provide more value relative to someone else." In a greenfield, it's really about starting to prioritize the opportunities where you're like, "Oh, a teleportation app. Well, okay, what is the most valuable thing about teleportation? Is it being somewhere instantly? Is it the fact that you can leave somewhere instantly?"

But ultimately, at the end of the day, you're trying to understand the same thing which is, what is this going to do for you, that nothing else does? In a super crowded field, that's going to be a much more specific answer. It's going to be like, "You let me have this very specific set of experiences." or "I really like the color combinations that you use in your messaging app or something like that." And in a teleportation app, it's going to be, you let me teleport.

Brett Berson:

I'm really enjoying this conversation because I'm getting to ask you all of the tricky questions I've always had about customer development and how this all ties together, which is one of my favorite topics. So, I'm quite delighted.

Going down this just a little bit more, when you're thinking about building a product in a competitive space and you're thinking about these points of differentiation, how do you figure out if the thing is differentiated enough to get someone to switch? An example would be, you're doing customer development and you're creating a new messaging app. This is five years ago and there wasn't disappearing messages, and you're thinking about building a product that allows you to do that. How do you figure out if that point of differentiation or that new piece of value is meaningful enough for the customer, that it's going to get them to switch and love your product?

Jane Davis:

That's such a tricky one because humans are such poor predictors of our future behavior. The best example I always give here is, if you ask me if I'm going to go to the gym tomorrow, I'm going to say yes. I may or may not go to the gym tomorrow, it's like a 30/70% chance. And so, I try to break it apart. So, rather than thinking about, what would get someone to switch? I try to break it into, what are all of the things that would prevent someone from switching, or what are all the barriers to this happening? What are all the things that need to go right for someone to make this decision?

And so, at that point, it becomes a set of smaller and more manageable questions, right? In order for someone to make the decision to switch, the price would need to be accessible enough for them. The onboarding experience would need to be easy enough for them. The importing and installation experience would need to be low friction enough. Then, it becomes a question of degrees, and so it becomes a research project, not about the new app that you're asking someone to use, but a project about how they're already doing things.

I would probably, actually, start from a competitor research standpoint, where I would just sit down with people and say, "Okay, walk me through your current messaging app. Have you switched messaging apps before?" And I would probably want to talk to people who had switched messaging apps before, and to people who hadn't installed a custom app. The people who hadn't would probably be, well, a lot of people just being like, "Yep, the default's good enough." If you were struggling to find people who have ever switched messaging apps, that tells you that the inertia there is so great that the opportunity might not be what you think it is. That's a situation where I would probably do some market landscape research around it. I try to get really aggressive with pressure-testing my market sizing, and be really, really conservative.

How many people do we actually believe we can get, realistically, in a field where no one has ever engaged in this behavior before? But if you are in a situation where you're like, "Okay. Well, people have installed custom messaging apps before, let's go ask them what made them do it." And so, finding that analogous behavior, where you're like, "Oh, tell me about when you decided to install this app. What was it about that app? Why did you install that app?" Kind of doing that, "What's your competitor's job to be done? What was it that made you value that app so much that you installed it?" It could be something like, it was just easier, my default messaging app kept breaking, or it didn't have this one feature.

And so, you can at least get at what drives people to get over the barrier to switching. And then, you can start digging into what's missing, and understand whether the new feature you've got is something that people are coming up with wanting on their own. So, they might not think about disappearing messages, but you might ask questions around, have you ever regretted a message you sent? What did you do? Or getting into the situations where the kinds of problems that would lead to you wanting disappearing messages come up.

And that's a situation where I actually do think that concept testing can be really powerful, once you've done a little bit of the initial digging. Just to keep myself honest, I would probably be looking for people to tell me, without being prompted, "Oh, if I saw this, I would switch to this," because I don't want to specifically ask people. So, I usually try to let people say it to me on their own. If I've gone through a dozen concept tests and not one person has said, "Oh, wow, yeah, no, I'd install that tomorrow," I would start questioning my hypothesis there, just because people tend to be really effusive when you show them new concepts, they get excited. And so, if people are not that excited about a new feature, that lack of excitement is actually a pretty strong signal.

Brett Berson:

What have you learned about products that are multi-user, where each user has maybe different incentives? An example might be, obviously, your experience at Zoom, you have end users that use the product on a daily basis. Maybe you have an IT department that's interfacing with a part of the product. Do you just think about them as completely distinct users and their own roadmaps and own jobs to be done, or are they unified, in some way?

Jane Davis:

I think the first step there is actually understanding all of the different people who are using your product. And by understanding, I mean, literally, just having a list of all of the decision makers and who is involved in buying this product, who is involved in administering this product, who is involved in actually using this product? So that anytime someone says, "Well, our users," you can say, "Which users?"

And then, the next part is really... This is going to sound so mercenary, but understanding which of those users are most important to your purposes, as a business. There will be trade-offs, and the number one place trade-offs come up is between people administering the product and people trying to use the product. There's a tension between what end users are trying to do and what admins are trying to do. They all have very different incentives.

And so, starting by understanding these incentives and having a very clear way to make decisions when they are in conflict, to say, "Our number one concern is making sure..." And then, going down the list from there and saying, "Okay. Well, next most important are our end users, because if enough end users complain about a product, that's going to be a real problem." That will help you, in the long run, avoid a lot of situations where people say users, and they mean totally different things.

I see that happen, all the time, at companies with multiple user types, where it's like, "Oh, we're trying to sell something to the chief information security officer using a deck that we put together for their head of marketing," or something like that. They have completely competing goals and things they want to get, and so it's just going to be a very different problem.

A lot of companies don't differentiate, and then they don't make the hard decisions. You have to be willing to say, "We care more about one set of people than another."

Brett Berson:

I wanted to wrap up with just a couple of final questions. Something we haven't talked about in this customer development work is how you figure out willingness to pay, either from existing customers or potential customers. I think it's kind of a topic that comes up all the time when companies are thinking about overhauling pricing or changing the way they approach pricing. Do you have any lessons learned about how to do that really well?

Jane Davis:

I wish I had just a very clear, "This is exactly how to do it" answer. Unfortunately, I don't.

Brett Berson:

That's what I was hoping for.

Jane Davis:

I know. Yeah. I mean, obviously, everyone was... This would be an incredibly highly rated podcast if I had that.

I think the number one lesson I have around willingness to pay is that, it is, again, so dependent on the value that you think you can provide for people. People pay for unique value and they pay in direct proportion to how unique the value is. If this is something they've never been able to do with a product before, their willingness to pay is going to be significantly higher.

And so, really understanding, are you solving a problem they've never had solved before? If so, absolutely, you can start charging at the high end of things. Or are you solving a problem where yours is a nice-to-have solution? There are a lot of different tools for getting at that. One is just scoping the problem, like, how bad a problem is this for you? And then, a second is trying to get at comparables. So, I have never been able to find a way to just straight up figure out, how much would you pay for this specific product? What I have been able to do is come up with a list of comparables, things that are priced similarly, or things that do a similar job, and try to understand people's willingness to pay there.

A favorite of mine is, actually, just asking people to tell me all of the different things they pay for. What are all the apps you pay for? And oftentimes, we'll just go through their phone, and they'll be like, "Oh, I've got the paid version of this and the paid version of this and the paid version of this." And I will ask them to walk me through, why they have the paid version of that. The things I am listening for there are, is there a level of price sensitivity? If it costs less than $5, does that feel trivial? If it costs more than $15, does that feel like it has to be an absolute game changer?

I'm also listening for just the number of things that people are paying for, and that's because I'm trying to understand, are you making a calculation every single time an app comes out? Is it just, "Oh, I don't worry about it, I just say, install from the app store." So, I try to get a sense for people's existing spending habits in the space, whether it's like productivity apps or messaging apps or the broad marketplace, and how much money they've already invested there. And then, I will do a relative value feature exercise where I like to do participatory design for this, which is where you bring users or potential users or just random humans that you like into a room, and you have them help you design a product.

And the willingness to pay approach there is, you give someone a set number of points or dollars or however you want to think about it, and you say, "All right, here's the full suite of features or things you could spend money on, but you don't have enough money for all of them. So, where are you going to spend that money?" Especially if you're wondering which features to highlight or what features might be your marquee features. That's going to give you a really good understanding of the relative value of the different features in your product and your competitor's product.
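Here's a rough sketch of how you might tally that kind of exercise afterwards, assuming each participant gets a fixed budget of points to spread across candidate features; the participant IDs, feature names, and allocations are made up for illustration rather than taken from any real session.

```python
# Sketch of tallying a participatory-design, willingness-to-pay exercise:
# each participant spreads a fixed budget of points across candidate features,
# and the totals give a rough ranking of relative value. Example data only.
BUDGET = 100

allocations = {
    "p1": {"recruiting automation": 60, "transcript search": 30, "report templates": 10},
    "p2": {"recruiting automation": 40, "transcript search": 50, "report templates": 10},
    "p3": {"recruiting automation": 70, "transcript search": 20, "report templates": 10},
}

totals = {}
for participant, spend in allocations.items():
    assert sum(spend.values()) <= BUDGET, f"{participant} overspent their budget"
    for feature, points in spend.items():
        totals[feature] = totals.get(feature, 0) + points

# Relative value: share of all points spent, as a rough ranking signal.
all_points = sum(totals.values())
for feature, points in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {points} pts ({points / all_points:.0%} of total spend)")
```

The forced budget is what makes this useful: because participants can't fund everything, the spread of points surfaces which features they'd actually trade for, which is the relative-value signal Jane describes.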

Brett Berson:

I thought we could wrap up just with any resources or books or... Whatever. For folks that want to go deeper in this area, what's the stuff that's inspired you or you think might be useful for them?

Jane Davis:

The number one book I recommend to anyone who is even remotely research curious is Erika Hall's Just Enough Research. It is the gold standard for conducting your own rigorous but not overly academic research, in order to understand a problem and a set of users. And it covers the basics of methodologies; it's very much a practical, hands-on book. If you're a reasonably quick reader, you can get through it in an afternoon. And she is probably my favorite person working in research, because she is both rigorous and pragmatic about the different demands of product development and the way companies actually operate.

So, that's Erika Hall, Just Enough Research. If you're going to do any reading on the topic, start there. And that alone would make you into, probably, a decent enough researcher for me to consider hiring you.

Brett Berson:

That is an evangelical reference for something, so I think she has some level of product market fit with you, which is a great place to end.

Jane Davis:

Thank you. I loved talking about it, and really loved the questions. These were tricky, and I love a hard question.

Brett Berson:

Awesome. Well, thank you for doing this with us. This was so, so great.