A UX Research Crash Course for Founders — Customer Discovery Tips from Zoom, Zapier & Dropbox

Zoom’s Jane Davis answers all of your tricky customer development questions, creating a highly tactical guide for founders flying solo on UX research as they explore startup ideas, validate concepts, iterate on prototypes, and troubleshoot common product problems.

The pre-product window, when founders are still exploring ideas, is one of the most consequential periods in company building. There’s plenty for aspiring entrepreneurs to focus on in these nascent stages — from finding startup ideas and linking up with the right co-founder, to talking to users and shipping an MVP as quickly as possible.

You may have noticed that we’ve been covering these topics more frequently as of late here on The Review. That’s because we’ve increasingly found that while the broad areas that budding founders need to tackle are quite clear, the nuts-and-bolts execution of each item on the starting-a-company punch list is messier in practice.

Take talking to users, for example. In our experience, we’ve found the customer discovery cog to be one of the most challenging aspects on the startup assembly line (which is precisely why the First Round team launched Discovery Assist last year). Speaking with potential customers so you can figure out what to build sounds straightforward enough — but there are so many opportunities for this process to go sideways.

For starters, finding the right sort of folks to talk to — and getting them to reply to emails — is no easy feat. Then there’s the challenge of structuring the interview and asking targeted questions that avoid common biases yet still yield actionable takeaways. Next, there’s the hurdle of wading through the piles of feedback. Synthesizing insights and pulling out themes to distill a sharper product strategy is a particularly tough obstacle — especially when conflicting answers point in different directions.

Even with a more refined product idea in sight, there are still plenty of potential snags — from getting feedback on prototypes and troubleshooting why an early version isn’t landing with users, to figuring out the complexities of building for multiple kinds of users.

Of course, while this may be new ground for first-time founders, there are plenty of experts and seasoned startup operators with deep wells of experience in this arena: the UX researchers who specialize in the art of qualitative research. But this function is usually only spun up at a certain scale, and even then, its role can be unclear and undervalued. Certainly in the -1 to 0 phase of company building, founders have to shoulder this crucial work on their own — typically with very little previous experience, no training, and a fuzzy sense of where to start.

We think Jane Davis is the perfect person to turn to here. Currently the Director of UX Research and UX Writing at Zoom, Davis has worked at a seriously impressive stack of startups, and has tons of enterprise experience at different company stages. (She previously led UX Research and Content Design at Zapier, and managed the growth research team at Dropbox.)

In this exclusive interview, Davis walks us through the end-to-end research process in incredible detail, covering everything from clarifying your goals and structuring interviews, to selecting participants and synthesizing insights. She starts by taking us through how she applies her playbook in the early-stage exploration phase. Next, she shares best practices for prototyping and iterating, as well as some of the roadblocks founders face slightly later on, including what to do when people aren’t excited about your product or using it frequently.

Even though Davis gets into some big concepts — like confirmation bias or stated and revealed preferences — she does a spectacular job of diving into the brass tacks of how to approach this work, from the specific go-to questions she always asks in customer conversations, to why you should pay for transcripts of your user interviews.

It’s an excellent crash course for budding founders looking to cultivate a researcher’s mindset as they talk to potential customers, but it’s also a useful guide for product builders and designers looking to sharpen their own skills. Read on for a deep dive into how to translate the information you get from research into better product decisions and stronger startups.

Jane Davis, the Director of UX Research and UX Writing at Zoom.

EARLY EXPLORATION: CONSIDERING STARTUP IDEAS WHILE STAYING FOCUSED ON THE PROBLEM, NOT YOUR SOLUTION

Some founders begin with blue-sky brainstorming and end up starting a company in a space they’re not an expert in. Others come in placing a specific bet based on years of industry expertise and experience. Either way, bias abounds.

“A great example here is thinking about the workflows that surround a single solution. I have a friend who was putting together a FinTech tool,” says Davis. “They were looking at this one specific part of a workflow and they had what they felt was a really strong solution for it. But they were having a lot of trouble getting traction. And the thing that kept coming up was that people weren't looking for just that one part of the workflow to be solved — what they really wanted was a much more end-to-end solution so that they weren't constantly transferring from tool to tool,” she says.

“That’s why early idea exploration never, ever looks like showing a solution to a potential customer. Even if you do want to do solution validation later on, I never do it as part of this particular round of research,” says Davis. Below, she covers other common mistakes and additional lessons founders need to keep in mind:

Mistake #1: Generating short-term research that’s biased toward your own solutions. Instead, dig a well you can return to later.

“The number one thing I try to get early-stage companies to focus on is cultivating a deep understanding of a problem,” says Davis. “Oftentimes you are starting from a solution, thinking you have a great idea for solving a problem. And that can be true. But have you identified a problem in need of a solution? To me, that is the much harder part. It’s about stepping back from discussing concepts and putting solution ideas in front of potential customers in order to more deeply understand the actual problem you're trying to solve and the people for whom you're trying to solve it.”

When you've got what feels like a strong idea in your head, it can be really difficult to step back and say, “Let's move away from what has felt like progress in the form of coming up with this idea and instead spend a lot of our time just deeply understanding the space we're trying to operate in.”

This deeper understanding helps avoid confirmation bias, of course. But there’s a secondary benefit as well. “It actually sets you up for later growth and innovation in the same space. If you're really deeply understanding the problem and you do have a strong solution for it, then you can also understand what the next opportunities will be,” she says.

“So you can say, ‘Okay, we're going to build this one solution first, but we know that there are opportunities in these three other places that are related to the solution we're building. And we have these built-in product expansions or other lines of business that we could pursue if we're successful with this initial idea — or an opportunity to pivot if we need to,’” she says.

This kind of research is what we call digging the well — it's very much about creating a repository of insights that you can draw from for years and years beyond just that initial round of interviews in customer discovery.

Mistake #2: Not scoping your research. Instead, narrow down your audience by looking for commonalities.

With the right frame of mind, the next step is to actually select the people you’ll talk to or study. “One of the things that I see a lot of startups struggle with is the tension between casting a wide net for product/market fit versus selecting your market first. And obviously there are advantages to both,” says Davis.

“I'm a big believer in secondary research, using lit review and market research as much as possible to leverage work that's already been done. If you can find materials that help you size each of those different opportunities super broadly, you can find out things like, ‘Oh, there are a lot of other startups that are already operating in this one market, but this other market is under-penetrated and there's no tool that’s really satisfying this need.’”

Starting from secondary research and seeing if you can narrow down your potential markets saves you a lot of time — because ideally once you've narrowed that down, you want to talk to quite a few people from each of those different potential markets and segments.

In essence, you can’t talk to everyone. “You really have to segment your potential audience. And the way to do that most effectively — rather than using just demographics — is looking at what they have in common, the types of behaviors they engage in that are similar,” Davis says.

To better explain segmenting opportunities, she takes us through the example of a fictitious startup that wants to build software to help user researchers (one that we’ll return to throughout the article). “There are a few initial questions that the company could ask itself: Do we believe there's an opportunity in helping in-house, career researchers do their job? Or freelancers and consultants? Or helping non-researchers conduct research?” says Davis. “All of those are non-demographic segments. You could get a huge spread of people across a wide demographic range in each of those, but they all meet your criteria for a target segment.”

“Then you can say, ‘We think the opportunity is greatest in those three segments. So let’s start by talking to six or seven different folks who are non-researchers that conduct research, and let's understand how they're doing it today. What tools exist? What problems do they have? Is this enough of a problem that we think they might be willing to pay for a tool to do it?’” she says. “That’s what allows you to say, ‘We thought that there was an opportunity in this market, but it turns out they're doing fine with the tools they already have.’ Or, ‘This market we thought was our third-best opportunity is actually where things are turning out to be most interesting.’”

It’s really important to broadly understand the problem space. But the thing that makes that digestible and useful is when you've already identified a group of people with something in common.

Mistake #3: Coming in with too many specifics. Instead, start from zero and then dig in.

“I try to get founders to start from absolute zero. Start from assuming you know nothing about this person’s life or how they’re doing their job, and try to get the most detailed picture you can of how they get things done. That could be understanding all of the workflows that go into someone's job. Or if you're moving into the consumer space, how they spend their time outside of work. You never know what kind of insights you’re going to uncover. Leaving it incredibly broad gives you the opportunity to find out relevant things that you wouldn't have thought to ask,” says Davis.

“In this exploratory phase, I focus on what can feel like these meandering or indirect path interviews, because they give you the chance to get the full picture of the problems someone is trying to solve during the course of their day. And it also lets the participant lead,” she says.

If you start by saying, “Tell me about all of the problems you have related to this topic,” they might not even think of the thing you're trying to solve as a problem — and that's quite telling. Then it's time to examine whether you're even looking at the right space.

Mistake #4: Not having a clear set of learnings you want to walk out with. Instead, keep these questions in your back pocket to uncover opportunities.

But that’s not to say that early customer discovery interviews should be entirely aimless. “I always have a set of things that I want to walk out of the interview knowing, usually three to five bullet points. It could be something as specific as ‘Have you ever paid for custom content on any of the platforms you're on, such as Facebook stickers or Discord Nitro?’ Or it could be as broad as asking ‘Do you ever share files with people outside of your organization?’ and then digging into how they do that,” says Davis.

I always base my user research on this question: What are the decisions we need to make based on what we find out?

“Returning to the example of building a new user research tool, the top things I would want to know are: How are people getting research done today? What parts of that are working well? And what parts are not working well? Again, I really like to make the specific questions either workflow- or project-based because you can start digging in, in a more targeted way, but still keep that storytelling experience,” she says.

Here’s a potential script: “I would start by saying, ‘Alright, walk me through the last research project you did.’ And then they might say, ‘Well, I set it up. I started by pulling an email list of our participants.’ And then I’d ask, ‘What tools did you use for that? Tell me a little bit about how that went. Were there any problems? What do you do next?’” she says. “By the end of that, ideally I should have a full picture of their research project workflow. So from the very beginning — How do you get research projects? Where do the ideas come from? How do you know you need to kick one off? — all the way through, what do you do with your findings? Once you've finished the research project, what happens with them then?”

Here’s another tidbit you may find helpful: “Oftentimes I will even have sketched out that narrative arc as we're talking. And so at the end I'll say, ‘If you had to circle the part in this project that was the most difficult, tell me which part that is and why.’ And then I’ll ask, ‘Which part of this required the least effort? Why was that?’”

If you’re looking for even more specific questions to add to your customer discovery arsenal, Davis has got you covered. Here are the go-to inquiries she finds herself returning to over and over again when interviewing users:

  • Tell me what you did yesterday. “I have people walk me through their day from start to finish — we can go for an hour or longer just digging into that,” says Davis. “They'll be like, ‘I picked up my phone and glanced through my email.’ And then we can start asking follow ups — ‘Did you respond to anything? Is that something you do every single day?’ — and digging into all of those different moments.”
  • Tell me about the last time you thought “There's gotta be a better way to do this.” “I really like that question because it gets people thinking about the last time that technology let them down, or that they encountered something that they thought should be solved. Another variation is ‘When was the last time you thought something should be possible but it wasn't?’”
  • Tell me about the last time you had a really good experience. “The key here is that I don't say ‘with technology’ — I literally just say, ‘Tell me the last time you had a really good experience,’ because it will tell you about the things that they value. And it’s important to know the types of things people value in their experiences and interactions, because that can inform a lot of design decisions and where you wind up investing your resources.”
  • If you could wave a magic wand and change one thing about how you're doing this today, what would it be? “When I'm trying to understand more specific workflows with people, I'm a big fan of the magic wand question. It gets at more targeted opportunities, as well as the things that people care about solving most.”
  • Tell me about your ideal experience. What do you wish you could do right now that you can't? “With these kinds of ‘wish questions,’ as I think of them, I'm not trying to get their solution ideas, but rather what they're trying to accomplish, what they value most, and what the most difficult part is right now. If they offer up a specific solution or feature idea, I always ask, ‘What makes you say you would want a tool that does that? What is it about that idea that would be so helpful to you?’”

I have a general rule that I don't listen to any solution ideas that come out of early UX research interviews, which is probably a controversial thing to admit. Instead, I try to ask why they are coming up with that idea.

FIGURING OUT WHAT TO GO AFTER: SYNTHESIZING YOUR FINDINGS WITHOUT GETTING IN YOUR OWN WAY

After talking to dozens (or hundreds) of potential users, you might think a founder will have a clear sense of their next move. But often zooming out to spot the patterns proves to be the trickiest phase of user research. In addition to collecting too many (often opposing) ideas and figuring out the features that will truly differentiate a product, founders often have to confront both customers’ biases and their own — from hidden preferences, to seeing patterns where they don’t exist. Read on for Davis’ advice on how to sort through the thicket.

Mistake #1: Chasing themes. Instead, go in the opposite direction.

A natural next step is starting the work of translating takeaways into product strategy at a macro level, or new potential features at a more micro level. “There's no one synthesis process that will work for every single person, but the way that I mentor people and the way I always recommend doing it is to first get transcripts,” says Davis.

Whatever else you do to save money in your early-stage startup, please, please, please record and transcribe your interviews with users and potential customers so that you can go back and pull out common threads later. It just makes everything so much easier.

“When I’m personally doing a more rigorous synthesis, I will take those key bullets that I wanted to learn from each of the interviews, and I'll usually set up a document that has those as headings. And then I will pull out all of the information from the transcripts and just copy and paste it over into those sections. Once I've sectioned everything, I'll go through each part and look for common themes,” says Davis.
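
To make that sectioning step concrete, here is a minimal sketch in Python of one way to group transcript excerpts under your learning-goal headings. The goals, participant labels and quotes are all hypothetical placeholders, not Davis' actual materials; in practice you'd paste excerpts in by hand as you read each transcript.

```python
from collections import defaultdict

# Hypothetical learning goals: the 3-5 bullets you wanted out of each interview.
LEARNING_GOALS = [
    "How do people get research done today?",
    "What parts of the workflow are working well?",
    "What parts of the workflow are not working well?",
]

# Excerpts pulled from transcripts, tagged with (participant, goal, quote).
# These are illustrative; you'd collect them while reading each transcript.
excerpts = [
    ("P1", "How do people get research done today?",
     "I start by pulling an email list of our participants."),
    ("P2", "What parts of the workflow are not working well?",
     "Waiting on someone else to pull the list is the worst part."),
]

def build_synthesis_doc(excerpts, goals):
    """Group transcript excerpts under each learning-goal heading."""
    sections = defaultdict(list)
    for participant, goal, quote in excerpts:
        sections[goal].append(f"[{participant}] {quote}")
    lines = []
    for goal in goals:
        lines.append(f"## {goal}")
        lines.extend(sections.get(goal, ["(no excerpts yet)"]))
        lines.append("")  # blank line between sections
    return "\n".join(lines)

print(build_synthesis_doc(excerpts, LEARNING_GOALS))
```

Once everything is sectioned this way, scanning each heading for repeated pain points (or a telling absence of them) is much faster than rereading raw transcripts.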

She returns to our fictional user research startup. “So in recruiting, a common pain point might be waiting on someone else to pull an email list. Or in post-project, it might be figuring out where to put the research findings so that someone else can find them six months later,” she says. “And if I'm not finding any common threads, that's actually really interesting to note as well. It means that either I haven't segmented my market as effectively as I thought, so I'm not finding people with anything in common, or it means that we're not turning up any major opportunities because nobody has the same pain points.”

This is a particularly concerning development because it goes against our natural tendencies, says Davis. “Humans are so good at finding patterns, often to our own detriment. More often, we find patterns where we shouldn't.”

Here’s her more tactical advice for guarding against this temptation:

  • Put potential themes in a parking lot. If you've gone through three interviews and you start thinking, “I'm starting to hear a theme around this topic,” try creating an ongoing synthesis document, instead of waiting until the end. “That’s where I put those might-be-a-theme-but-I-haven't-validated-it-yet kind of things. For example, I've been doing some research on custom content and how people use custom emojis. And I've only done a few interviews, but I've already heard a few things that I think I'm probably going to wind up hearing as ongoing themes,” says Davis. “Writing it down makes me more cognizant of the fact that it is appearing in my head. I actually keep an erasable whiteboard on my desk and I just write them down there so that I can go back and include those in my synthesis process later on.”
  • Don’t chase themes. “As much as possible when I start hearing themes, I don't chase them. I don't change my interview process at all,” says Davis. “I ask the same questions and keep the same structure, even as information is coming in. And this is really the tactic I would use for very early-stage, broad interviews. Obviously there are situations where you would want to use what people are telling you — like when you're doing rapid iterative, evaluative testing and updating prototypes in between sessions.”
  • Push yourself in the opposite direction. “The next thing I ask myself is what would need to be true of the next several interviews for me to feel like this wasn't a theme? And I will oftentimes seek out information that goes against what I've heard to pressure test it,” says Davis.

If I think I'm hearing a theme around recruiting being the most difficult part of a research project, I will deliberately go in the opposite direction and say, “Tell me what the easiest part of recruiting is,” so that I'm giving people the opportunity to give me an answer that I'm not expecting.

Mistake #2: Discounting revealed preferences. Instead, reflect that back in your product.

A classic problem when weighing feedback from potential customers is sorting fact from fiction — whether unintentional or intentional, there’s often a gap between what users tell you and how they actually behave. This is all the more true when you’re collecting feedback on new concepts or business ideas. (There’s a reason why the subtitle of the classic book “The Mom Test,” is “how to talk to customers & learn if your business is a good idea when everyone is lying to you.”)

As a small, somewhat silly example: Say a user tells you they love watching historical documentaries, but when you scroll through their Netflix viewing history, you find shows that are more in the mold of “Too Hot to Handle.”

“I never discount people's stated preferences — even when the data is saying something completely different. I actually really love those moments where stated and revealed preferences diverge because it tells you something incredibly meaningful about who a person is and how they want to see themselves,” says Davis. “In many ways, a lot of the opportunity as we are designing and building products is to help people feel like that best version of themselves. Even if you want to make sure that you still give them the ability to watch ‘Too Hot to Handle,’ presenting them with historical documentaries as part of their front page on Netflix makes good sense because it fits with how they want to project themselves onto the world.”

Understanding how a person wants to view themselves is actually incredibly valuable. It tells you a lot about how you can mirror that feeling back to them with your product while still satisfying their actual preferences.

BUILDING THE EARLY PRODUCT: VALIDATING & PROTOTYPING WHILE KEEPING USERS AT THE CENTER

With her findings finalized, Davis starts by calling out the problems the team could solve at a high level. “Usually I will come back with the big opportunities that exist in this particular set of workflows. And then in my findings, I will start saying, ‘Here are some of the ways that we might go about solving these problems,’ but whenever possible, I like to get the team into the room and say, ‘Alright, we heard that they want us to solve this specific problem. What are all the ways we could solve this problem for them?’”

Ideally everybody is working through the solution space together. There’s room to add nuance from the research findings. There are things that wouldn't be possible from a design perspective, or ideas that product might be really enthusiastic about that we just couldn't do technically.

Davis summarizes her best advice to help teams stay true to the research findings as they design, build and ship:

Mistake #1: Skipping over evaluative work. Instead, define design principles and a set of heuristics to stay on track.

“One of the things that can really shortcut the iteration process and help you avoid doing a new round of research every single time you come up with a new prototype is taking the time at the very outset to create a set of design principles and a set of heuristics for evaluating updates,” says Davis.

“Heuristic evaluation is something that enables you to evaluate your own product rather than having someone else evaluate it. It’s self-discovery around questions like: Is this accessible? Is this clear? Is this concise?” says Davis. “And the heuristic evaluation also contains a definition of what clear means in that context. For example, can this be understood by someone at a third grade reading level? It's important to have very specific definitions for your heuristics so that you are all evaluating with the same standard. Essentially it's like a grading rubric where you can say, ‘Okay, this new feature met these four criteria and it missed these five criteria. And so it gets a four out of nine.’”
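
To illustrate the grading-rubric idea, here's a minimal sketch of how a team might tally a heuristic evaluation. The heuristics and their definitions below are hypothetical examples, not Nielsen's or Covert's published sets; your team would substitute its own rubric and definitions.

```python
# Hypothetical heuristics, each with the team's own concrete definition.
HEURISTICS = {
    "clear": "Can this be understood at a third-grade reading level?",
    "concise": "Does each screen make its core point in one sentence?",
    "accessible": "Does it meet our contrast and keyboard-navigation bar?",
}

def score_feature(name, results):
    """Tally a rubric evaluation: results maps heuristic -> pass/fail."""
    met = [h for h, passed in results.items() if passed]
    missed = [h for h, passed in results.items() if not passed]
    print(f"{name}: {len(met)}/{len(results)} criteria met")
    for h in missed:
        print(f"  MISSED {h}: {HEURISTICS[h]}")
    return len(met), len(missed)

# Example evaluation from a team session on a hypothetical feature.
score_feature("new sharing flow",
              {"clear": True, "concise": True, "accessible": False})
```

The score itself matters less than the conversation: each "missed" line becomes an item for the team to debate, accept as a trade-off, or fix before release.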

If you’re seeking inspiration, Davis is at the ready with recommendations. “Jakob Nielsen has a great set. Abby Covert also has a really strong set of heuristics for evaluating products,” she says. “But when it comes to design principles, teams should create those for themselves because they also reflect the company values, how you want to appear to users, and the promises you want to make to people.”

If you have a set of heuristics and design principles, that means that you have an agreed-upon set of things that you are trying to accomplish. And so rather than usability testing every iteration, with those guardrails in place, the team can do that evaluation process by itself.

“A team can take a new change to the workflow and say, ‘Does this meet our standards for clarity? Does this meet our standards for findability? Does this meet our standards for our design principle of demonstrating kindness?’” says Davis. “I know that it can feel a little strange for a researcher to be recommending this. A lot of teams worry about being able to keep themselves honest when it comes to these sorts of self-evaluations. But I have yet to meet a team that used heuristics and design principles and didn't hold itself to higher standards than users did.”

Here’s how to put it into practice: “Every sprint or every release is a good way to do it. However you've decided to organize your development process, build heuristic evaluation into that process so that you're doing it on a recurring basis,” Davis says. “Anytime you have a significant version change, in the same way that you would do QA and bug testing, ask yourselves, does this part of the workflow meet these standards we've set? And be sure to go through them as a team so that you can discuss it live — because obviously even with guidance and examples of how you might apply it, there'll still be room for discussion.”

Here’s what those debates might entail: “How much of a risk does this present to the user's ability to be successful? What happens if we do move forward? Bluntly, there are times when it's okay to be a little loose about whether you meet your heuristics,” says Davis. “It’s a tool for accomplishing a specific goal, which is enabling a user to be successful with your product, but there are also trade-offs to be made. If you are trying to get a first-mover advantage in the market, then you're going to want to meet most of your heuristics. If you've got a particularly niche audience that’s so excited to have the problem solved, then maybe they’ll be a little forgiving about some of the hiccups in the product itself.”

There are times when it's okay to be a little bit less stringent with heuristics and design principles. They give the team a baseline to operate from and a way to get together in a room on a regular cadence and say, “Are we meeting the standards that we've set for ourselves that we know users need us to meet to be successful?”

Mistake #2: Getting stuck on figuring out if a new feature is differentiated. Instead, identify all the barriers to switching products and study up on the competition.

Take another (highly specific) product hypothetical: Rewind the clock several years. You’re looking to create a new messaging app. You have an idea to introduce the brand-new capability of sending messages that disappear. How do you figure out if that new feature is enough to get a customer to try a new product?

“Humans are such poor predictors of our future behavior. If you ask me if I'm going to go to the gym tomorrow, I'm going to say yes,” laughs Davis. “So rather than thinking about what would get someone to switch, I try to break it down into all of the things that would prevent someone from switching. What are all the things that need to go right, for someone to make this decision? And so at that point, it becomes a set of smaller and more manageable questions, right?”

For example, the price would need to be accessible, the onboarding experience would need to be easy, the importing and installation experience would need to be low friction. “It becomes a research project — not about the new app that you're asking someone to use, but about how they're already doing a thing,” she says. That’s where competitor research comes in.

“What's your competitor's job to be done? I would just sit down with people and say, ‘Okay, walk me through your current messaging app. Have you switched messaging apps before? What made you do it?’ It’s about finding that analogous behavior. If you are struggling to find people who have ever switched messaging apps before, that tells you about the inertia there,” says Davis.

“Then you can start digging into what's missing, and understand whether the new feature you've got in mind is something that people are coming up with wanting on their own. So they might not think up disappearing messages, but you could ask questions like, ‘Have you ever regretted a message you sent? What did you do?’”

This is where concept testing can be more appropriate, says Davis. “If I've gone through a dozen concept tests and not one person has said unprompted, ‘Oh wow, I'd install that tomorrow,’ then I would start questioning my hypothesis. People tend to be really effusive when you show them new concepts, so that lack of excitement about a new feature is actually a pretty strong signal.”

GETTING FEEDBACK: HOW TO LEARN MORE ABOUT WHAT IS AND ISN’T WORKING FROM YOUR EXISTING USERS

Of course, the research process doesn’t end once a new product line or fresh feature is out the door. “There are obviously a lot of reasons to talk to users of your product. Companies should be doing that on a fairly regular cadence, but it really depends on the specific purpose,” says Davis. Here, she digs into common challenges that research can help solve when you’re in diagnostic mode:

Mistake #1: Partially solving a problem. Instead, get to the root cause to figure out why your users aren’t evangelists.

A common scenario: Companies build a product that people like, but don’t love.

“The first thing I do in that situation is try to understand the root cause. If this is solving a problem reasonably well for people, why aren't people that excited about it?” says Davis. “There are some products that just aren't that exciting. Users want to set it up and forget about it — they’ll never rave about it, but they’re still deeply invested. Network attached storage is probably a good example — I need it, but I'm never going to be wild about this backup service. And so it’s about understanding if it’s just the market that you’re playing in or the actual type of solution you’ve built,” she says.

But sometimes it’s about the way you’ve solved the problem. “I think a lot of companies wind up in this space because they have partially solved a problem. That to me is the most dangerous place to be in.”

If I’m talking to a company that’s achieved some success, but doesn't have that super solid following, I’ll immediately try to understand if it's because they are solving only part of a problem — because that's when people will leave you.

To figure out where your product falls, ask these questions in customer interviews:

  • Tell me what this product is doing for you. How do you use it?
  • What other tools do you use in conjunction with it?
  • What has this product replaced for you?
  • What made you start using this product in the first place? “What is it they actually like about it? We don't want to inadvertently move away from that.”
  • If you could wave a magic wand, what's one other thing it would do for you?
  • What's one thing that you wish you knew about this product? “A thing that I used to hear all the time when I was doing research at Dropbox was ‘I know it can do so much more, but I don't know how to find out what,’ and that's actually this incredibly common thing. A lot of the lack of enthusiasm turns out to be a lack of education and a lack of awareness.”

If a product isn't solving an entire problem for someone, chances are it's not because you didn't build something, it's because of onboarding. You haven't delivered all of the value that they're trying to get from it.

Mistake #2: Conflating value with frequency. Instead, focus on helping users achieve their goals.

“Having worked in growth for a long time, one of the key metrics that product teams often set is how frequently someone is engaging with the product. But depending on the problem you're trying to solve for, frequent engagement could actually mean that they're not getting the value they want,” says Davis.

Take Zapier. “It’s an automation product. So arguably if someone is engaging with Zapier on a daily or hourly basis, that might mean that they're building more zaps — but it also might mean they're having trouble getting it set up in a way that is delivering the value they're looking for. So I usually start from how do we actually measure and understand the value delivered to users?”

Sidebar: As Davis has shared, that requires starting the product development process with one query: When you’re adding new features, the question shouldn’t be, “Is this potentially useful to someone?” Instead ask, “How does this contribute to our users achieving their goals?” It’s very easy to “value add” your way straight out of product/market fit.

Back to the specific frequency issue: “Having robust analytics can be an incredibly useful jumping-off point here, because it depends on how much longitudinal data you're able to get. If your product launched two months ago, you're probably not going to be able to do really useful time-based modeling, but you can at least say, what are the things people are doing over and over again in our product? And then you can start to understand why they're doing it,” she says. “If someone is logging into your product three times a day, are they doing that because they're actually getting value from the product? Or do they just accidentally keep closing their browser window?”
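
As a rough illustration of that first analytics pass, here's a minimal sketch that surfaces the actions each user performs over and over again. The event names and the `repeated_actions` helper are hypothetical, not from any particular analytics tool; the point is that counts only tell you what repeats, and it takes interviews to learn why.

```python
from collections import Counter

# Hypothetical event log: (user_id, event_name) pairs exported
# from whatever analytics tool you use.
events = [
    ("u1", "login"), ("u1", "login"), ("u1", "login"),
    ("u1", "create_zap"),
    ("u2", "login"), ("u2", "export_report"), ("u2", "export_report"),
]

def repeated_actions(events, min_count=2):
    """Return the (user, action) pairs that occur at least min_count times."""
    counts = Counter(events)
    return {
        (user, action): n
        for (user, action), n in counts.items()
        if n >= min_count
    }

# Counts tell you *what* repeats; only interviews tell you *why*:
# habitual value, or a broken experience (like re-logging in three times a day).
for (user, action), n in sorted(repeated_actions(events).items()):
    print(f"{user} did '{action}' {n} times")
```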

Understand the why and the how behind user behaviors, identifying which ones actually demonstrate value and which ones are potential leading indicators of a broken experience.

Of course, Davis has a trusty user research question to ask in interviews:

  • How would you do this if you didn't have this product? Walk me through what you would do if this product never existed. “That's going to tell you a lot about what they would actually miss about you and the areas where you are delivering value.”

Mistake #3: Not making the hard decisions about which users are more important. Instead, make a list and acknowledge the trade-off.

Another classic dilemma is building for multiple users, particularly in the enterprise world of buyers, admins and end-users. “The first step is actually understanding all of the different people who are using your product. By understanding, I mean literally just making a list so that anytime someone says, ‘Well, our users want…’, you can say ‘Which users?’” says Davis.

This next part is going to sound rather mercenary, she warns. “Understand which of those users are most important for your purposes. As a business, there will be trade-offs, and the number one place trade-offs come up is between people administering the product and people trying to use the product. Start by understanding the different incentives, and have a clear way to make decisions when they are in conflict by saying, ‘Our number one concern is this; our secondary concern is this.’ In the long run, that will help you avoid a lot of situations where people say ‘users’ but mean totally different things.”

Many companies don't differentiate and then they don't make the hard decisions. It sounds harsh, but you have to be willing to say, “We care more about one set of users than another.”

Editor’s note: If you’re looking to dig even deeper into user research practices, Davis has another recommendation: “The number one book I recommend to anyone who is even remotely research curious is ‘Just Enough Research’ by Erika Hall. It is the gold standard for conducting your own rigorous, but not overly academic research, and it covers the basics of methodologies.”

This article is a lightly-edited summary of Jane Davis' appearance on our new podcast, "In Depth." If you haven't listened to our show yet, be sure to check it out here.

Cover image by Getty / PM Images.