
User Research Methods

User research helps product teams understand user needs, behaviors, and goals. This guide covers key methods like interviews, surveys, usability testing, and mixed approaches to support better decisions throughout the product lifecycle.


Types of User Research

User research helps product teams understand their users’ goals, needs, and behaviors. It guides better decisions throughout the product lifecycle, from early discovery to post-launch evaluation.

Different types of research answer different kinds of questions. Some methods explore what people want or feel; others observe how they behave. By selecting the right approach, teams can reduce uncertainty, improve usability, and design products that solve real problems.

Research methods fall into four main categories based on the type of data, the nature of insights, the research setting, and the intended purpose.

Qualitative vs. Quantitative

This distinction relates to the type of data collected and how it’s interpreted.

  • Qualitative research focuses on meaning and understanding. It reveals why users behave in certain ways, what motivates them, and how they experience a product. Interviews, diary studies, and field observations are common methods. These help teams uncover patterns and pain points that aren’t always visible in metrics.

  • Quantitative research focuses on measurement and scale. It answers questions like how many, how often, and what percentage. Surveys, usage analytics, A/B testing, and structured usability studies fall into this category. These methods are useful for tracking patterns, validating assumptions, and comparing results over time.

While qualitative research is best for exploring ideas and finding root causes, quantitative research helps teams prioritize and scale with confidence. Together, they provide a fuller picture.

Further reading: “Quantitative vs. Qualitative Usability Testing” (nngroup.com). Both these complementary types of user research play important roles in an iterative design cycle: qualitative research informs the design process, while quantitative research provides a basis for benchmarking programs and ROI calculations.

Attitudinal vs. Behavioral

This dimension distinguishes between what users say and what they actually do.

  • Attitudinal research captures thoughts, beliefs, and preferences. Surveys, interviews, and focus groups are typical examples. This approach helps teams understand expectations, satisfaction, and mental models.

  • Behavioral research looks at actions and decisions. It includes usability tests, heatmaps, session recordings, and analytics tools. These methods show how people navigate a product, where they struggle, and what they ignore.

Product teams often combine both to compare expectations with reality. This helps close gaps between user intent and experience.

Context of Use

User research can take place in different settings depending on what you need to learn. The environment affects how realistic the results are and how much control you have over the study.

  • Natural use: Research happens in the user’s real environment with minimal interference. Field studies, diary research, and product analytics help uncover everyday behavior and habits. These methods offer realism but less control.

  • Scripted use: Participants follow predefined tasks in a controlled setting. Usability testing and A/B studies fall into this category. These methods are useful for spotting usability issues and comparing designs.

  • Limited or conceptual use: Some research uses simplified versions of the product. Examples include card sorting, tree testing, and concept testing. These are helpful when testing early ideas or content structure.

  • No product use: Certain methods don’t involve using the product at all. Brand research, visual style testing, and competitor analysis help teams understand broader perceptions and user expectations.

Each context gives different value. Natural studies show real-world behavior. Scripted sessions highlight problems in task flows. Conceptual tools validate ideas before building.

Choosing the right context helps teams match research to goals, whether it’s testing designs, exploring new ideas, or studying real-world behavior.

Phases of Product Development

User research also varies by the question it aims to answer in the product development cycle.

  • Generative research explores user needs and uncovers new opportunities. It supports early-stage discovery and informs product direction.

  • Formative research tests and refines design choices. It helps improve usability and fit as the product evolves.

  • Evaluative research provides feedback on the effectiveness of a product or design, helping teams refine and improve what they’ve built.

  • Summative research measures the final result. It tracks success against key goals or compares performance across versions.

Each purpose serves a different phase, from idea generation to post-launch analysis. Clear goals help teams choose the right method at the right time.

Research Methods

User research methods are techniques used to learn about user needs, behavior, and preferences. Each method answers different questions and supports decision-making at various stages of product development.

Methods are typically grouped by the type of data they provide (qualitative or quantitative) and by whether they focus on attitudes or behaviors.

Further reading: “When to Use Which User-Experience Research Methods” (nngroup.com). Modern UX research methods answer a wide range of questions. To help you know when to use which method, each of 20 methods is mapped across three dimensions and over time within a typical product-development process.

Each method contributes a different piece to the puzzle. The best research plans combine methods to balance depth, scale, and accuracy, turning user behavior into product clarity.

Qualitative Methods

Qualitative research explores how and why users behave in certain ways. It focuses on depth and context, helping teams uncover insights that numbers alone cannot explain.

Attitudinal Qualitative Methods

These methods focus on what users say, capturing beliefs, preferences, and emotions.

Interviews

One-on-one conversations that uncover user goals, pain points, and mental models.

  • Expert/Stakeholder Interview: Gather perspectives from internal experts or domain specialists.

  • Ethnographic interview: Explore daily routines and cultural context.

  • Generative interview: Explore new ideas or unmet needs.

  • Problem interview: Identify challenges users face today.

  • Decision interview: Understand how users make choices.

  • Product/Market Fit interview: Assess whether the product solves a real problem for a specific group of users and how essential it is to them.

  • Onboarding interview: Explore first impressions, expectations, and early friction during a user’s initial experience with the product.

Other Attitudinal Methods

  • Diary or camera studies: Participants document their experiences over time through notes, photos, or videos, providing insight into behavior across contexts.

  • Concept testing: Users react to early product or feature ideas to assess desirability, clarity, and potential value.

  • Contextual inquiry: A semi-structured interview conducted while observing the user in their environment, blending attitudinal and behavioral input.

  • Focus groups: Small group discussions to explore shared opinions and reactions.

  • Think-aloud sessions: Users verbalize their thoughts while using the product, revealing confusion or hesitation.

  • Participatory design: Users co-create ideas or solutions by working directly with the design team.

  • Desirability studies: Evaluate emotional reactions to product concepts or design directions.

  • Card sorting: Users organize content into categories to reveal how they expect information to be grouped.

  • Tree testing: Evaluates how easily users can navigate a proposed information structure.

  • First-click tests: Measure where users click first when trying to complete a task, revealing expectations and interface clarity.

  • Five-second tests: Show a screen or concept for five seconds, then ask users what they recall. Useful for testing clarity and messaging.

  • User shadowing: Observe users as they go about their tasks while asking clarifying questions. This helps uncover pain points and unmet needs.

  • Customer feedback: Analyze suggestions, reviews, or support tickets to capture user sentiment.
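To make one of these methods concrete: card-sort results are often analyzed by counting how frequently participants place each pair of cards in the same group. A minimal sketch, using invented participant data (the card and category names are hypothetical):

```python
from itertools import combinations

# Hypothetical open card-sort results: each participant groups cards
# into categories of their own choosing.
sorts = [
    {"Billing": ["Invoices", "Payments"], "Account": ["Profile", "Password"]},
    {"Money": ["Invoices", "Payments", "Profile"], "Security": ["Password"]},
    {"Billing": ["Invoices", "Payments"], "Settings": ["Profile", "Password"]},
]

def similarity_matrix(sorts):
    """Fraction of participants who grouped each pair of cards together."""
    pair_counts = {}
    for sort in sorts:
        for group in sort.values():
            # Sort card names so each pair has one canonical key.
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    return {pair: count / len(sorts) for pair, count in pair_counts.items()}

sim = similarity_matrix(sorts)
print(sim[("Invoices", "Payments")])  # 1.0: every participant grouped these together
```

Pairs with high similarity scores suggest content users expect to find together, which feeds directly into navigation and information-architecture decisions.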

Behavioral Qualitative Methods

These methods focus on what users do, observing real actions rather than reported preferences.

  • Field studies: Observe users in their natural environment to understand real-world context and behavior.

  • Usability testing: Watch users complete tasks to identify friction, confusion, or failure points.

  • Remote moderated testing: Conduct usability sessions via video to capture insights without location barriers.

  • Eyetracking: Monitor where users look to evaluate focus, attention, and scanning patterns.

  • Playtesting: Used primarily for games, this method examines engagement, comprehension, and mechanics.

  • RITE method: Rapid Iterative Testing and Evaluation, a lightweight usability approach that tests and adjusts the design continuously over short cycles.

Quantitative Methods

Quantitative research collects measurable data and allows teams to track patterns, compare versions, and validate decisions across larger groups.

Attitudinal Quantitative Methods

These methods gather structured feedback at scale.

  • Surveys: Collect opinions, satisfaction scores, or expectations using predefined questions.

  • Unmoderated panel studies: Involve large groups of users responding to tasks or questions without a facilitator.

  • True intent studies: Ask users about their goals when visiting a website or using a feature.

  • Monadic testing: Present one concept at a time to avoid comparison bias and measure independent reactions.

Behavioral Quantitative Methods

These methods capture real interactions and performance metrics.

  • A/B testing: Compare two or more versions of a design to see which performs better on key outcomes.

  • Usability benchmarking: Track task completion, error rates, and time on task across different product versions or over time.

  • Click and scroll tracking: Record how users navigate pages and where they interact, helping identify drop-off points or ignored content.

  • Emotional response metrics: Use physiological data or self-reporting to measure user reactions to design elements or flows.
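To illustrate the A/B testing item above, a common way to judge whether two variants really differ is a two-proportion z-test on conversion counts. A minimal sketch with invented numbers (not a full experimentation pipeline):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 5,000 users per variant.
z = two_proportion_z(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(round(z, 2))  # about 2.14; |z| > 1.96 suggests significance at the 5% level
```

In practice teams also fix the sample size in advance and avoid peeking at results mid-test, since repeated checks inflate the false-positive rate.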

Mixed Methods

Mixed methods research combines qualitative and quantitative approaches to give a fuller picture of user behavior. It pairs insights about “why” users act a certain way with data on “how often” it happens. There are two common ways to apply mixed methods:

  • Sequential: One method informs the next. You might start with interviews to explore user needs, then follow with a survey to quantify how common those needs are.

  • Concurrent: Both types of data are collected at the same time to compare or validate findings.

For example, teams may run usability tests alongside analytics to identify issues and see their impact at scale.

Atomic Research

Atomic Research is a method for managing user research as a growing system of small, linked insights. Instead of storing research in static documents, it breaks knowledge down into reusable parts that can be connected, tested, and updated over time.

Each insight is built from four components:

  • Experiments – What was done

  • Facts – What was observed

  • Insights – What it suggests

  • Recommendations – What action to take

This structure creates transparency, allows insights to evolve as new data emerges, and encourages evidence-based thinking. A single fact can support multiple insights, and several insights can feed into one recommendation.
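The four-part structure can be sketched as a small data model. This is an illustrative sketch only (the class and field names are invented; tools like Glean.ly use their own schemas):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    description: str        # what was done

@dataclass
class Fact:
    observation: str        # what was observed
    source: Experiment

@dataclass
class Insight:
    interpretation: str     # what it suggests
    evidence: list          # list of Fact; one fact can back many insights

@dataclass
class Recommendation:
    action: str             # what action to take
    based_on: list          # list of Insight feeding this recommendation

exp = Experiment("5 usability sessions on the checkout flow")
fact = Fact("4 of 5 users missed the discount field", exp)
insight_a = Insight("The discount field lacks visibility", [fact])
insight_b = Insight("Users scan the right column last", [fact])  # same fact reused
rec = Recommendation("Move the discount field above the total", [insight_a, insight_b])
print(len(rec.based_on))  # 2
```

Because facts link back to the experiments that produced them, any recommendation can be traced to its underlying evidence, which is what lets insights be revisited as new data arrives.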

Atomic Research shifts research from isolated reports to a connected, living system. It helps teams discover patterns, avoid repeated work, and make better product decisions with confidence. Tools like Glean.ly are designed to support this workflow at scale.

UX Research in the Product Life Cycle

User research supports better decisions throughout the entire product journey. Instead of treating research as a one-time activity, teams can embed the right questions and methods into each stage of development. This structure follows a continuous loop of exploration, validation, delivery, and feedback.

Strategize Phase

At the earliest stage, research helps teams understand user goals, behaviors, and unmet needs. This is about discovering opportunities, not validating solutions.

Generative & Exploratory Research

This type of research is used to build a deep understanding of users and their context. It helps surface unmet needs, behaviors, and motivations that shape product direction. Insights gathered here form the foundation for framing the problem space and identifying opportunity areas.

Key questions:

  • Who are the users?

  • What problems do they face in their daily workflows?

  • What goals or outcomes are they trying to achieve?

Methods: Field studies, diary studies, ethnographic interviews, contextual inquiry.

Outcomes: Personas, journey maps, problem statements, opportunity areas.

Product Discovery

This phase focuses on validating user problems and identifying which solutions might be most valuable. Research informs the product vision and helps prioritize what to build next.

Key questions:

  • Which features are most valuable to users?

  • How are users solving this problem today?

  • What are the gaps in competitor offerings?

Methods: Problem interviews, concept testing, card sorting, competitive analysis, surveys.

Outcomes: Validated problems, feature prioritization, early solution directions.

Product Validation

At this stage, the goal is to test whether early concepts and prototypes are usable, valuable, and aligned with user needs before committing to full development. Research helps reduce risk by identifying what works and what needs improvement.

Key questions:

  • Do users understand the product concept?

  • Can they complete key tasks without help?

  • What elements need to be changed or clarified?

Methods: Prototype testing, first-click tests, card sorting, tree testing.

Outcomes: Refined backlog and prioritization, validated flows, early usability insights.

Design Phase

The design phase focuses on shaping and refining the user experience. Research during this stage supports rapid iteration, identifies usability issues, and ensures the final design is clear, functional, and ready for launch.

Design Discovery & Iteration

During design and development, research supports continuous testing and improvement. Fast, focused feedback helps the team identify usability issues early and fine-tune the experience before launch.

Key questions:

  • Where do users struggle in the current design?

  • What patterns are causing confusion?

  • How can we simplify the experience?

Methods: Usability testing, think-aloud sessions, RITE method, remote moderated sessions.

Outcomes: Design improvements, smoother task flows, validated interaction patterns.

Design Validation

Just before launch, research ensures that the final product is usable, accessible, and ready to meet user expectations. Testing at this stage helps teams confirm that quality standards are met and identify any final adjustments.

Key questions:

  • Is the product easy to use without guidance?

  • Are accessibility and performance goals being met?

  • What issues might impact launch success?

Methods: Usability benchmarking, accessibility testing, performance testing, visual QA.

Outcomes: Launch readiness, final design adjustments, baseline performance metrics.

Launch & After Care

After release, research focuses on real-world usage, satisfaction, and areas for improvement. It also feeds back into discovery for future iterations. Track performance, validate assumptions, and identify what to improve next.

Key questions:

  • Is the product meeting user expectations?

  • What are users struggling with now?

  • What should we focus on in the next cycle?

Methods: Analytics review, satisfaction surveys, feedback forms, post-launch usability testing.

Outcomes: Performance data, retention insights, roadmap guidance.

Research Team & Operations

Strong research practice goes beyond knowing methods. It relies on clear processes, consistent workflows, and structured approaches to collecting, analyzing, and applying insights. High-performing research teams build systems that deliver lasting value and help teams make better decisions with confidence.

Core Research Activities

Research teams support a continuous cycle of work that turns questions into actionable insights. This includes:

  • Conducting research: Planning studies, recruiting participants, and using the right methods to answer specific questions.

  • Generating and testing hypotheses: Turning assumptions into testable ideas and validating them through structured studies.

  • Results processing: Analyzing data, synthesizing findings, and drawing out key insights.

  • Driving product improvements: Working with design and development teams to translate findings into product changes.

  • Reporting findings: Communicating results clearly through summaries, presentations, or dashboards.

  • Supporting product decisions: Providing evidence that informs feature prioritization and roadmap planning.

Voice of Customer Analysis

Research teams help capture and represent the user’s perspective across the organization. This work includes:

  • Customer feedback: Reviewing input from surveys, support tickets, and direct conversations to find recurring themes.

  • Brand perception: Understanding how users experience the brand and where expectations fall short.

  • Customer reviews: Monitoring reviews and public sentiment to identify strengths and areas for improvement.

  • Competitor positioning & reactions: Analyzing how users compare your product to alternatives and what drives their choices.

Research Operations

Research Ops ensures that studies are well run, insights are accessible, and teams can work at scale. Key activities include:

  • Initiating research studies: Setting clear processes for selecting, planning, and launching research projects.

  • Maintaining a research database: Keeping centralized repositories of past research, insights, and participant data.

  • Knowledge management: Making findings easy to search, understand, and use across teams.

  • Participant management: Building and maintaining panels of users for regular testing and feedback.

Key Research Artifacts

Research teams create strategic artifacts that help organizations understand the full user experience and align teams around meaningful improvements.

Customer Journey Map (CJM)

A visual overview of the end-to-end customer experience. It highlights key touchpoints, user emotions, pain points, and opportunities across different stages of interaction.

Employee Journey Map (EJM)

A map of the internal employee experience involved in delivering customer value. It helps uncover inefficiencies, gaps, and opportunities to improve internal workflows that impact users.

Service Blueprint

A detailed diagram that connects customer-facing interactions with behind-the-scenes processes. It provides a full view of how services are delivered and where changes can improve consistency and quality.

These artifacts serve as shared reference points. They help teams stay focused on user needs, align around priorities, and design experiences that work across functions.

AI Tools for UX Research

AI-powered tools are reshaping how UX research is planned, executed, and analyzed. While they don’t replace real user insight, they help teams scale research efforts, move faster, and focus more time on interpretation rather than manual tasks.

AI-moderated interviews: Platforms like Whyser conduct AI-led interviews with real users. These tools manage the conversation flow, transcribe responses, and summarize key findings. This speeds up early validation and makes qualitative research more scalable.

Synthetic user research: Tools such as Synthetic Users simulate user behavior using AI-generated personas. They can provide directional feedback at scale when timelines are tight or access to real users is limited. Simulated responses offer early signals but should be balanced with real user data.

Survey and feedback analysis: Tools like Sprig use AI to analyze open-ended survey responses. They can detect sentiment and emotion, extract topics, and group keywords automatically, making it easier to surface themes and patterns across large datasets.
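The theme-extraction step these tools automate can be approximated very simply. A minimal sketch with invented responses, counting recurring keywords as a crude proxy for themes (this is a generic illustration, not Sprig's actual approach, which uses AI-based sentiment and topic models):

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses.
responses = [
    "The checkout is confusing and slow",
    "Love the design, but checkout keeps failing",
    "Search works great, design is clean",
    "Checkout failed twice, very slow",
]

STOPWORDS = {"the", "is", "and", "but", "very", "a", "keeps", "works"}

def top_themes(texts, n=3):
    """Count recurring keywords across responses as a rough theme signal."""
    words = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                words[token] += 1
    return words.most_common(n)

print(top_themes(responses))  # "checkout" surfaces as the dominant theme
```

Even this naive count surfaces the dominant complaint; the AI tools add value by handling synonyms, sentiment, and much larger volumes.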

Unmoderated testing and reports: Platforms like Maze apply AI to analyze task performance in unmoderated usability tests. The system generates summaries and suggested insights, helping teams move quickly from data collection to action.


Common use cases:

  • Drafting interview guides

  • Generating research summaries

  • Clustering qualitative feedback

  • Identifying themes and emotional tone

  • Testing ideas at scale with synthetic or remote input

AI tools are most effective when used to support, not replace, core research activities. When paired with strong research operations and human oversight, they help teams improve consistency, increase speed, and make better use of insights.