
Product Analytics

Product analytics helps teams understand user behavior, identify friction, and measure what drives retention and growth. This guide covers Lean Analytics, funnels, cohorts, KPI trees and frameworks to support better product decisions.

Ben Yoskovitz × Product Map

Product analytics measures how users move through a product, where they find value, where they get stuck, and which behaviors drive retention, revenue, and expansion. For product managers, it is a decision system that helps teams spot friction, validate assumptions, prioritize improvements, and connect product changes to business results.

Lean Analytics

The most useful way to approach product analytics is through the lens of Lean Analytics by Alistair Croll and Benjamin Yoskovitz.

Their framing is practical because it starts with the business stage and risk, then narrows the team onto the one question that matters most right now. That is a better operating model than treating funnels, cohorts, KPI trees, and web analytics as unrelated techniques.

The Lean Analytics Cycle was inspired by Lean Startup and expands on the Build → Measure → Learn continuous feedback loop.

  1. Start with the stage your product or company is in.

  2. Identify the biggest risk at that stage.

  3. Choose the One Metric That Matters for that risk.

  4. Then use the analytical tools in this guide to diagnose what is happening and decide what to change.
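The four steps above can be sketched as a simple lookup from stage to risk to metric. The stage names come from Lean Analytics; the example risks and metrics are illustrative assumptions, not canonical definitions.

```python
# Hypothetical stage -> risk -> OMTM narrowing, following the four steps above.
# The risks and metrics are illustrative examples, not canonical definitions.
STAGE_FOCUS = {
    "empathy":    {"risk": "solving a problem nobody has",  "omtm": "repeated evidence of the problem"},
    "stickiness": {"risk": "users do not come back",        "omtm": "weekly retained users"},
    "virality":   {"risk": "growth channels do not scale",  "omtm": "activation rate of referred users"},
    "revenue":    {"risk": "value is not captured",         "omtm": "trial-to-paid conversion"},
    "scale":      {"risk": "growth breaks the operation",   "omtm": "acquisition payback period"},
}

def one_metric_that_matters(stage: str) -> str:
    """Return the single metric to focus on for the given stage."""
    return STAGE_FOCUS[stage]["omtm"]
```

The point of the lookup shape is that the team holds exactly one focus metric at a time, and it changes only when the stage or the dominant risk changes.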

Article: Build, Measure, Learn: The Expanded Edition. Introducing the Lean Analytics Cycle, a way to experiment successfully (focusedchaos.co).

Stages and Gates

Lean Analytics describes a useful progression for startups and product bets: Empathy, Stickiness, Virality, Revenue, and Scale. Each stage has a different dominant question.

1. Empathy

At the empathy stage, the job is to understand whether the team is solving a meaningful problem for a specific audience. Analytics is usually lighter here. Qualitative signals, repeated pain points, and early behavioural patterns matter more than volume.

2. Stickiness

Stickiness is where product analytics becomes central. The question is whether users come back because the product solves their problem repeatedly. This is where retention, repeated engagement, activation quality, and time-to-value become more important than signups alone.

3. Virality

At this stage, the product has proven value for a group of early adopters through Stickiness. The next question is whether a broader set of users or customers can get that same value. This is a growth stage. The job is to identify the right channels to acquire quality users. Some of that comes from virality, where users share with other users.

But other levers matter too: content, paid acquisition, community, and partnerships. Track not just where new users come from, but whether they activate and retain at rates comparable to early adopters.

4. Revenue

Revenue asks whether the business can capture enough value to become sustainable. That includes conversion to paid behaviour, pricing response, expansion, and payback logic. Product analytics here needs to connect product usage with commercial outcomes.

5. Scale

Scale is where the team proves that growth can continue without breaking operationally or economically. The analytics challenge becomes broader: segment quality, operational constraints, acquisition efficiency, system performance, and organisational visibility.

What Makes a Good Metric

A good metric should be understandable, comparative, and behaviour-changing. When possible, it should be expressed as a ratio or rate rather than a raw total.

This matters because raw counts often hide reality. More signups can mask worse activation quality. More page views can hide lower task success. More active users can hide weaker retention. Product analytics gets sharper when the team chooses metrics that explain behaviour, not just scale.
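As a minimal sketch of why rates beat raw counts, assuming a hypothetical activation milestone and made-up monthly figures:

```python
def activation_rate(activated_users: int, signups: int) -> float:
    """Express activation as a rate rather than a raw count."""
    if signups == 0:
        return 0.0
    return activated_users / signups

# Raw totals hide direction: month 2 has more activated users in
# absolute terms, but activation quality actually fell.
month_1 = activation_rate(300, 1_000)   # 0.30
month_2 = activation_rate(350, 1_400)   # 0.25
```

The raw count rose from 300 to 350 while the rate fell from 30% to 25%, which is exactly the kind of reality a total-only dashboard hides.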

One Metric That Matters (OMTM)

This is the single metric a team focuses on at a given moment because it best reflects the biggest risk in the business.

OMTM is not the same as a North Star Metric. A North Star usually remains more stable and represents long-term value creation. OMTM is more tactical. It changes as the product, company stage, and strategic constraint change.

For example:

  • In empathy, the OMTM might reflect repeated evidence of the problem or strong activation from a narrow audience.

  • In stickiness, it might be weekly retained users or the share of users reaching a repeat-value action.

  • In revenue, it might be trial-to-paid conversion or payback period.

The point is focus. A PM should not be trying to move ten strategic metrics at once.

Benchmarking and Lines in the Sand

Choosing a metric is only half the work. The harder question is what counts as good or bad performance on that metric.

There is no universal database of benchmarks, and averages can mislead if the context does not match. A retention rate that would concern a consumer social app might be reasonable for an enterprise tool. Benchmarks require interpretation. Specific reference points help.

Industry Benchmarks

  • For user retention, good six-month retention for consumer social products sits around 25%, while great is around 45%. For SMB/mid-market SaaS, good is around 60% and great is around 80%. Enterprise SaaS sits higher: good is around 75%, great is around 90%.

  • For activation, the median rate across SaaS products is 25% and the average is 34%, with B2B Enterprise products averaging around 45% and marketplaces sitting as low as 15–20% because the activation milestone is a completed transaction.

  • For net revenue retention, bottom-up SaaS products that use a land-and-expand model typically target 100% or above, with great sitting at 120% or higher.
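Net revenue retention follows directly from its components. A minimal sketch, using the standard definition and made-up figures:

```python
def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churned: float) -> float:
    """NRR over a period: revenue retained from the existing customer
    base, including expansion, net of contraction and churn."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

# A land-and-expand product where expansion outweighs losses: 120% NRR.
nrr = net_revenue_retention(start_mrr=100_000, expansion=30_000,
                            contraction=4_000, churned=6_000)   # 1.20
```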

Setting a Threshold

Lean Analytics introduces the idea of a line in the sand: a threshold that defines what success looks like for the current OMTM. Without it, teams celebrate activity without knowing whether it is enough. A line in the sand forces the team to commit to a number before the data comes in, not after.

Setting a useful line in the sand involves three steps:

  1. Find a comparable benchmark from a credible source.

  2. Adjust it for your stage, segment, and business model.

  3. Commit to it publicly before the experiment or sprint begins.

The goal is not precision. It is accountability. A rough threshold the team commits to is more useful than a perfect number no one is held to.
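The commitment step can be made concrete with a small check. A hypothetical sketch, where the threshold is committed before the data arrives:

```python
def line_in_the_sand(committed_threshold: float, observed: float) -> str:
    """Compare an observed OMTM value against the pre-committed
    threshold. The threshold must be set before the data comes in."""
    if observed >= committed_threshold:
        return "met: keep going or raise the bar"
    return "missed: diagnose before investing further"

# e.g. the team committed to 40% activation before the sprint
verdict = line_in_the_sand(committed_threshold=0.40, observed=0.46)
```

The function is trivial by design: the value is in the discipline of writing the number down first, not in the arithmetic.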

Startup Growth Pyramid

The Startup Growth Pyramid is useful as a reminder that metrics sit on top of foundations. Teams cannot reason well about growth if they have not established customer understanding, product value, and a working growth mechanism. PMs should use it as a sequencing framework, not as a reporting taxonomy.

Engines of Growth

Engines of Growth help PMs understand the main mechanism behind expansion.

  • Sticky growth depends on users staying. The core questions are about churn, retention, repeat usage, and depth of value. If users do not come back, acquisition improvements will not fix the business.

  • Viral growth depends on existing usage creating new usage. The key analytics questions are about invitation quality, sharing loops, collaboration mechanics, and downstream activation of referred users.

  • Paid growth depends on acquiring users profitably. The key metrics are not only CAC or click-through rate, but whether the acquired users activate, retain, and generate enough value to justify spend.

Most products have one dominant engine at a time. Product analytics should reflect that reality instead of mixing every growth model into one dashboard.

Funnel Analysis

Funnels are useful when the product manager needs to understand where a defined journey breaks down. They are best for onboarding, checkout, upgrade flows, lead qualification, and other sequences with a clear path from one step to the next.

Funnel Data

Conversion rate is the starting point, not the conclusion. A PM should also look at:

  • Conversion over time

  • Time between steps

  • The volume of users affected at each step

  • Segment differences by channel, persona, geography, or engagement level

Time-to-convert is especially important. A step can show acceptable conversion while still creating heavy friction if users take too long to move forward. Slow progression often signals hesitation, confusion, or hidden dependencies.
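A minimal sketch of computing both conversion and time-to-convert from a raw event log, with hypothetical users, steps, and timestamps:

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: (user_id, step, timestamp)
events = [
    ("u1", "signup",   datetime(2024, 1, 1, 9, 0)),
    ("u1", "activate", datetime(2024, 1, 1, 9, 5)),    # fast: 5 minutes
    ("u2", "signup",   datetime(2024, 1, 1, 10, 0)),
    ("u2", "activate", datetime(2024, 1, 3, 10, 0)),   # slow: 48 hours
    ("u3", "signup",   datetime(2024, 1, 2, 8, 0)),    # never activates
]

def step_metrics(events, from_step, to_step):
    """Conversion rate and median time-to-convert between two funnel steps."""
    first_seen = {}
    for user, step, ts in events:
        first_seen.setdefault((user, step), ts)   # keep first occurrence
    entered = {u for (u, s) in first_seen if s == from_step}
    durations = [
        first_seen[(u, to_step)] - first_seen[(u, from_step)]
        for u in entered if (u, to_step) in first_seen
    ]
    conversion = len(durations) / len(entered) if entered else 0.0
    return conversion, (median(durations) if durations else None)

conv, med = step_metrics(events, "signup", "activate")   # conv = 2/3
```

Here the 67% conversion looks acceptable, but the spread in time-to-convert (minutes versus days) is the signal worth investigating.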

Pirate Funnel (AARRR)

Pirate Metrics is one of the simplest ways to structure product analytics. It breaks the user lifecycle into Acquisition, Activation, Retention, Referral, and Revenue. The value of AARRR is not the acronym itself. The value is that it forces teams to ask where value creation is really breaking.



Marketing Funnel

For marketing and web journeys, frameworks like REAN or AIDA can be useful shorthand. In digital products, REAN is often more practical because it extends attention past the initial conversion into nurture and repeat engagement.

The PM should still connect the marketing funnel back to product value. Traffic quality matters more than traffic volume if the wrong users keep arriving at the activation step.

Sales Funnel

Sales funnels matter when the product sits inside a longer buying process. Leads, marketing-qualified leads, sales-qualified leads, negotiation, close, and renewal each represent a different type of friction. Product managers should care when product experience, product proof, onboarding, or in-product value signals influence progression between those stages.

Funnel Segmentation

A funnel is rarely useful without segmentation. The same headline conversion rate can hide major differences between:

  • Acquisition channels

  • High-value and low-value users

  • Highly engaged and lightly engaged users

  • New and returning users

Segmentation helps the team choose where to act first. A practical rule is to prioritise problems that hit high volume, occur before the aha moment, and affect strategically important segments.
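A sketch of the same funnel broken down by a hypothetical channel segment, showing how a blended rate hides segment differences:

```python
from collections import defaultdict

# Hypothetical per-user funnel records: (channel, converted)
users = [
    ("ads",     True), ("ads",     False), ("ads",     False), ("ads", False),
    ("organic", True), ("organic", True),  ("organic", False),
]

def conversion_by_segment(records):
    """Per-segment conversion rates for the same funnel step."""
    totals, wins = defaultdict(int), defaultdict(int)
    for segment, converted in records:
        totals[segment] += 1
        wins[segment] += converted
    return {seg: wins[seg] / totals[seg] for seg in totals}

rates = conversion_by_segment(users)
# ads: 1/4 = 0.25, organic: 2/3; the blended 3/7 rate hides the gap
```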

How Funnels Support Decisions

Use funnels when you need to identify where a sequence breaks. Then use qualitative methods, such as interviews, session review, or support feedback, to understand why it breaks. After changes ship, compare new cohorts against old ones to see whether the intervention truly improved behaviour.

Cohort Analysis

Cohort analysis is the strongest tool in this guide for understanding stickiness. It shows how behaviour changes over time for groups of users that share something in common.

Cohorts

Cohorts group users by a shared start point or event, so you can compare how retention, engagement, and conversion change over time. Common cohort types include:

  • Acquisition cohorts, grouped by when users first arrived

  • Channel cohorts, grouped by source

  • Behavioural cohorts, grouped by actions users took

  • Demographic or technographic cohorts, grouped by user characteristics

Behavioural cohorts are often the most actionable for PMs. Comparing users who completed a key onboarding step against those who did not can reveal much more than comparing users by signup month alone.

Cohort Setup

Useful cohort analysis depends on deliberate setup:

  1. Choose the right cohort rule

  2. Set an appropriate time interval

  3. Define the retained behaviour clearly

  4. Use a meaningful minimum cohort size

  5. Separate leading and lagging interpretation

The goal is not to generate a complex chart. The goal is to compare like with like.
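The setup steps above can be sketched as a small left-justified retention table, with hypothetical users and weekly cohorts:

```python
from collections import defaultdict

# Hypothetical activity log: (user_id, signup_week, active_week)
activity = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 2),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 1, 1), ("u3", 1, 2),
    ("u4", 1, 1),
]

def retention_table(activity):
    """Left-justified cohort table: the share of each signup-week
    cohort still active N weeks after signup."""
    cohort_users = defaultdict(set)
    active = defaultdict(set)   # (cohort, weeks_since_signup) -> users
    for user, signup_week, active_week in activity:
        cohort_users[signup_week].add(user)
        active[(signup_week, active_week - signup_week)].add(user)
    return {
        cohort: {
            age: len(active[(cohort, age)]) / len(users)
            for (c, age) in active if c == cohort
        }
        for cohort, users in cohort_users.items()
    }

table = retention_table(activity)
# cohort 0: week 0 = 1.0, week 1 = 1.0, week 2 = 0.5
# cohort 1: week 0 = 1.0, week 1 = 0.5
```

Aligning by weeks-since-signup rather than calendar week is what makes the two cohorts comparable like-for-like.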

Retention Curves

Retention curves show whether the product creates durable value. A steep drop followed by a flattening line usually means the product has found a smaller but real core audience. A curve that keeps falling toward zero suggests the product is not yet sticky for that segment.

This is one of the most important patterns in product analytics because it moves the conversation away from surface activity and toward repeated value.

Left-Justified and Right-Justified Views

A left-justified view aligns cohorts by lifecycle age, such as days or weeks since signup. This is useful when comparing onboarding or retention patterns across cohorts.

A right-justified view aligns cohorts by calendar time. This is useful when testing whether a release, campaign, or product change affected all active cohorts at once.

Cohorts as a Product Tool

Use cohorts to answer questions such as:

  • Did the new onboarding improve long-term retention?

  • Do users who complete a certain action retain better?

  • Which acquisition sources bring users who stick?

  • Did the feature release improve repeat usage or only create a short spike?

Funnels tell you where users drop. Cohorts tell you whether value lasts.

KPI Tree

A KPI tree helps product managers connect company outcomes with product behaviour. It is most useful when the problem is not a single step in a journey, but a broader question about what is driving performance.

Levels of Metrics

A simple KPI tree often moves through three levels:

  1. Company outcome

  2. Product driver

  3. User behaviour

This structure helps teams avoid jumping straight to metrics that are easy to see but hard to act on.

Hierarchy of Metrics

At the top of the tree sits the main outcome or focus metric. Below it are the drivers that mathematically or behaviourally explain it. Below those are supporting metrics that help diagnose which part of the system is underperforming.

This is where product managers should distinguish between:

  • The North Star Metric, which reflects enduring user value

  • The One Metric That Matters, which reflects the current strategic constraint

  • Supporting diagnostic metrics, which help explain movement in the focus metric

Tree Structure

Build KPI trees top-down, not bottom-up. Start with the business outcome, break it into major drivers, then break those drivers into smaller inputs the team can influence.

For example, revenue might decompose into:

  • Number of active customers

  • Conversion rate to paid

  • Average revenue per customer

Each of those can then break into supporting drivers. This is more useful than starting with whichever metrics already exist in the analytics tool.
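The revenue decomposition above can be sketched as one multiplicative formula. The driver names and figures are illustrative assumptions, not a canonical model:

```python
def revenue(active_customers: int, paid_conversion: float, arpc: float) -> float:
    """Top of the tree: revenue expressed through its drivers, assuming
    active customers convert to paid at `paid_conversion` and paying
    customers generate `arpc` (average revenue per customer)."""
    return active_customers * paid_conversion * arpc

# 10,000 active customers, 5% paid conversion, $40 per paying customer
monthly_revenue = revenue(10_000, 0.05, 40.0)   # 20000.0
```

Writing the tree as a formula makes the sensitivity question concrete: the team can ask which factor is cheapest to move by a given percentage.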

Growing the Tree

KPI trees are not static. They improve as the team tests hypotheses, learns which drivers are more sensitive, and removes vanity metrics that do not support decisions.

The practical rule is simple: do not build a tree to show everything. Build it to reveal where to act next.

MECE Principle

The MECE (Mutually Exclusive, Collectively Exhaustive) principle is important here: branches should not overlap, and together they should explain the whole parent metric.

If a KPI tree fails that test, it becomes difficult to trust the diagnosis.
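The "collectively exhaustive" half of MECE can be checked numerically: the children of a branch should sum back to the parent. Overlap, by contrast, has to be checked against metric definitions, not arithmetic. A minimal sketch:

```python
def is_collectively_exhaustive(parent: float, children: list[float],
                               tol: float = 1e-6) -> bool:
    """Check the exhaustive half of MECE: non-overlapping child metrics
    should sum back to the parent metric within a tolerance."""
    return abs(parent - sum(children)) <= tol

# Revenue split by plan tier that fully accounts for the total
ok = is_collectively_exhaustive(1_000.0, [600.0, 300.0, 100.0])     # True
# A split that double-counts or misses a segment fails the check
bad = is_collectively_exhaustive(1_000.0, [600.0, 300.0, 250.0])    # False
```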

E2E Analytics

End-to-end analytics connects the full customer journey, from first touch through activation, retention, revenue, and repeat purchase. It matters when the customer experience crosses systems and teams, such as marketing, website, product, CRM, support, finance, or sales.

Tracking a Single Customer View

Customer Journey (or Single Customer View) is an end-to-end view that helps PMs see more than isolated touchpoints. It helps answer questions such as:

  • Which acquisition sources produce customers who retain?

  • Where do users disappear between marketing, signup, onboarding, and monetisation?

  • Which account or user journeys lead to expansion or repurchase?

Data Collection

The main challenge is not collecting more data. It is collecting data in a way that can be connected later. That usually means:

  • Consistent campaign and source tagging

  • Stable customer and user identifiers

  • Clear event naming

  • Shared definitions across systems

If those foundations are weak, the journey breaks apart and each team ends up reading a different version of reality.
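A sketch of why stable identifiers matter, merging hypothetical records from two systems into one view per customer id:

```python
# Hypothetical records from two systems, keyed by a stable customer id.
marketing = {"c1": {"source": "ads"}, "c2": {"source": "organic"}}
product   = {"c1": {"activated": True}, "c3": {"activated": False}}

def single_customer_view(marketing, product):
    """Merge per-system records into one journey record per customer.
    This only works because both systems share the same identifier."""
    view = {}
    for cid in marketing.keys() | product.keys():
        view[cid] = {**marketing.get(cid, {}), **product.get(cid, {})}
    return view

journeys = single_customer_view(marketing, product)
# c1 carries both source and activation; c2 and c3 are partial views,
# which is exactly what inconsistent identifiers produce at scale
```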

Data Analytics

This is where architecture matters. End-to-end analytics depends on a usable master data model that connects people, accounts, sessions, events, transactions, and lifecycle stages. The PM does not need to own the warehouse, but should understand how master data, metric definitions, and journey logic are established.

The key question is whether the data model allows the team to trace movement from acquisition to value to revenue with confidence.

Data Visualization

Dashboards are useful only after the architecture is sound. A good end-to-end dashboard helps teams follow the journey, align on shared definitions, and spot handoff failures between systems. A bad one simply hides broken data behind charts.

Three Ways to Use Data

Data serves at least three distinct purposes. Mixing them up produces bad decisions and worse conversations.

Input

Data is an input into decisions, not the decision itself. In a Build → Measure → Learn cycle, measurement informs the next move. But it competes with other inputs: customer interviews, strategic judgment, and technical constraints. Relying only on quantitative data tends to optimise for incremental improvements rather than meaningful changes.

Filter

Data acts as a filter when the team needs to choose between competing priorities or push back against the loudest voice in the room. When individual opinion and business data conflict, the team has something to reason from rather than defer to seniority.

Communication tool

This is where most teams underinvest. A well-constructed metric view of the business builds shared understanding without requiring every stakeholder to dig into raw numbers. Product managers should design their metrics communication, not just their metrics. If a metric cannot be explained in a sentence, it will not create alignment.

Choosing the Right Lens

One of the main jobs of a product manager is choosing the right analytical lens for the question at hand.

  • Use Lean Analytics stages when you need to decide what matters most right now.

  • Use AARRR when you need a lifecycle view of where value is leaking.

  • Use funnels when you need to locate friction in a sequence.

  • Use cohorts when you need to understand whether value persists over time.

  • Use KPI trees when you need to connect company outcomes with product drivers.

  • Use web and end-to-end analytics when you need supporting implementation visibility across channels and systems.

In practice, these tools often work best in sequence. A KPI tree helps identify the constrained outcome. A funnel helps locate the friction. A cohort analysis helps confirm whether the fix improved durable behaviour.

Related Topics

  • Unit Economics Calculation: analyses the costs and revenues of a single product unit to measure profitability and scalability, covering key metrics like CAC and LTV and strategies to optimise performance.

  • Product Metrics: helps teams track user behavior, growth, and retention, covering key metrics like DAU, LTV, churn, and activation, along with frameworks like AARRR and North Star.