Inflect
2026-04-08 · Phil Davis

NRR at 115%: what it measures, and the four ways to calculate it wrong

Net Revenue Retention is the number VCs and PE operating partners open your board package to find first. Bessemer calls it “the single most predictive metric in cloud” — and they’re right. An NRR above 120% means your existing customer base is growing fast enough that revenue would keep compounding even if you never signed another customer. An NRR below 90% means you’re running a leaky bucket.

But here’s the thing: most of the NRR numbers I see in board packages are wrong. Not by a lot. Not because anyone is lying. Wrong because there are at least four different ways to calculate NRR, and most teams aren’t explicit about which one they’re using.

What NRR actually measures

The formula everyone agrees on:

NRR = (Beginning ARR + Expansion − Contraction − Churn) / Beginning ARR

Over a trailing 12-month period, at the cohort level. If you started the year with $5M ARR and ended with $5.75M from that same cohort (after all expansion, contraction, and churn), your NRR is 115%.
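The formula drops straight into code. A minimal sketch using the $5M-to-$5.75M example above; the particular expansion/contraction/churn split is illustrative, since any mix netting to +$750k gives the same result:

```python
def net_revenue_retention(beginning_arr, expansion, contraction, churn):
    """NRR over a trailing 12-month period, at the cohort level."""
    if beginning_arr == 0:
        raise ValueError("cohort had no ARR at period start")
    return (beginning_arr + expansion - contraction - churn) / beginning_arr

# Worked example from the post: a $5M cohort ends the year at $5.75M net.
nrr = net_revenue_retention(
    beginning_arr=5_000_000,
    expansion=1_000_000,   # illustrative split
    contraction=150_000,
    churn=100_000,
)
print(f"{nrr:.0%}")  # 115%
```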

That’s the intuition. The calculation is where things break.

The four ways to get it wrong

1. Using the wrong denominator.

Some teams use ending ARR as the denominator instead of beginning ARR. This sounds trivial; it isn’t. For a fast-growing company where new logos are being added throughout the year, using ending ARR depresses NRR because you’re dividing by a larger number. A company with 115% NRR by the correct method might report 108% this way. Investors know the difference.
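A toy comparison makes the gap concrete (the new-logo figure is hypothetical, chosen only to show the direction of the error):

```python
cohort_beginning_arr = 5_000_000   # cohort ARR at period start
cohort_ending_arr    = 5_750_000   # same cohort, 12 months later
new_logo_arr         = 650_000     # hypothetical logos added mid-period

total_ending_arr = cohort_ending_arr + new_logo_arr

nrr_correct = cohort_ending_arr / cohort_beginning_arr  # 1.15
nrr_wrong   = cohort_ending_arr / total_ending_arr      # < 1.15: depressed by growth
```

The faster you add new logos, the larger the ending-ARR denominator, so the wrong method punishes exactly the companies it should flatter.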

2. Using cash receipts (or MRR snapshots) instead of ARR for annual contracts.

If your contracts are annual, NRR should be computed on ARR (contract value), not on cash receipts or a point-in-time MRR. Cash receipts front-load revenue. If a customer signs in March and pays in full, they show no “expansion” for the rest of the year — even though their ARR is growing. ARR-based NRR smooths this correctly.
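A single-customer sketch of the distortion, with hypothetical dates and amounts:

```python
# A customer holds a $100k annual contract; in March they expand to $120k ARR
# and pay the renewal invoice in full.
arr_by_month  = [100_000] * 2 + [120_000] * 10   # ARR view: expansion visible from March on
cash_by_month = [0, 0, 120_000] + [0] * 9        # cash view: one spike, then nothing

nrr_arr = arr_by_month[-1] / arr_by_month[0]     # 1.20 -- the real picture

# Any receipts-based comparison across the year is meaningless:
q1_cash = sum(cash_by_month[:3])                 # 120,000
q4_cash = sum(cash_by_month[9:])                 # 0 -- looks like total churn
```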

3. Ignoring the period boundary.

NRR should be computed on a cohort that existed at the beginning of the trailing period. Customers acquired during the period should not be in the denominator. Many teams accidentally include mid-period new logos because their billing system doesn’t distinguish “expansion on existing customer” from “new logo.” This inflates NRR.
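A sketch of the cohort filter, with hypothetical customers. The key line excludes anyone whose first contract postdates the period start:

```python
from datetime import date

period_start = date(2025, 1, 1)

# Hypothetical customers; beg_arr/end_arr are ARR at the period boundaries.
customers = [
    {"name": "acme",    "first_contract": date(2023, 6, 1), "beg_arr": 100_000, "end_arr": 130_000},
    {"name": "globex",  "first_contract": date(2024, 3, 1), "beg_arr": 200_000, "end_arr": 190_000},
    {"name": "initech", "first_contract": date(2025, 5, 1), "beg_arr": 0,       "end_arr": 80_000},
]

# Correct: only customers who existed at period start are in the cohort.
cohort = [c for c in customers if c["first_contract"] < period_start]
nrr = sum(c["end_arr"] for c in cohort) / sum(c["beg_arr"] for c in cohort)

# Wrong: the mid-period logo adds ARR to the numerator only (its beginning
# ARR is zero), so the ratio inflates.
inflated = sum(c["end_arr"] for c in customers) / sum(c["beg_arr"] for c in customers)
```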

4. Conflating gross retention and net retention.

GRR (Gross Revenue Retention) is NRR without expansion — only contraction and churn. GRR tells you how good you are at keeping customers. NRR tells you whether your existing customers are growing. A company with 91% GRR and 115% NRR has strong expansion but a churn problem underneath. Present both. Boards that see only NRR miss the structural churn signal.
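Both metrics fall out of the same four inputs. Using the 91% GRR / 115% NRR example above (the split between contraction and churn is illustrative):

```python
def grr(beginning_arr, contraction, churn):
    """Gross retention: no credit for expansion."""
    return (beginning_arr - contraction - churn) / beginning_arr

def nrr(beginning_arr, expansion, contraction, churn):
    return (beginning_arr + expansion - contraction - churn) / beginning_arr

# Same cohort, same year: strong expansion masking churn underneath.
beg, exp, con, ch = 5_000_000, 1_200_000, 150_000, 300_000
print(f"GRR {grr(beg, con, ch):.0%}, NRR {nrr(beg, exp, con, ch):.0%}")
# GRR 91%, NRR 115%
```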

What good looks like, by stage

Per OpenView’s SaaS Benchmarks report, median NRR by ARR cohort:

  • Under $1M ARR: NRR benchmarks are noisy and less predictive — cohorts are too small.
  • $1M–$10M ARR: Strong performers are at 105%–115%. The best are above 120%.
  • $10M–$50M ARR: Median is around 110%. Top quartile is 120%+.
  • $50M+ ARR: Churn becomes harder to offset with expansion. Median is 108%–112%.

If you’re a Series B SaaS company at $8M ARR reporting 95% NRR, that’s the number your lead investor will want to discuss first. Have the framing ready.

How Marlow handles NRR

This is one of the reasons Marlow uses explicit metric lineage for every number in the package. When she computes NRR, she records:

  • Which method (ARR-weighted trailing 12-month cohort)
  • Which data source (customer billing export)
  • The confidence level (High if billing data is complete; Medium if we’re inferring from trial balance deferred revenue)
  • Any caveats (e.g., “mid-period new logos excluded; see footnote”)

If the billing data isn’t uploaded, NRR renders as “N/A — customer data required” rather than defaulting to zero or an implied estimate. A zero is worse than an N/A — it’s a wrong number presented as a right one.
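A sketch of what such a lineage record might look like. This is an assumed shape for illustration, not Marlow's actual data model; the point is that a missing value is a distinct state, never a zero:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MetricResult:
    # Hypothetical record shape for illustration.
    name: str
    value: Optional[float]   # None means "not computable", never a defaulted 0
    method: str
    source: str
    confidence: str          # "High" / "Medium"
    caveats: list = field(default_factory=list)

    def render(self) -> str:
        if self.value is None:
            return f"{self.name}: N/A — {self.source} required"
        return f"{self.name}: {self.value:.0%} ({self.method}; confidence: {self.confidence})"
```

With billing data present, `render()` produces something like "NRR: 115% (ARR-weighted trailing 12-month cohort; confidence: High)"; without it, the N/A string, so a gap in the data can never masquerade as churn.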

The board confirmed our ARR-weighted trailing 12-month method in October 2025. Marlow uses it consistently from that point forward and cites the confirmation date in footnotes. No methodology drift, no re-explanation every quarter.


Phil Davis is a fractional CFO and the founder of Inflect.

Sources

  1. Bessemer Venture Partners — 10 Laws of Cloud Computing. BVP defines NRR and explains why it's the single most predictive metric for SaaS valuation multiples.
  2. OpenView Partners — SaaS Benchmarks Report. Annual NRR benchmarks by ARR cohort and growth stage; used for the benchmark ranges quoted in this post.
  3. Jason Lemkin, SaaStr — Net Revenue Retention. Lemkin's definition of NRR and its relationship to company value.
  4. Kyle Poyar, OpenView — Usage-Based Pricing and NRR. How usage-based models distort traditional NRR calculations.