When the PDF says one thing and the slide says another
Two years ago, I was called into a board meeting at a Series B company where I wasn’t the CFO. The previous CFO had left two months earlier. The board was working through the board package — a standard PDF with supplementary slides.
Halfway through the revenue slide, the lead investor stopped. He’d noticed something. The PDF said gross margin was 68.4%. The slide said 71.2%. Same period. Same company.
Neither number was wrong, exactly. The PDF was using total revenue in the denominator. The slide was using subscription revenue only (excluding professional services). Both are defensible. Neither was labeled.
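A minimal numeric sketch of how that divergence happens. The figures below are illustrative, not the company's actuals; they're chosen only to show that one gross profit over two defensible denominators yields two different, unlabeled margins:

```python
# Illustrative figures only, chosen to reproduce the kind of gap in the story.
gross_profit = 6_840_000
total_revenue = 10_000_000          # includes professional services
subscription_revenue = 9_600_000    # subscription only

# Same numerator, two defensible denominators, two different margins.
margin_vs_total = gross_profit / total_revenue * 100          # ~68.4
margin_vs_subscription = gross_profit / subscription_revenue * 100  # ~71.25
```

Both computations are "correct." Without a label on the denominator, a reader has no way to know they measure different things.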
The meeting went sideways. Not because the margin was wrong — because nobody in the room could immediately explain the difference. The interim CFO didn’t know which template the slides had come from. Nobody knew which number the board had been tracking for the previous six months.
They spent 40 minutes reconstructing the methodology. The strategy discussion didn’t happen.
The trust problem is architectural, not human
This kind of incident is almost never the result of negligence. It’s the result of a fragmented production process.
The PDF gets built in one tool. The slides get built in another. The dashboard is a third system. Each has its own formatter, its own rounding rules, its own treatment of edge cases (what to do with a negative gross margin month? with a partial period? with foreign currency revenue?).
The numbers diverge because the formatters diverge. And because nobody built a gate that checks whether they agree before the package goes out.
The standard fix — “let’s have someone check the numbers before we send” — doesn’t work at scale. Manual checks fail when the team is rushed (they always are before a board meeting), when the package is long (it always is), and when the checker isn’t the person who built the package (they usually aren’t).
The only fix that works structurally is a validation gate that runs before the export button enables.
What the gate should check
At minimum, before any board package leaves the system:
1. Balance sheet balance. Assets must equal liabilities plus equity, to within $0.50. This sounds like a low bar. It isn’t. Unmapped accounts, missing retained earnings entries, and sign convention errors (debit vs. credit) all break the balance silently if nobody is checking. A $10.6M asset side against a $32.0M liability-plus-equity side is a real failure mode I’ve seen in a live board package.
2. Cross-artifact metric reconciliation. Every metric that appears in the PDF and also in the PPTX must match to the cent (for dollar amounts) and to 0.1 percentage points (for rates). Gross margin on slide 6 must equal gross margin in the PDF executive summary. The dashboard KPI card must show the same ARR as the P&L.
3. KPI completeness. No metric on the KPI slide should show as zero unless zero is the correct computed value. “Gross Margin 0.0%” is the classic failure — it usually means the formatter hit an edge case (no revenue, divide-by-zero, missing data) and fell back to zero silently. A blank or “N/A — data required” is honest. A zero isn’t.
4. Methodology consistency. NRR computed using Method A this month and Method B last month gives a number that isn’t comparable. The package should declare the method in a footnote and flag if the method changed since the last period.
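The four checks above can be sketched as plain functions that each return a list of failure messages (an empty list means the check passed). This is a hypothetical sketch, not Inflect's actual code; the `Metric` shape, constant names, and message formats are all assumptions:

```python
from dataclasses import dataclass

# Tolerances from the checks above (hypothetical constants).
BS_TOLERANCE = 0.50    # balance sheet: assets vs. liabilities + equity
DOLLAR_TOL = 0.01      # cross-artifact: dollar amounts match to the cent
RATE_TOL_PP = 0.1      # cross-artifact: rates match to 0.1 percentage points

@dataclass
class Metric:
    name: str
    value: float
    kind: str                  # "dollars" or "rate"
    data_quality: str = "ok"   # "ok" or "unavailable"

def check_balance_sheet(assets: float, liabilities: float, equity: float) -> list[str]:
    """Check 1: assets must equal liabilities plus equity, to within $0.50."""
    gap = abs(assets - (liabilities + equity))
    return [f"balance sheet off by ${gap:,.2f}"] if gap > BS_TOLERANCE else []

def check_cross_artifact(artifacts: dict[str, dict[str, Metric]]) -> list[str]:
    """Check 2: any metric appearing in more than one artifact must agree."""
    failures = []
    all_names = set().union(*(a.keys() for a in artifacts.values()))
    for name in sorted(all_names):
        views = {src: a[name] for src, a in artifacts.items() if name in a}
        if len(views) < 2:
            continue  # metric appears in only one artifact; nothing to diff
        values = [m.value for m in views.values()]
        kind = next(iter(views.values())).kind
        tol = DOLLAR_TOL if kind == "dollars" else RATE_TOL_PP
        if max(values) - min(values) > tol:
            failures.append(f"{name} disagrees across {sorted(views)}: {values}")
    return failures

def check_kpi_completeness(metrics: list[Metric]) -> list[str]:
    """Check 3: a zero is only acceptable when the underlying data exists."""
    return [f"{m.name} is 0 but data_quality is 'unavailable'"
            for m in metrics if m.value == 0 and m.data_quality == "unavailable"]

def check_methodology(current_method: str, prior_method: str) -> list[str]:
    """Check 4: flag if the declared method changed since the last period."""
    if current_method == prior_method:
        return []
    return [f"method changed since last period: {prior_method} -> {current_method}"]
```

Returning messages instead of raising exceptions matters here: the gate should surface every failure at once, not stop at the first one.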
The cost of getting it wrong
The direct cost is the 40 minutes of reconstruction in that board meeting. The indirect cost is harder to quantify but real: the board’s confidence in the numbers drops, and it doesn’t recover quickly.
A board that trusts the numbers asks strategic questions. A board that doesn’t trust the numbers asks reconciliation questions. You’re paying CFO rates for reconciliation work.
A high-trust board meeting sounds like: “Our NRR is 115% — up from 108% a year ago, and here’s why.” A low-trust meeting sounds like: “Our NRR is 115%, which we compute as…” — and you’re in the weeds for 20 minutes before anyone can talk about what the number means.
How we built the gate into Inflect
Every Inflect package runs a pre-export validation before the PDF and PPTX render:
- Balance sheet check: hard fail if assets ≠ liabilities + equity (to $0.50).
- Cross-artifact diff: compare every metric in the output set across PDF, PPTX, and dashboard JSON. Refuse to export if any metric differs by more than $0.01 or 0.1pp.
- KPI completeness: flag any metric where value is 0 and data_quality is “unavailable.” Surface the offending metrics in the review UI, along with the underlying accounts or inputs that caused the failure.
The export button doesn’t enable until all three pass. The checks themselves always run. You can override a failure, but only by explicitly acknowledging it, and every override is logged.
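The gating-plus-logged-override behavior can be sketched as follows. Again an assumption-laden sketch, not Inflect's implementation; the function names and log format are hypothetical:

```python
import logging
from datetime import datetime, timezone
from typing import Callable, List, Optional

log = logging.getLogger("export_gate")

# A check is any zero-argument callable returning a list of failure messages.
Check = Callable[[], List[str]]

def run_gate(checks: List[Check]) -> List[str]:
    """Run every check and collect all failures (don't stop at the first)."""
    failures: List[str] = []
    for check in checks:
        failures.extend(check())
    return failures

def export_enabled(checks: List[Check], override_by: Optional[str] = None) -> bool:
    """The export button enables only when every check passes.

    An override is possible but never silent: the operator must be named,
    and the override is logged together with the failures it waves through.
    """
    failures = run_gate(checks)
    if not failures:
        return True
    if override_by is not None:
        log.warning("export override by %s at %s; unresolved failures: %s",
                    override_by, datetime.now(timezone.utc).isoformat(), failures)
        return True
    return False
```

The design choice worth noting is that the override path still records what it bypassed; an unlogged override would recreate exactly the silent-divergence problem the gate exists to prevent.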
Marlow’s commentary draft also won’t be finalized if the numbers haven’t passed validation. The narrative and the numbers have to reconcile before anything goes to the board.
The Series B company with the 40-minute meeting? They’re now a design partner. The first package they ran through Inflect caught an $18K revenue-recognition timing difference between the PDF and the PPTX that had been present, uncaught, for three months.
Forty minutes is cheap at the consulting rate. The compounding credibility cost is not.
Phil Davis is a fractional CFO and the founder of Inflect.
Sources
- AICPA — Audit Committee Effectiveness: What Works Best — AICPA guidance on financial reporting accuracy and audit committee responsibilities.
- McKinsey — The CFO's role in enterprise value creation — Research on how financial reporting credibility affects executive trust and decision quality.
- FASB — ASC 606 Revenue from Contracts with Customers — Canonical source for revenue recognition guidance; referenced in the discussion of revenue-recognition timing.