
Forecast vs Actual: How to Measure and Close the Gap

Pete Furseth · 8 min read
Tags: revenue analytics, comparison, forecast accuracy, sales forecasting, B2B SaaS

Only 7% of B2B companies achieve 90%+ forecast accuracy (Gartner, 2025). That means 93% of revenue teams are making resource allocation decisions, hiring plans, and board commitments based on numbers they will miss. The forecast vs actual gap is not a reporting problem. It is an operational problem that compounds every quarter.

The good news: forecast variance is diagnosable. Every miss has a root cause, and root causes cluster into a small number of patterns. After building custom forecast models for B2B SaaS companies for more than a decade, I can tell you that most companies miss for the same five reasons. The companies that close the gap are the ones that stop treating the forecast as a number and start treating it as a system.

This is the guide to measuring the gap, diagnosing the cause, and building the system that prevents the next miss.

Forecast vs Actual: The Core Metrics

| Metric | Formula | What It Tells You | Target |
|---|---|---|---|
| Forecast accuracy | (1 - \|Actual - Forecast\| / Actual) x 100 | Overall prediction quality | 90%+ |
| Forecast bias | (Forecast - Actual) / Actual x 100 | Direction of the miss (over or under) | Within +/- 5% |
| Variance by segment | Segment forecast vs segment actual | Where the miss originates | No segment > 15% variance |
| Deal slippage rate | Deals that pushed / Total forecasted deals | Pipeline movement discipline | Under 10% |
| Coverage-to-close ratio | Beginning pipeline / Closed revenue | How much pipeline you need per dollar closed | Track quarterly to establish baseline |
| Stage conversion variance | Actual stage conversion vs modeled conversion | Whether conversion assumptions hold | Within 5% of model |
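
To make the first two formulas concrete, here is a minimal Python sketch; the function names are illustrative, not from any particular library:

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    """(1 - |Actual - Forecast| / Actual) x 100, as in the table above."""
    return (1 - abs(actual - forecast) / actual) * 100


def forecast_bias(forecast: float, actual: float) -> float:
    """Signed bias: positive means you over-forecasted, negative means you under-forecasted."""
    return (forecast - actual) / actual * 100


print(forecast_accuracy(10_000_000, 9_000_000))  # ~88.9: forecasted $10M, closed $9M
print(forecast_bias(10_000_000, 9_000_000))      # ~+11.1: the miss was an over-forecast
```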

The Five Root Causes of Forecast Misses

1. Deal Slippage

Deal slippage is the most common cause of forecast misses. Deals that were forecasted to close this quarter push to next quarter. They do not die. They slide.

The aggregate impact is brutal. Suppose your target is $10M and your coverage is 3x, which implies roughly 33% pipeline-to-revenue conversion. If 15% of that $30M pipeline slips, you close about $8.5M: a 15% miss from slippage alone, before any other problem compounds it. Slippage also creates a second-order problem, because the slipped deals inflate next quarter's pipeline, creating false confidence.
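
A quick sketch of that arithmetic, using the hypothetical figures above:

```python
target = 10_000_000
coverage = 3.0                # pipeline / target
conversion = 1 / coverage     # 3x coverage implies roughly 33% conversion
slippage = 0.15               # share of pipeline that pushes to next quarter

pipeline = target * coverage
expected_close = pipeline * (1 - slippage) * conversion

print(f"Expected close: ${expected_close:,.0f}")                    # ~$8.5M
print(f"Miss vs target: {(target - expected_close) / target:.0%}")  # ~15%
```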

How to detect it: Track the "vintage" of your pipeline: deals that entered this quarter versus deals carried over from previous quarters. If more than 30% of your committed pipeline is carryover, your forecast is built on stale opportunities.

How to fix it: Implement strict commit criteria tied to buyer actions, not rep judgment. A deal is "committed" when the economic buyer has verbally confirmed timing and budget, not when the rep believes it will close. ORM's models assign probability based on buyer behavior data, not CRM stage alone.
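
As a sketch of the carryover check, assuming a deal-level export where the column names are illustrative:

```python
import pandas as pd

# Illustrative deal-level data; adapt column names to your CRM export.
deals = pd.DataFrame({
    "amount":          [50_000, 120_000, 80_000, 200_000, 60_000],
    "forecast_stage":  ["commit"] * 5,
    "created_quarter": ["2025-Q1", "2025-Q2", "2025-Q1", "2025-Q2", "2024-Q4"],
})

current_quarter = "2025-Q2"
committed = deals[deals["forecast_stage"] == "commit"]
carryover = committed[committed["created_quarter"] != current_quarter]

share = carryover["amount"].sum() / committed["amount"].sum()
if share > 0.30:
    print(f"Warning: {share:.0%} of committed pipeline is carryover from prior quarters")
```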

2. Conversion Rate Drops

Your forecast model assumes certain conversion rates at each pipeline stage. When those rates decline without detection, the forecast inherits the error.

A 5% drop in Stage 2 to Stage 3 conversion does not sound dramatic. But if you have $20M in Stage 2, that 5% drop means $1M less pipeline reaching Stage 3, which means $300-400K less closed revenue after applying downstream conversion rates.

How to detect it: Monitor stage conversion rates weekly, not quarterly. Plot them as a rolling 90-day average. Any downward trend beyond one standard deviation from the trailing four-quarter mean warrants investigation.

How to fix it: Decompose the drop by segment and rep. Conversion rate drops are rarely uniform. They concentrate in specific segments (maybe enterprise is flat but mid-market dropped) or specific reps (new hires ramping slower than modeled). Fix the specific problem, not the average.
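
A sketch of the detection rule, with made-up weekly rates and the first four weeks standing in for the trailing baseline:

```python
import pandas as pd

# Weekly Stage 2 -> Stage 3 conversion rates (illustrative values).
weekly = pd.Series(
    [0.42, 0.41, 0.43, 0.40, 0.39, 0.37, 0.36, 0.34],
    index=pd.date_range("2025-01-06", periods=8, freq="W"),
)

rolling = weekly.rolling(window=4).mean()   # smoothed trend (rolling-average proxy)
baseline_mean = weekly.iloc[:4].mean()      # stand-in for the trailing four-quarter mean
baseline_std = weekly.iloc[:4].std()

threshold = baseline_mean - baseline_std
print(rolling[rolling < threshold])  # weeks where the smoothed rate warrants investigation
```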

3. Pipeline Quality Degradation

Total pipeline value looks healthy. Coverage is 3.5x. But the composition has shifted. More early-stage deals, fewer late-stage deals. More small deals, fewer large ones. More competitive deals, fewer uncontested ones.

The aggregate number hides the quality decline. A $40M pipeline with a healthy stage distribution produces a very different outcome than a $40M pipeline front-loaded with Stage 1 opportunities.

How to detect it: Weight pipeline by stage and deal quality indicators. Compare the weighted pipeline to the unweighted number. If the gap is growing, quality is declining. Also track "pipeline created this quarter that closed this quarter" as a leading indicator of pipeline freshness.

How to fix it: Separate pipeline quantity targets from pipeline quality metrics. Your demand gen team should be measured not just on pipeline created but on pipeline that converts. ORM's prescriptive models flag pipeline quality issues before they hit the forecast.
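
A minimal sketch of the weighted-vs-unweighted comparison; the stage weights here are placeholders, not calibrated values:

```python
# Placeholder stage weights; calibrate to your own historical conversion rates.
stage_weights = {"stage_1": 0.10, "stage_2": 0.25, "stage_3": 0.50, "stage_4": 0.80}

# Open pipeline dollars per stage (illustrative: $40M total, front-loaded in Stage 1).
pipeline_by_stage = {
    "stage_1": 18_000_000,
    "stage_2": 12_000_000,
    "stage_3": 7_000_000,
    "stage_4": 3_000_000,
}

unweighted = sum(pipeline_by_stage.values())
weighted = sum(stage_weights[s] * v for s, v in pipeline_by_stage.items())

print(f"Unweighted pipeline: ${unweighted:,.0f}")   # the healthy-looking $40M
print(f"Weighted pipeline:   ${weighted:,.0f}")     # the number that predicts revenue
```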

4. Rep Sandbagging and Happy Ears

Human bias is baked into any forecast that relies on rep input. Reps who have been burned by missed commits in the past will sandbag. Reps chasing accelerators will over-commit. Both distortions hit the forecast.

Sandbagging creates the illusion of overperformance ("we beat forecast!") while masking the real pipeline health. Happy ears create the opposite problem: inflated commitments that evaporate in the final weeks of the quarter.

How to detect it: Compare each rep's forecast accuracy over the trailing four quarters. Reps who consistently beat forecast by 20%+ are sandbagging. Reps who consistently miss by 15%+ may have happy ears. Both patterns are addressable.

How to fix it: Reduce reliance on subjective rep input. ORM's models generate forecasts from pipeline data (stage, age, engagement signals, historical conversion) rather than asking reps what they think will close. The model does not have emotions.
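
As a sketch, here is the trailing-bias check with hypothetical rep totals (each row should aggregate four quarters of forecast vs actual):

```python
import pandas as pd

# Hypothetical trailing-four-quarter totals per rep.
reps = pd.DataFrame({
    "rep":      ["ana", "ben", "cam"],
    "forecast": [2_000_000, 3_000_000, 2_500_000],
    "actual":   [2_500_000, 2_400_000, 2_520_000],
})

reps["bias_pct"] = (reps["actual"] - reps["forecast"]) / reps["forecast"] * 100
reps["pattern"] = pd.cut(
    reps["bias_pct"],
    bins=[-100, -15, 20, 1000],  # miss by 15%+ / roughly calibrated / beat by 20%+
    labels=["happy ears", "calibrated", "sandbagging"],
)
print(reps)
```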

5. Coverage Gaps

Sometimes the forecast misses because there simply was not enough pipeline to hit the target at any reasonable conversion rate. This is the most avoidable cause because it is detectable months in advance.

If your historical pipeline-to-revenue conversion rate is 25% and you need $10M in closed revenue, you need $40M in pipeline at the start of the quarter. If you enter the quarter with $30M, the math says you will close $7.5M. No amount of deal acceleration changes that arithmetic.

How to detect it: Track pipeline coverage by segment at the start of each quarter. Compare it to the coverage ratio required to hit target based on your historical conversion rate (not the industry average).

How to fix it: Build pipeline coverage requirements into planning, not just monitoring. Set coverage targets by segment and track them as a leading indicator 90 days before the quarter starts. ORM's models calculate the exact coverage needed per segment based on your specific conversion rates.
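
The coverage arithmetic from the example above, as a sketch:

```python
target = 10_000_000
historical_conversion = 0.25          # pipeline-to-revenue, from your own history

required_pipeline = target / historical_conversion          # $40M to support a $10M target
current_pipeline = 30_000_000
projected_close = current_pipeline * historical_conversion  # the math says $7.5M

print(f"Pipeline gap: ${required_pipeline - current_pipeline:,.0f}")
print(f"Projected close: ${projected_close:,.0f} vs ${target:,.0f} target")
```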

Building a Forecast vs Actual System

A one-time variance analysis is useful. A recurring system is transformational. Here is the structure.

Weekly: Pipeline health check. Compare current pipeline stage distribution to the distribution required to hit target. Flag any stage where volume dropped 10%+ week over week. Monitor conversion rates by stage.

Monthly: Variance decomposition. Compare month-to-date actuals to the monthly pace required by the forecast. Decompose any variance by segment, rep, and deal type (see the sketch below). Identify whether the issue is pipeline quantity, quality, or conversion.

Quarterly: Full forecast vs actual review. The comprehensive analysis. Compare the forecast to the actual by every dimension: segment, stage, rep, deal size, deal type, new vs expansion. Identify the top three root causes. Update the model assumptions for next quarter.

Annually: Model recalibration. Review trailing four quarters of forecast vs actual data. Identify systematic biases. Recalibrate conversion rate assumptions, coverage requirements, and seasonal adjustments.
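
For the monthly variance decomposition, a minimal sketch (segment names and figures are illustrative):

```python
import pandas as pd

# Illustrative segment-level forecast vs actual for the period.
df = pd.DataFrame({
    "segment":  ["enterprise", "mid-market", "smb"],
    "forecast": [5_000_000, 3_000_000, 2_000_000],
    "actual":   [4_900_000, 2_200_000, 2_100_000],
})

df["variance"] = df["actual"] - df["forecast"]
df["variance_pct"] = df["variance"] / df["forecast"] * 100
print(df.sort_values("variance"))  # mid-market surfaces as the source of the miss
```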

How ORM Closes the Gap

ORM builds custom forecast models that eliminate the most common sources of forecast error. Our approach works in three layers.

Layer 1: Data-driven forecasting. We generate the forecast from your pipeline data using mathematical models calibrated to your specific conversion rates, sales cycle length, and deal dynamics. This eliminates the subjective bias from rep input.

Layer 2: Variance decomposition. When the forecast and actual diverge, our models automatically decompose the variance by segment, stage, and rep. You do not have to hunt for the root cause. It surfaces automatically.

Layer 3: Prescriptive analytics. The model does not just tell you where the gap is. It tells you what to do. Which deals to prioritize. Where to add pipeline. How to reallocate resources. The gap between forecast and actual becomes a specific action plan, not a number on a slide.

Our clients typically move from 65-75% forecast accuracy to 85-95% within two quarters. The improvement comes from better modeling (more accurate predictions) and better execution (prescriptive actions that close the gaps the model identifies).

The Bottom Line

Every forecast will be wrong. The question is whether it is wrong by 2% or 20%, and whether you have a system to diagnose and correct the error in real time.

Forecast vs actual analysis is not a post-mortem exercise. It is an operating system. The companies that run it rigorously, checking pipeline health weekly, decomposing variance monthly, and reviewing root causes quarterly, are the ones that achieve the forecast accuracy their boards expect.

The gap between 75% accuracy and 95% accuracy is not a tooling gap. It is a discipline gap. Build the system. Run the system. The accuracy follows.

Related reading:
- Sales Forecasting: Complete Guide
- Forecast Accuracy
- Pipeline Coverage Ratio
- Deal Slippage
- Revenue Variance
- Stage Conversion Rate

Frequently Asked Questions

What is forecast vs actual analysis?

Forecast vs actual analysis compares your predicted revenue to what you actually closed, then decomposes the variance by segment, stage, rep, and deal type to identify why the gap exists. It is not just measuring the miss. It is diagnosing the cause so you can prevent the next one.

What is an acceptable forecast accuracy for B2B SaaS?

Best-in-class B2B SaaS companies achieve 90-95% forecast accuracy. The median sits around 75% (Gartner, 2025). Companies below 80% are losing significant value through misallocated resources, missed hiring windows, and cash flow surprises. ORM clients typically reach 85-95% accuracy within two quarters.

How do you calculate forecast accuracy?

The most common formula is: Forecast Accuracy = (1 - |Actual - Forecast| / Actual) x 100. If you forecasted $10M and closed $9M, accuracy is roughly 89%. But a single number hides segment-level variance. Always decompose by segment, deal type, and rep to find the real story.

Why do sales forecasts miss?

The five most common causes are: deal slippage (deals push to next quarter), conversion rate drops (stage-to-stage conversion declines without detection), pipeline quality degradation (more deals but lower close probability), rep sandbagging or happy ears (subjective inputs distort the number), and coverage gaps (insufficient pipeline to hit target even at historical conversion rates).

Pete Furseth
Sales & Marketing Leader, ORM Technologies
Pete has built custom revenue forecast models for B2B SaaS companies for over a decade.

See how ORM turns these insights into action

ORM builds custom revenue forecast models for B2B SaaS companies. Not dashboards. Prescriptive analytics that tell you what to do next.

Schedule a Demo