
Forecast Accuracy: How to Measure It, Why It Matters, and How to Improve It


By Pete Furseth

Forecast accuracy is the single metric that tells you whether your revenue operation is working or performing theater. Every other sales metric feeds into it. Pipeline velocity, win rate, stage conversion, deal aging. They all exist to make this one number better.

And almost nobody gets it right.

87% of enterprises missed revenue targets in 2025 (Clari Labs, 2026). Only 7% of companies achieve 90%+ forecast accuracy (Gartner). That gap is not a minor calibration issue. It is a systemic failure in how revenue organizations measure, manage, and act on their pipeline data.

I have spent two decades building forecast models for B2B SaaS companies. The pattern is always the same. Teams measure the wrong things, update too infrequently, and confuse the forecast with a wish. This guide covers the formulas that matter, the benchmarks that set the bar, and the five changes that move teams from guessing to planning.

What Is Forecast Accuracy?

Forecast accuracy measures how close your revenue prediction was to actual results. It is expressed as a percentage, where 100% means you nailed it and anything below tells you how far off you were.

The concept is simple. The execution is where companies go sideways.

Most organizations treat the forecast as a single number produced once per quarter and judged at the end. That is like checking your speedometer only when you arrive at the destination. The value of forecast accuracy is not in the final grade. It is in the weekly signal that tells you whether you are on track, off track, and what to do about it.

Forecast Accuracy Formula

The standard formula:

Forecast Accuracy = (1 - |Forecast - Actual| / Forecast) x 100

If you forecast $1M and close $900K, your accuracy is 90%. If you forecast $1M and close $1.2M, your accuracy is 80%. Note that over-forecasting and under-forecasting both count as misses. Beating the number by 20% is not a win from a forecasting perspective. It means your model did not capture what was happening in the pipeline.
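The formula and both worked examples can be sketched in a few lines of Python:

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    # Over- and under-forecasting both count as misses (absolute error),
    # measured against the forecast, as in the examples above.
    return (1 - abs(forecast - actual) / forecast) * 100

print(round(forecast_accuracy(1_000_000, 900_000), 1))    # 90.0: closed $100K under
print(round(forecast_accuracy(1_000_000, 1_200_000), 1))  # 80.0: closed $200K over
```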

Here is a quick reference for the variations you will encounter:

Formula | What It Measures | When to Use
(1 - |Forecast - Actual| / Forecast) x 100 | Overall accuracy | Quarterly and annual reviews
(Forecast - Actual) / Actual x 100 | Directional bias (over vs. under) | Identifying systematic optimism or pessimism
Mean Absolute Percentage Error (MAPE) | Average error across periods | Comparing accuracy across segments or time
Weighted Forecast Error | Error weighted by deal size | Enterprise pipelines where deal size varies widely

MAPE is useful when you want to compare accuracy across different segments or time periods. If your SMB forecast runs at 85% accuracy and your enterprise forecast runs at 65%, that tells you where the problem lives.

The directional formula matters because the direction of the miss reveals the cause. Consistent over-forecasting means your pipeline is inflated with deals that will not close. Consistent under-forecasting means your model is not capturing deal momentum or you have a sandbagging problem.
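The two metrics answer different questions, which a small sketch with hypothetical quarterly numbers makes concrete: MAPE measures how far off you are, bias measures which direction you lean.

```python
def mape(forecasts, actuals):
    # Mean Absolute Percentage Error: average unsigned error relative to actuals.
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100

def bias(forecasts, actuals):
    # Signed average error: positive means systematic over-forecasting.
    return sum((f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100

# Hypothetical four quarters of enterprise forecast vs. actual, in $M:
f = [1.0, 1.2, 0.90, 1.1]
a = [0.8, 1.0, 0.95, 0.9]
print(round(mape(f, a), 1), round(bias(f, a), 1))  # 18.1 15.5
```

MAPE (18.1%) says the forecast misses by about a fifth; bias (+15.5%) says those misses are mostly optimism, which points to an inflated pipeline rather than a noisy model.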

Why Forecast Accuracy Matters More Than Ever

Three market shifts have made forecast accuracy the defining RevOps metric of this era.

The Margin for Error Has Collapsed

Median B2B win rates dropped to 19% in 2024, down from 23% in 2022 (First Page Sage, 2025). Sales cycles have lengthened 22% since 2022 (Digital Bloom, 2025). When fewer deals close and each one takes longer, the cost of a forecast miss goes up. You cannot recover from a bad Q1 forecast by accelerating Q2 deals when those deals take six months to close.

Board and Investor Expectations Have Tightened

In the era of efficient growth, revenue predictability has replaced revenue growth as the primary valuation driver. Companies with forecast variance above 20% struggle to command premium multiples. The forecast is no longer an internal planning tool. It is a credibility metric with external stakeholders.

The Data to Get It Right Now Exists

Companies with weekly pipeline velocity tracking achieve 87% forecast accuracy versus 52% for teams that track irregularly (Digital Bloom, 2025). The gap is not about having better data. It is about using the data that already exists in your CRM with the right cadence and methodology.

How to Measure Forecast Accuracy

Measuring forecast accuracy requires three decisions: what to measure, at what level, and how often.

What to Measure

Track accuracy at three levels:

1. Total revenue. The number your CFO and board care about. This is the headline metric.

2. By segment. Enterprise, mid-market, and SMB behave differently. A blended accuracy of 80% can hide an enterprise forecast that is off by 40% and an SMB forecast that is nearly perfect.

3. By rep or team. Individual accuracy reveals coaching opportunities. A rep who consistently over-forecasts by 30% has a different problem than one who consistently under-forecasts by 10%.
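A toy example shows how a respectable blended number can hide a segment problem (the figures are hypothetical):

```python
# Hypothetical quarter: (forecast, actual) per segment, in $M.
segments = {
    "enterprise": (2.0, 1.20),  # badly off
    "mid_market": (1.5, 1.45),
    "smb":        (1.0, 0.98),
}

def accuracy(forecast, actual):
    return (1 - abs(forecast - actual) / forecast) * 100

total_f = sum(f for f, _ in segments.values())
total_a = sum(a for _, a in segments.values())
print(f"blended: {accuracy(total_f, total_a):.0f}%")  # blended: 81%
for name, (f, a) in segments.items():
    print(f"{name}: {accuracy(f, a):.0f}%")  # enterprise sits at 60%
```

The blended figure looks like median performance; the segment breakdown shows where the problem actually lives.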

At What Level of Granularity

The most useful accuracy measurement is at the deal level, rolled up to segment and total. This means comparing each deal's forecasted close date and amount against actual outcomes. Deal-level accuracy exposes patterns that aggregate numbers hide.

For example, a team might hit 85% total accuracy but have systematic errors in deal timing. They close the right amount of revenue, but not the deals they predicted. That looks fine on the scorecard. It is terrible for resource planning.

How Often

Weekly. This is non-negotiable for B2B SaaS companies with sales cycles above 30 days. Companies that review forecast accuracy weekly can course-correct. Companies that review quarterly can only post-mortem.

The weekly review compares the current forecast to the prior week's forecast and to the actual run rate. If this week's forecast shifted by more than 10% from last week without a clear cause (a large deal closing or dying), the forecast is not data-driven. It is a guess that changes with rep sentiment.
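That 10% shift check is trivial to automate. A minimal sketch, with the threshold as a parameter:

```python
def forecast_shift_flag(current: float, prior: float, threshold: float = 0.10) -> bool:
    # Flag when the forecast moved more than `threshold` week over week,
    # prompting the "what changed?" question in the weekly review.
    return abs(current - prior) / prior > threshold

print(forecast_shift_flag(2_100_000, 2_400_000))  # True: a 12.5% swing needs a cause
print(forecast_shift_flag(2_350_000, 2_400_000))  # False: normal week-to-week movement
```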

Forecast Accuracy Benchmarks for B2B SaaS

Here is where most companies actually land, and what good looks like:

Accuracy Level | Percentage of Companies | What It Indicates
90%+ | 7% (Gartner) | Prescriptive analytics, weekly cadence, deal-level tracking
80-89% | ~20% | Strong process, data-driven, room to improve on timing
70-79% | ~30% | Median performance, stage-weighted models, monthly reviews
60-69% | ~25% | Rep-driven forecasts, quarterly reviews, stale pipeline
Below 60% | ~18% | No structured process, CRM data quality issues

The 7% figure from Gartner is the number that should bother every revenue leader. It means 93% of companies are making hiring decisions, budget allocations, and board commitments on forecasts that miss by double digits.

The companies in the 90%+ category are not using magic. They are doing five things differently.

Five Changes That Move Forecast Accuracy from 60% to 90%+

1. Replace Rep Judgment with Deal Signals

The single biggest source of forecast error is rep judgment. Reps are optimistic by nature and training. They overweight their most recent conversation with a prospect and underweight structural signals like deal age, stakeholder count, and competitive presence.

The fix is to layer objective deal signals on top of rep input. When top and bottom performers in the same pipeline show an 11x velocity delta (Ebsta/Pavilion, 2025), deals are not being forecast correctly if you rely on rep calls alone.

Build a deal score based on measurable activity: number of stakeholder contacts, recency of last meeting, presence of a mutual action plan, and stage-appropriate milestones completed. Use that score to weight, not replace, the rep's call.
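A sketch of that blend follows. The signal names and weights here are illustrative assumptions, not a standard; the point is the structure, where the objective score tempers the rep's call rather than replacing it.

```python
# Illustrative deal score; signal names and weights are assumptions, not a standard.
SIGNAL_WEIGHTS = {
    "stakeholders_engaged": 0.30,  # normalized count of engaged stakeholders
    "meeting_recency": 0.25,       # 1.0 if met within the last 14 days, decaying after
    "mutual_action_plan": 0.25,    # 1.0 if a shared plan exists, else 0.0
    "milestones_complete": 0.20,   # fraction of stage-appropriate milestones done
}

def deal_score(signals: dict) -> float:
    # Each signal is pre-normalized to [0, 1]; the score is their weighted sum.
    return sum(SIGNAL_WEIGHTS[name] * signals[name] for name in SIGNAL_WEIGHTS)

def blended_probability(rep_call: float, signals: dict, blend: float = 0.5) -> float:
    # Blend the rep's call with the objective score: weight it, don't replace it.
    return blend * rep_call + (1 - blend) * deal_score(signals)

# A rep calls the deal at 90%, but the structural signals are weak:
p = blended_probability(0.9, {
    "stakeholders_engaged": 0.4,
    "meeting_recency": 1.0,
    "mutual_action_plan": 0.0,
    "milestones_complete": 0.5,
})
print(round(p, 3))  # the 90% call is pulled down toward the 0.47 deal score
```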

2. Track Leading Indicators, Not Lagging Ones

Most forecast models over-index on close dates and stage progression. Both are lagging indicators. By the time a deal slips its close date, the miss was baked in weeks earlier.

Leading indicators that predict outcomes:

- Stakeholder engagement depth. Deals with 3+ stakeholders engaged close at 68% versus 23% for single-threaded deals (Forecastio, 2024).
- Activity recency. Deals with no activity in the past 14 days have a dramatically lower close probability. This is your early warning system.
- Stage velocity. Is the deal moving through stages at or above the historical average? Deals that slow down rarely speed back up.
- Champion activity level. Is the internal champion responding, scheduling follow-ups, and sharing materials internally? Or have they gone quiet?

If you shift 30% of your forecast inputs from lagging to leading indicators, accuracy improves immediately. Not because the model is smarter. Because you are looking at what predicts the outcome instead of what describes what already happened.
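An early-warning pass over these indicators can be sketched as follows; the field names are assumptions, and the thresholds come from the list above:

```python
from datetime import date

def leading_indicator_flags(deal: dict, today: date) -> list[str]:
    # Illustrative early-warning checks; field names are assumptions.
    flags = []
    if (today - deal["last_activity"]).days > 14:
        flags.append("no activity in 14+ days")
    if deal["stakeholders_engaged"] < 3:
        flags.append("single- or under-threaded")
    if deal["days_in_stage"] > deal["stage_avg_days"]:
        flags.append("slower than historical stage velocity")
    return flags

deal = {
    "last_activity": date(2025, 3, 1),
    "stakeholders_engaged": 1,
    "days_in_stage": 40,
    "stage_avg_days": 25,
}
print(leading_indicator_flags(deal, today=date(2025, 3, 20)))  # all three flags fire
```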

3. Implement Weekly Pipeline Hygiene

The 87% vs. 52% accuracy gap between weekly and irregular tracking (Digital Bloom, 2025) is the most compelling data point in all of revenue operations. It is also the easiest to act on.

Weekly pipeline hygiene means three things:

1. Every deal has a validated next step. If there is no next meeting or action scheduled, the deal is stale. Flag it.

2. Stage criteria are enforced. A deal does not move to Stage 3 because the rep feels good about it. It moves because it has met defined exit criteria for Stage 2.

3. Stale deals are confronted, not ignored. A deal that has been in the same stage for twice the historical average needs a decision. Recommit with a specific recovery plan or move it out.

This is not about more process. It is about replacing the fiction in your pipeline with facts. 76% of organizations say less than half their CRM data is accurate (Validity, 2025). Weekly hygiene is how you fix that, one pipeline review at a time.
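A hygiene sweep applying the next-step and stale-deal rules might look like this sketch (field names are illustrative), separating validated pipeline from the fiction:

```python
def hygiene_sweep(pipeline: list[dict]) -> tuple[float, float]:
    # Split pipeline value into validated vs. stale using two rules:
    # no next step, or sitting in stage for twice the historical average.
    clean = stale = 0.0
    for deal in pipeline:
        is_stale = (not deal["has_next_step"]
                    or deal["days_in_stage"] > 2 * deal["stage_avg_days"])
        if is_stale:
            stale += deal["amount"]
        else:
            clean += deal["amount"]
    return clean, stale

pipeline = [
    {"amount": 200_000, "has_next_step": True,  "days_in_stage": 20, "stage_avg_days": 25},
    {"amount": 150_000, "has_next_step": False, "days_in_stage": 10, "stage_avg_days": 25},
    {"amount": 300_000, "has_next_step": True,  "days_in_stage": 60, "stage_avg_days": 25},
]
clean, stale = hygiene_sweep(pipeline)
print(clean, stale)  # 200000.0 450000.0: most of this "pipeline" is noise
```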

4. Build Probabilistic Forecasts, Not Point Estimates

A forecast that says "we will close $2.3M this quarter" is a point estimate. It is guaranteed to be wrong. The question is how wrong.

A probabilistic forecast gives you a range: "$1.9M at the floor, $2.3M at commit, $2.7M at best case." Each number is backed by a probability distribution derived from deal-level data.

The commit number should close 80% of the time. If your commit number only lands 50% of the time, it is not a commit. It is a hope.

To build this, you need historical close rates by stage, weighted by deal characteristics. A $200K deal in Stage 3 with two stakeholders and a 90-day age has a different close probability than a $200K deal in Stage 3 with five stakeholders and a 30-day age. Your model should reflect that.
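One way to derive the range is simulation. This sketch treats each deal as an independent draw at its modeled probability (a simplifying assumption) and reads floor, commit, and best case off the outcome distribution, placing commit at the 20th percentile so it clears in roughly 80% of runs:

```python
import random

def probabilistic_forecast(deals, trials=10_000, seed=42):
    """Monte Carlo over (amount, close_probability) pairs.

    Returns (floor, commit, best): the 5th, 20th, and 90th percentiles
    of simulated quarter outcomes. Commit at the 20th percentile clears
    in ~80% of simulations, matching the bar set above.
    """
    rng = random.Random(seed)
    outcomes = sorted(
        sum(amount for amount, p in deals if rng.random() < p)
        for _ in range(trials)
    )
    def pct(q):
        return outcomes[int(q * trials)]
    return pct(0.05), pct(0.20), pct(0.90)

# Hypothetical pipeline: (deal amount, modeled close probability)
deals = [(200_000, 0.8), (200_000, 0.4), (500_000, 0.6),
         (150_000, 0.9), (300_000, 0.3)]
floor, commit, best = probabilistic_forecast(deals)
print(floor, commit, best)  # floor <= commit <= best
```

In a production model, the per-deal probabilities themselves would come from historical close rates by stage, adjusted for the deal characteristics described above.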

5. Close the Feedback Loop

The difference between a forecast that improves and one that does not is the post-quarter review. After every quarter, answer three questions:

1. Which deals did we forecast to close that did not? Why? Was it a data problem, a judgment problem, or a timing problem?

2. Which deals closed that we did not forecast? Why? Are there patterns the model is missing?

3. What would we change in the methodology based on this quarter's results?

Most companies skip step three. They note the miss, adjust next quarter's targets, and run the same process. That is how you stay at 70% accuracy for five years.

The companies at 90%+ treat every quarter as a calibration event. They adjust stage probabilities, update velocity benchmarks, and refine the deal scoring model. The forecast gets better because they build a system that learns.

Common Forecast Accuracy Mistakes

Five patterns I see repeatedly:

Treating the forecast as a target, not a prediction. When the forecast becomes a political number that reps negotiate rather than a data-driven prediction, accuracy disappears. The forecast should describe reality, not aspirations.

Ignoring pipeline age. A $5M pipeline with 40% of deals stale for 30+ days is not a $5M pipeline. It is a $3M pipeline with $2M of noise. Until you account for deal slippage, your forecast will over-predict.

Forecasting once per month. Monthly forecasts miss the week-to-week shifts that determine the quarter. By the time you catch a problem in the second monthly review, you have lost four weeks of recovery time.

Using a single model. No single forecasting method captures all the dynamics of a B2B pipeline. The best teams blend stage-weighted probability, velocity analysis, and deal-level scoring. For more on this, see the sales forecasting complete guide.

Not segmenting. A blended forecast hides segment-level problems. Always break accuracy down by deal size, source, and team.

Forecast Accuracy and Revenue Predictability

Forecast accuracy is the input. Revenue predictability is the output.

Revenue predictability means the company can commit a number to the board and hit it. It means the CFO can plan headcount, marketing spend, and cash flow without building in a 20% buffer for forecast error. It means the CEO can make commitments to investors that hold.

For companies between $100M and $1B ARR, the shift from 70% to 90% forecast accuracy is often the difference between a 3x and a 5x revenue multiple at exit. Acquirers and investors price predictability. A company that consistently hits within 5% of forecast commands a premium over one that swings 15-20% quarter to quarter, even if the total revenue is the same.

The path from where most companies are (70-80% accuracy) to where they should be (85-95%) is not a technology problem. 48% of companies now have a RevOps function (Revenue Operations Alliance, 2024). The tools exist. The data exists. What is missing is the discipline to use them weekly, the willingness to confront stale pipeline, and a forecast methodology that treats every quarter as a calibration event.

Frequently Asked Questions

How do you calculate forecast accuracy?

Forecast Accuracy = (1 - |Forecast - Actual| / Forecast) x 100. For example, if you forecast $1M and close $900K, accuracy is 90%. Track this quarterly and by segment.

What is a good forecast accuracy rate?

Only 7% of companies achieve 90%+ accuracy (Gartner). Median B2B SaaS companies land between 70-80%. Companies using prescriptive analytics with weekly reviews consistently hit 85-95%.

What causes forecast inaccuracy?

Three root causes: reliance on rep judgment instead of data, stale pipeline (deals sitting without activity), and measuring lagging indicators like close dates instead of leading indicators like deal velocity and stakeholder engagement.


Pete Furseth
Sales & Marketing Leader, ORM Technologies
Pete has built custom revenue forecast models for B2B SaaS companies for over a decade.
