
Forecasting advertising results: how we calculate monthly and yearly growth

Forecasting advertising performance is one of the most misunderstood parts of digital marketing. Many businesses expect exact numbers. Others rely purely on intuition or best guesses. In reality, accurate forecasting sits somewhere in between: a structured, data-driven process built on assumptions, historical performance, and controlled uncertainty.

This is how we approach forecasting advertising results for the next month and the next year, and why realistic projections are more valuable than optimistic promises.

Every forecast starts with clean historical data

The foundation of any ad performance forecast is reliable historical data. Our baseline is last year, using the latest available actuals. But a complete picture of a brand’s lifetime helps identify what’s happening behind the numbers and spot correlations across the same timeframes over multiple years.

We typically focus on spend, revenue, and ROAS; CPM, CPC, and CPA trends; conversion rate and AOV stability; and seasonality effects around key calendar events.

A common mistake is treating absolute and relative measures separately. Absolute values show overall scale without revealing proportions. Relative values show ratios and percentages without reflecting actual magnitude. Meaningful interpretation always requires considering them together, seeing the percentage change and the dollar change side by side.
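As a toy illustration of reading both measures together (all figures here are hypothetical), a helper can report the dollar change and the percentage change in one line:

```python
def describe_change(label, last_year, this_year):
    """Return both the absolute (dollar) change and the relative (%) change."""
    abs_change = this_year - last_year
    rel_change = abs_change / last_year * 100
    return f"{label}: {abs_change:+,.0f} ({rel_change:+.1f}%)"

# A 50% jump on a small line item vs. a 5% jump on a large one:
print(describe_change("New channel revenue", 10_000, 15_000))
print(describe_change("Core channel revenue", 1_000_000, 1_050_000))
```

The first line looks dramatic in percentage terms; the second moves ten times more money. Neither number alone tells the story.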

Separating what we can control from what we can’t

A key mistake in forecasting is treating all variables as equal. We split inputs into two groups.

On the controllable side: budget allocation, campaign structure, funnel optimization, and landing page performance. On the uncontrollable side: auction pressure and competition, platform algorithm changes, market demand shifts, and broader economic conditions.

Forecasts are built primarily around controllable variables, while uncontrollable ones are addressed through scenario planning. We normalize the data by excluding anomalies like stock issues, website downtime, tracking failures, or one-time viral creatives. The goal is to understand what “normal performance” actually looks like before projecting forward.
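A minimal sketch of that normalization step, with made-up daily figures and anomaly flags, might look like this:

```python
# Hypothetical daily revenue; two days are flagged as anomalies and excluded
# before computing the "normal performance" baseline.
daily_revenue = {
    "2024-06-01": 4_200,
    "2024-06-02": 4_100,
    "2024-06-03": 300,     # website downtime
    "2024-06-04": 4_350,
    "2024-06-05": 19_000,  # one-off viral creative
}
anomalies = {"2024-06-03", "2024-06-05"}

clean = [rev for day, rev in daily_revenue.items() if day not in anomalies]
baseline = sum(clean) / len(clean)
print(f"Normalized daily baseline: ${baseline:,.2f}")
```

In practice the anomaly flags come from operational logs and human review, not a hard-coded set, but the principle is the same: project forward from the cleaned series, not the raw one.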

Scenario-based forecasting, not a single number

Instead of producing one “promised” result, we build three scenarios.

Conservative. Minimal optimization, higher CPMs, slower scaling. This is what performance looks like if conditions work against us.

Realistic. Expected improvements based on historical patterns and planned optimizations. This is the scenario we build strategy around.

Optimistic. Strong creative performance and favorable auction dynamics. This is what’s possible when things align.

Each scenario includes projected spend, revenue, ROAS, and acquisition costs. This approach helps stakeholders understand risk, not just upside. Developing several versions also allows us to anticipate how conditions may evolve and examine key metrics from multiple angles.
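The three-scenario structure can be sketched as a set of multipliers applied to a normalized baseline. The multipliers below are illustrative assumptions, not our actual model:

```python
# Baseline from normalized historical data (hypothetical figures).
baseline = {"spend": 50_000, "roas": 3.0, "cpa": 40.0}

# (spend multiplier, ROAS multiplier, CPA multiplier) per scenario -- assumed values.
scenarios = {
    "conservative": (0.9, 0.85, 1.15),  # higher CPMs, slower scaling
    "realistic":    (1.1, 1.00, 1.00),  # planned optimizations hold efficiency
    "optimistic":   (1.3, 1.10, 0.90),  # strong creatives, favorable auctions
}

results = {}
for name, (spend_x, roas_x, cpa_x) in scenarios.items():
    spend = baseline["spend"] * spend_x
    roas = baseline["roas"] * roas_x
    results[name] = {
        "spend": spend,
        "roas": roas,
        "revenue": spend * roas,
        "cpa": baseline["cpa"] * cpa_x,
    }
    print(f"{name:>12}: spend ${spend:,.0f}, "
          f"revenue ${spend * roas:,.0f}, ROAS {roas:.2f}")
```

Presenting all three rows side by side is what lets stakeholders see the downside risk alongside the upside.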

Monthly forecasts: short-term precision

Monthly forecasts focus on execution. They answer questions like: how much can we scale without breaking efficiency? How many new creatives are required? What performance drop is acceptable during testing phases?

Because short-term data is more predictable, monthly forecasts are regularly updated using moving averages. We continually review the forecast for the upcoming month and adjust based on current conditions to keep it aligned with quarterly and annual KPIs.
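A moving-average update can be as simple as projecting next month from the mean of the most recent actuals. The window size and revenue series below are hypothetical:

```python
def moving_average_forecast(history, window=3):
    """Project next month as the mean of the last `window` monthly actuals."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly revenue actuals; re-run as each new month closes.
monthly_revenue = [92_000, 98_000, 105_000, 101_000, 110_000]
forecast = moving_average_forecast(monthly_revenue)
print(f"Next month forecast: ${forecast:,.0f}")
```

Each month, the newest actual enters the window and the oldest drops out, so the projection tracks current conditions instead of a stale plan.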

This is where forecasting becomes a living tool rather than a static document. The numbers move, and the forecast moves with them.

Yearly forecasts: direction, not promises

Annual forecasts are strategic, not tactical. Their purpose is to set realistic growth targets, plan budgets and resources, and align marketing efforts with broader business goals.

We account for seasonality, diminishing returns at scale, and expected efficiency decay. To stay sharp, we typically add roughly 5% stretch to the forecast. If targets are too conservative and easily beaten, it creates complacency. A modest stretch goal keeps the team pushing without setting anyone up for failure.
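A stripped-down version of the yearly arithmetic, with illustrative (assumed) growth and decay rates, shows how efficiency decay and the ~5% stretch enter the target:

```python
# All rates are hypothetical assumptions for illustration.
monthly_growth = 0.04      # planned month-over-month revenue growth
efficiency_decay = 0.005   # expected monthly efficiency erosion at scale
start_revenue = 100_000    # normalized monthly baseline

revenue = start_revenue
annual_forecast = 0.0
for month in range(12):
    annual_forecast += revenue
    # Net growth rate is the plan minus the decay we expect at scale.
    revenue *= 1 + monthly_growth - efficiency_decay

stretch_target = annual_forecast * 1.05  # ~5% stretch on top of the forecast
print(f"Annual forecast: ${annual_forecast:,.0f}")
print(f"Stretch target:  ${stretch_target:,.0f}")
```

The forecast is what the data supports; the stretch target is what the team aims at.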

Why most ad forecasts fail (and how we avoid it)

Most forecasts fail because they ignore creative fatigue, assume linear scaling, or rely solely on platform-reported metrics. We avoid this by updating forecasts continuously, comparing projections against actuals, using blended data across platform reporting, analytics, and CRM, and factoring in how external conditions affect customer behavior.

A scaling strategy also means more than just allocating extra budget to what’s already working. It usually involves launching new campaigns, dedicating more spend to testing, and opening new platforms when there’s a genuine opportunity to create and capture demand.

Forecasting is a decision-making tool, not a crystal ball

Forecasting is not about predicting the future perfectly. It’s about making better decisions today with the information available. A good forecast creates clarity, sets expectations, and enables controlled growth, which is exactly what performance marketing should deliver.

The brands that scale sustainably aren’t the ones with the most aggressive projections. They’re the ones whose projections are grounded in real data, honest assumptions, and the discipline to update them when conditions change.