The Spring 2009 Foresight feature on assessing forecastability is a must-read for anyone who gets yelled at for having lousy forecasts. (It should also be read by those who do the yelling, but you'd have to be living in Neverland to believe that will ever happen.) As I promised in yesterday's guest blogging by Len Tashman, Editor of Foresight, here are a few comments on this topic.
Why is it that some things can be forecast with relatively high accuracy (e.g. the time of sunrise every morning for years into the future), while other things cannot be forecast with much accuracy at all, no matter how sophisticated our approach (e.g. calling heads or tails in the tossing of a fair coin)? Begin by thinking of behavior as having a structured, or rule-guided, or deterministic component, along with a random component. To the extent that we can understand and model the deterministic component (and assuming we have modeled it correctly and the rule guiding the behavior doesn't change over time), the accuracy of our forecasts is limited only by the degree of randomness.
Coin tossing gives a perfect illustration of this. With a fair coin, the behavior is completely random. Over the long term, our forecast (Heads or Tails) will be correct 50% of the time and there is nothing we can do to improve on it. Our accuracy is limited by the nature of the behavior: it is entirely random.
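If you want to see this limit in action, a quick simulation makes the point. The sketch below (my own illustration, not from the Foresight feature) always calls Heads against a fair coin; no cleverer strategy can do better in the long run, and the hit rate settles near 50%.

```python
import random

def coin_forecast_accuracy(n_tosses, seed=0):
    """Simulate forecasting a fair coin by always calling Heads.

    Because each toss is independent and 50/50, the long-run hit
    rate converges to 0.5 no matter what strategy we use.
    """
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n_tosses) if rng.random() < 0.5)
    return heads / n_tosses

print(coin_forecast_accuracy(100_000))  # hovers near 0.5
```

Swap in any forecasting rule you like for the fixed "Heads" call; with a fair coin, the result is the same.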
While suffering from many imperfections (as Peter Catt rightly points out in his article), the Coefficient of Variation (CV) is still a pretty good quick-and-dirty indicator of forecastability in typical business forecasting situations. Compute CV based on sales for each entity you are forecasting over some time frame, such as the past year. Thus, if an item sells an average of 100 units per week, with a standard deviation of 50, then CV = standard deviation / mean = .5 (or 50%).
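The arithmetic is simple enough to do on a napkin, but here is a minimal sketch of the CV calculation using the same illustrative numbers as above (mean of 100 units per week, standard deviation of 50):

```python
from statistics import mean, stdev

def coefficient_of_variation(sales):
    """CV = standard deviation / mean of the sales history."""
    return stdev(sales) / mean(sales)

# Illustrative weekly sales with mean 100 and standard deviation 50
weekly_sales = [50, 100, 150]
print(coefficient_of_variation(weekly_sales))  # 0.5, i.e. 50%
```

In practice you would feed in a year or so of weekly sales per item (or item/DC combination) and compute a CV for each.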
It is useful to create a scatterplot relating CV to the forecast accuracy you achieve. In this scatterplot of data from a consumer goods manufacturer, there are roughly 5000 points representing 500 items sold through 10 DCs. Forecast accuracy (0 to 100%) is along the vertical axis; CV (0 to 160%, truncated) is along the horizontal axis. As you would expect, with lower sales volatility (CV near 0), the forecast was generally much more accurate than for item/DC combinations with high volatility.
The line through this scatterplot is NOT a best fit regression line. It can be called the "Forecast Value Added Line" and shows the approximate accuracy you would have achieved using a simple moving average as your forecast model for each value of CV. The way to interpret the diagram is that for item/DC combinations falling above the FVA Line, this organization's forecasting process was "adding value" by producing forecasts more accurate than would have been achieved by a moving average. Overall, this organization's forecasting process added 4 percentage points of value, achieving 68% accuracy versus 64% for the moving average. The plot also identifies plenty of instances where the process made the forecast worse (those points falling below the line), and these would merit further investigation.
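The FVA comparison itself is easy to automate. The sketch below is my own illustration, with hypothetical data and a hypothetical window size; it measures accuracy as 100% minus mean absolute percentage error (one common convention, not necessarily the one used in the scatterplot above) and reports the process forecast's accuracy minus the moving-average benchmark's.

```python
from statistics import mean

def moving_average_forecasts(history, window=3):
    """Naive benchmark: forecast each period as the mean of the
    preceding `window` actuals (window choice is illustrative)."""
    return [mean(history[t - window:t]) for t in range(window, len(history))]

def accuracy(forecasts, actuals):
    """Accuracy as 100% minus MAPE, floored at zero."""
    ape = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return max(0.0, 1 - mean(ape)) * 100

def forecast_value_added(process_forecasts, history, window=3):
    """FVA = process accuracy minus moving-average benchmark accuracy,
    in percentage points; positive means the process added value."""
    actuals = history[window:]
    benchmark = accuracy(moving_average_forecasts(history, window), actuals)
    process = accuracy(process_forecasts, actuals)
    return process - benchmark

# Hypothetical sales history and the process's forecasts for the
# last three periods (the ones the benchmark can also forecast):
history = [100, 110, 90, 105, 95, 100]
print(forecast_value_added([102, 101, 98], history))  # positive: value added
```

Run per item/DC combination, the sign of the result tells you which points sit above or below the FVA Line.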
Such a scatterplot (and use of CV) doesn't answer the more difficult question: how accurate can we be? But I'm pretty convinced that the surest way to get better forecasts is to reduce the volatility of the behavior you are trying to forecast. While we may not have any control over the volatility of our weather, we actually do have a lot of control over the volatility of demand for our products and services. More about this another time...