The Argument for Max(Forecast,Actual)

There is a long-running debate among forecasting professionals about whether to use Forecast or Actual in the denominator of percentage error calculations. The Winter 2009 issue of Foresight had an article by Kesten Green and Len Tashman, reporting on a survey (… of the International Institute of Forecasters discussion list and Foresight subscribers) asking:

What should the denominator be when calculating percentage error?

This non-scientific survey's responses were 56% for using Actual, 15% for using Forecast, and 29% for something else, such as the average of Forecast and Actual, an average of the Actuals in the series, or the absolute average of the period-over-period differences in the data (yielding a Mean Absolute Scaled Error, or MASE). One lone respondent favored using the maximum of Forecast and Actual.

Fast forward to the new Summer 2010 issue of Foresight (p.46):

Letter to the Editor

I have just read with interest Foresight’s article “Percentage Error: What Denominator” (Winter 2009, p.36). I thought I’d send you a note regarding the response you received to that survey from one person who preferred to use in the denominator the larger value of forecast and actuals.

I also have a preference for this metric in my environment, even though I realize it may not be academically correct. We have managed to gain an understanding at senior executive level that forecast accuracy improvement will drive significant competitive advantage.

I have found over many years in different companies that there is a very different executive reaction to a reported result of 60% forecast accuracy vs. a 40% forecast error, even though they are equivalent! Reporting the perceived high error has a tendency to generate knee-jerk reactions and drive the creation of unrealistic goals. Reporting the equivalent accuracy metric tends to cause executives to ask the question “What can we do to improve this?” I know that this is not logical, but it is something I have observed time and again, and so I now always recommend reporting forecast accuracy to a wider audience.

But if you are going to use forecast accuracy as a metric then, if you have specified the denominator to be either actuals or forecast, you will always have some errors that are greater than 100%. When converting these large errors to accuracy (accuracy being 1 – error) then you end up with a negative accuracy result; this is the type of result that always seems to cause misunderstanding with management teams. A forecast accuracy result of minus 156% just does not seem to be intuitively understandable.

When you use the maximum of forecast or actuals as the denominator, the forecast accuracy metric is constrained between 0 and 100%, making it conceptually easier for a wider audience, including the executive team, to understand.

If the purpose of the metric is to identify areas of opportunity and drive improvement actions, using the larger value as the denominator and reporting accuracy as opposed to error enables the proper diagnostic activities to take place and reduces disruption caused by misinterpretation of the “correct” error metric.

To summarize, I use the larger-value methodology for ease of communication to key personnel who are not familiar with the intricacies of the forecasting process and its measurement.

David Hawitt
SIOP Development Manager for a global technology company

I have long favored “Forecast Accuracy” as a metric for management reporting, defining it as:

FA = {1 − [ ∑ |F − A| / ∑ Max(F, A) ] } × 100

where the summation is over n observations of forecasts and actuals. FA is defined to be 100% when both forecast and actual are zero. Here is a sample of the calculation over 6 weeks for two products, X and Y:
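The calculation can be sketched in a few lines of Python. The product data below is purely hypothetical, for illustration only; it is not the X and Y data from the sample table.

```python
# A minimal sketch of the Forecast Accuracy (FA) metric defined above:
# FA = (1 - sum|F - A| / sum max(F, A)) * 100,
# with FA defined as 100% when all forecasts and actuals are zero.

def forecast_accuracy(forecasts, actuals):
    abs_err = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    denom = sum(max(f, a) for f, a in zip(forecasts, actuals))
    if denom == 0:  # every forecast and actual is zero
        return 100.0
    return (1 - abs_err / denom) * 100

# Hypothetical 6-week series for a single product:
forecast = [100, 110, 120, 100, 90, 105]
actual   = [ 90, 115, 100, 130, 95, 105]
print(round(forecast_accuracy(forecast, actual), 1))  # -> 89.5
```

Because every |F − A| term is no larger than the corresponding Max(F, A) term, the ratio stays between 0 and 1, so FA stays between 0 and 100%.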

Like all forecasting performance metrics, Forecast Accuracy calculated with Max(F,A) in the denominator has its flaws, and it certainly has an army of detractors. Yet the detractors miss the point that David so nicely makes. We recognize that Max(F,A) is not “academically correct.” It lacks properties that would make it useful in other calculations. There is virtually nothing a self-respecting mathematician would find of value in it, except that it forces the Forecast Accuracy metric to always be scaled between 0 and 100%, thereby making it an excellent choice for reporting performance to management! If nothing else, it helps you avoid wasting time explaining the weird and non-intuitive values you can get with the usual calculation of performance metrics.
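To see the difference in behavior, compare the two denominators on a single badly missed period. The numbers here are hypothetical, chosen only to show how an Actual-based accuracy can go negative while the Max-based version cannot.

```python
# Accuracy = (1 - sum of errors / sum of denominator terms) * 100,
# computed with two different denominator choices.

def accuracy(forecasts, actuals, denom):
    errs = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    total = sum(denom(f, a) for f, a in zip(forecasts, actuals))
    return (1 - errs / total) * 100

forecast = [500]  # hypothetical: a big over-forecast
actual = [150]

acc_actual = accuracy(forecast, actual, lambda f, a: a)       # 1 - 350/150 -> about -133.3%
acc_max = accuracy(forecast, actual, lambda f, a: max(f, a))  # 1 - 350/500 -> 30.0%
```

The Actual-based result is exactly the kind of “minus 156%” figure David describes having to explain; the Max(F,A) result stays within the intuitive 0–100% range.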