Tricks aren't just for kids (or Louisiana senators or New York governors for that matter). Tricks are the lifeblood for many a forecasting software salesperson. Why admit that forecasting is difficult, that most things can't be forecast as accurately as we would like, or that your software has the statistical capabilities of a turnip? It is so much easier to sell forecasting software with false promise and deception.
Trick #1: Show how you fit, not how you forecast
Most people realize how difficult it is to generate accurate forecasts. But have you ever realized how easy it is to fit a model to historical behavior? Think about it: if you have two historical data points, you can get a perfect fit with a line. If you have three points, you can get a perfect fit with a quadratic. In general, as you increase the number of data points in your time-series history, you can always find a polynomial of high enough degree that fits the history perfectly. Unfortunately, fitting history is not the business problem. The problem is generating a decent forecast.
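You can see this for yourself in a few lines. The sketch below uses made-up demand numbers (purely illustrative) and numpy: a degree-5 polynomial passes exactly through six history points, then produces an absurd forecast one period out.

```python
import numpy as np

# Hypothetical monthly demand history: 6 observations (made-up numbers).
history = np.array([100.0, 98.0, 105.0, 102.0, 110.0, 107.0])
t = np.arange(len(history))

# A degree-5 polynomial passes exactly through all 6 points: "perfect" fit.
coeffs = np.polyfit(t, history, deg=len(history) - 1)
fit = np.polyval(coeffs, t)
print("max fit error:", np.max(np.abs(fit - history)))  # essentially zero

# Extrapolate one period ahead, and the perfectly fit model goes haywire --
# here it actually predicts negative demand.
forecast = np.polyval(coeffs, len(history))
print("period-7 forecast:", forecast)
```

Perfect fit, useless forecast: exactly the gap the sales demo hopes you won't notice.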
Forecasting vendors love to demonstrate how well their models fit your history. Mean Absolute Percent Error (MAPE) of the historical fit is always fabulous, rarely more than a few percentage points. The unspoken (and sometimes even spoken) implication is that their software can model your history and, therefore, solve your forecasting problem. This is the implication in the online Microsoft demo that purports to show how to use Excel to generate forecasts. (If you haven't viewed this 60-second crime against human decency already, please take a minute to do so.)
Forecasting software needs to be evaluated by how well it forecasts, not by how well it fits your history. Proper assessment of forecasting performance tells you two important things. First, it lets you distinguish competing software packages: which ones can forecast worth a darn, and which ones can't. Second, it can give you an indication of what kind of accuracy is "reasonable to expect" when you implement a new package. Expecting "fit to history" to be a reliable indicator of forecast accuracy is a sure path to disappointment.
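A proper assessment means holding out recent history and scoring forecasts against it. Here is a minimal sketch of that idea, again with made-up numbers: an overfit polynomial posts a fabulous fit MAPE but collapses on the holdout, while even a humble naive forecast (repeat the last observed value) beats it where it counts.

```python
import numpy as np

# Hypothetical monthly history (made-up numbers); reserve the last 4
# observations as a holdout and judge models on those, not on the fit.
history = np.array([112., 118., 109., 121., 125., 117., 130., 126.,
                    134., 129., 140., 138.])
train, holdout = history[:-4], history[-4:]
t_train = np.arange(len(train))
t_hold = np.arange(len(train), len(history))

def mape(actual, predicted):
    """Mean Absolute Percent Error, in percent."""
    return 100 * np.mean(np.abs((actual - predicted) / actual))

# Overfit model: degree-7 polynomial through the 8 training points.
coeffs = np.polyfit(t_train, train, deg=len(train) - 1)
fit_mape = mape(train, np.polyval(coeffs, t_train))       # "fabulous"
holdout_mape = mape(holdout, np.polyval(coeffs, t_hold))  # disastrous

# Humble benchmark: naive forecast repeating the last training value.
naive_mape = mape(holdout, np.repeat(train[-1], len(holdout)))

print(f"fit MAPE: {fit_mape:.2f}%")
print(f"holdout MAPE (overfit model): {holdout_mape:.0f}%")
print(f"holdout MAPE (naive): {naive_mape:.1f}%")
```

When you evaluate vendors, this is the comparison to demand: accuracy on data the model never saw, benchmarked against a naive method.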