In the last post I argued that we don’t have a sure way to measure true (i.e. “unconstrained”) demand. While demand is commonly defined as “what the customer wants, and when they want it,” it is actually a nebulous concept. For a manufacturer, what a customer orders is not the same as true demand (for various reasons described in the prior blog post), and neither is what actually ships. At a retailer, what is actually sold off the shelves is not the same as true demand, either. For example, the customer may not be able to find what they want in the store (due to out-of-stocks or poor merchandise presentation), so there is true demand but no recorded sale. Determining true demand for a service can be equally vexing. I may have a taste for a Royale with Cheese at McDonald’s, but go to Wendy’s instead if the drive-thru line is too long. Or I may call the cable company to complain about my TV reception, only to hang up in frustration while trying to wade through their voice menu system.
While we may know true demand under certain special circumstances, we don’t have a general operational definition that will work in all situations. This means two things:
• We will be unable to construct a history of true demand to feed into our statistical forecasting models (except under certain special circumstances)
• We will be unable to assess the accuracy of our forecast of true demand (i.e. our “unconstrained forecast”) (except under certain special circumstances)
Conceptually these are important points. In most circumstances we don’t know what true demand really is, so we don’t know for sure how accurately we are forecasting it. I argued, therefore, that forecasting performance should be evaluated against the “constrained” forecast. The constrained forecast represents our best guess at what is “really going to happen,” i.e. actual shipments or sales or services provided. We can measure what actually happens.
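To make this concrete, here is a minimal sketch of what evaluating against the constrained forecast looks like in practice: error is computed against measurable actuals (shipments, sales, or services provided), not against unknowable true demand. The numbers and the choice of MAPE as the error metric are my own illustration, not anything prescribed in the post.

```python
def mape(forecast, actual):
    """Mean absolute percentage error, measured against actuals."""
    errors = [abs(f - a) / a for f, a in zip(forecast, actual) if a != 0]
    return 100 * sum(errors) / len(errors)

# Hypothetical data: the constrained forecast is our best guess at what
# will actually ship; actual shipments are measurable after the fact.
constrained_forecast = [100, 120, 95, 110]
actual_shipments = [90, 130, 100, 105]

print(f"MAPE vs. actuals: {mape(constrained_forecast, actual_shipments):.1f}%")
```

The key point the code makes is that both inputs to the error calculation are observable quantities; nowhere does “true demand” appear, because we have no operational way to record it.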
As a practical matter, maybe all of this isn’t so important. While we can’t know exactly what true demand really is under most circumstances, we can often get close enough to make the concept useful in forecasting. It is true that demand ≠ orders, yet if an organization does a good job of filling orders (say 98%+), then “orders” and “true demand” are virtually the same (within a few percentage points). When we forecast, our errors are often 25%, 50%, or more. The fact that the demand history upon which we build our forecasts is not perfect (but may be off by a few percentage points from true demand) is inconsequential compared to the magnitude of the forecast error. (Even if we were able to capture a perfect history of true demand, it might only make our forecasts a few percentage points better at best, and that isn’t going to rock anyone’s world.)
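The arithmetic behind this argument can be sketched in a few lines. All of the numbers below are hypothetical (the 98% fill rate and 30% forecast error are just stand-ins for the ranges mentioned above), and the true-demand figure is assumed only for illustration, since in practice it is unknowable.

```python
# Hypothetical illustration: how big is the gap between recorded order
# history and true demand, compared to typical forecast error?

true_demand = 1000            # assumed for illustration; unknowable in practice
fill_rate = 0.98              # organization fills 98%+ of orders
orders = true_demand * fill_rate  # recorded history understates demand slightly

history_gap = abs(true_demand - orders) / true_demand  # ~2% distortion
forecast_error = 0.30                                  # errors often 25-50%

print(f"History gap vs. true demand: {history_gap:.0%}")
print(f"Typical forecast error:      {forecast_error:.0%}")
```

Under these assumptions the imperfection in the history (a couple of percentage points) is an order of magnitude smaller than the forecast error itself, which is the sense in which a perfect record of true demand wouldn’t rock anyone’s world.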
There is probably more to say on this topic, so expect a Part 3.