M4 Forecasting Competition Launched

The field of forecasting has progressed significantly over the last half-century. These advances have been driven mainly by the development of new methods and by the widespread availability of powerful computers, which enable the manipulation of large amounts of useful data and render computationally intensive methods affordable.
 
In addition, cognitive psychologists have studied how forecasts are made judgmentally and have suggested ways of recognizing the biases that affect our predictions. These developments have produced a comprehensive collection of tools and methods that is readily available to decision makers. Despite such advances, however, the role and value of forecasting are regularly under scrutiny, and its contribution to societal welfare has often been challenged.
 
The latest financial crisis, for example, raised many questions with regard to the true underlying value of forecasting as a field. Consequently, it may be argued that the biggest challenge currently facing the field of forecasting is not the introduction of additional methods or further experiments on judgmental biases.
 
Instead, we must objectively evaluate the available empirical evidence in forecasting and cognitive psychology in order to provide decision and policy makers with evidence-based information. During the last few decades there has been a rising trend, originating in medicine, towards 'evidence based knowledge'. Arguably, the time has come to apply such evidence based knowledge to the field of forecasting as well, and this is the purpose of introducing the M4 forecasting competition.

Building on the success of the previous empirical competitions (M, M2, M3), the purpose of the M4-Competition is to further study the accuracy/validity (and utility) of various forecasting methods. The experimental structure of the M4-Competition has been extended and enriched, relative to the previous competitions, in several significant ways. In particular:

— We have increased the number of time series utilized to 10,111 and grouped them into various categories (financial, economic, demographic, etc.) so that accuracy differences across categories can be identified (a per-category evaluation is sketched after this list).

— Given its growing importance, the Internet is introduced as a separate category. The same is true for intermittent/count series, which prevail in many industrial applications.

— We have increased the number of methods being compared, including new methods developed during the last decade.

— Considerable emphasis is being placed on uncertainty. This will be facilitated through the construction of confidence intervals for all series and methods, enabling the evaluation of forecast uncertainty. In addition, it will make it possible to determine whether the errors are independent, normally distributed, and of constant variance, and, if they are not, what the implications of violating these model assumptions are (the second sketch after this list illustrates such checks).

— The empirical utility of the forecasts will be analyzed separately and contrasted with forecast accuracy. Sales/demand forecasts, for example, will also be analyzed with regard to their stock-control implications (see the final sketch after this list).

— An evaluation of published judgmental forecasts will be made to study their accuracy and uncertainty. In addition, comparisons will be made between such forecasts and those produced by time series methods. These comparisons may reveal not only differences in accuracy and uncertainty but also the presence of systematic biases.

— A major objective of the M4-Competition is to ensure the objectivity and replicability of the results. For this reason, all the series considered in the competition will be posted on the Internet (http://m4competition.com) to allow for maximum exposure and as large a number of participants as possible.

— A final objective of the M4-Competition is to make the results and conclusions available through the Internet and to stimulate a debate on how to improve the value of forecasting by making it more useful and relevant to decision and policy makers.
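
To make the per-category accuracy comparison concrete, the following is a minimal sketch in Python. It uses sMAPE, the principal accuracy measure of the earlier M3-Competition; the series identifiers, values, and category labels are invented for illustration and are not part of the competition specification.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE, the headline accuracy measure of the M3-Competition."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                           / (np.abs(actual) + np.abs(forecast)))

# Hypothetical holdout actuals and forecasts, each series tagged with a
# category; the identifiers and numbers are invented for illustration.
series = {
    "N0001": ("financial",   [110, 115, 120], [108, 118, 119]),
    "N0002": ("demographic", [50, 52, 53],    [51, 51, 55]),
    "N0003": ("financial",   [200, 195, 210], [205, 190, 200]),
}

# Average the sMAPE within each category so that accuracy differences
# across categories can be identified.
by_category = {}
for category, actual, forecast in series.values():
    by_category.setdefault(category, []).append(smape(actual, forecast))

for category, scores in sorted(by_category.items()):
    print(f"{category:12s} mean sMAPE: {np.mean(scores):.2f}%")
```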
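
The uncertainty evaluation might proceed along the following lines. This is a sketch under simple assumptions, not the competition's official protocol: a normal-theory 95% interval is built around a naive (random-walk) forecast of a simulated series, and the in-sample errors are then checked for independence (lag-1 autocorrelation), normality (Shapiro-Wilk), and constant variance (an F-ratio across the two halves of the sample).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=120))   # a simulated series

# One-step-ahead errors of a naive (random-walk) forecast: y_hat[t] = y[t-1].
errors = y[1:] - y[:-1]
n = len(errors)
sigma = errors.std(ddof=1)

# 95% interval for the next observation, assuming i.i.d. normal errors.
point = y[-1]
print(f"forecast {point:.2f}, "
      f"95% interval [{point - 1.96 * sigma:.2f}, {point + 1.96 * sigma:.2f}]")

# Independence: lag-1 autocorrelation against its approximate 95% band.
r1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
print(f"lag-1 autocorrelation {r1:.3f} (95% band +/- {1.96 / np.sqrt(n):.3f})")

# Normality: Shapiro-Wilk test on the errors.
print(f"Shapiro-Wilk p-value: {stats.shapiro(errors).pvalue:.3f}")

# Constant variance: F-ratio between the two halves of the sample.
a, b = errors[: n // 2], errors[n // 2 :]
f_ratio = a.var(ddof=1) / b.var(ddof=1)
p = 2 * min(stats.f.cdf(f_ratio, len(a) - 1, len(b) - 1),
            stats.f.sf(f_ratio, len(a) - 1, len(b) - 1))
print(f"variance-ratio F = {f_ratio:.2f}, two-sided p = {p:.3f}")
```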
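
Finally, the link between forecast accuracy and empirical utility can be illustrated with a textbook stock-control calculation: under normally distributed, independent forecast errors, safety stock grows with the standard deviation of the errors over the replenishment lead time, so two methods with similar average accuracy can still imply different inventory investments. All figures below are hypothetical.

```python
import numpy as np
from scipy import stats

def safety_stock(error_std, lead_time, service_level):
    """Safety stock for a target cycle service level, assuming normally
    distributed forecast errors that are independent across periods."""
    z = stats.norm.ppf(service_level)
    return z * error_std * np.sqrt(lead_time)

# Hypothetical per-period forecast-error standard deviations of two methods
# with comparable average accuracy.
for method, error_std in [("method A", 12.0), ("method B", 15.0)]:
    stock = safety_stock(error_std, lead_time=4, service_level=0.95)
    print(f"{method}: safety stock for a 95% service level = {stock:.1f} units")
```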

The competition will commence on September 1st, 2010 and the entire exercise will be co-ordinated through a dedicated website: http://m4competition.com/.

It is our expectation that the M4-Competition will add significant value by making Evidence Based Forecasting (EBF) more useful and relevant.

The M4-Competition team: Spyros Makridakis (INSEAD), Vassilis Assimakopoulos (National Technical University of Athens), Konstantinos Nikolopoulos (University of Manchester), Aris Syntetos (University of Salford), Dimitrios Thomakos (University of Peloponnese).