In this issue we have “Do Phillips curves conditionally help to forecast inflation?”, “Tests of equal forecast accuracy for overlapping models”, “Advances in Forecasting Under Instability”, “Advances in forecast evaluation”, and more.
 Do Phillips curves conditionally help to forecast inflation?
Date: 
2011 
By: 
Michael Dotsey Shigeru Fujita Tom Stark 
URL: 

The Phillips curve has long been used as a foundation for forecasting inflation. Yet numerous studies indicate that over the past 20 years or so, inflation forecasts based on the Phillips curve generally do not predict inflation any better than a univariate forecasting model. In this paper, the authors take a deeper look at the forecasting ability of Phillips curves from both an unconditional and a conditional view. Namely, they use the tests developed by Giacomini and White (2006) to examine the forecasting ability of Phillips curve models. The authors’ main results indicate that forecasts from their Phillips curve models are unconditionally inferior to those of their univariate forecasting models, and sometimes the difference is statistically significant. However, the authors do find that conditioning on various measures of the state of the economy does at times improve the performance of the Phillips curve model in a statistically significant way. Of interest is that improvement is more likely to occur at longer forecasting horizons and over the sample period 1984Q1–2010Q3. Strikingly, the improvement is asymmetric: Phillips curve forecasts tend to be more accurate when the economy is weak and less accurate when the economy is strong. It therefore appears that forecasters should not fully discount the inflation forecasts of Phillips curve-based models when the economy is weak. 
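The Giacomini-White (2006) idea, at a glance: rather than asking whether one forecast is better on average, regress the loss differential on conditioning variables (here a constant and a state-of-the-economy indicator) and test whether the differential is predictably nonzero. A minimal sketch with synthetic data; the losses, the state indicator, and the magnitudes below are all invented for illustration and are not the paper's series:

```python
import numpy as np

# Stylized squared-error losses: the hypothetical Phillips-curve forecast
# has smaller errors when the "weak economy" indicator is on.
rng = np.random.default_rng(0)
n = 200
weak = (rng.random(n) < 0.3).astype(float)
scale = np.where(weak == 1.0, 0.7, 1.2)          # state-dependent error scale (assumed)
loss_pc = (scale * rng.standard_normal(n)) ** 2  # Phillips-curve model loss
loss_uni = rng.standard_normal(n) ** 2           # univariate benchmark loss
d = loss_pc - loss_uni                           # loss differential; negative favors the Phillips curve

# Conditional test: regress d_t on instruments h_t = (1, weak_t) and test that
# both coefficients are zero with a robust Wald statistic, asymptotically chi2(2).
H = np.column_stack([np.ones(n), weak])
beta = np.linalg.lstsq(H, d, rcond=None)[0]
u = d - H @ beta
Q = H.T @ H / n
S = (H * u[:, None]).T @ (H * u[:, None]) / n    # robust covariance of h_t * u_t
V = np.linalg.inv(Q) @ S @ np.linalg.inv(Q)      # sandwich covariance of beta
wald = n * beta @ np.linalg.solve(V, beta)
reject = wald > 5.99                             # 5% critical value of chi2(2)
```

A rejection says the relative accuracy of the two forecasts is predictable from the conditioning information, which is exactly the asymmetry the abstract describes.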

Keywords: 
Phillips curve ; Unemployment 
 Tests of equal forecast accuracy for overlapping models
Date: 
2011 
By: 
Todd Clark Michael W. McCracken 
URL: 

This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two-step procedure is conservative, while the one-step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting U.S. real GDP growth. 
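The two-step logic can be sketched as follows: if the models are overlapping, the loss differential is degenerate (identically zero in the limit), so one first checks its variance before applying the usual t-test on its mean. A stylized sketch with synthetic losses; the data and the crude variance threshold are illustrative assumptions, and in practice the step-1 statistic has a non-standard limit, which is why the paper's bootstrap matters:

```python
import numpy as np

# Hypothetical out-of-sample squared-error losses from two competing models.
rng = np.random.default_rng(1)
n = 150
loss_a = rng.standard_normal(n) ** 2
loss_b = 0.9 * loss_a + 0.1 * rng.standard_normal(n) ** 2
d = loss_a - loss_b

# Step 1: under overlap, d_t is asymptotically identically zero, so its
# sample variance should be negligible.  (Crude threshold for illustration;
# the formal variance statistic has a non-standard limiting distribution.)
var_d = d.var(ddof=1)
overlapping = var_d < 1e-8

# Step 2: only when the variance is non-degenerate does the usual t-statistic
# on the mean loss differential have a standard normal limit.
t_stat = None
if not overlapping:
    t_stat = d.mean() / np.sqrt(var_d / n)
```

The conservativeness of the two-step procedure comes from spending test size in step 1 before the step-2 comparison is ever run.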

Keywords: 
Forecasting 
 Advances in Forecasting Under Instability
Date: 
2011 
By: 
Barbara Rossi 
URL: 

The forecasting literature has identified two important, broad issues. The first stylized fact is that predictive content is unstable over time; the second is that in-sample predictive content does not necessarily translate into out-of-sample predictive ability, nor does it ensure the stability of the predictive relation over time. The objective of this chapter is to understand what we have learned about forecasting in the presence of instabilities, especially regarding the two issues above. The empirical evidence raises a multitude of questions. If in-sample tests provide poor guidance to out-of-sample forecasting ability, what should researchers do? If there are statistically significant instabilities in Granger-causality relationships, how do researchers establish whether there is any Granger-causality at all? If there is substantial instability in predictive relationships, how do researchers establish which model is the “best” forecasting model? And finally, if a model forecasts poorly, why is that, and how should researchers proceed to improve their forecasting models? In this chapter, we answer these questions by discussing various methodologies for inference as well as estimation that have recently been proposed in the literature. We also provide an empirical analysis of the usefulness of the existing methodologies using an extensive database of macroeconomic predictors of output growth and inflation. 

JEL: 
C53 
 Advances in forecast evaluation
Date: 
2011 
By: 
Todd E. Clark Michael W. McCracken 
URL: 

This paper surveys recent developments in the evaluation of point forecasts. Taking West’s (2006) survey as a starting point, we briefly cover the state of the literature as of the time of West’s writing. We then focus on recent developments, including advancements in the evaluation of forecasts at the population level (based on true, unknown model coefficients), the evaluation of forecasts in the finite sample (based on estimated model coefficients), and the evaluation of conditional versus unconditional forecasts. We present original results in a few subject areas: the optimization of power in determining the split of a sample into in-sample and out-of-sample portions; whether the accuracy of inference in evaluation of multi-step forecasts can be improved with judicious choice of HAC estimator (it can); and the extension of West’s (1996) theory results for population-level, unconditional forecast evaluation to the case of conditional forecast evaluation. 
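On the HAC point: h-step-ahead forecast errors overlap, so the loss differential is serially correlated up to order h-1 and the variance in a Diebold-Mariano-type statistic needs a HAC estimator. A minimal sketch using a Bartlett (Newey-West) kernel on synthetic data; the bandwidth convention and the MA(1) example are illustrative assumptions, not the paper's recommended estimator:

```python
import numpy as np

def dm_stat_hac(d, h):
    """Diebold-Mariano-type statistic for an h-step-ahead loss differential d,
    with a Newey-West (Bartlett) HAC variance using h-1 lags, since h-step
    errors are serially correlated up to order h-1."""
    n = len(d)
    dbar = d.mean()
    u = d - dbar
    var = u @ u / n                          # lag-0 autocovariance
    for lag in range(1, h):
        w = 1.0 - lag / h                    # Bartlett kernel weight
        var += 2.0 * w * (u[lag:] @ u[:-lag] / n)
    return dbar / np.sqrt(var / n)

# Synthetic MA(1) loss differential, the serial-correlation pattern typical
# of 2-step-ahead forecasts (data invented for illustration).
rng = np.random.default_rng(2)
e = rng.standard_normal(121)
d = 0.1 + e[1:] + 0.5 * e[:-1]
stat = dm_stat_hac(d, h=2)
```

Ignoring the lag-1 term here would understate the variance and oversize the test, which is the kind of distortion the HAC-choice results in the paper address.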

Keywords: 
Forecasting 
 Volatility Forecasting: Downside Risk, Jumps and Leverage Effect
Date: 
2011-09 
By: 
Audrino, Francesco; Hu, Yujia 
URL: 

We provide new empirical evidence on volatility forecasting in relation to asymmetries present in the dynamics of both return and volatility processes. Leverage and volatility feedback effects among continuous and jump components of the S&P500 price and volatility dynamics are examined using recently developed methodologies to detect jumps and to disentangle their size from the continuous return and continuous volatility. Granted that jumps in both return and volatility are important components for generating the two effects, we find that jumps in return can improve forecasts of volatility, while jumps in volatility improve volatility forecasts to a lesser extent. Moreover, disentangling jump and continuous variations into signed semivariances further improves the out-of-sample performance of volatility forecasting models, with negative jump semivariance being much more informative than positive jump semivariance. The model proposed is able to capture many empirical stylized facts while remaining parsimonious in the number of parameters to be estimated. 
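The signed semivariances mentioned here simply split realized variance by the sign of the underlying high-frequency returns. A minimal sketch on synthetic intraday returns (the return series is invented for illustration):

```python
import numpy as np

# Hypothetical 1-minute intraday returns for one trading day (390 minutes).
rng = np.random.default_rng(3)
r = 0.001 * rng.standard_normal(390)

rv = np.sum(r ** 2)                 # realized variance
rs_neg = np.sum(r[r < 0] ** 2)      # negative (downside) semivariance
rs_pos = np.sum(r[r > 0] ** 2)      # positive (upside) semivariance
signed_jump = rs_pos - rs_neg       # signed jump variation
```

By construction the two semivariances sum back to realized variance, so the decomposition adds a sign dimension without discarding information.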

Keywords: 
High frequency data, Realized volatility forecasting, Downside risk, Leverage effect 
JEL: 
C13 
 Forecasting Based on Common Trends in Mixed Frequency Samples
Date: 
2011-06-13 
By: 
Peter Fuleky (UHERO and Department of Economics, University of Hawaii) Carl S. Bonham (UHERO and Department of Economics, University of Hawaii) 
URL: 

We extend the existing literature on small mixed frequency single factor models by allowing for multiple factors, considering indicators in levels, and allowing for cointegration among the indicators. We capture the cointegrating relationships among the indicators by common factors modeled as stochastic trends. We show that the stationary single-factor model frequently used in the literature is misspecified if the data set contains common stochastic trends. We find that taking advantage of common stochastic trends improves forecasting performance over a stationary single-factor model. The common-trends factor model outperforms the stationary single-factor model at all analyzed forecast horizons on a root mean squared error basis. Our results suggest that when the constituent indicators are integrated and cointegrated, modeling common stochastic trends, as opposed to eliminating them, will improve forecasts. 
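The premise, that a shared stochastic trend makes the indicators individually nonstationary but a suitable combination of them stationary, can be illustrated in a few lines of simulation (the loadings and noise scales are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
trend = np.cumsum(rng.standard_normal(T))    # common stochastic trend (random walk)

# Two hypothetical indicators loading on the same trend plus stationary noise.
x1 = 1.0 * trend + rng.standard_normal(T)
x2 = 0.5 * trend + rng.standard_normal(T)

# The combination x1 - 2*x2 cancels the trend, so it is stationary even though
# each series individually wanders: the indicators are cointegrated.
spread = x1 - 2.0 * x2
```

Differencing the data to fit a stationary single-factor model would throw away exactly this level relationship, which is the information the common-trends model exploits.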

Keywords: 
Dynamic Factor Model, Mixed Frequency Samples, Common Trends, Forecasting, Tourism Industry 
JEL: 
E37 
 Do Experts’ SKU Forecasts improve after Feedback?
Date: 
2011-09-26 
By: 
Rianne Legerstee (Erasmus University Rotterdam) Philip Hans Franses (Erasmus University Rotterdam) 
URL: 

We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, comparing data before and after the moment the experts received different kinds of feedback on their behavior. We have data for 21 experts, located in as many countries, who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and received training at the headquarters’ office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the three months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts’ forecasts deviated less from the statistical forecasts and that their accuracy improved substantially. 

Keywords: 
model forecasts; expert forecasts; judgmental adjustment; feedback; outcome feedback; performance feedback; cognitive process feedback; task properties feedback 
JEL: 
C53 
 Indeterminacy and forecastability
Date: 
2011 
By: 
Ippei Fujiwara Yasuo Hirose 
URL: 

Recent studies document the deteriorating performance of forecasting models during the Great Moderation. This conversely implies that forecastability was higher in the preceding era, when the economy was unexpectedly volatile. We offer an explanation for this phenomenon in the context of equilibrium indeterminacy in dynamic stochastic general equilibrium models. First, we analytically show that a model under indeterminacy exhibits richer dynamics that can improve forecastability. Then, using a prototypical New Keynesian model, we numerically demonstrate that indeterminacy due to passive monetary policy can yield superior forecastability as long as the degree of uncertainty about sunspot fluctuations is relatively small. 

Keywords: 
Forecasting ; Mathematical models ; Monetary policy 
 Differences in Early GDP Component Estimates Between Recession and Expansion
Date: 
2011-02 
By: 
Tara M. Sinclair (Department of Economics/Institute for International Economic Policy, George Washington University) H.O. Stekler (Department of Economics, George Washington University) 
URL: 

In this paper we examine the quality of the initial estimates of the components of both real and nominal U.S. GDP. We introduce a number of new statistics for measuring the magnitude of changes in the components from the initial estimates available one month after the end of the quarter to the estimates available three months after the end of the quarter. We further investigate the potential role of changes in the state of the economy in these revisions. Our analysis shows that the early data generally reflected the composition of the changes in GDP that was observed in the later data. Thus, under most circumstances, an analyst could use the early data to obtain a realistic picture of what had happened in the economy in the previous quarter. However, the differences in the composition of the vectors of the two vintages were larger during recessions than in expansions. Unfortunately, it is in precisely those periods that accurate information is most vital for forecasting. 
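The kinds of statistics described, an average revision magnitude and a measure of how similar the composition of growth is across vintages, might be sketched as follows. The component numbers are invented and the cosine similarity is one simple choice of composition measure, not necessarily the paper's:

```python
import numpy as np

# Hypothetical contributions of GDP components to quarterly growth in two
# vintages: the one-month estimate and the three-month estimate
# (ordering: C, I, G, NX, inventories; values invented).
initial = np.array([1.2, 0.4, -0.2, 0.3, 0.1])
later = np.array([1.0, 0.6, -0.1, 0.2, 0.1])

revision = later - initial
mean_abs_revision = np.abs(revision).mean()      # magnitude of changes between vintages

# Composition similarity: cosine of the angle between the two component
# vectors (1 means the later vintage tells the same compositional story).
cosine = initial @ later / (np.linalg.norm(initial) * np.linalg.norm(later))
```

The paper's recession-versus-expansion finding corresponds to this similarity being systematically lower in recession quarters.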

Keywords: 
Flash Estimates, Data Revisions, GDP Components, Statistical Tests, Business Cycles 
JEL: 
C82 
 Improving GDP measurement: a forecast combination perspective
Date: 
2011 
By: 
S. Boragan Aruoba Francis X. Diebold Jeremy Nalewaik Frank Schorfheide Dongo Song 
URL: 

Two often-divergent U.S. GDP estimates are available: a widely used expenditure-side version, GDP_E, and a much less widely used income-side version, GDP_I. The authors propose and explore a “forecast combination” approach to combining them. They then put the theory to work, producing a superior combined estimate of GDP growth for the U.S., GDP_C. The authors compare GDP_C to GDP_E and GDP_I, with particular attention to behavior over the business cycle. They discuss several variations and extensions. 
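The flavor of a forecast combination can be conveyed with simple inverse-MSE weighting of two noisy measurements. The data are synthetic and this equal-to-the-task-but-naive weighting scheme is an illustrative assumption; the paper's procedure is model-based:

```python
import numpy as np

rng = np.random.default_rng(5)
true_growth = rng.standard_normal(80)
# Hypothetical expenditure- and income-side estimates: truth plus independent
# measurement noise of different sizes.
gdp_e = true_growth + 0.5 * rng.standard_normal(80)
gdp_i = true_growth + 0.7 * rng.standard_normal(80)

# Inverse-MSE combination: the less noisy measure receives the larger weight.
mse_e = np.mean((gdp_e - true_growth) ** 2)
mse_i = np.mean((gdp_i - true_growth) ** 2)
w_e = (1 / mse_e) / (1 / mse_e + 1 / mse_i)
gdp_c = w_e * gdp_e + (1 - w_e) * gdp_i

mse_c = np.mean((gdp_c - true_growth) ** 2)   # combined estimate is more accurate
```

With independent measurement errors the combination's error variance falls below that of either input, which is the basic case for a GDP_C-style estimate.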

Keywords: 
Business cycles ; Recessions ; Expenditures, Public 
 An Empirical Investigation of US Fiscal Expenditures and Macroeconomic Outcomes
Date: 
2011-09 
By: 
Yunus Aksoy (Department of Economics, Mathematics & Statistics, Birkbeck) Giovanni Melina (Department of Economics, Mathematics & Statistics, Birkbeck) 
URL: 

In addition to containing stable information that helps explain inflation, state-local expenditures also account for a larger share of the forecast error variance of US inflation than the Federal funds rate. Non-defense federal expenditures are useful in predicting real output variations and, starting from the early 1980s, also account for a larger share of the forecast error variance of US real output than the Federal funds rate. 

Keywords: 
Information value, state-local expenditures, forecast error variance decomposition 
 Irrationality or efficiency of macroeconomic survey forecasts? Implications from the anchoring bias test
Date: 
2011 
By: 
Hess, Dieter; Orbe, Sebastian 
URL: 

We analyze the quality of macroeconomic survey forecasts. Recent findings indicate that they are anchoring-biased. Such irrationality would challenge the results of a wide range of empirical studies, e.g., in asset pricing, volatility clustering, or market liquidity, which rely on survey data to capture market participants’ expectations. We contribute to the existing literature in two ways. First, we show that the cognitive bias is a statistical artifact: despite highly significant anchoring coefficients, a bias adjustment does not improve the forecasts’ quality. To explain this counterintuitive result, we take a closer look at macroeconomic analysts’ information-processing abilities. We find that analysts benefit from the use of an extensive information set that is neglected in the anchoring bias test, and it is exactly this information advantage that drives the misleading anchoring bias test results. Second, we find that these superior information aggregation capabilities enable analysts to easily outperform sophisticated time-series forecasts, and therefore survey forecasts should clearly be favored. 
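The anchoring bias test at issue is, in essence, a regression of forecast errors on the distance between the forecast and an anchor. A minimal sketch with synthetic, deliberately well-informed forecasts; the choice of anchor (the previous realization) and all data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
actual = rng.standard_normal(n)
anchor = np.roll(actual, 1)        # hypothetical anchor: the previous realization
anchor[0] = 0.0
forecast = actual + 0.2 * rng.standard_normal(n)   # informative, unbiased forecasts

# Anchoring regression: (actual - forecast) on (forecast - anchor).
# A significantly positive slope gamma is conventionally read as anchoring.
x = forecast - anchor
y = actual - forecast
xc, yc = x - x.mean(), y - y.mean()
gamma = (xc @ yc) / (xc @ xc)
```

Because the simulated forecasters use information the anchor lacks, gamma stays near zero here; the paper's point is that with real analysts the same regression can signal "anchoring" spuriously when their broader information set is ignored.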

Keywords: 
macroeconomic announcements, efficiency of forecasts, anchoring bias, rationality of analysts 
JEL: 
G12 
Taken from the NEP-FOR mailing list edited by Rob Hyndman.