Forecasting Papers 2011-10-13

In this issue we have Do Phillips curves conditionally help to forecast inflation?, Tests of equal forecast accuracy for overlapping models, Advances in Forecasting Under Instability, Advances in forecast evaluation, and more.

    1. Do Phillips curves conditionally help to forecast inflation?

Date:

2011

By:

Michael Dotsey; Shigeru Fujita; Tom Stark

URL:

http://d.repec.org/n?u=RePEc:fip:fedpwp:11-40&r=for

The Phillips curve has long been used as a foundation for forecasting inflation. Yet numerous studies indicate that over the past 20 years or so, inflation forecasts based on the Phillips curve generally do not predict inflation any better than a univariate forecasting model. In this paper, the authors take a deeper look at the forecasting ability of Phillips curves from both an unconditional and a conditional view. Namely, they use the tests developed by Giacomini and White (2006) to examine the forecasting ability of Phillips curve models. The authors’ main results indicate that forecasts from their Phillips curve models are unconditionally inferior to those of their univariate forecasting models, and sometimes the difference is statistically significant. However, the authors do find that conditioning on various measures of the state of the economy does at times improve the performance of the Phillips curve model in a statistically significant way. Notably, the improvement is more likely to occur at longer forecasting horizons and over the sample period 1984Q1-2010Q3. Strikingly, the improvement is asymmetric: Phillips curve forecasts tend to be more accurate when the economy is weak and less accurate when the economy is strong. It therefore appears that forecasters should not fully discount the inflation forecasts of Phillips curve-based models when the economy is weak.
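
For concreteness, the unconditional comparison can be sketched as a Diebold-Mariano-type t-test on the loss differential, which is how the Giacomini-White framework specializes when no conditioning variables are used. This is a minimal Python sketch on simulated losses; the function name and data are ours, not the authors':

```python
import numpy as np

def gw_unconditional_test(loss_a, loss_b, h=1):
    """Unconditional Giacomini-White (2006) test of equal predictive
    ability. With no conditioning variables it reduces to a
    Diebold-Mariano-type t-test on the loss differential, using a
    Bartlett (Newey-West) variance with h-1 lags for h-step forecasts."""
    d = np.asarray(loss_a) - np.asarray(loss_b)   # loss differential
    n = d.size
    dbar = d.mean()
    # HAC variance of the mean: Bartlett kernel with h-1 lags
    var = np.sum((d - dbar) ** 2) / n
    for k in range(1, h):
        w = 1.0 - k / h
        cov = np.sum((d[k:] - dbar) * (d[:-k] - dbar)) / n
        var += 2.0 * w * cov
    return dbar / np.sqrt(var / n)   # compare with N(0,1) critical values

# Example: squared-error losses of a Phillips curve model (a) against a
# univariate benchmark (b); a positive statistic favours the benchmark.
rng = np.random.default_rng(0)
e_a = rng.normal(0, 1.1, 200)        # hypothetical forecast errors
e_b = rng.normal(0, 1.0, 200)
print(gw_unconditional_test(e_a ** 2, e_b ** 2, h=4))
```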

Keywords:

Phillips curve ; Unemployment

    2. Tests of equal forecast accuracy for overlapping models

Date:

2011

By:

Todd Clark; Michael W. McCracken

URL:

http://d.repec.org/n?u=RePEc:fip:fedcwp:1121&r=for

This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two-step procedure is conservative while the one-step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting U.S. real GDP growth.
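
The paper's fixed-regressor wild bootstrap is tailored to the overlapping-models null; the toy below illustrates only the generic wild-bootstrap mechanic, sign-flipping demeaned loss differentials with Rademacher weights to simulate the null distribution. It is our own simplified stand-in, not the authors' procedure:

```python
import numpy as np

def wild_bootstrap_pvalue(d, n_boot=999, seed=1):
    """Simplified wild-bootstrap p-value for the null of equal forecast
    accuracy (zero mean loss differential). Each replication flips the
    signs of the demeaned differentials with Rademacher weights, which
    preserves conditional heteroskedasticity. A stylized stand-in for
    the fixed-regressor wild bootstrap developed in the paper."""
    d = np.asarray(d, float)
    n = d.size
    stat = abs(d.mean()) * np.sqrt(n)
    dc = d - d.mean()                        # impose the null
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=n)   # Rademacher weights
        count += abs((dc * eta).mean()) * np.sqrt(n) >= stat
    return (1 + count) / (1 + n_boot)

# Hypothetical loss differentials between two overlapping models
rng = np.random.default_rng(2)
print(wild_bootstrap_pvalue(rng.normal(0.02, 1.0, 150)))
```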

Keywords:

Forecasting

    3. Advances in Forecasting Under Instability

Date:

2011

By:

Barbara Rossi

URL:

http://d.repec.org/n?u=RePEc:duk:dukeec:11-20&r=for

The forecasting literature has identified two important, broad issues. The first stylized fact is that predictive content is unstable over time; the second is that in-sample predictive content does not necessarily translate into out-of-sample predictive ability, nor does it ensure the stability of the predictive relation over time. The objective of this chapter is to understand what we have learned about forecasting in the presence of instabilities, especially regarding the two questions above. The empirical evidence raises a multitude of questions. If in-sample tests provide poor guidance to out-of-sample forecasting ability, what should researchers do? If there are statistically significant instabilities in Granger-causality relationships, how do researchers establish whether there is any Granger-causality at all? If there is substantial instability in predictive relationships, how do researchers establish which model is the “best” forecasting model? And finally, if a model forecasts poorly, why is that, and how should researchers proceed to improve their forecasting models? In this chapter, we answer these questions by discussing various methodologies for inference as well as estimation that have recently been proposed in the literature. We also provide an empirical analysis of the usefulness of the existing methodologies using an extensive database of macroeconomic predictors of output growth and inflation.
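
Many of the tools surveyed in this literature replace one full-sample comparison with statistics computed over rolling windows, so that episodes where a predictor works (or breaks down) become visible. A hedged sketch of that idea, loosely following Giacomini and Rossi's Fluctuation test, on our own simulated example:

```python
import numpy as np

def rolling_dm(loss_model, loss_benchmark, window=60):
    """Rolling standardized mean loss differentials, the main ingredient
    of a Fluctuation-type test: track relative forecast accuracy over
    time instead of summarizing it with a single full-sample statistic."""
    d = np.asarray(loss_model) - np.asarray(loss_benchmark)
    stats = []
    for start in range(d.size - window + 1):
        seg = d[start:start + window]
        stats.append(np.sqrt(window) * seg.mean() / seg.std(ddof=1))
    return np.array(stats)   # compare against Fluctuation-test bands

# Hypothetical losses: the candidate model helps only in the first half
rng = np.random.default_rng(3)
loss_m = np.r_[rng.normal(0.8, 1, 120), rng.normal(1.2, 1, 120)]
loss_b = rng.normal(1.0, 1, 240)
print(rolling_dm(loss_m, loss_b, window=60).round(2))
```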

JEL:

C53

    4. Advances in forecast evaluation

Date:

2011

By:

Todd E. Clark; Michael W. McCracken

URL:

http://d.repec.org/n?u=RePEc:fip:fedlwp:2011-025&r=for

This paper surveys recent developments in the evaluation of point forecasts. Taking West’s (2006) survey as a starting point, we briefly cover the state of the literature as of the time of West’s writing. We then focus on recent developments, including advancements in the evaluation of forecasts at the population level (based on true, unknown model coefficients), the evaluation of forecasts in the finite sample (based on estimated model coefficients), and the evaluation of conditional versus unconditional forecasts. We present original results in a few subject areas: the optimization of power in determining the split of a sample into in-sample and out-of-sample portions; whether the accuracy of inference in evaluation of multi-step forecasts can be improved with judicious choice of HAC estimator (it can); and the extension of West’s (1996) theory results for population-level, unconditional forecast evaluation to the case of conditional forecast evaluation.
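
To illustrate the HAC point: an h-step-ahead forecast error is serially correlated up to order h-1, so inference on the mean loss differential requires a long-run variance estimator, and the kernel choice affects finite-sample accuracy. A toy comparison (our own, not the paper's) of two common kernels on an MA(3) differential, the correlation pattern implied by overlapping 4-step forecasts:

```python
import numpy as np

def lrv(d, kernel="bartlett", lags=3):
    """Long-run variance of a loss differential under two common HAC
    choices. The Bartlett kernel downweights higher-order covariances
    and guarantees a nonnegative estimate; the rectangular (truncated)
    kernel weights them fully but is not guaranteed to be positive."""
    d = np.asarray(d, float) - np.mean(d)
    n = d.size
    v = np.dot(d, d) / n
    for k in range(1, lags + 1):
        w = 1.0 if kernel == "rectangular" else 1.0 - k / (lags + 1)
        v += 2.0 * w * np.dot(d[k:], d[:-k]) / n
    return v

rng = np.random.default_rng(4)
e = rng.normal(size=203)
d = e[:-3] + e[1:-2] + e[2:-1] + e[3:]   # MA(3): overlapping 4-step errors
print(lrv(d, "bartlett", 3), lrv(d, "rectangular", 3))
```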

Keywords:

Forecasting

    5. Volatility Forecasting: Downside Risk, Jumps and Leverage Effect

Date:

2011-09

By:

Audrino, Francesco; Hu, Yujia

URL:

http://d.repec.org/n?u=RePEc:usg:econwp:2011:38&r=for

We provide new empirical evidence on volatility forecasting in relation to asymmetries present in the dynamics of both the return and volatility processes. Leverage and volatility feedback effects among the continuous and jump components of the S&P500 price and volatility dynamics are examined using recently developed methodologies to detect jumps and to disentangle their size from the continuous return and the continuous volatility. Granted that jumps in both return and volatility are important components in generating the two effects, we find that jumps in returns can improve forecasts of volatility, while jumps in volatility improve volatility forecasts to a lesser extent. Moreover, disentangling the jump and continuous variations into signed semivariances further improves the out-of-sample performance of volatility forecasting models, with the negative jump semivariance being far more informative than the positive jump semivariance. The proposed model captures many empirical stylized facts while remaining parsimonious in the number of parameters to be estimated.
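
The signed semivariance decomposition is straightforward to compute from intraday returns. A minimal Python sketch on simulated 5-minute returns (our own illustration, not the authors' code):

```python
import numpy as np

def realized_semivariances(intraday_returns):
    """Split realized variance into signed semivariances: RS- sums the
    squared negative intraday returns, RS+ the squared positive ones.
    The paper's finding is that the negative semivariance carries most
    of the forecasting information."""
    r = np.asarray(intraday_returns)
    rs_neg = np.sum(r[r < 0] ** 2)
    rs_pos = np.sum(r[r > 0] ** 2)
    return rs_neg, rs_pos        # rs_neg + rs_pos = realized variance

# Hypothetical 5-minute returns for one trading day (78 intervals)
rng = np.random.default_rng(5)
r = rng.normal(0, 0.001, 78)
r[40] -= 0.01                    # inject a negative jump
print(realized_semivariances(r))
```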

Keywords:

High frequency data, Realized volatility forecasting, Downside risk, Leverage effect

JEL:

C13

    6. Forecasting Based on Common Trends in Mixed Frequency Samples

Date:

2011-06-13

By:

Peter Fuleky (UHERO and Department of Economics, University of Hawaii); Carl S. Bonham (UHERO and Department of Economics, University of Hawaii)

URL:

http://d.repec.org/n?u=RePEc:hai:wpaper:201110&r=for

We extend the existing literature on small mixed-frequency single-factor models by allowing for multiple factors, considering indicators in levels, and allowing for cointegration among the indicators. We capture the cointegrating relationships among the indicators with common factors modeled as stochastic trends. We show that the stationary single-factor model frequently used in the literature is misspecified if the data set contains common stochastic trends. We find that taking advantage of common stochastic trends improves forecasting performance over a stationary single-factor model: the common-trends factor model outperforms the stationary single-factor model at all forecast horizons analyzed on a root mean squared error basis. Our results suggest that when the constituent indicators are integrated and cointegrated, modeling common stochastic trends, as opposed to eliminating them, will improve forecasts.
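
The paper's estimator is a mixed-frequency state-space model; as a much-simplified illustration of the common-trends idea (our own toy construction, with the frequency mixing and proper estimation omitted), one can proxy a single common stochastic trend by the first principal component of the level data and forecast it as a random walk with drift:

```python
import numpy as np

def common_trend_forecast(levels, horizon):
    """Toy common-trends forecast: extract one common stochastic trend
    as the first principal component of the (cointegrated) indicators
    in levels, then extrapolate it as a random walk with drift. A
    stationary single-factor model would instead difference the data
    first, discarding the common trend."""
    X = np.asarray(levels, float)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    trend = Xc @ vt[0]                 # sign of the PC is arbitrary
    drift = np.mean(np.diff(trend))
    return trend[-1] + drift * np.arange(1, horizon + 1)

# Two hypothetical indicators sharing one stochastic trend
rng = np.random.default_rng(6)
tau = np.cumsum(rng.normal(0.1, 1, 200))
X = np.c_[tau + rng.normal(0, 0.5, 200),
          0.8 * tau + rng.normal(0, 0.5, 200)]
print(common_trend_forecast(X, horizon=4))
```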

Keywords:

Dynamic Factor Model, Mixed Frequency Samples, Common Trends, Forecasting, Tourism Industry

JEL:

E37

    7. Do Experts’ SKU Forecasts improve after Feedback?

Date:

2011-09-26

By:

Rianne Legerstee (Erasmus University Rotterdam); Philip Hans Franses (Erasmus University Rotterdam)

URL:

http://d.repec.org/n?u=RePEc:dgr:uvatin:20110135&r=for

We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, comparing data from before and after the moment the experts received various kinds of feedback on their behavior. We have data for 21 experts, located in as many countries, who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and received training at the headquarters’ office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts’ forecasts deviated less from the statistical forecasts and their accuracy improved substantially.
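
The two quantities compared before and after the feedback session are easy to state in code. A minimal sketch (simulated SKU data; variable names ours) of an expert's accuracy against realizations and of the deviation from the statistical program:

```python
import numpy as np

def accuracy_and_deviation(expert, model, actual):
    """Two illustrative summaries tracked pre- and post-feedback: the
    expert's forecast accuracy (RMSE against realized sales) and how
    far the expert's quotes deviate from the statistical program."""
    expert, model, actual = map(np.asarray, (expert, model, actual))
    rmse = np.sqrt(np.mean((expert - actual) ** 2))
    deviation = np.mean(np.abs(expert - model))
    return rmse, deviation

# Hypothetical 12-month series for one expert, pre-feedback
rng = np.random.default_rng(7)
actual = rng.poisson(100, 12).astype(float)
model = actual + rng.normal(0, 5, 12)
expert = model + rng.normal(0, 10, 12)   # large judgmental adjustments
print(accuracy_and_deviation(expert, model, actual))
```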

Keywords:

model forecasts; expert forecasts; judgmental adjustment; feedback; outcome feedback; performance feedback; cognitive process feedback; task properties feedback

JEL:

C53

    8. Indeterminacy and forecastability

Date:

2011

By:

Ippei Fujiwara; Yasuo Hirose

URL:

http://d.repec.org/n?u=RePEc:fip:feddgw:91&r=for

Recent studies document the deteriorating performance of forecasting models during the Great Moderation. This conversely implies that forecastability is higher in the preceding era, when the economy was unexpectedly volatile. We offer an explanation for this phenomenon in the context of equilibrium indeterminacy in dynamic stochastic general equilibrium models. First, we analytically show that a model under indeterminacy exhibits richer dynamics that can improve forecastability. Then, using a prototypical New Keynesian model, we numerically demonstrate that indeterminacy due to passive monetary policy can yield superior forecastability as long as the degree of uncertainty about sunspot fluctuations is relatively small.

Keywords:

Forecasting ; Mathematical models ; Monetary policy

    9. Differences in Early GDP Component Estimates Between Recession and Expansion

Date:

2011-02

By:

Tara M. Sinclair (Department of Economics/Institute for International Economic Policy, George Washington University); H.O. Stekler (Department of Economics, George Washington University)

URL:

http://d.repec.org/n?u=RePEc:gwi:wpaper:2011-05&r=for

In this paper we examine the quality of the initial estimates of the components of both real and nominal U.S. GDP. We introduce a number of new statistics for measuring the magnitude of changes in the components from the initial estimates, available one month after the end of the quarter, to the estimates available 3 months after the end of the quarter. We further investigate the potential role of changes in the state of the economy in these revisions. Our analysis shows that the early data generally reflected the composition of the changes in GDP that was observed in the later data. Thus, under most circumstances, an analyst could use the early data to obtain a realistic picture of what had happened in the economy in the previous quarter. However, the differences in the composition of the vectors of the two vintages were larger during recessions than in expansions. Unfortunately, it is precisely in those periods that accurate information is most vital for forecasting.
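
The exact statistics are the paper's own; as a rough illustration of measuring how much the composition of GDP growth changes between vintages, the sketch below (hypothetical numbers, two ad hoc measures of our own choosing) computes a mean absolute revision and the cosine similarity of the two component vectors:

```python
import numpy as np

def revision_stats(first_vintage, later_vintage):
    """Illustrative measures of how much the composition of GDP growth
    changes between the one-month and three-month estimates: mean
    absolute revision across components, and cosine similarity of the
    two component vectors (1 = identical composition). Stand-ins for,
    not reproductions of, the paper's statistics."""
    a = np.asarray(first_vintage, float)
    b = np.asarray(later_vintage, float)
    mar = np.mean(np.abs(b - a))
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return mar, cos

# Hypothetical component contributions to quarterly GDP growth (pp)
early = np.array([1.8, 0.4, -0.3, 0.5, -0.2])   # C, I, G, X, M
later = np.array([1.6, 0.1, -0.2, 0.6, -0.3])
print(revision_stats(early, later))
```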

Keywords:

Flash Estimates, Data Revisions, GDP Components, Statistical Tests, Business Cycles

JEL:

C82

    10. Improving GDP measurement: a forecast combination perspective

Date:

2011

By:

S. Boragan Aruoba; Francis X. Diebold; Jeremy Nalewaik; Frank Schorfheide; Dongho Song

URL:

http://d.repec.org/n?u=RePEc:fip:fedpwp:11-41&r=for

Two often-divergent U.S. GDP estimates are available: a widely used expenditure-side version, GDP_E, and a much less widely used income-side version, GDP_I. The authors propose and explore a “forecast combination” approach to combining them. They then put the theory to work, producing a superior combined estimate of GDP growth for the U.S., GDP_C. The authors compare GDP_C to GDP_E and GDP_I, with particular attention to behavior over the business cycle. They discuss several variations and extensions.
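
The combination idea can be sketched with Bates-Granger-style variance-minimizing weights. Note that in practice the measurement-error variances of GDP_E and GDP_I are not directly observed, which is precisely the difficulty the authors address, so the code below (ours, with simulated errors) is only a stylized illustration:

```python
import numpy as np

def combination_weight(e1, e2):
    """Variance-minimizing weight on measurement 1 in a convex
    combination of two noisy measurements of the same quantity
    (Bates and Granger, 1969): w* = (s22 - s12) / (s11 + s22 - 2*s12).
    Applied to GDP_E and GDP_I growth this yields a combined series in
    the spirit of GDP_C; a sketch of the idea, not the authors' method."""
    C = np.cov(e1, e2)                       # error covariance matrix
    s11, s12, s22 = C[0, 0], C[0, 1], C[1, 1]
    return (s22 - s12) / (s11 + s22 - 2 * s12)

# Hypothetical measurement errors of the two GDP growth estimates
rng = np.random.default_rng(9)
common = rng.normal(0, 0.5, 100)
e_gdpe = common + rng.normal(0, 0.6, 100)
e_gdpi = common + rng.normal(0, 0.8, 100)
w = combination_weight(e_gdpe, e_gdpi)
print(w)   # combined growth = w * GDP_E + (1 - w) * GDP_I
```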

Keywords:

Business cycles ; Recessions ; Expenditures, Public

    11. An Empirical Investigation of US Fiscal Expenditures and Macroeconomic Outcomes

Date:

2011-09

By:

Yunus Aksoy (Department of Economics, Mathematics & Statistics, Birkbeck); Giovanni Melina (Department of Economics, Mathematics & Statistics, Birkbeck)

URL:

http://d.repec.org/n?u=RePEc:bbk:bbkefp:1105&r=for

In addition to containing stable information that helps explain inflation, state-local expenditures also account for a larger share of the forecast error variance of US inflation than the Federal funds rate does. Non-defense federal expenditures are useful in predicting real output variations and, starting from the early 1980s, also account for a larger share of the forecast error variance of US real output than the Federal funds rate.
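
These shares are forecast error variance decompositions from a VAR. A sketch of how such a decomposition is computed with statsmodels, using simulated placeholder data and our own variable names (results depend on the Cholesky ordering of the variables):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-ins for the kinds of variables in the paper's VAR:
# inflation, real output growth, the Federal funds rate, and
# state-local expenditure growth (illustration only, not their data).
rng = np.random.default_rng(10)
data = pd.DataFrame(rng.normal(size=(200, 4)),
                    columns=["inflation", "output", "ffr", "state_local"])

res = VAR(data).fit(4)          # VAR(4) on the simulated system
fevd = res.fevd(12)             # 12-step-ahead decomposition
# Share of the forecast error variance of inflation attributed to each
# shock at horizon 12 (Cholesky identification in column order):
cols = list(data.columns)
decomp = fevd.decomp[cols.index("inflation"), -1]
print(dict(zip(cols, decomp.round(3))))
```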

Keywords:

Information value, state-local expenditures, forecast error variance decomposition

    12. Irrationality or efficiency of macroeconomic survey forecasts? Implications from the anchoring bias test

Date:

2011

By:

Hess, Dieter; Orbe, Sebastian

URL:

http://d.repec.org/n?u=RePEc:zbw:cfrwps:1113&r=for

We analyze the quality of macroeconomic survey forecasts. Recent findings indicate that they are anchoring biased. Such irrationality would challenge the results of a wide range of empirical studies, e.g., in asset pricing, volatility clustering, or market liquidity, which rely on survey data to capture market participants’ expectations. We contribute to the existing literature in two ways. First, we show that the cognitive bias is a statistical artifact. Despite highly significant anchoring coefficients, a bias adjustment does not improve the forecasts’ quality. To explain this counterintuitive result, we take a closer look at macroeconomic analysts’ information processing abilities. We find that analysts benefit from the use of an extensive information set that is neglected in the anchoring bias test. It is exactly this information advantage that drives the misleading anchoring bias test results. Second, we find that their superior information aggregation capabilities enable analysts to easily outperform sophisticated time-series forecasts, and therefore survey forecasts should clearly be favored.
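
The anchoring test at issue can be sketched as a surprise-on-gap regression in the spirit of Campbell and Sharpe (2009). The code below (simulated data, our own simplified implementation) shows why a positive slope is read as anchoring:

```python
import numpy as np

def anchoring_test(actual, forecast, anchor):
    """Anchoring regression in the spirit of Campbell and Sharpe (2009):
    regress the forecast surprise (actual - forecast) on the gap between
    the forecast and the anchor (e.g., the mean of recent releases).
    Forecasts dragged toward the anchor leave a predictable surprise,
    so anchoring implies a positive slope. Returns the OLS slope and
    its (homoskedastic) t-statistic."""
    y = np.asarray(actual) - np.asarray(forecast)
    x = np.asarray(forecast) - np.asarray(anchor)
    X = np.c_[np.ones_like(x), x]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (y.size - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

# Simulated announcements with partially anchored forecasts
rng = np.random.default_rng(11)
truth = rng.normal(0, 1, 300)
anchor = rng.normal(0, 1, 300)
forecast = 0.8 * truth + 0.2 * anchor    # weight 0.2 on the anchor
actual = truth + rng.normal(0, 0.3, 300)
print(anchoring_test(actual, forecast, anchor))
```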

Keywords:

macroeconomic announcements, efficiency of forecasts, anchoring bias, rationality of analysts

JEL:

G12

Taken from the NEP-FOR mailing list edited by Rob Hyndman.