Forecasting papers 2009-09-29

In this issue we have "Nonparametric time series forecasting with dynamic updating", "Forecasting Large Datasets with Bayesian Reduced Rank Multivariate Models", "Forecast performance of implied volatility and the impact of the volatility risk premium", and more.


  1. Nonparametric time series forecasting with dynamic updating

Date:

2009-08

By:

Han Lin Shang
Rob J Hyndman

URL:

http://d.repec.org/n?u=RePEc:msh:ebswps:2009-8&r=for

We present a nonparametric method to forecast a seasonal univariate time series, and propose four dynamic updating methods to improve point forecast accuracy. Our methods consider a seasonal univariate time series as a functional time series. We propose first to reduce the dimensionality by applying functional principal component analysis to the historical observations, and then to use univariate time series forecasting and functional principal component regression techniques. When data in the most recent year are partially observed, we improve point forecast accuracy using dynamic updating methods. We also introduce a nonparametric approach to construct prediction intervals of updated forecasts, and compare the empirical coverage probability with an existing parametric method. Our approaches are data-driven and computationally fast, and hence feasible for real-time, high-frequency dynamic updating. The methods are demonstrated using monthly sea surface temperatures from 1950 to 2008.

Keywords:

Functional time series, Functional principal component analysis, Ordinary least squares, Penalized least squares, Ridge regression, Sea surface temperatures, Seasonal time series.

JEL:

C14
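
To make the functional approach above concrete, here is a minimal Python sketch on synthetic monthly data: each year is treated as a curve, functional principal components are extracted by SVD, and the component scores are forecast with a univariate AR(1). All data and parameter choices are illustrative assumptions; the dynamic updating and prediction interval steps are not shown.

    # Functional PCA forecasting sketch on synthetic seasonal data
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(0)
    n_years, n_months = 40, 12
    t = np.arange(n_years * n_months)
    series = 10 + 3 * np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(t.size)

    curves = series.reshape(n_years, n_months)          # one row per year
    mean_curve = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

    K = 2                                               # retained components
    scores = U[:, :K] * s[:K]                           # principal component scores
    next_scores = [AutoReg(scores[:, k], lags=1).fit()
                   .predict(start=n_years, end=n_years)[0] for k in range(K)]

    # Forecast of next year's curve: mean curve plus predicted score combination
    forecast_curve = mean_curve + np.asarray(next_scores) @ Vt[:K]
    print(forecast_curve.round(2))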

  2. Forecasting Large Datasets with Bayesian Reduced Rank Multivariate Models

Date:

2009

By:

Andrea Carriero
George Kapetanios
Massimiliano Marcellino

URL:

http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/31&r=for

The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately substantially improves the accuracy of forecasts, both when the whole set of variables is to be forecast, and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the way to using large scale reduced rank models for empirical analysis.

Keywords:

Bayesian VARs, factor models, forecasting, reduced rank
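
A brief sketch of the first of the three approaches, classical reduced rank regression, on synthetic VAR(1) data: estimate the unrestricted OLS coefficients and project them onto the leading right singular vectors of the fitted values. The shrinkage step of the two-step procedure is omitted; adding a ridge penalty before the rank reduction would mimic it.

    # Classical reduced rank regression on a toy VAR(1) system
    import numpy as np

    rng = np.random.default_rng(1)
    T, n, r = 200, 8, 2                       # sample size, variables, imposed rank
    data = rng.standard_normal((T + 1, n)).cumsum(axis=0) * 0.1
    X, Y = data[:-1], data[1:]                # VAR(1) regressors and targets

    B_ols = np.linalg.pinv(X) @ Y             # unrestricted OLS coefficients
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    Vr = Vt[:r].T                             # leading right singular vectors of fit
    B_rr = B_ols @ Vr @ Vr.T                  # rank-r reduced rank coefficients

    forecast = data[-1] @ B_rr                # one-step-ahead forecast of all n series
    print(np.linalg.matrix_rank(B_rr), forecast.round(3))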

  3. Forecast performance of implied volatility and the impact of the volatility risk premium

Date:

2009-07-21

By:

Ralf Becker (Manchester)
Adam Clements (QUT)
Christopher Coleman-Fenn (QUT)

URL:

http://d.repec.org/n?u=RePEc:qut:auncer:2009_58&r=for

Forecasting volatility has received a great deal of research attention, with the relative performance of econometric models based on time-series data and option implied volatility forecasts often being considered. While many studies find that implied volatility is the preferred approach, a number of issues remain unresolved. Implied volatilities are risk-neutral forecasts of spot volatility, whereas time-series models are estimated on risk-adjusted or real world data of the underlying. Recently, an intuitive method has been proposed to adjust these risk-neutral forecasts into their risk-adjusted equivalents, possibly improving on their forecast accuracy. By utilising recent econometric advances, this paper considers whether these risk-adjusted forecasts are statistically superior to the unadjusted forecasts, as well as a wide range of model-based forecasts. It is found that an unadjusted risk-neutral implied volatility is an inferior forecast. However, after adjusting for the risk premia it is of equal predictive accuracy relative to a number of model-based forecasts.

Keywords:

Implied volatility, volatility forecasts, volatility models, volatility risk premium, model confidence sets

JEL:

C12
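
One simple way to picture the risk adjustment discussed above is to rescale today's implied volatility by the historical average ratio of realized to implied volatility. This proxy is an assumption made for illustration, not the adjustment method used in the paper.

    # Rescaling an implied volatility forecast by a realized/implied ratio
    import numpy as np

    rng = np.random.default_rng(2)
    realized = 0.15 + 0.03 * rng.standard_normal(250)            # daily realized vol
    implied = realized * 1.1 + 0.01 * rng.standard_normal(250)   # IV with premium

    window = 60
    premium_ratio = (realized[-window:] / implied[-window:]).mean()
    iv_today = implied[-1]
    adjusted_forecast = premium_ratio * iv_today                 # risk-adjusted IV
    print(round(iv_today, 4), round(adjusted_forecast, 4))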

  4. The Multistep Beveridge-Nelson Decomposition

Date:

2009-09-24

By:

Tommaso Proietti

URL:

http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2009_24&r=for

The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.

Keywords:

Trend and Cycle, Forecasting, Filtering.

JEL:

C22
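
For reference, the standard Beveridge-Nelson trend has a closed form for an AR(1) in first differences, sketched below on simulated data; the multistep variant proposed in the paper replaces the iterated AR forecasts with direct h-step-ahead regressions, which is not shown here.

    # Standard BN trend for dy_t = mu + phi*(dy_{t-1} - mu) + e_t
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(3)
    dy = 0.5 + np.zeros(300)
    for t in range(1, 300):
        dy[t] = 0.5 + 0.4 * (dy[t - 1] - 0.5) + 0.3 * rng.standard_normal()
    y = dy.cumsum()

    res = AutoReg(dy, lags=1, trend='c').fit()
    phi = res.params[1]
    mu = res.params[0] / (1 - phi)                    # implied long-run mean of dy

    # BN trend: the level the series would reach on its long-run path
    bn_trend = y + (phi / (1 - phi)) * (dy - mu)
    bn_cycle = y - bn_trend                           # transitory component
    print(bn_cycle[-5:].round(3))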

  5. Real-time inflation forecasting in a changing world

Date:

2009

By:

Jan J. J. Groen
Richard Paap
Francesco Ravazzolo

URL:

http://d.repec.org/n?u=RePEc:fip:fednsr:388&r=for

This paper revisits inflation forecasting using reduced-form Phillips curve forecasts, that is, inflation forecasts that use activity and expectations variables. We propose a Phillips-curve-type model that results from averaging across different regression specifications selected from a set of potential predictors. The set of predictors includes lagged values of inflation, a host of real-activity data, term structure data, nominal data, and surveys. In each individual specification, we allow for stochastic breaks in regression parameters, where the breaks are described as occasional shocks of random magnitude. As such, our framework simultaneously addresses structural change and model uncertainty that unavoidably affect Phillips-curve-based predictions. We use this framework to describe personal consumption expenditure (PCE) deflator and GDP deflator inflation rates for the United States in the post-World War II period. Over the full 1960-2008 sample, the framework indicates several structural breaks across different combinations of activity measures. These breaks often coincide with policy regime changes and oil price shocks, among other important events. In contrast to many previous studies, we find less evidence of autonomous variance breaks and inflation gap persistence. Through a real-time out-of-sample forecasting exercise, we show that our model specification generally provides superior one-quarter-ahead and one-year-ahead forecasts for quarterly inflation relative to an extended range of forecasting models that are typically used in the literature.

Keywords:

Inflation (Finance) ; Forecasting ; Phillips curve ; Regression analysis
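
A stripped-down sketch of the specification-averaging idea: fit every small OLS specification drawn from a set of (here synthetic) predictors and average the one-step forecasts. Equal weights stand in for the paper's Bayesian model averaging, and the stochastic-break machinery is omitted.

    # Averaging forecasts over many small Phillips-curve specifications
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    T = 120
    predictors = {                                # toy stand-ins for activity data
        'lag_inflation': rng.standard_normal(T),
        'unemployment_gap': rng.standard_normal(T),
        'term_spread': rng.standard_normal(T),
        'survey_expectations': rng.standard_normal(T),
    }
    inflation = (0.5 * predictors['lag_inflation']
                 - 0.3 * predictors['unemployment_gap']
                 + 0.2 * rng.standard_normal(T))

    names = list(predictors)
    forecasts = []
    for k in (1, 2):
        for subset in combinations(names, k):
            X = np.column_stack([np.ones(T - 1)] + [predictors[v][:-1] for v in subset])
            beta = np.linalg.lstsq(X, inflation[1:], rcond=None)[0]
            x_new = np.concatenate([[1.0], [predictors[v][-1] for v in subset]])
            forecasts.append(x_new @ beta)

    print(round(float(np.mean(forecasts)), 4))    # averaged one-step forecast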

  6. Disagreement among Forecasters in G7 Countries

Date:

2009-09

By:

Jonas Dovern (Kiel Economics)
Ulrich Fritsche (Department for Socioeconomics, Department for Economics, University of Hamburg)
Jiri Slacalek (European Central Bank)

URL:

http://d.repec.org/n?u=RePEc:hep:macppr:200906&r=for

Using the Consensus Economics dataset with individual expert forecasts from G7 countries we investigate determinants of disagreement (cross-sectional dispersion of forecasts) about six key economic indicators. Disagreement about real variables (GDP, consumption, investment and unemployment) has a distinct dynamic from disagreement about nominal variables (inflation and interest rate). Disagreement about real variables intensifies strongly during recessions, including the current one (by about 40 percent in terms of the interquartile range). Disagreement about nominal variables rises with their level, has fallen after 1998 or so (by 30 percent), and is considerably lower under independent central banks (by 35 percent). Cross-sectional dispersion for both groups increases with uncertainty about the underlying actual indicators, though to a lesser extent for nominal series. Country-by-country regressions for inflation and interest rates reveal that both the level of disagreement and its sensitivity to macroeconomic variables tend to be larger in Italy, Japan and the United Kingdom, where central banks became independent only around the mid-1990s. These findings suggest that more credible monetary policy can substantially contribute to anchoring of expectations about nominal variables; its effects on disagreement about real variables are moderate.

Keywords:

disagreement, survey expectations, monetary policy, forecasting

JEL:

E31
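
The disagreement measure itself is simple to compute: the cross-sectional interquartile range of the individual forecasts in each period, as in this toy example.

    # Disagreement as the per-period IQR across a panel of forecasters
    import numpy as np

    rng = np.random.default_rng(5)
    n_periods, n_forecasters = 100, 25
    panel = 2.0 + rng.standard_normal((n_periods, n_forecasters))  # GDP forecasts

    q75, q25 = np.percentile(panel, [75, 25], axis=1)
    disagreement = q75 - q25                   # IQR across forecasters, per period
    print(disagreement[:5].round(3))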

  7. Commodity prices, commodity currencies, and global economic developments

Date:

2009

By:

Jan J. J. Groen
Paolo A. Pesenti

URL:

http://d.repec.org/n?u=RePEc:fip:fednsr:387&r=for

In this paper, we seek to produce forecasts of commodity price movements that can systematically improve on naive statistical benchmarks. We revisit how well changes in commodity currencies perform as potential efficient predictors of commodity prices, a view emphasized in the recent literature. In addition, we consider different types of factor-augmented models that use information from a large data set containing a variety of indicators of supply and demand conditions across major developed and developing countries. These factor-augmented models use either standard principal components or the more novel partial least squares (PLS) regression to extract dynamic factors from the data set. Our forecasting analysis considers ten alternative indices and sub-indices of spot prices for three different commodity classes across different periods. We find that, of all the approaches, the exchange-rate-based model and the PLS factor-augmented model are more likely to outperform the naive statistical benchmarks, although PLS factor-augmented models usually have a slight edge over the exchange-rate-based approach. However, across our range of commodity price indices we are not able to generate out-of-sample forecasts that, on average, are systematically more accurate than predictions based on a random walk or autoregressive specifications.

Keywords:

Commodity exchanges ; Foreign exchange rates ; Commodity futures ; Regression analysis ; Forecasting
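
The PLS factor-augmented idea can be sketched in a few lines: extract factors from a large indicator panel with partial least squares, targeted at next-period commodity price changes. Synthetic data throughout, and scikit-learn's PLSRegression is just one convenient implementation, not necessarily the authors' choice.

    # PLS factor-augmented forecast of a commodity price change
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(6)
    T, N = 200, 50                                  # periods, indicators
    panel = rng.standard_normal((T, N))             # supply/demand indicators
    price_change = panel[:, :3].mean(axis=1) + 0.5 * rng.standard_normal(T)

    pls = PLSRegression(n_components=2)
    pls.fit(panel[:-1], price_change[1:])           # factors target next-period price
    forecast = pls.predict(panel[[-1]])             # one-step-ahead point forecast
    print(float(forecast.ravel()[0]))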

  8. What Happened to Risk Management During the 2008-09 Financial Crisis?

Date:

2009

By:

Juan-Angel Jimenez-Martin (Dpto. de Fundamentos de Análisis Económico II, Universidad Complutense)
Michael McAleer
Teodosio Pérez-Amaral (Dpto. de Fundamentos de Análisis Económico II, Universidad Complutense)

URL:

http://d.repec.org/n?u=RePEc:ucm:doicae:0919&r=for

When dealing with market risk under the Basel II Accord, variation pays in the form of lower capital requirements and higher profits. Typically, GARCH type models are chosen to forecast Value-at-Risk (VaR) using a single risk model. In this paper we illustrate two useful variations to the standard mechanism for choosing forecasts, namely: (i) combining different forecast models for each period, such as a daily model that forecasts the supremum or infimum value for the VaR; (ii) alternatively, selecting a single model to forecast VaR, and then modifying the daily forecast, depending on the recent history of violations under the Basel II Accord. We illustrate these points using the Standard and Poor's 500 Composite Index. In many cases we find significant decreases in the capital requirements, while incurring a number of violations that stays within the Basel II Accord limits.
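
Variation (i) above can be illustrated with two deliberately simple daily VaR models, an EWMA-normal forecast and historical simulation, combined by taking the most conservative (infimum) return quantile each day. These models are stand-ins for the GARCH-type models considered in the paper.

    # Combining two daily VaR forecasts by taking the conservative envelope
    import numpy as np

    rng = np.random.default_rng(7)
    returns = 0.01 * rng.standard_normal(500)       # daily returns, toy data

    # Model 1: EWMA variance with normal quantile (RiskMetrics-style)
    lam, var = 0.94, returns[:100].var()
    for r in returns[100:]:
        var = lam * var + (1 - lam) * r ** 2
    var_ewma = -1.645 * np.sqrt(var)                # 95% one-day VaR (return units)

    # Model 2: historical simulation over the last year
    var_hist = np.percentile(returns[-250:], 5)

    var_combined = min(var_ewma, var_hist)          # most conservative forecast
    print(round(var_ewma, 5), round(var_hist, 5), round(var_combined, 5))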

  9. Modelling and Forecasting Noisy Realized Volatility

Date:

2009-09

By:

Manabu Asai (Faculty of Economics, Soka University)
Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute and Center for International Research on the Japanese Economy (CIRJE), Faculty of Economics, University of Tokyo)
Marcelo C. Medeiros (Department of Economics, Pontifical Catholic University of Rio de Janeiro)

URL:

http://d.repec.org/n?u=RePEc:tky:fseres:2009cf669&r=for

Several methods have recently been proposed in the ultra high frequency financial literature to remove the effects of microstructure noise and to obtain consistent estimates of the integrated volatility (IV) as a measure of ex-post daily volatility. Even bias-corrected and consistent (modified) realized volatility (RV) estimates of the integrated volatility can contain residual microstructure noise and other measurement errors. Such noise is called "realized volatility error". Since such measurement errors are typically ignored, we need to take account of them in estimating and forecasting IV. This paper investigates through Monte Carlo simulations the effects of RV errors on estimating and forecasting IV with RV data. It is found that: (i) neglecting RV errors can lead to serious bias in estimators due to model misspecification; (ii) the effects of RV errors on one-step ahead forecasts are minor when consistent estimators are used and when the number of intraday observations is large; and (iii) even the partially corrected R2 recently proposed in the literature should be fully corrected for evaluating forecasts. This paper proposes a full correction of R2, which can be applied to linear and nonlinear, short and long memory models. An empirical example for S&P 500 data is used to demonstrate that neglecting RV errors can lead to serious bias in estimating the model of integrated volatility, and that the new method proposed here can eliminate the effects of the RV noise. The empirical results also show that the full correction for R2 is necessary for an accurate description of goodness-of-fit.
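
The attenuation problem and a correction can be illustrated with a stylized simulation: when the dependent variable of a Mincer-Zarnowitz regression is RV = IV + noise, R2 is deflated by the noise share of Var(RV). The rescaling below is the textbook errors-in-the-dependent-variable correction, shown as an approximation in the spirit of, but not identical to, the paper's full correction.

    # R2 attenuation from realized volatility error, and a simple rescaling
    import numpy as np

    rng = np.random.default_rng(8)
    T = 2000
    iv = 0.2 + 0.05 * rng.standard_normal(T)        # latent integrated volatility
    noise = 0.03 * rng.standard_normal(T)           # realized volatility error
    rv = iv + noise                                 # observed realized volatility
    forecast = iv + 0.02 * rng.standard_normal(T)   # an imperfect IV forecast

    def r2(y, x):                                   # R2 of OLS of y on a constant and x
        X = np.column_stack([np.ones_like(x), x])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        return 1 - resid.var() / y.var()

    r2_noisy = r2(rv, forecast)
    r2_corrected = r2_noisy * rv.var() / (rv.var() - noise.var())
    print(round(r2_noisy, 3), round(r2_corrected, 3), round(r2(iv, forecast), 3))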

  10. MIDAS vs. mixed-frequency VAR: Nowcasting GDP in the Euro Area

Date:

2009

By:

Vladimir Kuzin
Massimiliano Marcellino
Christian Schumacher

URL:

http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/32&r=for

This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are complements rather than substitutes, since MF-VAR tends to perform better for longer horizons, whereas MIDAS does so for shorter horizons.

Keywords:

nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS

JEL:

E37
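
The MIDAS side of the comparison, in miniature: quarterly GDP growth is regressed on monthly indicator lags weighted by a two-parameter exponential Almon polynomial, estimated by nonlinear least squares. Synthetic data; the MF-VAR alternative is not shown.

    # MIDAS regression with exponential Almon lag weights
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    n_q, n_lags = 80, 12                          # quarters, monthly lags used
    monthly = rng.standard_normal(n_q * 3 + n_lags)
    # toy alignment: row q collects n_lags monthly values tied to quarter q
    lags = np.array([monthly[q * 3:q * 3 + n_lags][::-1] for q in range(n_q)])
    gdp = 0.5 * lags[:, :3].mean(axis=1) + 0.3 * rng.standard_normal(n_q)

    def almon_weights(theta, K=n_lags):           # exponential Almon lag weights
        k = np.arange(1, K + 1)
        w = np.exp(np.clip(theta[0] * k + theta[1] * k ** 2, -50.0, 50.0))
        return w / w.sum()

    def ssr(params):                              # params: beta0, beta1, theta1, theta2
        fit = params[0] + params[1] * (lags @ almon_weights(params[2:]))
        return ((gdp - fit) ** 2).sum()

    opt = minimize(ssr, x0=np.array([0.0, 1.0, -0.1, -0.01]), method='Nelder-Mead')
    nowcast = opt.x[0] + opt.x[1] * (lags[-1] @ almon_weights(opt.x[2:]))
    print(round(float(nowcast), 4))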

  11. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity

Date:

2009

By:

Jeroen V.K. Rombouts
Lars Stentoft

URL:

http://d.repec.org/n?u=RePEc:lvl:lacicr:0926&r=for

While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher-order moments of the risk neutral distribution. Parameter inference using Gibbs sampling is explained and we detail how to compute risk neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared to a benchmark model in terms of dollar losses and the ability to explain the smirk in implied volatilities.

Keywords:

Bayesian inference, option pricing, finite mixture models, out-of-sample prediction, GARCH models

JEL:

C11
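
A heavily simplified sketch of the pricing mechanics: a European call valued by Monte Carlo when the terminal log return is drawn from a two-component normal mixture, with the mixture renormalized to satisfy the risk-neutral martingale condition. The paper's GARCH dynamics and Gibbs-based parameter uncertainty are both omitted, and all parameter values are illustrative.

    # Monte Carlo call price under a static two-component normal mixture
    import numpy as np

    rng = np.random.default_rng(10)
    S0, K, r, T = 100.0, 100.0, 0.02, 0.25          # spot, strike, rate, maturity

    probs = np.array([0.8, 0.2])                    # calm and turbulent regimes
    mus = np.array([0.0, -0.05])                    # component means of log return
    sigmas = np.array([0.08, 0.25])                 # component volatilities

    n = 200_000
    comp = rng.choice(2, size=n, p=probs)
    x = rng.normal(mus[comp], sigmas[comp])         # mixed-normal log returns

    # Enforce the martingale condition E[S_T] = S0*exp(rT) by renormalizing
    mean_exp = (probs * np.exp(mus + 0.5 * sigmas ** 2)).sum()
    ST = S0 * np.exp(r * T) * np.exp(x) / mean_exp
    price = np.exp(-r * T) * np.maximum(ST - K, 0).mean()
    print(round(price, 3))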

  12. Modelling and Forecasting Liquidity Supply Using Semiparametric Factor Dynamics

Date:

2009-09

By:

Wolfgang Karl Härdle
Nikolaus Hautsch
Andrija Mihoci

URL:

http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-044&r=for

We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model's forecasting power can be used to improve optimal order execution strategies.

Keywords:

limit order book, liquidity risk, semiparametric model, factor structure, prediction

JEL:

C14
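
A skeletal version of the modelling strategy: reduce a panel of (synthetic) liquidity curves to a few factors, here by SVD rather than the paper's nonparametric estimator, and forecast the factor loadings with a VAR(1).

    # Factor reduction of liquidity curves plus a VAR(1) on the loadings
    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(11)
    T, grid = 300, 20                               # snapshots, price-grid points
    base = np.linspace(1.0, 3.0, grid)              # typical cumulative depth curve
    loadings_true = np.cumsum(0.1 * rng.standard_normal((T, 2)), axis=0)
    shapes = np.vstack([np.ones(grid), np.linspace(-1, 1, grid)])
    curves = base + loadings_true @ shapes + 0.05 * rng.standard_normal((T, grid))

    mean_curve = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
    Z = U[:, :2] * s[:2]                            # estimated factor loadings

    res = VAR(Z).fit(1)                             # VAR(1) dynamics for loadings
    Z_next = res.forecast(Z[-1:], steps=1)          # one-step loading forecast
    curve_forecast = mean_curve + Z_next @ Vt[:2]   # forecast liquidity curve
    print(curve_forecast.ravel()[:5].round(3))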

  13. On the Look-Out for the Bear: Predicting Stock Market Downturns in G7 Countries

Date:

2009-05

By:

Christian Friedrich
Melanie Klein

URL:

http://d.repec.org/n?u=RePEc:kie:kieasw:451&r=for

The paper examines the informational content of a series of macroeconomic indicator variables with the intention to predict stock market downturns – colloquially also referred to as 'bear markets' – for G7 countries. The sample consists of monthly stock market indices and a set of exogenous indicator variables that are subject to examination, ranging from January 1970 to September 2008. The methodical approach is twofold. In the first step, a modified version of the Bry-Boschan business cycle dating algorithm is used to identify bull and bear markets from the data by creating dummy variable series. In the second step, a substantial number of probit estimations is carried out, by regressing the newly identified dummy variable series on different specifications of indicator variables. By applying widely used in- and out-of-sample measures, the specifications are evaluated and the forecasting performance of the indicators is assessed. The results are mixed. While industrial production and money stock measures seem to have no predictive power, short- and long-term interest rates, term spreads, and the unemployment rate exhibit some. Here, it is clearly possible to extract some informational content even three months in advance and so to beat the predictions made by a recursively estimated constant.

Keywords:

Bear Market Predictions, Bry-Boschan, Probit Model
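
The second step of the approach in miniature: a probit of a bear-market dummy on lagged indicators at a three-month horizon. The Bry-Boschan dating step is replaced by a synthetic dummy, and the indicators are simulated.

    # Probit prediction of a bear-market dummy three months ahead
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(12)
    T = 400
    term_spread = rng.standard_normal(T)
    short_rate = rng.standard_normal(T)
    latent = -0.8 * term_spread + 0.5 * short_rate + rng.standard_normal(T)
    bear = (latent > 1.0).astype(float)             # stand-in bear-market dummy

    h = 3                                           # predict three months ahead
    X = sm.add_constant(np.column_stack([term_spread[:-h], short_rate[:-h]]))
    res = sm.Probit(bear[h:], X).fit(disp=0)
    prob_bear = res.predict(X)[-1]                  # latest predicted probability
    print(res.params.round(3), round(float(prob_bear), 3))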

  14. Multivariate Contemporaneous Threshold Autoregressive Models

Date:

2009-03

By:

Michael Dueker
Zacharias Psaradakis
Martin Sola
Fabio Spagnolo

URL:

http://d.repec.org/n?u=RePEc:udt:wpecon:2009-03&r=for

In this paper we propose a contemporaneous threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are a function of the regime-specific contemporaneous variance-covariance matrix, the model can account for contemporaneous regime-specific co-movements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates and discussing the regime-specific Granger causality relationships.

Keywords:

Nonlinear autoregressive models; Smooth transition; Stability; Threshold.

JEL:

C32
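
A one-variable caricature of the regime mechanism: two AR(1) regimes mixed with a weight equal to the probability that a lagged variable exceeds a threshold. In the actual C-MSTAR model the weights are ex ante probabilities involving latent regime-specific variables and the contemporaneous covariance matrix; this sketch only conveys the mixing idea.

    # Two AR(1) regimes mixed by a probability-based transition weight
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(13)
    T, c, sigma = 400, 0.0, 0.5                      # threshold and noise scale
    phi1, phi2 = 0.9, -0.3                           # regime-specific AR parameters

    y = np.zeros(T)
    for t in range(1, T):
        w = norm.cdf((y[t - 1] - c) / sigma)         # smooth regime weight in [0,1]
        y[t] = ((1 - w) * phi1 * y[t - 1] + w * phi2 * y[t - 1]
                + 0.3 * rng.standard_normal())

    w_last = norm.cdf((y[-1] - c) / sigma)
    forecast = (1 - w_last) * phi1 * y[-1] + w_last * phi2 * y[-1]
    print(round(float(forecast), 4))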

  15. Forecasting Interest Rates for Future Inter-Company Loan Planning: An Alternative Approach

Date:

2009-09-16

By:

Óscar Montero

URL:

http://d.repec.org/n?u=RePEc:col:000118:005840&r=for

Inter-company loans do not draw considerable attention from Multinational Enterprises (MNEs) because they are not core business-related transactions, but planning an interest rate for them is actually important in countries with transfer pricing legislation. Nonetheless, Organisation for Economic Co-operation and Development (OECD) Guidelines do not give concrete direction about methods or approaches for incorporating the arm's length principle in interest rates for future inter-company loans. Transfer pricing analyses for inter-company loans are usually performed by applying the Comparable Uncontrolled Price Method (CUP method). The CUP method compares the interest rate of inter-company loans with statistical ranges built over interest rates of market comparable transactions. The Interquartile Range (IQR) is a common statistical range used to establish comparison. In accordance with the CUP method in transfer pricing analysis, Plunkett and Mimura mentioned that "corporate bonds markets are a rich source of data for identifying market interest rates and for which there is sufficient data to establish comparability" (Plunkett & Mimura, 2005). Additionally, Plunkett and Powell suggest, as a supplementary approach, the use of data from corporate bond yields in accordance with the credit rating of the borrower (Plunkett & Powell, 2008).
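
The interquartile-range comparison described above takes only a few lines: an inter-company rate is consistent with the arm's length principle under the CUP method if it falls inside the IQR of comparable market rates. The yields below are made up for illustration.

    # IQR test of an inter-company rate against comparable market yields
    import numpy as np

    comparable_yields = np.array([4.1, 4.6, 4.8, 5.0, 5.3, 5.7, 6.2])  # % p.a.
    intercompany_rate = 5.1

    q25, q75 = np.percentile(comparable_yields, [25, 75])
    in_range = q25 <= intercompany_rate <= q75
    print(f"IQR = [{q25:.2f}, {q75:.2f}]  arm's length: {in_range}")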

Taken from the NEP-FOR mailing list edited by Rob Hyndman.