In this issue we have "Nonparametric time series forecasting with dynamic updating", "Forecasting Large Datasets with Bayesian Reduced Rank Multivariate Models", "Forecast performance of implied volatility and the impact of the volatility risk premium", and more.
Date: 
2009-08 
By: 
Han Lin Shang 
URL: 

We present a nonparametric method to forecast a seasonal univariate time series, and propose four dynamic updating methods to improve point forecast accuracy. Our methods treat a seasonal univariate time series as a functional time series. We propose first to reduce dimensionality by applying functional principal component analysis to the historical observations, and then to use univariate time series forecasting and functional principal component regression techniques. When data in the most recent year are only partially observed, we improve point forecast accuracy using dynamic updating methods. We also introduce a nonparametric approach to construct prediction intervals of updated forecasts, and compare the empirical coverage probability with an existing parametric method. Our approaches are data-driven and computationally fast, and hence feasible for real-time, high-frequency dynamic updating. The methods are demonstrated using monthly sea surface temperatures from 1950 to 2008. 

Keywords: 
Functional time series, Functional principal component analysis, Ordinary least squares, Penalized least squares, Ridge regression, Sea surface temperatures, Seasonal time series. 
JEL: 
C14 
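The reduce-then-forecast pipeline described in the abstract above can be sketched as follows. This is a minimal illustration on simulated data, not the authors' implementation; in particular, the random-walk-with-drift forecast of the component scores is a simple stand-in for the richer univariate forecasting methods the paper uses.

```python
import numpy as np

# Hypothetical illustration: treat a monthly series as a functional time
# series (one curve of 12 observations per year), reduce dimension with
# principal components, and forecast the component scores.
rng = np.random.default_rng(0)
years, months = 30, 12
seasonal = np.sin(2 * np.pi * np.arange(months) / months)
X = seasonal + 0.1 * np.arange(years)[:, None] \
    + 0.2 * rng.standard_normal((years, months))   # year x month data matrix

mean_curve = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_curve, full_matrices=False)
K = 2                                  # number of retained components
scores = U[:, :K] * s[:K]              # one score series per component

# Forecast each score series with a simple random walk with drift
# (a stand-in for the paper's univariate forecasting methods).
next_scores = scores[-1] + np.diff(scores, axis=0).mean(axis=0)
forecast_curve = mean_curve + next_scores @ Vt[:K]
print(forecast_curve.shape)            # the 12-month curve for the coming year
```

Dynamic updating then amounts to revising `next_scores` as the months of the current year are observed, which is where the paper's regression-based updating methods come in.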
Date: 
2009 
By: 
Andrea Carriero 
URL: 

The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately substantially improves the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the way to using large-scale reduced rank models for empirical analysis. 

Keywords: 
Bayesian VARs, factor models, forecasting, reduced rank 
Date: 
2009-07-21 
By: 
Ralf Becker (Manchester) 
URL: 

Forecasting volatility has received a great deal of research attention, with the relative performance of econometric models based on time-series data and option-implied volatility forecasts often being considered. While many studies find that implied volatility is the preferred approach, a number of issues remain unresolved. Implied volatilities are risk-neutral forecasts of spot volatility, whereas time-series models are estimated on risk-adjusted or real-world data of the underlying. Recently, an intuitive method has been proposed to adjust these risk-neutral forecasts into their risk-adjusted equivalents, possibly improving their forecast accuracy. By utilising recent econometric advances, this paper considers whether these risk-adjusted forecasts are statistically superior to the unadjusted forecasts, as well as to a wide range of model-based forecasts. It is found that an unadjusted risk-neutral implied volatility is an inferior forecast. However, after adjusting for the risk premium it is of equal predictive accuracy relative to a number of model-based forecasts. 

Keywords: 
Implied volatility, volatility forecasts, volatility models, volatility risk premium, model confidence sets 
JEL: 
C12 
Date: 
2009-09-24 
By: 
Tommaso Proietti 
URL: 
http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2009_24&r=for 
The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multi-step Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multi-step Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification, and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth. 

Keywords: 
Trend and Cycle, Forecasting, Filtering. 
JEL: 
C22 
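The contrast between the two forecast functions above can be sketched on a simple AR(1). This is an illustrative simulation, not the paper's procedure: for a correctly specified AR(1) the iterated and direct forecasts nearly coincide, and it is under misspecification that the direct regression, which targets the h-step horizon directly, gains its advantage.

```python
import numpy as np

# Hypothetical sketch: iterated (chain rule on a one-step AR coefficient)
# versus direct (regress y_{t+h} on y_t for each horizon h) forecasts.
rng = np.random.default_rng(1)
n, phi = 500, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

def ols_slope(x, z):
    """OLS slope of z on x (no intercept; the simulated series is mean zero)."""
    return float(x @ z / (x @ x))

phi1 = ols_slope(y[:-1], y[1:])            # one-step AR coefficient
h = 4
iterated = phi1 ** h * y[-1]               # chain-rule h-step forecast
direct = ols_slope(y[:-h], y[h:]) * y[-1]  # direct h-step regression
print(iterated, direct)                    # close here; they diverge under misspecification
```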
Date: 
2009 
By: 
Jan J. J. Groen 
URL: 

This paper revisits inflation forecasting using reduced-form Phillips curve forecasts, that is, inflation forecasts that use activity and expectations variables. We propose a Phillips-curve-type model that results from averaging across different regression specifications selected from a set of potential predictors. The set of predictors includes lagged values of inflation, a host of real-activity data, term structure data, nominal data, and surveys. In each individual specification, we allow for stochastic breaks in regression parameters, where the breaks are described as occasional shocks of random magnitude. As such, our framework simultaneously addresses the structural change and model uncertainty that unavoidably affect Phillips-curve-based predictions. We use this framework to describe personal consumption expenditure (PCE) deflator and GDP deflator inflation rates for the United States in the post-World War II period. Over the full 1960-2008 sample, the framework indicates several structural breaks across different combinations of activity measures. These breaks often coincide with policy regime changes and oil price shocks, among other important events. In contrast to many previous studies, we find less evidence of autonomous variance breaks and inflation gap persistence. Through a real-time out-of-sample forecasting exercise, we show that our model specification generally provides superior one-quarter-ahead and one-year-ahead forecasts for quarterly inflation relative to an extended range of forecasting models that are typically used in the literature. 

Keywords: 
Inflation (Finance) ; Forecasting ; Phillips curve ; Regression analysis 
Date: 
2009-09 
By: 
Jonas Dovern (Kiel Economics) 
URL: 

Using the Consensus Economics dataset with individual expert forecasts from G7 countries, we investigate determinants of disagreement (cross-sectional dispersion of forecasts) about six key economic indicators. Disagreement about real variables (GDP, consumption, investment and unemployment) has a distinct dynamic from disagreement about nominal variables (inflation and interest rate). Disagreement about real variables intensifies strongly during recessions, including the current one (by about 40 percent in terms of the interquartile range). Disagreement about nominal variables rises with their level, has fallen after 1998 or so (by 30 percent), and is considerably lower under independent central banks (by 35 percent). Cross-sectional dispersion for both groups increases with uncertainty about the underlying actual indicators, though to a lesser extent for nominal series. Country-by-country regressions for inflation and interest rates reveal that both the level of disagreement and its sensitivity to macroeconomic variables tend to be larger in Italy, Japan and the United Kingdom, where central banks became independent only around the mid-1990s. These findings suggest that more credible monetary policy can substantially contribute to anchoring of expectations about nominal variables; its effects on disagreement about real variables are moderate. 

Keywords: 
disagreement, survey expectations, monetary policy, forecasting 
JEL: 
E31 
Date: 
2009 
By: 
Jan J. J. Groen 
URL: 

In this paper, we seek to produce forecasts of commodity price movements that can systematically improve on naive statistical benchmarks. We revisit how well changes in commodity currencies perform as potential efficient predictors of commodity prices, a view emphasized in the recent literature. In addition, we consider different types of factor-augmented models that use information from a large data set containing a variety of indicators of supply and demand conditions across major developed and developing countries. These factor-augmented models use either standard principal components or the more novel partial least squares (PLS) regression to extract dynamic factors from the data set. Our forecasting analysis considers ten alternative indices and sub-indices of spot prices for three different commodity classes across different periods. We find that, of all the approaches, the exchange-rate-based model and the PLS factor-augmented model are more likely to outperform the naive statistical benchmarks, although PLS factor-augmented models usually have a slight edge over the exchange-rate-based approach. However, across our range of commodity price indices we are not able to generate out-of-sample forecasts that, on average, are systematically more accurate than predictions based on a random walk or autoregressive specifications. 

Keywords: 
Commodity exchanges ; Foreign exchange rates ; Commodity futures ; Regression analysis ; Forecasting 
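The difference between the two factor extraction methods above can be sketched in a few lines. This is a minimal illustration on simulated data (the variable names and data-generating process are assumptions, not the paper's data): the first principal component ignores the target series when building the factor, while the first PLS component weights each predictor by its covariance with the target.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 25
X = rng.standard_normal((T, N))            # panel of standardized predictors
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(T)  # target, e.g. a commodity return

# First principal component factor: built from X alone
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc_factor = X @ Vt[0]

# First PLS factor: weight each predictor by its covariance with y
w = X.T @ y
w /= np.linalg.norm(w)
pls_factor = X @ w

def r2(f, target):
    """In-sample R^2 of a univariate regression of target on factor f."""
    beta = f @ target / (f @ f)
    resid = target - beta * f
    return 1 - resid.var() / target.var()

print(r2(pc_factor, y), r2(pls_factor, y))
```

Because only two of the 25 predictors carry signal here, the target-aware PLS factor fits far better in-sample than the target-blind principal component.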
Date: 
2009 
By: 
JuanAngel JimenezMartin (Dpto. de Fundamentos de Análisis Económico II, Universidad Complutense) 
URL: 

When dealing with market risk under the Basel II Accord, variation pays in the form of lower capital requirements and higher profits. Typically, GARCH-type models are chosen to forecast Value-at-Risk (VaR) using a single risk model. In this paper we illustrate two useful variations on the standard mechanism for choosing forecasts, namely: (i) combining different forecast models for each period, such as a daily model that forecasts the supremum or infimum value of the VaR; (ii) alternatively, selecting a single model to forecast VaR and then modifying the daily forecast, depending on the recent history of violations under the Basel II Accord. We illustrate these points using the Standard and Poor's 500 Composite Index. In many cases we find significant decreases in the capital requirements, while incurring a number of violations that stays within the Basel II Accord limits. 
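Strategy (i) above can be sketched as follows. The numbers and model names are purely illustrative, not the paper's data: given several daily VaR forecasts for the same portfolio, the infimum (most negative VaR) is the most conservative combination and the supremum the most aggressive.

```python
import numpy as np

# Hypothetical daily VaR forecasts (as negative returns) from three models,
# e.g. different GARCH-type specifications; more negative = more conservative.
var_forecasts = np.array([
    [-2.1, -1.8, -2.4],   # day 1
    [-2.5, -2.0, -2.2],   # day 2
])
# Infimum combination: highest capital charge, fewest violations.
conservative = var_forecasts.min(axis=1)   # day 1: -2.4, day 2: -2.5
# Supremum combination: lowest capital charge, more violations.
aggressive = var_forecasts.max(axis=1)     # day 1: -1.8, day 2: -2.0
print(conservative, aggressive)
```

The trade-off the paper studies is exactly this one: a more aggressive combination lowers capital requirements but risks breaching the Basel II violation limits.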
Date: 
2009-09 
By: 
Manabu Asai (Faculty of Economics, Soka University) 
URL: 

Several methods have recently been proposed in the ultra-high-frequency financial literature to remove the effects of microstructure noise and to obtain consistent estimates of the integrated volatility (IV) as a measure of ex post daily volatility. Even bias-corrected and consistent (modified) realized volatility (RV) estimates of the integrated volatility can contain residual microstructure noise and other measurement errors. Such noise is called "realized volatility error". Since such measurement errors are typically ignored, we need to take account of them in estimating and forecasting IV. This paper investigates through Monte Carlo simulations the effects of RV errors on estimating and forecasting IV with RV data. It is found that: (i) neglecting RV errors can lead to serious bias in estimators due to model misspecification; (ii) the effects of RV errors on one-step-ahead forecasts are minor when consistent estimators are used and when the number of intraday observations is large; and (iii) even the partially corrected R2 recently proposed in the literature should be fully corrected for evaluating forecasts. This paper proposes a full correction of R2, which can be applied to linear and nonlinear, short and long memory models. An empirical example for S&P 500 data is used to demonstrate that neglecting RV errors can lead to serious bias in estimating the model of integrated volatility, and that the new method proposed here can eliminate the effects of the RV noise. The empirical results also show that the full correction of R2 is necessary for an accurate description of goodness-of-fit. 
Date: 
2009 
By: 
Vladimir Kuzin 
URL: 

This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a case relevant for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area on a monthly basis, using a set of 20 monthly indicators. It turns out that the two approaches are more complementary than substitutes, since MF-VAR tends to perform better for longer horizons, whereas MIDAS performs better for shorter horizons. 

Keywords: 
nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS 
JEL: 
E37 
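The exponential lag polynomial that makes MIDAS parsimonious can be sketched as follows: all K high-frequency lag weights are generated by just two parameters, so adding more monthly lags does not add parameters to the quarterly regression. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights: w_k proportional to exp(theta1*k + theta2*k^2),
    normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# e.g. 12 monthly lags entering a quarterly GDP growth regression
w = exp_almon_weights(0.1, -0.05, 12)      # illustrative parameter values
x_monthly = np.random.default_rng(3).standard_normal(12)  # latest 12 monthly readings
midas_regressor = w @ x_monthly            # single weighted regressor for the quarterly model
print(w.sum())                             # approximately 1.0 (weights are normalized)
```

In estimation, theta1 and theta2 are fitted jointly with the regression slope by nonlinear least squares, which is how MIDAS sidesteps the curse of dimensionality that the unrestricted MF-VAR faces.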
Date: 
2009 
By: 
Jeroen V.K. Rombouts 
URL: 

While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher order moments of the risk-neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk-neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared to a benchmark model in terms of dollar losses and the ability to explain the smirk in implied volatilities. 

Keywords: 
Bayesian inference, option pricing, finite mixture models, out-of-sample prediction, GARCH models 
JEL: 
C11 
Date: 
2009-09 
By: 
Wolfgang Karl Härdle 
URL: 
http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009044&r=for 
We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model's forecasting power can be used to improve optimal order execution strategies. 

Keywords: 
limit order book, liquidity risk, semiparametric model, factor structure, prediction 
JEL: 
C14 
Date: 
2009-05 
By: 
Christian Friedrich 
URL: 

The paper examines the informational content of a series of macroeconomic indicator variables with the intention to predict stock market downturns – colloquially also referred to as 'bear markets' – for G7 countries. The sample consists of monthly stock market indices and a set of exogenous indicator variables that are subject to examination, ranging from January 1970 to September 2008. The methodological approach is twofold. In the first step, a modified version of the Bry-Boschan business cycle dating algorithm is used to identify bull and bear markets from the data by creating dummy variable series. In the second step, a substantial number of probit estimations is carried out by regressing the newly identified dummy variable series on different specifications of indicator variables. By applying widely used in- and out-of-sample measures, the specifications are evaluated and the forecasting performance of the indicators is assessed. The results are mixed. While industrial production and money stock measures seem to have no predictive power, short- and long-term interest rates, term spreads as well as the unemployment rate exhibit some. Here, it is clearly possible to extract some informational content even three months in advance and so to beat the predictions made by a recursively estimated constant. 

Keywords: 
Bear Market Predictions, Bry-Boschan, Probit Model 
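The core of Bry-Boschan-type dating algorithms is a local-extremum rule, which can be sketched as follows. This is an illustrative simplification on simulated data: the full algorithm adds censoring rules (minimum phase and cycle lengths, enforced alternation of peaks and troughs) that are omitted here.

```python
import numpy as np

def local_extrema(index, window=8):
    """A month is a candidate peak (trough) if it is the maximum (minimum)
    of the index within a +/- `window` month band around it."""
    peaks, troughs = [], []
    for t in range(window, len(index) - window):
        segment = index[t - window : t + window + 1]
        if index[t] == segment.max():
            peaks.append(t)
        elif index[t] == segment.min():
            troughs.append(t)
    return peaks, troughs

# Simulated 10-year monthly "stock index" with a 4-year cycle plus noise
t = np.arange(120)
index = np.sin(2 * np.pi * t / 48) + 0.05 * np.random.default_rng(4).standard_normal(120)
peaks, troughs = local_extrema(index)
print(peaks, troughs)   # candidate turning points, before censoring rules
```

Bull and bear market dummy series are then built by marking the months between a peak and the following trough as "bear", which is what the probit regressions above take as their dependent variable.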
Date: 
2009-03 
By: 
Michael Dueker 
URL: 

In this paper we propose a contemporaneous threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are a function of the regime-specific contemporaneous variance-covariance matrix, the model can account for contemporaneous regime-specific comovements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates and discussing the regime-specific Granger causality relationships. 

Keywords: 
Nonlinear autoregressive models; Smooth transition; Stability; Threshold. 
JEL: 
C32 
Date: 
2009-09-16 
By: 
Óscar Montero 
URL: 

Intercompany loans do not draw considerable attention from Multinational Enterprises (MNEs) because they are not core business-related transactions, but planning an interest rate for them is actually important in countries with transfer pricing legislation. Nonetheless, Organisation for Economic Co-operation and Development (OECD) Guidelines give no concrete direction about methods or approaches for incorporating the arm's length principle into interest rates for future intercompany loans. Transfer pricing analysis for intercompany loans is usually performed by applying the Comparable Uncontrolled Price (CUP) method. The CUP method compares the interest rate of intercompany loans with statistical ranges built over interest rates of comparable market transactions. The interquartile range (IQR) is a common statistical range for establishing the comparison. In accordance with the CUP method in transfer pricing analysis, Plunkett and Mimura note that "corporate bonds markets are a rich source of data for identifying market interest rates and for which there is sufficient data to establish comparability" (Plunkett & Mimura, 2005). Additionally, Plunkett and Powell suggest the supplementary use of data from corporate bond yields in accordance with the credit rating of the borrower (Plunkett & Powell, 2008). 
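The CUP-style IQR comparison described above can be sketched in a few lines. The yields below are invented for illustration only; in practice they would be drawn from corporate bonds with the borrower's credit rating, as the cited authors suggest.

```python
import numpy as np

# Hypothetical yields (% p.a.) on comparable corporate bonds, same credit rating
comparable_yields = np.array([5.1, 5.4, 5.8, 6.0, 6.3, 6.9, 7.2])
q1, q3 = np.percentile(comparable_yields, [25, 75])  # interquartile range bounds

# An intercompany rate inside the IQR is taken to satisfy the arm's length principle
intercompany_rate = 6.1
within_arms_length = q1 <= intercompany_rate <= q3
print(q1, q3, within_arms_length)
```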
Taken from the NEP-FOR mailing list edited by Rob Hyndman.