In this issue we have GDP Modelling with Factor Model: an Impact of Nested Data on Forecasting Accuracy, The accuracy of a forecast targeting central bank, A statistical test for forecast evaluation under a discrete loss function, Nowcasting inflation using high frequency data and more.
 GDP Modelling with Factor Model: an Impact of Nested Data on Forecasting Accuracy
Date: 
2011-04-08 
By: 
Bessonovs, Andrejs 
URL: 

Uncertainty about the optimal number of macroeconomic variables to use in a factor model is challenging, since there are no criteria stating what kind of data should be used, how many variables to employ, and whether disaggregated data improve a factor model's forecasts. The paper studies the impact of nested macroeconomic data on Latvian GDP forecasting accuracy within a factor modelling framework. Nested data means disaggregated data, or subcomponents of aggregated variables. We employ the Stock-Watson factor model to estimate factors and to make GDP projections two periods ahead. Root mean square error is employed as the standard tool to measure forecasting accuracy. Based on this empirical study we conclude that the additional information contained in disaggregated components of macroeconomic variables can be used to enhance Latvian GDP forecasting accuracy. The efficiency gain is about 0.15 to 0.20 percentage points of year-on-year quarterly growth one quarter ahead, and about half a percentage point two quarters ahead. 

Keywords: 
Factor model; forecasting; nested data; RMSE. 
JEL: 
C53 
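The accuracy measure in the abstract above is the standard root mean square error of the forecast. A minimal sketch, with hypothetical growth figures rather than the paper's Latvian data:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean square forecast error over the evaluation sample."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

# Hypothetical year-on-year quarterly GDP growth rates (percent)
actual = [2.0, 2.5, 1.8, 3.1]
model_a = [2.2, 2.4, 2.0, 2.9]  # e.g. factor model on aggregated data
model_b = [2.1, 2.5, 1.9, 3.0]  # e.g. factor model with nested (disaggregated) data

# Positive gain means the nested-data model is the more accurate one
gain = rmse(actual, model_a) - rmse(actual, model_b)
```

The "efficiency gain" quoted in the abstract is exactly such a difference of RMSEs, expressed in percentage points of growth.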
 The accuracy of a forecast targeting central bank
Date: 
2011 
By: 
Falch, Nina Skrove 
URL: 

This paper evaluates inflation forecasts made by Norges Bank, which is recognized as a successful forecast-targeting central bank. It is reasonable to expect that Norges Bank produces inflation forecasts that are on average better than other forecasts, both 'naïve' forecasts and forecasts from econometric models outside the central bank. The authors find that the superiority of the Bank's forecasts cannot be asserted when compared with genuine ex-ante real-time forecasts from an independent econometric model. The one-step Monetary Policy Report forecasts are preferable to the one-step forecasts from the outside model, but at the policy-relevant horizons (4 to 9 quarters ahead), the forecasts from the outside model are preferred by a wider margin. An explanation in terms of too high a speed of adjustment to the inflation target is supported by the evidence. Norges Bank's forecasts are convincingly better than 'naïve' forecasts over the second half of our sample, but not over the whole sample, which includes a change in the mean of inflation. 

Keywords: 
inflation forecasts, monetary policy, forecast comparison, forecast-targeting central bank, econometric models 
JEL: 
C32 
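The benchmark comparison described above, a model's forecasts against a 'naïve' random-walk forecast, can be sketched as a relative RMSE. The numbers below are illustrative, not Norges Bank data:

```python
import numpy as np

def relative_rmse(actual, model_fc, naive_fc):
    """RMSE of the model forecast relative to a naive benchmark (< 1 favours the model)."""
    a, m, n = (np.asarray(x, dtype=float) for x in (actual, model_fc, naive_fc))
    return float(np.sqrt(np.mean((a - m) ** 2)) / np.sqrt(np.mean((a - n) ** 2)))

inflation = np.array([2.0, 2.2, 2.1, 2.4, 2.3, 2.6])   # hypothetical inflation path
naive = inflation[:-1]                                  # random walk: forecast = last observation
model = np.array([2.1, 2.2, 2.3, 2.35, 2.55])           # hypothetical model forecasts
ratio = relative_rmse(inflation[1:], model, naive)      # < 1: model beats the naive benchmark
```

As the abstract notes, such a comparison is sensitive to the sample: a shift in the mean of inflation can flip which forecast wins.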
 A statistical test for forecast evaluation under a discrete loss function
Date: 
2011 
By: 
Francisco J. Eransus (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid) 
URL: 

We propose a new approach to evaluating the usefulness of a set of forecasts, based on a discrete loss function defined on the space of data and forecasts. Existing procedures for such an evaluation either do not allow for formal testing, or use test statistics based only on the frequency distribution of (data, forecast) pairs. They can easily lead to misleading conclusions in some reasonable situations, because of the way they formalize the underlying null hypothesis that the set of forecasts is not useful. Even though the ambiguity of the underlying null hypothesis precludes us from performing a standard analysis of the size and power of the tests, we obtain results suggesting that the proposed DISC test performs better than its competitors. 

Keywords: 
Forecasting Evaluation, Loss Function. 
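The details of the DISC statistic are in the paper; its building block, a discrete loss defined on (data, forecast) pairs, can be sketched as follows. The sign-based banding here is an illustrative choice, not the authors' specification:

```python
import numpy as np

def discrete_loss(actual, forecast, bands=(0.0,)):
    """Discrete loss on (data, forecast) pairs: 0 if both fall in the same band
    (with the default bands, the same sign), 1 otherwise."""
    def band(x):
        return int(np.digitize(x, bands))
    return [0 if band(a) == band(f) else 1 for a, f in zip(actual, forecast)]

# Hypothetical outcomes and forecasts; only the middle pair disagrees in sign
losses = discrete_loss([0.5, -0.2, 1.1], [0.3, 0.4, 0.9])
mean_loss = sum(losses) / len(losses)
```

A frequency-based evaluation then asks whether the mean discrete loss is lower than what an uninformative set of forecasts would produce.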
 Nowcasting inflation using high frequency data
Date: 
2011-04 
By: 
Michele Modugno (European Central Bank, DG-R/EMO, Kaiserstrasse 29, D-60311, Frankfurt am Main, Germany.) 
URL: 

This paper proposes a methodology to nowcast and forecast inflation using data sampled at a higher than monthly frequency. The nowcasting literature has focused on GDP, typically using monthly indicators to produce an accurate estimate for the current and next quarter. This paper exploits data at weekly and daily frequency to produce more accurate estimates of inflation for the current and following months. In particular, it uses the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and daily World Market Prices of Raw Materials. The data are modeled as a trading-day-frequency factor model with missing observations in a state space representation. For the estimation we adopt the methodology proposed in Banbura and Modugno (2010). In contrast to other existing approaches, the methodology used in this paper has the advantage of modeling all data within a single unified framework that nevertheless allows one to produce forecasts of all variables involved. This makes it possible to disentangle a model-based measure of "news" from each data release and subsequently to assess its impact on the forecast revision. The paper provides an illustrative example of this procedure. Overall, the results show that these data improve forecast accuracy over models that exploit only monthly data, for both the euro area and the US. JEL Classification: C53, E31, E37. 

Keywords: 
Factor Models, Forecasting, Inflation, Mixed Frequencies. 
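The state space treatment of missing observations mentioned above rests on a standard Kalman filter device: run the prediction step every period, and skip the update step whenever no release is available. A univariate local-level sketch of that device, not the paper's multivariate factor model:

```python
import numpy as np

def local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.1, a0=0.0, p0=1e6):
    """Kalman filter for a local-level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
    NaN entries (missing low-frequency releases) trigger prediction-only steps."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + sigma_eta2                 # prediction step (every period)
        if not np.isnan(obs):
            k = p / (p + sigma_eps2)       # Kalman gain
            a = a + k * (obs - a)          # update step (only when observed)
            p = (1.0 - k) * p
        filtered.append(a)
    return np.array(filtered)
```

On days without a weekly or monthly release the filtered state is simply carried forward, which is what lets mixed-frequency data live in one daily-frequency model.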
 A Bunch of Models, a Bunch of Nulls and Inference About Predictive Ability.
Date: 
2011-01 
By: 
Pablo Pincheira 
URL: 

Inference about predictive ability is usually carried out in the form of pairwise comparisons between two competing forecasting methods. Nevertheless, some interesting questions concern families of models, not just a couple of forecasting strategies. An example is the question of the predictive accuracy of pure time-series models versus models based on economic fundamentals. It is clear that an appropriate answer to this question requires comparing families of models, which may include a number of different forecasting strategies. Another usual approach in the literature consists of comparing the accuracy of a new forecasting method with a natural benchmark. Unless the econometrician is completely sure of the superiority of the benchmark over the rest of the methods available in the literature, however, he/she may want to compare the accuracy of the new forecasting model, and its extensions, against a broader set of methods. In this article we present a simple methodology to test the null hypothesis of equal predictive ability between two families of forecasting methods. Our approach is a natural extension of the White (2000) reality check in which we allow the families being compared to be populated by a large number of forecasting methods. We illustrate our testing approach with an empirical application comparing the ability of two families of models to predict headline inflation in Chile, the US, Sweden and Mexico. With this illustration we show that comparing families of models using the usual approach, based on pairwise comparisons of the best ex-post performing models in each family, may lead to conclusions that are at odds with those suggested by our approach. 
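The "usual approach" the abstract warns about, comparing only the best ex-post performer from each family, can be sketched as below; the formal test in the paper additionally accounts for the selection step, in the spirit of White's (2000) reality check. All forecasts here are made up:

```python
import numpy as np

def family_rmse(actual, family_forecasts):
    """RMSE of each method in a family; the family's ex-post best is the minimum."""
    a = np.asarray(actual, dtype=float)
    return [float(np.sqrt(np.mean((a - np.asarray(f, dtype=float)) ** 2)))
            for f in family_forecasts]

actual = [1.0, 2.0, 1.5, 2.5]
family_ts = [[1.1, 1.9, 1.6, 2.4], [0.8, 2.2, 1.4, 2.7]]    # time-series methods
family_fund = [[1.0, 2.1, 1.5, 2.5], [1.3, 1.7, 1.8, 2.2]]  # fundamentals-based methods

best_ts = min(family_rmse(actual, family_ts))
best_fund = min(family_rmse(actual, family_fund))
# Comparing best_ts against best_fund ignores that each "best" was selected ex post,
# which is precisely the bias a family-level test must correct for.
```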
 Risk Management of Risk under the Basel Accord: Forecasting Value-at-Risk of VIX Futures
Date: 
2011-02-01 
By: 
ChiaLin Chang 
URL: 

The Basel II Accord requires that banks and other Authorized Deposit-taking Institutions (ADIs) communicate their daily risk forecasts to the appropriate monetary authorities at the beginning of each trading day, using one or more risk models to measure Value-at-Risk (VaR). The risk estimates of these models are used to determine the capital requirements and associated capital costs of ADIs, depending in part on the number of previous violations, whereby realised losses exceed the estimated VaR. McAleer, Jimenez-Martin and Perez-Amaral (2009) proposed a new approach to model selection for predicting VaR, consisting of combining alternative risk models and comparing conservative and aggressive strategies for choosing between VaR models. This paper addresses the question of risk management of risk, namely the VaR of VIX futures prices. We examine how different risk management strategies performed during the 2008-09 global financial crisis (GFC). We find that an aggressive strategy of choosing the supremum of the single-model forecasts is preferred to the alternatives, and is robust during the GFC. However, this strategy implies relatively high numbers of violations and accumulated losses, though these are admissible under the Basel II Accord. 

Keywords: 
Median strategy; Value-at-Risk (VaR); daily capital charges; violation penalties; optimizing strategy; aggressive risk management; conservative risk management; Basel II Accord; VIX futures; global financial crisis (GFC) 
JEL: 
G32 
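The conservative, median and aggressive (supremum) strategies compared in the paper amount to picking different order statistics of the single-model VaR forecasts. A sketch with hypothetical numbers, quoting VaR as a negative daily return:

```python
import numpy as np

# Hypothetical one-day-ahead VaR forecasts (negative returns) from several models,
# e.g. different GARCH-type specifications
var_forecasts = np.array([-2.1, -1.8, -2.6])

aggressive = float(np.max(var_forecasts))     # supremum: least conservative, lowest capital charge
conservative = float(np.min(var_forecasts))   # infimum: most conservative
median_strategy = float(np.median(var_forecasts))

def is_violation(realized_return, var):
    """A violation occurs when the realised loss exceeds the forecast VaR."""
    return realized_return < var
```

The trade-off the abstract describes follows directly: the supremum minimises daily capital charges but, being the least conservative bound, produces the most violations.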
 Modelling and Forecasting Noisy Realized Volatility
Date: 
2011 
By: 
Manabu Asai (Faculty of Economics Soka University, Japan) 
URL: 

Several methods have recently been proposed in the ultra-high-frequency financial literature to remove the effects of microstructure noise and to obtain consistent estimates of the integrated volatility (IV) as a measure of ex-post daily volatility. Even bias-corrected and consistent realized volatility (RV) estimates of IV can contain residual microstructure noise and other measurement errors. Such noise is called "realized volatility error". As such errors are generally ignored, we need to take account of them in estimating and forecasting IV. This paper investigates through Monte Carlo simulations the effects of RV errors on estimating and forecasting IV with RV data. It is found that: (i) neglecting RV errors can lead to serious bias in estimators; (ii) the effects of RV errors on one-step-ahead forecasts are minor when consistent estimators are used and when the number of intraday observations is large; and (iii) even the partially corrected R² recently proposed in the literature should be fully corrected for evaluating forecasts. This paper proposes a full correction of R². An empirical example for S&P 500 data is used to demonstrate the techniques developed in the paper. 

Keywords: 
realized volatility; diffusion; financial econometrics; measurement errors; forecasting; model evaluation; goodness-of-fit. 
JEL: 
G32 
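Realized volatility as used above is the sum of squared intraday log returns over a day. A minimal sketch of the raw estimator; the noise-robust, bias-corrected estimators discussed in the paper would replace this simple sum:

```python
import numpy as np

def realized_volatility(intraday_prices):
    """Raw realized volatility: sum of squared intraday log returns over one day."""
    logp = np.log(np.asarray(intraday_prices, dtype=float))
    returns = np.diff(logp)
    return float(np.sum(returns ** 2))

# Hypothetical intraday price path for one trading day
prices = [100.0, 100.5, 99.8, 100.2, 100.1]
rv = realized_volatility(prices)
```

The paper's point is that even after bias correction, such an RV series measures integrated volatility with error, and that error should be modelled rather than ignored.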
 Implied Probability Distribution in Financial Options
Date: 
2010-10 
By: 
Luis Ceballos 
URL: 

The objective of this work is to learn about the information contained in local-market financial options on the peso-dollar parity, and to test whether this is a relevant source that should be considered by financial agents when forming expectations about the future path of the underlying asset. The main methodologies for estimating the probability distribution function derived from option prices are reviewed. The present article relies on the methodology developed by Malz (1997) which, in contrast with others, makes no assumptions on the underlying asset and requires very few market quotes. The main results of this research are twofold. First, the implied volatility in options does not perform better than alternative methods, and a significant bias and inefficiency component is found. Second, the interval forecasts derived from the probability distributions show that only the three-month-ahead forecast seems to be optimal, in the sense of lacking both forecast-error lag dependence and dependence on volatility, while the one- and six-month-ahead forecasts do exhibit these dependencies. 
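Malz (1997) is one of several ways to back out the risk-neutral density from option prices. The relation underlying this family of methods is the Breeden-Litzenberger result, pdf(K) = exp(r*tau) * d²C/dK², sketched here with a plain second difference on an evenly spaced strike grid; this is illustrative and is not the smoothing scheme used in the article:

```python
import numpy as np

def implied_density(strikes, call_prices, r=0.0, tau=1.0):
    """Breeden-Litzenberger: risk-neutral pdf ~ exp(r*tau) * d2C/dK2,
    approximated by central second differences on an evenly spaced strike grid."""
    K = np.asarray(strikes, dtype=float)
    C = np.asarray(call_prices, dtype=float)
    dK = K[1] - K[0]
    pdf = np.exp(r * tau) * (C[:-2] - 2.0 * C[1:-1] + C[2:]) / dK ** 2
    return K[1:-1], pdf   # density defined on the interior strikes

# Degenerate example: underlying pinned at 100, so C(K) = max(100 - K, 0)
strikes = [98.0, 99.0, 100.0, 101.0, 102.0]
calls = [2.0, 1.0, 0.0, 0.0, 0.0]
grid, pdf = implied_density(strikes, calls)   # pdf puts all mass at K = 100
```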
 Forecasting the U.S. Term Structure of Interest Rates using a Macroeconomic Smooth Dynamic Factor Model
Date: 
2011-04-07 
By: 
Siem Jan Koopman (VU University Amsterdam) 
URL: 

We extend the class of dynamic factor yield curve models to include macroeconomic factors. We benefit from recent developments in the dynamic factor literature for extracting the common factors from a large panel of macroeconomic series and for estimating the parameters in the model. We include these factors in a dynamic factor model for the yield curve, in which we model the salient structure of the yield curve by imposing smoothness restrictions on the yield factor loadings via cubic spline functions. We carry out a likelihood-based analysis in which we jointly consider a factor model for the yield curve, a factor model for the macroeconomic series, and their dynamic interactions with the latent dynamic factors. We illustrate the methodology by forecasting the U.S. term structure of interest rates. For this empirical study we use a monthly time series panel of unsmoothed Fama-Bliss zero yields for treasuries of different maturities between 1970 and 2009, which we combine with a macro panel of 110 series over the same sample period. We show that the relation between the macroeconomic factors and the yield curve data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Finally, we perform an extensive out-of-sample forecasting study. Our main conclusion is that macroeconomic variables can lead to more accurate yield curve forecasts. 

Keywords: 
Fama-Bliss data set; Kalman filter; Maximum likelihood; Yield curve 
JEL: 
C32 
 Modelling Regime Switching and Structural Breaks with an Infinite Dimension Markov Switching Model
Date: 
2011-04-15 
By: 
Yong Song 
URL: 

This paper proposes an infinite dimension Markov switching model to accommodate regime switching dynamics, structural breaks, or a combination of both, in a Bayesian framework. Two parallel hierarchical structures, one governing the transition probabilities and another governing the parameters of the conditional data density, keep the model parsimonious and improve forecasts. This nonparametric approach allows for regime persistence and estimates the number of states automatically. A global identification algorithm for structural changes versus regime switching is presented. Applications to U.S. real interest rates and inflation compare the new model to existing parametric alternatives. Besides identifying episodes of regime switching and structural breaks, the hierarchical distribution governing the parameters of the conditional data density provides significant gains in forecasting precision. 

Keywords: 
hidden Markov model; Bayesian nonparametrics; Dirichlet process 
JEL: 
C51 
 Classical time-varying FAVAR models – estimation, forecasting and structural analysis
Date: 
2011 
By: 
Eickmeier, Sandra 
URL: 

We propose a classical approach to estimating factor-augmented vector autoregressive (FAVAR) models with time variation in the factor loadings, in the factor dynamics, and in the variance-covariance matrix of innovations. When the time-varying FAVAR is estimated on a large quarterly dataset of US variables from 1972 to 2007, the results indicate some changes in the factor dynamics, and more marked variation in the factors' shock volatility and their loading parameters. Forecasts from the time-varying FAVAR are more accurate than those from a constant-parameter FAVAR for most variables and horizons when computed in-sample, and for some variables, mostly financial indicators, in pseudo real time. Finally, we use the time-varying FAVAR to assess how monetary transmission to the economy has changed. We find substantial time variation in the volatility of monetary policy shocks, and we observe that the reaction of GDP, the GDP deflator, inflation expectations and long-term interest rates to an equally-sized monetary policy shock has decreased since the early 1980s. 

Keywords: 
FAVAR, time-varying parameters, monetary transmission, forecasting 
JEL: 
C3 
 Nowcasting With Google Trends in an Emerging Market
Date: 
2010-07 
By: 
Yan Carrière-Swallow 
URL: 

Most economic variables are released with a lag, making it difficult for policymakers to make an accurate assessment of current conditions. This paper explores whether observing Internet browsing habits can inform practitioners about real-time aggregate consumer behavior in an emerging market. Using data on Google search queries, we introduce a simple index of interest in automobile purchases in Chile and test whether it improves the fit and efficiency of nowcasting models for automobile sales. We also examine to what extent our index helps us identify turning points in sales data. Despite relatively low rates of Internet usage among the population, we find that models incorporating our Google Trends Automotive Index outperform benchmark specifications in both in-sample and out-of-sample nowcasts while providing substantial gains in information delivery times. 
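The fit-improvement claim has a mechanical counterpart: adding a contemporaneous search-interest regressor to an autoregressive nowcast can only lower in-sample residual variance, so the interesting test is the out-of-sample one. A sketch with made-up numbers; the paper's index and Chilean sales data are not reproduced here:

```python
import numpy as np

# Hypothetical monthly log automobile sales and a Google search-interest index
sales = np.array([4.2, 4.3, 4.1, 4.5, 4.4, 4.6, 4.7, 4.5])
gt_index = np.array([50.0, 55.0, 48.0, 60.0, 58.0, 63.0, 66.0, 61.0])

y = sales[1:]                                            # sales to be nowcast
X0 = np.column_stack([np.ones(len(y)), sales[:-1]])      # AR(1) benchmark
X1 = np.column_stack([X0, gt_index[1:]])                 # add the (earlier-available) index

b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
rss0 = float(np.sum((y - X0 @ b0) ** 2))
rss1 = float(np.sum((y - X1 @ b1) ** 2))                 # never exceeds rss0 in-sample
```

The timing gain the abstract emphasises comes from gt_index being observable before the official sales release, so the augmented nowcast is available weeks earlier.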
Taken from the NEP-FOR mailing list edited by Rob Hyndman.