Impact of Nested Data on Forecasting Accuracy

In this issue we have GDP Modelling with Factor Model: an Impact of Nested Data on Forecasting Accuracy; The accuracy of a forecast targeting central bank; A statistical test for forecast evaluation under a discrete loss function; Nowcasting inflation using high frequency data; and more.

 

  1. GDP Modelling with Factor Model: an Impact of Nested Data on Forecasting Accuracy

Date:

2011-04-08

By:

Bessonovs, Andrejs

URL:

http://d.repec.org/n?u=RePEc:pra:mprapa:30211&r=for

Uncertainty about the optimal number of macroeconomic variables to use in a factor model is challenging, since there is no criterion stating what kind of data should be used, how many variables to employ, and whether disaggregated data improve the factor model's forecasts. The paper studies the impact of nested macroeconomic data on Latvian GDP forecasting accuracy within a factor modelling framework. Nested data means disaggregated data, i.e. sub-components of aggregated variables. We employ the Stock-Watson factor model to estimate factors and to make GDP projections two periods ahead. Root mean square error is used as the standard tool to measure forecasting accuracy. From this empirical study we conclude that the additional information contained in disaggregated components of macroeconomic variables can be used to enhance Latvian GDP forecasting accuracy. The gain in forecast accuracy is about 0.15-0.20 percentage points of year-on-year quarterly growth one quarter ahead, and about half a percentage point two quarters ahead.
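
As a rough Python sketch of the type of exercise described above (the paper's data and exact specification are not reproduced), the following uses synthetic data to extract principal-component factors from a standardized panel, the usual Stock-Watson device, and compares the RMSE of one- and two-quarter-ahead GDP forecasts from an aggregates-only panel against a panel augmented with disaggregated sub-components. All names and dimensions are illustrative assumptions.

    # Sketch only: principal-component factors and recursive h-step-ahead RMSE.
    # Synthetic data stand in for the Latvian panel used in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_agg, n_sub = 60, 20, 40                 # quarters, aggregates, sub-components
    X_agg = rng.standard_normal((T, n_agg))      # aggregated indicators (assumed stationary)
    X_sub = rng.standard_normal((T, n_sub))      # disaggregated sub-components ("nested" data)
    gdp = rng.standard_normal(T)                 # y/y quarterly GDP growth

    def pc_factors(X, k):
        """First k principal components of a standardized panel."""
        Z = (X - X.mean(0)) / X.std(0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        return Z @ Vt[:k].T

    def rmse_h_ahead(X, y, k=2, h=1, first_fcst=40):
        """RMSE of recursive h-step-ahead forecasts of y from factors of X."""
        errors = []
        for t in range(first_fcst, len(y) - h):
            F = pc_factors(X[: t + 1], k)
            beta, *_ = np.linalg.lstsq(F[: t + 1 - h], y[h : t + 1], rcond=None)
            errors.append(y[t + h] - F[t] @ beta)
        return np.sqrt(np.mean(np.square(errors)))

    for h in (1, 2):
        print(h, rmse_h_ahead(X_agg, gdp, h=h),
              rmse_h_ahead(np.hstack([X_agg, X_sub]), gdp, h=h))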

Keywords:

Factor model; forecasting; nested data; RMSE.

JEL:

C53

  2. The accuracy of a forecast targeting central bank

Date:

2011

By:

Falch, Nina Skrove
Nymoen, Ragnar

URL:

http://d.repec.org/n?u=RePEc:zbw:ifwedp:20116&r=for

This paper evaluates inflation forecasts made by Norges Bank, which is recognized as a successful forecast targeting central bank. It is reasonable to expect that Norges Bank produces inflation forecasts that are on average better than other forecasts, both 'naïve' forecasts and forecasts from econometric models outside the central bank. The authors find that the superiority of the Bank's forecasts cannot be asserted when compared with genuine ex-ante real-time forecasts from an independent econometric model. The 1-step Monetary Policy Report forecasts are preferable to the 1-step forecasts from the outside model, but for the policy-relevant horizons (4 to 9 quarters ahead), the forecasts from the outside model are preferred by a wider margin. An explanation in terms of too high a speed of adjustment to the inflation target is supported by the evidence. Norges Bank's forecasts are convincingly better than 'naïve' forecasts over the second half of the sample, but not over the whole sample, which includes a change in the mean of inflation.
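
The evaluation procedure is not spelled out in the abstract; the sketch below shows one standard way of formalizing such a comparison, a Diebold-Mariano-type test on squared forecast errors, applied to synthetic stand-ins for the central-bank and outside-model errors. It illustrates the kind of test used in forecast comparisons, not the authors' own procedure.

    # Sketch: Diebold-Mariano-type comparison of two sets of inflation forecast errors.
    import numpy as np
    from scipy import stats

    def dm_stat(e1, e2, h=1):
        """DM statistic for equal squared-error loss; Bartlett long-run variance, h-1 lags."""
        d = e1**2 - e2**2                         # loss differential
        n = len(d)
        d_c = d - d.mean()
        gamma = [np.dot(d_c[l:], d_c[:n - l]) / n for l in range(h)]
        lrv = gamma[0] + 2 * sum((1 - l / h) * gamma[l] for l in range(1, h))
        stat = d.mean() / np.sqrt(lrv / n)
        return stat, 2 * (1 - stats.norm.cdf(abs(stat)))   # two-sided p-value

    rng = np.random.default_rng(1)
    actual = rng.standard_normal(60)
    e_bank = actual - (actual + 0.30 * rng.standard_normal(60))    # stand-in central-bank errors
    e_model = actual - (actual + 0.25 * rng.standard_normal(60))   # stand-in outside-model errors
    print(dm_stat(e_bank, e_model, h=4))                           # a policy-relevant horizon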

Keywords:

inflation forecasts, monetary policy, forecast comparison, forecast targeting central bank, econometric models

JEL:

C32

  3. A statistical test for forecast evaluation under a discrete loss function

Date:

2011

By:

Francisco J. Eransus (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid)
Alfonso Novales Cinca (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid)

URL:

http://d.repec.org/n?u=RePEc:ucm:doicae:1107&r=for

We propose a new approach to evaluating the usefulness of a set of forecasts, based on a discrete loss function defined on the space of data and forecasts. Existing procedures for such an evaluation either do not allow for formal testing, or use test statistics based just on the frequency distribution of (data, forecast) pairs. They can easily lead to misleading conclusions in some reasonable situations, because of the way they formalize the underlying null hypothesis that the set of forecasts is not useful. Even though the ambiguity of the underlying null hypothesis precludes us from performing a standard analysis of the size and power of the tests, we obtain results suggesting that the proposed DISC test performs better than its competitors.
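
The DISC test statistic itself is not given in the abstract, so the sketch below only illustrates the basic object it works with: a discrete loss function defined on (data, forecast) pairs, here a hypothetical three-category loss table, whose average is compared across an informative and an uninformative set of forecasts.

    # Sketch: evaluating categorical forecasts with a discrete loss function.
    # The categories and loss table are hypothetical; the DISC statistic is not reconstructed.
    import numpy as np

    LOSS = np.array([[0, 1, 2],      # rows: realized category (down / flat / up)
                     [1, 0, 1],      # columns: forecast category
                     [2, 1, 0]])

    def mean_loss(realized, forecast):
        """Average discrete loss over (data, forecast) pairs given as category indices."""
        return LOSS[realized, forecast].mean()

    rng = np.random.default_rng(2)
    realized = rng.integers(0, 3, size=200)
    informative = np.where(rng.random(200) < 0.7, realized, rng.integers(0, 3, size=200))
    uninformative = rng.integers(0, 3, size=200)        # forecasts unrelated to the data

    print("informative forecasts:  ", mean_loss(realized, informative))
    print("uninformative forecasts:", mean_loss(realized, uninformative))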

Keywords:

Forecasting Evaluation, Loss Function.

  4. Nowcasting inflation using high frequency data

Date:

2011-04

By:

Michele Modugno (European Central Bank, DG-R/EMO, Kaiserstrasse 29, D-60311, Frankfurt am Main, Germany.)

URL:

http://d.repec.org/n?u=RePEc:ecb:ecbwps:20111324&r=for

This paper proposes a methodology to nowcast and forecast inflation using data sampled at higher than monthly frequency. The nowcasting literature has focused on GDP, typically using monthly indicators to produce an accurate estimate for the current and next quarter. This paper exploits data at weekly and daily frequency to produce more accurate estimates of inflation for the current and following months. In particular, it uses the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and daily World Market Prices of Raw Materials. The data are modelled as a trading-day-frequency factor model with missing observations in a state space representation. For the estimation we adopt the methodology presented in Banbura and Modugno (2010). In contrast to other existing approaches, the methodology used in this paper has the advantage of modelling all data within a single unified framework that nevertheless allows one to produce forecasts of all the variables involved. This makes it possible to disentangle a model-based measure of "news" from each data release and subsequently to assess its impact on the forecast revision. The paper provides an illustrative example of this procedure. Overall, the results show that these data improve forecast accuracy over models that exploit data available only at monthly frequency, for both the euro area and the US.
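
The Banbura-Modugno estimation procedure is not reproduced here; as a toy illustration of its key ingredient, treating mixed-frequency data as a high-frequency series with missing observations, the sketch below runs a univariate local-level Kalman filter that simply skips the measurement update whenever an observation is missing. Parameters and data are synthetic assumptions.

    # Sketch: a local-level Kalman filter that handles missing (NaN) observations by
    # skipping the update step, the basic device behind mixed-frequency state space models.
    import numpy as np

    def kalman_local_level(y, q=0.1, r=1.0):
        """Filtered state for y_t = a_t + e_t, a_t = a_{t-1} + u_t."""
        a, P = 0.0, 1e6                      # vague initialization
        out = np.empty(len(y))
        for t, obs in enumerate(y):
            P = P + q                        # prediction step
            if not np.isnan(obs):            # update only when data are observed
                K = P / (P + r)
                a = a + K * (obs - a)
                P = (1 - K) * P
            out[t] = a
        return out

    rng = np.random.default_rng(3)
    level = np.cumsum(0.2 * rng.standard_normal(120))
    y = level + rng.standard_normal(120)
    y[::7] = np.nan                          # e.g. gaps in a daily series
    print(kalman_local_level(y)[:10])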

Keywords:

Factor Models, Forecasting, Inflation, Mixed Frequencies.

JEL:

C53, E31, E37

  5. A Bunch of Models, a Bunch of Nulls and Inference About Predictive Ability

Date:

2011-01

By:

Pablo Pincheira

URL:

http://d.repec.org/n?u=RePEc:chb:bcchwp:607&r=for

Inference about predictive ability is usually carried out in the form of pairwise comparisons between two competing forecasting methods. Nevertheless, some interesting questions concern families of models rather than just a couple of forecasting strategies. An example is the question of the predictive accuracy of pure time-series models versus models based on economic fundamentals. It is clear that an appropriate answer to this question requires comparing families of models, each of which may include a number of different forecasting strategies. Another common approach in the literature consists of comparing the accuracy of a new forecasting method with a natural benchmark. Nevertheless, unless the econometrician is completely sure about the superiority of the benchmark over the rest of the methods available in the literature, he/she may want to compare the accuracy of the new forecasting model, and its extensions, against a broader set of methods. In this article we present a simple methodology to test the null hypothesis of equal predictive ability between two families of forecasting methods. Our approach is a natural extension of the White (2000) reality check in which we allow the families being compared to be populated by a large number of forecasting methods. We illustrate our testing approach with an empirical application comparing the ability of two families of models to predict headline inflation in Chile, the US, Sweden and Mexico. With this illustration we show that comparing families of models using the usual approach, based on pairwise comparisons of the best ex-post performing models in each family, may lead to conclusions that are at odds with those suggested by our approach.
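
The authors' test statistic is not given in the abstract; the sketch below is only a schematic of a family-versus-family comparison in which each family is summarized by its best-performing member and the gap between the two is bootstrapped. It uses a plain iid bootstrap and synthetic loss series, whereas reality-check-type tests such as White (2000) rely on the stationary bootstrap.

    # Sketch: comparing two *families* of forecasting methods by bootstrapping the gap
    # between each family's best (lowest mean loss) member. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(4)
    T = 150
    loss_ts = rng.gamma(2.0, 1.0, size=(T, 5))      # losses of 5 time-series models
    loss_fund = rng.gamma(2.1, 1.0, size=(T, 8))    # losses of 8 fundamentals-based models

    def family_gap(a, b):
        """Best mean loss in family a minus best mean loss in family b."""
        return a.mean(0).min() - b.mean(0).min()

    observed = family_gap(loss_ts, loss_fund)
    boot = np.empty(2000)
    for i in range(2000):
        idx = rng.integers(0, T, size=T)            # iid resampling of time periods
        boot[i] = family_gap(loss_ts[idx], loss_fund[idx])

    p_value = np.mean(boot - boot.mean() >= observed)   # centered, one-sided
    print(observed, p_value)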

  6. Risk Management of Risk under the Basel Accord: Forecasting Value-at-Risk of VIX Futures

Date:

2011-02-01

By:

Chia-Lin Chang
Juan-Ángel Jiménez-Martín
Michael McAleer
Teodosio Pérez-Amaral

URL:

http://d.repec.org/n?u=RePEc:cbt:econwp:11/12&r=for

The Basel II Accord requires that banks and other Authorized Deposit-taking Institutions (ADIs) communicate their daily risk forecasts to the appropriate monetary authorities at the beginning of each trading day, using one or more risk models to measure Value-at-Risk (VaR). The risk estimates of these models are used to determine capital requirements and associated capital costs of ADIs, depending in part on the number of previous violations, whereby realised losses exceed the estimated VaR. McAleer, Jimenez-Martin and Perez-Amaral (2009) proposed a new approach to model selection for predicting VaR, consisting of combining alternative risk models, and comparing conservative and aggressive strategies for choosing between VaR models. This paper addresses the question of risk management of risk, namely VaR of VIX futures prices. We examine how different risk management strategies performed during the 2008-09 global financial crisis (GFC). We find that an aggressive strategy of choosing the Supremum of the single model forecasts is preferred to the other alternatives, and is robust during the GFC. However, this strategy implies relatively high numbers of violations and accumulated losses, though these are admissible under the Basel II Accord.
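
The individual VaR models are not reproduced; the snippet below illustrates only the combination step discussed above: given a panel of one-day-ahead VaR forecasts from several (here simulated) models, it forms supremum, median and infimum combinations and counts violations against realized returns. VaR is stored as a negative return quantile, so the supremum is the aggressive choice.

    # Sketch: combining one-day-ahead VaR forecasts and counting violations
    # (days on which the realized return falls below the forecast VaR).
    import numpy as np

    rng = np.random.default_rng(5)
    T, n_models = 500, 4
    returns = 0.02 * rng.standard_normal(T)
    var_fc = -np.abs(rng.normal(0.03, 0.01, size=(T, n_models)))   # stand-in 1% VaR paths

    strategies = {
        "supremum (aggressive)":   var_fc.max(axis=1),
        "median":                  np.median(var_fc, axis=1),
        "infimum (conservative)":  var_fc.min(axis=1),
    }
    for name, var in strategies.items():
        print(f"{name:24s} violations: {int(np.sum(returns < var))}")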

Keywords:

Median strategy; Value-at-Risk (VaR); daily capital charges; violation penalties; optimizing strategy; aggressive risk management; conservative risk management; Basel II Accord; VIX futures; global financial crisis (GFC)

JEL:

G32

  7. Modelling and Forecasting Noisy Realized Volatility

Date:

2011

By:

Manabu Asai (Faculty of Economics Soka University, Japan)
Michael McAleer (Econometrisch Instituut (Econometric Institute), Faculteit der Economische Wetenschappen (Erasmus School of Economics) Erasmus Universiteit, Tinbergen Instituut (Tinbergen Institute).)
Marcelo C. Medeiros (Department of Economics Pontifical Catholic University of Rio de Janeiro(PUC-Rio))

URL:

http://d.repec.org/n?u=RePEc:ucm:doicae:1109&r=for

Several methods have recently been proposed in the ultra-high-frequency financial literature to remove the effects of microstructure noise and to obtain consistent estimates of the integrated volatility (IV) as a measure of ex-post daily volatility. Even bias-corrected and consistent realized volatility (RV) estimates of IV can contain residual microstructure noise and other measurement errors. Such noise is called "realized volatility error". Since such errors are generally ignored, we need to take account of them in estimating and forecasting IV. This paper investigates through Monte Carlo simulations the effects of RV errors on estimating and forecasting IV with RV data. It is found that: (i) neglecting RV errors can lead to serious bias in estimators; (ii) the effects of RV errors on one-step-ahead forecasts are minor when consistent estimators are used and when the number of intraday observations is large; and (iii) even the partially corrected R^2 recently proposed in the literature should be fully corrected for evaluating forecasts. This paper proposes a full correction of R^2. An empirical example for S&P 500 data is used to demonstrate the techniques developed in the paper.
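
The paper's correction of R^2 is not reconstructed here; the sketch below merely shows the two raw ingredients on simulated data: realized variance computed from intraday returns, and the uncorrected R^2 of a Mincer-Zarnowitz forecast-evaluation regression, which is the quantity the paper argues must be adjusted for measurement error.

    # Sketch: daily realized variance from intraday returns and an (uncorrected)
    # Mincer-Zarnowitz R^2 for a naive volatility forecast. Simulated data only.
    import numpy as np

    rng = np.random.default_rng(6)
    days, intraday = 250, 78                       # e.g. 5-minute returns per trading day
    true_var = 0.0001 * np.exp(0.5 * rng.standard_normal(days))
    returns = rng.standard_normal((days, intraday)) * np.sqrt(true_var[:, None] / intraday)

    rv = (returns**2).sum(axis=1)                  # realized variance, one value per day
    forecast = np.roll(rv, 1)                      # naive one-day-ahead forecast
    forecast[0] = rv.mean()

    # Mincer-Zarnowitz regression: rv_t = a + b * forecast_t + error_t
    X = np.column_stack([np.ones(days), forecast])
    coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
    resid = rv - X @ coef
    print("MZ coefficients:", coef, "uncorrected R^2:", 1 - resid.var() / rv.var())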

Keywords:

realized volatility; diffusion; financial econometrics; measurement errors; forecasting; model evaluation; goodness-of-fit.

JEL:

G32

  8. Implied Probability Distribution in Financial Options

Date:

2010-10

By:

Luis Ceballos

URL:

http://d.repec.org/n?u=RePEc:chb:bcchwp:596&r=for

The objective of this work is to examine the information contained in local-market financial options on the peso-dollar parity, and to test whether this is a relevant source that financial agents should consider when forming expectations about the future path of the underlying asset. The main methodologies for estimating the probability distribution function implied by option prices are reviewed. The present article relies on the methodology developed by Malz (1997), which, in contrast with others, makes no assumptions about the underlying asset and requires very few market quotes. The main results of this research are twofold. First, the implied volatility in options does not perform better than alternative methods, and a significant bias and inefficiency component is found. Second, the interval forecasts derived from the probability distributions show that only the three-month-ahead forecast appears to be optimal, in the sense that its errors show neither lag dependence nor dependence on volatility, while the one- and six-month-ahead forecasts do exhibit these dependencies.
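
Malz's (1997) interpolation of the volatility smile from at-the-money, risk-reversal and strangle quotes is not reproduced; the sketch below shows only the generic Breeden-Litzenberger step on which such methods rest: the risk-neutral density is the discounted second derivative of the call price with respect to the strike, approximated here by finite differences on flat-volatility Black-Scholes prices with made-up peso-dollar-style inputs.

    # Sketch: risk-neutral density from call prices via Breeden-Litzenberger,
    # pdf(K) = exp(r*T) * d^2 C / dK^2, with synthetic Black-Scholes quotes.
    import numpy as np
    from scipy.stats import norm

    S0, r, T, sigma = 500.0, 0.03, 0.25, 0.12      # hypothetical inputs

    def bs_call(K):
        d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    K = np.linspace(400, 620, 221)
    dK = K[1] - K[0]
    C = bs_call(K)
    density = np.exp(r * T) * (C[2:] - 2 * C[1:-1] + C[:-2]) / dK**2   # pdf on K[1:-1]
    print("integrates to ~1:", np.trapz(density, K[1:-1]))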

  9. Forecasting the U.S. Term Structure of Interest Rates using a Macroeconomic Smooth Dynamic Factor Model

Date:

2011-04-07

By:

Siem Jan Koopman (VU University Amsterdam)
Michel van der Wel (Erasmus University Rotterdam)

URL:

http://d.repec.org/n?u=RePEc:dgr:uvatin:20110063&r=for

We extend the class of dynamic factor yield curve models to include macroeconomic factors. We benefit from recent developments in the dynamic factor literature for extracting the common factors from a large panel of macroeconomic series and for estimating the parameters of the model. We include these factors in a dynamic factor model for the yield curve, in which we model the salient structure of the yield curve by imposing smoothness restrictions on the yield factor loadings via cubic spline functions. We carry out a likelihood-based analysis in which we jointly consider a factor model for the yield curve, a factor model for the macroeconomic series, and their dynamic interactions with the latent dynamic factors. We illustrate the methodology by forecasting the U.S. term structure of interest rates. For this empirical study we use a monthly time series panel of unsmoothed Fama-Bliss zero yields for treasuries of different maturities between 1970 and 2009, which we combine with a macro panel of 110 series over the same sample period. We show that the relation between the macroeconomic factors and the yield curve data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Finally, we perform an extensive out-of-sample forecasting study. Our main conclusion is that macroeconomic variables can lead to more accurate yield curve forecasts.
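
The paper's joint likelihood-based estimation with spline-smoothed loadings is not reproduced; the sketch below illustrates only the simpler idea it builds on, extracting yield-curve factors date by date from a fixed loading matrix by cross-sectional least squares, with Nelson-Siegel-style loadings and simulated yields as stand-ins.

    # Sketch: cross-sectional extraction of level/slope/curvature factors from yields.
    # Nelson-Siegel loadings stand in for the paper's cubic-spline-smoothed loadings.
    import numpy as np

    maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120]) / 12.0   # years
    lam = 0.7
    slope = (1 - np.exp(-lam * maturities)) / (lam * maturities)
    load = np.column_stack([np.ones_like(maturities),               # level
                            slope,                                  # slope
                            slope - np.exp(-lam * maturities)])     # curvature

    rng = np.random.default_rng(7)
    true_f = np.cumsum(0.1 * rng.standard_normal((200, 3)), axis=0) + [5.0, -1.0, 0.5]
    yields = true_f @ load.T + 0.05 * rng.standard_normal((200, len(maturities)))

    factors, *_ = np.linalg.lstsq(load, yields.T, rcond=None)       # one OLS per date
    print(factors.T[:3])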

Keywords:

Fama-Bliss data set; Kalman filter; Maximum likelihood; Yield curve

JEL:

C32

  10. Modelling Regime Switching and Structural Breaks with an Infinite Dimension Markov Switching Model

Date:

2011-04-15

By:

Yong Song

URL:

http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-427&r=for

This paper proposes an infinite dimension Markov switching model to accommodate regime switching and structural break dynamics or a combination of both in a Bayesian framework. Two parallel hierarchical structures, one governing the transition probabilities and another governing the parameters of the conditional data density, keep the model parsimonious and improve forecasts. This nonparametric approach allows for regime persistence and estimates the number of states automatically. A global identification algorithm for structural changes versus regime switching is presented. Applications to U.S. real interest rates and inflation compare the new model to existing parametric alternatives. Besides identifying episodes of regime switching and structural breaks, the hierarchical distribution governing the parameters of the conditional data density provides significant gains to forecasting precision.
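
The infinite-dimension (Bayesian nonparametric) model is not reproduced; for orientation only, the sketch below runs a basic two-state Hamilton filter with fixed parameters on simulated data, the finite-state building block that the paper generalizes to an unbounded number of states.

    # Sketch: two-state Hamilton filter with known parameters, illustrative only.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    y = np.concatenate([rng.normal(4.0, 1.0, 80), rng.normal(1.0, 1.0, 80)])  # one break

    P = np.array([[0.95, 0.05],        # regime transition probabilities
                  [0.05, 0.95]])
    means, sd = np.array([4.0, 1.0]), 1.0

    prob = np.array([0.5, 0.5])        # filtered regime probabilities
    filtered = np.empty((len(y), 2))
    for t, obs in enumerate(y):
        pred = P.T @ prob                       # predict regime for period t
        lik = norm.pdf(obs, means, sd) * pred   # joint of observation and regime
        prob = lik / lik.sum()                  # Bayes update
        filtered[t] = prob
    print(filtered[[0, 79, 80, 159]])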

Keywords:

hidden Markov model; Bayesian nonparametrics; Dirichlet process

JEL:

C51

  11. Classical time-varying FAVAR models – estimation, forecasting and structural analysis

Date:

2011

By:

Eickmeier, Sandra
Lemke, Wolfgang
Marcellino, Massimiliano

URL:

http://d.repec.org/n?u=RePEc:zbw:bubdp1:201104&r=for

We propose a classical approach to estimating factor-augmented vector autoregressive (FAVAR) models with time variation in the factor loadings, in the factor dynamics, and in the variance-covariance matrix of innovations. When the time-varying FAVAR is estimated on a large quarterly dataset of US variables from 1972 to 2007, the results indicate some changes in the factor dynamics, and more marked variation in the factors' shock volatility and their loading parameters. Forecasts from the time-varying FAVAR are more accurate than those from a constant-parameter FAVAR for most variables and horizons when computed in-sample, and for some variables, mostly financial indicators, in pseudo real time. Finally, we use the time-varying FAVAR to assess how monetary transmission to the economy has changed. We find substantial time variation in the volatility of monetary policy shocks, and we observe that the reaction of GDP, the GDP deflator, inflation expectations and long-term interest rates to an equally-sized monetary policy shock has decreased since the early 1980s.
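
The time-varying estimation is not reproduced; the sketch below shows only a constant-parameter FAVAR in its usual two-step form, principal-component factors from a standardized panel followed by a VAR on the factors plus an observed policy rate, using synthetic data and statsmodels.

    # Sketch: two-step constant-parameter FAVAR on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    T, N = 140, 120
    panel = rng.standard_normal((T, N))                 # stand-in for the large quarterly panel
    policy_rate = np.cumsum(0.1 * rng.standard_normal(T))

    Z = (panel - panel.mean(0)) / panel.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    factors = Z @ Vt[:3].T                              # first three principal components

    favar = np.column_stack([factors, policy_rate])
    res = sm.tsa.VAR(favar).fit(maxlags=2)
    print(res.forecast(favar[-2:], steps=4))            # four-quarter-ahead forecasts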

Keywords:

FAVAR, time-varying parameters, monetary transmission, forecasting

JEL:

C3

  12. Nowcasting With Google Trends in an Emerging Market

Date:

2010-07

By:

Yan Carrière-Swallow
Felipe Labbé

URL:

http://d.repec.org/n?u=RePEc:chb:bcchwp:588&r=for

Most economic variables are released with a lag, making it difficult for policy-makers to make an accurate assessment of current conditions. This paper explores whether observing Internet browsing habits can inform practitioners about real-time aggregate consumer behavior in an emerging market. Using data on Google search queries, we introduce a simple index of interest in automobile purchases in Chile and test whether it improves the fit and efficiency of nowcasting models for automobile sales. We also examine to what extent our index helps us identify turning points in the sales data. Despite relatively low rates of Internet usage among the population, we find that models incorporating our Google Trends Automotive Index outperform benchmark specifications in both in-sample and out-of-sample nowcasts, while providing substantial gains in information delivery times.
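
The Google Trends Automotive Index itself is not available here; the sketch below uses a simulated search-interest series and hypothetical sales data to show the kind of comparison reported: out-of-sample RMSE of an AR(1) benchmark against the same regression augmented with the contemporaneous search index.

    # Sketch: nowcasting monthly sales growth with a (simulated) search-interest index.
    import numpy as np

    rng = np.random.default_rng(10)
    months = 72
    search = np.clip(50 + np.cumsum(rng.standard_normal(months)), 0, 100)
    sales = (0.4 * (search - search.mean()) / search.std()
             + 0.8 * rng.standard_normal(months))                   # monthly sales growth

    def oos_rmse(X, y, train=48):
        """Out-of-sample RMSE of recursive one-step OLS nowcasts."""
        errs = []
        for t in range(train, len(y)):
            beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
            errs.append(y[t] - X[t] @ beta)
        return np.sqrt(np.mean(np.square(errs)))

    benchmark = np.column_stack([np.ones(months - 1), sales[:-1]])  # AR(1) benchmark
    augmented = np.column_stack([benchmark, search[1:]])            # + search index
    print("benchmark:", oos_rmse(benchmark, sales[1:]))
    print("with search index:", oos_rmse(augmented, sales[1:]))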

Taken from the NEP-FOR mailing list edited by Rob Hyndman.