In this issue we have Forecasting national activity using lots of international predictors, Factor forecasting using international targeted predictors, Pooling versus model selection for nowcasting with many predictors, Forecasting with a DSGE Model of the Term Structure of Interest Rates, and more.
|We look at how large international datasets can improve forecasts of national activity. We use the case of New Zealand, an archetypal small open economy. We apply "data-rich" factor and shrinkage methods to tackle the problem of efficiently handling hundreds of predictor data series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. Using these methods, we assess the marginal predictive content of international data for New Zealand GDP growth. We find that exploiting a large number of international predictors can improve forecasts of our target variable, compared to more traditional models based on small datasets. This is in spite of New Zealand survey data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best. We also assess the type of international data that contains the most predictive information for New Zealand growth over our sample.|
|Keywords:||Forecasting, factor models, shrinkage methods, principal components, targeted predictors, weighted principal components, partial least squares|
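The two families of methods contrasted in the abstract above, principal-component factor forecasting and shrinkage regression, can be illustrated with a minimal sketch on simulated data. Everything here (sample sizes, factor loadings, the ridge penalty) is hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 100, 3          # periods, predictors, number of factors (hypothetical)

# Simulated predictor panel driven by r common factors, plus idiosyncratic noise
F = rng.standard_normal((T, r))
X = F @ rng.standard_normal((r, N)) + 0.5 * rng.standard_normal((T, N))
y = F @ np.array([1.0, -0.5, 0.3]) + 0.2 * rng.standard_normal(T)

# Standardize predictors, then extract principal-component factors via SVD
Xs = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = U[:, :r] * S[:r]       # estimated factors (first r principal components)

# Diffusion-index forecast: OLS of the target on the estimated factors
Z = np.column_stack([np.ones(T), F_hat])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
y_pc = Z @ beta

# Ridge (shrinkage) forecast using all predictors directly
lam = 10.0                     # penalty chosen arbitrarily for the sketch
b_ridge = np.linalg.solve(Xs.T @ Xs + lam * np.eye(N), Xs.T @ (y - y.mean()))
y_ridge = Xs @ b_ridge + y.mean()

mse_pc = np.mean((y - y_pc) ** 2)
mse_ridge = np.mean((y - y_ridge) ** 2)
```

Both routes compress the same large predictor set: the factor route reduces dimension before the regression, while ridge keeps all predictors and shrinks their coefficients instead.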
|This paper considers factor forecasting with national versus factor forecasting with international data. We forecast German GDP based on a large set of about 500 time series, consisting of German data as well as data from Euro-area and G7 countries. For factor estimation, we consider standard principal components as well as variable preselection prior to factor estimation using targeted predictors following Bai and Ng [Forecasting economic time series using targeted predictors, Journal of Econometrics 146 (2008), 304-317]. The results are as follows: Forecasting without data preselection favours the use of German data only, and no additional information content can be extracted from international data. However, when using targeted predictors for variable selection, international data generally improves the forecastability of German GDP.|
|Keywords:||forecasting, factor models, international data, variable selection|
|This paper discusses pooling versus model selection for now- and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics and typically due to different sampling frequencies and publication delays. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of mis-specification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for now- and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model.|
|Keywords:||nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS|
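The pooling-versus-selection trade-off in the abstract above can be sketched in a few lines: when many mis-specified models produce noisy forecasts around the truth, an equal-weight average diversifies the specification error, whereas selecting a single model inherits that model's full error. The numbers below are hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 0.5                                # hypothetical value of the target

# Forecasts from 100 model specifications, each an unbiased but noisy guess
forecasts = truth + rng.normal(0.0, 0.4, size=100)

pooled = forecasts.mean()                  # equal-weight pooling over all models
selected = forecasts[0]                    # "selecting" one particular specification

err_pooled = abs(pooled - truth)
err_selected = abs(selected - truth)
```

With independent errors of standard deviation 0.4, the pooled forecast's error standard deviation shrinks by a factor of 10 (sqrt of 100), which is the basic mechanism behind the paper's finding that pooling beats selection.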
|By:||Zagaglia, Paolo (Dept. of Economics, Stockholm University)|
|This paper studies the forecasting performance of the general equilibrium model of bond yields of Marzo, Söderström and Zagaglia (2008), where long-term interest rates are an integral part of the monetary transmission mechanism. The model is estimated with Bayesian methods on Euro area data. I investigate the out-of-sample predictive performance across different model specifications, including that of De Graeve, Emiris and Wouters (2009). The accuracy of point forecasts is evaluated through both univariate and multivariate accuracy measures. I show that taking into account the impact of the term structure of interest rates on the macroeconomy generates superior out-of-sample forecasts both for real variables such as output and inflation, and for bond yields.|
|Keywords:||Monetary policy; yield curve; general equilibrium; Bayesian estimation|
|By:||Yang K. Lu (Boston University)
Pierre Perron (Boston University)|
|We consider the estimation of a random level shift model for which the series of interest is the sum of a short memory process and a jump or level shift component. For the latter component, we specify the commonly used simple mixture model such that the component is the cumulative sum of a process which is 0 with some probability (1-a) and is a random variable with probability a. Our estimation method transforms such a model into a linear state space with mixture of normal innovations, so that an extension of Kalman filter algorithm can be applied. We apply this random level shift model to the logarithm of absolute returns for the S&P 500, AMEX, Dow Jones and NASDAQ stock market return indices. Our point estimates imply few level shifts for all series. But once these are taken into account, there is little evidence of serial correlation in the remaining noise and, hence, no evidence of long-memory. Once the estimated shifts are introduced to a standard GARCH model applied to the returns series, any evidence of GARCH effects disappears. We also produce rolling out-of-sample forecasts of squared returns. In most cases, our simple random level shifts model clearly outperforms a standard GARCH(1,1) model and, in many cases, it also provides better forecasts than a fractionally integrated GARCH model.|
|Keywords:||structural change, forecasting, GARCH models, long-memory|
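The data-generating process described in the abstract above, a short-memory component plus a level-shift component that is the cumulative sum of Bernoulli-timed jumps, is easy to simulate. The parameter values below (shift probability, AR coefficient) are hypothetical choices for illustration, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, a = 1000, 0.005              # sample size and shift probability (hypothetical)

# Level-shift component: cumulative sum of (Bernoulli(a) indicator x normal jump)
delta = rng.random(T) < a       # shift occurs with probability a each period
jumps = rng.normal(0.0, 1.0, T) * delta
mu = np.cumsum(jumps)           # piecewise-constant level path

# Short-memory component: AR(1) noise with modest persistence
e = np.empty(T)
e[0] = rng.standard_normal()
for t in range(1, T):
    e[t] = 0.3 * e[t - 1] + rng.standard_normal()

y = mu + e                      # observed series: rare level shifts + short memory
n_shifts = int(delta.sum())     # on average about a*T = 5 shifts in the sample
```

Even a handful of such shifts induces slowly decaying sample autocorrelations in y, which is exactly the spurious long-memory pattern the paper's estimation method is designed to disentangle.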
|This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are more complementary than substitutes, since MF-VAR tends to perform better for longer horizons, whereas MIDAS for shorter horizons.|
|Keywords:||nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS|
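The "exponential lag polynomials" that make MIDAS parsimonious, as described in the abstract above, refer to the exponential Almon weighting scheme: all K lag coefficients are generated from just two parameters. A minimal sketch, with hypothetical parameter values:

```python
import numpy as np

def exp_almon(theta1, theta2, K):
    """Exponential Almon lag weights used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k**2), normalized to sum to one,
    so K lag coefficients are governed by only two parameters."""
    k = np.arange(1, K + 1, dtype=float)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# Hypothetical parameterization over K=12 monthly lags of an indicator
w = exp_almon(0.05, -0.05, 12)   # weights rise then decay smoothly with the lag
```

An unrestricted MF-VAR would instead estimate all 12 lag coefficients freely, which is the dimensionality cost the abstract contrasts with MIDAS's tight but possibly mis-specified restriction.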
|By:||Zhongjun Qu (Boston University)
Pierre Perron (Boston University)|
|Empirical findings related to the time series properties of stock returns volatility indicate autocorrelations that decay slowly at long lags. In light of this, several long-memory models have been proposed. However, the possibility of level shifts has been advanced as a possible explanation for the appearance of long-memory and there is growing evidence suggesting that it may be an important feature of stock returns volatility. Nevertheless, it remains a conjecture that a model incorporating random level shifts in variance can explain the data well and produce reasonable forecasts. We show that a very simple stochastic volatility model incorporating both a random level shift and a short-memory component indeed provides a better in-sample fit of the data and produces forecasts that are no worse, and sometimes better, than standard stationary short and long-memory models. We use a Bayesian method for inference and develop algorithms to obtain the posterior distributions of the parameters and the smoothed estimates of the two latent components. We apply the model to daily S&P 500 and NASDAQ returns over the period 1980.1-2005.12. Although the occurrence of a level shift is rare, about once every two years, the level shift component clearly contributes most to the total variation in the volatility process. The half-life of a typical shock from the short-memory component is very short, on average between 8 and 14 days. We also show that, unlike common stationary short or long-memory models, our model is able to replicate key features of the data. For the NASDAQ series, it forecasts better than a standard stochastic volatility model, and for the S&P 500 index, it performs equally well.|
|Keywords:||Bayesian estimation, Structural change, Forecasting, Long-memory, State-space models, Latent process|
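The half-life figure quoted in the abstract above has a simple closed form for a first-order autoregressive short-memory component: the horizon at which a shock's effect has decayed to one half. A minimal sketch, with hypothetical persistence values chosen to land in the 8-14 day range the abstract reports:

```python
import math

def ar1_half_life(phi):
    """Half-life (in periods) of a shock to an AR(1) process with
    coefficient phi: the horizon h solving phi**h = 0.5."""
    return math.log(0.5) / math.log(phi)

# Hypothetical persistence values for illustration:
# phi near 0.92 gives a half-life of about 8 days,
# phi near 0.95 gives a half-life of about 13-14 days
hl_low = ar1_half_life(0.92)
hl_high = ar1_half_life(0.95)
```

This back-of-the-envelope mapping shows how quickly the short-memory component dies out, leaving the rare level shifts to account for most of the long-run variation in volatility.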
|By:||Lence, Sergio H.|
|Keywords:||Agribusiness, Agricultural Finance|
Taken from the NEP-FOR mailing list edited by Rob Hyndman.