In this issue we have Forecasting with Approximate Dynamic Factor Models, Forecasting Volatility, Testing Interval Forecasts with a GMM-based Approach, and Backtesting Value-at-Risk using Forecasts for Multiple Horizons.
 Forecasting with Approximate Dynamic Factor Models: the Role of Non-Pervasive Shocks
Date: 
2011-07 
By: 
Mattéo Luciani 
URL: 

In this paper we investigate whether accounting for non-pervasive shocks improves the forecasts of a factor model. We compare four models on a large panel of US quarterly data: factor models, factor models estimated on selected variables, Bayesian shrinkage, and factor models combined with Bayesian shrinkage for the idiosyncratic component. The results of the forecasting exercise show that the four approaches perform equally well and produce highly correlated forecasts, meaning that non-pervasive shocks are of no help in forecasting. We conclude that the co-movements captured by factor models are informative enough to make accurate forecasts. 

JEL: 
C13 
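The diffusion-index idea behind these factor-model forecasts can be sketched in a few lines: extract factors by principal components, then regress the target one step ahead on the lagged factors. The panel, dimensions, and parameter values below are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small panel driven by one pervasive factor (hypothetical data).
T, N, r = 120, 30, 1
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal()
lam = rng.normal(size=N)
X = np.outer(f, lam) + rng.normal(size=(T, N))

# Standardize and extract the factor by principal components (via SVD).
Z = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U[:, :r] * S[:r]          # estimated factor(s), T x r

# Diffusion-index forecast: regress x_{t+1} on the current factor.
y, Flag = Z[1:, 0], F[:-1, :r]
A = np.column_stack([np.ones(T - 1), Flag])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
forecast = np.array([1.0, *F[-1, :r]]) @ beta
print(round(float(forecast), 3))
```

The four approaches the paper compares differ mainly in how the idiosyncratic remainder `Z - F @ loadings` is treated; the factor extraction step is common to all of them.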
 Forecasting volatility: does continuous time do better than discrete time?
Date: 
2011-07 
By: 
Carles Bretó 
URL: 

In this paper we compare the forecast performance of continuous- and discrete-time volatility models. In discrete time, we consider more than ten GARCH-type models and an asymmetric autoregressive stochastic volatility model; in continuous time, a stochastic volatility model with mean reversion, volatility feedback and leverage. We estimate each model by maximum likelihood and evaluate its ability to forecast two-scales realized volatility, a non-parametric estimate of volatility based on high-frequency data that minimizes the biases in realized volatility caused by microstructure errors. We find that volatility forecasts based on continuous-time models may outperform those of GARCH-type discrete-time models, so that, besides their other merits, continuous-time models may be used as a tool for generating reasonable volatility forecasts. Within the stochastic volatility family, however, we do not find such evidence. We show that volatility feedback may have serious drawbacks in terms of forecasting and that an asymmetric disturbance distribution (possibly with heavy tails) might improve forecasting. 

Keywords: 
Asymmetry, Continuous- and discrete-time stochastic volatility models, GARCH-type models, Maximum likelihood via iterated filtering, Particle filter, Volatility forecasting 
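The discrete-time benchmarks here all share the GARCH variance recursion, whose multi-step forecasts mean-revert geometrically toward the unconditional variance. A minimal sketch with hypothetical parameter values (not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate returns from a GARCH(1,1) with hypothetical parameters.
omega, alpha, beta = 0.05, 0.08, 0.90
T = 500
r = np.zeros(T)
s2 = np.full(T, omega / (1 - alpha - beta))
for t in range(1, T):
    s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    r[t] = np.sqrt(s2[t]) * rng.normal()

# h-step-ahead variance forecast by the GARCH recursion:
#   E[s2_{t+h}] = sbar2 + (alpha + beta)**(h-1) * (s2_{t+1} - sbar2)
h = 5
sbar2 = omega / (1 - alpha - beta)               # unconditional variance
s2_next = omega + alpha * r[-1] ** 2 + beta * s2[-1]
s2_h = sbar2 + (alpha + beta) ** (h - 1) * (s2_next - sbar2)
print(round(float(s2_h), 4))
```

Because `alpha + beta < 1`, long-horizon forecasts collapse to the unconditional variance; the continuous-time models in the paper compete by modelling the mean-reversion and feedback in the latent volatility itself.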
 Mapping the state of financial stability
Date: 
2011-09 
By: 
Peter Sarlin (Åbo Akademi University, Turku Centre for Computer Science, Joukahaisenkatu 3–5, 20520 Turku, Finland.) 
URL: 

The paper uses the Self-Organizing Map for mapping the state of financial stability, visualizing the sources of systemic risk, and predicting systemic financial crises. The Self-Organizing Financial Stability Map (SOFSM) enables a two-dimensional representation of a multidimensional financial stability space and allows disentangling the individual sources contributing to systemic risk. The SOFSM can be used to monitor macro-financial vulnerabilities by locating a country in the financial stability cycle: in the pre-crisis, crisis, post-crisis or tranquil state. In addition, the SOFSM performs better than or as well as a logit model in classifying in-sample data and predicting out-of-sample the global financial crisis that started in 2007. Model robustness is tested by varying the thresholds of the models, the policymaker’s preferences, and the forecasting horizons. 

Keywords: 
Systemic financial crisis, systemic risk, Self-Organizing Map (SOM), visualization, prediction, macroprudential supervision. 
JEL: 
E44, E58, F01, F37, G01
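The core of the SOM is a grid of codebook vectors pulled toward the data so that nearby grid units represent similar observations. A minimal sketch on toy data standing in for "tranquil" and "crisis" indicator vectors (grid size, decay schedules, and data are all invented for the example, not the SOFSM's settings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy macro-financial indicators: two clusters standing in for
# "tranquil" and "crisis" observations (hypothetical data).
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])

# Minimal Self-Organizing Map: a 5x5 grid of codebook vectors.
rows, cols, dim = 5, 5, X.shape[1]
W = rng.normal(size=(rows * cols, dim))
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5   # decaying neighbourhood radius
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(((W - x) ** 2).sum(1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(1)    # grid distance to the BMU
        h = np.exp(-d2 / (2 * sigma ** 2))       # neighbourhood kernel
        W += lr * h[:, None] * (x - W)           # pull units toward x

# A new observation is mapped to the unit with the closest codebook vector.
new = rng.normal(3, 1, 4)
print(int(np.argmin(((W - new) ** 2).sum(1))))
```

Locating a country on the map then amounts to finding its best-matching unit, which is how the SOFSM places an economy in the pre-crisis, crisis, post-crisis or tranquil region.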
 Testing interval forecasts: a GMM-based approach
Date: 
2011-08 
By: 
Elena-Ivona Dumitrescu (LEO – Laboratoire d’Économie d’Orléans – CNRS : UMR 6221 – Université d’Orléans) 
URL: 
http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00618467&r=for 
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and High Density Regions, which may be discontinuous and/or asymmetric. Based on a simple J-statistic built on the moments defined by the orthonormal polynomials associated with the Binomial distribution, this new approach has many advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes our GMM test has good small-sample properties. These results are corroborated by an empirical application to the S&P 500 and Nikkei stock market indexes. It confirms that using this GMM test has major consequences for the ex-post evaluation of interval forecasts produced by linear versus non-linear models. 

Keywords: 
Interval forecasts, High Density Region, GMM. 
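Evaluation starts from the hit sequence: an indicator that the realisation fell inside the forecast interval, which should be i.i.d. Bernoulli with the nominal coverage. As a point of reference, the sketch below implements the simpler likelihood-ratio test of unconditional coverage (Kupiec-style) on simulated data; the paper's GMM J-statistic is a different, more general construction that also covers independence and conditional coverage:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Hit sequence: H_t = 1 when the realisation falls inside the interval.
T, coverage = 1000, 0.90
lo, hi = -1.645, 1.645          # nominal 90% interval for a N(0,1) variable
y = rng.normal(size=T)
hits = ((y >= lo) & (y <= hi)).astype(int)

# Likelihood-ratio test of unconditional coverage: does the empirical
# hit rate match the nominal coverage?
n1 = int(hits.sum())
pihat = n1 / T
ll0 = n1 * math.log(coverage) + (T - n1) * math.log(1 - coverage)
ll1 = n1 * math.log(pihat) + (T - n1) * math.log(1 - pihat)
lr = -2 * (ll0 - ll1)
pval = math.erfc(math.sqrt(lr / 2))   # chi-squared(1) survival function
print(round(pval, 3))                 # large p-value: coverage not rejected
```

The GMM approach replaces this single moment with the orthonormal-polynomial moments of the hit sequence, which is what allows the separate and joint tests the abstract describes.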
 Backtesting Value-at-Risk using Forecasts for Multiple Horizons: a Comment on the Forecast Rationality Tests of A.J. Patton and A. Timmermann
Date: 
2011-09-20 
By: 
Lennart F. Hoogerheide (VU University Amsterdam) 
URL: 

Patton and Timmermann (2011, ‘Forecast Rationality Tests Based on Multi-Horizon Bounds’, Journal of Business & Economic Statistics, forthcoming) propose a set of useful tests for forecast rationality or optimality under squared error loss, including an easily implemented test based on a regression that involves only (long-horizon and short-horizon) forecasts and no observations on the target variable. We propose an extension: a simulation-based procedure that takes into account the presence of errors in parameter estimates. This procedure can also be applied in the field of ‘backtesting’ models for Value-at-Risk. Applications to simple AR and ARCH time series models show that its power in detecting certain misspecifications is larger than that of well-known tests for correct Unconditional Coverage and Conditional Coverage. 

Keywords: 
Value-at-Risk; backtest; optimal revision; forecast rationality 
JEL: 
C12 
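One of the multi-horizon bounds these rationality tests build on is that, under squared-error loss, the mean squared error of optimal forecasts is weakly increasing in the horizon. A sketch verifying this on a simulated AR(1), for which the optimal h-step forecast is known in closed form (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(1); its optimal h-step forecast is yhat_{t+h|t} = phi**h * y_t.
phi, T = 0.8, 20000
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

# Empirical MSE at horizons 1..5: should be weakly increasing in h.
H = 5
mse = []
for h in range(1, H + 1):
    fc = phi ** h * y[:-h]           # optimal h-step-ahead forecast
    mse.append(float(np.mean((y[h:] - fc) ** 2)))
print([round(m, 2) for m in mse])
```

A violation of this monotonicity in real forecast data is evidence against rationality; the comment's contribution is to account for the fact that `phi` is estimated, not known, when simulating the test's null distribution.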
 A Simple Model for Vast Panels of Volatilities
Date: 
2011-09 
By: 
Mattéo Luciani 
URL: 

Realized volatilities, when observed through time, share the following stylized facts: co-movements, clustering, long memory, dynamic volatility, skewness and heavy tails. We propose a simple dynamic factor model that captures these stylized facts and that can be applied to vast panels of volatilities, as it does not suffer from the curse of dimensionality. It is an enhanced version of Bai and Ng (2004) in the following respects: i) we allow for long memory in both the idiosyncratic and the common components, ii) the common shocks are conditionally heteroskedastic, and iii) the idiosyncratic and common shocks are skewed and heavy-tailed. Estimation of the factors, the idiosyncratic components and the parameters is straightforward: principal components and low-dimensional maximum likelihood estimation. A thorough Monte Carlo study shows the usefulness of the approach, and an application to 90 daily realized volatilities pertaining to the S&P 100, from January 2001 to December 2008, yields, among others, the following findings: i) All the volatilities have long memory, more than half in the non-stationary range, and it increases during financial turmoil. ii) Tests and criteria point towards one dynamic common factor driving the co-movements. iii) The factor has longer memory than the assets’ volatilities, suggesting that long memory is a market characteristic. iv) The volatility of the realized volatility is not constant and is common to all. v) A forecasting horse race against univariate short- and long-memory models and short-memory dynamic factor models shows that our model outperforms them in short-, medium-, and long-run predictions, in particular in periods of stress. 

Keywords: 
realized volatilities; vast dimensions; factor models; long memory; forecasting 
JEL: 
C32 
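The two estimation steps the abstract describes — principal components for the common factor, then a memory diagnostic on the extracted series — can be sketched on simulated data. The aggregated-variance exponent below is a crude stand-in for the paper's long-memory estimator, and the panel is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(7)

# Panel of "log realized volatilities": a persistent common factor + noise.
T, N = 500, 20
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.98 * f[t - 1] + rng.normal()    # highly persistent factor
X = np.outer(f, rng.uniform(0.5, 1.5, N)) + rng.normal(size=(T, N))

# Step 1: extract the common factor by principal components.
Z = X - X.mean(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
fhat = U[:, 0] * S[0]

# Step 2: crude memory diagnostic — aggregated-variance exponent.
# Var(block mean over m obs) ~ m**(2H - 2), so the log-log slope gives H.
sizes = [5, 10, 25, 50]
logs = [np.log(np.var(fhat[:T // m * m].reshape(-1, m).mean(1)))
        for m in sizes]
slope = np.polyfit(np.log(sizes), logs, 1)[0]
H = 1 + slope / 2          # H > 0.5 signals strong persistence
print(round(float(H), 2))
```

Because the panel is cross-sectionally large, only the low-dimensional factor series needs a full likelihood treatment; the idiosyncratic components can be handled series by series.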
 Econometric Analysis and Prediction of Recurrent Events
Date: 
2011-09-19 
By: 
Adrian Pagan (School of Economics, University of Sydney) 
URL: 

Economic events such as expansions and recessions in economic activity, bull and bear markets in stock prices, and financial crises have long attracted substantial interest. In recent times the focus has been on predicting these events and constructing Early Warning Systems for them. Econometric analysis of such recurrent events is, however, in its infancy. One can represent the events as a set of binary indicators. However, they differ from the binary random variables studied in microeconometrics in being constructed from some (possibly) continuous data. The lecture discusses what difference this makes to their econometric analysis. It sets out a framework that deals with how the binary variables are constructed, what an appropriate estimation procedure would be, and the implications for their prediction. An example based on Turkish business cycles is used throughout the lecture. 

Keywords: 
Business and Financial Cycles, Binary Time Series, BBQ Algorithm 
JEL: 
C22 
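The construction step is the distinctive feature: the binary states are derived from a continuous series by a dating rule. The sketch below applies a BBQ-style calculus rule (a peak/trough is a local extremum within a two-sided window) to a toy cyclical series; the full BBQ algorithm also enforces alternation of turns and minimum phase and cycle durations, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(5)

# A toy cyclical series (hypothetical stand-in for log output).
t = np.arange(200)
y = 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 40) + 0.05 * rng.normal(size=200)

# BBQ-style dating: a peak (trough) is a local max (min) over a
# two-sided window of k periods.
k = 5
peaks = [i for i in range(k, len(y) - k)
         if y[i] == y[i - k:i + k + 1].max()]
troughs = [i for i in range(k, len(y) - k)
           if y[i] == y[i - k:i + k + 1].min()]

# Binary state S_t = 1 in expansions, alternating at each turning point.
turns = sorted([(i, 'P') for i in peaks] + [(i, 'T') for i in troughs])
S = np.zeros(len(y), int)
state, prev = 1, 0         # assume the sample starts in expansion
for i, kind in turns:
    S[prev:i] = state
    state = 0 if kind == 'P' else 1
    prev = i
S[prev:] = state
print(int(S.sum()))
```

Because `S` is a deterministic window function of the underlying series, its serial dependence is inherited from the construction rule, which is exactly why standard binary-response methods need modification.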
 The Analysis of Stochastic Volatility in the Presence of Daily Realised Measures
Date: 
2011-09-20 
By: 
Siem Jan Koopman (VU University Amsterdam) 
URL: 

We develop a systematic framework for the joint modelling of returns and multiple daily realised measures. We assume a linear state space representation for the log realised measures, which are noisy and biased estimates of the log integrated variance, at least due to Jensen’s inequality, and we incorporate filtering methods for the estimation of the latent log volatility process. The endogeneity between daily returns and realised measures leads us to develop a consistent two-step estimation method for all parameters in our specification. This method is computationally straightforward even when the stochastic volatility model contains non-Gaussian return innovations and leverage effects. The empirical results reveal that measurement errors become significantly smaller after filtering and that the forecasts from our model outperform those from a set of recently developed alternatives. 

Keywords: 
Kalman filter; leverage; realised volatility; simulated maximum likelihood 
JEL: 
C22 
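The state space idea can be illustrated with a single realised measure: the latent log volatility follows an AR(1), and the log realised measure observes it with noise and an additive bias. The parameter values, including the bias constant `c`, are hypothetical, and the paper's full specification has multiple measures, non-Gaussian innovations and leverage:

```python
import numpy as np

rng = np.random.default_rng(6)

# Latent log-variance h_t: AR(1); observed log realised measure:
# y_t = c + h_t + noise, with c the (Jensen-type) bias.
T, phi, q, c, r = 300, 0.95, 0.05, -0.1, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + np.sqrt(q) * rng.normal()
y = c + h + np.sqrt(r) * rng.normal(size=T)

# Kalman filter for h_t given the noisy realised measure.
a, P = 0.0, 1.0
filt = np.zeros(T)
for t in range(T):
    a, P = phi * a, phi ** 2 * P + q     # prediction step
    v = y[t] - c - a                     # prediction error
    F = P + r                            # its variance
    K = P / F                            # Kalman gain
    a, P = a + K * v, (1 - K) * P        # update step
    filt[t] = a
print(round(float(np.mean((filt - h) ** 2)), 3))
```

The filtered estimate tracks the latent log volatility more closely than the raw de-biased measure, which is the "measurement errors become significantly smaller after filtering" effect the abstract reports.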
 Carbon Tax Scenarios and their Effects on the Irish Energy Sector
Date: 
2011-09 
By: 
Di Cosmo, Valeria 
URL: 

In this paper we use annual time series data from 1960 to 2008 to estimate the long-run price and income elasticities underlying energy demand in Ireland. The Irish economy is divided into five sectors: residential, industrial, commercial, agricultural and transport, and separate energy demand equations are estimated for each sector. Energy demand is broken down by fuel type, and price and income elasticities are estimated for the primary fuels in the Irish fuel mix. Using the estimated price and income elasticities we forecast Irish sectoral energy demand out to 2025. The share of electricity in the Irish fuel mix is predicted to grow over time, as the share of carbon-intensive fuels such as coal, oil and peat falls. The share of electricity in total energy demand grows most in the industrial and commercial sectors, while oil remains an important fuel in the residential and transport sectors. Having estimated the baseline forecasts, we impose two different carbon tax scenarios and assess their impact on energy demand, carbon dioxide emissions, and government revenue. If it is assumed that the level of the carbon tax will track the futures price of carbon under the EU ETS, the carbon tax will rise from €21.50 per tonne of CO2 in 2012 (the first year forecasted) to €41 in 2025. Results show that under this scenario total emissions would be reduced by approximately 861,000 tonnes of CO2 in 2025 relative to a zero carbon tax scenario, and that such a tax would generate €1.1 billion in revenue in the same year. We also examine a high-tax scenario under which the emissions reductions and revenue generated would be greater. Finally, in order to assess the macroeconomic effects of a carbon tax, the carbon tax scenarios were run in HERMES, the ESRI’s medium-term macroeconomic model. The results from HERMES show that a carbon tax of €41 per tonne of CO2 would lead to a 0.21 per cent contraction in GDP and a 0.08 per cent reduction in employment. 
A higher carbon tax would lead to greater contractions in output. 

Keywords: 
CO2 emissions/Energy demand/Environmental tax/income distribution 
JEL: 
Q4 
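The elasticity-based forecasting step rests on a log-log demand equation, in which the coefficients on log price and log income are the elasticities directly. A sketch on fabricated annual data (the series, true coefficients, and the 20%/10% scenario shifts are invented for illustration and bear no relation to the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical annual data, 1960-2008: log price, log income, log demand.
n = 49
lp = 0.02 * np.arange(n) + 0.10 * rng.normal(size=n)
li = 0.03 * np.arange(n) + 0.05 * rng.normal(size=n)
ld = 1.0 - 0.3 * lp + 0.8 * li + 0.05 * rng.normal(size=n)

# Long-run elasticities from ln D = b0 + b_p ln P + b_y ln Y:
# b_p and b_y are the price and income elasticities.
X = np.column_stack([np.ones(n), lp, li])
b, *_ = np.linalg.lstsq(X, ld, rcond=None)
price_elast, income_elast = float(b[1]), float(b[2])

# Scenario forecast: a carbon tax raises price 20%, income grows 10%.
ld_fc = b[0] + price_elast * (lp[-1] + np.log(1.2)) \
             + income_elast * (li[-1] + np.log(1.1))
print(round(price_elast, 2), round(float(ld_fc - ld[-1]), 3))
```

A carbon tax scenario is then just an alternative price path fed through the estimated equation, which is how the baseline and tax forecasts in the paper differ.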
Taken from the NEP-FOR mailing list edited by Rob Hyndman.