In this issue we have Density forecasting for long-term peak electricity demand, A View of Damped Trend as Incorporating a Tracking Signal into a State Space Model, The tourism forecasting competition, A comparison of forecast performance, and many more.

Date: | 2008-08 |

By: | Rob J Hyndman Shu Fan |

URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2008-6&r=for |

Long-term electricity demand forecasting plays an important role in planning for future generation facilities and transmission augmentation. In a long-term context, planners must adopt a probabilistic view of potential peak demand levels; therefore density forecasts (providing estimates of the full probability distributions of the possible future values of the demand) are more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty. This paper proposes a new methodology to forecast the density of long-term peak electricity demand. Peak electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays. We describe a comprehensive forecasting solution in this paper. First, we use semiparametric additive models to estimate the relationships between demand and the driver variables, including temperatures, calendar effects and some demographic and economic variables. Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks. The proposed methodology has been used to forecast the probability distribution of annual and weekly peak electricity demand for South Australia since 2007. We evaluate the performance of the methodology by comparing the forecast results with the actual demand of the summer of 2007/08.
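As a rough illustration of the simulation step described above, the sketch below resamples a seasonal temperature series in blocks, pushes the simulated paths through a toy demand model, and reads off quantiles of the simulated annual peak. All numbers, the fixed block length, and the linear demand equation are hypothetical stand-ins; the paper uses variable-length blocks and semiparametric additive models.

```python
import numpy as np

def seasonal_block_bootstrap(series, block_len, n_paths, rng):
    """Resample a seasonal series in contiguous blocks (a simplified
    fixed-block variant; the paper uses variable-length blocks)."""
    n = len(series)
    paths = np.empty((n_paths, n))
    starts = np.arange(0, n - block_len + 1)
    for p in range(n_paths):
        pos = 0
        while pos < n:
            s = rng.choice(starts)
            take = min(block_len, n - pos)
            paths[p, pos:pos + take] = series[s:s + take]
            pos += take
    return paths

rng = np.random.default_rng(0)
# Toy daily temperature history with an annual cycle plus noise.
temps = 25 + 8 * np.sin(np.linspace(0, 2 * np.pi, 364)) + rng.normal(0, 2, 364)

sim_temps = seasonal_block_bootstrap(temps, block_len=14, n_paths=1000, rng=rng)
# Toy demand model: demand rises linearly with temperature, plus
# bootstrapped residual noise (a stand-in for the paper's additive model).
residuals = rng.normal(0, 50, 364)
sim_demand = 1000 + 20 * sim_temps + rng.choice(residuals, sim_temps.shape)
peaks = sim_demand.max(axis=1)          # annual peak demand on each path

# Density forecast of the annual peak: report quantiles, not a point.
q10, q50, q90 = np.quantile(peaks, [0.1, 0.5, 0.9])
```

Reporting quantiles of the simulated peaks, rather than a single number, is what turns the simulation into a density forecast.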

Keywords: | Long-term demand forecasting, density forecast, time series, simulation |

JEL: | C14 |

Date: | 2008-09 |

By: | Ralph D. Snyder Anne B. Koehler |

URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2008-7&r=for |

Damped trend exponential smoothing has previously been established as an important forecasting method. Here, it is shown to have close links to simple exponential smoothing with a smoothed error tracking signal. A special case of damped trend exponential smoothing emerges from our analysis, one that is more parsimonious because it effectively relies on one less parameter. This special case is compared with its traditional counterpart in an application to the annual data from the M3 competition and is shown to be quite competitive.
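For readers unfamiliar with the method, the damped trend recursions can be sketched in their single-source-of-error state space form. The parameter values and toy series below are illustrative only, not the paper's estimates.

```python
import numpy as np

def damped_trend_forecast(y, alpha, beta, phi, h):
    """Damped trend exponential smoothing, single-source-of-error form.
    Starting values use a simple heuristic; parameters are not optimised."""
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        mu = level + phi * trend            # one-step-ahead forecast
        e = y[t] - mu                       # one-step forecast error
        level = level + phi * trend + alpha * e
        trend = phi * trend + beta * e
    # h-step-ahead forecasts: the trend enters as a damped geometric sum.
    return np.array([level + sum(phi**j for j in range(1, k + 1)) * trend
                     for k in range(1, h + 1)])

y = np.array([10.0, 11.0, 12.5, 13.0, 14.2, 15.1, 15.8, 16.9])
fc = damped_trend_forecast(y, alpha=0.4, beta=0.1, phi=0.9, h=4)
```

With 0 < phi < 1 the trend's contribution is a damped geometric sum, so the forecast increments shrink toward zero as the horizon grows; phi = 1 recovers Holt's linear trend.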

Keywords: | Exponential smoothing, monitoring forecasts, structural change, adjusting forecasts, state space models, damped trend |

JEL: | C32 |

Date: | 2008-12 |

By: | George Athanasopoulos Rob J Hyndman Haiyan Song Doris C Wu |

URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2008-10&r=for |

We evaluate the performance of various methods for forecasting tourism demand. The data used include 380 monthly series, 427 quarterly series and 530 yearly series, all supplied to us by tourism bodies or by academics from previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate time series approaches, and also econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate only on tourism demand data; (ii) we include econometric approaches; (iii) we evaluate forecast interval coverage as well as point forecast accuracy; (iv) we observe the effect of temporal aggregation on forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure.
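The mean absolute scaled error in (v) scales the out-of-sample MAE by the in-sample MAE of a (seasonal) naive forecast, so values below 1 indicate the forecast beat the naive benchmark on average. A minimal computation, on an invented series:

```python
import numpy as np

def mase(actual, forecast, train, m=1):
    """Mean absolute scaled error: out-of-sample MAE divided by the
    in-sample MAE of the m-step (seasonal) naive method; m is the
    seasonal period (m=1 gives the random-walk naive scaling)."""
    train = np.asarray(train, dtype=float)
    scale = np.mean(np.abs(train[m:] - train[:-m]))
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))) / scale

train = [100., 102., 101., 105., 107., 106., 110., 112.]
actual = [113., 115.]
naive_fc = [112., 112.]                 # last observed value carried forward
score = mase(actual, naive_fc, train)   # → 0.875
```

Because the scaling uses the training data, MASE is comparable across series measured in different units, unlike the MAE or RMSE.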

Keywords: | Tourism forecasting, ARIMA, Exponential smoothing, Time varying parameter model, Autoregressive distributed lag model, Vector autoregression |

JEL: | C22 |

Date: | 2009 |

By: | Rochelle M. Edge Michael T. Kiley Jean-Philippe Laforte |

URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2009-10&r=for |

This paper considers the "real-time" forecast performance of the Federal Reserve staff, time-series models, and an estimated dynamic stochastic general equilibrium (DSGE) model: the Federal Reserve Board's new Estimated, Dynamic, Optimization-based (Edo) model. We evaluate forecast performance using out-of-sample predictions from 1996 through 2005, thereby examining over 70 forecasts presented to the Federal Open Market Committee (FOMC). Our analysis builds on previous real-time forecasting exercises along two dimensions. First, we consider time-series models, a structural DSGE model that has been employed to answer policy questions quite different from forecasting, and the forecasts produced by the staff at the Federal Reserve Board. In addition, we examine forecasting performance of our DSGE model at a relatively detailed level by separately considering the forecasts for various components of consumer expenditures and private investment. The results provide significant support to the notion that richly specified DSGE models belong in the forecasting toolbox of a central bank.

Date: | 2008-12 |

By: | Jae H. Kim Haiyang Song Kevin Wong George Athanasopoulos Shen Liu |

URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2008-11&r=for |

This paper evaluates the performance of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state-space models for exponential smoothing, and Harvey's structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and to Australia. The mean coverage rate and length of alternative prediction intervals are evaluated in an empirical setting. It is found that the prediction intervals from all models show satisfactory performance, except for those from the autoregressive model. In particular, those based on the bias-corrected bootstrap in general perform best, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
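The two headline measures used here, empirical coverage rate and mean interval length, reduce to a short calculation; the interval endpoints and outcomes below are invented for illustration.

```python
import numpy as np

def coverage_and_length(lower, upper, actual):
    """Empirical coverage rate and mean width of a set of prediction
    intervals: the fraction of outcomes falling inside their interval,
    and the average interval width."""
    lower, upper, actual = map(np.asarray, (lower, upper, actual))
    inside = (actual >= lower) & (actual <= upper)
    return inside.mean(), (upper - lower).mean()

lower  = [ 90,  95, 100, 105]   # hypothetical interval lower bounds
upper  = [110, 115, 120, 125]   # hypothetical interval upper bounds
actual = [105, 118, 112, 110]   # realised outcomes (118 falls outside)
cov, length = coverage_and_length(lower, upper, actual)   # → 0.75, 20.0
```

A good interval method keeps the coverage rate close to the nominal level (say 95%) while producing intervals that are as short as possible.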

Keywords: | Automatic forecasting, Bootstrapping, Interval forecasting |

JEL: | C22 |

Date: | 2009 |

By: | Alfredo García-Hiernaux (Universidad Complutense de Madrid. Department of Quantitative Economics) |

URL: | http://d.repec.org/n?u=RePEc:ucm:doicae:0902&r=for |

A new procedure to predict with subspace methods is presented in this paper. It is based on combining multiple forecasts obtained from setting a range of values for a specific parameter that is typically fixed by the user in the subspace methods literature. An algorithm to compute these predictions and to obtain a suitable number of combinations is provided. The procedure is illustrated by forecasting the German gross domestic product.
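The combination idea (though not the subspace estimation itself) can be sketched as follows; `toy_forecaster` is a hypothetical stand-in for a subspace-method fit, with its smoothing weight playing the role of the parameter a user would normally fix.

```python
import numpy as np

def combine_forecasts(y, param_grid, forecaster):
    """Average the forecasts produced under each value of a tuning
    parameter, rather than fixing the parameter at a single value."""
    return np.mean([forecaster(y, p) for p in param_grid], axis=0)

def toy_forecaster(y, lam):
    """Hypothetical stand-in model: exponential smoothing of the level,
    returning a one-step-ahead forecast; lam is the 'user-set' parameter."""
    level = y[0]
    for obs in y[1:]:
        level = lam * obs + (1 - lam) * level
    return np.array([level])

y = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
fc = combine_forecasts(y, param_grid=[0.2, 0.5, 0.8],
                       forecaster=toy_forecaster)
```

Averaging over the grid hedges against a poor choice of the parameter, which is the motivation the abstract gives for combining rather than fixing it.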

Date: | 2009-03 |

By: | Rangan Gupta (Department of Economics, University of Pretoria) Emmanuel Ziramba (Department of Economics, University of South Africa) |

URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:200909&r=for |

This paper first tests the restrictions implied by Hall's (1978) version of the permanent income hypothesis (PIH) obtained from a bivariate system of labor income and savings, using quarterly data over the period of 1947:01 to 2008:03 for the US economy, and then uses the model to forecast changes in labor income over the period of 1991:01 to 2008:03. First, our results indicate the overwhelming rejection of the restrictions on the data implied by the PIH. Second, we found that, when compared to univariate and bivariate versions of classical and Bayesian Vector Autoregressive (VAR) models, the PIH model, in general, is outperformed by all other models in terms of the average RMSEs for one- to eight-quarters-ahead forecasts for the changes in labor income. Finally, as far as forecasting is concerned, we found the tightest Gibbs-sampled univariate Bayesian VAR to perform the best. In sum, we do not find evidence for the US data to be consistent with the PIH, nor does the PIH model perform better than alternative atheoretical models in forecasting changes in labor income over an out-of-sample horizon characterized by a high degree of volatility in the variable of interest.

Keywords: | Permanent Income Hypothesis, Forecast accuracy, Gibbs Sampling |

JEL: | E17 |

Date: | 2008-11-01 |

By: | François Dossou (SINOPIA AM – Sinopia AM – Sinopia AM) Sandrine Lardic (EconomiX – CNRS : UMR7166 – Université de Paris X – Nanterre) Karine Michalon (DRM – Dauphine Recherches en Management – CNRS : UMR7088 – Université Paris Dauphine – Paris IX) |

URL: | http://d.repec.org/n?u=RePEc:hal:journl:halshs-00365972_v1&r=for |

The recent period has highlighted a well-known phenomenon, namely the existence of a positive bias in experts' anticipations. The literature on this subject underlines optimism in the financial analyst community. In this work, our contributions are twofold: we provide explanatory bias prediction models which subsequently allow the calculation of adjusted earnings forecasts, for horizons from 1 to 24 months. We explain the bias using macroeconomic as well as sector- and firm-specific variables. We obtain some important results. In particular, the macroeconomic variables are statistically significant and their signs are consistent with intuition. However, we conclude that the microeconomic variables are the main explanatory variables. In terms of forecast evaluation statistics, the adjusted forecasts almost systematically improve on the analysts' forecasts.

Keywords: | Analysts' forecasts; Bias-adjusted forecasts; Earnings bias |

Date: | 2009-02 |

By: | Konstantin A. Kholodilin (DIW Berlin, Germany) Boriss Siliverstovs (KOF Swiss Economic Institute, ETH Zurich, Switzerland) |

URL: | http://d.repec.org/n?u=RePEc:kof:wpskof:09-215&r=for |

The paper evaluates the quality of the German national accounting data (GDP and its use-side components) as measured by the magnitude and dispersion of the forecast/revision errors. It is demonstrated that government consumption series are the least reliable, whereas real GDP and real private consumption data are the most reliable. In addition, early forecasts of GDP, private consumption, and investment growth rates are shown to be systematically upward biased. Finally, early forecasts of all the variables seem to be no more accurate than naïve forecasts based on the historical mean of the final data.

Keywords: | Quality of statistical data, real-time data, signal-to-noise ratio, forecasts, revisions |

JEL: | C53 |

Date: | 2009-03 |

By: | Rangan Gupta (University of Pretoria) Stephen M. Miller (University of Connecticut and University of Nevada, Las Vegas) |

URL: | http://d.repec.org/n?u=RePEc:uct:uconnp:2009-10&r=for |

We examine the time-series relationship between housing prices in eight Southern California metropolitan statistical areas (MSAs). First, we perform cointegration tests of the housing price indexes for the MSAs, finding seven cointegrating vectors. Thus, the evidence suggests that one common trend links the housing prices in these eight MSAs, a purchasing power parity finding for the housing prices in Southern California. Second, we perform temporal Granger causality tests revealing intertwined temporal relationships. The Santa Ana MSA leads the pack in temporally causing housing prices in six of the other seven MSAs, excluding only the San Luis Obispo MSA. The Oxnard MSA experienced the largest number of temporal effects from other MSAs, six of the seven, excluding only Los Angeles. The Santa Barbara MSA proved the most isolated in that it temporally caused housing prices in only two other MSAs (Los Angeles and Oxnard), and housing prices in the Santa Ana MSA temporally caused prices in Santa Barbara. Third, we calculate out-of-sample forecasts in each MSA, using various vector autoregressive (VAR) and vector error-correction (VEC) models, as well as Bayesian, spatial, and causality versions of these models with various priors. Different specifications provide superior forecasts in the different MSAs. Finally, we consider the ability of these time-series models to provide accurate out-of-sample predictions of turning points in housing prices that occurred in 2006:Q4. Recursive forecasts, where the sample is updated each quarter, provide reasonably good forecasts of turning points.

Keywords: | Housing prices, Forecasting |

JEL: | C32 |

Date: | 2009-02-23 |

By: | Adam Clements (QUT) Mark Doolan (QUT) Stan Hurn (QUT) Ralf Becker (University of Manchester) |

URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2009_50&r=for |

The performance of techniques for evaluating univariate volatility forecasts is well understood. In the multivariate setting, however, the efficacy of the evaluation techniques is less well developed. Multivariate forecasts are often evaluated within an economic application, such as a portfolio optimisation context. This paper aims to evaluate the efficacy of such techniques, along with traditional statistically based methods. It is found that utility-based methods perform poorly in terms of identifying optimal forecasts, whereas statistical methods are more effective.

Keywords: | Multivariate volatility, forecasts, forecast evaluation, Model confidence set |

JEL: | C22 |

Date: | 2009-02 |

By: | George Athanasopoulos Osmani T. de C. Guillén João V. Issler Farshid Vahid |

URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2009-2&r=for |

We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag-length and rank, relative to the commonly used procedure of selecting the lag-length only and then testing for cointegration.
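A drastically simplified version of selection by information criterion, varying only the lag length of an unrestricted VAR rather than performing the joint lag/rank search the paper proposes, might look like:

```python
import numpy as np

def var_bic(y, max_lag):
    """Select a VAR lag length by BIC: for each candidate lag, fit the VAR
    by least squares and score log det of the residual covariance plus a
    penalty on the parameter count (each lag uses its own effective
    sample, a common simplification)."""
    T, k = y.shape
    best_lag, best_bic = 1, np.inf
    for p in range(1, max_lag + 1):
        Y = y[p:]                                   # regressand
        X = np.hstack([y[p - j:-j] for j in range(1, p + 1)])
        X = np.hstack([np.ones((len(Y), 1)), X])    # intercept + lags
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ B
        sigma = resid.T @ resid / len(Y)
        n_par = k * X.shape[1]
        bic = np.log(np.linalg.det(sigma)) + n_par * np.log(len(Y)) / len(Y)
        if bic < best_bic:
            best_lag, best_bic = p, bic
    return best_lag

# Simulated bivariate VAR(1): BIC should recover the true lag of one.
rng = np.random.default_rng(42)
T, k = 200, 2
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.normal(0, 1, k)
chosen = var_bic(y, max_lag=4)
```

The paper's procedure extends this idea by scoring lag length, cointegrating rank and short-run rank jointly, with data-dependent penalties, rather than selecting the lag first and testing for cointegration afterwards.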

Keywords: | Reduced rank models, model selection criteria, forecasting accuracy |

JEL: | C32 |

Date: | 2009-02 |

By: | Ronald MacDonald Lukas Menkhoff Rafael R. Rebitzky |

URL: | http://d.repec.org/n?u=RePEc:gla:glaewp:2009_13&r=for |

This paper sheds new light on a long-standing puzzle in the international finance literature, namely, that exchange rate expectations appear inaccurate and even irrational. We find for a comprehensive dataset that individual forecasters' performance is skill-based. 'Superior' forecasters show consistent ability, as their forecasting success holds across currencies. They seem to possess knowledge of the role of fundamentals in explaining exchange rate behavior, as indicated by better interest rate forecasts. Superior forecasters are more experienced than the median forecaster and have fewer personnel responsibilities. Accordingly, foreign exchange markets may function in less puzzling and irrational ways than is often thought.

Keywords: | Foreign exchange market; individual exchange rate forecasts; interest rate forecasts; forecaster experience |

JEL: | F31 |

Date: | 2009-03 |

By: | Michael B. Devereux Gregor W. Smith James Yetman |

URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:14795&r=for |

Standard models of international risk sharing with complete asset markets predict a positive association between relative consumption growth and real exchange-rate depreciation across countries. The striking lack of evidence for this link (the consumption/real-exchange-rate anomaly, or Backus-Smith puzzle) has prompted research on risk-sharing indicators with incomplete asset markets. That research generally implies that the association holds in forecasts, rather than realizations. Using professional forecasts for 28 countries for 1990-2008, we find no such association, thus deepening the puzzle. Independent evidence on the weak link between forecasts for consumption and real interest rates suggests that the presence of 'hand-to-mouth' consumers may help to resolve the anomaly.

JEL: | F37 |

Date: | 2009 |

By: | Jan J. J. Groen George Kapetanios |

URL: | http://d.repec.org/n?u=RePEc:fip:fednsr:363&r=for |

In a factor-augmented regression, the forecast of a variable depends on a few factors estimated from a large number of predictors. But how does one determine the appropriate number of factors relevant for such a regression? Existing work has focused on criteria that can consistently estimate the appropriate number of factors in a large-dimensional panel of explanatory variables. However, not all of these factors are necessarily relevant for modeling a specific dependent variable within a factor-augmented regression. This paper develops a number of theoretical conditions that selection criteria must fulfill in order to provide a consistent estimate of the factor dimension relevant for a factor-augmented regression. Our framework takes into account factor estimation error and does not depend on a specific factor estimation methodology. It also provides, as a by-product, a template for developing selection criteria for regressions that include standard generated regressors. The conditions make it clear that standard model selection criteria do not provide a consistent estimate of the factor dimension in a factor-augmented regression. We propose alternative criteria that do fulfill our conditions. These criteria essentially modify standard information criteria so that the corresponding penalty function for dimensionality also penalizes factor estimation error. We show through Monte Carlo and empirical applications that these modified information criteria are useful in determining the appropriate dimensions of factor-augmented regressions.
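A stylised criterion in this spirit augments a standard information criterion with a penalty rate that grows as the panel dimensions shrink, so that factor estimation error is penalised. The specific penalty below is an assumed Bai-Ng-style form chosen for illustration, not the paper's formula.

```python
import numpy as np

def select_factor_dim(X, y, r_max):
    """Choose the number of PCA factors for a factor-augmented regression
    of y on the leading principal components of the panel X, using an
    information criterion whose penalty rate depends on both panel
    dimensions (a stylised criterion, not the paper's exact one)."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    # Principal-component factor estimates, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    factors = Xc @ Vt.T
    c = (N + T) / (N * T) * np.log(min(N, T))   # assumed penalty rate
    best_r, best_ic = 1, np.inf
    for r in range(1, r_max + 1):
        F = np.hstack([np.ones((T, 1)), factors[:, :r]])
        b, *_ = np.linalg.lstsq(F, y, rcond=None)
        ssr = np.sum((y - F @ b) ** 2)
        ic = np.log(ssr / T) + r * c
        if ic < best_ic:
            best_r, best_ic = r, ic
    return best_r

# Simulated panel driven by two common factors; y loads on both, so the
# relevant factor dimension for the regression is two.
rng = np.random.default_rng(0)
T, N = 200, 50
common = rng.normal(size=(T, 2))
X = common @ rng.normal(size=(2, N)) + 0.5 * rng.normal(size=(T, N))
y = common @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=T)
r_hat = select_factor_dim(X, y, r_max=6)
```

The key point the abstract makes is visible here: the criterion targets the factors relevant for predicting y, not the total number of factors in the panel X.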

Keywords: | Regression analysis ; Econometric models ; Time-series analysis ; Forecasting |

Date: | 2009-03 |

By: | Rime, Dagfinn Sarno, Lucio Sojli, Elvira |

URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:7225&r=for |

This paper adds to the research efforts that aim to bridge the divide between macro and micro approaches to exchange rate economics by examining the linkages between exchange rate movements, order flow and expectations of macroeconomic variables. The basic hypothesis tested is that if order flow reflects heterogeneous expectations about macroeconomic fundamentals, and currency markets learn about the state of the economy gradually, then order flow can have both explanatory and forecasting power for exchange rates. Using one year of high-frequency data collected via a live feed from Reuters for three major exchange rates, we find that: i) order flow is intimately related to a broad set of current and expected macroeconomic fundamentals; ii) more importantly, order flow is a powerful predictor of daily movements in exchange rates in an out-of-sample exercise, on the basis of economic value criteria such as Sharpe ratios and performance fees implied by utility calculations.

Keywords: | exchange rates; forecasting; macroeconomic news; microstructure; order flow |

JEL: | F31 |

Taken from the NEP-FOR mailing list edited by Rob Hyndman.