Special Issue of IJF on Presidential Election Forecasting

Anticipating what is likely to be one of the most interesting elections in modern history, the current issue of the International Journal of Forecasting (April–June 2008) is devoted to forecasting presidential elections.

Special issue editors James E. Campbell of the University at Buffalo, SUNY and Michael S. Lewis-Beck of the University of Iowa assembled seventeen forecasters (from political science, economics, and history) who have written ten articles on forecasting U.S. presidential elections.

Articles in this special issue range from particular forecasting models to the broader concerns of the election forecasting enterprise. Several articles present specific election forecasting models using diverse methodological tools, from econometric modeling to prediction markets to historical analysis. Other articles evaluate the success of election forecasting in past elections and address questions confronting forecasting models, such as whether open-seat and incumbent elections should be treated differently. While most of the articles deal with the general election, one article ventures into forecasting the parties' nomination contests.

Some brief glimpses into the articles:

1. In "U.S. Presidential Election Forecasting: An Introduction" co-editors Campbell and Lewis-Beck provide a brief history of the development of the election forecasting field and an overview of the articles in this special issue.

2. In "Forecasting the Presidential Primary Vote: Viability, Ideology and Momentum," Wayne P. Steger of DePaul University takes on the difficult task of improving on forecasting models of presidential nominations. He focuses on the forecast of the primary vote in contests where the incumbent president is not a candidate, comparing models using information from before the Iowa Caucus and New Hampshire primary to those taking these momentum-inducing events into account.

3. In "It's About Time: Forecasting the 2008 Presidential Election with the Time-for-Change Model," Alan I. Abramowitz of Emory University updates his referenda theory based "time for a change" election forecasting model first published in 1988. Specifically, his model forecasts the two-party division of the national popular vote for the in-party candidate based on presidential approval in June, economic growth in the first half of the election year, and whether the president's party is seeking more than a second consecutive term in office.

4. In "The Economy and the Presidential Vote: What the Leading Indicators Reveal Well in Advance," Robert S. Erikson of Columbia University and Christopher Wlezien of Temple University ask what is the preferred economic measure in election forecasting and what is the optimal time before the election to issue a forecast? They find that leading indicators not only offer better predictions than income growth and that leading indicators assessed months before the election predict as well as income growth measured on election eve.

5. In "Forecasting Presidential Elections: When to Change the Model?" Michael S. Lewis-Beck 2

of the University of Iowa and Charles Tien of Hunter College, CUNY ask whether the addition of variables can genuinely reduce forecasting error, as opposed to merely boosting statistical fit by chance. They explore the evolution of their core model – presidential vote as a function GNP growth and presidential popularity. They compare it to a more complex, "jobs" model they have developed over the years and conclude that the more complex model exhibits theoretical and empirical gains over the simpler model.

6. In "Forecasting Non-Incumbent Presidential Elections: Lessons Learned from the 2000 Election," Andrew H. Sidman, Maxwell Mak, and Matthew J. Lebo of SUNY Stony Brook

use a Bayesian Model Averaging approach to the question of whether economic influences have a muted impact on elections without an incumbent as a candidate. The Sidman team concludes that a discount of economic influences actually weakens general forecasting performance.

7. In "Evaluating U.S. Presidential Election Forecasts and Forecasting Equations," James E. Campbell responds to critics of election forecasting by identifying the theoretical foundations of forecasting models and offering a reasonable set of benchmarks for assessing forecast accuracy. Contrasted with the Sidman team's article above, Campbell's analyses of his trial-heat and economy forecasting model and of Abramowitz's "time for a change" model indicates that it is still at least an open question as to whether models should be revised to reflect more muted referendum effects in open seat or non-incumbent elections.

8. In "Campaign Trial Heats as Election Forecasts: Measurement Error and Bias in 2004 Presidential Campaign Polls," Mark Pickup of Oxford University and Richard Johnston of the University of Pennsylvania provide an assessment of polls as forecasts. Comparing various sophisticated methods for assessing overall systematic bias in polling on the 2004 US presidential election, Johnston and Pickup show that three polling houses had large and significant biases in their preference polls.

9. In "Prediction Market Accuracy in the Long Run," Joyce E. Berg, Forrest D. Nelson, and Thomas A. Reitz, each affiliated with the University of Iowa's Tippie College of Buisiness, compare the presidential election forecasts produced from the Iowa Electronic Market (IEM) to forecasts from an exhaustive body of opinion polls. Their finding is that the IEM is usually more accurate than the polls.

10. In "The Keys to the White House: An Index Forecast for 2008," Allan J. Lichtman of American University provides an historian's checklist of 13 conditions, or "keys," that together forecast the presidential contest. These keys are a set of "yes or no" questions that concern both how the president's party has been doing and the circumstances surrounding the election. If fewer than six keys are turned against the in-party, it is predicted to win the election. If six or more keys are turned, the in-party is prediected to lose. Lichtman notes that this rule correctly postdicted every election from 1860 to 1980 and predicted the winner in every race since 1984.

11. In "The State of Presidential Election Forecasting: The 2004 Experience," by Randall J. Jones, Jr. reviews the accuracy of all of the major approaches used in forecasting the 2004 presidential election. In addition to examining campaign polls, trading markets, and regression 3 models, he examines the records of Delphi expert surveys, bellwether states, and probability models.

More details of articles…