Bowers Vs. 538 Vs. Pollster, Part 2

by: Chris Bowers

Wed Jan 07, 2009 at 20:30


Read Part One here

Two days ago, I compared the average error between final predictions and final election results for Pollster.com, fivethirtyeight.com, and my own predictions. Across the 65 elections on November 4th, 2008, where all three sites published final predictions or estimates, Pollster.com and fivethirtyeight.com turned out to be equally accurate, and I lagged about 8-10% behind.

While I was only a bit behind, I still wanted to see where I was less accurate. It turns out that when blowouts (final margin over 20%) and rarely polled elections (only one poll in the final eight days) are removed, my simple, rudimentary methodology was actually the equal of Pollster and 538. As long as there were at least two polls in the final eight days, simple poll averaging predicted election outcomes as well as any other methodology around. Data in the extended entry.
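For readers curious what that rudimentary methodology amounts to, here is a minimal sketch of simple poll averaging in Python. The data layout and function names are hypothetical illustrations, not the actual code behind these numbers:

    # Minimal sketch of simple poll averaging (hypothetical data layout).
    # The prediction is the mean Dem-minus-Rep margin across all polls
    # taken in the final eight days before the election.

    def predict_margin(final_polls):
        """Average the Dem-minus-Rep margin across the final polls."""
        margins = [dem - rep for dem, rep in final_polls]
        return sum(margins) / len(margins)

    def prediction_error(final_polls, actual_margin):
        """Absolute difference between predicted and actual margins."""
        return abs(predict_margin(final_polls) - actual_margin)

    # Example: three final polls and a 6.2-point actual margin
    polls = [(52.0, 46.0), (51.0, 47.0), (53.0, 45.5)]
    print(round(prediction_error(polls, 6.2), 2))   # -> 0.37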

First, I took the data from the previous analysis of error rates and sorted it according to how close each of the 65 campaigns was. You can see this data here:

Election prediction error rates, sorted by final margin (PDF)

The results showed that all methods were better at predicting closer elections, no doubt because closer elections receive more attention from pollsters and because the electorate pays closer attention to them, making responses to polls more informed and stable. Here are the median and mean errors, in percentage points, sorted by final margin:

Median Error (percentage points)
Final Margin              Bowers  538.com  Pollster  # of Cases
5.00 or less                1.33     1.12      1.33          10
5.01 to 10.00               1.63     1.50      2.04          13
10.01 to 15.00              2.65     2.22      2.12          11
15.01 to 20.00              3.46     2.71      2.29           9
20.00 or less (subtotal)    2.08     1.65      1.90          43
20.01 to 25.00              5.00     3.43      5.03           7
25.01 to 30.00              2.34     3.27      4.08           8
30.01 or more               7.24     5.59      6.89           7
All                         2.55     2.23      2.23          65

Mean Error (percentage points)
Final Margin              Bowers  538.com  Pollster  # of Cases
5.00 or less                2.32     1.90      1.43          10
5.01 to 10.00               2.60     2.01      2.36          13
10.01 to 15.00              2.48     2.43      2.45          11
15.01 to 20.00              3.52     3.40      3.36           9
20.00 or less (subtotal)    2.70     2.38      2.38          43
20.01 to 25.00              5.32     3.59      3.64           7
25.01 to 30.00              4.24     4.58      4.07           8
30.01 or more              10.36     6.98      8.12           7
All                         3.99     3.28      3.34          65
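For what it's worth, this sort of bucketing takes only a few lines of pandas. The sketch below assumes a hypothetical per-race table with one row per election and each forecaster's absolute error; the column names and values are illustrative stand-ins, not the actual spreadsheet:

    import pandas as pd

    # Hypothetical per-race table (illustrative values only)
    df = pd.DataFrame({
        "final_margin": [3.2, 7.9, 24.1, 33.0],
        "bowers_err":   [1.1, 2.0, 5.2, 8.8],
        "fte_err":      [0.9, 1.7, 3.5, 6.1],
        "pollster_err": [1.4, 2.1, 5.0, 7.2],
    })

    # Same margin buckets as the tables above
    bins   = [0, 5, 10, 15, 20, 25, 30, float("inf")]
    labels = ["5.00 or less", "5.01 to 10.00", "10.01 to 15.00",
              "15.01 to 20.00", "20.01 to 25.00", "25.01 to 30.00",
              "30.01 or more"]
    df["bucket"] = pd.cut(df["final_margin"], bins=bins, labels=labels)

    cols = ["bowers_err", "fte_err", "pollster_err"]
    print(df.groupby("bucket", observed=True)[cols].median())
    print(df.groupby("bucket", observed=True)[cols].mean())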

Generally speaking, the error for every method increases as the competitiveness of the campaign decreases. In the case of my methodology, accuracy drops off severely once the final margin of the campaign passes 20%.

However, look at what happens when the elections with only a single poll in the final eight days are removed from the averages:

Prediction error rates: 2 or more polls in the final eight days, final margin of 20.00% or less
Average  Bowers  538.com  Pollster  # of Cases
Median     1.62     1.65      1.74          37
Mean       2.55     2.44      2.29          37
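That subset is just a boolean filter on the same kind of hypothetical per-race table, assuming it also records how many polls each race had in the final eight days:

    import pandas as pd

    # Hypothetical per-race table (illustrative values only)
    df = pd.DataFrame({
        "final_margin":    [3.2, 7.9, 14.5, 24.1],
        "num_final_polls": [5, 3, 1, 2],
        "bowers_err":      [1.1, 2.0, 3.9, 5.2],
    })

    # At least two final-eight-day polls and a margin of 20 points or less
    subset = df[(df["num_final_polls"] >= 2) & (df["final_margin"] <= 20.0)]
    print(subset["bowers_err"].median(), subset["bowers_err"].mean())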

While my method lags behind the equally capable Pollster.com and 538.com in the overall numbers, in these 37 cases there is no real difference in the performance of the three methodologies. This is significant, because these happen to be the 37 cases where poll-based forecasters are both needed and useful. Do people really need election forecasters to tell them what will happen when polls show the margin of the campaign to be greater than 20%? The outcome is obvious in those cases. Similarly, are election forecasters even useful for elections where zero polls or only one poll was conducted during the final eight days? Not really, as you can see from the average prediction error of the single-poll campaigns:

Prediction error rates: single poll in the final eight days
Average  Bowers  538.com  Pollster  # of Cases
Median     3.58     2.82      3.39          21
Mean       6.01     4.45      4.53          21

For these single-poll campaigns, the mean error is roughly the same as, or even greater than, the margin of error of a single poll. That isn't very useful. The median error is a bit better, but still poor compared to campaigns with multiple polls in the final eight days: every forecaster performs more than a full percentage point worse in this category, and close to two percentage points worse.
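As a rough sanity check on that comparison, the commonly reported 95% margin of error on a single candidate's share of a poll is about 1.96 * sqrt(p(1-p)/n). The sample sizes below are typical figures, not drawn from these races, and the error on the head-to-head margin can run up to roughly twice the reported number:

    from math import sqrt

    def moe(n, p=0.5, z=1.96):
        """95% margin of error on one candidate's share, in points."""
        return 100 * z * sqrt(p * (1 - p) / n)

    # Typical statewide sample sizes and their reported +/- figures
    for n in (400, 600, 800, 1000):
        print(n, round(moe(n), 1))   # 4.9, 4.0, 3.5, 3.1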

So, while my method is worse than Pollster.com's and fivethirtyeight.com's for campaigns with one or zero polls during the final eight days, and for blowout campaigns, it is the equal of those methods for every other type of election forecasting. Given that over 90% of all closely watched campaigns are neither single-poll nor blowout affairs, I think that is pretty good. Perhaps it isn't surprising, either, since in the end our forecasts all depend on the same polls. Election forecasters willing to look only at the data and set aside their own preconceptions will all flourish in high-polling, high-attention environments.

