I’ve gone over the network forecasts for the 2016 presidential election, and it wasn’t a pretty sight. On one of the election shows I watched, a pundit claimed that “a lot of forecasters will be out of a job right now.” I don’t buy that line; he was wrong about everything else, so I’ll chalk it up as another bad prognostication. But obviously there was, and will be, a lot of teeth-gnashing and finger-pointing from media outlets that have paid tens of millions of dollars for accurate forecasts.

Here’s a post from a gentleman arguing that Nate Silver at 538 actually got it right. His reasoning is that Silver’s expected margin of victory was only wrong for Ohio:

[Chart: us-election-2016-538-prediction]

That may be true, but as a political forecaster, should I be forecasting the margin of victory for each candidate, or the electoral vote win?

His chart means absolutely nothing. And even if it could mean something, he’s making the assumption that Ohio counts as much as Maine. From a political battleground perspective, this is completely faulty logic. Ohio has eighteen electoral votes; Maine has four. Are they equal? Not even close. Should they be treated equally in the election? If you, as a military commander, attack a position with your division, does a difference of eighteen battalions versus four matter? If you don’t think so, you’re not going to be around for long.
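
To put numbers on that, here’s a minimal sketch in Python of the difference between scoring a forecast state by state and scoring it by electoral votes. The state calls are hypothetical, made up purely for illustration; only the electoral vote counts are the actual 2016 allocations.

```python
# Electoral vote counts are the actual 2016 allocations; the forecast and
# result calls below are hypothetical, for illustration only.
electoral_votes = {"Ohio": 18, "Maine": 4, "Florida": 29,
                   "Pennsylvania": 20, "Wisconsin": 10}

forecast = {"Ohio": "Trump", "Maine": "Clinton", "Florida": "Clinton",
            "Pennsylvania": "Clinton", "Wisconsin": "Clinton"}
result = {"Ohio": "Trump", "Maine": "Clinton", "Florida": "Trump",
          "Pennsylvania": "Trump", "Wisconsin": "Trump"}

# Treating every state equally: fraction of states called correctly.
states_correct = sum(forecast[s] == result[s] for s in result) / len(result)

# Weighting by electoral votes, which is how the election is actually decided.
ev_correct = sum(electoral_votes[s] for s in result if forecast[s] == result[s])
ev_total = sum(electoral_votes.values())

print(f"States called correctly: {states_correct:.0%}")              # 40%
print(f"Electoral votes called correctly: {ev_correct}/{ev_total}")  # 22/81
```

In this made-up example the unweighted score looks mediocre; the electoral-vote-weighted score, the one that actually decides the election, looks much worse.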

In the U.S., a presidential candidate wins the office by winning the electoral vote. The reason: the smaller states would never have ratified the Constitution if they thought they would have no voice, which is the same reason we have a Senate. If you think this isn’t still important, try to pass a constitutional amendment to take away the electoral vote and see how many of the smaller states sign off on it, and give up their Senate seats in the bargain. Here’s a forecast for you: none of them will. And yet, all we saw in a lot of the polls were straight voting percentages from a national perspective.

Here’s Nate Silver’s electoral forecast map for the election:

[Map: Nate Silver’s 538 electoral forecast]

Actually, you can see he called Ohio for Trump, or at least as close and leaning Trump. So the article above about Silver’s “triumph” is misleading in two ways. What killed Silver’s forecast (and everyone else’s) were Florida, Pennsylvania, Wisconsin, Michigan, and North Carolina. Here’s what actually happened:

[Map: the actual 2016 electoral results]

Interestingly enough, Silver predicted Clinton getting 302 electoral votes and Trump getting 235, almost the exact opposite of what happened.

So, let’s treat the entire massacre as a statistical crime scene. What happened? Why was everyone so wrong? Most of the forecasts, including Silver’s, were based on aggregates of polls from different states. Obviously, polling isn’t an exact science. If we’re going to talk about polls, we have three different possibilities:

1. People didn’t tell the pollsters who they were really going to vote for

This is called the “Bradley Effect,” a theory to explain discrepancies between polls and actual results. Rasmussen Reports actually looked into this and found some credence to the theory being in play in this election. But none of the major forecasters took it into account. Is there a way they could have? Well, Rasmussen did.

2. The pollsters were biased

The Trump campaign complained about this from the beginning, and many statisticians complained that some of the polls seemed to be oversampling Democrats. While bias is difficult, and in some cases impossible, to prove, even Nate Silver printed a mea culpa about his own bias during the primaries. And even though Silver admitted there was a problem, there’s ample evidence that it was repeated, which undercuts his claim that you can “fail forward” into becoming a good scientist. Scientists and researchers are supposed to be a group that holds its objectivity high, particularly in the twenty-first century, but there is evidence that research methods are becoming increasingly biased, especially in the social sciences, with this study claiming that sixty-five percent of research papers reach faulty conclusions due to misconduct or outright fraud. As an amusing side note, here’s Politico refuting the “Shy Trump voter” notion. But even Politico noticed the difference between the online polls and the phone polls, which leads to my third possibility.

3. The pollsters were incompetent

While many pollsters called Florida and North Carolina tossup states, none of the majors called Pennsylvania, Wisconsin, or Michigan battlegrounds. Silver raised the possibility back in May, but it didn’t seem to figure into his map above. In any case, Silver was wrong in that post about Pennsylvania deciding the election: it was Florida, and Pennsylvania, and Ohio, and the rest; there wasn’t a single pivotal state like in 2000. It wasn’t a landslide, but it wasn’t close from an electoral perspective.

Let’s discount the Bradley Effect as an excuse, since there was opportunity, especially after Brexit, to figure it into the models. Then we’re stuck with incompetence, or malice in the form of bias, or a combination of both.
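
Since all of these forecasts were built on aggregates of state polls, it’s worth seeing how little systematic error it takes to produce a miss this size. Here’s a toy sketch; the margins and the two-point bias are invented for illustration, not taken from any actual polling average. The point is that averaging many polls only cancels independent error: a bias shared by every poll survives the average and flips a whole cluster of close states at once.

```python
# Toy illustration: averaging polls cancels independent sampling noise,
# but a bias shared by every poll (say, systematically oversampling one
# party) passes straight through the average. All numbers are invented.

polled_margins = {        # hypothetical poll-average margins, Clinton minus Trump, in points
    "Florida": +0.6,
    "North Carolina": -1.0,
    "Pennsylvania": +1.9,
    "Michigan": +1.8,
    "Wisconsin": +1.5,
}
shared_bias = 2.0         # hypothetical systematic error in Clinton's favor, in points

for state, polled in polled_margins.items():
    actual = polled - shared_bias
    polled_winner = "Clinton" if polled > 0 else "Trump"
    actual_winner = "Clinton" if actual > 0 else "Trump"
    note = "FLIPS" if polled_winner != actual_winner else "holds"
    print(f"{state:15} polled {polled:+.1f}, actual {actual:+.1f} -> {note}")
```

In this toy example a shared two-point error flips four states at once, which is roughly the scale of miss we actually saw, and no amount of averaging more polls with the same flaw would have caught it.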

What does this mean?

In terms of polls, we’re going to have to assume that everything in the political polls, or that the pollsters claim, was either exaggerated or plain wrong until rigorously proven otherwise. I know we’re talking about statistics here, and there’s no strict right or wrong, but when you claim that one party is going to get 235 electoral votes and the other is going to get 302, and nearly the exact opposite happens, I think the word “wrong” is warranted. If you insist on a probabilistic measure, Silver gets a Brier score of .49, which isn’t great, and most of the other pollsters get a Brier score close to one, which is terrible.
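
For reference, the Brier score here is just the squared difference between the forecast probability and the outcome (1 if the event happened, 0 if it didn’t): 0 is a perfect forecast and 1 is a perfectly wrong one. A minimal sketch, assuming Silver’s final forecast put Clinton at roughly a 70% chance of winning; the 99% figure below is my stand-in for the more confident forecasters, not a quote of anyone’s actual number.

```python
def brier(prob_clinton_win: float, clinton_won: bool) -> float:
    """Brier score for one binary forecast: (forecast probability - outcome)^2."""
    outcome = 1.0 if clinton_won else 0.0
    return (prob_clinton_win - outcome) ** 2

# Clinton lost, so the outcome is 0.
print(f"{brier(0.70, clinton_won=False):.2f}")  # 0.49, roughly Silver's score
print(f"{brier(0.99, clinton_won=False):.2f}")  # 0.98, a near-certain Clinton call
print(f"{brier(0.50, clinton_won=False):.2f}")  # 0.25, a coin flip would have scored better
```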

Now, confidence in the overall reliability of these poll measurements has to be questioned. Some things just don’t fit, like the President having a high approval rating in the polls. If the Democrats just lost what looks like a mandate election, with the candidate carrying the President’s legacy forward going down to defeat, that approval number doesn’t quite fit. A year ago, I would have accepted it without question. I have no evidence it’s wrong, but the inaccuracy of the election polls calls the whole system into question. Then there are the supposed effects on the polls of the FBI investigation, or the debates, or any of the other items that came up during the election, where the pollsters said the polls went up by this much or down by that much. We can no longer gauge those effects, because we have no gauge.
