We don't do political polling, but we do a lot of surveys here at Small Business Labs.
Because of this, we're getting questions about why the polls on the presidential election were wrong.
First, the polls weren't wrong by much.
The election was quite close, particularly in the battleground states.
Trump won Florida by about 120,000 votes out of more than 9 million cast, and he won Pennsylvania by only about 68,000 votes. Both margins are around 1% of the total vote.
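As a quick back-of-the-envelope check, here's that arithmetic in Python. The Pennsylvania total of roughly 6 million ballots is our approximation for illustration; the Florida figures come from the counts above.

```python
# Rough check of the vote margins as a share of total votes cast.
# Pennsylvania's ~6 million total is an approximation we've added;
# the Florida figures come from the counts cited above.

races = {
    "Florida": (120_000, 9_000_000),      # (vote margin, total votes cast)
    "Pennsylvania": (68_000, 6_000_000),  # total cast is approximate
}

for state, (margin, total) in races.items():
    print(f"{state}: margin is {100 * margin / total:.1f}% of votes cast")
# Florida: margin is 1.3% of votes cast
# Pennsylvania: margin is 1.1% of votes cast
```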
The presidential polls mostly claimed to be accurate to within plus or minus 3%. The polls in these two states had Clinton winning by 2% to 3%, so Trump winning by about 1% was more or less within their margins of error.
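For readers wondering where that familiar plus or minus 3% comes from: it's the textbook sampling margin of error for a poll of about 1,000 respondents. Here's a minimal sketch; the sample size is our assumption, and this covers sampling error only.

```python
import math

# 95% confidence sampling margin of error for a simple random sample.
# Note this covers sampling error only -- not turnout-model, coverage,
# or nonresponse errors, which are discussed below.
n = 1_000  # typical national poll sample size (our assumption)
p = 0.5    # a 50/50 split is the worst case and maximizes the error
z = 1.96   # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {100 * moe:.1f} percentage points")  # ~3.1
```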
So statistically speaking, polls whose outcomes fell within their margins of error weren't wrong.
But the pollsters did pick the wrong winner and there are reasons beyond just their statistical margins of error for this.
In the case of the presidential election, most polls got the turnout mix wrong. Their forecast models assumed there would be a higher urban turnout and a lower rural turnout than what actually happened. This proved to be a problem as even traditionally Democratic rural areas went to Trump.
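Here's a toy example of how a wrong turnout mix can flip a poll's topline. All of the shares below are made up purely for illustration.

```python
# Hypothetical Clinton support within two groups of voters.
support = {"urban": 0.60, "rural": 0.40}

def topline(turnout_mix):
    """Clinton's overall share under an assumed turnout mix."""
    return sum(support[group] * weight for group, weight in turnout_mix.items())

modeled = {"urban": 0.55, "rural": 0.45}  # the pollster's turnout model
actual = {"urban": 0.45, "rural": 0.55}   # what actually happened

print(f"Modeled topline: Clinton {topline(modeled):.1%}")  # 51.0%
print(f"Actual topline:  Clinton {topline(actual):.1%}")   # 49.0%
```

Even though support within each group is unchanged, the shift in who shows up flips the winner.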
Most polls also had Clinton winning slightly higher percentages of minority voters and of college-educated men and women than she actually did. Again, these misses were small. But given how tight the election was, they mattered.
Watching the election returns, we knew Clinton was in trouble early. We knew this because we saw the voter turnout mix was favoring Trump in North Carolina, Florida and Virginia.
We were also struck by early exit poll data indicating that almost all of the polling errors went in Donald Trump's favor. This was confirmed by the more detailed post-election data. This is interesting because survey errors usually go both ways.
Unless there is systematic bias in the polls.
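A minimal simulation shows why one-sided misses are a red flag. With purely random errors, roughly half the polls should miss in each direction; add a shared bias and nearly all of them miss the same way. Every number here is hypothetical.

```python
import random

random.seed(42)
TRUE_MARGIN = -1.0  # hypothetical true result: Trump by 1 point
N_POLLS = 50

def share_favoring_clinton(bias):
    """Fraction of simulated polls that overstate Clinton's margin."""
    polls = [TRUE_MARGIN + bias + random.gauss(0, 2.0) for _ in range(N_POLLS)]
    return sum(p > TRUE_MARGIN for p in polls) / N_POLLS

print(f"No shared bias:      {share_favoring_clinton(0.0):.0%} miss toward Clinton")
print(f"2-point shared bias: {share_favoring_clinton(2.0):.0%} miss toward Clinton")
# Roughly 50% versus roughly 85%
```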
Polling sounds very scientific - and it mostly is. But there's a lot of human involvement in polling. The pollsters have to make dozens of small decisions as they create the polls, collect the data and analyze the results.
If the pollsters have a bias in one direction or another, this bias can easily creep into the poll results - even without the pollster consciously realizing it.
We think this happened with the presidential polls. We don't think it was intentional, or even recognized by those doing the polling. But the reality is that most pollsters supported Clinton, and most polling organizations are based in places that were pro-Clinton.
So what does this mean?
The main lesson is that polling is not as accurate as most people think. In addition to the stated margins of error, there are many other ways polls can go wrong. Consumers of polling and survey data should keep this in mind when reading or reviewing survey results of any kind.
For those of us who conduct polls, the lesson is to focus on reducing our biases. We also need to do a better job of educating people on the variability around survey results.