The opinion research industry is in “crisis.” Among my professional circles, the inaccuracy of the polling data and the surprise election of Donald Trump were THE topic on November 9th. However, this “crisis” is not exactly new. If you go back to the 2012 election, some of the same observations were made. So, what’s going on? Does this mean the end of quantitative research? Not really, but I think we need to be re-schooled on exactly what to expect from a survey in this day and age.
The most important lesson is to understand that surveys are NOT prediction tools. In the social sciences, we attempt to understand POBAs and behaviors. POBAs (short for perceptions, opinions, beliefs, and attitudes) are what people say. Behaviors are what people do. If you go back 50 years, the gap between POBAs and behaviors was much narrower. Since then, a lot has changed to widen that gap.
For a survey to be representative, every member of the target population must have a known, nonzero chance of being selected. When every home had a landline and people were OK with door-to-door canvassing, the results were darn good. Despite our best attempts to use blended sampling (online, cell phone, and landline), we still struggle with reliability. Factor in the further splintering of attention spans (text, social media, gaming, and now virtual reality), and the complexities of sampling are hard to fathom.
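To see why the sampling frame matters, here is a minimal simulation of coverage error. Everything in it is a made-up illustration, not real data: I assume 40% of a hypothetical population is cell-only and that those people happen to hold a different opinion. A landline-only frame can never reach them, so the estimate is biased no matter how large the sample gets.

```python
import random

random.seed(0)

# Hypothetical population: 40% are cell-only, and (purely for
# illustration) cell-only people support a proposal at a higher rate.
N = 100_000
population = []
for _ in range(N):
    cell_only = random.random() < 0.40
    support_rate = 0.60 if cell_only else 0.45
    population.append((cell_only, random.random() < support_rate))

true_support = sum(s for _, s in population) / N

# A landline-only frame: cell-only households can never be sampled,
# so their views are invisible to the survey.
landline_frame = [s for cell, s in population if not cell]
frame_sample = random.sample(landline_frame, 1000)
estimate = sum(frame_sample) / len(frame_sample)

print(f"true support:      {true_support:.3f}")
print(f"landline estimate: {estimate:.3f}")
```

With these assumed numbers, true support is roughly 51% while the landline-only estimate hovers near 45%; drawing a bigger sample from the same flawed frame only makes the wrong answer more precise.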
The other “elephant in the room” is response rates. Forty to fifty years ago, participation rates in public opinion polling were estimated at around 50%. Today, that number is in the single digits. Think about it: when you conduct a consumer survey, you have no clue what the roughly 95% who never responded think about your product or service.
Some of you might be thinking that I’m talking myself out of a job. Hardly! The need for insights is greater than ever. (Notice I said insights and not information.) The garden-variety survey is only one tool, with its own strengths and weaknesses. Not to mention, we now have a number of newer survey-based techniques that perform much better and require a specialized skill set.
The big failure of the polling firms was an over-reliance on just one tool. This is in no way an endorsement of one candidate over the other, but many data points were overlooked: the attendance numbers at Trump rallies versus Clinton rallies, the intensity of support among Trump supporters versus Clinton supporters, what people were willing to say privately versus publicly, and much more. When combined, the data, anecdotes, and observations could have painted a different picture. The lesson here: don’t rely on one tool, or you too will get Trumped. Believe me.