September 30, 2024
For today’s post, I’m going to take a slightly different tack. One question that’s been asked of me by a few friends and relations is essentially, “What’s the deal with the polls?” Why is it that on Monday, Candidate A is reportedly winning, 46% to 42%, but on Wednesday, the news says Candidate B has an edge, 51% to 49%? What’s going on here? So, let’s talk about the polls, what they actually tell us, and how we can better understand them.
“Who Are You?”
The first thing to know is that there are many pollsters out there. There are the “big name” polling companies, organizations like the Pew Research Center, Gallup, SurveyUSA, or Mason-Dixon, whose regular business is conducting public opinion polls. These groups conduct all sorts of polls, year-round, and presidential campaign polling is just part of the fun. A number of polling organizations are connected with colleges and universities, like the Quinnipiac University Polling Institute, or Franklin & Marshall, which is featured in a lot of Pennsylvania-based stories. There are also news organizations that team up with pollsters to generate stories based on how “the electorate” feels. You’ll most often hear these introduced in the mass media as things like “in the latest NPR / Marist poll” or “the numbers in this week’s CBS News / YouGov poll…”
“The Real Me”
The next question, naturally, is how polls are conducted. Usually, this is a multi-step process:
- A customer contacts a pollster, asking for a poll to be conducted, with specific questions to ask, number of people to be interviewed, and so forth.
- The pollster hires a call center to call the agreed-upon number of people and ask the questions; sometimes this is done through random digit dialing (RDD), and sometimes by pulling telephone numbers from a phone book or a collated list of available numbers.
- Once the call center has a sufficient number of responses, it sends them back to the pollster, who then weights the various demographic categories to make sure different groups are properly represented.
- That data is then sent back to the original customer.
Simple, right? Well, at each step along the way, things can get tremendously complicated. Let’s look at the difficulties of each step:
“The Seeker” – The Customer and the Questions: Depending on who’s asking for the poll, they may have different goals in mind. For instance, if the customer has a specific political ideology, they may have the pollster ask questions like, “On a scale of one to complete whackadoodle, just how crazy would you say our opponent is?” Obviously, this is an extreme example, but there are polls out there that aren’t too far off from this sort of thing. The more reputable pollsters will work with the customer on crafting questions that are more even-keeled, but some pollsters themselves have an agenda. For instance, just this week, news broke that pollster Rasmussen Reports (which already has a widely reported right-leaning bias) had been sharing its data privately with the Trump campaign before releasing it to the public (LINK). Obviously, this is problematic.
“I Can’t Reach You” – The People They Contact: With RDD, the pollster will often choose an area code and prefix to start from, and then randomly select four more digits to fill out the rest of the number. Until recently, this gave pollsters a reasonable expectation of reaching people within a certain geographic area, but that is quickly changing. This method can also hit a fair number of business phones, fax machines, and other invalid numbers. The other thing to know is that if the call center uses a computer to do the dialing automatically, it is not allowed to call cell phones. So people who have only a cell phone and no landline (the vast majority of whom are younger Americans) are eliminated from automatic-dialer polls. As more and more people give up landlines, this will become more of an issue. And even if a call center successfully reaches a person, they may refuse to answer; it can take calling up to 10 numbers to generate one complete response. And finally, even after all that, there’s no guarantee that the person who responded is telling the truth. Famously, political commentator Chuck Todd was once called by SurveyUSA and easily managed to convince the system that he was a 19-year-old Republican Latina.
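To make the RDD process concrete, here’s a minimal Python sketch of the idea described above (the 717 area code and 555 prefix are made up for illustration): the first six digits are fixed and the last four are random, which is exactly why the method happily dials fax machines and business lines along with households.

```python
import random

def rdd_sample(area_code: str, prefix: str, n: int) -> list[str]:
    """Generate n distinct phone numbers sharing one area code and prefix,
    mimicking the random-digit-dialing approach described above."""
    numbers = set()
    while len(numbers) < n:
        line = f"{random.randint(0, 9999):04d}"  # the four random digits
        numbers.add(f"({area_code}) {prefix}-{line}")
    return sorted(numbers)

# e.g., draw 20 random numbers from the (hypothetical) 717-555 block
for number in rdd_sample("717", "555", 20):
    print(number)
```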
“I’m One” – Weighting the Categories: Here’s where the pollsters’ “secret sauce” comes into play. Depending on the results they receive, they will give certain groups and results more statistical “weight” to try to balance out the information that they know to be true. Let’s create an example scenario: Peter Pollster gets 100 responses to calls made between 8 a.m. and 5 p.m., Monday through Friday. Of those 100 calls, 75 were answered by women, because there are more housewives out there than there are househusbands. And since Peter knows that, demographically, Americans are closer to 50-50 between men and women, he’ll give more “weight” to the 25 male responses and less to the 75 female ones, to try to balance things out. The same goes for other demographic data, if Peter feels that certain categories are over- or under-represented in his information. Selecting those weight allocations can be extremely tricky, and the details are often a closely held secret of the pollsters themselves. (Note that this is where a lot of the pollsters of 2016 failed significantly, which resulted in them predicting that Hillary Clinton would trounce Donald Trump.)
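Here’s a minimal Python sketch of that balancing act, using made-up numbers for Peter’s sample: if the 75 women break 40% for Candidate A and the 25 men break 60%, reweighting each group back to a 50-50 population moves the raw 45% up to 50%.

```python
from collections import Counter

def weighted_estimate(responses, population_shares):
    """Weight each response by (population share / sample share) of its
    demographic group, then return the weighted percent backing Candidate A.

    responses: list of (group, backs_candidate_a) tuples
    population_shares: dict mapping group -> its share of the population
    """
    n = len(responses)
    sample_counts = Counter(group for group, _ in responses)
    # a group's weight = its population share / its share of the sample
    weights = {g: population_shares[g] / (sample_counts[g] / n)
               for g in sample_counts}
    total = sum(weights[g] for g, _ in responses)
    support = sum(weights[g] for g, backs_a in responses if backs_a)
    return 100 * support / total

# Peter's raw sample: 30 of 75 women and 15 of 25 men back Candidate A
responses = ([("women", True)] * 30 + [("women", False)] * 45 +
             [("men", True)] * 15 + [("men", False)] * 10)

print(weighted_estimate(responses, {"women": 0.5, "men": 0.5}))
# Unweighted, 45 of 100 back A (45%); weighted to 50-50, it becomes 50%.
```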
“I Can See for Miles” – Reporting the Results: Even here, there can be significant issues. The next time you hear poll numbers, pay attention to whether they report on “likely voters” or “registered voters” or “the whole electorate”. Each of these is a slightly different group that will answer questions in different ways. And just like the weighting issue above, how a pollster determines who is a “likely” voter can be highly subjective. For that matter, the customer who receives the information may share only portions of it with their audience, depending on their goals. Are they a public policy institute, just trying to get the word out on how things are looking? A news organization looking for a sensational news angle? A campaign hoping to drum up support for an issue? All of these could cherry-pick results to tell a narrative that may or may not be supported by the raw data.
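To see how much the “likely voter” screen matters, here’s a toy Python example (the respondents and the turnout-history screen are entirely made up): the exact same six responses give Candidate A a third of the vote under a strict screen but two thirds of the vote with no screen at all.

```python
def topline(respondents, screen):
    """Percent backing Candidate A among respondents who pass the screen."""
    voters = [r for r in respondents if screen(r)]
    return round(100 * sum(r["backs_a"] for r in voters) / len(voters), 1)

# Hypothetical respondents: past-turnout count plus current preference
respondents = [
    {"elections_voted": 3, "backs_a": False},
    {"elections_voted": 3, "backs_a": False},
    {"elections_voted": 2, "backs_a": True},
    {"elections_voted": 1, "backs_a": True},
    {"elections_voted": 0, "backs_a": True},
    {"elections_voted": 0, "backs_a": True},
]

# "Likely voters" (voted in 2+ of the last 3 elections) vs. everyone
print(topline(respondents, lambda r: r["elections_voted"] >= 2))  # 33.3
print(topline(respondents, lambda r: True))                       # 66.7
```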
“Getting In Tune”
Since the many failures of the 2016 presidential polls (and, to a certain extent, those of 2020), there has been a rise in so-called “aggregate” poll-reporting websites that gather and average data from multiple pollsters, in an effort to mathematically screen out some of the bias. (A lot of the information that I’m using for this story comes from one such academic site, Electoral-Vote.com, which I hold in high esteem.) Of course, as we are all human, with biases of our own both known and unknown, even this is not always tremendously successful. With any poll results you see or hear, it’s important to bear all of these caveats in mind: things like under- or over-weighted (or -sampled) demographic groups, the effects of how questions are asked, and the like.
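As a toy illustration of the aggregation idea, here’s a sample-size-weighted average in Python (the three polls and their sample sizes are hypothetical, echoing the numbers from the intro); the real aggregator sites layer on far more adjustments, such as recency decay and corrections for each pollster’s “house effect.”

```python
def aggregate(polls):
    """Average several polls, weighting each one by its sample size.

    polls: list of (candidate_a_pct, candidate_b_pct, sample_size) tuples.
    This is a deliberately naive version of what aggregator sites do.
    """
    total = sum(n for _, _, n in polls)
    a = sum(pct_a * n for pct_a, _, n in polls) / total
    b = sum(pct_b * n for _, pct_b, n in polls) / total
    return round(a, 1), round(b, 1)

# Three hypothetical polls of the race from the intro
polls = [(46, 42, 800),    # Monday's poll: A up by 4
         (49, 51, 600),    # Wednesday's poll: B up by 2
         (47, 45, 1000)]   # a third pollster: A up by 2

print(aggregate(polls))  # -> (47.2, 45.5): a small but steady edge for A
```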
“Behind Blue Eyes”
Thanks for bearing with me through a slight deviation from my usual type of post, but I thought this was important enough to address. If you have any burning questions or ponderings about politics, feel free to comment on this post, or send me a message through Facebook or text! My hope is to educate and illuminate, and by doing so, make civic engagement more… well, engaging!
And finally, I found inspiration for today’s section titles in another music artist. Feel free to guess who it is in the comments, and stay tuned for who’s next!