Hi Peter and All,
Here is something that might be a bit off topic, but I thought you all might like it.
Shalom,
Geketa

A Quick Trip Down Memory Lane of Recent Polling
So a lot of people who don’t read me that closely are going to look at what follows and interpret it as “Jim’s saying the polls are always wrong.” That’s not what I’m saying, but I’m prefacing all of this with that prediction, because we’ve all seen that when people don’t like what you have to say, they attempt to cut off discussion by calling you insane or silly. Sneering “truther” in response to a disagreement from the conventional wisdom is almost as worn out as “racist.”
At the heart of the entire enterprise of polling political races is the assumption that the people in the sample are a realistic representation of the folks who will vote in the election. Now that the response rate for polls has plummeted all the way down to 9 percent — that is, out of every 100 calls the pollster makes, only 9 are completed — getting a sample that looks like the likely electorate on Election Day is tougher and tougher.
So pollsters adjust: they make extra calls and make sure they have a sample that is properly balanced by gender, by race, by age, and oftentimes by the geography of the nation or state they’re polling. They do this based on a fairly simple conclusion — the makeup of the kind of people who will answer questions from a pollster for ten or twenty minutes may not accurately represent the makeup of those who will vote in the election. So if one gender, racial group, age group, or region is more likely to take the time to answer questions than another, why not one party?
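Here is a minimal sketch of what that weighting step looks like, using invented group shares and support numbers purely for illustration; this is not any particular pollster's actual method or data.

# Rough sketch of demographic weighting: each respondent's group is scaled
# up or down so the sample matches the assumed makeup of the electorate.
# All numbers below are made up for illustration only.

# Share of each group among completed interviews...
sample_shares = {"men": 0.42, "women": 0.58}

# ...versus the share the pollster expects to show up on Election Day.
target_shares = {"men": 0.48, "women": 0.52}

# Weight per group: how much to scale that group to hit the target mix.
weights = {g: target_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical candidate support within each group (invented numbers).
support = {"men": 0.47, "women": 0.51}

# Topline before and after weighting.
raw = sum(sample_shares[g] * support[g] for g in sample_shares)
weighted = sum(sample_shares[g] * weights[g] * support[g] for g in sample_shares)

print(f"raw topline: {raw:.1%}, weighted topline: {weighted:.1%}")

Pollsters run this kind of adjustment for gender, race, age, and region as a matter of course; the open question in the column is whether party identification deserves the same treatment.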
Folks like me have been wondering for a while whether folks on the right — with distrust and suspicion of the media fueled by decades’ worth of stories and examples and anecdotes of what they deem media bias — are more likely to hang up on the pollster, and/or urge him to do anatomically difficult things to himself, than folks on the left. Think of this as an American version of the “Shy Tory” factor.
Look back at history:
In 2002, Democrats argued, and much of the media agreed, that President George W. Bush had been “selected, not elected” in 2000, and contended that despite the events of 9/11 and the talk of war with Iraq, Democrats would thrive in the midterm elections.
I found an article describing the difference between the late polls and the final results on a lefty site charging massive voter fraud in favor of the Republicans. The author summarizes:
- 14 races showed a post opinion poll swing towards the Republican Party (by between 3 and 16 points);
- 2 races showed a post opinion poll swing towards the Democratic Party (by 2 and 4 points);
- In three races the pollsters were close to correct;
- The largest post opinion poll vote swings occurred in Minnesota and Georgia, where pollsters got the final result wrong.
2004: Bob Shrum was calling John Kerry “Mr. President” after seeing the first round of exit polls. Think about it — this wasn’t just guessing who would actually vote; everybody coming out of a polling place was a definite voter. Even then, it got thrown off because Kerry voters were much more willing to talk to the exit pollsters than Bush voters:
Interviewing for the 2004 exit polls was the most inaccurate of any in the past five presidential elections as procedural problems compounded by the refusal of large numbers of Republican voters to be surveyed led to inflated estimates of support for John F. Kerry, according to a report released yesterday by the research firms responsible for the flawed surveys.
The exit pollsters emphasized that the flaws did not produce a single incorrect projection of the winner in a state on election night. But "there were 26 states in which the estimates produced by the exit poll data overstated the vote for John Kerry . . . and there were four states in which the exit poll estimates overstated the vote for George W. Bush," said Joe Lenski of Edison Media Research and Warren Mitofsky of Mitofsky International.
One other point: The exit pollsters were disproportionately collegiate women. Raise your hand if you think some men might be willing to tell a cute college coed that they voted for Kerry. Yup, me too.
2006: The popular vote in the House of Representatives races came out to 52 percent for the Democrats, 44 percent for Republicans, an eight-point margin. Some institutions came close on the generic-ballot question: USA Today/Gallup (seven points), ABC News/Washington Post (six points), and Pew (four points). But others overstated it dramatically: Fox News (13 points), CNN (20 points), Newsweek (16 points), Time (15 points), and CBS/New York Times (18 points).
2008: If you’re a pollster who tends to overstate the number of Democrats in your sample, this was your year — fatigue over President Bush and war, a Wall Street collapse and economic meltdown, a drastically underfunded Republican candidate who spent much of his career fighting his own party, the first African-American nominee of a major party . . . and yet, some pollsters still overshot it: Marist, CBS News, and NBC/Wall Street Journal had Obama winning by nine, and Reuters had Obama winning by eleven, as did Gallup.
2010: Polling wasn’t quite as bad this cycle; everyone seemed to know a GOP wave was coming, and by the time Election Day rolled around, the GOP lead on the generic ballot turned out to have been overstated in quite a few of the later samples. But what’s interesting is how the polls indicating a GOP tsunami didn’t impact the conventional wisdom within Washington. The GOP’s gain of 63 seats — a final majority of 242 seats — was well beyond the total predicted by Politico’s John Harris and Jim Vandehei (224), NPR’s Ken Rudin (219), Arianna Huffington (228), and CNN’s Candy Crowley (223). This is not to argue a crazy conspiracy among the Washington crowd, just to point out that this year, for some reason, the polls didn’t influence the Beltway expectations — why, it’s almost as if poll results showing good news for Democrats are taken more seriously than ones showing good news for Republicans.
Then, of course, you have the individual pollsters who sometimes go . . . well, haywire. Here’s an excerpt from my piece about Zogby, who became the liberals’ pollster of choice in 2002 and 2004:
In 2002, his final polls were pretty lousy. In Minnesota, Zogby predicted Democrat Walter Mondale over Republican Norm Coleman by 6 points; Coleman won by 3. In Colorado, Zogby picked Democrat Tom Strickland over GOP incumbent Wayne Allard by 5; Allard won by 5. In Georgia, Zogby picked Democrat Max Cleland over Republican Saxby Chambliss by 2; Chambliss won by 7. In Texas, Zogby’s final poll had Republican John Cornyn over Democrat Ron Kirk by 4 points; Cornyn won by 12. Zogby’s final poll in the Florida gubernatorial race had Jeb Bush winning by 15, but only three weeks earlier he had Bush winning by only 3. Bush won by 13 points.
Late afternoon on Election Day [2004] — awfully late for a final call — Zogby predicted that Kerry would win Florida, Ohio, Iowa, and New Mexico (0 for 4!) and get at least 311 votes in the Electoral College, while Bush was assured of only 213. (The remaining 14 electoral votes were too close to call.)
There’s no other way to say it: The Big Z’s final polls were garbage. His final poll had Colorado too close to call; Bush won by 7 points. He had Florida by a tenth of a percentage point for Kerry and “trending Kerry”; Bush won by 5 points. Zogby had Bush winning North Carolina by 3; the president won John Edwards’s home state by 13. Zogby had Bush leading Tennessee by 4; the president won by 14. Zogby called Virginia a “slight edge” for the GOP; Bush won by 8. In West Virginia, Zogby predicted a Bush win by 4; the president won by 13. And in the vital swing state of Wisconsin, Zogby had Kerry up by 6; the final margin was 1 point.
Zogby’s dramatically far-off results were, I would argue, fueled by a combination of hubristic overconfidence in his own ability to read the mood of the electorate and the desire to tell his biggest fans what they want to hear. I’ll let you conclude if you think that description might apply to any other pundit you see cited a lot these days — including myself.
Besides pollsters seeing what they want to see, we must recall the fairly recent example of Research 2000, which may not have actually conducted the surveys that it announced to the world. Here’s a good summary of that scandal:
It came after Daily Kos published a statistical analysis of Research 2000's polls that alleged a series of statistical anomalies among the results. That analysis led Moulitsas to conclude that the weekly poll Research 2000 had conducted and run on Daily Kos during 2009 and 2010 "was likely bunk."
Moulitsas added that Ali had "refused to offer any explanation" for the anomalies or turn over raw data as requested. Daily Kos lawyer Adam Bonin vowed to "file the appropriate discovery requests" in order to determine whether Ali had fabricated data.
In a rambling public response published last July, Ali characterized "every charge" made by the Daily Kos lawsuit as "pure lies, plain and simple." He promised that "the motives as to why Kos is doing it will be revealed in the legal process."
But by agreeing to a settlement, Ali leaves open the question of whether his data were in fact fabricated.
The same July statement also included a comment that raised eyebrows among pollsters (typos in original):
Yes we weight heavily and I will, using te margin of error adjust the top line and when adjusted under my discretion as both a pollster and social scientist, therefore all sub groups must be adjusted as well.
After sending that statement, Ali disappeared from public view. Attempts to contact his email account temporarily bounced, his Twitter account went silent and the Research 2000 website started redirecting to a Wikipedia entry on opinion polls. Ali started posting again to his Twitter account two weeks ago, although he has so far not mentioned either the lawsuit or his polling business.
Now, not every pollster is making up their results; probably none of the polls we read about today are invented out of whole cloth. But this case suggests that the most paranoid scenario — a pollster not really collecting data, just pretending to and telling the client some combination of what they want to hear and what sounds realistic — can happen.
I mention all of this because I hear from a lot of readers — up through this past weekend, in fact — some variation of “EEK! X poll shows my candidate down!”
Well, your candidate may be down. But you should know better than to panic over a poll, and you should know that there’s nothing anyone could or should be telling you to make you stop being as active as you are in these final hours. You should be checking the samples, to see if the partisan breakdown makes sense to you. If the percentage of Democrats in the sample is higher than the percentage of Democrats in the 2008 exit polls, some skepticism is warranted.
That’s how you find CNN releasing a poll Sunday night that has it tied, 49 percent to 49 percent, despite Mitt Romney winning independents by 22 points, 59 percent to 37 percent. Why? “Among those likely voters, 41% described themselves as Democrats, 29% described themselves as Independents, and 30% described themselves as Republicans.”
If the electorate is D+11 Tuesday, Romney’s doomed. If Romney’s winning independents by 22, he’s winning in a landslide.
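For what it’s worth, the arithmetic behind that point is just a weighted average: a candidate’s topline is each party group’s share of the electorate times his support within that group, summed up. The quick calculation below uses the 41/29/30 split and the 59–37 independent number from the CNN poll quoted above; the partisan-loyalty rates and the alternative “more even” electorate are my own illustrative assumptions, not CNN’s internals.

# Back-of-the-envelope: how the partisan mix of a sample drives the topline.
# Party shares (41/29/30) and the 59-37 independent split come from the CNN
# poll cited above; the loyalty/crossover rates and the "even split" scenario
# are illustrative assumptions, not CNN's actual crosstabs.

def topline(mix, support):
    """Weighted share: sum over party groups of (share of electorate) x
    (candidate's support within that group)."""
    return sum(mix[g] * support[g] for g in mix)

# Assumed support within each party group (D/I/R).
romney = {"D": 0.06, "I": 0.59, "R": 0.93}
obama  = {"D": 0.93, "I": 0.37, "R": 0.06}

scenarios = {
    "CNN sample (D+11)":       {"D": 0.41, "I": 0.29, "R": 0.30},
    "Hypothetical even split": {"D": 0.35, "I": 0.30, "R": 0.35},
}

for name, mix in scenarios.items():
    r, o = topline(mix, romney), topline(mix, obama)
    print(f"{name}: Romney {r:.1%}, Obama {o:.1%}, margin {r - o:+.1%}")

Holding the within-group numbers fixed, the assumed D+11 mix produces roughly an Obama +3 race, while a roughly even mix produces roughly a Romney +7 race — which is the whole argument: the partisan composition of the sample, not the independent number, is doing the work.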