The Performance of Politics: Obama's Victory and the Democratic Struggle for Power

Jeffrey C. Alexander

Print publication date: 2010

Print ISBN-13: 9780199744466

Published to Oxford Scholarship Online: May 2012

DOI: 10.1093/acprof:oso/9780199744466.001.0001


Appendix on the Poll of Polls

The “poll of polls” brings together the best measurements of the national trends of public opinion for the Obama v. McCain presidential contest from June 1, 2008, until the election on November 4. It is a compilation of the periodic tracking polls from four sources: Real Clear Politics (RCP), Rasmussen Reports, Gallup Daily Tracking Poll, and ABC/Washington Post.


These sources cover a range that eliminates extremes and provides representative sampling of results from four pollster types: aggregators, independent polling companies, old and establishment agencies, and news media. What is often called the “best in the business,” Selzer & Company, to which the Daily Kos attributes a “stellar reputation,”1 does not have the temporal consistency to chart the five-month period from June through November or the experience of a giant such as Rasmussen. Likewise, SurveyUSA and Pollster.com are reliable aggregate sources, but, in the end, RCP is preferable not only for sheer volume of data and usage but also, more importantly, because of the high professional esteem it is accorded even by competitors. At the other end of the spectrum, I have left out the notorious Zogby Poll, which polling experts across the board have rated as the worst despite its being quoted extensively by news media.

Size also matters, as does a proven track record. Nate Silver, of fivethirtyeight.com (538), who first made a name for himself as the designer of a well-regarded method of predicting baseball statistics, says 538 received more than one million page views per day during the campaign,2 but he is still the new kid stepping up to the plate. FiveThirtyEight’s traffic is dwarfed by that of RCP, which was founded in 2000. Its site averaged just under seven million page views per day two weeks before the election, which, according to one of its founders, John McIntyre, was significantly higher than four years before.3

Rasmussen qualifies not only by virtue of its huge size but also, once again, because polling experts consistently hold it in high regard. Silver comments: “None of these tracking polls are perfect, although Rasmussen—with its large sample size and high pollster rating—would probably be the one I’d want with me on a desert island.”4 The Daily Kos cited Rasmussen’s prediction accuracy in a postelection post, and a Fordham University study backs up that claim.

Gallup is trickier. Some polling analysts, as well as calculations from “report cards”—for example, SurveyUSA, 538, and the Daily Kos—are wary of its methodology. However, Gallup maintains a consistently large sample size, having interviewed no fewer than 1,000 out of a rolling sample of 2,700 registered U.S. voters nationwide each night during 2008. It is also one of the few companies to include a supplement of Spanish-language and cell phone numbers, creating a sample that represents 98 percent of the adult population, compared with 85 percent for landline-only samples.5 A pioneer in public opinion polling, Gallup has maintained a strong industry reputation for absolute integrity since the 1930s. Despite critical comments, Silver considers its “strong professional ethics and sense of transparency” one of Gallup’s strengths.6

The ABC/Washington Post polls stand out from all of the other media/newspaper outfits, an exception to the disdain that, along with Zogby, media/newspaper polls often provoke among polling experts. They showed up as fourth and second on SurveyUSA report cards issued after the primaries,7 and they were rated the best of the newspaper polls and “average” in the field at large in 538’s assessment in May 2008.8

Political bias toward the left or the right is a consideration, but it is not as significant as some say, and it is also often offset by other considerations. Silver of 538, often considered a liberal pollster, underestimated Obama’s margin of victory by only four-tenths of one percent. Responding to a New York Times reporter who questioned his critical remarks about competitor RCP’s conservative bias, Silver acknowledged that “the two sites had a friendly rivalry and grudging respect for each other,”9 and he attests that RCP is “one of the first sites” he checked every day during the election.10

Virtually all pollsters fall prey to accusations of ideological or partisan bias. While I have not found evidence to support such accusations—against the four sources that compose the poll of polls—beyond a reasonable doubt, even if such biases do exist, the allegedly Republican/conservative-biased Rasmussen and RCP nicely balance out the ostensibly Democratic/liberal tendencies of the other two. More credible would be the worry over “house effects,” or the tendency for a poll to lean consistently toward one candidate rather than the other. To consider these systematic effects as “bias” would be “misleading,” according to Charles Franklin, professor of political science at the University of Wisconsin, Madison, and codeveloper of Pollster.com. Instead, as he explains, “[T]he differences are due to a variety of factors that represent reasonable differences in practice from one organization to another.”11 Thus, by using polling results from multiple sources with different statistical leanings—for example, Rasmussen has a slight Republican leaning effect, and ABC/Washington Post leans slightly toward Democrats12—I reduce the potential for interference from house effects.
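The cancellation logic can be sketched numerically. The true share and per-source offsets below are hypothetical, chosen only to illustrate how opposing house effects wash out in an even average; they are not measured values.

```python
# Sketch: how opposing "house effects" can cancel in an even average.
# All numbers here are made up for illustration.

def poll_of_polls(readings):
    """Evenly weighted mean of one reading per source."""
    return sum(readings) / len(readings)

true_share = 50.0
house_effects = {            # negative = leans Republican (hypothetical magnitudes)
    "Rasmussen": -0.8,
    "RCP": -0.4,
    "Gallup": +0.5,
    "ABC/WaPo": +0.7,
}

readings = [true_share + e for e in house_effects.values()]
estimate = poll_of_polls(readings)
# Individual readings miss the true share by up to 0.8 points; because the
# leans roughly offset, the combined estimate lands much closer to the truth.
```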

Nuts and Bolts

My compilation evenly weights the four polling sources described earlier. Clearly, the parameters are not identical—some survey public opinion daily but release the results on different dates, while others sample more periodically. This explains some of the sharp ups and downs on the various graphs, for example, in the earlier months, when ABC/Washington Post polling was less frequent. To ameliorate these temporal irregularities, I include different graphing types (line, area), as well as multiple temporal periods (daily, three-day, and seven-day). This form of representation illuminates some of the nuances in the poll of polls.
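The multiple temporal windows described above amount to trailing moving averages of different lengths. A minimal sketch, using hypothetical daily support numbers:

```python
def rolling_average(series, window):
    """Trailing moving average; windows are shorter at the start of the series."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily support numbers with day-to-day noise:
daily = [48, 51, 47, 50, 49, 52, 48, 50]
three_day = rolling_average(daily, 3)
seven_day = rolling_average(daily, 7)
# Longer windows damp the sharp ups and downs that sparse or noisy daily
# releases produce, at the cost of responding to real shifts more slowly.
```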

Gallup Daily Polling

Gallup Daily describes its polling mission as “track[ing] Americans’ views on politics, economics, and well-being, conducting 1,000 or more interviews each night and posting new numbers each day.”13 Its election-tracking results utilize combined data in three-day intervals, reporting the “percentage of registered voters who say they would support each candidate if the presidential election were held today.” The sample sizes range from about 2,400 to 4,700 registered voters (generally, about 2,700), and the maximum margin of sampling error is ±2 percentage points. Gallup Daily contacts respondents on landline telephones (for respondents with a landline telephone) or cellular phones (for respondents who have only a cell phone). There is considerable debate concerning Gallup’s new models for targeting likely voters, but in the poll of polls I utilize only the surveys conducted among registered voters.
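Gallup’s reported ±2-point maximum margin of error is consistent with the standard simple-random-sampling formula evaluated at its rolling sample size. The sketch below ignores design effects and weighting, which real polls must also account for:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a simple random sample of size n,
    evaluated at the conservative p = 0.5. Ignores design effects and
    weighting adjustments, which widen the error in practice."""
    return z * math.sqrt(p * (1 - p) / n)

# Gallup's typical rolling sample of ~2,700 registered voters:
moe_points = 100 * margin_of_error(2700)
# Roughly 1.9 points, consistent with the reported maximum of +/-2.
```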

Rasmussen Reports

Rasmussen is the only one among the eight daily tracking polls that uses an automated polling methodology (“robocalls”) to collect data for its surveys.14 On the one hand, it asserts that this process is “identical to that of traditional, operator-assisted research firms such as Gallup, Harris, and Roper” and provides this rationale: “Automated polling systems use a single, digitally recorded, voice to conduct the interview while traditional firms rely on phone banks, boiler rooms, and operator-assisted technology.” Their daily presidential tracking poll leads to consistency: “[T]he automated technology insures [sic] that every respondent hears exactly the same question, from the exact same voice, asked with the exact same inflection every single time.” On the other hand, the New York Times won’t publish the results from these types of polls, asserting they “employ an automated, recorded voice to call respondents who are asked to answer questions by punching telephone keys. Anyone who can answer the phone and hit the buttons can be counted in the survey—regardless of age.”15

Rasmussen chooses its samples of “adults” using census data to reflect population demographics such as age, race, gender, and party affiliation, and for political surveys in particular, it utilizes a series of screening questions to determine “likely” voters (e.g., voting history, interest in the current campaign, and likely voting intentions). Finally, it considers partisanship “through a dynamic weighting system that takes into account the state’s voting history, national trends, and recent polling in a particular state or geographic area.” This is one reason that Nate Silver considers Rasmussen to have had the most stable results even if it does make Rasmussen’s results a bit more (small c) conservative, reacting less strongly than others to changes in voters’ moods. Silver also notes that the three-day rolling sample of 3,000 likely voters is the largest sample size of any of the tracking polls.16
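Rasmussen’s “dynamic weighting system” is proprietary, but the general idea of weighting a sample toward census targets can be sketched with generic post-stratification. The groups, targets, and values below are hypothetical:

```python
def poststratify(responses, targets):
    """Weight respondents so group shares match population targets.
    A generic post-stratification sketch; Rasmussen's actual weighting
    system is proprietary, so this only illustrates the idea.
    responses: list of (group, value); targets: group -> population share."""
    counts = {}
    for group, _ in responses:
        counts[group] = counts.get(group, 0) + 1
    n = len(responses)
    total_w = weighted_sum = 0.0
    for group, value in responses:
        w = targets[group] / (counts[group] / n)  # target share / sample share
        total_w += w
        weighted_sum += w * value
    return weighted_sum / total_w

# Hypothetical sample over-representing group "A" (3 of 4 respondents)
# relative to 50/50 population targets; values are candidate support (0/1).
sample = [("A", 1.0), ("A", 1.0), ("A", 0.0), ("B", 0.0)]
weighted_support = poststratify(sample, {"A": 0.5, "B": 0.5})
```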

ABC/Washington Post

The ABC/Washington Post national tracking poll ran from October 16 through November 3, 2008.17 Aside from Gallup, it was the only other polling group to include cell phones. The “WaPo-ABC” daily polls utilized a multiple-night track (Thursday through Sunday nights) of interviews with approximately 450 randomly selected adults each day. For the last few days of the campaign, they increased the daily sample size to around 600, reporting that “These far-larger-than-usual sample sizes allow for analysis of some subgroups we can’t normally assess in regular polling,” citing Jewish voters, younger white evangelical Protestants, and younger white Catholics. Their stated sampling error is ±3 percentage points, with higher error margins for subgroups.18 More impressive than their numbers, however, are the ABC News and (to a slightly lesser degree) Washington Post public statements concerning their methods and methodology. Veteran polling expert Gary Langer elaborates the rigor of ABC’s standards:

The ABC News Polling Unit vets all survey research presented to ABC News to ensure it meets our standards for disclosure, validity, reliability and unbiased content. We recommend that our news division not report research that fails to meet these standards. On disclosure, in addition to the identities of the research sponsor and field work provider, we require a detailed statement of methodology, the full questionnaire and complete marginal data. If any of these are lacking, we recommend against reporting the results….In terms of content, we examine methodological statements for misleading or false claims, questionnaires for leading or biasing wording or ordering, and analyses and news releases for inaccurate or selective conclusions. In addition to recommending against reporting surveys that do not meet these standards, we promote and strongly encourage the reporting of good-quality polls that break new ground in opinion research.19

Langer also elaborates the intricacies of sampling cell phones and the beast called “sampling error,” referring the readers to an array of formal studies, several of which evaluate ABC polling results specifically. On November 4, Langer, who has won two Emmy awards for ABC’s reporting of public opinion polls in Iraq, proudly announces in his blog:

Our purpose hasn’t been to dwell on the horse race, but rather to further our understanding of how the public has come to its choices in the 2008 election—to identify the issues and candidate attributes that mattered, to measure voters’ responses to the thrust and parry of the race, to get a look inside the campaigns’ own playbooks as they’ve formed and made their appeals and to have an alternative to the spin and speculation that rush in when empirical data are absent. Public attitudes never are more important than in national elections. Our aim at the ABC News Polling Unit this year, as in presidential elections since 1984, has been simple: Trying our best to understand them.20

Similarly, The Washington Post offers a “Detailed Methodology” as well as a “FAQs” webpage, with countless links for further study, including one to “find the hard core, wonky details of Post polling methodology.”21

Real Clear Politics

RCP is not as forthcoming about its methodology as other aggregators, a matter of some debate among political bloggers, analysts, and, of course, other pollsters. Nate Silver of 538 has called out RCP on this issue, along with criticizing some of John McIntyre’s poll choices:

It is clear to me that there is substantial subjectivity in how RCP selects the polls to include in its averages. RCP does not publish an FAQ, or any other set of standards. Nor, in my conversation with John, was he willing to articulate one. In my view, the fact that RCP does not disclose a set of standards means ipso facto that they are making judgment calls—that there is some subjectivity involved—in how their polls are selected.22

At the same time, however, Silver makes clear, “[I]t does not necessarily follow that these judgment calls reflect any deliberate partisan leaning, i.e. any ‘bias.’…I take John at his word that this is not the case.”23 Silver acquiesces to the inevitability of political subjectivity even as he defends his own poll-aggregating methods: “Look—I’m not going to tell you that my site is completely devoid of spin. I am a Democrat, and I see the world through a Democratic lens. But what I can promise you is that we’ll keep the spin separate from our metrics.”24 McIntyre’s primary response to these critiques is one with which Silver agrees. Ultimately, it is a reflection of their “respective business models,” he explains. “[RCP is] aiming for simplicity and ease of use, and trustworthiness. And they do these things well: RCP remains one of the first sites that I read every day.”25

Meanwhile, RCP remains a top choice for countless political junkies from the Left and the Right. Among them are Arianna Huffington (who referred to it as the “gold standard” for polling in an October 2008 post), Howard Fineman, chief political correspondent for Newsweek during the 2008 campaign, and conservative NYT columnist David Brooks, who commented, “Some people wake up every morning with a raw egg and exercise. I wake up every morning with RealClearPolitics.com. It’s the perfect one-stop shopping for the smartest commentary on politics and life.”26 Without discounting Silver’s critique, which stands out among those by armchair pundits and highly ideological bloggers, I see little more than a quasi-friendly rivalry between the leader of the pack and a powerful up-and-coming force in the world of online political news and analysis.

While RCP does keep its precise methodology under wraps, several posts by Jay Cost, who maintains RCP’s HorseRaceBlog, as well as public statements by cofounder John McIntyre, do make certain technical elements clear. Real Clear Politics uses an unweighted average, or mean, of a number of polls and calculates the standard deviation—how much the polls are disagreeing with each other. Just before the election, Cost posted an extensive statistical primer on how this works, which was a remarkably reflexive piece on polling methodology. Testing the deviation of individual polls from the RCP average, Cost found that 9 of the 15 polls fell significantly outside the average (4 for Obama, 5 for McCain), and another 3 were just on the boundary of significance (1 for Obama, 2 for McCain). As Cost went on to explain, this is all about sampling:

The rules of statistics being what they are, we should expect a few polls here or there to fall outside the average by a statistically significant amount. But this is a lot…This variation cannot be chalked up to typical statistical “noise.” Instead, it is more likely that pollsters are disagreeing with each other in their sampling methodologies. In other words, different pollsters have different “visions” of what the electorate will look like on November 4th, and these visions are affecting their results.27
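One plausible reading of the test Cost describes is to flag polls whose candidate margin deviates from the unweighted average by more than their own sampling error. A sketch, with hypothetical margins and error figures (Cost’s exact criterion may differ):

```python
def significant_outliers(margins, moes):
    """Flag polls whose candidate margin differs from the unweighted average
    by more than their own margin of error (a rough stand-in for Cost's test;
    his precise significance criterion is not reproduced here)."""
    avg = sum(margins) / len(margins)
    return [abs(m - avg) > e for m, e in zip(margins, moes)]

# Hypothetical Obama-minus-McCain margins (points) and per-poll error margins:
margins = [3.0, 7.0, 11.0, 6.5, 1.0]
moes    = [3.0, 3.0,  3.5, 2.5, 3.0]
flags = significant_outliers(margins, moes)
# Polls sitting outside the average by more than their own error margins
# suggest disagreement in sampling methodology rather than mere noise.
```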

The number of polls going into RCP’s daily average varies, depending on individual polls’ frequencies. For the period covered by the poll of polls, there were results from a total of 28 different organizations, with intervals of three to four days containing anywhere from a handful in June through August to at least a dozen from mid-September onward. According to McIntyre, he requires at least three polls before producing an average. In response to concerns over keeping older polls in the averages, he emphasizes that they give more weight to recent polls and asserts that his site’s numbers provide “a clearer picture of where things truly stand.”28
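McIntyre’s two stated rules, a minimum of three polls and more weight on recent ones, can be sketched with exponential-decay weighting. RCP’s actual scheme is not public, so the half-life below is purely an assumption for illustration:

```python
def recency_weighted_average(polls, min_polls=3, half_life=4.0):
    """One way to give recent polls more weight: exponential decay by poll
    age in days. RCP's actual scheme is not public; half_life is an assumed
    illustrative parameter. `polls` is a list of (age_in_days, value) pairs."""
    if len(polls) < min_polls:
        return None          # McIntyre: at least three polls before averaging
    weights = [0.5 ** (age / half_life) for age, _ in polls]
    return sum(w * v for w, (_, v) in zip(weights, polls)) / sum(weights)

too_few = recency_weighted_average([(0, 50.0), (2, 48.0)])         # -> None
avg = recency_weighted_average([(0, 50.0), (4, 48.0), (8, 46.0)])  # weights 1, 0.5, 0.25
```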

What It’s All For: Measuring the Three Boulders in the Campaign

From the aggregated trends revealed in the poll of polls, it is clear when the first boulder hits and the celebrity metaphor effect begins. Although McCain experiences a slight surge by the second week of July, it is not until the end of the month that we see Obama’s numbers dip and remain that way rather than bouncing around. McCain gains just 1–3 percentage points on average during the August period, but, coupled with Obama’s drop, the race begins to take on a neck-and-neck stride, mirroring the start of June. At the end of August, Obama experiences a surge of support after his stellar convention performance but soon feels the effect of the curve ball hurled his way by McCain’s announcement of Sarah Palin as his running mate.

Buoyed by Palin’s initial popularity, the Republican campaign enjoys a new beginning in September with upward swings at the polls. But it is short lived. The Palin image deflates, and from September 10 on, the Republican numbers begin to fall. Then McCain’s failure to perform effectively in the early days of the financial crisis sends them tumbling. After September 15, McCain will not see 50 percent support levels again. He manages to stay in the mid-40s for a couple of weeks, but once the drama of the financial crisis has crystallized and the 30-day countdown to Election Day begins, the Republican numbers hover at a steady 41–43 percent, with only a minor surge in the last few days. It is not enough to counter the steady increases that Obama makes. For the second half of September, Obama’s numbers climb to the highest level of support since his nomination, reaching from the high 40s to a solid 50 percent or more throughout October.

With one week to go, Obama leads McCain by 10 percentage points or more—the high- and low-ball figures range from 41 percent for McCain to 54 percent for Obama. Even by the most conservative estimates, the gap narrows only to 6 points by the morning of Election Day, and most other pollsters anticipate a 7–9 point final margin. The final results bear out the pollsters’ predictions remarkably well. Aggregate pollsters Real Clear Politics and Pollster.com predicted the margin identically, within 1.1 points (which coincidentally was the average among all of the top polling outfits). The winner for accuracy, however, was Nate Silver of FiveThirtyEight, who called the margin within just 0.4 of a point and McCain’s and Obama’s shares of the popular vote within 0.1 and 0.6 points, respectively.


(1.) See poblano (aka Nate Silver), “Pollster Report Card,” Daily Kos, March 3, 2009, http://www.dailykos.com/storyonly/2008/3/4/1172/31168/539/468308.

(2.) Bernie Becker, “Political Polling Sites Are in a Race of Their Own,” NYT, October 22, 2008, http://www.nytimes.com/2008/10/28/us/politics/28pollsite.html?fta=y.

(3.) Ibid.

(4.) Nate Silver, “Tracking Poll Primer,” FiveThirtyEight, October 21, 2008, http://www.fivethirtyeight.com/2008/10/tracking-poll-primer.html. Further, Rasmussen accounts for about 37 percent of the input into 538’s tracking poll; see Silver, “House Effects in Da House,” FiveThirtyEight, August 25, 2008, http://www.fivethirtyeight.com/2008/08/house-effects-in-da-house.html.

(6.) Silver, “Tracking Poll Primer.”

(7.) SurveyUSA, “2008 Pollster Report Card, Updated to Include Potomac Primaries,” SurveyUSA Breaking News, February 13, 2008, http://www.surveyusa.com/index.php/2008/02/13/2008-pollster-report-card-updated-to-include-potomac-primaries/.

(8.) “Pollster Ratings, v3.1.1,” FiveThirtyEight, May 2, 2008, http://www.fivethirtyeight.com/2008/05/pollster-ratings-v311.html.

(9.) Becker, “Political Polling Sites Are in a Race of Their Own.”

(10.) Silver, “RCP and the R2K Tracking Poll,” FiveThirtyEight, September 22, 2008, http://www.fivethirtyeight.com/2008/09/rcp-and-r2k-tracking-poll_6506.html.

(11.) Charles Franklin, “How Pollsters Affect Poll Results,” Pollster.com (blog), August 24, 2008, http://www.pollster.com/blogs/how_pollsters_affect_poll_resu.php.

(12.) Ibid.

(13.) See “GALLUP DAILY Measures Explained,” Gallup.com, no date, http://www.gallup.com/poll/113842/GALLUP-DAILY-Measures-Explained.aspx.

(14.) See “Methodology,” Rasmussen Reports (blog), no date, http://www.rasmussenreports.com/public_content/about_us/methodology.

(15.) “The New York Times Polling Standards,” NYT, September 10, 2008, http://www.nytimes.com/ref/us/politics/10_polling_standards.html.

(16.) Silver, “Tracking Poll Primer,” October 21, 2008.

(17.) Prior to this period, I utilize data from their June, July, and August monthly polls, as well as the four polls taken in September and early October, which were also four-day rolling samples.

(18.) Jon Cohen, “WaPo-ABC Daily Tracking—Release #1,” WP (blog), October 20, 2008; http://voices.washingtonpost.com/behind-the-numbers/2008/10/wapo-abc_daily_tracking_1.html; Jennifer Agiesta, “WaPo-ABC Tracking: Nine Divided by 56 is…” WP, November 1, 2008, http://voices.washingtonpost.com/behind-the-numbers/2008/11/wapo-abc_tracking_nine_divided.html.

(19.) Langer, “ABC News’ Polling Methodology and Standards: The Nuts and Bolts of Our Public Opinion Surveys,” ABCNews.com, November 25, 2009, http://abcnews.go.com/US/PollVault/story?id=145373&page=1.

(20.) Langer, “Final Tracking,” The Numbers (ABC News blog), November 4, 2008, http://blogs.abcnews.com/thenumbers/2008/11/final-tracking.html; emphasis added.

(21.) “Washington Post Polls: Detailed Methodology,” WP, March 31, 2009, http://www.washingtonpost.com/wp-dyn/content/article/2009/03/31/AR2009033101229.html; “About Washington Post Polls,” WP, March 31, 2009, http://www.washingtonpost.com/wp-dyn/content/article/2009/03/31/AR2009033101241.html.

(22.) Silver, “RCP Follow-up,” FiveThirtyEight, October 2, 2008, http://www.fivethirtyeight.com/2008/10/rcp-follow-up.html.

(23.) Ibid.

(24.) Silver, “Real Credibility Problems,” FiveThirtyEight, October 2, 2008, http://www.fivethirtyeight.com/2008/09/real-credibility-problems.html.

(25.) Silver, “RCP and the R2K Tracking Polls.”

(26.) “RealClearPolitics.com Launches New Web Site, Announces Financing,” PR Newswire, March 14, 2006. Additionally, Jim VandeHei, executive editor and a founder of the Politico site, a Web-based political journal, stated via e-mail: “The site is an essential stop for anyone interested in politics…They do a better job than anyone in the business at flagging the must-read political stories and analysis on the Web each day.” Steve Johnson, “Real Clear Politics Real Clear on Its Growth, Mission,” Hypertext (chicagotribune.com blog), February 7, 2008, http://featuresblogs.chicagotribune.com/technology_internetcritic/2008/02/real-clear-poli.html.

(27.) Jay Cost, “A Note on the Polls,” RCP HorseRaceBlog, October 24, 2008, http://www.realclearpolitics.com/horseraceblog/2008/10/a_note_on_the_polls_1.html.

(28.) Carl Bialik, “Election Handicappers Are Using Risky Tool: Mixed Poll Averages,” The Numbers Guy (WSJ blog), February 15, 2008, http://online.wsj.com/public/article/SB120303346890469991.html.