The Space of Opinion: Media Intellectuals and the Public Sphere

Ronald N. Jacobs and Eleanor Townsley

Print publication date: 2011

Print ISBN-13: 9780199797929

Published to Oxford Scholarship Online: May 2012

DOI: 10.1093/acprof:oso/9780199797929.001.0001


Appendix A: Samples and Data


The sample design has several connected goals. First, we wanted representative samples large enough to compare the first years of Republican and Democratic presidential administrations, since prior research had led us to suspect a relationship between the party of the administration and the occupational composition of the field of media intellectuals (Townsley 2000). Second, we chose two-year periods during each presidency to capture the second year of a new administration, by which time most cabinet appointments have been made and arguably the honeymoon period with the media has ended (although see Clayman et al. 2007). Finally, choosing twenty-four-month periods helps us avoid event specificity, that is, a sample dominated by a single episode such as the events of September 11.

Along with quantitative variables describing regular features of the opinion space, there are approximately 10,000 pages of text data in the newspaper sample and another 10,000 in the television sample, about 20,000 pages overall. Our goal is to draw connections between the social space of opinion described by the quantitative measures (the analysis of authors' occupations, the range of content, length of speech, number of turns, and other characteristics of format), on the one hand, and an in-depth cultural analysis of the narrative features of particular cases, on the other. We supplement the content analyses throughout the chapters with close readings of strategically selected textual material. Indeed, our aim throughout the book is to enhance both dimensions of the analysis by marrying the two.

The Newspaper and Television Samples: Op-Eds, Speaker-Shows, Speaker-Segments

We drew a 12.5 percent sample of days from two twenty-four-month periods—1993–1994 and 2001–2002—and collected all the op-eds published in the New York Times and USA Today for those days from the full-text database provided by Nexis. The choice of the New York Times is obvious, since it is widely regarded as the national paper of record. USA Today is the most widely circulated newspaper in the United States, and the flagship brand of the Gannett group, the largest newspaper conglomerate in the country during our sample periods. The unit of analysis in the newspaper sample is the op-ed (n = 910).
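The day-sampling step described above can be sketched in code. This is a minimal illustration only: the function name, the fixed seed, and the rounding rule are our assumptions, not the authors' actual procedure.

```python
import random
from datetime import date, timedelta

def sample_days(start, end, fraction=0.125, seed=0):
    """Draw a simple random sample of days between start and end (inclusive)."""
    days = [start + timedelta(d) for d in range((end - start).days + 1)]
    rng = random.Random(seed)  # fixed seed for reproducibility (our assumption)
    k = round(len(days) * fraction)
    return sorted(rng.sample(days, k))

# One of the two study periods: the 730 days of 1993-1994.
sampled = sample_days(date(1993, 1, 1), date(1994, 12, 31))
print(len(sampled))  # 91 days, i.e. 12.5 percent of 730
```

A simple random sample of days (rather than of individual op-eds) gives every opinion text published on a sampled day an equal probability of selection, which is the property the authors rely on later in the appendix.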

Television transcripts were sampled using the same logic we followed for newspapers: we took a random sample of days from the same two time periods, 1993–1994 and 2001–2002, and collected transcripts from Face the Nation, Crossfire, The NewsHour with Jim Lehrer, and Hannity & Colmes as representatives of the range of political talk shows aired on network, public, and cable channels on Sunday mornings and weekdays. Hannity & Colmes appears in the 2001–2002 sample only, reflecting an overall expansion in the number of television talk shows aired over the period. Because television transcripts are much longer than individual newspaper op-eds, we sampled fewer transcripts than op-eds, selecting fifty days from each period, with the goal of producing a sample roughly comparable in size to the newspaper sample and evenly balanced between Sunday morning and weekday political talk shows. The basic unit of analysis in the television sample is the speaker-show (n = 909), that is, the speech of an individual speaker on a particular episode of a television show (table A.1). At this level, we can compare characteristics like occupation and sex of the speaker and overall length of speech between opinion authors in print and on television.

Television transcripts are more complex than newspaper op-eds, however, because they have multiple opinion authors in conversation speaking on a range of different topics. For this reason, we also coded the television transcripts at a second level, where the unit of analysis is the speaker-segment. There were 374 distinct topical segments from 157 shows, yielding 1,241 speaker-segments. At this level of analysis, we compare differences in rhetoric, styles of argument, claims to authority, language use, and topic choice.

Table A.1 Sample details

  Days sampled, print                          12.5 percent of days in each period
  Days sampled, television                     50 days in each period
  Number of op-eds                             910
  Number of television shows                   157
  Number of show-segments                      374
  Number of individual print authors           446
  Number of individual TV authors              511
  Total number of individual opinion authors   923 (*)
  Author-op-ed units (print)                   910
  Speaker-show units (TV)                      909
  Author-opinion level (print and TV)          1,819
  Number of op-eds                             910
  Number of speaker-segments (TV)              1,241
  Segment level (print and TV)                 2,151

(*) These do not add to 923 because there is overlap of authors between print and television.


A conspicuous feature of our samples is that a small number of individuals appear multiple times, notably professional opinion columnists and talk show hosts. In the newspaper sample, 446 unique authors produced 910 op-eds, and in the television sample, 511 unique speakers produced 909 units of opinion (where the unit of analysis is the speaker-show). We discuss this as a substantive finding in chapter 4, and it is a marked feature of the contemporary space of U.S. opinion.

Repeated measures on the same subject are not a structural feature of our sample design, as they are for time-series data, where, for example, medical information is collected from a single subject over time, or for much linguistic data, which typically involve multiple measurements on each subject. By contrast, we sample pieces of opinion in the space of opinion rather than individual authors; in other words, the unit of analysis is the piece of opinion and not the person, and any opinion text has an equal probability of being selected, given our random sample of weekdays and Sundays.

Nonetheless, we are concerned that the repeated appearance of particular authors could mean there is less within-group variation for units of opinion with the same author. If this is true, the technical effect would be to artificially inflate our findings of significance. We have investigated this possibility by running significance tests that use much more stringent assumptions for the covariance structure than standard tests, and we have also experimented with down-weighting frequent authors to produce more conservative tests of significance. Our initial exploration confirms our sense that clustering is not driving our findings.
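The down-weighting check mentioned above can be illustrated with a short sketch. Weighting each piece by the inverse of its author's frequency is one plausible scheme for preventing prolific columnists from dominating an estimate; it is our assumption, not necessarily the authors' exact specification, and the data below are toy values.

```python
from collections import Counter

def author_downweighted_mean(values, authors):
    """Weight each opinion piece by 1 / (number of pieces by its author),
    so frequent contributors (e.g., regular columnists) do not dominate."""
    counts = Counter(authors)
    weights = [1.0 / counts[a] for a in authors]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Toy data: word counts for five pieces; one columnist wrote three of them.
values = [800, 820, 810, 600, 700]
authors = ["columnist", "columnist", "columnist", "guest1", "guest2"]

print(sum(values) / len(values))                     # unweighted mean: 746.0
print(author_downweighted_mean(values, authors))     # each author counts once
```

In the toy data, the three pieces by the repeated columnist collectively receive the same total weight as each one-off guest, so the down-weighted mean treats every author equally, which is the conservative comparison the passage describes.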