A New Engagement? Political Participation, Civic Life, and the Changing American Citizen

Cliff Zukin, Scott Keeter, Molly Andolina, Krista Jenkins, and Michael X. Delli Carpini

Print publication date: 2006

Print ISBN-13: 9780195183177

Published to Oxford Scholarship Online: October 2011

DOI: 10.1093/acprof:oso/9780195183177.001.0001


Appendix


Research Design (Chapter 3)

Data for this book were collected during a two-year project funded by the Pew Charitable Trusts. The research had two principal goals: (1) to develop a reliable but concise set of indicators of civic and political engagement, with a special focus on youth aged 15–25; and (2) to assess the civic and political health of the nation.1 To accomplish these aims we employed a five-stage research design, moving from qualitative to quantitative.

Stage 1: Expert Panels

We began our field work in the spring of 2001 by convening two “expert panels” of those who either work with active youth on a daily basis or who study their political predispositions and behavior. Our intention was to start tabula rasa: we knew little about how young people were active in the civic and political life of the country, and wished to be blinded by no presumptions. The extant literature, to us, seemed to be bound too heavily to the past traditions of political science that focused largely on voting behavior, and we were concerned about reification. Thus we chose a bottom-up approach. That is, rather than asking about how young people participated in normal political and civic activities, we asked what young people were doing, and then wondered whether these activities could be considered civic or political (see appendix table 1 for sample questions used in the panels).

Appendix Table 1 Expert Panel Discussion Questions

Activities and interests of young people

What kinds of political activities do young people participate in?

What are the biggest challenges to youth involvement in politics and civic life?

Where is the most hope and promise for youth involvement?

The post-Baby Boomer generation(s)

Do young people today participate less than or differently from others?

How do young people today differ from older generations; what are the reasons?

Is there a difference between Generation X and the latest wave of young adults and teens?

Is the current young group a “generation”? What shared experiences make them so?

Political information and attitudes

Where do young people get their information about politics? What do they know?

Describe how young people think about politics: as evil, irrelevant, or inaccessible?

Do they believe they can influence politics; do they want to influence politics?

Do volunteers in nonpolitical work see a link between their activities and politics?

Basic values

How do young people define “community”? Is it a place, or a set of interests?

Are the young people you work with connected to any particular community?

What core values do you think they have; and where are they learning these lessons?

Guidance for our work

When we study youth, civic, and political engagement, what should we consider?

Are there measures of political engagement among young people that have been overlooked, or that we won’t find through surveys? Where else should we look?

How could we help you better assess the effectiveness of your work?

Our expert panelists came from all walks of engagement life. They included representatives from political parties, labor unions, nonprofits, racial and ethnic-oriented groups, voting- and school-based civic organizations, among others. The discussion guide we started with was open-ended. Simply put, we asked specialists working in the field of youth civic engagement to tell us what they thought was on the minds of young people, what their values were, what sense they made of the wider world, how they would describe young people in terms of personality and in their commitment to civic and political life, and what they did in terms of actions. The results of these two expert panels framed the questions we set out to explore in the second phase of our research. The following list shows the names of those who participated in our panel discussions, along with their affiliation at the time of their participation.

Mallory Barg, AmeriCorps

Rick Battistoni, Providence College

Amy Cohen, Corporation for National Service

Julia Cohen, Youth Vote 2000

Marco Davis, National Council of La Raza

Steve Culbertson, Youth Service America

Alison Byrne Fields, Rock the Vote

Ivan Frishberg, Public Interest Research Group

William A. Galston, University of Maryland

Vina Nguyen Ha, Korean Youth and Community Center and Southern Californians for Youth

Sandy Horwitt, National Association of Secretaries of State

Adrienne King-McCorkle, National Coalition on Black Civic Participation

Kimberly Roberts, AFL-CIO

Peter Schurman, MoveOn.org

Stephanie Seidel, Bread for the World

Susan Blad Seldin, College Democrats

Vicki Shabo, Lake Snell Perry and Associates

Robert Sherman, Surdna Foundation

Diane Ty, YouthNOISE

Mara Vanderslice, RESULTS

Lisa Wernick, College Republican National Committee

Stage 2: Focus Groups

We took what we had learned from our discussions with the expert panels as raw material for developing a discussion guide for focus groups with young people, as well as with participants in the other age ranges. As described, we had stratified the world into four generational groups, based upon prior research and the objectives of the study. We conducted 11 focus groups during May and June of 2001, in four distinct regions of the country—the West (northern California: three groups), Midwest (Chicago: three groups), South (North Carolina: two groups), and Northeast (New Jersey: three groups)—talking with people of all ages about politicians, government, citizenship, the kinds of problems they face in their communities, and a variety of other relevant topics, in order to learn what people think about politics and citizen engagement. Almost every one of the groups was stratified by age into one of the four generational groups we have described, with a greater number of groups conducted with DotNets and GenXers. We also conducted a number of specialty groups, including one comprising mainly young African Americans in North Carolina. Following the initial round of focus groups, we revisited some of our DotNet and Generation X participants after the 9/11 terrorist attacks to see what impact, if any, a national tragedy of this kind had on their perceptions of government and politics. Finally, we conducted two additional focus groups following our national telephone survey in order to gauge the validity of our engagement typology, as described in detail in chapter 3.

While we modified the discussion guides used in each of these original four-site groups as we learned from one group to the next, each group covered similar terrain. We started by asking in open-ended fashion what life was like in each of the communities we were in, followed by a discussion of what group participants may have done to participate in civic or community life. When this topic was exhausted, we asked what participants thought makes “a good citizen” and whether there were obligations of citizenship, and if so, what people believed them to be. We then moved the discussion into views of politics, often beginning this section by asking participants to “write down the first word that comes to mind when I say the word…” and then probing with the nouns “politics” and “government,” among others. In most groups this led us into a discussion of (the importance of) “voting” as a form of participation. The last segments of the groups included a generational discussion, encompassing both what young people are like these days and how each age group thought they might be different from those who have come before and after them, depending on relevance to the particular group sitting around the table.

Stage 3: Questionnaire Pretesting and Internet Sample of Youth

Stage 3 of the research design encompassed two main activities. First, we ran a series of question wording experiments to test the most reliable and valid ways of measuring sets of behaviors and attitudes. These experiments focused on how to minimize socially desirable responses, the time frame over which behaviors could reliably be recalled, and alternative ways of formulating questions or framing response categories for respondent choice.2 For testing, we were fortunate to have access to statewide surveys in both New Jersey and Virginia in the fall of 2001, when each state was holding a gubernatorial election.
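
The appendix does not detail how the wording experiments were analyzed. A common design for such tests is a split-ballot experiment, in which respondents are randomly assigned one of two wordings and the resulting response distributions are compared. The sketch below is only a generic illustration of that comparison; the counts are invented, not results from the New Jersey or Virginia surveys.

```python
# Hypothetical split-ballot comparison of two question wordings.
# Counts are illustrative only, not from the 2001 pretests.
from scipy.stats import chi2_contingency

# Rows: wording A, wording B; columns: reported the behavior / did not.
table = [[180, 320],   # wording A (n = 500)
         [220, 280]]   # wording B (n = 500)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value suggests the two wordings elicit different report rates,
# for example because one invites more socially desirable answers.
```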

Second, we realized from the conduct of focus groups with young people that there was a gap in our knowledge of their world, which we wanted to close before going into the field with a final instrument. Even after the focus groups, we felt we knew less than we would like about young people in general, and about their school-based experiences in particular. To glean additional insight into these areas, we commissioned Knowledge Networks (KN) to conduct the NYS. Knowledge Networks conducts web-based surveys with a true probability sample design by providing Internet access to randomly selected households.

Between January 29, 2002, and February 25, 2002, Knowledge Networks administered a survey to youth aged 15 to 25 about the activities they are engaged in, either by themselves or with others. The NYS was administered to a total of 1,166 KN panel members in two waves of data collection. An e-mail reminder was sent to nonresponders three days after the survey was fielded. An additional e-mail reminder was sent to the final 800 nonresponders on February 12, 2002. The research subjects completed the self-administered survey using an Internet appliance provided by Knowledge Networks. The median time for a participant to complete the survey was 18 minutes.

There were three study groups, stratified by education status. The subsamples included (1) persons currently in high school; (2) those currently in college or graduated from college; and (3) all others between the ages of 15 and 25 who did not fit the previous group descriptions. To determine group status, an in-field screening question was used. The completion rate was slightly lower for the high school sample (62.5 percent versus 65.7 percent).

Stage 4: National Civic Engagement Survey I

The primary data we rely on in this book come from NCES1, which was conducted between April 4 and May 20, 2002. We employed Schulman, Ronca, and Bucuvalas, Inc., to conduct a nationally representative telephone survey of respondents aged 15 and older. Owing to our focus on youth, DotNets and GenXers were oversampled (N = 1001 and 1000, respectively). We rounded out our sample by interviewing 604 Baby Boomers and 602 Dutifuls. Thus, our sample contains both a cross-section of all respondents aged 15 and older at the time of the survey and a disproportionately large sample of our two youngest cohorts. We weighted according to the proper educational, gender, racial, and ethnic characteristics of the general population based on census data. We also included a weight to ensure the accurate proportional representation of the four age cohorts.
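
The book does not publish the weighting formula itself. The sketch below only illustrates the general idea of a proportional cohort weight, using the subsample sizes reported above and placeholder population shares rather than the census figures actually used.

```python
# Hypothetical cohort weighting sketch: scale oversampled cohorts back to
# (placeholder) population shares. Not the authors' actual weights.
sample_sizes = {"DotNet": 1001, "GenX": 1000, "Boomer": 604, "Dutiful": 602}
population_share = {"DotNet": 0.19, "GenX": 0.23,
                    "Boomer": 0.34, "Dutiful": 0.24}   # placeholders only

total_n = sum(sample_sizes.values())
weights = {
    cohort: (population_share[cohort] * total_n) / n
    for cohort, n in sample_sizes.items()
}
for cohort, w in weights.items():
    print(f"{cohort}: weight = {w:.2f}")
# Weights below 1 shrink the oversampled younger cohorts; weights above 1
# expand the smaller Boomer and Dutiful subsamples to their population shares.
```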

For the cross-section, interviewers asked to speak with the person in the household aged 15 or older with the most recent birthday. Parents or guardians were asked for permission to speak with respondents not of legal age in order to provide informed consent. Similar procedures were used for selecting respondents for the two youth oversamples. We first filled our quota for the DotNet demographic by asking to speak with the person in the household between the ages of 15 and 25 with the most recent birthday. Parental consent was obtained before proceeding to speak with the respondent. Once the DotNet quota was filled, if the person on the phone said there were GenXers in the household, interviewers asked to speak with the person between the ages of 26 and 37 with the most recent birthday.

Stage 5: Validation, Verification, and Supplementation—National Civic Engagement Survey II

In order to test the stability and reliability of certain measures, we conducted a second national survey shortly after the 2002 election. The NCES2 obtained telephone interviews with a nationally representative sample of 1,400 adults living in continental United States telephone households. The interviews were conducted in English by Princeton Data Source and Schulman, Ronca, and Bucuvalas from November 14 to November 20, 2002.3

As many as 10 attempts were made to contact every sampled telephone number. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Each household received at least one daytime call in an attempt to find someone at home. In each contacted household, interviewers asked to speak with the youngest male currently at home. If no male was available, interviewers asked to speak with the oldest female at home. This systematic respondent selection technique has been shown to produce samples that closely mirror the population in terms of age and gender.
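
The youngest-male-at-home/oldest-female-at-home rule is easy to state procedurally. The following sketch simply restates that rule in code, with a hypothetical household roster; it is an illustration of the selection logic, not software used in fielding the survey.

```python
# Sketch of the within-household selection rule described above:
# ask for the youngest male currently at home; if none, the oldest female.
def select_respondent(people_at_home):
    """people_at_home: list of (age, sex) tuples, sex in {"M", "F"}."""
    males = [p for p in people_at_home if p[1] == "M"]
    if males:
        return min(males, key=lambda p: p[0])    # youngest male at home
    females = [p for p in people_at_home if p[1] == "F"]
    if females:
        return max(females, key=lambda p: p[0])  # oldest female at home
    return None                                  # nobody eligible at home

print(select_respondent([(52, "F"), (19, "M"), (47, "M")]))  # -> (19, 'M')
print(select_respondent([(52, "F"), (17, "F")]))             # -> (52, 'F')
```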

The interviewed sample of all adults was weighted by form to match national parameters for sex, age, education, race, Hispanic origin, and region (U.S. Census definitions). These parameters came from a special analysis of the March 2001 Current Population Survey (CPS), which included all households in the continental United States that had a telephone.
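
Matching a sample to several population margins at once is commonly done by raking (iterative proportional fitting). The appendix does not say which adjustment algorithm was used, so the sketch below is only a generic illustration, with a toy sample and placeholder targets standing in for the CPS parameters.

```python
# Generic raking (iterative proportional fitting) sketch: adjust case weights
# until weighted margins match target shares. Targets are placeholders,
# not the March 2001 CPS parameters used in the book.
import pandas as pd

df = pd.DataFrame({
    "sex":  ["M", "F", "F", "M", "F", "M", "F", "M"],
    "educ": ["HS", "HS", "BA", "BA", "HS", "HS", "BA", "HS"],
})
targets = {
    "sex":  {"M": 0.48, "F": 0.52},
    "educ": {"HS": 0.70, "BA": 0.30},
}
df["w"] = 1.0
for _ in range(25):                          # iterate until margins converge
    for var, shares in targets.items():
        observed = df.groupby(var)["w"].sum() / df["w"].sum()
        df["w"] *= df[var].map(lambda v: shares[v] / observed[v])

print(df.groupby("sex")["w"].sum() / df["w"].sum())   # ~ 0.48 / 0.52
print(df.groupby("educ")["w"].sum() / df["w"].sum())  # ~ 0.70 / 0.30
```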

The contact, cooperation, completion, and response rates for the NCES2 are as follows; a brief sketch of how these figures fit together appears after the list.

  • Contact rate—the proportion of working numbers where a request for interview was made: 64 percent

  • Cooperation rate—the proportion of contacted numbers where a consent for interview was at least initially obtained, versus those refused: 55 percent

  • Completion rate—the proportion of initially cooperating and eligible interviews that were completed: 96 percent

  • Response rate: 33 percent
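
If, as is conventional for telephone surveys, the overall response rate is approximately the product of the contact, cooperation, and completion rates, then the figures above are mutually consistent, allowing for rounding of the underlying rates. A minimal check:

```python
# Hedged sketch: response rate taken as roughly the product of the
# contact, cooperation, and completion rates reported above.
contact, cooperation, completion = 0.64, 0.55, 0.96

response = contact * cooperation * completion
print(f"{response:.3f}")   # ~0.338, i.e., roughly the reported 33 percent
```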

Factor Analysis for Electoral, Civic, and Voice Dimensions of Engagement

Appendix table 2 presents results from a factor analysis using Varimax rotation that demonstrates how we arrived at our three dimensions of behavior. The list of variables excludes two items that tapped behaviors that were very similar to two other items in the list: “boycotting” and signing electronic petitions. Similarly, two types of fund raising for charity were combined into a single indicator. The factor analysis was thus conducted on 16, rather than 19, separate indicators.
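
The appendix does not identify the software used to produce the solution. The sketch below shows one generic way to obtain a three-factor, Varimax-rotated solution (a principal-components approximation) using only numpy; the random 0/1 data are placeholders for the 16 engagement indicators, not the survey data.

```python
# Generic sketch of a three-factor solution with Varimax rotation.
# Random placeholder data stand in for the 16 dichotomous engagement items.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 16)).astype(float)  # 500 cases, 16 items
X -= X.mean(axis=0)

# Unrotated loadings from the correlation matrix's leading eigenvectors
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:3]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(L, n_iter=100, tol=1e-6):
    """Rotate loadings L (items x factors) toward the Varimax criterion."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p)
        )
        R = u @ vt
        var_new = s.sum()
        if var_old != 0 and var_new / var_old < 1 + tol:
            break
        var_old = var_new
    return L @ R

rotated = varimax(loadings)
print(np.round(rotated, 2))  # items x 3 rotated loadings, same format as table 2
```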

Voting and Nonvoting (Chapter 4)

General Problems in Measuring Registration and Voting in Surveys

The questions “How many are registered?” and “How many vote?” would seem to be straightforward. And they are. It is the answers that are a little curvy. And so before we move to estimates, we wish to begin this section with a few caveats of measurement and definition.

Appendix Table 2 Factor Analysis for Electoral, Civic, and Voice Dimensions of Engagement

                                                             Electoral   Civic   Voice
Regular volunteer for a nonpolitical organization                         .75
Community problem solving                                                 .69
Active membership in a group or organization                              .65
Raise money for charity                                                   .48
Display campaign buttons, stickers, etc.                        .73
Contribute money to a political organization or candidate      .62
Vote regularly                                                  .55
Persuade others politically                                     .55
Volunteer for a political organization or candidate            .47
Protest                                                                            .60
Boycott                                                                            .56
Sign a written petition                                                            .52
Call into radio talk show                                                          .50
Contact elected official                                                           .47
Contact print media                                                                .42
Canvass                                                                            .31

Caution 1: Social Desirability

It is always difficult to estimate the incidence of behaviors when they are known to have a socially desirable patina in the political culture. The icon of “voting” no doubt has continued currency in which a good citizen is supposed to trade, making estimates from a public opinion survey or set of surveys subject to inflation. No doubt, error is disproportionately one way. There remain more pressures for a respondent to tell an interviewer that he or she does do something that is valued by the dominant cultural norm than there are reverse incentives for the same respondent to report not having engaged in a behavior when she or he has in fact done so.

But are cultural pressures for a socially desirable response constant across our age groupings? Do DotNets feel the same pressures as Dutifuls to overreport socially desirable behaviors, given that fewer of them report engaging in the behavior? If so, a comparative analysis of registration and voting spanning our ages—from DotNets to Xers to Boomers to Dutifuls—would find a consistency of overreporting, and one could have faith in the relative differences between age groups, even if one was not sure of the absolute level. However, should we believe that this norm applies with unequal force to the different age groups, we would expect less pressure for the culturally desirable responses of being registered and having voted among the younger age groups, for whom the norms have not been fully inculcated or taken root. In this case, we would expect observed differences between generations to overstate the actual differences in the population. All the data we present are based on citizens’ responses to opinion surveys, which we know overreport socially desirable behavior and which may also be subject to an interaction between such overreporting and cohort, the magnitude of which we can only guess. So we have a first caveat emptor with regard to social desirability.
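
The logic of this worry can be made concrete with a small numerical illustration. Every figure below is invented solely to show how equal versus unequal overreporting affects an observed generational gap.

```python
# Hypothetical illustration of the interaction worry described above.
# All rates are invented; they only show how unequal overreporting can
# exaggerate an observed generational gap.
true_turnout = {"DotNet": 0.40, "Dutiful": 0.70}

# Case 1: both cohorts overreport by the same 10 points -> gap preserved.
equal_bias = {g: t + 0.10 for g, t in true_turnout.items()}

# Case 2: the norm bites harder on Dutifuls (15 points) than DotNets (5 points).
unequal_bias = {"DotNet": 0.40 + 0.05, "Dutiful": 0.70 + 0.15}

for label, reported in [("equal overreporting", equal_bias),
                        ("unequal overreporting", unequal_bias)]:
    gap = reported["Dutiful"] - reported["DotNet"]
    print(f"{label}: observed gap = {gap:.2f} (true gap = 0.30)")
# Equal bias leaves the relative comparison intact; unequal bias makes the
# observed DotNet-Dutiful difference overstate the true difference.
```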

Caution 2: Different Modes, Different Estimates

As practitioners, we well understand that survey methods interact with survey findings. While we believe that a mixed-mode approach is a source of strength in the long run, it may also produce some inconsistencies in the short run. Here some of our estimates are from self-administered Internet samples; others are from telephone surveys. Differences in modes, response rates, and field houses used to collect data all may introduce sources of variation in estimates of behaviors.

Caution 3: Different Bases, Different Statements

There has been some debate lately about the difference between the voting age population (VAP) and the voting eligible population (VEP) as a measure of turnout. In computations of the former, the number of votes cast is divided by the total population aged 18 and older; in the latter, the denominator is reduced by estimates of the number ineligible to vote for a variety of reasons, such as noncitizenship or institutionalization.4 Given that a higher proportion of younger residents may be immigrants, failure to adjust the VAP to the VEP (those who are eligible to vote, and who comprise the sampling frame for participation in surveys) might artificially overstate the differences in electoral activity between the two younger and the two older generations.
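
The arithmetic behind the VAP/VEP distinction is simple. The sketch below uses invented numbers only to show why an unadjusted VAP denominator depresses measured turnout for a cohort with many noncitizen or otherwise ineligible members.

```python
# Hypothetical VAP vs. VEP turnout comparison (numbers are invented).
votes_cast = 9_000_000
vap        = 30_000_000   # all residents of voting age in the cohort
ineligible = 5_000_000    # noncitizens, institutionalized, etc.
vep        = vap - ineligible

print(f"VAP turnout: {votes_cast / vap:.1%}")   # 30.0%
print(f"VEP turnout: {votes_cast / vep:.1%}")   # 36.0%
# The larger the ineligible share in a cohort, the more a VAP-based rate
# understates that cohort's turnout relative to survey-based estimates.
```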

Roadmap to Our Measures: Data Sources and Questions Asked

Unfortunately, we do not have all the data we would like to address these questions. The main telephone survey (NCES1) was not designed with this purpose in mind. However, we were able to insert questions exploring the motives behind nonparticipation on two subsequent surveys. Each data set has limitations, although they are somewhat offsetting. Together, however, they afford us the opening of a small window into the world of noninvolvement among today’s youth.

We included a number of questions on the second telephone survey (NCES2) examining those who chose not to participate in the electoral arena. We asked all respondents if they were eligible and then if they were registered to vote. If so, they were asked how frequently they voted in elections and if they had voted in the national elections of 2002, which took place just two weeks before the survey was fielded. Those not registered5 were read a list of seven closed-ended reasons why they had not done so, and asked to indicate whether each was a major reason why not, a minor reason, or not a reason at all. Registered voters who sat out the election were asked to respond to the same seven items as possible reasons why they might not have voted. This data set has the advantage of being tied to asking motivations about a specific, recently occurring election, but it has the disadvantage of being relatively shallow in its base of respondents. The NCES2 survey was not stratified by age cohort, giving us smaller samples of DotNets and GenXers to work with than we would have liked.

The National Conference of State Legislatures (NCSL) survey conducted in the summer of 2003 offsets these features. Because it oversampled DotNets, it has a fairly robust respondent base of young adults.6 But the dependent variable of electoral participation is keyed not to a specific election but to whether people reported having voted in “all,” “most,” “some,” or “not many or no” elections. Nonparticipants were presented with nine reasons and asked to check off all that applied to why they were not registered or did not always vote, as appropriate.

It is worth noting that because the two surveys have different “voting” questions acting as filters to define nonparticipants, they let different numbers of respondents through this screen. However, we think this is not an important concern, for two reasons. First, even though done through different modes and at slightly different times, the NCES2 (telephone) and NCSL (Internet) surveys produced virtually identical estimates of registered voters, the one question that was the same on both surveys (see appendix table 3). Second, as demonstrated by the virtually parallel lines across the generational cohorts in appendix figure 1, the screens functioned quite consistently between the two samples. The percentages in the figure indicate the proportion of each group classified as nonparticipants.

Appendix Table 3 Registered to Vote: NCES2 and NCSL Estimates

            NCES2    NCSL
DotNets      64%      64%
GenXers      76%      79%
Boomers      84%      84%
Dutifuls     88%      90%

Source: NCES2 and NCSL.

As noted, the results from the NCSL analysis, presented in appendix table 4, are largely confirming. In this case, we have had to contrast DotNets with “all others” to achieve even a minimal sample size in the “all others” category, even though including Xers with the older two groups minimizes DotNet-other differences.

While more reasons for not voting are given in responses to the NCSL survey—both because this self-administered survey asked about more items and because it asked only for “reasons” as opposed to “major reasons”—we note the same patterns in the data as we described in the text of chapter 4. First, DotNets are less able to articulate any reason for not voting in all elections. Second, fewer nonparticipating DotNets give purposive reasons for not voting, such as being alienated, being turned off by politics or its negative character, and not seeing a difference between the candidates they are asked to choose between. They are much more likely than older respondents to feel they don’t have enough information to cast an informed vote, and somewhat more likely to say they do not vote for reasons of convenience. Finally, an important observation is that very few DotNets (or others, for that matter) say they do not vote because they think they can make more of a difference by volunteering in their communities. All in all, it is very hard to find a clear answer to the question of why young citizens are disproportionately absent from the electoral realm. In neither the Internet nor the telephone survey did we find a key to open the door of nonvoting by looking through the self-reports of motivation. There is certainly as much in these data to suggest the problem may be a lack of relevance as there is to suggest a rejection of politics. Perhaps simply not having a good enough reason to participate is also a good enough reason not to.


Appendix Figure 1 Percentage identified as nonparticipant in NCES2 and NCSL.

Appendix Table 4 Reasons for Not Voting in the NCSL Survey

                                                                          DotNets   All Others
Reasons for not registering/not voting*
  I’m not interested in politics                                            35%        41%
  I’m not informed enough to make a decision                                39%        27%
  I am turned off by the process/negative political advertising             27%        43%
  I dislike politics and government/I don’t trust politics                  27%        35%
  It’s hard to get reliable information about the candidates                24%        27%
  I don’t have time/I’m often away/It’s inconvenient                        22%        18%
  My one vote isn’t going to make much of a difference                      16%        18%
  I can make more of a difference by volunteering in my local community     14%        12%
  There’s often no difference between the two candidates                    13%        34%
  None of the above                                                         26%        13%
Number of these nine reasons given by respondents
  0 reasons                                                                 28%        12%
  1 reason                                                                  26%        31%
  2 reasons                                                                 13%        15%
  3 or more reasons                                                         33%        41%
Mean number of reasons                                                      2.0        2.4
Total number of cases                                                       214        100

* Multiple responses accepted; entries are expressed as percentages. Source: NCSL.

Notes:

(1) A public report that details our initial findings and the index of civic and political engagement can be found at the Web site www.civicyouth.org (Keeter et al. 2002b).

(2) For a description of what we found in our experiments, see Keeter, Zukin, Andolina, and Jenkins (2002).

(3) In the course of analyzing the data collected during the main wave of data gathering, a number of subsequent methodological and substantive questions arose, as is always the case. We wished to explore the reliability of a particular set of civic engagement behaviors that would form the core of an index, and we were concerned about the possibility of a “house effect” in estimates of a particularly important variable measuring the rate of volunteering. Our desire to explore field house effects stemmed principally from what we observed regarding the measurement of volunteer activity. The identical question asked in a survey conducted by the PRC not long after we completed our spring survey yielded an estimate of volunteer activity of 55 percent. This was in stark contrast to our spring estimate of 33 percent. Adding to our puzzlement were estimates gleaned from an omnibus survey that asked the same question in August 2002. Regardless of where the question was placed within the survey, around 4 in 10 (43 or 44 percent) respondents reported volunteer activity in the previous 12 months. Thus, because we had some evidence to disconfirm our theory that where the question was placed relative to others in the questionnaire affected estimates of volunteer activity, we turned our attention to the possibility that differences between field houses might explain the findings.

In order to establish the reliability of our core indicators of engagement as measured in our original survey and to assess house effects in regard to volunteer activity, we conducted two identical and simultaneously administered telephone surveys using different field houses in the fall of 2002. Schulman, Ronca, and Bucuvalas, Inc., the field house that collected our original data, was used again in order to validate findings from our original survey. Princeton Data Source was the other company chosen, both to validate the original findings and to account for the possible influence of field house effects on the measurement of volunteer activity. Each field house surveyed a total of 700 respondents. A reduced version of the spring instrument was used by both field houses. Questions measuring the core indicators of engagement remained in their original sequence.
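
The note does not report how the two field houses’ estimates were compared statistically. One simple way to ask whether two independent estimates of the same proportion differ beyond sampling error is a two-proportion z-test, sketched below with hypothetical proportions (not the actual fall 2002 results) and the 700-case sample sizes described above.

```python
# Hedged sketch of a two-proportion z-test for a possible "house effect."
# The proportions below are hypothetical, not the actual fall 2002 estimates.
from math import sqrt

p1, n1 = 0.38, 700   # hypothetical volunteering estimate, field house A
p2, n2 = 0.44, 700   # hypothetical volunteering estimate, field house B

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"z = {z:.2f}")   # |z| greater than about 1.96 would suggest a real house effect
```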

A full accounting of what we found regarding field house effects is addressed in Zukin, Jenkins, Andolina, and Keeter (2003).

(4) See McDonald and Popkin (2001).

(5) Those not eligible to be registered have been excluded from this analysis. Thus this examination of nonparticipation is based on those who could have participated but chose not to do so.

(6) While the NCSL survey started with 632 DotNets, this number is reduced to 456 once those ineligible to participate have been excluded. Again, the analysis in this section excludes those not eligible for reasons of age, citizenship, or institutionalization from the base of nonparticipants.