Science vs Religion: What Do Scientists Really Believe?

Elaine Howard Ecklund

Print publication date: 2010

Print ISBN-13: 9780195392982

Published to Oxford Scholarship Online: May 2010

DOI: 10.1093/acprof:oso/9780195392982.001.0001


Appendix A The Study


The data for Science vs. Religion come from the Religion among Academic Scientists (RAAS) study, a broad study of religion, spirituality, and ethics among university scientists at twenty-one elite research universities in the United States, conducted over a four-year period between 2005 and 2008. Scientists included in the study were randomly selected from seven natural and social science disciplines at universities that appear on the University of Florida’s annual report of the “Top American Research Universities.”1 The University of Florida ranked elite institutions according to nine criteria: total research funding, federal research funding, endowment assets, annual giving, number of national academy members, faculty awards, doctorates granted, postdoctoral appointees, and median SAT scores for undergraduates. Universities were ranked and selected according to the number of times they appeared in the top twenty-five on each of these nine indicators.2 The universities included in the sample are

  • Columbia University

  • Cornell University

  • Duke University

  • Harvard University

  • Johns Hopkins University

  • Massachusetts Institute of Technology

  • Princeton University

  • Stanford University

  • University of Pennsylvania

  • University of California at Berkeley

  • University of California, Los Angeles

  • University of Chicago

  • University of Illinois, Urbana Champaign

  • University of Michigan, Ann Arbor

  • University of Minnesota, Twin Cities

  • University of North Carolina, Chapel Hill

  • University of Washington, Seattle

  • University of Wisconsin, Madison

  • University of Southern California

  • Washington University

  • Yale University

This list makes clear that the population of universities where I studied scientists lacks significant geographic diversity. Surveying and interviewing scientists only at Ivy League and comparable top research universities misses many universities in the American South and in the “fly-over” states in the middle of the country, places that are highly religious and that form important voting blocs when it comes to issues concerning science. The universities’ predominant location in the Northeast and on the West Coast may also help explain why some scientists underestimate the strength of religion in the United States.

Within this understudied topic, I chose to examine scientists at elite institutions because elites are more likely to have an impact on the pursuit of knowledge in American society. As sociologist Randall Collins persuasively argues, top scholars are a kind of elite who contribute to knowledge creation in the broader society. If scientists at elite universities are at the forefront of the newest ideas in our society, studying their views broadens our understanding of the academy and the way it affects other major institutions in this country.3

No research to date has used both survey and qualitative interview data to examine attitudes toward religion and spirituality (with comprehensive indicators of both) among natural and social scientists who teach and do research at top U.S. research universities. Even so, I benefit from two major studies on closely related topics. Most recently, sociologists Neil Gross and Solon Simmons conducted a 2006 survey on the political and religious views of American faculty at different types of universities and colleges, also replicating questions from the General Social Survey. My work differs from theirs in that I focus specifically on natural and social scientists, particularly those who work at elite research universities, and the RAAS study surveys religiosity among this population more broadly. Their survey provides an important comparison, showing that at less elite institutions social scientists are less religious than natural scientists, a finding that does not hold at the kind of elite universities I studied.4 I have also benefited from the work of historian Edward Larson and journalist Larry Witham.5 In 1996, they replicated psychologist James Leuba’s exact early twentieth-century questions about belief in a personal god, belief in human immortality, and desire for immortality among 1,000 scientists (biologists, physicists, and mathematicians) randomly selected from the then-current edition of American Men and Women of Science, which includes those in the public sector as well as the academy. Larson and Witham provide an important comparison by examining a group of elite natural scientists and comparing their views on religion to those of the general population. The RAAS study differs substantially: I examine scientists who work in top U.S. research universities, include seven disciplines spanning the social as well as the natural sciences, explore much broader contours of religiousness that are more representative of those found in the general population, and uncover new forms of religion and spirituality among scientists.

The RAAS study began during a seven-week period from May through June in 2005, when 2,198 faculty members in the disciplines of physics, chemistry, biology, sociology, economics, political science, and psychology were randomly selected from the universities in the sample. Although faculty were randomly selected, oversampling occurred in the smaller fields and undersampling in the larger fields. For example, a little more than 62 percent of all sociologists in the sampling frame were selected, while only 29 percent of physicists and biologists were selected, reflecting the greater numerical presence of physicists and biologists at these universities when compared to sociologists. In analyses where discipline is not controlled for, data weights were used to correct for the over- or undersampling. Table A.1 describes the sample and weighting in greater detail.

Table A.1. Sampling and Data Weights

| Discipline | Number Sampled | Number of Respondents | Response Rate (%) | Percent of Sample | Number in Population | Percent in Population | Data Weight | Weighted Number of Respondents | Weighted Percent of Sample |
|---|---|---|---|---|---|---|---|---|---|
| Physics | 328 | 241 | 73 | 12.9 | 1123 | 19.56 | 1.305894 | 315 | 19.6 |
| Chemistry | 300 | 214 | 71 | 11.5 | 719 | 12.53 | .942092 | 202 | 12.5 |
| Biology | 372 | 289 | 78 | 15.5 | 1278 | 22.26 | 1.23932 | 358 | 22.3 |
| Sociology | 300 | 228 | 76 | 12.2 | 478 | 8.33 | .58785 | 134 | 8.3 |
| Economics | 300 | 207 | 69 | 11.1 | 705 | 12.28 | .954518 | 198 | 12.3 |
| Political Science | 300 | 225 | 75 | 12.0 | 718 | 12.51 | .894604 | 201 | 12.5 |
| Psychology | 300 | 205 | 68 | 11.0 | 719 | 12.53 | .983452 | 202 | 12.5 |
| Other(1) | | 36 | | 1.9 | | | 1 | 36 | |
| Refused | | 1 | | .1 | | | 1 | 1 | |
| Total | 2200 | 1646 | 74.8 | 100 | 5740 | 100 | | 1646 | 100 |

(1) When asked to specify, the most common “other” disciplines were subfields of the core disciplines, such as “molecular biology.”
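The weighting logic implied by Table A.1 can be reproduced, approximately, as population share divided by respondent share, with the "Other" and "Refused" respondents (37 of the 1,646) excluded from the base. This is a minimal sketch of that arithmetic using the table's figures; the variable names are mine, not the study's.

```python
# Sketch of the RAAS data-weight computation implied by Table A.1.
# Assumption (mine): each discipline's weight makes its share of the
# weighted respondents match its share of the faculty population,
# with "Other" and "Refused" respondents excluded from the base.

population = {  # faculty in the sampling frame, from Table A.1
    "Physics": 1123, "Chemistry": 719, "Biology": 1278,
    "Sociology": 478, "Economics": 705, "Political Science": 718,
    "Psychology": 719,
}
respondents = {  # survey respondents by discipline, from Table A.1
    "Physics": 241, "Chemistry": 214, "Biology": 289,
    "Sociology": 228, "Economics": 207, "Political Science": 225,
    "Psychology": 205,
}

pop_total = sum(population.values())    # 5,740 faculty in the frame
resp_total = sum(respondents.values())  # 1,609 core-discipline respondents

# weight = (population share) / (respondent share)
weights = {
    d: (population[d] / pop_total) / (respondents[d] / resp_total)
    for d in population
}

for d, w in weights.items():
    print(f"{d}: {w:.3f}")
```

Applying these weights reproduces the table's "Weighted Number of Respondents" column (e.g., 241 physicists weighted up to roughly 315), so the weighted sample mirrors the disciplinary mix of the population.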

Initially, I wrote a personalized letter to each potential participant in the study that contained a fifteen-dollar cash preincentive (i.e., I sent fifteen dollars in cash to each potential respondent regardless of whether they decided to participate in the survey). Each selected scientist received a unique identification code with which to log in to a website and complete the survey. After five reminder e-mails, the research firm commissioned to field the survey, Schulman, Ronca, and Bucuvalas, Inc. (SRBI), called respondents up to twenty times, requesting participation over the phone or on the web. The preincentive provoked quite a bit of controversy among some scientists and admiration from others. For example, one psychologist said, “As soon as I opened that up I thought, ‘Oh my God. I’ve got the bills now. I have to do it’ [laughs] … It was just brilliant.”6 Other scientists called the study “harassment” or even “coercion.” For example, a well-known sociologist wrote me an email saying, “It is obnoxious to send money (cash!) to create the obligation to respond.”7 It is important to note that the study received full human subjects approval from Rice University and later from the University at Buffalo, SUNY.

As economists and political scientists have already discovered, the preincentive does work. Six and a half percent of the respondents completed the survey on the phone, and 93.5 percent completed the web-based survey. Overall, this combination of methods resulted in a response rate of 75 percent, or 1,646 respondents, ranging from 68 percent for psychologists to 78 percent for biologists, a high response rate for a survey of faculty.8 For example, even the highly successful Carnegie Commission study of faculty achieved only a 59.8 percent rate.9 Many of the scientists who chose not to participate wrote to tell me why: I received 132 personal e-mails or letters from those who did not wish to participate (out of 552 total nonrespondents). Their reasons were systematically coded; in total, the scientists provided thirteen discrete reasons for not participating in the survey. Dominant reasons included “lack of time,” “problems with the incentive,” “traveling or away during the survey,” and simply “do not wish to participate.” I also conducted demographic analyses of the nonrespondents and found no substantial differences between those who responded and those who did not along basic demographic indicators such as gender, age, discipline, and race.
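The response-rate figures quoted here follow directly from the counts in Table A.1; a short check of that arithmetic (variable names are mine):

```python
# Response rates implied by Table A.1: completed surveys / sampled faculty.
sampled = {"Physics": 328, "Chemistry": 300, "Biology": 372,
           "Sociology": 300, "Economics": 300, "Political Science": 300,
           "Psychology": 300}
completed = {"Physics": 241, "Chemistry": 214, "Biology": 289,
             "Sociology": 228, "Economics": 207, "Political Science": 225,
             "Psychology": 205}

# Per-discipline response rates, in percent.
rates = {d: 100 * completed[d] / sampled[d] for d in sampled}

# Overall rate uses all 1,646 respondents over all 2,200 sampled
# (including the "Other" and "Refused" rows of Table A.1).
overall = 100 * 1646 / 2200

print(f"overall: {overall:.1f}%")
print(f"range: {min(rates.values()):.0f}% (Psychology) "
      f"to {max(rates.values()):.0f}% (Biology)")
```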

The survey asked questions about religious identity, belief, and practice that were replicated from national surveys of the general population, such as the General Social Survey, as well as questions on spiritual practices, ethics, and the intersection of religion and science in the respondent’s discipline, some of which were replicated from other national surveys and some of which I developed for this study.10 There was also a series of inquiries about academic rank, publications, and demographic information. A complete survey guide is included in Appendix B.

At certain points in the manuscript I compare the scientists in my survey to those who responded to the 1969 survey of American faculty sponsored by the Carnegie Commission, in order to make comparisons over time. For that survey, information was collected by mail from faculty members employed by two- and four-year colleges and universities in the United States. Faculty members were asked about various social, political, and educational issues and demographic information, as well as several questions on religion. Among the 2,300 colleges and universities in the United States at that time, 43 universities were designated as elite or high quality.11 When comparing the Carnegie data to the RAAS survey, I analyzed only faculty members employed by institutions that the Carnegie Commission designated “high quality universities.” To roughly match the academic scientists from the 2005 survey, the Carnegie sample was further narrowed to include only those from the same natural and social science disciplines. The complete demographics of the RAAS survey population are included in Table A.2.


Table A.2. Demographics of the Sample

| | Physics | Chemistry | Biology | Natural Sciences Overall | Sociology | Economics | Political Science | Psychology | Social Sciences Overall |
|---|---|---|---|---|---|---|---|---|---|
| Academic Rank | | | | | | | | | |
| % Full Professor | 70.5 | 66.4 | 58.5 | 64.6 | 55.7 | 61.4 | 54.2 | 54.1 | 56.4 |
| % Associate Professor (with tenure) | 13.3 | 10.7 | 17.0 | 14.2 | 16.7 | 7.2 | 16.9 | 19.5 | 15.0 |
| % Assistant Professor | 15.4 | 19.2 | 22.8 | 19.3 | 25.9 | 29.0 | 26.7 | 24.9 | 26.7 |
| % Associate Professor (without tenure) | .0 | .9 | .3 | .3 | .0 | 1.9 | 1.8 | 1.0 | 1.2 |
| % Other Ranking | .8 | 2.8 | 1.4 | 1.5 | 1.8 | .5 | .4 | .5 | .7 |
| Mean Age | 51.3 | 49.9 | 49.8 | 50.38 | 48.6 | 46.6 | 48.3 | 49.4 | 48.25 |
| % Currently Married | 85.4 | 84.8 | 85.4 | 85.2 | 80.4 | 81.5 | 82.2 | 75.2 | 79.8 |
| Mean Number of Children | 2.20 | 2.18 | 2.46 | 2.30 | 2.33 | 2.11 | 2.27 | 2.19 | 2.21 |
| % White | 85.3 | 85.7 | 83.9 | 84.9 | 82.6 | 85.9 | 84.1 | 87.4 | 85.2 |
| % Black | .4 | 1.0 | 1.1 | .8 | 4.6 | 1.0 | 2.3 | 4.0 | 2.8 |
| % Hispanic | .9 | 1.0 | 1.8 | 1.3 | 4.6 | 3.5 | 5.5 | 1.0 | 3.5 |
| % Asian | 12.5 | 9.4 | 12.9 | 11.9 | 4.6 | 9.1 | 5.0 | 6.0 | 6.3 |
| Citizenship Status | | | | | | | | | |
| % Non-Immigrant, U.S. Citizen | 46.8 | 61.7 | 63.4 | 57.0 | 69.6 | 40.4 | 63.6 | 71.4 | 60.7 |
| % 1st Generation Immigrant, U.S. Citizen | 20.3 | 15.3 | 11.6 | 15.6 | 8.5 | 12.3 | 9.3 | 9.9 | 10.1 |
| % 2nd Generation Immigrant, U.S. Citizen | 5.5 | 5.7 | 4.9 | 12.2 | 4.9 | 3.9 | 8.0 | 3.0 | 13.5 |
| % Non-U.S. Citizen | 17.7 | 12.9 | 14.4 | 15.3 | 10.3 | 35.5 | 9.3 | 6.4 | 15.7 |
| % Female | 9.2 | 12.0 | 26.1 | 16.7 | 35.3 | 13.3 | 27.1 | 34.5 | 27.0 |


While surveys provide broad contours, the relationship between religion and science among scientists themselves is so complex that basic statistics can never tell the whole story. To this end, I thought it necessary to employ a method that would allow discovery of new categories for how scientists structure meanings of religion, science, and spirituality, and the relationship among these in their public and private lives. Many of the assertions in the book revolve around the in-depth interviews I conducted with natural and social scientists. (A final interview guide is included in Appendix C.) For the interviews, 501 of those who completed the survey were randomly selected and asked to participate in a longer in-depth interview. At least fifty individuals were selected from each of the seven fields. Over a three-year period between July 2005 and November 2007, 275 interviews were completed in person or over the phone. I completed 245 of these interviews personally; 30 were completed by research associates. These in-depth interviews (using a semistructured interview guide) ranged from twenty minutes to two and a half hours and were fully transcribed with the help of twelve undergraduate research assistants at Rice University, where I was a postdoctoral fellow when much of this research was completed. Respondents were asked specifically how they understand the terms “religion” and “spirituality.” They were also asked whether religion or spirituality has any influence on their specific discipline or their particular research, as well as how they perceive the relationship between religion and science.
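The interview-selection step described above — a random draw of 501 survey completers with at least fifty from each of the seven fields — amounts to a stratified sample with a random top-up. A sketch of one way to implement it; the function, field names, and fake IDs are illustrative, not from the study:

```python
import random

def select_interviewees(completers_by_field, per_field=50, total=501, seed=0):
    """Draw at least `per_field` survey completers at random from each
    field, then top up at random from the remaining pool to `total`."""
    rng = random.Random(seed)
    selected = []
    for field, people in completers_by_field.items():
        selected.extend(rng.sample(people, min(per_field, len(people))))
    remaining = [p for people in completers_by_field.values()
                 for p in people if p not in selected]
    top_up = total - len(selected)
    if top_up > 0:
        selected.extend(rng.sample(remaining, min(top_up, len(remaining))))
    return selected

# Illustrative usage with fabricated respondent IDs (not study data):
fields = {f: [f"{f}-{i}" for i in range(200)] for f in
          ["physics", "chemistry", "biology", "sociology",
           "economics", "polisci", "psychology"]}
chosen = select_interviewees(fields)
print(len(chosen))  # 501
```

Guaranteeing a per-field floor before topping up is what keeps the smaller disciplines from being swamped in the interview pool, mirroring the oversampling logic of the survey itself.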

The student research team and I worked together on coding the interviews. In light of previous research, we developed some codes a priori to test existing theories about interdisciplinary and interfield (natural versus social science) differences in views about the relationship between religion and science, as well as about definitions of religion, spirituality, and science.12 For direct definitions and for interview questions that I developed to respond directly to other research, I am able to provide frequencies of answer categories, and I do so in the text.

Other questions on the interview guide, however, were added after I had interviewed several or (in some cases) large numbers of respondents. I added these questions when themes emerged from the interviews that needed to be systematically explored in the remaining interviews. Once the interviews were sorted according to these categories, we used a modified form of inductive coding13 to develop grounded categories capturing the range of ways academic scientists viewed religion, science, and the relationship between them. We then systematically recoded the interviews. I wanted to make sure that the same passage would be coded the same way by different research assistants. To that end, the final coding scheme was tested for inter-coder reliability and achieved a reliability statistic of .90. When a passage from an interview transcript was not coded the same way, the code was revisited and changed to achieve consistency.
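The appendix does not say which reliability statistic the .90 refers to. One common and simple option is percent agreement between two coders over the same set of passages; a sketch, with hypothetical codes that are mine rather than the study's:

```python
def percent_agreement(coder_a, coder_b):
    """Share of passages assigned the same code by both coders."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same passages")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes for ten interview passages (illustrative only):
a = ["conflict", "independence", "dialogue", "conflict", "spirituality",
     "independence", "conflict", "dialogue", "dialogue", "conflict"]
b = ["conflict", "independence", "dialogue", "independence", "spirituality",
     "independence", "conflict", "dialogue", "dialogue", "conflict"]
print(percent_agreement(a, b))  # 0.9
```

Percent agreement does not correct for chance agreement; statistics such as Cohen's kappa do, and may be what the study actually reports.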

In addition, between 2005 and 2008, I (or research assistants) attended thirteen different lectures and events around the country at which prominent scientists spoke about the connection between faith and science or about how to better translate science to a broader audience. These lectures were coded for how the scientists approached these topics, and discussions of several of them appear in the text of the book.

Notes:

(1) . After the RAAS study began, the “Top American Research Universities” project moved to Arizona State University. See http://mup.asu.edu/, accessed April 17, 2009.

(2) . These measures are similar to those used in other studies that examined elite universities. See, for example, Bowen and Bok, The Shape of the River, as well as Massey, Charles, Lundy, and Fischer, The Source of the River. The authors of these volumes used a similar strategy for designating a university as “elite” or “highly selective.”

(3) . See, for example Rado, “Cultural Elites and the Institutionalization of Ideas,” as well as Collins, The Sociology of Philosophies and Lindsay, “Elite Power” and Faith in the Halls of Power.

(4) . See, in particular, Gross and Simmons, “How Religious Are America’s College and University Professors?” See also Gross and Simmons, “The Religiosity of American College and University Professors.”

(5) . See, in particular, Larson and Witham, “Scientists Are Still Keeping the Faith” and “Leading Scientists Still Reject God.”

(6) . Psyc 17, conducted January 3, 2006.

(7) . This individual did not participate in the survey.

(8) . For more on the preincentive, see Ecklund and Scheitle, “Religion among Academic Scientists” as well as Armstrong, “Monetary Incentives in Mail Surveys.”

(9) . See Ladd and Lipset, “The Politics of Academic Natural Scientists and Engineers.”

(10) . The 1998 GSS had 2,832 respondents, although only half of the sample was asked the expanded set of religion and spirituality questions. The 2004 GSS had 2,812 respondents. Where possible, I used data from the GSS 2006 for the comparisons of scientists with the general population. See Davis, Smith, and Marsden, General Social Surveys.

(11) . The Carnegie data set used the Gourman Report to indicate quality of universities. Since this study was specifically interested in faculty at elite universities, the sample was restricted to faculty members at universities termed “high quality,” using factors such as faculty publication records. Forty-three universities are indicated as “high quality” in the Carnegie data set. Although the publicly available report about the data includes the names of these universities, the publicly available data set does not indicate which particular institution faculty members are associated with, limiting the ability to match specific institutions from the 1969 Carnegie data set with the 2005 RAAS data set. All of the 21 universities included in the 2005 survey, however, are also on the list of 43 institutions that the Carnegie study defines as high-quality research universities.

(12) . See Wuthnow, “Science and the Sacred,” as well as Lehman and Shriver, “Academic Discipline as Predictive of Faculty Religiosity” and Lehman, “Academic Discipline and Faculty Religiosity in Secular and Church-Related Colleges.” See also Stark and Finke, Acts of Faith. All of these authors write about the Carnegie Foundation (1969, 1984 surveys) research that compares the differences between natural and social scientists in their levels of religiosity.

(13) . See Strauss and Corbin, Basics of Qualitative Research.