Psychology of Science: Implicit and Explicit Processes

Robert W. Proctor and E.J. Capaldi

Print publication date: 2012

Print ISBN-13: 9780199753628

Published to Oxford Scholarship Online: September 2012

DOI: 10.1093/acprof:oso/9780199753628.001.0001

The Role of Psychology in an Agent-Centered Theory of Science

Chapter:
4 The Role of Psychology in an Agent-Centered Theory of Science
Source:
Psychology of Science
Author(s):

Ronald N. Giere

Publisher:
Oxford University Press
DOI: 10.1093/acprof:oso/9780199753628.003.0004

Abstract and Keywords

The question that frames this chapter is how humans have managed to learn such amazing things as the age of the universe. After briefly reviewing logical, methodological, historical, and social approaches to this question, the chapter focuses on contributions of the cognitive study of science. This leads to a comparison of the cognitive study of science and the psychology of science, which study how fundamental cognitive mechanisms operate in the context of generating scientific knowledge. There is, however, a second way humans use their psychological powers in the pursuit of knowledge, namely, by designing material and symbolic artifacts that greatly increase their cognitive powers. The resulting physical-computational-human systems have been incorporated into the cognitive sciences as “distributed cognitive systems.” The chapter proposes adoption of an agent-centered approach, in which ever more ubiquitous distributed cognitive systems can be fully cognitive without being fully computational.

Keywords: cognitive study of science, psychology of science, cognitive mechanisms, scientific knowledge, material artifacts, symbolic artifacts, distributed cognitive systems

Approaches to the Study of Science

Since those whose work might be classified as belonging to a psychology of science relate to that discipline in quite different ways, I should begin by characterizing my own relationship to it. This can best be done, I think, by stating a fundamental problem that lies behind most of my work.

How are humans, given only their relatively modest evolved capacities (physical, cognitive, etc.), able to do science; in particular, to learn about such things as the movements of the continents, the mechanisms of inheritance, or the age of the universe?

This is implicitly a question about scientific knowledge, which reflects my disciplinary background in the philosophy of science. But the question is about learning, the process of acquiring knowledge, not about the ultimate validity of particular knowledge claims. I do not question that we do know the things mentioned, in some ordinary sense of “know.” The question also presumes an evolutionary perspective that takes seriously the idea that humans evolved from earlier forms of life. From this perspective, it really is quite remarkable that such evolved creatures as ourselves could come to know what we do in fact know. This is something that demands an explanation. I assume, however, that the explanation must itself be a scientific (naturalistic) explanation. This places me in the company of psychologists and sociologists of science who also seek a scientific explanation of the acquisition of scientific knowledge.

My question has two parts. One concerns the origin of science. How did humans originally come to do science? The second concerns contemporary practice. How do humans now do science? The two questions are, of course, related. In particular, answers to the second question are constrained in that contemporary practice must be something that could arise naturally from our evolutionary past. Except for a few comments about the 17th-century scientific revolution, I will focus on contemporary scientific practice.

Answers to my question are implicit in standard approaches to understanding scientific practice. Here is a brief survey.

A Logical Approach

In this approach the focus is on the logical structure of theories and on an inductive logic that provides a logical (epistemological) relationship between observation and theory. In logical empiricism, for example, the logical structure of scientific theories and a logic of justification provide the basis for objective claims to knowledge. Recent historical scholarship reveals this movement to have been a continuation of Kantian and neo-Kantian projects to show how objective (indeed, universal) knowledge is possible (Friedman, 1999; Giere & Richardson, 1996). In this project, the actual psychology of real scientists (as opposed to ideally rational agents) is relevant only to the creation of empirical concepts, not to the validity of their application. This is the well-known distinction between “discovery” and “justification.”

A Methodological Approach

On this approach, scientists make discoveries by applying a scientific method. Of course, there are lots of candidates for just what constitutes the correct scientific method. One candidate familiar to many psychologists has its roots in pragmatism (Dewey, 1938, esp. Chapter 21). The basic pragmatist position is that one begins with a comfortably held set of beliefs that then conflicts with a new observation. The resulting epistemic discomfort initiates a process of inquiry that begins with the suggestion of hypotheses that, if true, would resolve the conflict (abductive step). From these hypotheses one deduces predictions about possible new observations (deductive step). These predictions are then tested against new observations/experiments. The most successful hypothesis is adopted (induction). Epistemic comfort is thus restored with a revised set of beliefs. This is, of course, just a schema. We need more details about the key parts of the process: abduction, experimentation, and induction. The part of this schema to which psychology seems most relevant is the abductive step. Research on analogical reasoning, for example, finds a place here. But this leaves the role of psychology in the study of science just where the logical empiricists put it: in the context of discovery.
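
To make the schema concrete, here is a minimal sketch of the abduction-deduction-induction cycle. The hypothesis type, the toy data, and the scoring rule are purely illustrative placeholders, not anything drawn from Dewey; abduction is assumed to have already supplied the candidate hypotheses.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative placeholder types; nothing here comes from Dewey's account.
@dataclass(frozen=True)
class Hypothesis:
    name: str
    predict: Callable[[float], float]   # deductive step: condition -> predicted observation

def inquire(hypotheses: List[Hypothesis],
            experiments: List[Tuple[float, float]],
            tolerance: float = 0.25) -> Hypothesis:
    """Test each hypothesis's deduced predictions against new observations
    and adopt the most successful one (the inductive step)."""
    def score(h: Hypothesis) -> float:
        hits = sum(1 for condition, observed in experiments
                   if abs(h.predict(condition) - observed) <= tolerance)
        return hits / len(experiments)
    return max(hypotheses, key=score)

# Two toy hypotheses about how some measured quantity depends on a condition.
h_linear = Hypothesis("linear", lambda x: 2.0 * x)
h_square = Hypothesis("quadratic", lambda x: x ** 2)
observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0)]      # (condition, observation) pairs
print(inquire([h_linear, h_square], observations).name)  # -> "linear"
```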

A Historical Approach

There are actually two strains of thought that can be labeled “historical.” One is that initiated by Thomas Kuhn (1962) in The Structure of Scientific Revolutions. Here scientists are treated as natural agents to whom empirical, psychological categories apply. Thus, invoking the psychology of his time, Kuhn described a scientist adopting a new paradigm as experiencing a “gestalt switch.” The second strain consists of philosophical responses to Kuhn by philosophers such as Imre Lakatos (1970), Stephen Toulmin (1972), and Larry Laudan (1977). In these accounts, the logical empiricist idea of “rational justification” is replaced by a semihistorical notion of “rational progress” measured in terms of such things as “solved problems.” These are characterized in terms of objective, linguistic entities such as theories, hypotheses, and data. The psychology of real agents is not part of these accounts, so there is again little place for a psychology of real scientists.

A Social Approach

In recent sociology of science, a scientific consensus (which by itself is taken to constitute “scientific knowledge”) is a function of social interactions, social networks, and the like (see Biagioli, 1999). Here one is dealing with real human agents, but their psychological characteristics are conceived of in everyday “folk psychological” terms. There is no need for a scientific psychology of science. Indeed, sociologists of science have been explicitly critical of attempted psychological explanations of any aspects of scientific practice. This seems an instance of the traditional border dispute between psychology and sociology. One of the most notorious invocations of this boundary occurred in Latour and Woolgar’s “Postscript” to the second edition of Laboratory Life (1986, 280) where they proposed “a ten-year moratorium on cognitive explanations of science” and promised that “if anything remains to be explained at the end of this period, we too will turn to the mind!”

A Cognitive Approach

By the mid-1980s, I personally had become disenchanted with logical approaches to understanding science, but also had reservations about the other then prominent approaches noted above. Inspired by books such as Howard Gardner’s (1985) The Mind’s New Science, I discovered the then newly emerging cognitive sciences. These provided a framework for my work in the philosophy of science. I replaced the philosophy of science notion of the structure of theories with the more general cognitive science notion of representation. Likewise, instead of inductive logic I talked about judgments regarding the fit of models to the world. The overall goal was not (or not directly) to justify science, but to understand it. The result was the book Explaining Science: A Cognitive Approach (Giere, 1988). I must admit, however, that the use of actual research in the cognitive sciences in this book is highly selective. The main themes of representation and judgment owe as much to work in the philosophy of science and decision theory as they do to work in the cognitive sciences. Only gradually did it become clear to me that there was an emerging subfield that could be called “The Cognitive Study of Science.” At one point, in response to criticism from a constructivist sociologist of science, I went so far as to propose (partly tongue in cheek) a program for “The Cognitive Construction of Scientific Knowledge” (Giere, 1992).

Cognitive Studies of Science

The cognitive study of science developed in the 1970s and 1980s as a multidisciplinary mixture involving the history and philosophy of science and the newly emerging cognitive sciences. Early on there was some European influence due to the work of Piaget (1929) and later Howard Gruber (1981), but most of the early work was American. Beginning in the mid-1960s, Herbert Simon (1966) suggested applying the techniques of artificial intelligence to study the process of scientific discovery, culminating in the book Scientific Discovery: Computational Explorations of the Creative Processes (Langley, Simon, Bradshaw, & Zytkow, 1987). In the late 1970s, a group of psychologists at Bowling Green State University in Ohio, led by Ryan Tweney, began a program of systematically studying scientific thinking, including simulated experiments in which students were to try to discover the “laws” governing an “artificial universe.” Their work (Tweney, Doherty, & Mynatt, 1981) highlighted such phenomena as “confirmation bias,” the tendency of subjects to pursue hypotheses that agreed with some data even in the face of clearly negative data. Tweney himself (Tweney, 1985; Tweney, Mears, & Spitzmuller, 2005) turned his attention to Michael Faraday. Michael Gorman (1992), a psychologist at the University of Virginia, also conducted experiments on scientific reasoning in simulated situations. Around the same time, the psychologist Dedre Gentner and associates began developing theories of analogical reasoning and mental modeling which they applied to historical cases such as the discovery of electricity and the distinction between heat and temperature (Gentner, Holyoak, & Kokinov, 2001; Gentner & Stevens, 1983). A developmental psychologist, Susan Carey (1985), began applying Kuhn’s account of revolutionary change in science to the cognitive development of children. Michelene T. H. Chi (1992), a Pittsburgh psychologist, also investigated the phenomenon of conceptual change in the context of differences in problem-solving strategies among novices and experts.
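
A toy simulation in the spirit of these artificial-universe tasks shows how a purely positive-test strategy can leave a wrong hypothesis looking fully confirmed. The hidden rule, the subject’s hypothesis, and the testing strategy below are hypothetical illustrations, not the Bowling Green materials.

```python
import random

def hidden_rule(triple):            # the experimenter's actual "law"
    a, b, c = triple
    return a < b < c                # any strictly increasing triple

def subject_hypothesis(triple):     # the subject's overly specific guess
    a, b, c = triple
    return b - a == 2 and c - b == 2

def positive_tests(n=10, seed=0):
    """A positive-test strategy: only propose triples the hypothesis already fits."""
    rng = random.Random(seed)
    return [(s, s + 2, s + 4) for s in (rng.randint(1, 50) for _ in range(n))]

# Every positive test comes back "yes", so the subject never meets evidence
# against the (wrong) hypothesis: the confirmation-bias pattern.
trials = positive_tests()
print(all(hidden_rule(t) for t in trials))   # True: every trial is "confirmed"

# A test the positive strategy never tries, which would have revealed that
# the hidden rule is broader than the subject's hypothesis.
print(hidden_rule((1, 7, 20)), subject_hypothesis((1, 7, 20)))   # True False
```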

In the wake of Kuhn’s Structure of Scientific Revolutions, a number of philosophers of science began applying more recent notions from the cognitive sciences to understand conceptual change in science. Nancy Nersessian (1984) applied notions of mental models and analogical reasoning to the development of field theories in physics in the 19th and early 20th centuries. Lindley Darden (1991) applied techniques from artificial intelligence in an attempt to program theoretical and experimental strategies followed in the development of Mendelian genetics. Paul Thagard (1988, 1991, 2000) advocated a full-blown “computational philosophy of science” and went on to develop an account of conceptual change based on a notion of “explanatory coherence” which he implemented in a computer program. Paul Churchland (1989) applied his own “neurocomputational perspective” to philosophy of science topics such as the nature of scientific theories. It will be noted that all this work is quite heterogeneous, involving people in different fields appealing to different aspects of the cognitive sciences and focusing on different topics and different historical periods or figures. The cognitive study of science has been decidedly multidisciplinary (see Carruthers, Stich, & Siegal, 2002; Giere, 1992, 2008; Gorman, Tweney, Gooding, & Kincannon, 2005).

As an example of more recent work in the cognitive study of science I would cite Nersessian’s (2008) Creating Scientific Concepts. Nersessian has been the most prominent promoter of the idea of the cognitive study of science, and this work represents 25 years of sustained research on the topic of conceptual change in science. Nersessian thinks that the most important part of conceptual change is the creation of the new concepts that will become part of the later conceptual structure. Moreover, she thinks that this process of concept creation has a microstructure that can fruitfully be studied in actual historical and contemporary settings by selectively applying notions from contemporary cognitive science. For Nersessian, new concepts are generated from old concepts in specific problem situations by a process of “model-based reasoning.” Individual agents begin with a “mental model,” in her words “a structural, behavioral, or functional analog representation of a real-world or imaginary situation, event, or process” (2008, p. 93). She emphasizes three components of model-based reasoning: analogy, visualization, and thought-experimenting. In the case of dynamic systems, for example, thought-experimenting is a matter of mentally simulating the operation of an imaginary system. Individual mental models may be “coupled” with public visual models in the form of drawings or graphs, as in the case of Newton’s famous drawing of a cannon ball shot around the world from off the top of a mountain. She also emphasizes the importance of intermediate “hybrid” models incorporating features of both a source and target domain.

Rich content for Nersessian’s book is provided by two “exemplars” of model-based reasoning: Maxwell’s development of electromagnetic theory following Faraday’s researches on magnetic induction, and a contemporary think-aloud problem-solving protocol experiment designed by the cognitive scientist and education researcher John Clement. In the latter exemplar, a person with a PhD involving both mathematics and physics was asked to explain whether, keeping all other variables constant, a spring with twice the diameter would stretch more, less, or the same as the original spring, both loaded with the same small weight. In the course of this long experiment, the subject developed a new (for him) concept of torsion in springs. Both exemplars are explained in chapter-long presentations and referred to throughout the book.
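
For readers curious about the physics, the textbook stiffness formula for a close-coiled helical spring already settles the interview question; this is a standard result and not a formula appearing in Clement’s protocol.

```latex
% Standard stiffness of a close-coiled helical spring: shear modulus G, wire
% diameter d, mean coil diameter D, number of active coils n, axial load F.
\[
  k = \frac{G\,d^{4}}{8\,D^{3}\,n},
  \qquad
  x = \frac{F}{k} = \frac{8\,F\,D^{3}\,n}{G\,d^{4}} .
\]
```

Doubling the coil diameter D while holding F, d, n, and G constant multiplies the extension x by a factor of eight, so the wider spring stretches more, because the load is carried largely as torsion in the wire, which is just the concept the subject eventually constructed.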

The Cognitive Study of Science and the Psychology of Science

The cognitive study of science is obviously related to the psychology of science. Figure 4.1 shows how I think they are related both to each other and to neighboring disciplines. Both the cognitive study of science and the psychology of science are represented as components of science studies. That they strongly overlap is indicated by the fact that cognitive psychology appears twice in the diagram, once as part of the cognitive sciences and once as part of psychology. I suspect this duplication may reflect a real disciplinary tension among those whose primary identification is with cognitive psychology.

[Figure 4.1 appears here: the relations among the cognitive study of science, the psychology of science, and neighboring science studies disciplines.]

The main difference shown here is that the cognitive study of science looks to cognitive science for inspiration while the psychology of science looks to psychology as a whole (Feist, 2006). The cognitive study of science has strong positive ties with both the history of science and the philosophy of science while being in tension with the sociology of science. The psychology of science, being less well established, has fewer ties with the other science studies disciplines.

Scientific psychology has typically been regarded as the study of fundamental psychological mechanisms that manifest themselves in different contexts. Yet it is clear that the fundamental psychological mechanisms possessed by Europeans did not change appreciably between 1400 and 1900, during which period Western culture went from being theologically based to being, to a significant extent, scientifically based. In 1400, many of the best minds were focused on theology. In 1900, many were focused on science. So one cannot explain the Scientific Revolution, the Enlightenment, or later developments in terms of fundamental psychological mechanisms themselves. We have to look beyond the mere existence of cognitive mechanisms to make psychology relevant to any answer to my fundamental question. This means that the psychology of science cannot be a component of psychology in the way that developmental psychology and cognitive psychology are components. Rather, it is more like educational psychology or organizational psychology. Here there are well-established cultural practices, and the psychology of participants is relevant to understanding the practice. Which parts of fundamental psychology are most relevant depends on the practice being studied.

The same holds for the cognitive study of science. It cannot be part of the cognitive sciences in the way that artificial intelligence and neuroscience are parts. It is applied cognitive science. Nersessian’s work well illustrates the idea that the cognitive study of science involves studying the operations of fundamental cognitive capacities in the special context of doing science. Thought experiments, for example, are not unique to science, but utilize the same basic cognitive functions as deliberating regarding future actions and anticipating the movements of other bodies. In these studies, however, one does also learn something new about the fundamental cognitive capacities; namely, that they can be deployed in these new ways, and how they function in these new contexts.

Distributed Cognition

Thus far I have discussed one way in which basic human cognitive capacities enter into the practice of science; namely, when basic capacities are deployed in new ways in a scientific context, as when scientists use the basic human resource of reasoning by analogy. There is also a second general way in which basic human cognitive capacities enter into the practice of science; namely, when scientists use these basic capacities to create artifacts that make possible wholly new capacities for learning about the world. There are two sorts of such artifacts, material and symbolic. Both types played major roles in the Scientific Revolution. Microscopes, telescopes, and the air pump were among the most influential material artifacts. Analytic geometry and the calculus are prime examples of symbolic artifacts. The use of such artifacts has recently been incorporated into the cognitive sciences under the umbrella of “Distributed Cognition.” As will shortly become evident, I think this incorporation may create difficulties for the still reigning paradigm according to which cognition is nothing but computation.

A standard source for the concept of distributed cognition within the cognitive sciences is Ed Hutchins’s (1995) study of navigation. This is an ethnographic study of “pilotage,” that is, traditional navigation near land as when coming into port. Hutchins argues that individual humans may be merely components in a complex cognitive system. No one human could physically do all the things that must be done to fulfill the cognitive task, in this case repeatedly determining the relative location of a traditional navy ship as it nears port. For example, there are sailors on each side of the ship who telescopically record angular locations of landmarks relative to the ship’s gyrocompass. These readings are then passed on, for example, by the ship’s telephone, to the pilothouse where they are combined by the navigator on a specially designed chart to plot the location of the ship. In this system, no one person could possibly perform all these tasks in the required time interval. And only the navigator, and perhaps his assistant, knows the outcome of the task until it is communicated to others in the pilothouse.

One might wish to treat Hutchins’s case merely as an example of collective cognition. The cognitive task—determining the location of the ship—is performed by a collective, an organized group, and, moreover, in the circumstances, could not physically be carried out by a single individual. Hutchins’s conception of distributed cognition, however, goes beyond collective cognition. He includes not only persons but also instruments and other artifacts as parts of the cognitive system. Thus, among the components of the cognitive system determining the ship’s position are the telescopic devices used to observe the bearings of landmarks and the navigational chart on which bearings are drawn with a protractor-like device. The ship’s position is determined by the intersection of two lines drawn on the chart using bearings from the two sightings on opposite sides of the ship. So parts of the cognitive process take place not in anyone’s head, but in an instrument or on a chart. The cognitive process is distributed among humans and material artifacts. This incorporation of material artifacts makes the notion of distributed cognition conceptually powerful but also troubling for a paradigm that equates cognition with computation.
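
As a rough illustration of the computation the chart work accomplishes, here is a minimal sketch of a two-bearing fix: each observed bearing defines a line of position through its landmark, and the ship lies at their intersection. The coordinates and bearings are made up, and nothing here reproduces the Navy’s actual tools or procedures.

```python
import math

def fix(landmark1, bearing1_deg, landmark2, bearing2_deg):
    """Return the ship's (east, north) position from two visual bearings,
    given as true bearings from the ship to each landmark (degrees clockwise
    from north). The ship lies back along each bearing line."""
    (x1, y1), (x2, y2) = landmark1, landmark2
    # Unit vectors pointing from the ship toward each landmark.
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve  landmark1 - t1*d1 = landmark2 - t2*d2  for t1 (Cramer's rule).
    det = d2[0] * d1[1] - d2[1] * d1[0]
    if abs(det) < 1e-9:
        raise ValueError("bearings are (nearly) parallel; no unique fix")
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / det
    return (x1 - t1 * d1[0], y1 - t1 * d1[1])

# Ship actually at (0, 0): a landmark due north bears 000 and one due east bears 090.
print(fix((0.0, 5.0), 0.0, (5.0, 0.0), 90.0))   # -> approximately (0.0, 0.0)
```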

Given the notion of a distributed cognitive system, we can see the history of science since the Scientific Revolution as a progression of ever more powerful such systems for producing scientific knowledge: from Galileo and his telescope, through Torricelli and his barometer, to E. O. Lawrence and his cyclotron. In fact, I think we can scale up Hutchins’s example to consider archetypical cases of contemporary science such as the Hubble telescope. Hubble has produced genuinely revolutionary observations. For example, in January 2003, the Space Telescope Science Institute released a remarkable image produced by the Advanced Camera for Surveys aboard the Hubble. The process that produced this image involved electronic detectors sensitive to light in the infrared part of the electromagnetic spectrum. The output of the detectors was fed into an onboard computer and put into a form in which it could be transmitted to a Tracking and Data Relay Satellite, from which it was retransmitted to the White Sands Complex near Las Cruces, New Mexico, and then again by domestic satellite to the Data Operations Control Center at the Goddard Space Flight Center in Greenbelt, Maryland. From there it was routed to the Data Capture Facility and finally on to the Space Telescope Science Institute in Baltimore, where it was studied by astronomers and other space scientists. Each step in this process in some way modifies the initial input and contributes to the construction of the final image.
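
Schematically, the relay path just described can be pictured as a chain of transformations applied to the detector output. The stage names follow the text; what each stage does to the data in this sketch is only a placeholder, not the actual Hubble ground-system processing.

```python
# Placeholder pipeline: each stage simply records that the data passed through it.
def stage(name):
    def step(data):
        return {**data, "path": data.get("path", []) + [name]}
    return step

PIPELINE = [
    stage("onboard computer (packetize detector output)"),
    stage("Tracking and Data Relay Satellite"),
    stage("White Sands Complex (ground downlink)"),
    stage("Data Operations Control Center, Goddard Space Flight Center"),
    stage("Data Capture Facility"),
    stage("Space Telescope Science Institute (image construction)"),
]

def run(detector_output):
    """Each step modifies the data on its way to the final image."""
    data = detector_output
    for step in PIPELINE:
        data = step(data)
    return data

print(run({"raw_counts": [12, 7, 31]})["path"])
```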

A remarkable feature of this particular image is that it involved gravitational lensing. During the exposure, the Hubble telescope was pointed directly at a massive cluster of galaxies estimated to be 2.2 billion light-years away. In accordance with the general theory of relativity, this mass acts like a lens by warping space around it and thus effectively bending light passing by. Scientists who have studied the data claim that the image records light emitted from galaxies roughly 13 billion years ago, when the universe was only one billion years old.
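
The underlying effect can be stated compactly. In general relativity, and this is a standard result rather than a detail given in the chapter, a light ray passing a mass M at impact parameter b is deflected by roughly

```latex
% Einstein deflection angle for light passing a mass M at impact parameter b.
\[
  \alpha = \frac{4\,G M}{c^{2}\,b},
\]
```

so a sufficiently massive foreground cluster bends the light of galaxies far behind it around itself, magnifying and distorting their images like a lens.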

In designing, building, and operating a distributed cognitive system, humans of course employ the basic cognitive capacities studied in cognitive psychology. But there is much more going on in a distributed cognitive system. I would argue that a distributed cognitive system is a hybrid system that contains three distinct types of elements: physical, computational (symbolic), and human. In a broad sense, of course, the whole system is a physical system, but computers and humans are importantly different from the systems studied in the physical sciences such as physics and chemistry. That is enough for my distinctions here.

More serious, from my point of view, is the presumption, part of the computational paradigm in both psychology and the cognitive sciences, that the whole system is a computational system. In spite of its paradigmatic status, this seems to me just plain mistaken (Giere, 2003, 2006). When we look at a crucial physical component, the interaction between the detectors and incoming light is just a physical interaction. There are no symbols in this system, and thus no rules by which any symbols are manipulated, so, on a strict understanding of computation, there is no computation in this system. Of course, this interaction can be described by quantum theory, but that does not make the interaction itself computational even though the theory may be expressed in the form of a computer program. The output from the detectors, however, does become computational as soon as it is fed into the onboard computers. And the signal, supported by various physical systems, remains computational all the way to the final computer-generated images studied by astronomers and astrophysicists.

Here again there is controversy. It is also part of the computational paradigm in cognitive science that human cognition is computational, so the scientists who study the images produced by the telescope are themselves computational systems. This paradigm has been challenged in recent years, for example, by advocates of “dynamic systems theory” (Thelen & Smith, 1994). And even though there is a subject called “computational neuroscience,” I think neuroscience supports the dynamic systems approach. Again if we invoke a strict understanding of computation as the manipulation of symbols according to explicit rules, there is no computing going on in a human brain simply because there are no symbols in the brain, only neurons. Describing the brain in computational terms does not automatically make it a computational system. One has to take the additional step of taking the description literally rather than merely metaphorically. Most cognitive scientists seem more than willing to take this step. Even Hutchins, who recognizes that the instruments aboard his navy ships are analog rather than digital devices, nevertheless insists on describing the whole navigational system as computational. I do not think we have to make this move. I realize, of course, that this is currently a controversial stance in both psychology and cognitive science. It may even go against some understandings of a commitment to scientific realism (Giere, 2006).

There is another way of denying the genuinely hybrid character of distributed cognitive systems, this time from the human side. Some enthusiastic supporters of the idea of distributed cognitive systems advocate treating such systems as having an “extended mind” (Clark, 1997, 2008). On this view, a smart phone in which a person keeps lots of information is regarded as literally part of that person’s memory. The mind, which remembers, extends to and includes the smart phone. One can make this idea seem somewhat plausible in the local context of a single person with a phone. It makes little sense when applied to a scientific distributed cognitive system such as the Hubble telescope. Here one would have to say that the mind of the Hubble system extends 2.2 billion light-years out into space to include the cluster of galaxies incorporated into the system as a gravitational lens. And one would have to imbue the system as a whole with the desire to determine the age of the universe, the belief that this age is around 14 billion years, and the epistemic responsibility to justify this belief. These are the kinds of attributes that creatures with minds are expected to have. But none of this makes much sense applied to the Hubble Telescope System as a whole (Giere, 2004, 2006).

It is important to be clear that these claims in favor of extended minds are not matters of scientific discovery. It is not as if someone could empirically investigate the Hubble system to determine whether or not it has a mind. Rather, this is a suggested revision in how we think about such things. It is a recommendation to extend our commonsense notion of mind to more inclusive systems. It is also a pronouncement on how cognitive science should develop. I am not alone in questioning the wisdom of taking this path (Menary, 2010). Most of the criticism, however, remains at the level of ordinary individuals. Here cognitive studies of science and the psychology of science can contribute by introducing consideration of the larger scale systems that are now common in the sciences.

An Agent-Centered Approach

I think that the concept of a distributed cognitive system is very useful for understanding how humans produce scientific knowledge. Yet, as argued above, this concept rests somewhat uneasily within the cognitive study of science because of the commitment to the paradigm of cognition as computation within cognitive science. I have therefore come to embrace a somewhat broader view for understanding how doing science produces the remarkable knowledge we now have, an agent-centered approach. This approach takes as fundamental what is common to the approaches of Kuhn and his followers and also to both historians and sociologists of science, namely, that the fundamental unit of interest is a human scientist. Theories are conceptual structures created by scientists. Performing experiments is an activity carried out by scientists. What counts as scientific knowledge reflects the judgments of a scientific community. Questions of method and social interaction will, of course, come up in such an approach, but in the context of the activities of individual agents. It is taken for granted that the cognitive makeup of humans must be part of this account, though by no means the whole story.

What is most different about an agent-centered approach is that it privileges the human component of a distributed cognitive system. It treats humans not merely as a locus of more computation, but as genuine agents with such attributes as beliefs, intentions, responsibility, and, yes, consciousness. This means granting humans a degree of autonomy, though without insisting that humans somehow operate outside the causal structure of the world. Importantly, in an agent-centered approach, it is humans, and only humans, who serve as a locus of agency in a distributed cognitive system. It is human agents who design such systems for various purposes and determine how and when they will be used. And it is human agents who determine what has been discovered and the significance of discoveries both within the total body of scientific knowledge and beyond.

A consequence of making humans the locus of agency in distributed cognitive systems is that the humans, and only the humans, bear epistemic responsibility for the claims that emerge from the deployment of such systems. I think it is important for human society that there be some definite place to put responsibility for making claims to knowledge of the world. The best place to put that responsibility, I think, is with individual scientists. Since the practice of science is now largely a collective enterprise, perhaps this can be expanded somewhat to include a scientific community. But it must be clear that the community has no moral standing that does not derive from the individual members. Extending agency to a distributed cognitive system as a whole would excessively dilute responsibility. It would be hard to fix blame for claims that emerged without adequate empirical support. Blaming the whole system has few consequences. No one loses his or her reputation or employment. But the threat of such losses to individuals is conducive to the proper functioning of the scientific enterprise.

There are two challenges to an agent-centered understanding of scientific knowledge production that are grounded in the contemporary practice of science itself. One, as noted above, is that science is now very much a collective enterprise. The existence of scientific papers with a hundred “authors” is no joke. Thus, even if individual scientific agents are taken to be primary, some account must be given regarding the interactions among individuals in scientific groups. There has been some work along these lines within the cognitive study of science. Dunbar (2002) has examined scientific reasoning taking place in weekly lab meetings in molecular biology and immunology labs. Nersessian (Nersessian, Kurz-Milcke, Newstetter, & Davies, 2003) has investigated reasoning and representational practices employed in problem solving in biomedical engineering laboratories. But, by and large, the study of scientific collectives has been left to sociologists of science (Knorr-Cetina, 1999). Here there is an opportunity for a social psychology of science drawing on principles of social psychology rather than cognitive psychology. Thus far, however, a social psychology of science remains more promise than reality (Feist, 2006, Chapter 6).

As is clear from my description of the Hubble telescope, computers are a ubiquitous feature of modern science. Indeed, computers are perhaps the most significant scientific artifact created since the Scientific Revolution. In the space of a quarter to a half century, they have massively changed the way science is done. It is now obvious, in a way it was not before World War II, that to do science is to become part of a distributed cognitive system in which computers play a central role. Yet it seems that neither the cognitive study of science nor the psychology of science has had much to say about the cognitive or psychological aspects of computer use in the sciences. There is much room here for an increased role for psychology in an agent-centered approach to science.

Acknowledgments

I would like to thank the editors for inviting me to participate in the conference that led to this volume and particularly for comments on an earlier draft of this chapter that resulted in considerable improvements.

References

Bibliography references:

Biagioli, M. (Ed.). (1999). The science studies reader. New York: Routledge.

Carey, S. (1985). Conceptual change in childhood. Cambridge, MA: MIT Press.

Carruthers, P., Stich, S., & Siegal, M. (Eds.). (2002). The cognitive basis of science. Cambridge: Cambridge University Press.

Chi, M. T. H. (1992). Conceptual change within and across ontological categories: Examples from learning and discovery in science. In R. N. Giere (Ed.), Cognitive models of science, Minnesota Studies in the Philosophy of Science (Vol. 15, pp. 129–186). Minneapolis: University of Minnesota Press.

Churchland, P. M. (1989). A neurocomputational perspective: The nature of mind and the structure of science. Cambridge, MA: MIT Press.

Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: MIT Press.

Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford: Oxford University Press.

Darden, L. (1991). Theory change in science: Strategies from Mendelian genetics. New York: Oxford University Press.

Dewey, J. (1938). Logic: The theory of inquiry. New York: Holt.

Dunbar, K. (2002). Understanding the role of cognition in science: The science as category framework. In P. Carruthers, S. Stich, & M. Siegal (Eds.), The cognitive basis of science (pp. 154–170). Cambridge: Cambridge University Press.

Feist, G. J. (2006). The psychology of science and the origins of the scientific mind. New Haven, CT: Yale University Press.

Friedman, M. (1999). Reconsidering logical positivism. Cambridge: Cambridge University Press.

Gardner, H. (1985). The mind’s new science. New York: Basic Books.

Gentner, D., Holyoak, K., & Kokinov, B. (Eds.). (2001). The analogical mind: Perspectives from cognitive science. Cambridge, MA: MIT Press.

Gentner, D., & Stevens, A. L. (Eds.) (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum.

Giere, R. N. (1988). Explaining science: A cognitive approach. Chicago: University of Chicago Press.

Giere, R. N. (Ed.). (1992). Cognitive models of science. Minnesota Studies in the Philosophy of Science (Vol. 15). Minneapolis: University of Minnesota Press.

Giere, R. N. (2003). The role of computation in scientific cognition. Journal of Experimental & Theoretical Artificial Intelligence, 15, 195–202.

Giere, R. N. (2004). The problem of agency in scientific distributed cognitive systems. Journal of Cognition and Culture, 4(3–4), 759–774.

Giere, R. N. (2006). Scientific perspectivism. Chicago: University of Chicago Press.

Giere, R. N. (2008). Cognitive studies of science and technology. In E. J. Hackett et al. (Eds.), The handbook of science and technology studies (pp. 259–278). Cambridge, MA: MIT Press.

Giere, R. N., & Richardson, A. (Eds.) (1996). Origins of logical empiricism. Minnesota Studies in the Philosophy of Science (Vol. 16). Minneapolis: University of Minnesota Press.

Gorman, M. E. (1992). Simulating science: Heuristics, mental models and technoscientific thinking. Bloomington: Indiana University Press.

Gorman, M. E., Tweney, R., Gooding, D., & Kincannon, A. (Eds.). (2005). Scientific and technological thinking. Mahwah, NJ: Lawrence Erlbaum.

Gruber, H. E. (1981). Darwin on man: A psychological study of scientific creativity. Chicago: University of Chicago Press.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.

Knorr-Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91–195). Cambridge: Cambridge University Press.

Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.

Latour, B., & Woolgar, S. (1986). Laboratory life (2nd ed.). Princeton, NJ: Princeton University Press.

Laudan, L. (1977). Progress and its problems. Berkeley: University of California Press.

Menary, R. (Ed.) (2010). The extended mind. Cambridge, MA: MIT Press.

Nersessian, N. J. (1984). Faraday to Einstein: Constructing meaning in scientific theories. Dordrecht, Netherlands: Nijhoff.

Nersessian, N. J. (2008). Creating scientific concepts. Cambridge, MA: MIT Press.

Nersessian, N. J., Kurz-Milcke, E., Newstetter, W. C., & Davies, J. (2003). Research laboratories as evolving distributed cognitive systems. In R. Alterman & D. Kirsch (Eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society (pp. 857–862). Hillsdale, NJ: Lawrence Erlbaum.

Piaget, J. (1929). The child’s conception of the world. London: Routledge & Kegan Paul.

Simon, H. A. (1966). Scientific discovery and the psychology of problem solving. In R. Colodny (Ed.), Mind and cosmos (pp. 22–40). Pittsburgh, PA: University of Pittsburgh Press.

Thagard, P. (1988). Computational philosophy of science. Cambridge, MA: MIT Press.

Thagard, P. (1991). Conceptual revolutions. Princeton, NJ: Princeton University Press.

Thagard, P. (2000). How scientists explain disease. Princeton, NJ: Princeton University Press.

Thelen, E., & Smith, L. B. (1994). A dynamic systems approach to the development of cognition and action. Cambridge, MA: MIT Press.

Toulmin, S. (1972). Human understanding. Princeton, NJ: Princeton University Press.

Tweney, R. D. (1985). Faraday’s discovery of induction: A cognitive approach. In D. Gooding & F. A. J. L. James (Eds.), Faraday rediscovered: Essays on the life and work of Michael Faraday, 1791–1867 (pp. 189–210). New York: Stockton.

Tweney, R. D., Doherty, M. E., & Mynatt, C. R. (Eds.) (1981). On scientific thinking. New York: Columbia University Press.

Tweney, R. D., Mears, P., & Spitzmuller, C. (2005). Replicating the practices of discovery: Michael Faraday and the interaction of gold and light. In M. E. Gorman, R. Tweney, D. Gooding, & A. Kincannon (Eds.), Scientific and technological thinking (pp. 137–158). Mahwah, NJ: Lawrence Erlbaum.