Moral Reality

Paul Bloomfield

Print publication date: 2001

Print ISBN-13: 9780195137132

Published to Oxford Scholarship Online: November 2003

DOI: 10.1093/0195137132.001.0001


Appendices

Entropy, Healthiness, and Goodness

Source: Moral Reality
Author(s): Paul Bloomfield
Publisher: Oxford University Press
DOI: 10.1093/0195137132.005.0001

Abstract and Keywords

An argument for the nonreducibility of thermodynamics to statistical mechanics is given, and a nonreducible ontology for entropy, health, and goodness is suggested. The argument is based on the modal fact that the putative reduction makes it impossible to distinguish between what is impossible and what does not exist, and on the fact that it makes the mortality of the human body contingent.

This appendix is strictly for those interested in metaphysics. It may be considered a vestigial appendix for reasons discussed in chapter 1: we are committed to realism about health independent of questions concerning its possible reduction to some more basic set of properties, and since health serves well as a model for goodness, our moral realism is equally independent of such questions. Nevertheless, there remains the strictly ontological question of whether or not goodness and healthiness are amenable to a reductionistic program and, if they are not, what sort of problem this is, if it is a problem at all. We could, perhaps, rely on science to tell us the status of health, but in general scientists are not interested in ontology, and for reasons that will emerge below, relying on them here is perhaps not the best way to get at the metaphysical truth.

When I first began to look into healthiness as a potential model for goodness, I quickly found that health was generally understood in terms of proper biological function, about which there was a very large literature (as noted in chapter 1). It never seemed proper to me, however, to consider healthiness as a functional property, for “healthiness” as a predicate describes how well a function is performing, and healthiness is therefore a property of a functional property, just as malfunction or sickness is. Health is a measure of how properly (or well) a function is performing. Still, it seemed to me then (as it does now) that understanding health requires mastering the literature on proper functions. And although I was entranced by the richness of this wonderful work in the philosophy of biology, in the end I found myself dissatisfied: in general, because there was too much emphasis on biology and not enough on metaphysics, and in particular, because much of the literature accepts an assumption that I think leads it astray ontologically: namely, that there is in principle a distinction between the functions of biological entities and the functions of artifacts (like internal combustion engines). Now, whereas it is true that different stories are to be told about where biological and artificial functions come from, this does not mean that these “different kinds” of functions need different ontologies; sight and taste have different etiological stories behind them, yet redness and sweetness are still both secondary properties. Moreover, it seemed to me that all functions are alike ontologically insofar as they all are properties of complex systems. (Thus, health will be a property of a property of a complex system.) Animals and machines are both complex systems and, as such, have much in common. So, if we are to understand the ontology of health as a property of functions, we must begin by exploring the ontology of properties of complex systems.

Another similarity among all functions is that without regular maintenance (and even with it), they eventually break down or malfunction (Theseus' ship aside). Machines wear out; living things die. Health in bodies is a measure of how well or efficiently an organ or organism is operating, but organs and organisms are always on their way to wearing out and failing. Health is a measure of structural integrity. I realized at this point that I had left the study of biological systems behind and now was looking at biological systems as just a special case of physical systems. The dialectic shifts from biology to physics.1 Even before I began my research into the foundations of thermodynamics and statistical mechanics, I knew that entropy is a measure of structural disintegration. Entropy is a property of complex systems; it is in fact what might be called a reciprocal property to healthiness or proper function: the degree to which an item is functioning properly is the degree to which it lacks entropy, and vice versa. (This thought is present in Schrödinger's (1955) wonderful little book What Is Life?, which is unsurprisingly fine in physics but rather weak in metaphysics.) (See also pp. 45ff.)

Now, entropy is a property that has received much attention within physics, and much is known about it. If entropy and health are in fact reciprocal properties, then they presumably have the same ontological status (as do positive and negative charges or black and white). It seemed to me at that point, and still seems to me today, that if we can determine the ontological status of entropy qua measure of structural disintegration, we will thereby have our ontology of health qua measure of structural integrity. Then we apply these results to goodness for the reasons adduced in the body of this book, and voilà, we have a completed moral ontology.

Entropy brings a strong dialectical advantage into the picture. The successful reduction of goodness seems as unlikely as the reduction of healthiness or of biological functions, for these seem to presuppose ends (like survival and procreation) that are similarly resistant to reduction. It has long been recognized that biologists have not been comfortable with the question of how easily biology reduces to chemistry and physics, as mentioned in chapter 1. Aside from functions, fitness, and health, biologists have to appeal to concepts like environment, ecology, niche, and so on, and these do seem superficially resistant to reduction. Biologists, however, have been put off from accepting a seemingly extravagant ontology of nonreducible properties, especially since these properties did not seem to be required by the sciences of chemistry and physics. But with the introduction of entropy into the picture, biologists (and moral realists) have a perfect tu quoque response to the “hard-core” scientists and empiricists. We may now say, “If entropy is a property of physics and resists reduction, then biological (or moral) properties that have put up similar resistance cannot be more of an ontological problem for us than entropy is for you.”

It is not uncommon for discussions of thermodynamics to begin with a list of “Things That Don't Happen,” and here are a few examples.2

  1. One end of a spoon that is resting on a table never spontaneously gets hot while the other end cools down.

  2. Once milk has been stirred into a cup of coffee, it never spontaneously collects itself in a portion of the cup.

  3. Whereas a source of light emits a beam that radiates out from its source, a spherical beam never converges inward toward a point.

These examples are all normally considered impossible reversals of so‐called irreversible processes, such as the conduction of heat from something hot to something cold until a uniformity of temperature is reached or the spread of a gas to uniformly fill a container. Another example of an irreversible process is the development of an organism: oak trees never turn into acorns, and fetuses never turn into zygotes. As noted by Reichenbach (1958), biological processes are special cases of irreversible processes.3

These processes were studied systematically by the engineer N. L. Sadi Carnot in the first quarter of the nineteenth century, while he was investigating how engines are able to produce work. If we consider chapter 2's discussion of the epistemology of practical rationality and phronesis, it is worthwhile noting that Carnot was an engineer; it was his practical interest in the skill of making engines that led him to his theoretical investigations. He became interested in general laws that govern how physical systems work. Thermodynamics, as a theoretical discipline, grew out of the practical skill of building machines that produce work efficiently. It is, in a strict sense, a product of practical rationality.

There are three laws of thermodynamics. The “zeroth” law (so called because it was originally left implicit) is that equilibrium is transitive: if system a is in equilibrium with system b, and b is with c, then a is with c. More famously, the first law of thermodynamics, that energy cannot be created or destroyed, is also known as the law of the conservation of energy. The second law, which will occupy our attention, has many different but provably equivalent formulations. Here are three definitions and then five different ways of expressing it:

Definitions:

  (1) “Work” is understood as a transfer of energy from one body to another by means of exerting force.

  (2) “Heat” is understood as a transfer of energy from one body to another by means of a difference in temperature between the bodies. (In the case of work done by friction, work is equivalent to heat.)

  (3) “Efficiency” is understood as the ratio of the work an engine does (i.e., what you get) to the heat or energy it absorbs (i.e., what you pay for).

Equivalent Formulations of the Second Law of Thermodynamics:

  (4) There are no perfectly efficient engines. (Engines always use up more energy than they can convert into work.)

  (5) It is not possible to change heat completely into work with no other changes taking place (Lord Kelvin's formulation).

  (6) There are no perfectly efficient refrigerators.

  (7) If there are two bodies at different temperatures, the first higher than the second, then it is not possible for heat to be transferred from the second body to the first body with no other changes taking place (Clausius' formulation).

  (8) In any thermodynamic process that proceeds from one equilibrium state to another, the entropy of the system + the environment either remains unchanged or increases.

It can be shown that each of these is derivable from the others, but in particular (4) is a quick restatement of (5), as (6) is of (7). Formulations (4) and (5) say that heat cannot be extracted from one body and made to do work on a second body without externally doing some other work (expending some extra energy or heat) on the system as a whole in order to convert the extracted heat to work: more simply, heat cannot be made to do work without doing extra work to the heat. Formulations (6) and (7) say that heat cannot be sucked out of a cooler body and put into a hotter body without other work being done on the system as a whole. The empirical result that most strongly supports (4)–(7) has been the inability to build a perpetual motion machine: we cannot build a machine that could “by cooling the surrounding bodies, transform heat, taken from its environment, into work” (Fermi, 1937, p. 29).4
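
The connection between the engine formulations (4)–(7) and the entropy formulation (8) can be made concrete with the standard textbook relations. The following display is only an illustrative sketch in conventional notation not used in the text (Q_h for heat drawn from the hot body, Q_c for heat discharged to the cold body, W for the work obtained, T_h and T_c for the absolute temperatures of the two bodies, S for entropy):

    \[
    \eta \;=\; \frac{W}{Q_h} \;=\; \frac{Q_h - Q_c}{Q_h} \;=\; 1 - \frac{Q_c}{Q_h}
    \;\le\; 1 - \frac{T_c}{T_h} \;<\; 1,
    \qquad
    \Delta S_{\text{system}} + \Delta S_{\text{environment}} \;\ge\; 0 .
    \]

The first relation restates (4) and (5): since some heat Q_c must always be discharged, no engine converts all the heat it absorbs into work. The second restates (8): the entropy of the system plus its environment never decreases.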

The last formulation of the second law concerns “entropy.” Entropy can be given a precise mathematical formulation, but it will suffice for our purposes to understand it as a quantity that measures how far a system is from equilibrium, homeostasis, or homogeneity. A system that is highly ordered, wherein its parts exhibit a physically differentiated structure or a heterogeneity, is one that has low entropy. If we have a container of a specific gas in a room that contains normal air and we open the container, the gas and the air will initially be separated and will come to be uniformly mixed. The initial state has a low entropy, and the final state has a high entropy. To conceptually connect (4)–(7) with (8), a particular system can undergo a decrease of entropy only by causing an increase of entropy in its environment that is larger than the decrease within the system. In other words, the net result of all isolated physical processes, or of interactions between systems and their environments, is an increase of entropy in the ensemble. To conceptually connect the notion of entropy with that of an engine or refrigerator, we can note that the entropy of a system is inversely proportional to the amount of work (or heat) that can be got out of the system. Stated as plainly as possible, the second law says that systems and their environments, taken together, always move toward states of higher entropy. In expending energy to do work, there is always waste. All physical systems, taken as such, have a natural tendency to gain entropy. Entropy always increases; structure always breaks down. Order and structure always and eventually give way to disorder and homogeneity. This is the way of all things.
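
The sense in which the mixed, homogeneous state has higher entropy than the separated one can be put numerically. The Python sketch below is my own illustration, not drawn from the text: it uses Boltzmann-style counting, S = k ln W, where W is the number of ways of arranging N molecules over the two halves of a container, and shows that the evenly mixed macrostate admits vastly more arrangements, and so has higher entropy, than the state with all the gas confined to one half.

    from math import lgamma, log

    def log_multiplicity(n_total, n_left):
        """Natural log of the number of microstates (the binomial coefficient
        C(n_total, n_left)) for a macrostate with n_left of n_total molecules
        in the left half of the container."""
        return lgamma(n_total + 1) - lgamma(n_left + 1) - lgamma(n_total - n_left + 1)

    K_B = 1.380649e-23  # Boltzmann constant, J/K
    N = 10**6           # illustrative molecule count, far below Avogadro's number

    # Macrostate 1: all the gas confined to the left half (separated, low entropy).
    S_separated = K_B * log_multiplicity(N, N)        # ln C(N, N) = 0

    # Macrostate 2: the gas spread evenly over both halves (mixed, high entropy).
    S_mixed = K_B * log_multiplicity(N, N // 2)       # roughly N * ln 2 for large N

    print(f"S(separated) = {S_separated:.3e} J/K")
    print(f"S(mixed)     = {S_mixed:.3e} J/K")
    print(f"ln W(mixed) ~ {log_multiplicity(N, N // 2):.1f} (compare N*ln2 = {N * log(2):.1f})")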

Now, we are in a position to see why thermodynamics, and the second law in particular, resists reduction. Thermodynamics is dogmatically thought to reduce to statistical mechanics. Statistical mechanics has, as its resources, classical mechanics and mathematical statistics. Classical mechanics, systematized by Newton, can appeal to matter, stuff, particles, and so on, as well as to forces and laws of nature such as gravity, electromagnetic forces, and so on. The challenge for statistical mechanics is to explain the second law of thermodynamics, given its resources. The problem is that all the laws of classical mechanics are considered “symmetric” insofar as they describe interactions that could be theoretically reversed (are “reversible processes”) and still be described by the very same laws. To get a handle on this, imagine watching a movie of dust floating in a sunbeam; one couldn't tell just from watching whether the movie was playing in reverse or not. Thus, this sort of Brownian motion is symmetric. The problem is that the second law of thermodynamics is asymmetric. Systems always move toward greater entropy: if one were to watch a movie of a flower's life, one would be able to discern if the movie were in reverse or not. The challenge to statistical mechanics is to explain how asymmetry can reduce to symmetry. And there are reasons to think that it cannot.
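
The time-symmetry of classical mechanics that this paragraph trades on can be checked directly in a toy simulation. The following Python sketch is my own illustration under a simplifying assumption (a single particle in a harmonic potential): it integrates Newton's second law forward, then reverses the velocity and integrates forward again under the very same law, and the particle retraces its path back to its starting state, up to floating-point round-off. This is the precise sense in which the dust-in-a-sunbeam movie looks equally lawful run forward or backward.

    def accel(x):
        """Acceleration of a unit-mass particle in the harmonic potential V(x) = x^2 / 2."""
        return -x

    def velocity_verlet(x, v, dt, steps):
        """Integrate Newton's second law forward in time with the velocity-Verlet scheme."""
        a = accel(x)
        for _ in range(steps):
            x = x + v * dt + 0.5 * a * dt * dt
            a_new = accel(x)
            v = v + 0.5 * (a + a_new) * dt
            a = a_new
        return x, v

    x0, v0 = 1.0, 0.0          # initial position and velocity
    dt, steps = 0.001, 20000   # time step and number of steps

    # Run forward, flip the velocity ("play the movie in reverse"), run forward again.
    x1, v1 = velocity_verlet(x0, v0, dt, steps)
    x2, v2 = velocity_verlet(x1, -v1, dt, steps)

    print(f"after forward run : x = {x1:+.6f}, v = {v1:+.6f}")
    print(f"after reversed run: x = {x2:+.6f}, v = {-v2:+.6f} (compare x0 = {x0}, v0 = {v0})")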

What must first be explained, if any reduction is to be satisfactory, is why it is that entropy invariably increases. At least superficially, given the resources of statistical mechanics, there is no reason that increases in entropy should be any more common than decreases. From the point of view of statistics, we should find that all physical systems develop in such a way that they can be plotted along a bell‐curve distribution of entropy increases and decreases. Experience, however, shows a distribution that is far from the shape of a bell curve, for we experience only increases in entropy. To successfully reduce thermodynamics to statistical mechanics, some explanation must be given for why the distribution of actually observed systems deviates so far from the symmetrical bell curve.

Ludwig Boltzmann, a physicist working in the late nineteenth century, attempted to carry out the reduction by treating the regularity of entropy increases in our observations as a local phenomenon, one determined by a posited set of initial conditions of the system as a whole. Boltzmann hypothesized that we just happen to be living in a part of the universe or in a part of time in which entropy regularly increases. But there are going to be other parts of the universe (or times) that are different, such that the universe as a whole is a system in equilibrium. We just happen to be in a pocket of the universe where entropy is regularly increasing. (This is sometimes referred to as a “weak anthropic principle”: it is just a coincidence that there is a place where things always develop in the manner in which we have always experienced them, but since we are there, it is hard for us to see it as mere coincidence.)

There is a variety of problems with this attempted reduction of the second law. The first is that the positing of these special initial conditions to explain our observations smacks of being ad hoc: in appealing to these sorts of conditions so that our observations may be explained, physics is availing itself of facts that transcend the subject matter of statistical mechanics. Moreover, if these initial conditions are assumed to be such that they can account for the “local” effects of entropy increase, then they appear to be both unverifiable and unfalsifiable. Similar criticisms are explored in more detail by Sklar (1993), and the interested reader can pursue them there. Here we will consider different objections to Boltzmann's reduction.

Boltzmann was empirically led to conclude that there is a certain kind of interaction between particles in which a slow-moving particle could collide with a faster particle, whereby the slower particle loses (at least some of) its speed, imparting its loss to the faster particle, which moves away at an even faster pace (Reichenbach, 1958, p. 54). Thus, Boltzmann (thought he) saw that it was possible for entropy-decreasing interactions of this kind to predominate, and what we see as irreversible processes (such as the conduction of heat) are not in principle irreversible but are only very, very unlikely to reverse themselves. Insofar as this particular kind of interaction is possible, entropy decreases are possible, even here, in our part of the universe; it is just very, very improbable that we find a preponderance of this kind of interaction in our place and time because of the initial conditions of the universe. It could be the case, however, that a gas, evenly distributed within a container, spontaneously compresses itself into a corner. Such a spontaneous compression is not impossible, just highly improbable. Moreover, there will be parts of the universe in which such occurrences are the norm.
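
Rough numbers make vivid just how improbable, though not impossible, such a spontaneous compression is. On the simplest statistical counting, in which each molecule is independently equally likely to be found in either half of the container, the probability that all N molecules sit in one chosen half at a given instant is (1/2)^N. The short Python sketch below is my own back-of-the-envelope illustration with assumed molecule counts:

    import math

    def log10_prob_all_in_one_half(n_molecules):
        """log10 of the probability that all n molecules occupy one chosen half of the
        container, assuming each is independently in either half with probability 1/2."""
        return n_molecules * math.log10(0.5)

    for n in [10, 100, 10**6, 6.022e23]:  # last value: roughly a mole of gas
        exponent = log10_prob_all_in_one_half(n)
        print(f"N = {n:.3g}: probability ~ 10^{exponent:.3g}")

For anything approaching a mole of gas the exponent is of order minus 10^23, which is why the event is never observed even though, on this statistical picture, it is not forbidden.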

So, Boltzmann is thought to have successfully reduced thermodynamics to statistical mechanics by explaining away the appearance of asymmetry within the former as just that: an appearance only. It appears to us as if there is an asymmetry in nature, insofar as our observations are of systems whose entropy always increases, but this “asymmetry” is just a local phenomenon that can be explained away by saying that it is the causal result of contingent initial conditions of the universe. But if Boltzmann's reduction is really to be successful, we must be sure that everything that we think needs explaining is explained.

Imagine being a witness to a spontaneous compression of an evenly distributed gas within a container. If we were to observe such a compression, we would (pretheoretically) normally ask for an explanation of why the gas behaved in such a manner. The scientist has two kinds of response: either (1) admit that the phenomenon existed and deny the existence of an explanation for it (an explanation that transcends appeal to the resources of statistical mechanics and the initial conditions of the universe, which deem the compression to be a mere extraordinary coincidence), or (2) deny it altogether as a phenomenon that physics is in the business of explaining. In the first response, the physicist would reply that if we want some further explanation for what seems to be more than just an incredibly improbable but evident coincidence, we are asking for something that does not exist. There simply is no further explaining to be done. The mere asking demonstrates a lack of understanding of how things are. The second response is more sophisticated and is articulated by Karl Popper (1959). He first notes that such a compression cannot be repeatable: for if a second compression occurred, it would falsify the probability distribution that is describing our observations, which find such compressions to be very, very, very improbable. (If such improbabilities repeat themselves, then we conclude that our initial assessment of their improbability was inaccurate.) And since the phenomenon is unrepeatable, Popper concludes that it would not be a “physical effect,” which places it outside the realm of science (p. 203). He explains this conclusion by referring to his earlier discussion of “objectivity” (p. 46), where he says that “any controversy over the question whether events which are in principle unrepeatable and unique ever do occur cannot be decided by science: it would be a metaphysical controversy.” One wonders what to say about the Big Bang. Be that as it may, given either of these two scientific responses to singular spontaneous decreases of entropy, the conclusion we get is that the second “law” of thermodynamics is actually just a contingent statistical regularity and is not a law of nature at all, if by “law of nature” we mean a description of universal regularities within events that are due to causal necessity.5 If Boltzmann is correct, the second law is merely a description of a statistical regularity that allows for occasional irregularities.

Before proceeding further into why this result is unsatisfactory, a very brief review of our present position in this discussion will prove helpful. The debate can be cast as one that takes place between what might be called “science” and “metaphysics.” Scientists are those who are concerned with making observations and, based on these, coming up with theories that allow them to make accurate predictions of the future, period (Feynman, 1985). For them, “metaphysics” automatically has a pejorative connotation, and they prefer to limit their attention to empirically observable and verifiable phenomena. Science can scrutinize itself, but only from a scientific point of view; so, one plausible criterion for theory acceptance within science is that a theory is acceptable to the degree that it is falsifiable by the scientific method (Popper, 1959). Since scientists are concerned only with observations and their predictions, any talk of the existence of events or causes that cannot be scientifically observed (if only because they are unrepeatable) is eschewed.

The metaphysician, of course, considers this empiricism and verificationism of science as itself a metaphysical stand. But the metaphysician says that this stand is incomplete; for example, the metaphysician will want to draw distinctions in reality where the scientist sees none. Of particular interest here, the metaphysician will want to insist on a difference between events that do not occur because they are improbable and those that do not occur because they are impossible. This is to insist on a difference between events that won't happen and events that can't happen. Scientists (and philosophers of science with an empirical bent, like Quine, Popper, and van Fraassen) will say that since these two kinds of nonoccurrences have no observable consequences (since they are all “events that do not occur,” they lack observable consequences), no problem for a theory can be adduced if it determines that something heretofore thought to be impossible is actually merely (highly) improbable. If the number representing an improbability is small enough, science can ignore it (Popper, 1959, p. 202). Thus, any talk of the difference between an improbable event and an impossible event is mere “metaphysical speculation.” The metaphysician, however, does not accept as an argument the pejorative tone that usually accompanies this phrase and wants the debate over the significance of these small numbers to press forward regardless of their strictly empirical significance. The scientist declares that the debate is already over since all the empirical data have been accounted for.

This kind of disagreement can be generally characterized as follows: by the lights of the metaphysician, the scientific method is unfalsifiable, for it refuses to accept as evidence anything that cannot be accounted for by the scientific method. By the lights of the scientist, the metaphysician is demanding that we draw distinctions in reality that make no real difference. This is quite a problem, for which a general solution may not be possible. There is a way out, however, in the particular case of the putative reduction of thermodynamics.

Let's begin with a reminder that it was the physical possibility of a particular kind of interaction between particles that allowed Boltzmann to conclude that entropy decreases are possible (however improbable). Let's also take another quote from Popper (1959) about spontaneous compression:

I do not, for example, assert that the molecules in a small volume of gas may not, perhaps, for a short time spontaneously withdraw into a part of the volume, or that in a greater volume of gas spontaneous fluctuations of pressure will never occur. What I do assert is that such occurrences would not be physical effects, because, on account of their immense improbability, they are not reproducible at will. (p. 203)

So far, so good. But what must be kept at the front of our minds is that thermodynamics, and its second law, is supposed to apply to all physical systems and not just to gases in containers. It is supposed to explain to us why perfectly efficient machines cannot be built, why all things run down, why all biological systems develop as they do (as a result of absorbing energy from their environment), and why they all eventually die. We should be able to count on thermodynamics to explain to us why we are mortal and why we always see acorns growing into oak trees and never the opposite. Now, if we accept the reduction of thermodynamics to statistical mechanics, we find that the scientist tells us that many events we had heretofore thought of as impossible are merely improbable; such events as engines running with perfect efficiency, systems that do not wear down and out with age, and biological systems that live forever are just highly, highly improbable. The reduction of thermodynamics to statistical mechanics leaves open the physical possibility that an oak tree should “grow into” an acorn.

And now, we ask whether or not the metaphysician is making distinctions that make no difference when asking if the impossibility of immortality differs from its improbability (however high). Isn't the difference between what is impossible and what is improbable a difference in reality we should want to insist on? Isn't the possibility that an oak might turn into an acorn a possibility that it is important to recognize? Even if it won't ever occur, isn't the fact that it could occur a fact that gives us an important insight into the nature of reality or the nature of nature? If these are questions worth asking, if they even make good linguistic sense in English, then the success of the scientist's reduction must itself be questioned. And if, in fact, we think that it is impossible and not merely (however highly) improbable for oaks to turn into acorns, then we must reject the reduction as incomplete.

To see where things go wrong for the reduction, we must ask the scientist (and philosophers of science like Popper) why a spontaneous compression of gas must be short lived. Granted, such a compression may not occur again (lest we falsify our probability distribution, which deems such compressions unrepeatable), but if it does occur, the scientist can give no reason why it should not last; that it does last is just even more unlikely. The longer it endures, the more improbable it is. But given statistical mechanics plus initial conditions, there is nothing in principle, no physical or causal laws of nature, that prevents it from lasting. The scientist cannot say that the compression cannot last. Note that such a lasting of compression would not provide falsification conditions for our probability curve; we are now only focusing our attention on a hyperattenuated portion of the tail of the distribution curve, a portion way out at the end, but which still contains some (very small, perhaps infinitesimal) measure of probability. So, contra Popper (1959), such an unusual but lasting phenomenon might be observable, even though unreplicable or unpredictable. And so it would be possible for us to sit and watch some gas (or perhaps it is milk that had been stirred into a cup of coffee) that has all spontaneously collected in the corner of the room for an hour, a day, or a year. Surely such a long-lasting event, though unrepeatable, would be an empirical event within the province of science, but when we ask the scientist why this is happening, the scientist must reply that there is no explanation.
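
The same arithmetic extends to duration. If, as a deliberately crude model of my own (the text offers no such model), we treat the state of the gas at successive, effectively independent snapshots as fresh draws, then a configuration whose single-snapshot probability is p persists through k snapshots with probability p^k: the improbability compounds with duration, yet at no point does it become an impossibility, which is just the point at issue.

    def log10_persistence(log10_p_single, k_snapshots):
        """log10 of the probability that an improbable configuration (single-snapshot
        probability 10**log10_p_single) recurs in k successive snapshots, crudely
        treated as independent. Real dynamics are correlated, but the moral stands:
        the probability shrinks with duration yet never reaches zero."""
        return log10_p_single * k_snapshots

    log10_p = -30.0  # e.g., all of 100 molecules found in one half at a single snapshot
    for k in [1, 10, 1000]:
        print(f"persisting through {k:>4} snapshots: probability ~ 10^{log10_persistence(log10_p, k):.0f}")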

The point comes through more clearly if we stop trying to imagine gases compressing “all by themselves” and attend to more concrete examples of what the scientists' position deems possible. Whereas Carnot's initial result was supposed to have shown why a machine cannot run with perfect efficiency, the reduction of thermodynamics to statistical mechanics revises this result. Now remember Fermi's (1937) words (quoted above): we cannot build a machine that could “by cooling the surrounding bodies, transform heat, taken from its environment, into work.” On the contrary, the physicist Gold (1962; also cited by Horwich, 1987) explores the ramifications of using the initial conditions of the universe to explain the contingency of the direction of entropy by asking us to imagine that we have a “flashlight” that is pointed into the nighttime sky. As we turn the switch, light from the depths of the universe converges (backward) on our flashlight, making “the tungsten filament extremely hot.” If this phenomenon lasted as long as the switch was on, the filament's heat could be perpetually transformed into work. It would be a perpetual motion machine. Here is Gold's conclusion:

True. . . there are no laws of physics infringed, but all the steps that would be described are so wildly improbable that this does not merit a serious discussion.

The notion of cause and effect is difficult to dismiss; yet there is no place for it in this discussion. All that could really be defined is “related events.” (p. 409)

(One wonders what the quotation marks around “related events” are for.) Stable decreases in entropy might just happen, for no reason, without explanation, as a very large coincidence. So, here we are told perpetual motion machines are possible but they simply won't occur because of their high improbability. If, however, per impossibile, one should come across such a machine and insist on an explanation for how it can do what Carnot had taught us was impossible, we would be told by the scientist that we are looking for explanations when there are none to give. If we were to insist that a perpetual motion machine could not “just happen,” we would have already entered the realm of metaphysical speculation. And although such a response might be fully cogent when looking at the situation from the point of view of science, it might also be taken as evidence for the narrowness of the scientific perspective. The problem, I think, becomes even worse when we look at biological systems as a subset of the physical systems describable in thermodynamic terms.

The second law of thermodynamics does not apply only to the eventual breaking down of machines and the inevitable death of biological systems. The development of an organism from its earliest developmental stages to maturity is the result of an exchange of energy between the system and its environment. When an organism is young and healthy, and it, qua engine, is “being built,” its entropy levels are actually decreasing. We must remember, however, that its decrease is more than compensated for by an increase in the entropy of its environment. As it grows into its more complicated, mature form, entropy levels within the organism itself begin to increase until death occurs. What we learn from thermodynamics is that the local entropy decreases that occur in youth and adolescence are always temporary. Remember that the reduction to statistical mechanics yields the result that the directionality of all increases or decreases of entropy is contingent. That we observe these regularities within the development of biological systems is merely a statistical regularity that has no more “underlying cause” than the initial conditions of the universe. And given this, it must then be possible and not contrary to any laws of nature for an old person to spontaneously begin to grow younger and eventually turn into a baby. There would be no explanation for such an event; it is just highly (perhaps infinitesimally) improbable. Science tells us that it is possible for oaks to “grow” into acorns. If such a freak of nature did occur, however, there would be no deep explanation for this extraordinary phenomenon; it would merely be a very large coincidence.

So, let's compare a few propositions:

  (9) There are no square circles.

  (10) There are no effects that come before their causes.

  (11) There are no humans that can fly unaided.

  (12) There are no golden mountains.

  (13) There are no perpetual motion machines.

  (14) There are no immortal biological systems.

  (15) There are no oaks that turn into acorns.

These are all ostensibly about the same kind of thing, namely, a kind of thing that does not exist. From a purely observational or empirical standpoint, none can be observed and none has any causal effects. From the point of view of metaphysics, these may differ in modal status; by this, I mean that there may be different explanations for why these various objects do not exist. Proposition (9) is true because the logical law of noncontradiction prevents objects from being both circular and square; the concepts or definitions of “square” and “circle” are contradictory. Proposition (10) is said to be a “metaphysical” or “broadly logical” truth: a truth that points out some necessary, noncontingent, but not strictly logical connection between, in this case, our concepts of causation and time. Such truths are said to have a synthetic a priori status (and are the subject matter of Kant's First Critique). Proposition (11) is true because the causal laws of nature prevent humans from unaided flight; were these contingent laws of nature to change, or were they to have been different all along, it might be possible for humans to fly. The laws being what they are, however, humans' inability to fly is causally necessary. Proposition (12) is merely a contingent truth; there could be a golden mountain, but there just happens to be none.

Drawing these distinctions has always proven difficult for science, if only because science sets out from the start to explain only what can be observed and to make predictions of what can be observed in the future. Philosophers with an empirical bent have spent much time trying to sort out these issues, including how it is that we can even refer to objects that do not exist, and in the end many deny empirical validity to modal distinctions. In pursuit of the nonreducibility of thermodynamics, let it be agreed by all parties that there are no useful distinctions to be drawn among (9), (10), and (11). We will call square circles, effects that precede causes, and flying humans “impossibilities.” Still, even from the scientific point of view, there is some reason to want to draw a distinction between (9)–(11) and (12). Science is interested in what causal laws of nature actually do obtain, and those that do obtain do not rule out the possibility of golden mountains, as they rule out the impossibilities. Golden mountains are just unlikely to actually exist, and it was purely an empirical discovery that they do not. So, let us call golden mountains and their ilk “contingencies.” The question we are then faced with concerns the status of (13)–(15). If the reduction of thermodynamics to statistical mechanics were complete, then (13)–(15) should be grouped with the contingencies and not with the impossibilities.

Now, since we have left the sphere of the observables behind, the scientist will already have balked at the course of our dialectic. We have entered the realm of metaphysics, but we should note that we are pushed there by science itself: first, because of the above-mentioned difference between (9)–(11) and (12); second, because it is surely revisionary of pretheoretical thought to think that it is only contingently true that there are no immortal biological systems, or oak trees that turn into acorns. Here we can begin by calling on the kingpin of empiricism himself, David Hume, who, in an (admittedly atypical) foray into what might be called ordinary language philosophy, says, “One would appear ridiculous, who would say, that 'tis only probable that the sun will rise tomorrow, or that all men must dye” ([1739] 1978, p. 124). If science is going to tell us that what we had thought were impossibilities are actually only contingencies, then it owes us an explanation for why we should accept this result. And the preservation of a reductionistic scientific theory is not an adequate explanation. Moreover, if we did empirically discover both a golden mountain and an oak that becomes an acorn, there may very well be reason to be dissatisfied with science's explanation of both of them as contingent accidents; we might be satisfied with such an explanation in the case of the golden mountain, but it would not do for the oak-acorn. I submit that were we to encounter an oak that is growing into an acorn, we would look at it, not as we would were we to discover a golden mountain, but as we would were we to witness a human flying (or perhaps even an effect preceding its cause).

Statistical mechanics went wrong by assuming that a system is no more than a collection of interactions of particles: since it is possible for single interactions between particles to be such that, if all interactions had this characteristic, decreases in entropy would occur, then (the inference goes) entropy decreases within complex systems are possible. But complex systems, such as a cyclical engine, cannot be such that all the particles that constitute them may interact with one another and with their environment in a way that is both frictionless and ordered; at the very least it is the cyclical nature of such an engine that makes this impossible. Even God could not build a frictionless engine that would run and remain frictionless without God's constant intervention. (Of course, if God constantly intervened, any causal laws could be broken.) The lesson is this: one cannot extrapolate what is possible for a system as a whole from what is possible for parts of it when treated as individuals.

The conclusion to draw is that physics is stuck with nonreducible properties, and so are biology and morality. Wholes (systems) are not merely the sums of their parts; they do not reduce. This is a hard ontological conclusion to grasp, but it is the conclusion. I suppose that it is possible that the preceding argument is unsound for reasons I cannot see and that one day entropy and healthiness and goodness will be successfully reduced to some lower-level properties. If so, then far be it from a metaphysician to say it ain't so. But it is properly for the metaphysician to make this judgment and not for the physicist, the biologist, or the ethicist.

Notes:

(1.) Here is a quotation on the effects of aging on brain cells from Dr. Sherwin Nuland's (1993) excellent book How We Die: “Like the muscle of the heart, brain cells are unable to reproduce. They survive decade after decade because their various structural components are always being replaced as they wear out, like so many ultramicroscopic carburetors and plugs. Though cell biologists use more abstruse terminology than do mechanics (words like organelle and enzyme and mitochondrium), these entities nonetheless require just as efficient a replacement mechanism as do their more familiar automotive analogs. Like the body itself and like each of its organs, every cell has the equivalents of pinions and wheels and springs. When the mechanism to exchange the aging parts for new wears out, the nerve or muscle cell can no longer survive the constant destruction of components that goes on within it” (p. 55).

(2.) Physics texts relied on are Fermi (1937); Gold (1962); Feynman (1963); Halliday, Resnick, and Walker (1993); and Layzer (1975). The texts in philosophy of science that I found most useful were Reichenbach (1958), Popper (1959), Horwich (1987), and Sklar (1993).

(3.) All actual physical processes are irreversible. Some physically possible systems undergo “reversible processes” in which entropy neither increases nor decreases (e.g., adiabatic systems described by the Carnot cycle), but such a system has never actually been found or made and will not concern us further.

(4.) Another kind of perpetual motion machine could produce energy ex nihilo and thus run forever. Such a machine, which could break the first law of thermodynamics, will not be discussed.

(5.) This notion of a law of nature can be reformulated to accommodate Hume's rejection of causal necessity. The important part is that the regularity must be universal, or exceptionless, to be a law of nature.