Information, Decisions, and the Limits of Informed Consent
Abstract
For many years, the dream of bioethics has been to confide medical decisions to patients and not to doctors. The favoured key to doing so has been the doctrine of informed consent. The success of informed consent depends on two things. First, patients must be able to understand and remember the information doctors give them. Secondly, patients must be able to analyse that information and use it to make a decision. The first of these requirements has been studied extensively. The second requirement for the success of informed consent has, in contrast, been virtually unstudied. This chapter examines a case study involving prostate-specific antigen (PSA) screening to gain further insight into the way patients think about their medical choices.
The human understanding is not a dry light, but is infused by desire and emotion, which give rise to ‘wishful science’. For man prefers to believe what he wants to be true. He therefore rejects difficulties, being impatient of inquiry; sober things, because they restrict his hope; deeper parts of Nature, because of his superstition; the light of experience, because of his arrogance and pride, lest his mind should seem to concern itself with things mean and transitory; things that are strange and contrary to all expectation, because of common opinion.
For many years, the heart’s wish of bioethics has been to confide medical decisions to patients and not to doctors. The favoured key to doing so has been the doctrine of informed consent. The theory of and hopes for that doctrine are well captured in the influential case of Canterbury v. Spence: ‘[t]rue consent to what happens to one’s self is the informed exercise of a choice, and that entails an opportunity to evaluate knowledgeably the options available and the risks attendant upon each’.1
Anxious as bioethicists and courts have been to promulgate this doctrine, they have been less anxious to discover how well it works; the bioethical tradition has been more interested in articulating principle than testing practice. But, as Don Herzog drolly warns, ‘theory had better not be what you get when you leave out the facts’.2 So in this paper we will briefly review some of the empirical literature on informed consent and present findings of a study on the way men decide whether to use PSA screening to detect prostate cancer. This will lead us to reflect on the limits of informed consent.
The success of informed consent depends on two things. First, patients must be able to understand and remember the information doctors give them. Secondly, patients must be able to analyse that information and use it to make a decision. The first of these requirements has been studied extensively. Despite prolonged struggles to improve informed consent, the kind of success hoped for remains elusive. As Cassileth et al. wrote some years ago, ‘[i]t is well known that many patients, despite all efforts to the contrary, remember or understand little of what they agree to during the consent process’.3 And as Cassileth, writing with different co-authors, said, studies of informed consent ‘have shown that patients remain inadequately informed, even when extraordinary efforts are made to provide complete information and to ensure their understanding. This appears to be true regardless of the amount of information delivered, the manner in which it is presented, or the type of medical procedure involved’.4 Furthermore, the sicker patients become, the less they understand and retain.
The second requirement for the success of informed consent—that patients be able to analyse the information they are given—has, in contrast, been virtually unstudied.5 Scholars have been interested in what patients hear, but not in how they consider what they hear. Yet what evidence we have is unsettling. As Irving Janis says, ‘[t]he stresses of making major decisions and the various ways people deal with those stresses…frequently result in defective forms of problem solving that fail to meet the standards of rational decision making’.6
Yet this should not be surprising. In a recent book, The Practice of Autonomy: Patients, Doctors, and Medical Decisions, 7 Schneider suggests a number of reasons to expect that patients’ decisions will not easily meet ‘the standards of rational decision making’. To begin with, many people regard making decisions of all kinds as forbidding work, and medical decisions are exceptionally challenging. Doctors themselves must often struggle to draw sound conclusions from unreliable data and problematic theories. Thus the information patients receive is often frustratingly uncertain. Worse, doctors most comfortably speak the language of medicine, a tongue that dismays even the brightest and best-educated patients. What is more, information can never be put in completely objective terms, yet often neither doctor nor patient recognizes the assumptions and preferences reports and recommendations silently incorporate.
The Practice of Autonomy further suggests that medical decisions are made yet harder because of their social and moral context. The practice of medicine is becoming bureaucratized. This means that an astonishing number of people may have information and opinions to contribute to medical decisions, that the players change rapidly, and that responsibility is diffused. In addition, while some medical decisions present a single issue at a single moment, more often patients face a series of decisions over days or even months whose individual importance is often not apparent at the time. Even the non-technical aspects of medical decisions may baffle patients. For instance, people’s ‘values’ are often more obscure than the theory of informed consent assumes, and (reasonably enough) they change over time and with experience. Nor will it always be clear what conclusions are to be drawn even from well-established and stable preferences.8
Furthermore, most medical decisions are made by sick people, and sickness impairs thought. When you are ill you are weary. When you are ill you are distracted by a regiment of unfamiliar problems, not least reconciling yourself to your disease, reconstructing your future, and coping with the quotidian. You may want to avoid facing the dismal facts of your illness. You may even want to ‘deny’ your condition (often a wise deception). You may not find your medical condition absorbingly interesting. (We even have a pejorative term—valetudinarian—for people too fascinated by their illness.) And you may be so frightened that you cannot think lucidly and dispassionately.
All this may help us understand why Janis speaks so discouragingly about how patients address decisions. It also helps explain the emerging evidence about how patients go about making decisions. One of the plainest elements of that evidence suggests that patients often make decisions with a rapidity that forecloses the systematic deliberation many students of decisions prescribe and the doctrine of informed consent presupposes. This has been most extensively studied among people asked to donate a kidney. They tend to decide instantly whether to donate or to decline. As one study put it, ‘[n]ot one of the donors weighed alternatives and rationally decided. Fourteen of the donors and nine of the ten donors waiting for surgery stated that they had made their decision immediately when the subject of the kidney transplant was first mentioned over the telephone, “in a split-second”, “instantaneously”, and “right away”’.9 In short, ‘all the donors and potential donors interviewed…reported a decision-making process that was immediate and “irrational” and could not meet the requirements adopted by the American Medical Association to be accepted as an “informed consent”’.10
The most detailed, circumstantial, and vivid descriptions of how patients make medical choices appear in the memoirs so many of them have written about the experience of illness. Many of these memoirs, like the studies of kidney donors, report truncated decisions. For one lymphoma patient, for example, ‘[n]ot even a split second was needed to opt for chemotherapy despite all I had heard about it’.11
Such instantaneous decisions are possible partly because many patients seem to fix on one factor, make it the basis of decision, and then close their minds to new data. (This psychological conservatism is often called the ‘anchoring heuristic’.) Penny Pierce, one of the closest students of how patients make medical decisions, reports such thinking among many of the breast cancer patients she studied.12 Schneider frequently observed it among people asked to choose a dialysis modality. Such patients:
often seem to listen until they hear some arresting fact and then make it the basis of their decision. For instance, as soon as some patients hear that hemodialysis requires someone to insert two large needles into their arm three times a week, they opt for whatever the alternative is. When some other patients hear peritoneal dialysis means having a tube protruding from their abdomen, they choose ‘the other kind of dialysis’.13
Not only do many patients decide quickly and consult only a few criteria—or even a single criterion—but even patients well educated and reflective enough to write memoirs regularly describe no decisional process at all. Instead, they invoke intuition, instinct, and impulse. An AIDS patient, for example, wrote, ‘I’ve learned to listen to my inner voice for guidance when choosing treatments. If I get what Louise refers to as a “ding” (a strong instinct) about a vitamin, herb, drug, or other treatment, I try it’.14 A multiple sclerosis patient ‘got a flash’, found that a ‘little light flashed inside my head’, came to ‘trust my own intuition’, and asked why she should not ‘play my hunches’.15 Even patients particularly committed to making their own decisions on rational bases often cannot, even in retrospect, explain their choices. For instance, a Rice sociologist with prostate cancer who tried as hard as anyone could to make decisions systematically wrote, ‘[w]ithout knowing precisely why or being able to provide a clear rationale, I decided I would ask Peter Scardino to perform my surgery’.16
A Case Study: Screening for Prostate-Specific Antigen
Our survey of the evidence about the two requirements for successful informed consent suggests two things. First, we have a great deal of evidence about how much patients understand and retain of what they are told by their doctors about their medical choices: In brief, troublingly little. Secondly, we have only scant evidence about how patients analyse what they hear and remember. But that evidence gives us good reason to doubt that their analyses meet the expectations of the bioethicists who advocate informed consent or the judges who demand it.
To gain further insight into the way patients think about their medical choices, let us examine a case study. The most common cancer among men attacks the prostate. Traditionally, physicians tried to detect prostate cancer before its symptoms became acute by ‘digital rectal examination’, that is by trying to feel the cancer in the prostate. However, this method is roughly as effective as it is pleasant, at least where the cancer is in its early stages. This has made it seem desirable to find another way of identifying men with this common and potentially fatal disease. The best current way to do so arises from the fact that distressed prostates emit abnormally high levels of a protein called prostate-specific antigen. A number of physicians thus favour screening men by testing their blood for elevated PSA levels and then, where the PSA is elevated, performing an ultrasound examination and, usually, a biopsy.
Other physicians, however, disagree. These PSA sceptics make several points. First, they observe that many things besides prostate cancer can distress a prostate and that therefore the PSA test provokes numerous biopsies that reveal no cancer. Indeed, at least 70 per cent of the men with elevated PSA levels do not have cancer. The 30 per cent who do are generally distinguished from the 70 per cent who do not through a biopsy. While the PSA test is relatively inexpensive and only trivially burdensome (it is a blood test often performed on men who are already having blood drawn for other purposes), few men find the biopsy agreeable. Furthermore, it is both expensive and fallible.
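The sceptics’ first point is, at bottom, an arithmetic one. A minimal sketch of that arithmetic, using only the chapter’s rough 70/30 figure and a purely hypothetical cohort of 1,000 men with elevated PSA levels, makes the scale of the false-positive problem concrete:

```python
# Illustrative arithmetic only: the 30/70 split is the chapter's rough
# figure; the cohort of 1,000 men with elevated PSA is hypothetical.
elevated = 1000
cancer_pct = 30                      # ~30% of elevated PSAs reflect cancer

with_cancer = elevated * cancer_pct // 100
without_cancer = elevated - with_cancer

print(f"Of {elevated} men with elevated PSA levels:")
print(f"  {with_cancer} turn out to have prostate cancer")
print(f"  {without_cancer} undergo a biopsy that finds no cancer")
```

On these assumptions, seven of every ten biopsies provoked by the test would find no cancer—which is why the sceptics count the biopsies, and not the blood draw, as the test’s real burden.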
Secondly, opponents of PSA screening say that prostate cancer often grows so slowly that most men who have the disease do not die from it. Autopsies of men who were not known to have prostate cancer found evidence of the disease in up to a quarter of the 65-year-olds and 40 per cent of the 85-year-olds. One estimate is that 10 per cent of all men contract prostate cancer but only 2 to 3 per cent of these actually die or suffer seriously from it. This suggests that, for most men, the best reaction to a diagnosis of prostate cancer may be not to treat it but to monitor it to try to discover whether it is developing rapidly enough to threaten the patient’s well-being seriously.
Thirdly, opponents of PSA screening note that the treatments for prostate cancer—surgery and radiation—can be painful and that they are likely to cause quite trying complications. In some studies, up to 60 per cent of the men treated became temporarily impotent or incontinent, and 30 per cent of those treated suffered from one of these conditions permanently. Others had persistent infections, and some had permanent rectal problems. Some even died. Since the treatment will be unnecessary for many men, this means that these complications would—for those men—have been pointless. For a number of other men, treatment cannot vanquish the cancer. For these men, too, the unpleasantness and complications of treatment may well outweigh its benefits.
The studies necessary to determine whether PSA screening is on balance worthwhile are under way but are years from completion. Meanwhile, opponents of screening say, there are hints that screening—on the average—increases life expectancy by only a few days and that screening may even reduce one’s ‘quality adjusted’ life expectancy by a few days. In short, the PSA sceptics fear that, on average, screening does not improve health and longevity and thus is not worth the cost. The PSA proponents, on the other hand, are more hopeful that studies will eventually show that the cost-benefit analysis favours screening, and they tend to believe that it is better to try and fail to treat all the victims of the disease than to let those who could be saved die.
Several attempts have been made to resolve this controversy among physicians by bringing them together to write guidelines. These attempts, however, have failed. Instead, these groups have recommended that each patient be given the evidence and decide for himself.17 In short, the medical dispute among doctors has been deferred to their patients under the aegis of informed consent.
How well will this work? How will men confronted with these conflicting arguments analyse them? To find out, we interviewed forty men who were 40 to 65 years old. They varied widely in income and occupation. However, they were on average better educated than the American population. Only nine of them had no college experience (three of whom did not finish high school).18
The interviews were generally held in the interviewee’s home or office and lasted from one to two hours. A central feature of the interview was an attempt to give the men the kind of information about whether to be screened for PSA that an exceptionally conscientious physician striving to be as neutral as possible would offer. In other parts of the interview, the men were asked about their health, their experience with prostate problems and tests, their relationships with physicians, their views about participating in medical decisions, and their skill in handling simple arithmetic.
In general, these were not men who shrank from the burden of decision. A number of them not only aspired to make sound decisions, but felt obliged to do so. They disparaged friends or even spouses who had avoided the responsibility of making medical choices. They often spoke contemptuously of such people and their dangerous course.
[When] they don’t decide…they abdicate their responsibility. They would do anything anybody told them, any crackpot idea that a brand-new, greenhorn doc might dream up.
My wife goes to the doctor, and if her doctor told her to eat a bushel of horse manure a week, she’d do it, and she wouldn’t question it.
The thinking of forty men over a prolonged discussion is not easily summarized. When humans speak, their ideas are fluid, incomplete, and even contradictory. These men were no different. Nevertheless, two central and significant generalizations are inescapable: first, only two of the forty seemed to change their minds about PSA screening despite all the information they were given. This may be partly because three-quarters of them had already had a PSA test and because prostate cancer and screening for it have by now entered into public discourse.19
Secondly, even though the interviews offered patients an exceptionally favourable basis for making well informed and well reasoned decisions, the men seemed to have considerable difficulty in doing so. More specifically, participants frequently seemed swayed by unexamined assumptions which led them to ignore or misunderstand the information they were given. More specifically still, the interviewees relied crucially on what might be called principles of folk wisdom which often seemed to repel, rather than induce, thought. An examination of some of these principles will reveal much about the way these men thought about the problem they confronted.
Prevention is good. Public-health and cancer education have done their job almost too well. Our interviewees had fully imbibed the principles that ‘prevention’ is better than treatment, that nothing is more crucial to combating cancer than catching it early, and that screening is vital to early detection. Thus one respondent said:
My mother is a retired registered nurse. I’ve got a lot of health professionals in my family. I’ve been aware of health and health care all my life…. I’ve been blessed with good health, for the most part, and I just did not want to run the risk. I didn’t want to do something stupid…. [I]f there’s a test, or an exam, or something, I’m going to take it…. I just want to be preventive, instead of [regretting] after the fact.
[My body’s] like a machine…. [I]f there’s a flat tire, I’ll go ahead and change it. If the oil’s low, I’ll go ahead and change it…It’s by taking preventive measures like this [test] I’ve been able to maintain a reasonable amount of good health…
And in like vein:
Respondent: I honestly believe the knowing, and having the option of prevention, outweighs all the other risks [of PSA screening]…. [I]f you do the proper things, it’s just like starting a car. You can have the key, and if you don’t unlock the door and stick it in the ignition, you’re not going anywhere. But if you do the proper things—stick the key in the door, unlock the door, stick it in the ignition, put the seat belt on for safety—…you’re going to go somewhere…This is good sense, this is good medicine.
Interviewer: So you’ve said that ‘prevention’ really overrides this uncertainty about PSA?
Respondent: The availability of prevention has to be part of the system, part of the schedule of benefits [for an HMO]…. I mean, I think of prevention. I’m not always that way, but prevention—you’re always in control of prevention.
Of course, the interviewees were commonly doing more than applying the general lesson of prevention and screening. The advocates of PSA screening have had much the better of the controversy in the media, and the blessings of such screening seem to have been well preached by celebrities like Bob Dole, Norman Schwarzkopf, and Arnold Palmer. As one of the participants remarked, prostate cancer ‘is all over the TV now’. This has had its effects.
PSA screening may well be wise because it may make it possible to detect disease early and thus to treat it more effectively. Screening is controversial precisely because many estimable authorities accept that view. But such a position is reasonable only after one has grappled with the proposition that, in the particular case of PSA screening, the general argument in favour of screening does not work. Many of the interviewees seemed so powerfully driven by an idealized version of ‘prevention’ that they had difficulty hearing, understanding, and analysing any reason PSA screening might be undesirable.
To put the point a bit differently: screening often works just as it is supposed to. It works for easily apprehended reasons. The virtues of screening have been drummed into the public over many years of virtuous advertising. As the passages quoted a moment ago suggest, screening is easily analogized to familiar and desirable practices, like routine maintenance of one’s car. All one’s educated intuitions, in short, make PSA screening seem like common sense and the arguments against screening seem foolish. Taking those arguments seriously requires an uncomfortable and burdensome re-examination of what feel like settled questions. Personal experience suggests to most people that such re-examinations are rarely worth the effort, and they are thus resisted.
Many of these men were also diverted from thinking clearly about their choices by their tendency to call PSA screening ‘prevention’. But PSA screening does not prevent disease, it reveals it. Effective prevention relieves people of the consequences of disease and treatment, and prevention is often virtually free of risk. Thus prevention can be much more effective than screening, and conflating the two ordinarily makes screening unduly attractive.
Control is good. Some years ago, ‘control freak’ was a term of disparagement. Today, Americans feel with increasing conviction that people should seize and keep control over their circumstances.20 Control even takes on a moral dimension, for ‘taking responsibility’ often means taking control. PSA testing appealed to a number of men because it seemed to be a way of taking control of and responsibility for their health: ‘[t]here are a limited number of things that you can control in your life…. I like to keep as many of those as possible’. PSA screening looked attractive because it was seen as a form of ‘prevention’, and prevention was seen as a way of having control: ‘[p]revention—you’re always in control of prevention’. More than half the participants used negative stories about other people who had failed to assume responsibility for their health by using PSA screening.
Now, if they don’t get a PSA, and then they get [cancer], I have no sympathy for ‘em. That’s just stupid on their part. They could have prevented it, but didn’t. They could all die for all I care…[W]hy should we pay for their unnecessary medical care? No doubt it’s their doctor’s fault too. A doctor is supposed to prevent things, not ignore them.
The association of PSA screening with ‘control’ suggests another reason men may be reluctant to grapple with the argument against screening. That argument disturbingly suggests that, in the present state of knowledge, medicine fights prostate cancer ineptly. Taking that argument seriously means confronting medicine’s limits with disquieting directness. In addition, that confrontation challenges another idea of psychological importance—that if you live right, you will live long, that you can avoid all harm if you are just careful enough. As one interviewee said, ‘[i]f you avoid all these things that are bad they got these days, you’ll be rewarded with life. You have to take care of yourself, get the proper checkups and test’. In short, the desire for control provides another reason to accept PSA screening with little thought and to resist examining the argument against it.
Information is good. The survey literature now insistently suggests that most patients believe they want a good deal of information about their illnesses.21 The participants in this study shared a nearly axiomatic belief that information is always good to have. Some of them had quite comprehensible reasons. For example, some of them saw PSA testing not as a way of detecting cancer but as a way of hearing comforting news: ‘[i]f you have a negative test, then you say, hey, you’re really reassured that nothing is going to happen’.
Another common reason for wanting information is a belief that forewarned is forearmed:
Everything affects your life, but that [prostate cancer] affects the end of your life, so you need to know…Nobody anticipates when they’re gonna die…. If it happens, it happens, but if you know it’s going to happen, you put yourself into an advantage situation of being able to accomplish things that you’ve put off, things that you’ve wanted to do, or…maybe experimental medication…
Or, as another man put it:
I would rather know what information’s available, and which way to go, so I’ve got all the information to make some kind of a sensible decision of what I’m gonna do with myself…
Other men seemed committed to ‘more information’ even if its usefulness might be obscure. These men might acknowledge the possible disadvantages of PSA testing but then suggest that even a misleading PSA is better than no test. One man said that a PSA test is:
not a gamble. I mean, do it. It’s silly not to…[L]ogic would dictate the tests are there, they’re available, and they’re reasonably accurate, even if they’re not a hundred percent.
You’re attempting to try and find out what’s going on [with the prostate]…[T]he PSA may not be exact, but at least it is some measure, and as time goes on it will become more precise, but nonetheless, it’s something.
These men are recruiting a standard aphorism from common sense—that half a loaf is better than none, that some information is better than none. Yet the aphorism is inappropriate, since the uncertainty lies not just in the accuracy of the test, but also in what to do if cancer is diagnosed. The argument against screening is that men may be best off doing nothing when diagnosed with cancer. The interviewees seemed to have difficulty examining this argument, however, since little seemed more counter-intuitive to them than the suggestion that information might not be useful.
Even participants who seemed to acknowledge some of the arguments against PSA screening emphasized how important ‘knowing’ is:
I didn’t understand [PSA statistics before], to be honest with you. I didn’t realize about all these numbers, and it may sound silly, but I still like the idea of doing the blood test, only because I’m always curious about these things. I just like to see.
The same man said:
But if I get a positive result, I’m not sure I’ll do anything. The potential [adverse effects of treatment] here, which would make life very unpleasant, outweigh the small possibility of dying.
He saw PSA screening, then, as a way of putting off a decision about how to respond to prostate cancer until the evil moment of knowledge actually arrived.
But if I start to get a positive result, then that’s something I should find additional information about, look into, you know, really make a decision about.
The independent worth of knowing is further suggested by the fact that, although three-quarters of the participants wanted to be screened, almost none thought he would want to be treated for cancer should it be discovered. This states a plausible position—I really want to know if I have cancer—but it also raises intriguing questions with which the interviewees may not fully have engaged. Was this actually what these men believed? If they were not going to be treated, would they really be happier burdened by the knowledge of their disease? Was their desire for information simpliciter strong enough to lead them to endure the risks and disadvantages of trying to discover whether they had prostate cancer? Would they stick with their decision not to be treated were cancer discovered?
Other participants put their preference for information in yet starker terms. As one frankly said, ‘I can’t explain why [I want screening]. I just like to see tests’. And another participant felt so intensely that information is good and ignorance bad that he angrily saw the argument against PSA screening as part of a conspiracy to keep him in ignorance:
You can’t put the genie back in the bottle. The awareness is there. People like myself are spreading the word, of advantages of PSA. I don’t care if [the cancer] is latent or active…[W]ho do you think shows up at those meetings [on cancer screening nights]? Opinion leaders, people that want the information. Now, you [showed] your statistical work [to me], but it’s the opinion leaders that tell ten others. You unleashed the dragon. [The speaker at the screening night] said, pure and simple, just like that—…he knows how many other groups are talking about [PSA].
There is, of course, much to be said for knowing about one’s health. However, here, as with the other two axioms we have explored, the danger is that the simple principle ‘information is good’ operates so powerfully and is accepted so uncritically that men do not hear and consider the arguments that the information provided by PSA tests may be bought at a high price (because the PSA test itself produces so many false positives) and is unexpectedly uninformative (because there is—PSA sceptics argue—no satisfactory evidence about what men with prostate cancer should do and thus reason to think they should do nothing).
Technology is good. Another common axiom of American folk wisdom is that technology and medical science progress steadily. Some of the interviewees saw PSA screening as the ‘state of the art’ and believed they should take advantage of the best medical science had to offer:
We already know about heart disease. We already know about certain forms of cancer that are caused by smoking. We know about emphysema—that’s usually a by-product of smoking. Right now, prostate cancer is a treatable problem. You know, [PSA] is a tool. Right now there isn’t anything else.
Some saw being screened as a necessary first step toward the next technology:
If I were presented with a positive PSA test, I guess the most logical thing is to get a second confirming PSA test. But there will be another test that the medical community will come up with in the future, and that will work better than the PSA…. If I don’t get the PSA, then I won’t know to get that [other] test, I won’t be able to benefit from the advance…There was a time when the PSA didn’t exist, after all, and men were subject to cancer without warning. Now, the PSA is here, and something else will be discovered soon.
The folk wisdom is right—medicine surely has made gratifying progress. Here again, however, the problem is to apply that wisdom discriminatingly. Sometimes the best medicine can do is not worth doing. Sometimes progress comes slowly. It now appears, for example, that studies which will determine the usefulness of PSA screening will take many years to complete.
Statistics are lies. Several of the participants scorned the arguments against PSA screening because they shared folk wisdom’s scepticism of, and even contempt for, statistics. That scepticism is epitomized by one man’s use of the cliché ‘you can prove anything you want with statistics’. Another man said scornfully, ‘[t]hey can take the statistics and bend them any way you want to’. Similar doubts led other men to conclusions such as a belief that any statistical uncertainty was automatically a ‘fifty-fifty chance’ or a ‘toss-up’, so that either choice was equally appropriate.
Respondent: The numbers [don’t matter]…. I don’t want to take chances with all that stuff. I might die, I might not. I might get those [side effects of impotence and incontinence], I might not. Either way, I got a fifty-fifty chance, you know, I might as well guess!
Interviewer: Hmm. Remember those numbers here aren’t exactly fifty-fifty, your chances could be worse, maybe of getting a side effect, or maybe a lot better, like living for years without problems [from the cancer].
Respondent: Yeah, I hear you. But I figure it’s a gamble, an even chance either way, you know, fifty-fifty. Since you don’t know, you know you’re saying thirty percent here, you might as well guess either way. You got an even chance of good or bad.
This scepticism of statistics could shade into an acid distrust of those who use them:
Now the person [who is] saying the PSA tests aren’t that valid…What would happen if their mother went in and got a pap smear, and it was positive, or their father went in and got a PSA that was a 5 [somewhat elevated]? By God, right then and there they’d want to do everything possible to see what was going on. Yet it’s very easy for them to say, ‘Joe Blow down there, he may not have it—cause he’s got a PSA of five’. When you start throwing statistics around, I think it’s a copout, in a way, for these people. I always say, ‘[I]f [I] was your mother or father, or your son or your daughter, what would you do?’ And if they’re telling the truth, they’re gonna say, ‘[W]ell I’d do everything possible’…You place it in a personal context—bullshit! Hey, you know, statistics.
‘I knew someone once who…’. One of the best-studied defects in human reasoning is the tendency to prefer a few vivid examples to systematic but dry statistics. Our interviewees were as prone to this failing as the rest of us. Almost every participant cited stories of friends, relatives, and celebrities who had been saved by testing:
I think my impression initially was that [my physician] didn’t want to do the test, and I insisted that we do it…. I think about Bo Schembechler [once the University of Michigan football coach and a sainted name in Ann Arbor]. He had a prostate operation, and [a friend of mine], and somebody else, too, a pretty renowned citizen—oh! Schwarzkopf, General Schwarzkopf.
You know, a one in 1000 chance may not sound like much, but I had an aunt that was told she had a one in 1000 chance of having a blood clot go to her brain through a procedure she was going to have, and it happened…[I]t’s all risky, but I still think it provides a framework for decision making even if it’s not totally accurate, because you can’t have complete accuracy.
‘If it weren’t for bad luck, I’d have no luck at all’.22 Finally, some men implicitly relied on folk beliefs about a purposive fortune. A quarter of the participants complained quite seriously about their bad luck. From this they concluded that PSA testing might be bad for the general population but necessary for themselves:
Respondent: Oh, I understand you all right, and I don’t think most people should have a PSA…. I still want it because bad things happen to me. I’m the guy with bad luck, the 1%.
Interviewer: You told me you don’t have a family history of cancer, right?
Respondent: Yeah, but I’m just like [the men with a family history], [or] just like the black people…I’ll get cancer because I get everything else.
The Lessons of the PSA Study
It is never wise to make too much of a single study, especially a small one in an underdeveloped field—here, how patients make decisions. We certainly do not believe that in this short paper we have fully described the way even these few men thought. Nevertheless, the results we have reported are reassuringly plausible, and they are consonant with and help explain what we know about how patients think about medical choices. We are thus emboldened to address the irresistible question—did these men evaluate the information they were given well?
The answer, of course, depends crucially on what ‘well’ means. One approach is to ask whether they evaluated the information well by what we might call ‘public’ standards, by the criteria of medical rationality. Did they, in other words, appraise the evidence in the way we would expect from, say, a panel of experts asked to establish guidelines for screening? The answer to the question thus defined seems to be no.
The problem lies not in the men’s substantive decisions, for experts themselves have failed to find a basis for agreement about screening. The problem, rather, lies in what we might call procedure, in the way many of the men handled the information they were given. For they often did not (p.121) seem to assimilate and assess it. Instead, they seemed to short-circuit their evaluation of the data by falling back on axioms current in American culture. These axioms were not necessarily problematic in themselves, although some of them were (like the facile dismissal of all statistics). Axioms usually gain currency because they capture important truths. However, the axioms our interviewees relied on could seem so right (and could in the proper circumstances be so unexceptionable) that they made it seductively easy for participants confronted with an unappealing and counter-intuitive proposition (that PSA screening is not a good bet) to dismiss the information without reflecting on it and to leap instead to a conclusion.
Our hypothesis, then, is that even patients whose informed consent is solicited in virtually ideal circumstances may be misled in analysing the data by their tendency to rely too uncritically on culturally powerful but inapt axioms. This hypothesis gains credence from the evidence we reviewed earlier that patients often make medical decisions unreflectively. In particular, our study suggests one quite plausible explanation of the evidence that patients reach conclusions with a rapidity that virtually excludes reflection, namely, that patients can decide quickly because they rely on cultural axioms to make scrutiny of complexity unnecessary.
If our description of the interviewees’ reasoning is correct, they did not, pro tanto, analyse their choices in the way American public policy has generally tried to encourage. First, their reasoning does not seem consonant with the animating assumptions of informed consent as they have ordinarily been understood. That doctrine assumes that patients will grapple as directly as possible with the advantages and disadvantages of their medical choices. Why proffer thorough recitals of complex information about troubling choices if consideration of them is thus to be short-circuited? (And if you do not provide such recitals, how are patients to make wise choices?) Similarly, we have devoted untold dollars and effort to public health education intended to persuade people to make decisions about their health (about safe sex, about smoking, about cancer, and so on) in an unashamedly conventional and rational way.
We have been suggesting that, judged by ‘public’ standards, the interviewees’ methods of evaluating the evidence they were given seem unsatisfactory: if panels writing guidelines for screening used such methods, the reaction would be indignation. Obviously, however, these men are private citizens who may make private decisions in whatever way seems best to them. In other words, one interpretation of the PSA study is that informed consent was a success, that the men took the information they were given, applied their own ‘values’ to it, and reached the results those values dictated. This is true at least in the narrow (but not trivial) sense that people formulate and evince their values by making decisions.
But there are important senses in which this interpretation of the study seems false. It is quite unlikely that many of the men wanted to make (p.122) decisions in the way they seem to have done. For one thing, most people believe at least in principle in the usefulness of conventional rationality in analysing scientific problems. And indeed, the interviewees’ occasional comments at least hinted that they generally espoused just such views of how to approach medical problems. One man, for example, said that ‘people should not just make irrational, emotional decisions’. Another man—a mathematician—was avowedly afraid of making a bad decision because of his fear of the disease: ‘I could never look at the numbers and decide like I do at work’. Yet another man found satisfaction in being ‘more with-it now,…more rational’.
Furthermore, the problem with the men’s methods was not just that they fell short of widely shared standards of rationality. It was that the axioms seemed often to preclude the men from confronting the contending arguments directly. Few people believe this is a good way to approach difficult and dangerous decisions. Most people believe that making good decisions requires listening to the arguments on both sides carefully enough to understand them. (Another cultural axiom is that there are two sides to every question.) Certainly the participants insisted vigorously on being given ‘the proper information of what I need’ and felt that ‘the only way for him [the patient] to make a decision is to be well informed’ with the kinds of information usually thought necessary for making conventionally rational decisions.
This raises the question of why the men could make decisions in ways they themselves regarded as undesirable. There are at least two cogent possibilities. First, people regularly fail to meet their own standards in many areas, particularly where, as here, performance is difficult. Secondly, these men probably did not realize just how they were making decisions, since the psychological mechanism at work is one which ordinarily does not reveal itself to its user.
In short, it is unlikely that the interviewees would have chosen to rest their decisions on these cultural commonplaces in so significant a way. When bioethicists and lawyers discuss informed consent, they speak of patients making decisions by applying their ‘values’ to the data they are given. But whatever ‘values’ denotes, it connotes someone’s deepest beliefs. And it is that connotation that gives the principle of informed consent so much of its force. The axioms that often figure prominently in the interviewees’ thinking, however, will rarely reflect their more basic and considered beliefs. Rather, they tend to be convenient generalizations which people find useful in ordinary affairs but to which they are not much committed. Analogously, for example, Nisbett and Ross report that, while people regularly depend on unreliable ‘heuristics’ in reasoning about unfamiliar subjects, they abandon them when dealing with familiar ones.23 Indeed, it (p.123) is all too likely that the ‘values’ represented by these axioms may conflict with other beliefs which represent more truly what the interviewees ultimately think.
The Cure for the Ills of Informed Consent
The problems patients have in understanding and retaining what they are told are well known. And evidence is beginning to accumulate about the difficulty patients have in analysing the information they are given and making decisions about it. The hypothesis we investigated in the preceding section helps substantiate the suggestion that patients often seem to resolve medical questions with a speed that would inhibit thoughtful consideration of the information they are given. Added to the other doubts we have already reviewed about how patients receive and process information, the hypothesis raises questions about what can be hoped for from informed consent.
The conventional response to concerns of this kind has typically been: ‘[t]he only cure for the ills of informed consent is more informed consent’. Many of the resulting suggestions have to do with ways to convey information more effectively, as by improving the way forms are worded, by having people other than doctors explain choices to patients, or by making educational videos part of informed consent. As it has become clear that such changes do less than had been hoped, doctors have been urged to expand the range of information they impart and the range of situations in which they offer informed consent. (The movement away from guidelines and toward patient choice in PSA screening exemplifies the latter tendency.) A sense of the ambition—one might almost say desperation—of these proposals is to be found in a recent article by Geller et al. in the Hastings Center Report.24 Among its recommendations are:
(1) There should be an ‘in-depth exploration by providers of patients’ affective and cognitive processes’, since ‘[p]roviders who rely on a discrete or short-term approach to informed consent are unlikely to succeed at understanding fundamental patient beliefs and preferences and thereby have little hope of obtaining truly informed consent’.
(2) Providers should ‘explore uncertainties and limitations both in the provider’s own knowledge and in the state of the science’.
(3) ‘[I]f they are to facilitate truly informed decisionmaking on the part of their patients, providers must understand and disclose their own motivations, beliefs, and values to patients’.
(5) Finally, ‘informed consent ought to be individualized…and take place in the context of an ongoing relationship with a trusted health care provider’.
People are driven to such effulgent visions of informed consent in part by the strength of the autonomist ideal in American life, law, and medicine. More particularly, they are not insubstantially motivated by the rise of the view among some bioethicists and even some doctors and patients that, as a matter of good medical practice and even as a matter of moral duty, patients ought to make their own medical decisions even if they would rather delegate them to someone else.25 Those who espouse this ‘mandatory autonomism’ must hope to perfect informed consent for want of a better way of achieving their goals. But is such perfected informed consent practicable?
Confronting the Limits of Informed Consent
Before we answer the question we have just posed, we should pause for a stipulation and reassurance: the doubts the paper has expressed about informed consent do not require anything like abandoning it. There are many reasons for this but space for only a few. First, sometimes informed consent works in something like the way bioethicists and courts envisage. Some people are well-situated to make medical decisions. Some ‘medical’ decisions can be well made by many patients. (The choice of dialysis modality—which often raises more questions about the patient’s social and professional life than about medicine—is one example.) Secondly, most people want at least some of the information the doctrine of informed consent intends for them to have, even if they do not want to make decisions themselves. Thirdly, some of the information given in informed consent helps patients care for their illness better even if it does not help them make medical decisions. Fourthly, some people may want to make their own decisions even if they do so clumsily. Fifthly, informed consent may have value even if it is only a ritual, for it reminds doctors of their duties of concern and deference to their patients, duties it is easy for them to forget and neglect in the press of the other duties that surround them.
The question, then, is not whether to discard informed consent, but what to expect of it. The material we have surveyed raises the possibility that there are real limits to our ability to solve the two problems of informed consent, and thus real limits to what we may reasonably hope (p.125) for from it. The PSA study illustrates a number of those limits. Not the least of these is time. In the artificial setting of this study, time could be lavished on a single medical question in a way that would be flatly impossible in most ordinary medicine (and that would have been impossible even in that imagined Eden before health maintenance organizations). Yet interviewees still came away from this educational extravagance without having fully understood and confronted the arguments presented to them.
But why is this surprising? Teaching and learning are both humblingly difficult, as any student and any teacher knows. Yet teachers teach and students learn in virtually ideal settings compared to those in which doctor and patient must labour. In the earlier pages of this paper we sketched a few of the intellectual and social factors that make medical decisions forbiddingly and even repellently difficult for patients. And when the subject of the teaching and learning is as fraught with disturbing ideas and with unrecognized and unreliable assumptions as medical decisions, it is inevitable that people should almost struggle to avoid the labour of learning and the chore of choosing.
Indeed, a substantial number of patients expressly say, when asked, that they do not want to make their own medical decisions.26 And the sicker patients are, the less likely they are to want to make their own medical decisions. The task of education is always daunting. How much more daunting must it be when the learners do not wish to use what is being taught?
The PSA study suggests another practical limit to the scope of informed consent. The participants in that study seemed often to rely on powerful cultural axioms that allowed them to dismiss much of what they were being told. Yet they probably did not fully realize what they were doing, and it seems likely that physicians trying to inform them would likewise not realize all that they were thinking. Furthermore, patients must often be influenced by misapprehensions of whose existence or strength their physicians are unaware. For example, patients who have agreed to become research subjects considerably over-estimate their chances of benefiting from the experimental treatment even when they have been told what those chances actually are.27 These research subjects ‘systematically misinterpret the risk/benefit ratio of participation in research because they fail to understand the underlying scientific methodology’.28 Like our interviewees, they are (p.126) saved from difficult choices by misplaced reliance on a cultural truth: ‘[m]ost people have been socialized to believe that physicians (at least ethical ones) always provide personal care. It may therefore be very difficult, perhaps nearly impossible, to persuade subjects that this encounter is different…’.29
What is more, it appears that even willing physicians have had trouble in overcoming this kind of misapprehension:
The investigator in one of the projects we studied offered his subjects detailed and extensive information in a process that often extended over several days and included one session in which the entire project was reviewed. Despite this, half the subjects failed to grasp that treatment would be assigned on a random basis, four of twenty misunderstood how placebos would be used, five of twenty were not aware of the use of a double-blind and eight of twenty believed that medications would be adjusted according to their individual needs.30
Doctors should surely do their best to give patients the information they want. But it is time to consider the possibility that doctors will never be able to communicate to patients all the information they need in a way that they can use effectively. It is the rare physician who has the skill and the time to probe deeply enough into the patient’s mind to discover the misapprehensions of fact and the inapt reliance on truths that distort what patients hear and think about the problems they face. It may therefore be time to chart the limits of informed consent and to consider what other approaches might help patients secure what they want and need from medicine.31
(1) 464 F 2d 772, 780 (DC Cir. 1972).
(2) ‘I Hear a Rhapsody: A Reading of The Republic of Choice’ (1992) 17 Law & Social Inquiry 147, 148.
(3) Barrie R. Cassileth et al., ‘Attitudes Toward Clinical Trials Among Patients and the Public’ (1982) 248 Journal of the American Medical Association 968, 970.
(4) Barrie R. Cassileth et al., ‘Informed Consent—Why Are Its Goals Imperfectly Realized?’ (1980) 302 New England Journal of Medicine 896, 896.
(5) See Carl E. Schneider, The Practice of Autonomy: Patients, Doctors, and Medical Decisions (New York, 1998), 92–9.
(6) Irving L. Janis, ‘The Patient as Decision Maker’, in W. Doyle Gentry (ed.), Handbook of Behavioral Medicine (New York, 1984), 333.
(9) Carl H. Fellner and John R. Marshall, ‘Kidney Donors—The Myth of Informed Consent’ (1970) 126 American Journal of Psychiatry 1245, 1247.
(11) Mary Alice Geier, Cancer: What’s It Doing in My Life? (Pasadena, Calif., 1985) 18.
(12) Penny F. Pierce, ‘Deciding on Breast Cancer Treatment: A Description of Decision Behavior’ (1993) 42 Nursing Research 20, 23.
(14) Louie Nassaney with Glenn Kolb, I Am Not a Victim: One Man’s Triumph Over Fear & AIDS (Santa Monica, Calif., 1990) 86.
(15) Rachelle Breslow, Who Said So?: How Our Thoughts and Beliefs Affect Our Physiology (Berkeley, Calif., 1991).
(16) William Martin, My Prostate and Me: Dealing with Prostate Cancer (New York, 1994) 114.
(17) See, e.g., ‘Screening for Prostate Cancer’ (1997) 126 Annals of Internal Medicine 480.
(18) More complete information about this study and its results will be published in subsequent papers.
(19) This is in interesting distinction to an earlier study which found that, ‘compared with controls, patients who received the information intervention were much more likely to indicate no interest in pursuing PSA screening…and were much less likely to indicate high interest in screening…’: Andrew M. D. Wolf et al., ‘The Impact of Informed Consent on Patient Interest in Prostate-Specific Antigen Screening’ (1996) 156 Archives of Internal Medicine 1333, 1333. At least two factors may account for the difference between our results and Wolf’s. First, Wolf’s respondents began the study less well informed about PSA. Secondly, public discussion of prostate cancer and PSA screening has proliferated in the years between the two studies.
(22) Perhaps we should say that we borrow this quotation not from an interviewee but from the classic TV show Hee-Haw.
(23) Richard Nisbett and Lee Ross, Human Inference: Strategies and Shortcomings of Social Judgment (Upper Saddle River, NJ, 1980).
(24) Gail Geller et al., ‘“Decoding” Informed Consent: Insights from Women regarding Breast Cancer Susceptibility Testing’ (1997) 27 Hastings Center Report 28.
(27) See, e.g., Christopher Daugherty et al., ‘Perceptions of Cancer Patients and Their Physicians Involved in Phase I Trials’ (1995) 13 Journal of Clinical Oncology 1062; Nancy E. Kass et al., ‘Trust: The Fragile Foundation of Contemporary Biomedical Research’ (1996) 26 Hastings Center Report 25; Sjoerd Rodenhuis et al., ‘Patient Motivation and Informed Consent in a Phase I Study of an Anticancer Agent’ (1984) 20 European Journal of Cancer & Clinical Oncology 457.
(28) Paul S. Appelbaum et al., ‘False Hopes and Best Data: Consent to Research and the Therapeutic Misconception’ (1987) 17 Hastings Center Report 20, 21.
(29) Paul S. Appelbaum et al., ‘False Hopes and Best Data: Consent to Research and the Therapeutic Misconception’ (1987) 17 Hastings Center Report 20, 23.