Fat Economics: Nutrition, Health, and Economic Policy

Mario Mazzocchi, W. Bruce Traill, and Jason F. Shogren

Print publication date: 2009

Print ISBN-13: 9780199213856

Published to Oxford Scholarship Online: October 2011

DOI: 10.1093/acprof:oso/9780199213856.001.0001


3 Economic Evaluation Tools for Evidence-Based Policymaking

Oxford University Press

Abstract and Keywords

This chapter uses an economic toolkit to quantify the damaging effects of obesity. It offers case studies to support the conceptual analysis. It assesses the cost of obesity, distinguishing between the private costs paid by an individual and the social costs borne by the rest of society. It discusses the role of intervention and alternative measures such as quality adjusted life years (QALYs). It recommends evaluating policies to learn lessons for the future and gathering more data to make the methods easier to use.

Keywords:   quality adjusted life years, cost, private cost, social cost, intervention, obesity

Imagine a situation in which everyone is well educated, well informed, and self-controlled, and the costs of obesity are borne only by obese people. People either pay their own medical expenses or pay health insurance whose costs reflect their risks. This implies that society does not bear any ‘external’ medical costs. Similarly, less job productivity due to obesity implies lower wages (see Box 3.1 for a discussion of this issue). Assume that all markets operate under perfect competition so that the prices of all goods accurately reflect the cost of the resources used to produce them. Now suppose that a government agency decides to ‘induce’ people to eat less chocolate and more fruit and vegetables so that we all lose weight and become healthier.

The economist would ask whether this inducement is justified by weighing its costs against its benefits. From an economic perspective, the answer is ‘no’: the benefits are not merely small, they are negative. The regulation costs money to implement, yet it produces no benefits; rather, the benefits are negative. But how can benefits be negative even though there are fewer overweight people, better health, and reduced medical expenditures? The reason is that people were doing exactly what they wanted (i.e. maximizing their own utility) before the intervention. By definition, they must be less satisfied once induced to change their behavior and purchase a less preferred set of goods. The loss in utility from being unable to select one’s favorite bundle of food items exceeds the gain in utility from being healthier; otherwise, people would have made the opposite choice before the intervention.

But policymakers frequently assume that all gains in lifespan and health status are gains to society. They do not address the point that people may have previously chosen the probability of a shorter life or poorer health rather than give up the utility they derive from eating more cheaply or more pleasurably. People cannot have it both ways. So, if overweight and obesity arise from free will, why should governments intervene to fight them? Why is obesity so prominent in governments’ policy agendas?

Recall from Chapter 2 that markets can fail to provide a socially optimal outcome if there is a distortion between private and social welfare. Social welfare is greatest when each person bears all of the costs of their choices without imposing unwanted costs on the rest of the society. Negative externalities and market failure arise when obesity generates costs for non-obese people. Chapter 2 discussed how markets fail to price all the costs because many medical costs and employment expenses are borne by society, not just those who are obese. If people do not consider all the costs of their decisions, government intervention in markets may work to improve social welfare—provided the social benefits of intervention exceed social costs.

We now distinguish more carefully between the social and private costs of obesity. The social costs are those caused by the externality that people impose on others when they are overweight; the private costs are those they bear themselves through lower wages, ill health, and premature death. We discuss the likely magnitude of the social costs, and the costs and benefits which should be counted when evaluating policy interventions.

We begin the chapter with a review of the various efforts to quantify the social costs in order to gauge their social importance. We proceed to review methods employed by public health professionals and economists to compute the benefits of intervention, essentially through the development of measures of the utility associated with states of ill health and premature death. These can be used in ex ante policy evaluation (decision rules before a policy is implemented to determine whether it would be good value for money) and in ex post evaluation (determining afterwards whether a policy was cost-effective), and we discuss modern economic tools for such evaluation.

Measures of the Direct and Indirect Costs to Society of Obesity

Quantifying costs also allows one to predict the benefits of obesity reduction policies. Bad nutrition and lifestyle choices generate obesity, obesity leads to bad health, and bad health generates costs to individuals and society. If one accepts this model,6 the costs of obesity to society can be assessed through a quantification of its direct costs (to the health care system) and indirect costs (defined as lost productivity).

In this model the main obstacle to monetizing outcomes is the incomplete evidence about the actual link between obesity and obesity-related disease. For example, we know that the prevalence of diabetes is increasing.7 It is not straightforward, however, to determine why: is it the relative contribution of obesity, bad nutrition, and other lifestyle factors, or simply increased life expectancy? Similarly, different lifestyle factors provoke heart conditions, including smoking and lack of exercise, and it proves difficult to separate them from the obesity component. It is common in the public health literature to measure the medical cost of obesity by attributing to it a share of relevant non-communicable diseases; for example, hypothetically, half of type II diabetes cases are caused by obesity. The assessment of indirect costs, like reduced productivity, may also be based on comparison of the number of days of work missed through ill health by obese and normal-weight people, assuming they are the same in all other respects.

A detailed study by the National Audit Office (2001) in England gives an estimate of the medical costs of overweight and obesity of about £500 million ($1 billion). This report has detailed estimates of the cost share attributable to food for the various diseases. For example, 36% of hypertension cases are estimated to be attributable to obesity, 47% of type II diabetes cases, and 29% of colon cancer cases. After allowing for the cost per case and the total number of cases, the hypertension-related share of the total cost of obesity is 29%, diabetes 26%, and colon cancer 2%. Around 50% of these costs are for prescriptions, 40% are hospital costs, and 10% are GP (doctor) costs.
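The arithmetic behind such estimates can be sketched in a few lines. The attributable fractions below are the ones quoted above; the case counts and per-case costs are invented placeholders, not the NAO's figures.

```python
# Sketch of the attributable-cost calculation behind estimates like the
# NAO (2001) figures. The attributable fractions come from the text; the
# case counts and costs per case are made-up placeholders.

diseases = {
    #  name:            (attributable fraction, cases, cost per case in £)
    "hypertension":     (0.36, 5_000_000, 80),
    "type II diabetes": (0.47, 1_200_000, 460),
    "colon cancer":     (0.29,    30_000, 9_000),
}

# Cost attributable to obesity = fraction x number of cases x cost per case.
attributable = {
    name: frac * cases * cost
    for name, (frac, cases, cost) in diseases.items()
}
total = sum(attributable.values())

for name, cost in attributable.items():
    print(f"{name}: £{cost:,.0f} ({cost / total:.0%} of attributable total)")
```

With real per-disease costs and case counts, the same loop reproduces the cost-share breakdown reported in the study.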

The recent government Foresight Report,8 designed to consider the long-term consequences of obesity, accepts £1 billion (about $2 billion) as the best estimate of the current medical costs of overweight and obesity in England and, using a model to project obesity forwards to 2050, estimates that the health care cost would rise to £7.1 billion if current trends continued, representing a rise in the share of health care costs attributable to overweight and obesity from 1.4% to 10.1%. £1 billion represents about 0.1% of national income (GDP).

Popkin et al. (2006) allow dietary factors to have direct impacts on health, as well as the indirect effects caused by obesity. For example, eating more fruit and vegetables and whole grains affects health directly by reducing cancer and indirectly by reducing obesity. High intake of saturated and trans fats can have negative direct health effects. For China, Popkin and his colleagues estimate that the combined direct and indirect effects of diet contribute $3.9 billion to medical costs. A further $2.2 billion are imposed by low levels of physical activity, of which about a third are through the obesity pathway. In total, diet-related costs represent about 0.3% of GDP.

The impact that weight loss has on medical costs remains uncertain. But this information is needed to calculate the cost–benefit ratio of policy interventions to change diet and reduce weight. Without this information, one must assume that a person who reduces his weight, say from BMI 30 to 25, has the same medical costs as someone whose BMI has always been 25, an uncomfortable assumption. Wolf (2002) reviews research which suggests that the cost of weight-reducing pharmaceuticals may be offset by savings in drugs to treat hypertension, diabetes, and hyperlipidemia, which decrease owing to weight loss. But more evidence is needed; it is unlikely that the weight–cost relationship associated with weight gain is symmetrical with that for weight loss. Prevention may be more cost-effective than cure.

Wolf and Colditz (1998) have calculated the medical costs of obesity for the US. Their results are dated (1995 data), however, and probably underestimate the current position. They estimate that the annual medical costs of obesity (not overweight) are between 5.5% and 7% of total medical expenditure.

In estimating the costs of obesity, one should control for all other individual characteristics correlated with obesity which might wrongly be attributed solely to obesity by a simple comparison of the medical costs of obese and normal-weight individuals. These include social deprivation, smoking, and alcohol consumption. One method to control for such factors is in a multiple regression framework. Finkelstein et al. (2003) use individual medical expenditure data (from the Medical Expenditure Panel Survey) combined with self-reported height and weight (to obtain BMI), and use regression analysis on US adults to estimate medical expenditure as a function of demographic variables (race, age, region, income) and dummy variables for overweight and obesity. The regression analysis shows the extra medical costs associated with obesity and overweight while controlling for demographic factors (but not smoking and drinking). They find that the additional medical costs of the overweight are 11.4%; for the obese, the additional costs are 37.4%. Combined with overweight and obesity prevalence data this implies that 3.7% and 5.3% of total medical expenditures are associated with overweight and obesity.
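The regression approach can be illustrated with synthetic data. Nothing below reproduces the MEPS data or Finkelstein et al.'s estimates; it simply shows how overweight and obesity dummy variables recover the extra spending associated with each category while controlling for other characteristics.

```python
import numpy as np

# Illustrative version of the dummy-variable regression: medical
# expenditure regressed on demographics plus overweight/obesity dummies.
# The data are synthetic; only the method mirrors the study.
rng = np.random.default_rng(0)
n = 5_000

age = rng.uniform(20, 80, n)
income = rng.lognormal(10, 0.5, n)
overweight = rng.binomial(1, 0.35, n)                 # BMI 25-30 dummy
obese = rng.binomial(1, 0.25, n) * (1 - overweight)   # BMI > 30 dummy

# "True" model: obesity adds more to annual spending than overweight.
spend = (800 + 25 * age + 0.01 * income
         + 300 * overweight + 900 * obese
         + rng.normal(0, 400, n))

# Ordinary least squares via the normal equations (lstsq).
X = np.column_stack([np.ones(n), age, income, overweight, obese])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)

print(f"extra spending, overweight: ${beta[3]:,.0f}")
print(f"extra spending, obese:      ${beta[4]:,.0f}")
```

The estimated dummy coefficients are the regression-adjusted extra costs; in the actual study these were then combined with prevalence rates to obtain shares of total expenditure.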

Finkelstein et al. also break down the additional costs attributable to excess weight. Out-of-pocket expenditure, which includes payments by the uninsured and payment not covered by insurance, is 11.1% higher for overweight individuals and 26.1% higher for obese individuals compared to normal-weight individuals. The gap is higher if one considers costs covered by the main public insurance schemes, Medicaid and Medicare. Medicaid is a publicly funded program for low-income individuals and families who meet some further requirements, such as children, pregnant women, seniors, and people with disabilities, to cover (partially or fully) their medical expenses and health services. Medicare is also a public insurance scheme, entirely funded by the federal government, which mainly supports those over 65, who represent one quarter of the US population. Obese individuals eligible for the Medicaid insurance category face costs that are 39.1% higher than their normal-weight counterparts (36.8% for Medicare). The costs of these programs are borne by all taxpayers, including slim taxpaying workers, thus the higher spending for publicly insured individuals can be regarded as an obesity externality.9

Critics contend that studies which calculate the annual costs of obesity have not accounted for the impact of obesity on life expectancy. If a person’s annual medical costs are twice as high per year, but he lives half as long as the average person, then his medical cost burden on society is neutral. This remains an unsettled issue: studies suggest that the number of years of costly disability associated with obesity is not much different from that for the non-obese population; obese people usually do not die at retirement age. For example, Allison et al. (1999) estimate that 4.3% of lifetime health care costs are associated with obesity, slightly below the 5.3% annual figure of Finkelstein et al. But a recent study in the Netherlands suggests that because obese people die younger and with lower terminal medical costs than normal-weight people, their cumulative medical costs from age 20 on are $371,000 compared to $417,000 for those of normal weight.10
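A toy calculation shows why annual and lifetime comparisons can point in opposite directions. All numbers below are invented for illustration: the obese person spends more per year but, dying younger and with lower terminal medical costs, accumulates less over a lifetime.

```python
# Toy comparison of annual vs lifetime medical costs, in the spirit of the
# Dutch study cited in the text. All figures are invented for illustration.

def lifetime_cost(annual_cost, years, terminal_cost):
    """Cumulative medical cost: steady annual spending plus end-of-life care."""
    return annual_cost * years + terminal_cost

# Normal weight: lower annual costs, longer life, costlier final illness.
normal = lifetime_cost(annual_cost=4_000, years=60, terminal_cost=90_000)
# Obese: higher annual costs, shorter life, cheaper final illness.
obese = lifetime_cost(annual_cost=5_000, years=50, terminal_cost=50_000)

print(f"normal weight, from age 20: ${normal:,}")
print(f"obese, from age 20:         ${obese:,}")
print("obese lifetime cost is lower" if obese < normal
      else "obese lifetime cost is higher")
```

Whether the lifetime comparison flips the annual one depends entirely on the relative magnitudes of the three inputs, which is why the empirical question remains unsettled.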

Tucker et al. (2006) developed a health cost simulation model based on US data. They account for both annual medical costs and life expectancy for black and white males and females over a wide range of BMIs from 24 to 44 and over a range of ages. Life expectancy for 60-year-old white females peaks at a BMI of about 29 and then falls steadily; but for black females and males, life expectancy continues to rise to a BMI of around 40. For white males, life expectancy falls steadily throughout the BMI range but at the age of 60 a white male with BMI 24 has a life expectancy of 21 years, only three years longer than one with a BMI of 44. Despite these surprising relationships, medical costs associated with overweight and obesity rise almost linearly throughout the BMI range, albeit not rapidly.

The UK National Audit Office study11 also estimated the indirect costs of obesity. They found that 6% of deaths are obesity-related, and more than a third occurred before retirement age. They computed total lost earnings, equaling £827 million (around $1.6 billion) in 1998. To this was added £1.3 billion ($2.6 billion) for days missed from work through sickness related to obesity. They recognized that this is likely to be an underestimate of sickness because medical certificates are not required for the first five days of illness. The figure is about 0.2% of GDP, the same as costs attributable to health care. The UK Foresight study,12 however, argues that there is little certainty about this figure, which could be as high as £10 billion per year ($20 billion).13


Seven lessons emerge from our review of the costs of obesity:

  1. If our interest is to focus on the external costs of obesity (those imposed on others as a by-product of private consumption decisions), we should focus on medical (direct) and employment (indirect) costs. Evidence suggests that the medical costs of overweight and obesity may amount to 5% of health care costs in the US, less in the UK, and presumably less still in countries with lower obesity prevalence. In terms of GDP, the medical costs of overweight and obesity are less than 0.5%.

  2. Estimates of the medical costs of obesity depend on the prevalence of obesity and the sophistication of a country’s medical system. A country with a primitive health care system that does not treat the disease (or its manifestations like cardiovascular disease or diabetes) will have low medical costs associated with obesity but may find itself with high indirect costs as people take more days off work through illness. To our knowledge no one has measured the trade-offs between direct and indirect costs.

  3. The medical costs of obesity have generally been measured within an epidemiological framework. This approach attributes a share of the prevalence of each disease to diet and apportions to diet that share of the costs of treating the disease. But the method assumes that the various causes of a disease are independent, which is unlikely. For example, an obese smoker’s risks from coronary heart disease might not equal the sum of the risks to an obese non-smoker and a normal-weight smoker. Significant interactions exist. Given that risk factors are clustered in disadvantaged social groups, these interactions could be economically important. Thus, a statistical approach which is able to single out the specific effect of obesity after accounting for all other risk factors is preferred. While the limited statistical evidence available today suggests that the epidemiological and statistical methods produce similar results, disentangling these synergistic impacts remains a high research priority.

  4. Popkin recognizes that one should account for direct impacts of dietary components (diet quality) on health as well as indirect impacts through obesity. Direct impacts can be substantial.

  5. Measuring lifetime medical costs rather than annual medical costs is appropriate. Limited research is equivocal as to whether lifetime costs of obese people exceed those for normal-weight people, so this is an important area for future research. Much of the policy debate surrounding obesity relates to health care costs, so knowing whether they are positively or negatively associated with obesity is vital to evidence-based policymaking.

  6. Data on how medical costs change as BMI increases are scarce. Normally the traditional cut-off for obesity is BMI > 30. This is because the normal approach to measuring medical costs is based on the epidemiological approach, and the epidemiological literature does not provide a dose–response relationship between BMI and disease prevalence. In theory, finer gradations are possible using the regression approach; in practice, this still needs to be explored. Given that life expectancy is little affected by BMI below 30, more work on relationships between BMI and medical treatment for BMI between 30 and 40 is needed. More research is also needed to understand the symmetry in the relationship between medical costs and changing BMI.

  7. Researchers disagree on the appropriateness of considering indirect costs to the economy resulting from lost productivity caused by days off sick and early death. It is unclear if a person who is off work for a few days causes their employer’s productivity to suffer proportionately. Nor is it obvious that a premature death represents a drop in income per head for the surviving population. One could argue that a country’s earlier investment in a person’s education and training would not have been fully repaid if they died early. The quoted figures for indirect costs, such as from the UK National Audit Office or Popkin’s study in China, put the indirect costs above the direct medical costs, so this is an important conceptual and empirical issue.

Evidence-Based Interventions and their Costs and Benefits

We just examined the probable magnitude of the direct medical costs and indirect costs to the economy of overweight and obesity. These costs are substantial, but do they justify public intervention to change behavior and reduce obesity? From the economic perspective, this depends on whether the benefits of the intervention outweigh the costs.

The costs of intervention are measured as the sum of compliance and other costs to firms (e.g. costs of reformulation, labeling, lost advertising revenue, and lost demand) and costs to government of regulation (bureaucracy, inspection, and other monitoring). Governments sometimes go further than this and require specific assessments of the impact of interventions on small firms, sustainability, regional employment, and gender equality, though these are rarely quantified.

Measuring the benefits of intervention is conceptually more complex. At its simplest, benefits are measured as medical costs and productivity losses avoided due to the intervention, but even this requires an assessment of the impact of the intervention on diets (in the short and long terms), the impact of diets on obesity (and over what time period), and the impact of obesity on health (again taking into account the lag between obesity and health outcomes). Future outcomes should then be discounted, as we discussed in Chapter 2.

Economists believe that the benefits of intervention should be broadened to recognize and measure the utility people forgo when they are ill or die prematurely. But utility is not easily quantified and, even if it could be, how does one compare a unit of utility (a util) with a monetary cost? Two possible solutions are to find out how much people are willing to pay, and to sidestep the issue by ensuring that the same amount of money is spent in providing one util of utility by all interventions.

We remind readers to question whether an intervention to reduce obesity really increases utility. If someone is overweight as a result of taking a utility-maximizing decision based on health risks, intervention reduces utility by definition. Regulation forces people to move away from their preferred bundle of consumption goods, health, and leisure to a less preferred package so that even if people are healthier, they are not necessarily happier. There are instances when this is a reasonable argument, others when it is not. It depends whether people are overweight because they are poorly informed or poorly educated, or because they have made unhealthy choices even though they were informed and educated. One might argue, for example, that nutrition education and labeling (together permitting informed choice) are examples of interventions taken to correct informational market failures. If as a consequence people are enabled and respond by choosing a healthier diet, their utility increases. The usual complex measurement questions to be addressed are: how much do people adjust their behavior, how does this affect their health, how much utility does this give them, and is this enough to warrant the extra costs imposed on industry and government? The same argument can be made about funding research to develop, say, trans fat free canola oil. People are provided with an extra choice; if they choose it, they do so because they expect increased utility. Is the extra utility, associated with the health benefits, sufficient to outweigh the costs of the research? In contrast, fat taxes and thin subsidies are intended solely to change behavior and are much harder to justify as enhancing utility, though they may still be justified in terms of correcting an externality—the medical and productivity costs imposed on society.

We see the importance of distinguishing between private and social benefits in the analysis of policy interventions: benefits can accrue either to everyone in society or to just a few. Clarity is crucial in distinguishing what is appropriate to measure and under what circumstances.

In relation to private benefits, economists’ preferred measures are (i) how much people would be willing to pay for adopting the intervention, or (ii) the amount of money people would be willing to accept in compensation if the intervention is not adopted. These measures are called the willingness-to-pay (WTP) and willingness-to-accept (WTA). In theory, these measures of value are similar for goods with close substitutes (bread, milk), but can differ significantly when the good has few substitutes (e.g. your health).

A person’s WTP is the maximum he or she would pay for a policy change: the amount that would leave them indifferent as to whether the intervention took place. If they actually had to pay more, they would be worse off (in utility terms) than without the intervention. If they paid less, they would be better off. A non-monetary concept of well-being is thus converted into a monetary equivalent. The difficulty is one of measurement; while you can ask a person directly in a public opinion survey how much they would be willing to pay for action, you can question the answer since you know that in reality they would not have to pay. In direct questioning (e.g. stated preference methods like contingent valuation) respondents tend to state much higher values than those they would be willing to pay in a real situation. In response to such problems, methods have been developed using market settings, such as eliciting willingness-to-pay values from real choices using experimental auctions.14 These are auctions in which people reveal their preferences by spending money on different goods and products. These goods differ in their characteristics owing to policy intervention such as food irradiation or genetic modification. For example, after providing information about the adverse consequences of eating too much salt, one might observe the choice of participants in an experimental auction among substitute products with different salt levels. It is much harder to think of a relevant set of product attributes that could be used to evaluate obesity and its health consequences.

To avoid having to find WTP or WTA, a modification on the theme of cost–benefit analysis (CBA) is cost–utility analysis (CUA). This focuses on the ‘utility’ gained by those subject to the intervention measured through quality adjusted life years (QALYs) or disability adjusted life years (DALYs). These methods are now widely used in public health to account for utility, by recognizing that people value poor health less highly than good health, and interventions that make people feel better are valuable. Policy alternatives would be ranked according to their cost per QALY gained. Both QALYs and WTP are measures of utility, so how do QALY and WTP measures of utility relate to one another? The answer depends on how QALYs are measured; we proceed to address this issue now.

The aim of QALYs is to measure in a single figure the impact of an intervention that increases both life expectancy and the quality of life. Conceptually this is done by assuming that different states of ill health can be assessed in terms of the utility they provide relative to perfect health over a period of one year. For example, a case of food poisoning that caused five days of mild diarrhea might be unpleasant, but would be unlikely to reduce one’s quality of life considered over a whole year by more than 1%. Let’s call it 1%, in which case the person’s QALY over the year would be 0.99. Thus, an intervention that prevented the case of food poisoning would yield 0.01 QALYs, though if it saved a million people from having food poisoning every year this would amount to 10,000 QALYs per year. If the intervention also saved 100 lives, total savings would be 10,100 QALYs per year. If such an intervention cost $100 million and an alternative investment of $100 million into, say, safer rail travel saved 500 QALYs per year, then the food poisoning intervention can be said to be more cost-effective. In principle all interventions should have the same cost per QALY gained. Otherwise a reallocation of resources from high to low cost per QALY saved interventions could save more QALYs for the same total outlay. This is the principle underlying the decisions of the National Institute for Health and Clinical Excellence (NICE) in the UK. The institute is responsible for deciding whether new health interventions by the country’s National Health Service (NHS), for example the use of a new drug, should be permitted. Given the limited resources of the NHS, not all new drugs can be afforded even if they improve health. NICE requires an estimate of the costs and quantification of the QALYs gained and has approved new interventions when the cost per QALY is below about £30,000 ($60,000).15
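The food-poisoning example above can be restated as a cost-per-QALY comparison; the figures are the chapter's own hypothetical ones.

```python
# The hypothetical food-poisoning example, turned into a cost-per-QALY
# comparison of two interventions with the same $100m price tag.

def cost_per_qaly(cost, qalys_per_year):
    return cost / qalys_per_year

# Food safety: 1m cases avoided at 0.01 QALYs each, plus 100 lives saved
# (counted here, as in the text, as 1 QALY per life per year).
food = cost_per_qaly(100_000_000, 1_000_000 * 0.01 + 100)

# Alternative rail-safety investment saving 500 QALYs per year.
rail = cost_per_qaly(100_000_000, 500)

print(f"food safety: ${food:,.0f} per QALY")
print(f"rail safety: ${rail:,.0f} per QALY")
# The lower cost per QALY identifies the more cost-effective intervention.
```

Equalizing cost per QALY across interventions is exactly the reallocation argument in the text: moving money from the $200,000-per-QALY option to the cheaper one saves more QALYs for the same outlay.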

Table 3.1. The NICE EQ-5D quality adjusted life years valuation technique

Mobility

1. I have no problems in walking about

2. I have some problems in walking about

3. I am confined to bed

Self-care

1. I have no problems with self-care

2. I have some problems washing or dressing myself

3. I am unable to wash or dress myself

Usual activities (work, study, household, family, or leisure)

1. I have no problems performing my usual activities

2. I have some problems performing my usual activities

3. I am unable to perform my usual activities

Pain/discomfort

1. I have no pain or discomfort

2. I have moderate pain or discomfort

3. I have extreme pain or discomfort

Anxiety/depression

1. I am not anxious or depressed

2. I am moderately anxious or depressed

3. I am extremely anxious or depressed

In the approach used by NICE, QALYs are measured using a validated questionnaire designed to elicit people’s well-being in various conditions of ill health relative to perfect health. For example, a specific valuation technique, EQ-5D, is recommended by NICE in the UK and by authorities in the Netherlands, Norway, Italy, Hungary, Poland, Portugal, Canada, and New Zealand.16 Health states are evaluated across five dimensions, each with three health states as in Table 3.1.

People complete a questionnaire and give an evaluation of their overall well-being (between 0 and 1). A statistical analysis of a large survey has been used to estimate the weights people attach to the various health states (these vary by country). Given the weights, which are publicly available in Szende et al. (2007), a risk assessment needs to specify the health benefits associated with a policy intervention in terms of the reduced number of people suffering each category of ill health in Table 3.1 to calculate the QALY gains. For example (purely hypothetically), if the average scores for diabetics from Table 3.1 are 1, 2, 2, 2, 2, they can be aggregated into a single score through the appropriate weights and compared with the average score of someone in perfect health. This makes it possible to estimate the relative quality of life. For example, one could estimate that a diabetic only has 50% of the quality of life of someone in perfect health. If obesity caused 1 million cases of diabetes, this would correspond to 0.5 million QALYs lost every year to diabetes.
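A sketch of how such profiles might be aggregated into a single score. The decrements below are invented for illustration; the real country-specific weights are those published in Szende et al. (2007), and actual EQ-5D scoring rules differ in detail.

```python
# Hypothetical EQ-5D-style scoring: start from 1.0 (perfect health) and
# subtract a decrement for each dimension rated worse than level 1.
# All decrement values below are invented for illustration.

decrements = {
    "mobility":         {2: 0.07, 3: 0.31},
    "self-care":        {2: 0.10, 3: 0.21},
    "usual activities": {2: 0.04, 3: 0.09},
    "pain":             {2: 0.12, 3: 0.39},
    "anxiety":          {2: 0.07, 3: 0.24},
}

def eq5d_value(profile):
    """profile: dict dimension -> level (1, 2, or 3); perfect health scores 1.0."""
    value = 1.0
    for dim, level in profile.items():
        if level > 1:
            value -= decrements[dim][level]
    return value

# The text's hypothetical diabetic profile 1, 2, 2, 2, 2:
diabetic = {"mobility": 1, "self-care": 2, "usual activities": 2,
            "pain": 2, "anxiety": 2}
print(f"diabetic quality of life: {eq5d_value(diabetic):.2f}")
```

Multiplying the gap between this score and 1.0 by the number of cases gives the annual QALY loss, as in the text's 0.5 million QALY example.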

This discussion suggests that the calculation of QALYs is straightforward and uncontentious, but this is not so. First, QALY calculations can be refined by discounting distant over proximate years of health saved. People value a year of good health now more highly than a year of good health in fifty years’ time, so a policy to reduce, say, salmonella illness today might be valued more than an intervention to reduce obesity, which reduces risks of cancer, diabetes, and heart disease in the distant future. As discussed in Chapter 2 in relation to time discounting, the choice of discount rate is problematic. Something like 3% p.a. is typical for government projects, implying that €1 in one year’s time is worth only 97 cents today and €1 in twenty years’ time is worth only 1/(1.03)^20 = 55 cents now.
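The discount arithmetic is easy to verify:

```python
# Present value of €1 of health benefit t years ahead at a 3% discount rate.

rate = 0.03
for years in (1, 20, 50):
    pv = 1 / (1 + rate) ** years
    print(f"€1 in {years:2d} years is worth €{pv:.2f} today")
```

The fifty-year figure shows why interventions whose health payoffs arrive far in the future, like obesity reduction, fare poorly against interventions with immediate benefits once discounting is applied.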

What about children? Is an intervention that protects young children more valuable because they have more years of life ahead or less because they can be replaced at low cost—and no money has so far been invested in their education and training? That the economist’s perspective would ask such an unpleasant question does not make it any less relevant in a world of scarcity. The World Bank uses an approach which attaches the greatest value to 25-year-olds: money has been invested in their education and they have a full working life ahead of them.17 Implicitly, this means that one is valuing only the social contribution of lives saved. But the approach is at odds with economists’ notions of utility, which consider how much people will pay to save the lives of children relative to adults. Evidence suggests that parents rate their babies’ lives as more valuable than their own. For instance, parents often buy organic baby food although they do not eat organic food themselves.

In a world of rudimentary data, these issues may be academic. A simpler approach that avoids associating utility values to states of ill health is to look only at the life years (LYs) saved by an intervention (these can also be discounted). This approach ignores ill health and concentrates only on mortality, but may be acceptable if closely correlated with QALYs. Studies by Chapman et al. (2004) and Robberstad (2005) suggest that in over 80% of cases priorities associated with alternative policy interventions are unchanged when ranking by LYs or QALYs.18

Valuing the lives saved is a necessary part of any economic impact assessment of food policy. The most common approach is to calculate the value of a statistical life (VSL). The VSL reflects how much the representative person would pay to reduce the probability of death. While one might think that no one would give up their life for any amount of money, we make these risk–money trade-offs every day, e.g. by speeding in a car. People will also pay to increase the probability of living longer, e.g. for airbags and other safety features. Consider an example from Mason et al. (2006). Suppose an intervention affects 100,000 people and reduces the probability of death for each by 1 in 100,000. The intervention would therefore save, statistically, one life. Suppose further that, on average, each person is willing to pay $10 for this reduction in risk. The total value of the one life saved is $10 × 100,000 = $1 million. Actual figures used in the US and UK are typically between $3 million and $7 million.19
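The Mason et al. (2006) example reduces to simple arithmetic; a minimal sketch:

```python
# The Mason et al. (2006) statistical-life example as arithmetic.
def implied_vsl(population, risk_reduction, wtp_per_person):
    """VSL implied by willingness to pay for a small mortality risk cut."""
    statistical_lives_saved = population * risk_reduction
    total_wtp = population * wtp_per_person
    return total_wtp / statistical_lives_saved

vsl = implied_vsl(population=100_000,
                  risk_reduction=1 / 100_000,
                  wtp_per_person=10.0)   # $1 million
```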

Problems arise when thinking how to measure the VSL and when interpreting its consequences for food policy. If rich people will pay more for food and health, for example, does this mean a rich person’s life is more valuable than a poor one’s? What is the implication for policies to reduce health inequalities? Should we value an extra year of life for a 70-year-old the same as for a 20-year-old? We do not delve deeply into these issues here. Rather, we note that if the VSL is accepted as the value an average member of society places on life, then dividing by average remaining life expectancy, L (about forty-five years for a population of average age around 35), gives a value per year of approximately VSL/L, or about $90,000 if the VSL is $4 million (roughly the value used by the Department for Transport in the UK). This is a reasonable (p.106) starting point for the valuation of a QALY, although it would be improved by discounting distant over proximate years saved.20 The figure is higher than the implicit value of $60,000 placed on a QALY by NICE, but close enough not to cause alarm.
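A sketch of the value-per-life-year calculation, including the discounted refinement suggested above; a 3% rate is assumed purely for illustration:

```python
# Value per life-year from the text's figures: VSL = $4m and about
# forty-five years of remaining life expectancy. The 3% discount
# rate in the second variant is an assumption for illustration.
VSL = 4_000_000
YEARS_REMAINING = 45

# Undiscounted: spread the VSL evenly over remaining years (~$90,000).
per_year_simple = VSL / YEARS_REMAINING

# Discounted: solve VSL = v * sum_t 1/(1.03)^t for the per-year value
# v, so that distant years count for less and each year is worth more.
def annuity_factor(rate, years):
    return sum(1 / (1 + rate) ** t for t in range(1, years + 1))

per_year_discounted = VSL / annuity_factor(0.03, YEARS_REMAINING)
```

Note that discounting raises the implied value of a year: the same VSL is spread over fewer effective years, so each one counts for more.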

We summarize this section:

  • Efficient resource allocation requires society to measure the costs and benefits of alternative interventions.

  • Measuring costs is conceptually straightforward, though it may be challenging in practice to elicit from companies the true costs of complying with a regulation. Firms have an incentive to overstate these costs, and standard accounting procedures assume that firms will not alter their production or marketing processes or product formulations in response to the new incentives provided by the regulation. For example, front-of-pack nutrition labeling through a traffic light system may induce firms to reformulate their products to move from a red (unhealthy) signal to amber. However, other firms may switch to cheaper ingredients and move from, say, the healthy end of amber to the unhealthy end of amber, since the signal to consumers would be unchanged. Such responses to incentives are hard to predict or measure through the usual survey methods used by regulators.

  • Measuring benefits is problematic because interventions affect quality of life and life expectancy. Public health officials and economists have adopted a utility-based approach and implicit monetary valuation of a QALY.

  • Economists would like to apply the monetary value of QALYs gained by an intervention, set it against the costs of the intervention, and assess whether benefits exceed costs to decide whether the intervention is worthwhile. The public health approach relies on cost–utility analysis as a rationing mechanism, but this does not reveal whether a policy intervention would pass a benefit–cost test.

  • Economists are more concerned than public health professionals to distinguish private from social benefits when evaluating interventions. QALYs are measures of private benefits; as such, they should be ‘counted’ when evaluating an intervention only to the extent that they reflect an increase in people’s utility, which is not necessarily the case with interventions to reduce obesity.

(p.107) Ex Post Economic Evaluation of Policy Intervention

This chapter emphasizes how economists can make policy intervention more effective by evaluating the costs of obesity and the benefits of intervention. But we have also pointed out that existing data are inadequate for rigorous assessment. While governments increasingly require cost–benefit analysis before policy measures are introduced (ex ante analysis), interventions are inadequately monitored and assessed once they have been completed (ex post analysis). Ex post analysis matters if one wants to learn, for the design of future interventions, how firms and consumers actually respond to incentives.

Economics cannot control the world like an experiment in a laboratory. In the lab, a researcher can create the counterfactual: the control for the road not taken. In the real world the experimental factor cannot be varied while holding everything else constant; nevertheless, recent decades have seen progress in bringing the principles of the experimental approach to policy evaluation, thanks especially to the econometric work of the 2000 Nobel Laureate James Heckman. Evaluating the impact of an intervention means estimating the difference between the actual outcome of the policy measure and what would have happened without the policy. Suppose the objective is to evaluate the impact of a national public information campaign on daily salt intake. Simply measuring salt intake before and after the campaign is not a valid evaluation, because salt intake might have changed for reasons other than the policy (e.g. price changes). One option would be to compare the salt intake of subjects reached by the information campaign with that of subjects not reached by it, for example residents in another part of the country (if such a regional experiment could be organized) or in another country. But again, other uncontrollable factors may differ between the two groups. When an experimental design is possible, the distinction is between the treatment group (the group reached by the intervention) and a control group, which should be similar to the treatment group in all characteristics except exposure to the intervention.

Economists call this specifying the correct counterfactual. Comparing the costs obese people impose on the health service to those of non-obese people requires one to hold constant all other factors influencing health costs. We want to compare two people who are identical in all respects except that one is obese. This has been referred to in the smoking literature as the ‘non-smoking smoker’, someone who has all the attributes of a smoker (attitudes, income, demographics, and social class) but does not (p.108) smoke. The point is that obese people also tend to have lower incomes and lower education, and smoke and drink more. All associated factors might contribute to a person’s health care costs, but comparing the health care costs of obese and non-obese people would attribute them all to obesity; this is a common mistake.

The problem in social science is finding an appropriate control group. Biases are likely because of differences both in observable characteristics (e.g. age, education level, prices, income) and, more importantly, in non-observable characteristics. The latter give rise to selection bias, which occurs when subjects in the treatment and control groups differ in their starting conditions.

For example, if an information campaign aimed at reducing salt consumption is broadcast through TV advertising, heavy TV watchers are more exposed to the intervention. However, heavy TV watchers might also be more prone to adverse health outcomes for reasons other than salt consumption (for example, because of reduced physical activity). For a proper assessment of the health impact of the campaign, the control group therefore also needs to include heavy TV watchers. Accounting for all sources of bias remains a challenge.21

Various methods, notably matching estimators, have been devised to take account of self-selection bias,22 in which a counterfactual group is built by selecting people who match the intervention group with respect to a chosen set of variables. The success of the operation depends on whether that set of variables is sufficient to eliminate systematic differences; the more complete the set, the more expensive and time-consuming the evaluation.

A modern econometric route to non-experimental ex post evaluation is the use of difference-in-difference estimators, often applied to ‘natural experiments’. Two differences are computed: (1) the difference in the outcome variable for the treatment group, before and after the policy intervention; and (2) the difference in the outcome variable for the comparison group, before and after the intervention. The method allows for dissimilar characteristics between the treatment and control groups by assuming that these dissimilarities do not vary over time, so that comparing the changes in outcomes between the two groups isolates the actual policy impact. The method can be linked to the multivariate modeling approach by regressing the difference variable (before and after the (p.109) intervention) on a set of observable characteristics plus a dummy variable distinguishing the intervention from the control group.
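The two differences can be illustrated with a small simulation; all parameters below are invented for illustration:

```python
# A simulated sketch of the difference-in-difference calculation.
# All parameters are invented for illustration.
import random

random.seed(42)

TRUE_EFFECT = -2.0   # the intervention lowers the outcome by 2 units
GROUP_GAP = 5.0      # time-invariant gap between the two groups
TREND = 1.0          # common trend affecting both groups after the policy

def outcome(treated, post):
    """One observation of the outcome variable (e.g. daily salt intake)."""
    y = 10.0 + (GROUP_GAP if treated else 0.0) + TREND * post
    if treated and post:
        y += TRUE_EFFECT
    return y + random.gauss(0, 0.5)

def mean(xs):
    return sum(xs) / len(xs)

n = 2_000
treat_pre    = mean([outcome(True, 0)  for _ in range(n)])
treat_post   = mean([outcome(True, 1)  for _ in range(n)])
control_pre  = mean([outcome(False, 0) for _ in range(n)])
control_post = mean([outcome(False, 1) for _ in range(n)])

# Difference (1) minus difference (2): the group gap and the common
# trend cancel, leaving an estimate of the policy effect.
did_estimate = (treat_post - treat_pre) - (control_post - control_pre)
```

A simple before-and-after comparison for the treated group would confound the policy effect with the trend; the second difference removes it, provided the time-invariance assumption holds.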

Endogeneity is a further problem in modeling (recall the case study in Box 2.2). People make choices over numerous variables simultaneously, for example smoking, calorie intake, and exercise. The amount of exercise taken depends upon calorie intake and vice versa; in statistical terms, the variables are endogenously determined. Explaining obesity requires one to account for this endogeneity.23 A model that attempts to explain obesity as a function of calorie intake and exercise while assuming that those two variables are exogenous would produce statistically biased estimates unless simultaneous equation estimation methods were used. Policy evaluations based on biased coefficient estimates would be misleading at best.
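The bias from treating an endogenous regressor as exogenous can be shown by simulation; the instrumental-variable estimator used here is one standard simultaneous-equations remedy, and all the data are invented:

```python
# Simulated illustration of simultaneity bias. The regressor x and
# the outcome y are jointly determined (the disturbance u enters
# both), so naive OLS overstates the effect of x; an instrument z
# (correlated with x, uncorrelated with u) recovers the true
# coefficient. All data are invented.
import random

random.seed(1)

TRUE_BETA = 0.8
n = 20_000
z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                   # instrument, e.g. a price shifter
    u = random.gauss(0, 1)                    # disturbance in the y equation
    xi = zi + 0.9 * u + random.gauss(0, 1)    # x responds to the disturbance
    z.append(zi)
    x.append(xi)
    y.append(TRUE_BETA * xi + u)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)   # biased: x is correlated with u
beta_iv = cov(z, y) / cov(z, x)    # consistent: z is uncorrelated with u
```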

Among examples of evaluations and estimations dealing with non-experimental data, Morris (2007) accounts for endogeneity and exploits matching to find a significant impact of obesity on employment in the UK, while Tchernis et al. (2007) reverse previous evaluations of the impact of the US School Breakfast Program (SBP) and the National School Lunch Program (NSLP) after allowing for selection bias. Previous evaluations suggested that participation in the SBP was associated with an increase in children’s weight (hence a negative impact of the program), while the NSLP was found to be unrelated to weight outcomes. Accounting for selection bias means acknowledging that overweight children are more likely to participate in the programs, hence affecting the final evaluation. After correcting for the bias, it was shown that the SBP is indeed an effective instrument to reduce children’s weight, while the NSLP is detrimental.

In summary:

  • It is useful to conduct ex post policy evaluation. Past experience can help us to understand the potential effectiveness of new policies.

  • Since we cannot control the world like an experiment in a laboratory, other tools are needed. Modern econometrics provides these tools. Methods have been developed to avoid the biases inherent in any uncontrolled experiment, notably involving specifying the correct counterfactual, avoiding selection bias, and compensating for endogeneity.

  • Data availability problems continue to restrict formal evaluation of past policy, owing to the relative newness of obesity policy interventions and the difficulty of obtaining a time series of comparable data covering the pre- and post-intervention periods. Generating such data remains an important area for future research.

(p.110) Conclusion

Markets matter for health policy; markets work and markets fail. If markets fail, government intervention can be beneficial; if they do not, health-improving measures may be ineffective or may even end up worsening social welfare. To move from theory to practice, this chapter has provided further findings and guidance:

  1. The costs of obesity can be internal to the market if they are borne by obese people themselves. For example, evidence suggests that obese people, especially women, earn lower salaries, have more difficulty finding employment, and die younger. Obese people also suffer the health consequences of their decisions.

  2. If we focus on the external costs of obesity—costs imposed on others owing to private consumption decisions—society should focus on medical (direct) and employment (indirect) costs. Evidence suggests that the medical costs of overweight and obesity may amount to 5% of health care costs in the US, which is 0.5% of GDP. Indirect costs associated with days off work are less precisely measured and more controversial. Estimates suggest that they equal or exceed medical costs.

  3. Private costs of obesity, those associated with ill health and premature death, have not been measured. Instead, quality adjusted life years (QALYs), which attach a utility value to these conditions, can be used. A QALY value of $60,000 is a useful benchmark for evaluating the benefits of a policy intervention designed to revamp diets, reduce obesity, and improve health. The value of the QALYs gained can be set against the costs of the intervention to determine whether welfare improves.

  4. From a social cost–benefit perspective, private benefits should be included when the intervention enables people to make more informed choices. They should be excluded when the intervention pushes people away from their preferred behavior; if, for example, they adopt a higher level of health at the expense of other goods and services they value, such as eating more than they ‘should’ or eating fast food. Healthier is not necessarily happier. In that case only the social benefits of intervention should be counted on the benefit side. In evaluating interventions it is important to be clear which costs should be counted and why.

  5. Ex post evaluation of policies is important for learning lessons for the future, but difficult outside the arena of the controlled experiment. Modern econometric methods have shed light on how this may be achieved (p.111) but lack of data has made these methods difficult to use. This will change over time, and in Chapter 4 we provide examples of such procedures.


Bibliography references:

Allison, D. B., R. Zannolli and K. M. V. Narayan (1999), ‘The Direct Health Care Costs of Obesity in the United States’, American Journal of Public Health, 89/8: 1194–9.

Baum, C. L. and W. F. Ford (2004), ‘The Wage Effects of Obesity: A Longitudinal Study’, Health Economics, 13/9: 885–99.

Bhattacharya, J. and M. K. Bundorf (2005), The Incidence of the Healthcare Costs of Obesity, NBER Working Papers 11303 (Cambridge, Mass.: National Bureau of Economic Research).

Blundell, R. and M. Costa Dias (2000), ‘Evaluation Methods for Non-Experimental Data’, Fiscal Studies, 21/4: 427–68.

Carpenter, C. S. (2006), ‘The Effects of Employment Protection for Obese People’, Industrial Relations, 45/3: 393–415.

Cawley, J. (2004), ‘The Impact of Obesity on Wages’, Journal of Human Resources, 39/2: 451–74.

Chapman, R. H., M. Berger, M. C. Weinstein, J. C. Weeks, S. Goldie and P. J. Neumann (2004), ‘When Does Quality-Adjusting Life-Years Matter in Cost-Effectiveness Analysis?’, Health Economics, 13/5: 429–36.

Finkelstein, E. A., I. C. Fiebelkorn and G. J. Wang (2003), ‘National Medical Spending Attributable to Overweight and Obesity: How Much, and Who’s Paying?’, Health Affairs, 22/4: W219–W226.

—— C. J. Ruhm and K. M. Kosa (2005), ‘Economic Causes and Consequences of Obesity’, Annual Review of Public Health, 26: 239–57.

Fisher, G., T. Kjellstrom, A. J. Woodward, S. Hales, I. Town, A. Sturman, S. Kingham, D. O’Dea, E. Wilton and C. O’Fallon (2005), Health and Air Pollution in New Zealand: Christchurch Pilot Study (Wellington: Health Research Council, Ministry for the Environment, Ministry of Transport).

Garcia, J. and C. Quintana-Domeque (2007), ‘Obesity, Employment and Wages in Europe’, in K. Bolin and J. Cawley (eds), The Economics of Obesity (Amsterdam: Elsevier).

Hayes, D. J., J. F. Shogren, S. Y. Shin, and J. B. Kliebenstein (1995), ‘Valuing Food Safety in Experimental Auction Markets’, American Journal of Agricultural Economics, 77/1: 40–53.

Heckman, J. J., H. Ichimura and P. E. Todd (1997), ‘Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme’, Review of Economic Studies, 64/4: 605–54.

(p.112) McPherson, K., T. Marsh and M. Brown (2007), Tackling Obesities: Future Choices: Modeling Future Trends in Obesity and the Impact on Health (London: Government Office for Science).

Mason, H., A. Marshall, M. Jones-Lee and C. Donaldson (2006), Estimating a Monetary Value of a QALY from Existing UK Values of Prevented Fatalities and Serious Injuries, University of Birmingham, Department of Public Health and Epidemiology Reports RM03/JH13/CD.

Morris, S. (2007), ‘The Impact of Obesity on Employment’, Labor Economics, 14/3: 413–33.

National Audit Office (2001), Tackling Obesity in England (London: Her Majesty’s Stationery Office).

Oliver, J. E. (2006), Fat Politics: The Real Story behind America’s Obesity Epidemic (New York: Oxford University Press).

Pagán, J. A. and A. Dávila (1997), ‘Obesity, Occupational Attainment, and Earnings’, Social Science Quarterly, 78/3: 756–70.

Paraponaris, A., B. Saliba and B. Ventelou (2005), ‘Obesity, Weight Status and Employability: Empirical Evidence from a French National Survey’, Economics and Human Biology, 3/2: 241–58.

Popkin, B. M., S. Kim, E. R. Rusev, S. Du and C. Zizza (2006), ‘Measuring the Full Economic Costs of Diet, Physical Activity and Obesity-Related Chronic Diseases’, Obesity Reviews, 7/3: 271–93.

Rashad, I. (2006), ‘Structural Estimation of Caloric Intake, Exercise, Smoking, and Obesity’, Quarterly Review of Economics and Finance, 46/2: 268–83.

Robberstad, B. (2005), ‘QALYs vs DALYs vs LYs Gained: What Are the Differences, and What Difference Do They Make for Health Care Priority Setting?’, Norsk Epidemiologi, 15/2: 183–91.

Roglic, G., N. Unwin, P. H. Bennett, C. Mathers, J. Tuomilehto, S. Nag, V. Connolly and H. King (2005), ‘The Burden of Mortality Attributable to Diabetes: Realistic Estimates for the Year 2000’, Diabetes Care, 28/9: 2130–5.

Shogren, J. F., S. Y. Shin, D. J. Hayes and J. B. Kliebenstein (1994), ‘Resolving Differences in Willingness-to-Pay and Willingness to Accept’, American Economic Review, 84/1: 255–70.

—— J. A. Fox, D. J. Hayes and J. Roosen (1999), ‘Observed Choices for Food Safety in Retail, Survey, and Auction Markets’, American Journal of Agricultural Economics, 81/5: 1192–9.

—— J. A. List and D. J. Hayes (2000), ‘Preference Learning in Consecutive Experimental Auctions’, American Journal of Agricultural Economics, 82/4: 1016–21.

Szende, A., M. Oppe and N. J. Devlin (2007), EQ-5D Value Sets: Inventory, Comparative Review and User Guide (Dordrecht: Springer).

Tchernis, R., D. L. Millimet and M. Hussain (2007), School Nutrition Programs and the Incidence of Childhood Obesity, CAEPR Working Paper 2007–14.

(p.113) Tucker, D. M. D., A. J. Palmer, W. J. Valentine, S. Roze and J. A. Ray (2006), ‘Counting the Costs of Overweight and Obesity: Modeling Clinical and Cost Outcomes’, Current Medical Research and Opinion, 22/3: 575–86.

van Baal, P. H. M., J. J. Polder, G. A. de Wit, R. T. Hoogenveen, T. L. Feenstra, H. C. Boshuizen, P. M. Engelfriet and W. B. Brouwer (2008), ‘Lifetime Medical Costs of Obesity: Prevention not Cure for Increasing Health Expenditure’, PLoS Medicine, 5/2: e29.

Viscusi, W. K. and J. E. Aldy (2003), ‘The Value of a Statistical Life: A Critical Review of Market Estimates throughout the World’, Journal of Risk and Uncertainty, 27/1: 5–76.

Wada, R. and E. Tekin (2007), Body Composition and Wages, NBER Working Papers 13595 (Cambridge, Mass.: National Bureau of Economic Research).

Wild, S., G. Roglic, A. Green, R. Sicree and H. King (2004), ‘Global Prevalence of Diabetes: Estimates for the Year 2000 and Projections for 2030’, Diabetes Care, 27/5: 1047–53.

Wolf, A. M. (2002), ‘Economic Outcomes of the Obese Patient’, Obesity Research, 10: 58S–62S.

—— and G. A. Colditz (1998), ‘Current Estimates of the Economic Cost of Obesity in the United States’, Obesity Research, 6/2: 97–106.

World Bank (1993), World Development Report 1993: Investing in Health (New York: Oxford University Press).


(1) See references in Finkelstein et al. (2005) and Cawley (2004).

(2) Baum and Ford (2004); Pagán and Dávila (1997).

(3) Bhattacharya and Bundorf (2005). Instead, Wada and Tekin (2007) argue that using a fat composition index rather than the BMI leads to consistent results and both males and females experience wage reductions.

(4) Garcia and Quintana-Domeque (2007).

(5) Paraponaris et al. (2005).

(6) For example, Oliver (2006) argues bad nutrition choices generate bad health, but obesity is not necessarily the missing link.

(7) See Wild et al. (2004); Roglic et al. (2005).

(8) McPherson et al. (2007).

(9) Finkelstein et al. (2003).

(10) Van Baal et al. (2008).

(11) National Audit Office (2001).

(12) McPherson et al. (2007).

(13) Economists still debate about whether these are genuine costs to society. If an employee is sick, does the productivity of the firm suffer proportionately or do other employees work longer and harder to compensate? And if a person dies before retirement, are his or her lost earnings wholly a cost to society or is the cost to society limited to the share he or she would have paid as taxes? And should this amount in any case be offset by any savings in state pension payments?

(14) See Shogren et al. (1994, 1999, 2000); Hayes et al. (1995).

(15) This is not a hard-and-fast rule; some flexibility is allowed to take into account the degree of uncertainty, benefits to specific disadvantaged subgroups, and so forth.

(16) Szende et al. (2007).

(17) World Bank (1993).

(18) Chapman et al. (2004); Robberstad (2005).

(19) See Viscusi and Aldy (2003), who examine the VSL of over sixty studies in ten countries.

(20) Fisher et al. (2005).

(21) Blundell and Costa Dias (2000).

(22) Heckman et al. (1997).

(23) See e.g. Rashad (2006).