Evaluating Health Promotion: Practice and Methods

Margaret Thorogood and Yolande Coombes

Print publication date: 2004

Print ISBN-13: 9780198528807

Published to Oxford Scholarship Online: September 2009

DOI: 10.1093/acprof:oso/9780198528807.001.0001



Chapter 11 Evaluating mass media approaches

Kaye Wellings

Wendy Macdowall

Oxford University Press

Abstract and Keywords

Broad spectrum interventions aimed at reaching the general population make use of mass communication approaches such as TV, radio, press, billboard posters, and leaflets. This chapter examines the difficulties inherent in evaluating mass media campaigns. Topics discussed include the scope of interventions, evaluation research, process evaluation, outcome evaluation, selection of indicators/outcome measures, and measuring unintended consequences.

Keywords:   health promotion, mass media interventions, health services, evaluation research, outcome evaluation

Broad spectrum interventions, intended to reach the general population, make use of mass communication approaches such as TV, radio, press, billboard posters and leaflets. These media are important sources of health information. Not everyone can be reached through community approaches, and high profile communication can reach hidden groups within the general population. Box 1 highlights some of the roles that the mass media can fulfil.

Evaluation is particularly important in the case of mass media interventions because of the high costs involved. Yet the problems inherent in evaluation of all health promotional activities are exacerbated in broad spectrum approaches. The major strength of the mass media (their ability to reach a wide audience) paradoxically presents the greatest challenge for evaluation. Whereas the target audience of an intervention using a formal educational or clinical setting is more easily followed up, surveillance of the mass audience is more difficult. There is less control over the destination and reception of preventive messages and thus they may fail to reach audiences for which they were intended, or they may reach audiences for which they were not intended and be misconstrued. Furthermore, mass media interventions may have unintended consequences over which health promotion agencies have little control.

Two important themes are of particular concern in the context of mass media interventions:

Observed effects will be smaller

Broad spectrum interventions do not target high risk individuals, who have greater scope for change. Change at the level of a large and undifferentiated population is likely to be smaller.

Effects are more difficult to attribute to mass media intervention

Attributing outcomes to a specific intervention is complicated where mass communication techniques are used. An effective campaign will have an effect far beyond its original remit, creating media discussion, providing the impetus for local efforts, and so on. The effects of the intervention are not easily distinguished from other events concurrent with it or subsequent events triggered by it.

The scope of interventions: individual change and social diffusion

The strength of mass media, according to some, lies in helping to place issues on the public agenda and in legitimating local efforts, raising consciousness about health issues, and conveying simple information (RUHBC 1989, Tones et al. 1990). What the mass media do less effectively is to convey complex information, teach skills, shift attitudes and beliefs and change behaviour in the absence of other enabling factors.

(p.147) Two models are applicable in the evaluation of mass media interventions. The first, the ‘risk factor’ or epidemiological model, is principally concerned with changing individual health-related behaviour, based on the premise that this will change health status. The second, the ‘social diffusion’ model, has more to do with the process of intervention and its catalytic effect, and the interaction between the component parts (Rogers 1983). If mass media interventions are effective, it is likely to be because they activate a complex process of change in social norms rather than because they directly change the behaviour of individuals.

An explicit objective of many mass media campaigns, then, is to change the social context and to create a favourable climate in which interventions can be received. The college principal/publican/garage proprietor, previously doubtful about the propriety of installing a condom machine in his sixth form college/pub/garage, feels reassured and validated by a government-backed mass media campaign promoting condom use. The young person, motivated to use condoms by the same campaign, is further encouraged to do so by their ready accessibility in the college/pub/garage in which he studies/drinks/buys petrol. This is sometimes known as ‘diffusion acceleration’.

The discrete contribution of different components is difficult to assess. Influences on our behaviour are multiple and are as likely to counteract as to be in unison with health advice. The biggest changes in behaviour, and hence in health status, are likely to come about through forces other than public education. For example, smoking behaviour is determined by the price of cigarettes, by restrictions on smoking in public places and by voluntary impositions of bans (e.g. by a member of the household).

Because of the high cost of use of the mass media, a campaign of short duration can consume a large proportion of the funds available for preventive interventions. A valid aim, therefore, may be to prompt coverage of the campaign by the free media.

The evaluation process: stages of research

Evaluation research begins with a developmental component, in which the potential for intervention in any health problem is described, along with (wherever possible) identification of factors that might facilitate or obstruct the delivery. This is followed by a formative evaluation in which the candidate intervention is pre-tested. During the course of the delivery of the programme a process evaluation is undertaken (see also Chapter 6) and finally, an outcome evaluation is carried out, which examines effects, effectiveness and efficacy (see Chapter 5 for a discussion of effectiveness and efficacy). If effectiveness is demonstrated and the service continues, routine monitoring and audit subsequently ensure quality of service delivery and continued efficacy.

A circular process

Development of the research and evaluation process is optimally seen not as linear, but circular, i.e. data from the outcome stage of evaluation will feed back into further development of the intervention, closing the loop. Subsequent generations of programmes will benefit from insights into the effectiveness of the last. An important function of evaluation is to provide a means of detecting and solving problems and planning for the future. Retrospective feedback delivered only at the end of an intervention offers guidance when it is too late to act on it. Ideally, the process should be continuous, tracking the progress of initiatives over time and feeding back information that can help operational decision-making, although in some situations this may not be possible or desirable (for further discussion of this issue see Chapter 6).

Formative research

Formative research involves exploratory work to guide the design of the intervention. An important component of formative research is the pretesting of materials, as there is potential for messages to be misunderstood. It is important to know whether an intervention failed in its mission because it was not heard, or because it was not understood. Formative research, which typically uses focus groups, is useful in checking that an audience understands the language and images used.



The formative phase of evaluation aims to anticipate possible unforeseen outcomes. These are often favourable. For example, as a result of AIDS education using mass media interventions, the ruling on TV advertising of condoms was changed in several countries, including France, the UK and Ireland (Wellings and Field 1996). But they may also be unfavourable (substitution effect, see Chapter 7).

Process evaluation

Research should be capable of revealing not only whether a campaign has succeeded or not but why, so that the findings can be used to guide future developments. Process evaluation is often narrowly conceived in terms of measuring ‘dose’ or exposure – either objectively in terms of the extent to which the campaign was aired (number of TV spots; broadcasting times; frequency and duration; audience figures; numbers of posters/leaflets displayed) or subjectively (TV spots seen; time spent watching; time spent reflecting; level of interest) and in this sense more closely resembles audit.
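The objective side of ‘dose’ lends itself to simple quantification. As an illustration only (the spot log and all figures below are hypothetical), exposure can be summarised from a schedule of airings, with gross rating points (GRPs) computed as the sum of the audience ratings of the individual spots:

```python
# Hypothetical TV spot log for a campaign: (week_aired, audience_rating).
spots = [(1, 8.2), (1, 5.1), (2, 6.4), (3, 7.0), (3, 4.3)]

# Gross rating points (GRPs): the sum of the ratings of all spots aired.
grps = sum(rating for _, rating in spots)
weeks_on_air = len({week for week, _ in spots})

print(f"{len(spots)} spots over {weeks_on_air} weeks, {grps:.1f} GRPs")
```

A fuller exposure summary would add subjective measures (spots actually seen, attention paid), which require survey data rather than broadcast logs.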

While outcome evaluation focuses on the goals of a programme, process evaluation is important in providing insights into what factors may hinder or facilitate their achievement (see Chapter 6). Potentially favourable effects of a campaign can be seriously attenuated by an adverse response from the public. By definition and design, exposure to the mass media is universal. Tailoring messages to specific target groups is less easily achieved and problems of social and political acceptance can arise where messages are seen by those for whom they were not intended. Process evaluation also has valuable potential in helping to uncover unintended consequences of intervention.

Process evaluation must take account of the journalistic backdrop against which the campaign is launched. Journalistic coverage of a campaign in the free media will influence the way in which the campaign is received by the public, and may mediate between the originators of a campaign and the intended audience, influencing the selective receipt of messages.


Fig. 11.2. National press headlines following unplanned advance publicity given to the teenage pregnancy campaign: October 2000.


Outcome evaluation

Two key criteria for outcome evaluation are the size of effect and the possibility of attributing the outcome to the intervention. On both of these criteria, mass media work is problematic for evaluation.

1 Size of effect

The size of the expected effect is often not specified in the intervention plan, but a vital question is how large the effect has to be in order to make the case for the intervention having worked. Gains made in the case of mass media interventions may be modest for a variety of reasons.

The size of the target population

Effects will be smaller where the target group is a large and heterogeneous mass.

The nature of outcome measures or end points

Changes may be small because the wrong level of objectives has been chosen. Where measures of morbidity and mortality are used as outcomes in interventions aimed at the general population the sample needs to be impossibly large, and the scale of effect may still be too small to interpret.
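The point can be made concrete with a standard sample size calculation for comparing two proportions. The sketch below uses the normal-approximation formula (two-sided 5% significance, 80% power); the effect sizes are hypothetical, chosen only to contrast a behavioural endpoint with a mortality endpoint:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for detecting a difference between
    two proportions (normal approximation, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Behavioural endpoint: detecting a rise in condom use from 30% to 35%
# needs roughly 1,400 people per arm.
print(n_per_arm(0.30, 0.35))

# Mortality endpoint: detecting a fall in annual mortality from 0.50% to
# 0.45% needs roughly 300,000 people per arm.
print(n_per_arm(0.0050, 0.0045))
```

The two orders of magnitude between the results illustrate why distal endpoints are rarely feasible outcome measures for general-population campaigns.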

The scope of the intervention

Little effect may be seen because the end points are too narrowly conceived in terms of individual behaviour. Procedures need to be developed which attempt to measure effectiveness in terms of changes wrought in the social context.

The time scale

The time scale looked at may be too short. Health promotional efforts must be of long duration to have significant effects; initial differences, though small, may be sustained. The 30-year anti-smoking campaign in the United States is an excellent example of the potential of such sustained efforts.

Some of these issues are dealt with below, under the heading ‘Selection of indicators/outcome measures’. Although difficult to interpret, small effects may be of greater consequence where large numbers of people are involved and, even if unpromising when looked at from an individual perspective, can be important in public health terms (see Chapter 3). It is at the level of subgroup activity that achievements become observable, hence the importance of disaggregating data by groups of interest (segmentation – see Chapter 7).
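The value of disaggregation can be shown with a toy example (all figures hypothetical): an overall adoption rate that looks unremarkable conceals a much larger change in the segment the campaign actually targeted:

```python
from collections import defaultdict

# Hypothetical survey records: (age_group, adopted_behaviour).
# The campaign targeted 16-24 year olds.
records = ([("16-24", True)] * 30 + [("16-24", False)] * 70
           + [("25-54", True)] * 12 + [("25-54", False)] * 188)

def adoption_rate(rows):
    return sum(adopted for _, adopted in rows) / len(rows)

# Disaggregate (segment) the records by age group.
by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

print(f"overall: {adoption_rate(records):.0%}")      # modest at whole-population level
for group in sorted(by_group):
    print(f"{group}: {adoption_rate(by_group[group]):.0%}")
```

Here the overall rate is 14%, but the targeted 16–24 group shows 30% against 6% elsewhere, which is precisely the kind of subgroup achievement the aggregate figure obscures.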

2 Attributing outcome to intervention

A major challenge in assessing the effectiveness of mass media approaches to prevention is that of attributing outcome to intervention, that is, ensuring that the apparent observed effects are truly the outcome of public education campaigns and not the result of a priori differences or differential exposure to something else, such as the mass media generally or local preventive interventions.

Hybrid interventions seem to work better than those with only one component, and the success seems to lie in the interaction effects between component parts. A valid goal for an intervention may be to accelerate trends in already existing behaviour rather than initiate new trends. The effects may be enhanced by synergy; where there are multiple coinciding influences on behaviour, the outcome is likely to be more marked.

‘Background noise’ is often considered problematic in the context of outcome evaluation. The task of outcome evaluation is to look at the extent to which a mass media intervention is successful in harnessing environmental influences to its aims. Instead of trying to disentangle the variables, efforts need to be made to quantify the interaction effects. It may be beyond the power of an evaluation to determine which elements of the programme were effective. Moreover, it is likely to be misleading to attribute, to a particular focused action, an effect that may well have been the product of a complex mobilization producing norm change. Evaluation efforts need to find ways of measuring the catalytic effect of mass media interventions.

Selection of indicators/outcome measures

Outcome evaluation requires the use of indicators by which the outcomes can be measured. Outcome indicators should be determined by the objectives of interventions, and lack of clarity regarding the objectives of mass media interventions is a common problem in evaluation. If objectives are set at an inappropriate level this can threaten the apparent success of the intervention.

Outcomes relating to proximate points along the causal pathway – awareness of risk, intention to change, modification of attitudes – are more feasible to measure but less attractive in terms of ‘proof’ of effectiveness, while distal endpoints are more attractive in terms of scientific rigour, but success in achieving these goals will be remote. In most cases, the proximate outcome variable in the biomedical model will be the adoption and maintenance of behaviours that reduce risk and these may constitute the indicators themselves (e.g. condom use, increased uptake of immunization, change in drinking/smoking habits). Intermediate outcomes might include measures such as serum cholesterol levels and weight, and relate more indirectly to the goals of intervention. The more distal outcomes – incidence of disease or mortality – are not sensitive indicators for the general population, for reasons already discussed.

In practice, a variety of outcome variables is needed, some proximate to the intervention, some intermediate, others more distal. The social diffusion model of health promotion has implications both for the size of the effect needed, and for attributing outcome to intervention. It may be that the effects achieved appear small, but they may prove sustainable because the intervention has triggered a process of diffusion. Therefore a time element needs to be built into the evaluation.

Measuring unintended consequences

Operating within the narrow bounds of a goal-directed model of evaluation risks missing possible adverse outcomes. Prior identification and definition of all the outcome variables may result in unforeseen and unintended effects going unrecognized and unrecorded.

Selection of research design

The question ‘What are the methodological approaches which will allow us to support or reject claims of success?’ follows logically from the question of how success can be measured. Some would argue that the only legitimate goal of evaluation is to assess effectiveness using experimental methods. Randomized controlled trials (RCTs) are rigorous methods of evaluation, but they are not applicable in every case. The success of the experimental approach depends on being able to ensure that observed differences in outcomes do not arise from factors other than the intervention under investigation. Added to the problems of RCTs identified in Chapter 5, there are others that are either unique in the case of mass media interventions, or accentuated by mass media interventions.

The broader the intervention, the more global its remit, the more far reaching its effects, the greater the interaction with other social forces and movements (and ironically the more interesting the outcome), the less amenable it is to evaluation by an RCT. Some of the biggest influences on health-related behaviours and health status occur at national levels. Experimental evaluations are poorly suited to interventions aimed at changing the social context. There are several problems of research design for mass media interventions.


Campaign effects may be dependent on local circumstances that may not be generalizable to other areas or in the future.


Experimental and quasi-experimental designs are difficult to apply to mass media campaigns because by definition virtually everyone is exposed to them.

Comparison group

With mass media interventions it is difficult to identify, and maintain the integrity of, a comparison or control group. Many people and behaviours are not amenable to random allocation.

Size of effect

The tendency to use relatively high discount rates (see Chapter 4) in the evaluation of health programmes does not favour health promotion programmes using the mass media. Only small effects are likely to be seen. Where there is a great deal of background communication going on, the intervention may provide only a very small increment.


The image of pristine treatment and control communities associated with the notion of the controlled trial is a false one. Trials attempting to give a communication treatment to one place and not to a neighbouring place ignore the social process at work.

Practical, ethical and, in some cases, economic obstacles may also impede the implementation of experimental strategies. The options in terms of experimental approaches that can be applied to the evaluation of mass media interventions include:

  • Lagged exposure or phased implementation (staggering interventions over time);

  • Area comparisons (comparing areas with and without interventions); and

  • The application of media weight bias (comparing populations exposed to media interventions with those not so exposed).
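For area comparisons, the underlying logic is a difference-in-differences: the change observed in the comparison area estimates the background trend, which is then subtracted from the change in the campaign area. A minimal sketch, with hypothetical prevalence figures:

```python
# Hypothetical area comparison: behaviour prevalence measured before and
# after a campaign that ran in only one of the two regions.
before = {"campaign_area": 0.22, "comparison_area": 0.20}
after  = {"campaign_area": 0.31, "comparison_area": 0.24}

change_campaign   = after["campaign_area"]   - before["campaign_area"]    # change where campaign ran
change_comparison = after["comparison_area"] - before["comparison_area"]  # background trend

# Difference-in-differences: net change attributable to the campaign,
# under the assumption that both areas share the same background trend.
did = change_campaign - change_comparison
print(f"difference-in-differences estimate: {did:+.2f}")
```

The estimate is only as good as the parallel-trends assumption; if the comparison area is exposed to spill-over coverage in the national media, the background trend itself is contaminated, which is the problem the text returns to below.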

Alternative approaches

There is little point in finding out whether an intervention works better than another or none at all, until we have first established what effects it has and whether there are, in addition to those intended, effects that are unintended and possibly adverse. If the intervention fails to achieve the goals set for it, the question of whether it did so more cost-effectively than another intervention or none at all will be irrelevant (see Chapter 4 for a discussion of measuring cost-effectiveness). Programmes using the mass media should be evaluated with a methodology that respects their character and the way they work, but is credible enough to influence policy decisions. An eclectic approach to research and evaluation is called for; in the words of the late Geoffrey Rose, we need researchers with ‘clean minds and dirty hands’. Alternative approaches, including natural experiments, correlated time series and other non-experimental and quasi-experimental approaches are needed.

Measures of effect

Several methods have been used to measure effects and effectiveness.

Retrospective reporting

In this, respondents are simply asked if they have gained in knowledge, changed their attitudes or modified their behaviour. This method may be the only one available in many cases, yet, because of the absence of baseline data, suffers from biases introduced by desirability response and recall difficulties.

Longitudinal designs

These have advantages over a cross-sectional design and are more appropriate to understanding the process of behaviour change. Panel studies, in which the same group of people is questioned repeatedly over time, suffer from attrition, but this disadvantage may be outweighed by the advantage of being able to track changes in behaviour in the same individuals over time.

Time series data

Evaluation of effectiveness typically uses a pre- and post-test design and these methods are less complex and less costly than RCTs. Time series data use narrative to make the case for observed effects being attributable to the intervention, that is, the equivalent of ‘telling a story with data’. Ideally, data from both before and after the intervention are needed. Such studies offer some improvement over one-shot studies but are still susceptible to desirability response and provide no assurance that what is being measured is the effect of a particular intervention and not a generalized response to the health issue.

Correlated time series

Correlated time series, using pre- and post-test data and statistical modelling techniques, help to trace the causal pathway to the objectives, identifying intervening variables, and also control for a number of problems of inference. Strength of effect after controlling for confounding is taken as credible evidence that the intervention was causing the effect. Structural equations, used to examine the direct effect of the intervention and the indirect effect on intervening variables through the use of regression models, are able to assess the strength of each of the factors.
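One common form of such modelling is segmented (interrupted time series) regression, in which a step change at the launch of the campaign is estimated while controlling for the underlying trend. The sketch below runs on simulated data (all values invented); a real analysis would use a dedicated package with proper standard errors and adjustment for autocorrelation:

```python
import numpy as np

# Simulated monthly helpline calls: 12 months before and 12 after a
# campaign, with an underlying upward trend plus a step change at launch.
rng = np.random.default_rng(42)
months = np.arange(24)
after = (months >= 12).astype(float)
calls = 100 + 1.5 * months + 30 * after + rng.normal(0, 5, size=24)

# Segmented regression: calls ~ intercept + trend + step-change-at-launch.
X = np.column_stack([np.ones(24), months, after])
(intercept, trend, step), *_ = np.linalg.lstsq(X, calls, rcond=None)

print(f"estimated trend: {trend:.2f} calls/month")
print(f"estimated step change at campaign launch: {step:.1f} calls")
```

Because the trend term absorbs the background growth, the step coefficient recovers something close to the simulated campaign effect rather than crediting the campaign with the whole rise.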

Knowledge, attitude and behaviour surveys

Survey investigation is the mainstay of data collection procedures. Typically, knowledge, attitude and behaviour (KAB) surveys are used to investigate exposure to, recall and comprehension of campaign messages and self reported behaviour change. KAB surveys have limitations in the extent to which they can monitor changes wrought in the social context, since their focus is on the individual. They also present problems of validity and reliability and are susceptible to social desirability response.


One solution to the problem of bias in the data collection process has been to attempt to triangulate results, or to cross-validate against other data sources that might provide more objective measures of behavioural change. A good deal of information is available in this respect at relatively low cost and might include a selection from the following: sales figures (of, for example, cigarettes, alcohol, condoms, low-fat spreads); subscriptions to exercise classes and health clubs; immunization uptake; screening uptake (e.g. mammography); HIV test data; helpline statistics; morbidity and mortality data; and media reports. The combination of behavioural and clinical measures also offers potential for triangulation, helping to verify inferences drawn from self-reported data, despite methodological and scientific difficulties.

Media analysis

As noted above, programmes may work because they activate a complex process of change in social norms rather than because they produce behaviour change directly at the level of the individual. Media analysis provides a valuable indicator of changes in the social context. This requires the use of a reputable media cuttings audit agency, or assiduously keeping a cuttings file. Where the intervention is being trialled in one region, such that an area receiving the intervention is compared with another that is not, local media audit is particularly important.

Independence of the evaluation team

The choice of agency to carry out the evaluations is pivotal in determining the quality of the data produced, the manner in which it is used and its impact on future campaigns. There is clearly a political dimension to evaluation, since it may show projects to be less effective than the originators believed they would be. Inevitably, where those commissioning the evaluation are also responsible for the campaigns, it is more difficult to ensure objectivity and impartiality.


References

Kitzinger, J. (1991) Judging by appearances: Audience understandings of the look of someone with HIV. Journal of Community and Applied Social Psychology, 1(2), 155–163.

Rogers, E. (1983) Diffusion of Innovations. The Free Press, New York.

RUHBC (Research Unit in Health and Behavioural Change) (1989) Changing the Public Health. John Wiley and Sons, Chichester.

Tones, K., Tilford, S., and Robinson, Y. (1990) Health Education, Effectiveness and Efficiency. Chapman and Hall, London.

Wellings, K. and Field, B. (1996) Stopping AIDS. AIDS/HIV and the mass media in Europe. Longman and the European Commission, New York.

Wellings, K., Grundy, C., and Kingori, P. (2001) Press Coverage of the Young People's Campaign, October 2001: Report prepared for the Department of Health. London School of Hygiene and Tropical Medicine, London.