Fuzzy Management: Contemporary Ideas and Practices at Work

Keith Grint

Print publication date: 1997

Print ISBN-13: 9780198775003

Published to Oxford Scholarship Online: October 2011

DOI: 10.1093/acprof:oso/9780198775003.001.0001



6 Mismanaging Risk: The Madness of Cows

Oxford University Press

Abstract and Keywords

The previous chapter considered two elements of leadership that were significantly dependent on the theoretical approach adopted: first, the extent to which uncertainty can be eliminated from the appraisal system through allegedly objective approaches; second, the extent to which the theoretical perspective on organizational autonomy modifies the role given to leadership. While the uncertainty problem is conventionally viewed as the concern of individual leaders, this chapter examines whether it may be addressed through the utility of scientific knowledge. Since science promises to distil ambiguity into the Aristotelian binary of truth and error, the chapter explores the specific case of mad cow disease, asking whether science can be used to dispel ignorance and to establish a foundation for managers to arrive at rational decisions.

Keywords: leadership, rational decisions, organizational autonomy, uncertainty problem, mad cow disease, science, ignorance

‘Was the cow crossed?’

‘No, your worship, it was an open cow.’

A. P. Herbert, 1935: ‘The Negotiable Cow’, in Uncommon Law

In the last chapter we considered two elements of leadership which are radically dependent upon the particular theoretical approach used: the extent to which uncertainty could and should be eliminated from appraisal systems through recourse to various forms of allegedly objective approaches, and the extent to which one’s theoretical perspective on organizational autonomy alters the role of leadership. It has been suggested throughout the book that under conditions of uncertainty—which appears to be the norm for most organizations and managers—responsibility for organizations tends to be located in the hands of individual leaders, even if their power to control the organization is extremely limited. This chapter takes the uncertainty problem one stage further by examining whether we are likely to be able to fall back upon the utility of scientific knowledge in a situation of extreme organizational uncertainty and risk. In other words, if we are going to experience greater uncertainty and environmental turbulence in the future, can we assume that a more scientific approach will necessarily help managers to manage? Convention has it that where the world appears fuzzy, the application of science can distil the ambiguous continuum into an Aristotelian binary of truth and error. Thus, in the case of mad cow disease, science can gradually dispel our ignorance and provide a solid foundation for managers to take rational decisions. Once we are certain that we know all about the problem we can also address the issues of culpability and prevention: who or what was to blame, and how can we eradicate the problem? A binary principle already operates in this arena: either individuals were at fault, or the system is the problem. The former assumption is examined here through an ‘action approach’, the latter through a ‘structural approach’, and, within each, the alleged party (individual or system) is either guilty or innocent.
The former (action) approach suggests a high degree of individual choice, the latter (structural) approach suggests a high degree of determinism. What can a fuzzy approach offer us here? Constructivism asserts that the critical issue is epistemological— how we know things to be as they are claimed to be. It is rooted in a scepticism of ‘truth claims’, but has, as I have argued throughout this book, significant practical consequences. The entire debate hinges around modernist notions of science and technology and proffers an alternative route into the complex issues that confront all those engaged in the ‘environmental debate’ (see Bansal and Howard, 1997, for an extended review of this).

But why and where do mad cows fit into conventional management? Surely this is either a biological or a political problem, but not really a managerial issue? As we have previously seen, one of the points raised by approaches that are critical of conventional theories is the whole idea of categorization and boundary construction. By categorizing mad cows as a biological problem we immediately divest ourselves of any responsibility for it, and look to biologists to solve it for us. By categorizing them as a ‘green’ problem we refer to politicians and the environmental lobbyists. Each categorization induces different ideas about culpability and problem resolution, but the significant point is to note how these categorizations do not lie within the problem but within our method of categorization. If we consider the extent to which farm animals form an important element in agriculture, and that agriculture is a major industry in all countries, then what happens to mad cows, and why they are ‘mad’, becomes a business problem as much as anything else. If, for example, a certain famous ‘natural’ water-bottling company suddenly found its water supply contaminated, then an environmental problem becomes a major business headache. In other words, not only can managerial problems in allegedly marginal areas prove enlightening for all kinds of apparently unrelated problems, but the very categorization of problems as ‘relevant’ or ‘irrelevant’ is part of the technique by which agendas are set and power deployed. As one might expect from the implications of chaos theory discussed in chapter 3, just because the government is allegedly in power does not mean that it can control the course of events—as the unravelling of the mad cow problem will reveal. But there is another lesson for all organizations here, and that is to beware of the power of boundary construction generally.
For example, those organizations that are intent on bench-marking their way to the top would be wise to avoid assuming that only organizations in precisely the same market are going to prove useful for them—far from it: some of the most valuable comparisons can only be made when the notion of comparison is unhinged from the idea of similarity. In sum, if you think that mad cows and the response of the authorities to them is only of relevance to farmers then perhaps you should think again.

From the state of nature to the end of nature

After the battle of Agincourt in 1415, at least in Shakespeare’s version of Henry V, the Duke of Burgundy rails against the effects of war and beseeches the Kings of both France and England to look upon the battle as an opportunity for peace to prevail, to drive out the savagery of war with the abundance of peace; he longs for the lush garden of France to recover and for the rusting ‘coulter’ (plough) to be put back to work—to tear up the weeds of war, to ‘deracinate such savagery’. Deracination—to pull up by the roots or to remove from a natural environment—embodies the traditional approach to nature in the West: without human control over it, not only can nature not be exploited to the benefit of humans but nature is actually an anarchic formation which benefits only ‘weeds’, those parasitic, displaced things that have no place in a ‘properly’ ordered world. So keen were the Victorian naturalists in Britain on making sure things were in their proper place—in this case ferns in copper pots by the fireside—that many hillsides lost all their ferns to the collectors (Prance, 1996: 22). European colonists in North America seem to have ‘civilized’ the ‘wilderness’ they ‘discovered’ with a similar kind of ruthless reordering of the natural world (Nash, 1983). Of course, the ‘natural’ world was itself, at least in part, the consequence of prior human reordering by earlier human populations. For example, many of the western prairies were not ‘naturally’ devoid of tree cover, but kept clear by regular burning to ensure the growth of vegetation congruent with maintaining a stock of game animals for hunting (Pyne, 1982). After the European colonization, ‘naturally’, the ecology would again have been altered by the invasion of ‘alien’ plants, animals, and diseases brought by the colonists (Crosby, 1986).
The displaced ferns and the clearings of the American ‘wilderness’ can, as Williams (1972) has argued, be related to several developments, all linked to the Enlightenment events that appear to be responsible for a gradual change of perspective from one where society and nature were essentially intertwined to one where society succeeds in dominating nature (see MacNaghten and Urry, 1995).

Three issues are particularly important here. First, as science and the rational analysis of the material world developed, so the focus shifted from explaining how humans and nature related to each other in religious terms to explaining how the various elements in nature acted. In effect, the question shifted from why the world of nature was as it was, to how it worked.

Second, political debates erupted concerning the significance of the relationship of humans to nature, not in terms of explaining the relationship but in terms of how humans were different from nature. Thus, Rousseau (1968) argued in the mid-eighteenth century that humans in their ‘natural (pre-social) state’ were originally good, but had been infected by human institutions. One can read here a contemporary parallel with Lovelock’s (1987) Gaia hypothesis, where the world is perceived as a living organism, previously unspoilt in its ‘state of nature’ but now rebelling against the destruction wrought by humanity. In the mid-seventeenth century Hobbes (1968), on the other hand, argued the opposite: that in the ‘state of nature’ humans were ‘naturally’ evil—the ‘Yahoos’ of Gulliver’s Travels—and had to be socialized by social restraints. Either way, nature was now irrevocably separated from humanity, although many early European accounts of the native populations of the Americas suggested that their state of nature provided the necessary but oppositional touchstone through which the Europeans constructed their own identity as ‘civilized’ people (Hall, 1992).

Third, the growth of the market economy, presaged by Adam Smith (1974), implied that the operation of the economy was itself subject to natural ‘laws’. Of course, where such ‘natural’ economic laws did not exist—in ‘uncivilized’ lands—then humans had the ‘natural’ right to introduce them: enter the colonies. Even those politically opposed to the imperialist developments, like Karl Marx, were nevertheless unaware of any possible limits to natural resources and considered nature merely as a resource to be exploited for the benefit of (all) humanity.

Such domination can be perceived through the relationship of humans to the animals that they eat. Indeed, Fiddes (1991) has argued that the slaughter and eating of animal flesh is a powerful symbolic re-enactment of human domination. After all, we seldom eat animals killed by accident or killed by other carnivorous animals—they somehow seem to be ‘dirty’ if we accidentally run them over in our cars, but ‘clean’ if they are wrapped in cling film and look as distanced from a once-living creature as possible. This is rather similar to Thomas More’s early sixteenth-century identification of cleanliness with godliness in Utopia, where:

There are special places outside the town where all the blood and dirt are first washed off in running water. The slaughtering of livestock and cleaning of carcasses is done by slaves. They don’t let ordinary people get used to cutting up animals, because they think it tends to destroy one’s natural feelings of humanity. (1965 edn: 81)

All of these arguments have recurred within what has been termed a modernist debate, that is, a debate in which the progressive domination of nature through the rational application of science is the measure of civilization. Indeed, McKibben’s (1990) consideration that we in the West have now reached the ‘end of nature’ (in so far as our relationship with nature is no longer direct but always mediated by technology) is the logical end point of this approach (see also Yearley, 1992). However, even though many current green movements are antipathetic towards such an anthropomorphic view of the world, the main attempt to limit damage to the environment has occurred through the same modernist channels. In effect, while the debates about pollution are as acrimonious as ever, the actions of most greens have been premised upon the use of the very same scientific techniques which allegedly instigated the problem in the first place. In short, the business applications of science may have polluted the world, but only the business of applied science has the ability to stop the pollution, and only through greater scientific knowledge can we begin to construct sustainable developments (see Hajer, 1996 on this approach). For example, drought may be caused by farming or land-use patterns, and it is not always possible to (re)irrigate such lands since this tends to increase the level of salinity in the soil unless the soil is well drained (an expensive option) or irrigated with relatively ‘pure’ water (another expensive option). However, during the 1990s plant physiologists and geneticists have been experimenting with enzyme inhibitors to develop salt-resistant strains of plants, such as Brassica napus, a genetically engineered form of oilseed rape (Guardian, 18 April 1996).
‘Eco-Realists’, such as Easterbrook (1996), suggest that human-induced pollution is declining through the application of science, and, anyway, such pollution is marginal compared to the self-destruct systems operating within nature itself. In a similar vein there have been claims for several years that developments in information technology will eliminate the need for many to make physical journeys, so that we can shop and work from home, etc. Let us hope so, because if everyone who wants a car gets one we may have to live in them.

There are several different responses to this modernist approach. Arguments by the likes of Ulrich Beck (1992) suggest that we have now moved from one form of society to another, in which the very foundation stone of the way we live has changed. Whereas the challenge of the eighteenth, nineteenth, and twentieth centuries was to accumulate wealth, the challenge from now on will be to avoid risk, for we are about to enter the ‘Risk Society’. By this, Beck means that disputes which were previously centred over the distribution of ‘goods’ are increasingly likely to be centred over the distribution of ‘bads’. Where most people used to worry about whether they had food, shelter, and work, in the future they will be more concerned with whether they are eating ‘mad cow’, or drinking poisoned water, or breathing polluted air. As Beck argues, such risk is not new—people have always been poisoned by pollution or died in high-risk adventures—but there are differences now. First, the risk has changed from being personal to being universal. In Beck’s example, Columbus took a voluntary personal risk in his adventures, but we face an involuntary and global risk from the likes of Chernobyl. On the other hand, Chatterjee and Finger (1994) and Yearley (1996) have suggested that local, rather than global, action is the only practical way forward for environmentalists—especially in the absence of meaningful and powerful transnational environmental organizations. Second, the risks have changed from being visible to being invisible: it was clear to all that the streets of most medieval towns were littered with filth, but we cannot see the deadly viruses that infect our food and air, nor the immediate risk of pouring 20 tons of aluminium sulphate into drinking water—as occurred in Camelford, England, on 6 July 1988, where the cluster of leukaemia cases in the town has generated considerable public alarm (Observer, 14 July 1996).
Third, previous risks were the consequences of an undersupply of hygienic technology, whereas today’s risks are a consequence of an oversupply of production. For example, on 28 May 1996 the British Ministry of Agriculture, Fisheries, and Food (MAFF) announced that nine unnamed brands of baby milk had been found to contain phthalates—synthetic chemical compounds—which were in excess of the European Union (EU) precautionary limit, but below the safety limit. Phthalates may reduce fertility if present in sufficient quantities, but since, according to the government, there was ‘no risk’, there was ‘no point in naming the brands’ (Guardian, 29 May 1996).

We might add a fourth ingredient to Beck’s list of differentiation, and that would be that the late twentieth-century crop of ecological threats are not just unseen and unknowable but, by their very randomness, pose a serious question about the ability of humans to control nature. Thus, not only are we threatened by natural mutations that appear to outstrip our knowledge of disease eradication, such as ‘superbugs’ that live on, rather than die from, antibiotics, but we appear to be essentially unable to control nature and secure the kind of progression to a better life that science once promised us (see Vidal, 1996). The question, then, is: can modernist science dispel modernist risks? We are, according to Beck, ‘living on the volcano of civilization’.

A fifth form of change could also be added to Beck’s list, and that is an increasing doubt about the ability of science to generate knowledge that the public accept as ‘true’. So whether the risks are actually increasing or not, and whether they are changing in form, is compounded by an increased scepticism on the part of the public that scientists know what they are talking about with any certainty. We only have to run through the changes to ‘healthy eating’ since the 1970s to know that, almost by definition, whatever is regarded as healthy today will turn out to be unhealthy tomorrow. Indeed, the business of dieting in the 1990s is a huge industry and, as Klein (1996) suggests, since the rate of obesity seems to increase in direct proportion to the number of people who diet to lose weight, perhaps we should do the opposite: eat fat to get thin.

On the other hand, there is, of course, a range of pressure groups, such as green political parties, Friends of the Earth, Greenpeace, and the like, which have for several years been attempting to influence the direction taken by governments and by businesses in an effort to persuade them to protect the environment. Many of these groups began life as protest movements, and the direct action of those like Greenpeace carries resonances of these origins. However, while direct action still occurs within movements dedicated to animal rights, or ‘deep green’ groups committed to the prevention of the development of new roads, etc., many of the more established groups have become considerably more professional and scientific in their approach to the environment. In other words, they have modernized themselves. Students of democratic parties will recognize a parallel debate here, as democratic organizations take on the clothes of their autocratic opponents to engage with them on a more level playing-field—what Michels (1949) referred to in 1915 as ‘the iron law of oligarchy’: all democratic institutions become oligarchical over time. Or, in this (olive)green version, ‘the iron law of olivarchy’: all anti-modernist protest organizations become modernist over time. So what often starts out as an emotional rebellion against the consequences of ‘science’ is ultimately forced to take on the clothes or camouflage of the scientists if they wish to be successful. To put it another way, if you want to persuade the government to stop building nuclear plants on your doorstep then mere protest is unlikely to be effective—after all, such a project means a considerable number of jobs and a consequential increase in local spending. However, if you can garner the resources to fund a research project undertaken by ‘legitimate scientists’, which ‘proves’ a link between such plants and cancer amongst the local population, then you have a much better chance of success.
The moral of the tale is clear, and it is one reason why so much effort is now spent by green protest groups in monitoring pollution in as systematic and scientific a way as their resources will allow: facts count.

Now the rub: Greenpeace’s scientific assessment of the dangers of dumping the Brent Spar oil rig at sea in 1995 was probably wrong—but it worked in so far as Shell changed its mind. A similar controversy raged through Britain in 1996 and concerned bovine spongiform encephalopathy (BSE), popularly known as ‘mad cow disease’; the next section outlines the story before we begin to assess the utility of competing forms of analysis.

Relatively mad cows

In the mid-1980s a Kent cow developed strange patterns of behaviour that involved stumbling, falling, and a general inability to balance; it was diagnosed as BSE, or mad cow disease. A further twenty cases were discovered in 1987 and this rose eightfold by the following year, when a committee under Sir Richard Southwood, Professor of Zoology at Oxford, met and recommended that all infected cattle be destroyed and that the feeding of cattle or sheep protein to other cattle or sheep be halted. It also asserted that, although no one knew what caused BSE nor how to cure it, it could not be passed on from cows to humans and had very little to do with human health. The numbers of cattle with BSE, and of those slaughtered, are shown in Figure 6.1.

This is not the first time that Britain has been at the centre of a cattle ‘plague’ of one sort or another. As Atkins and Brassley (1996) note, trans-species diseases ‘plagued’ the nineteenth century in particular and bovine tuberculosis (TB) was held responsible for 800,000 human deaths before it was finally eliminated in 1960. Compulsory slaughter was first carried out to fight the rinderpest outbreak (cattle plague) in 1865—though the then Archbishop of Canterbury suggested that a ‘Day of National Humiliation’ might work better (ibid.: 15). Then, throughout the late nineteenth and early twentieth century, TB in cattle continued, despite a rash of parliamentary acts designed to stem the disease. The compulsory mass slaughter of infected animals had first been mooted in the 1890s, but the policy was not executed until 1960 (sixty years after Finland had dealt with a similar problem by a mass cattle slaughter).


Fig. 6.1 Cattle with confirmed BSE and cattle slaughtered, 1988–95 (Source: MAFF)

The Gowland Hopkins Committee in 1934 summed up the position thus:

The total eradication of bovine tuberculosis is generally agreed to be the only complete solution to the problem of tuberculous milk … But any such scheme must take account of conditions peculiar to Great Britain. In the first place, the incidence of tuberculosis among our cattle is so high that the wholesale slaughter of infected animals … is out of the question. Not only would its immediate cost be prohibitive, but it would also seriously contract the supply of milk. (quoted in Atkins and Brassley, 1996: 16)

As we shall shortly see, the 1990s BSE case is uncannily similar in the litany of mistakes made by politicians.

The feeding of boiled-down animal remains to herbivores, although a relatively novel idea, began in Britain and Europe in the 1960s. However, in Britain alone the rendering process was operated at temperatures of less than 100°C—despite warnings from Bram Schreuder, of the Institute for Animal Science and Health (IASH) at Lelystad in Holland, that this process was dangerous and that Britain should switch to the level of 130–140°C used elsewhere. This warning, in 1980 (six years before the first case of BSE), was ignored (The Economist, 30 March 1996) (see appendix 1 for the timetable of related events).

Most—but not all—bacterial and viral diseases survive by using their own proteins and are destroyed by boiling at 100°C; it is these foreign proteins that the victim’s immune system can recognize and destroy. However, encephalopathies, such as BSE and its human equivalent Creutzfeldt-Jakob disease (CJD), depend on their victims’ proteins and therefore remain invisible to the host’s immune system. Encephalopathies seem to be caused by a misshapen protein, called a prion, which does not contain any genetic material, but enwraps a healthy protein, thereby forming another malignant prion, and so on in a chain-reaction, until the brain tissue of the host becomes sponge-like. The only known form of transmission across hosts is the consumption of infected nerve cells. The pattern of transmission across species is complicated: mice can be infected with BSE, but hamsters appear to be resistant.

However, despite earlier claims that transmission could not occur from cattle to humans, the possibility that humans can develop CJD from BSE was raised by the work of James Ironside (The Economist, 30 March 1996: 25–7), and on 20 March 1996 a scientific committee of the government admitted that there might be a link between the two diseases. (Estimates of human deaths from CJD vary from zero—because, the optimists claim, it cannot be caught from BSE—through ‘hundreds’, to all those who have eaten beef in the last ten years.) Four days later McDonald’s stopped using British meat in its burgers. The following day Douglas Hogg, the British Agriculture Minister, suggested the slaughter of 4.5 million cattle to his cabinet colleagues, but the plan was thrown out. By 27 March 1996, the EU had imposed a ban on the export of British cattle, a ban that was, according to Keith Meldrum, Britain’s chief veterinary officer, ‘hasty, ill-prepared, disproportionate and unscientific’. Douglas Hogg even proclaimed that ‘British beef can be eaten with confidence’ and that the government had ‘no intention of adopting any such measure [mass slaughter]’. Nevertheless, beef sales plummeted, the British beef industry appeared doomed, and the government ordered the temporary removal of cattle of more than 30 months old from the food chain. Within a week this had become a permanent ban under an emergency control order, and the government became committed to compensating cattle-owners (Dispatches, Channel 4, 5 December 1996).
Since the claim was that cows cannot pass BSE on to each other, the cull of 42,000 cattle announced on 24 April, and which started on 3 May, should have removed the problem—that is, until the publication of advice by the Scottish Office on the same day which suggested that, since two-thirds of cows with BSE were born after the infected feed was banned, they may be able to pass it on to their calves through maternal transmission, though ‘not at any significant level’ (quoted in Guardian, 4 May 1996). Again, on the same day, the EU’s worldwide ban on the export of British beef was challenged in the High Court by the National Farmers’ Union and the International Meat Traders’ Association, which argued that the ban was ‘disproportionate’, ‘a misuse of power’, and had been ordered ‘merely for political expediency’ (Guardian, 2 August 1996).

On 17 May Douglas Hogg offered to increase the cull of cattle to 82,000. By this time the German beef industry was also collapsing, as the consumption of its own beef had dropped 70 per cent. In Britain, without any immediate prospect of the export ban being lifted, Prime Minister John Major announced a policy of non-co-operation with the EU. By the first week of June the government had produced a 120-page report detailing the cause of the problem—meat from infected animals had been mixed with ‘ordinary’ feed—and announced that the eradication programme would eliminate BSE shortly after the year 2000 (when 1,000 cases would remain). Two weeks earlier an Oxford University team estimated that between 15,000 and 24,000 animals would develop the disease between 1996 and 1999 (Oxford University Gazette, 16 May 1996).

The culling programme came unstuck when several slaughterhouses pulled out of the contract—on threat of exclusion from their ordinary trade with supermarket chains, which did not want their customers to consume beef slaughtered at the same places that dealt with infected animals. By 18 June, Hogg had rejected the possibility of any further cull to placate the still hostile EU, saying that such a measure would require a further 67,000 cattle to die (cattle born between 1989 and 1990). Two days later this number was added to the growing list of animals needing to be culled, and the number of legislative bills blocked by the British government at the EU had reached 100. A further two days saw the end of the first crisis, when an agreed plan to slaughter 120,000 cattle in all was exchanged for a lifting of the British legislative blocking at the EU and the removal of the ban on certain beef by-products. The agreement was either a ‘victory for common sense’ or a ‘disaster’, depending on which paper one read, but the formal ban on beef exports remained and on 12 July the European Court of Justice rejected the UK government’s bid to overturn it, for ‘despite the harm being done to Britain’s beef industry the ban was a legitimate measure to protect public health’ (quoted in Guardian, 13 July 1996; see appendix 2 for the estimated costs of the BSE crisis). It added that no requests had been made to import British beef from non-EU countries since the ban.

By 1 August the government admitted that the ‘impossibility of maternal transmission’ had now moved from ‘transmission impossible’ to ‘transmission possible’: cows could, after all, and despite all the evidence, pass BSE on to their calves. This had followed a 1991 statement from David Maclean, then the Junior Agriculture Minister, who had asserted that it was of ‘no significance to public health’ and chief veterinary officer Keith Meldrum’s comment that, ‘if transmission does occur it is unlikely to be a major feature of this disease’, and a 1994 statement from an Agricultural Ministry spokesperson that ‘the theory of maternal transmission is basically rubbish’. Luckily the Spongiform Encephalopathy Advisory Committee (SEAC) agreed that this still did not pose an increased risk to public health (quoted in Guardian, 2 August 1996). Indeed, on 3 August MAFF suggested that after a series of experiments there was no evidence that BSE could be transmitted from mother to calf by milk, and that no further experiments were being undertaken; in fact, MAFF asserted on 5 August that ‘claims about new tests are rubbish. There are no plans to conduct any more’ (quoted in Observer, 11 August 1996). Unfortunately for MAFF, the German authorities remained unimpressed and on 5 August they called for a selective cull of calves of BSE-infected cows (Guardian, 6 August 1996). A week after the MAFF denial of the milk experiment the ministry decided that, after all, there was an experiment under way.

An independent group of Oxford University scientists then claimed that as many as 700,000 BSE-infected cattle may have been fed to humans before the bans became effective (Guardian, 29 August 1996). However, the good news was their claim that, even without culling, the disease would die out naturally by 2001. Even better news followed on 26 November when Professor Anderson, from Oxford’s Zoology Department, suggested that this figure could now be brought forward to mid-1998 because there were possibly as few as 150 cattle left in Britain that were less than 30 months old and likely to develop BSE (Guardian, 27 November 1996). Doesn’t time contract when BSE takes hold? Two weeks later news of a further cull of 100,000 older cattle was leaked to Channel 4 (12 December 1996) and admitted by Douglas Hogg on 16 December, in the hope that further elements of the European ban would be lifted. We shall meet the problem of unconditional gestures in the final chapter on negotiating.

The government’s position on the protection of the consumer and the safeguarding of the beef industry has, with (p.161) some exceptions—such as the 1980 IASH warning on ‘cold’ rendering—been driven by a reliance on the scientific experts. However, a major problem for the government has been the extent to which knowledge has proved to be scientific, true, and trustworthy; these are not all the same. For example, the British government has been insisting for a long time now that British beef is safe to eat because: (a) the infected offal is no longer in the food chain (this assumes that every slaughtered animal is inspected by a responsible person who can prove that no trace of offal ever slips back into the food chain, however accidentally); (b) the disease does not occur in the non-offal meat (this assumes that the previous ‘truth’—that humans couldn’t get the disease from cows at all—is not a testimony to the accuracy of the current scientific truth); and (c) that even if the scientists were wrong, the chances of any particular person contracting the human form of BSE, CJD, are infinitesimally small. In fact, the chances of that person being born are much smaller, but this doesn’t stop people being born; the chances of winning the national lotteries in most countries are also smaller, but it doesn’t stop governments encouraging people to take a (lottery) risk that may just come off, at the same time as suggesting that we take a (mad cow) risk that is inconceivable.

The apparent absence of leadership and responsibility mirrors that which transfixed Exxon when the Exxon Valdez oil tanker ran aground in 1989, and no senior executive talked publicly for a whole week after the disaster. In contrast, when the Piper Alpha rig blew up in 1988, Occidental’s head, Hammer, flew straight to the scene of the disaster and minimized the damage to the company through his public appearances. Unless those responsible for similar environmental disasters take immediate actions that exceed expectations, and unless mistakes are admitted, it seems unlikely that crises wither away quietly and much more likely that the media will ferret around for further damning evidence (Pedler, 1996).

In what follows I take three different approaches to the issue of environmental 'pollution', to demonstrate not just that theory can add something of value to the debate on the environment, but also to reveal some of the differences that theories make for managerial practice. At a very general level, these (p.162) encompass: a structural approach, in which the structures in the environment or organization coerce individuals into certain forms of behaviour; its opposite, an action approach, in which the focus is upon the conscious action of individuals and groups in the construction of their world; and, finally, a constructivist approach. Constructivist accounts sidestep the debate of the other two approaches and, as I have already suggested, concentrate more upon the epistemological areas of knowledge construction and verification. The discussion draws liberally upon different examples of ecological concern, but also spends a considerable time running back, or perhaps that should read 'wobbling' back, over the 'mad cow' problem in Britain.

Structural approaches

Structural approaches tend to locate the causal explanation for events in the structure of the environment or organization, or whatever phenomenon is believed to lie outside the individual. Structural accounts are usually relational; that is to say, it is the relationship between elements that generates the motivation to act. Thus, my action is determined by my place in the social order, and the direction of my organization is determined by its relationship to its environment. In the most extreme form of this kind of perspective, the views, interpretations, and motivations of actors are deemed irrelevant to the unravelling of events. The implication is that individuals and groups act only when external pressures coerce them.

In this form of argument the cause of the social protest about BSE lies in the environment which manifests BSE itself. Thus, phenomena external to the protesters cause the protest to occur: excessive road-building causes road protest in the same way that increased environmental pollution generates green parties. This ‘structural’ response is the equivalent of seeing trade unions and labour parties as the response to the early growth of the industrial working class in the nineteenth century. Since, by the 1990s, the working class has been fragmented into a myriad of different forms, and since its problems (p.163) have also altered radically, the consequence is the development of new problems ‘out there’ and a new response, as seen in green parties which have only a very loose identification with social class (see Scott, 1992). This particular approach has a long and substantial history of support from Marx to Dahl, and it does assume a direct connection between external ‘reality’ and consequent mobilization by those affected by the ‘reality’.

The focus on external causation, and away from the volitional acts of individuals, also implies that the responsibility for error shifts from the individual mistake-maker, and the scientific resolver of the mistake, to the institutional facilitation and resolution of crises. A persuasive account of another disaster waiting to happen—the Challenger space shuttle, which blew up in 1986 killing all seven crew—can also be developed through a structural account. Conventional accounts of the disaster focus on the faulty construction of the 'O'-rings, which sat between sections of the solid-propellant rocket-booster and failed to prevent the escape of gas created by the ignition. Here, a 'technical mistake', presumably made by an individual, led to a catastrophe, as the importance of the launch took precedence over the potential safety problem. But a space shot like Apollo 11 involved over 5,600,000 parts, so even a 99.9 per cent success rate leaves you with 5,600 defects. Hence, the system itself is problematic, irrespective of the actions of individuals. And when the opportunity for large profits is added into the mixture (NASA paid $120 for nuts and bolts that cost $3.28 in the local hardware store), you have a disaster waiting to happen.
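The arithmetic behind this claim is worth making explicit. A minimal sketch in Python, using only the part count and success rate quoted above (the assumption that parts fail independently is mine, added for illustration):

```python
# Expected number of defective parts when each part fails
# independently with a fixed (small) probability.
parts = 5_600_000          # approximate part count quoted for an Apollo-era space shot
success_rate = 0.999       # a 99.9 per cent per-part success rate

expected_defects = parts * (1 - success_rate)
print(f"Expected defects: {expected_defects:,.0f}")  # prints "Expected defects: 5,600"

# With this many parts, the chance that *every* part works is effectively
# nil: 0.999 ** 5_600_000 is roughly e**-5603, which underflows to 0.0.
p_all_ok = success_rate ** parts
print(f"P(no defects at all): {p_all_ok}")
```

The second calculation sharpens the structural point: at this scale, some failures are a statistical certainty, so blaming whichever individual happened to touch the failing part misreads the system.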

Vaughan’s (1996) account of the same disaster stresses instead the way the entire NASA culture operated to normalize such problems, to the extent that the management ignored advice to the contrary. It was not, then, that a rule had been broken by a deviant individual, but that the organizational culture redefined what counted as a rule that could be broken. It is this ‘normalization of deviance’, an issue still concerned with the structure of the organization, therefore, that can explain how such catastrophes regularly occur. In the BSE case it is not that a deviant individual mixed the wrong feed by mistake, or that an individual scientist made a mistake, but rather (p.164) that the institutional system under which farmers operate acts to encourage the ‘normalization of deviance’.

The opposite may also occur. That is, the scientific establishment may close ranks to ostracize those who actively challenge the norm. We can see this historically in the attacks upon Galileo, as well as in contemporary (1990s) developments over Denis Henshaw's research, which suggests there may be a plausible link between overhead power lines and cancer through the concentration of radon and other potential carcinogens around the cables themselves. In this case, Henshaw has been accused by the public relations officer of the National Radiological Protection Board, Matt Gaines, of 'setting up his case for more [research] funding' (quoted in Times Higher Education Supplement, 29 March 1996). Whether or not there is a link is not the point here; what matters are the consequences that an institution's culture has for its conduct. In Sweden the possibility of a link has led to 'cautious avoidance': they do not build very close to power cables. In Britain, because there is no scientific consensus on the 'truth', building continues under power cables. If house buyers' behaviour mirrors beef-eaters' behaviour, then the absence of irrefutable evidence will not stop a fall in the value of houses built under or very close to power cables.

Another structural way of 'hunting-the-scapegoat' is to consider the extent to which political muscle encourages forms of farming that are likely to do the most damage to the environment. For instance, under the EU agricultural subsidies operative in the 1990s, the highest subsidies were often provided for those crops that require the most chemical additives: growing linseed, for instance, attracted £520 per hectare. Now, the bizarre issue here is that the EU also pays farmers not to produce anything at all: under the 'set-aside' scheme a farmer can claim £340 per hectare for not growing anything. Why, you might be wondering, does the EU pay farmers for doing nothing? In theory, it is because European farmers currently overproduce and there is no market for the resulting crops. Under almost every other system overproduction leads to contraction, widespread unemployment, and plant closure. But in farming we insist on paying farmers compensation for overproduction—and it is the structure that generates the results, not the acts of individuals.

(p.165) The same has occurred with BSE. If food factories produce contaminated produce they are closed down, period. But for farmers with contaminated cows an array of compensatory payments is being devised. Thus, we have a system that encourages farmers to farm as intensively as possible, using the maximum amount of chemical agents, and which encourages them to adopt 'novel' cost-reduction strategies, like feeding scrapie-infected sheep to cows. If ever there was a case for going back to less intensive farming to protect 'nature', it is this. But the main point here is to note how the system is held responsible for the problem: we cannot blame farmers for being rational, can we? After all, they are in the farming business to make a profit, and the system determines where the highest profits come from. In short, the generation of the problem lies within the structure of the system, and so must its resolution.

Action approaches

An alternative approach is to shift from this ‘structural’ to an ‘action’ approach, in which it is the action of an individual or group which brings about the focus on the issue and not vice versa. For instance, an action approach implies that individuals do not wait passively for the system or environment to indicate what to do, but play an active role in moulding that environment: don’t wait for your customers to tell you what they want, tell them what they need.

In the social class case it could be argued that people do not have objective social class positions which determine their political response, but that they come to identify themselves with a particular social class as a consequence of taking certain actions. If I consider my identity to be determined by my work, then perhaps I regard myself as first and foremost a member of the working class. But in this perspective the division between social classes is not 'out there' but in academic and government papers, which try to impose—and sometimes succeed in imposing—their pattern upon the population. If, though, I decide that my identity actually relates to my skin colour and/or gender and/or (p.166) sexual orientation, then the term social class may have no relevance to me whatsoever. In the green case this would be to suggest that people do not join green parties because the environment is being polluted, but that they are persuaded that the environment is being polluted—and that something can be done about it as a consequence of the actions of the green party. It is, in effect, not pollution 'out there' that coerces people, but people's action that allows pollution to be 'discovered' and stopped. For example, dead fish in a stream do not persuade people that the stream is polluted, because it may be that God has struck all the fish dead, or that all the fish are old, or that there is no explanation and therefore nothing that can be done about it. But if we are persuaded that the siting of a chemical factory near the stream is the cause of the pollution, then we can interpret the evidence of dead fish in a different way. The data remain the same: the fish are dead, but the explanation changes and, as a result of the changing explanation, we may decide to act. The consequences of this shift are profound, because it now seems that the more green parties and research we have, the more likely we are to 'discover' an increasing level of pollution.
This is the equivalent of flooding an urban area with police and ‘discovering’ a massive increase in the crime rates over the last year—when there were no police and no means for the population of alerting the police as to alleged criminal acts.

For green activists the problem has always seemed to be that many issues are regarded as single issues: once the mad cow problem is identified and action taken, the population can relax and forget about the environment. But note how this suggestion reproduces the structural account—the outside reality causes the mobilization of the population. An action approach might suggest that environmental and other social ‘problems’ are related. In short, that the siting of the most polluting plants in the poorest countries, and poorest areas of a country, is not coincidental but a manifestation of a link that exists between poverty and pollution. Hence, what appears to be a single issue—pollution—could conceivably be the foundation for a politically oriented group to mobilize the local population and to create, not simply wait for, the agent of social change (see Szasz, 1994, on this). Mad cows might not be just an ‘accident’ (p.167) that can be solved, but a consequence of a politically and economically biased system that protects the landowner at the expense of the non-landowners.

Action approaches, then, tend to reject the idea of external forces coercing individuals, and focus instead upon the sense-making activities of actors. Under this form of explanation it is the interpretive actions of people that explain what is going on, not any system that determines their action. Moreover, the action approach also recognizes the responsibility of actors for their own actions. Thus, individuals can no longer blame the system for the problem but have to bear the consequences of individual deviance, such as when a Worcestershire farmer was fined £10,000 for selling cattle in 1995 without revealing that the herd had previously been infected with BSE (Guardian, 16 April 1996). Relatedly, we hold managers personally responsible for deviant or illegal actions, even though the market system in which they operate might induce them to act illegally in order to maintain competitive advantage.

Much of the debate about BSE hinges on boundary destruction, in particular the destruction of ‘natural’ boundaries by deviant individuals. If we return to the discussion about ‘nature’ we will see that there is a degree of ‘naturalness’ and ‘innocence’ attached to the state of nature before the arrival of humans. Thus it is only when humans begin to interfere with the naturalness of the world that things go wrong. In the mad cow case the boundaries that nature imposes upon its fauna suggest that some are vegetarians, some carnivores, and some omnivores. When any of these categories cross the boundaries, trouble ensues. Hence, vegetarians are regularly assailed by omnivorous humans for their ‘unnatural’ behaviour. But the most sacred boundary is not where carnivores are forced to eat vegetables, but when vegetarians are forced to eat meat. Enter the sheep-eating cow complete with certain damnation written across its forehead. So we do not need all the scientists to prove any links because the problem is a simple but dastardly transgression of a moral boundary by deviant individuals.

Such a boundary explanation also serves the interests of the government well, since it provides an explanation that also implies that BSE—while caused by the virus breaking the boundary between sheep and cow—cannot be transferred (p.168) across the ‘stronger’ boundary between humans and any animal. A guilty individual obviously caused the problem by breaking the boundary taboo between animals, but people cannot break the animal-people boundary because they are ‘naturally’ impervious. Well, they were until a US experiment suggested that scrapie could not be to blame because it had a completely separate pathogenesis from BSE. In other words, BSE may be a new disease and not one inherited from sheep scrapie after all (Newsnight, BBC 2, 4 July 1996).

At one level, this looks like a structural explanation: the structures that divide different categories of creature are intended to ensure the replication of all, and when these structures are breached then the problem occurs. On the other hand, we have to consider the extent to which the boundaries are 'out there' in the system itself, or merely a consequence of our own interpretation. For example, are humans 'naturally' omnivores? And if they are, does this make vegetarians 'unnatural'? If cows eat animal by-products, is this morally wrong or biologically wrong? If we could prove the practice to be safe, and a survey of all cows resulted in no objections, would this make it all right?

It is also possible that persuasive scientific evidence about risk reduction can lead to an unintentional increase in risks, where individuals interpret policies in ways contrary to those intended by the systems' designers. For example, John Adams (1996) has suggested that the consequence of seat-belt legislation, intended to reduce accidents, may well be the opposite. If we all drive Volvos that are built like tanks, and are belted and air-bagged in a cocoon of safety, we may well start driving more dangerously because we think we are safer. Of course, the opposite is true for pedestrians, who may be regularly mown down by such 'safe' drivers as the 'risk compensation hypothesis' kicks into effect. The consequence in the BSE case might be that a massive increase in the consumption of alternative products itself generates a health risk: the demand for lamb may persuade farmers not to check for contamination from Chernobyl, consumers may switch to chicken riddled with salmonella, or large numbers of people may become vegetarians without knowledge of the necessary vitamins previously derived from meat.

(p.169) But what happens if no knowledge exists of the external picture? What if we are not told about mad cows or ozone holes and the like? Do pressure groups form to defend interests when there is no information to show that their interests are being affected? Can individuals be held responsible for their action if they literally do not know what they are doing? Since the degree of veracity of the various proclamations about mad cows seems to correlate as much with my choice of shirt as with anything else, should we look elsewhere for another way of reflecting upon the problem of the environment? In the last section I turn to a contemporary form of analysis, constructivism, to see whether this may shed new light on the topic.

A constructivist approach

As I have already explained, the constructivist approach adopts a sceptical attitude towards knowledge of all forms, on the assumption that we can never achieve an account of the world except through language. Since language is not a mirror of the world but a producer of it, we are constantly faced with developing our ideas about the world through a permanently opaque fog of words. We can, in effect, never get to the world as it is, but only to an appreciation of that world through language. For example, we cannot touch mad cow disease and know that it is mad cow disease. However, we can be persuaded by 'experts' that a wobbly cow is indeed suffering from the disease or that the tissue under the microscope is an example of BSE—but both cases require us to be persuaded: the 'facts' do not speak for themselves. Naturally, we can dispute what the 'expert' is saying, but since we don't have an equivalent level of expertise (otherwise we wouldn't need the 'expert') we are really not in a position to generate legitimate alternative explanations of the phenomenon. In other words, the things which exist do not have an 'essence' that speaks to us directly, and therefore some human has to intervene and, through the medium of language, persuade us as to the 'truth' of the matter under consideration.

(p.170) The modernist account, grounded in 'hard' science, suggests that we can 'know' the 'truth' about BSE and quantify the risks. But the trouble seems to be that the population at large simply does not believe the 'truth' as revealed by the scientists. Perhaps, then, the issue of risk ought to be relocated away from the mathematicians and towards the context in which the risk occurs. Since the 'truth' of BSE appears to have mutated almost as many times as the disease itself, on what criteria should people heed the advice of the scientists? We know politicians are economical with the truth, but the whole of the modernist era has revolved around scientific truth. In the BSE case the original research concluded that beef offal should not be banned from human consumption because 'it was most unlikely BSE will have any implications on human health' and because, in the words of the chair of the committee, Richard Southwood, 'We felt it was a no-goer. MAFF already thought our proposals were pretty revolutionary' (quoted in Wynn, 1996). In effect, it would appear that the scientists' policy proposals were shaped by the political 'realities' as they perceived them, at the same time as the political realities were shaped by the scientific 'truth'.

This is not to say that all scientists believe the same thing. Far from it: Richard Lacey, a microbiologist, suggested in the House of Commons in 1990 that a generation of people might be killed if his worst fears were realized, a comment that stung Wiggin, the chair of the House Select Committee on Agriculture, to say that Lacey was 'losing touch completely with the real world' (quoted in Times Higher Education Supplement, 19 April 1996). Yet it may have been Lacey's comment in the Sunday Times, recommending that people under the age of 50 should not eat beef, that inaugurated the scare. What marks Lacey out from many on the 'other side' is his refusal to fall into the 'proof' trap. As he says: 'You can't prove that smoking causes lung cancer in an individual…what you are talking about is epidemiological association' (quoted in ibid.). This is similar to the debate surrounding the British government's attempt to overturn the export ban on beef: the government felt that because 'a definitive stance on the transmissibility of BSE to humans is not possible', the ban should be lifted; but the view of the European Court of Justice was that, because 'the (p.171) risk of transmission cannot be excluded', the ban should stay (see Macrory and Hession, 1996). In other words, the dispute surrounds the significance of scientific proof: does no evidence for a link mean that there is no link? This particular problem has enormous ramifications for all kinds of businesses. Does no unequivocal evidence of leukaemia in the surrounding area mean nuclear processing plants are safe? Does no unequivocal evidence of a direct link between respiratory diseases and traffic mean that we should maintain vehicle production? Does no unequivocal evidence of a direct link between health problems and the use of VDUs mean that we can continue to use our computers without fear for our safety—or are the potential costs of an equivocal answer too economically damaging?

Political and economic 'realities' also impeded the development of research into BSE. As Anand and Forshner (1995) argue, the intervention of politicians into the debate in a normative role is seldom constructive: Gummer, the then British Minister of Agriculture, fed his 4-year-old daughter a beefburger on television at a time when, we now 'know', it was probably not a good idea. Without a scientific background, politicians tend to leap into action to defend scientists (when the evidence suits them) and then find themselves in a rather awkward situation when the scientific evidence is inverted. Yet the inviolable role allotted to science also means that research monies for environmental issues tend to be distributed only to natural scientists and not to social scientists, who might be able to offer alternative advice to that which has, in the BSE case, landed several government ministers in the veritable mad cow muck. As Anand and Forshner conclude: 'the residual impression is one of individual competence shackled to systems which were unhelpful to the maintenance of confidence in the beef and related industries. This food scare, like the "salmonella-in-eggs" crisis, illustrates the fact that in the absence of complete information, negotiation, and to an extent persuasion, play a key role' (1995: 231).

The critical issue is that the government appears to insist that such environmental problems are wholly technical and can, therefore, be resolved by scientific experts. In this case Dorrell, then the British Health Minister, insisted that he could do nothing because he was merely reflecting what the scientists (p.172) said. Yet we 'know' that the BSE problem has little to do with the technical problem, because we don't actually know much about the disease: hence, for instance, the interest shown in a new claim by Wisniewski that scrapie may be transmitted by hay mites (Observer, 21 April 1996). That is the problem—it is an issue of risk management, not science. We need to consider what makes institutions generate patterns of action that lead to or exacerbate problems, not merely find the individual culprit or ask a scientist for the 'true' picture and assume this will persuade a rightly suspicious population which, like mad cows, seems to have been fed something rather dubious. Since opinion surveys suggest that 66 per cent of the British public already mistrust government scientists, there seems little point in genuflecting towards them as an act of scientific faith (Pedler, 1996).

This is an important point, because in the absence of any information how are we to know that a problem exists? In other words, it is only through the actions of the media in some form or another that information exists and a scare is ‘created’. One implication here is that control over information is crucial for those involved in data suppression or whistle-blowing activities. Any review of the BSE scare should acknowledge how important the construction of the scare through the mass media is: no information—no scare.


In this chapter I have sought to demonstrate that the relationship between the environment and humans is not something that can be taken for granted or left to the (scientific) experts. Nature is simply too important to be left to the scientists. Using BSE as a case study, three different perspectives have been brought to bear on the problem to suggest that the difficulties faced by those charged with protecting the environment are not simply reducible to securing a better purchase on 'reality'. Different theoretical approaches embody different epistemological frameworks, and a reversion to the 'iron law of oligarchy' can only provide for a limited advance in our (p.173) knowledge of the environment. Whether business is attempting to deploy an environmentally friendly strategy, or green activists are trying to inhibit the intentions of business, the problem of what should be done is essentially located in how we know what should be done. This is not to suggest that the ambiguities of knowledge impel us to take no action at all, but to suggest that companies and individuals consider carefully the lessons of the BSE quagmire and note how an overreliance on the truth of scientific knowledge can itself lead to a series of actions that look remarkably like the actions of a mad cow. Or, as the realist mad cow said, 'I can't catch BSE because I'm a sheep.'

But where does this case study fit into the world of chaos and strange attractors discussed in chapter 3, or the change models considered in chapter 2? The chaos approach is, unfortunately perhaps, all too applicable. In the short term the government had little control over events, which appear to have spiralled out of control almost from the first day—whatever the government did or said, somehow or other events conspired against them to undermine every attempt to restore order to the situation. And therein lies the strange attractor and the link to the autistic organizational model of chapter 4. For here is an organization (the British government) in which the apparently chaotic state of its strategy reveals a fundamental irony: it is consistently inconsistent—the strange attractor sits waiting to pull back to ineptitude every attempt to resolve the problem because it is an inherently autistic organization. It is autistic not in the sense that it does not listen to anyone, but that irrespective of what anyone says it always acts in the same arrogant way, which involves subordinating everything to a quest for political advantage and damage limitation. Here lies the final lesson for organizations: the strange attractor that bedevils them all—but also makes them work—is that they are all irredeemably political arenas.

In the final chapter I want to review the main arguments of the book and take us to the heart of managerial practice: negotiating.



A brief chronology of the BSE affair:

1980: IASH warning on 'cold' rendering.
1986: BSE discovered, at the same time as research into the problem drops by 25 per cent.
July 1988: government bans feeding of cows and sheep to other cows and sheep.
August 1988: government insists on slaughter of infected animals.
January 1989: Southwood Committee Report; most infectious tissue (brain, spinal cord, tonsils, etc.) banned from food for human use (by this time approximately 446,000 infected cattle consumed by humans).
1991: maternal transmission announced as being of no significance to public health.
1994: maternal transmission theory condemned as 'basically rubbish'.
January 1994: Dispatches documentary ('BSE: The Human Link', Channel 4, 26 January) claims a possible link between BSE and CJD.
November 1995: stripping meat from cattle backbones made illegal (by this time approximately 700,000 infected cattle consumed by humans).
March 1996: new variant of CJD (nvCJD) discovered; EU ban on export of British beef.
April 1996: cull of 42,000 cattle announced.
May 1996: cull of 82,000 cattle announced; Britain blocks EU legislation.
June 1996: cull of 120,000 cattle announced; EU agrees to lifting ban on beef by-products; Britain desists from inhibiting EU legislation.
July 1996: European Court of Justice rejects Britain's attempt to have the world ban on beef exports removed; evidence emerges that sheep can catch BSE from cattle.
1 August 1996: maternal transmission claimed to be possible.
5 August 1996: 'Claims about new milk tests are rubbish', according to the government.
12 August 1996: new milk tests are being carried out by the government agency responsible.
22 August 1996: it is claimed that 2 million infected cattle have already been exported to Europe.
29 August 1996: it is claimed that 700,000 infected cattle have already been fed to humans; it is announced that BSE will die out in cattle by 2001.
19 September 1996: this last claim is denied.
22 September 1996: the second claim of 29 August is definitely denied.
8 October 1996: government offers a £45 million beef-aid package to farmers.
28 October 1996: 140,000 young cattle considered for culling.
26 November 1996: it is claimed that BSE will die out in mid-1998; £3.3 billion aid package to farmers announced.
16 December 1996: 13th CJD victim dies; 100,000 extra 'old' cattle cull accepted by the government.

Estimated cost of the existing BSE crisis: £3.2 billion
Estimated cost of culling the entire herd: £15–20 billion
Slaughter of every animal: £10.2 billion in compensation
Costs of the slaughter itself: £2 billion
Cost of redundancy of all involved: £1.1 billion
Maximum fall in GDP: 1.2 per cent
Current account deficit up by £6.5 billion

(All this is premised on no outbreak of CJD: The Economist, 30 March 1996. Eight months later, by November 1996, 14 new cases had been confirmed.) (p.176)