Regulating the Regulators: Policies for Reform
Abstract and Keywords
This chapter examines the overall pattern of different forms of control – oversight, mutuality, competition, and contrived randomness – and suggests that each could be used more effectively to secure public objectives in regulation inside government. The regulators appear not to apply to themselves the principles they apply to their regulatees. Notably, there is little attempt to work out the costs to public‐sector bodies of being regulated, and weak evidence of the principles of competition and oversight, applied so assiduously to the regulatees, being applied to the regulators. The search for some consistency of approach offers a possible avenue for reform.
A well‐designed institution . . . is both internally consistent and externally in harmony with the rest of the social order in which it is set.

R. Goodin, The Theory of Institutional Design

This chapter turns from analysis and description of regulation inside government to discussing policy and evaluation. What weaknesses need attention? How might regulation in government be improved? Drawing on the four‐part analysis of types of control over bureaucracy, sketched out in Chapter 1 and applied in Chapters 3 and 9, this chapter explores issues of policy and institutional design under each of those four headings. The argument is in the form of an immanent or ‘physician, heal thyself’ critique. We claim that if any of the control techniques available to the regulators are of value in modifying the behaviour of public‐sector bureaucracies, then those techniques must apply as much to the regulators of government as to the regulatees. Following this analysis, we identify four ‘deficits’ in the current institutionalization and behaviour of public‐sector regulators. They are:

• A mutuality deficit
• A competition deficit
• An oversight deficit
• A randomness deficit
1. A Mutuality Deficit?

Chapter 2 showed that regulation in government is a grouping, not a group. To the extent that regulators in government interact with one another at all, it is only within particular families, like the British and Irish Ombudsman Association (which extends across the public and private sectors),1 and the groupings of auditors and, more recently, professional inspectorates. Much the same could be said about regulation of business, which also tended to be institutionally fragmented. For example, the ‘Of‐type’ utility regulators recognized each other as part of a common family group and met periodically, but did not see themselves as part of a wider regulatory family which included all the other actors (like trading standards officers or health and safety regulators) attempting to influence business behaviour. In the same way there were no fora for regulators in government in which good practice or overall philosophy could be discussed across the various domains of regulation, or ‘hot spots’ examined (even though, as Chapter 3 showed, methods of operation cut across the different institutional families of regulators). In 1998 a step was taken in this direction with the creation of a Public Audit Forum, chaired by the Comptroller and Auditor‐General, to ensure common standards and practice for public‐sector audit across the UK. The Forum comprised the four main UK public audit bodies as well as other stakeholders from central government, local authorities, and the health service, and its creation was prompted by the work of the Nolan Committee on Standards in Public Life.
The notion of a mutuality deficit, however, goes beyond a simple absence of rich (or in many cases, any) communication among bodies doing related work. It also means regulators in government did not for the most part seek to shape each other's behaviour or call one another to account in collegial fora. As will be noted later, the commonest response to jurisdictional overlaps among regulators was a search for modi vivendi through coexistence rather than collegial accountability. There were some exceptions to this pattern. For instance, as discussed in Chapter 6, the ombudsman club, the British and Irish Ombudsman Association, was reluctant to admit institutions not constituted in what it considered to be an acceptable form, notably the Prisons Ombudsman (to which it gave only Associate status), and there is no doubt the Audit Commission's approach of league‐table performance ratings influenced the working of the professional inspectorates over local government.2 But in general the relationship among regulators in government seemed neither fully collegial nor fully competitive, and this theme will be pursued further in the next subsection. There is a risk of a system that achieves the worst of all worlds, reaping the advantages neither of mutuality nor of competition and falling somewhere in the middle.
Given this cross‐domain ‘mutuality deficit’, it is not surprising that coherent principles and practices tended to be conspicuously absent in regulation within government. Indeed, that mutuality deficit seems likely to be even worse than the one that applies to business regulators, because in the absence of overseers concerned with synoptic regulatory impact analysis, there was little scope for mutuality‐based self‐regulation by the regulators of government in the shadow of formal oversight. At the time of writing, it was only the regulatees who had anything approaching a synoptic view of the system, since they were most aware of the various and contradictory demands being made of them.
Unless the mutuality deficit can be defended as inevitable and/or beneficent, how could it be reduced? Three possibilities (in roughly ascending order of the formal changes that would be required) are cross‐domain fora for regulation in government, a more coherent career for those working in this domain, and restructuring of the organizations within it to bring disparate specialists together under broader ‘umbrellas’. Cross‐domain fora, the least radical way to tackle the mutuality deficit, would in principle require no formal changes to the present institutional set‐up, simply more opportunities for regulators across the different domains to exchange ideas and develop common approaches and codes of practice. As noted in Chapter 2, a meeting we held for the various regulators of government in 1997 was the first opportunity many of them had ever had to discuss common issues, such as compliance costs and principles of sanctioning. If such exchanges were to move beyond first principles and develop cumulative common understanding among the players, a regular forum would be needed. Possible models include the fora for European regulators at the European University Institute at Florence, like the European Competition Forum and particularly the regular meetings of the heads of the new European agencies, which bring together regulators across different policy domains to discuss generic issues including institutional procedure, accountability, and financial control (see Kreher 1996; Ehlermann and Laudati 1997). More ambitiously, we could even conceive of an Institute of Public Sector Overseers, along the lines of the British and Irish Ombudsman Association, and such an institution could include prizes, fellowships, and distinctions as well as meetings.
Another way of achieving a similar result—which could be an addition rather than an alternative to the forum approach—would be to place more emphasis on the role of individuals carrying ideas from one domain to another through a career structure cutting across the 130‐plus public‐sector regulator organizations. The transfer of innovation and ideas in organizations often comes more from mobile individuals than summit meetings, and a career corps (along the lines of the French grands corps) is one well‐established way of cementing a grouping into a group.3 Indeed, as discussed in Chapter 4, such a notion lies behind the creation of the Senior Civil Service in 1996 for the top positions in central government (and the establishment of analogous systems in many other countries). Why not apply the same logic of a cohesive go‐anywhere grand corps with a common cursus honorum to regulators inside government? Something of that sort seemed to be happening below the topmost (Director‐General) level for the utility regulators, and a similar practice for regulators of government could bring the same pay‐off. At the least a more general development of secondments and career exchanges, hitherto undertaken on a very limited basis (for example between NAO and the European Court of Auditors, and NAO and the Audit Commission) might help to carry over good practice from one domain of regulation to another.
A third and more radical way of tackling the mutuality deficit would be to restructure the boundaries of the multiple organizations operating in the field rather than simply promoting exchanges or developing a common career structure among the fragmented institutions regulating government. It would tackle the problem at source by reshaping the organizations themselves. In regulation of business and society, an umbrella approach to structuring regulatory organizations has developed, notably in the Health and Safety Executive (and Commission) and the Environment Agency.4 These organizations combine particular specialized units at operating level (like the Railways Inspectorate or the Nuclear Installations Inspectorate) to reap the advantages of specialization and a ‘common language’ (low relational distance, as discussed elsewhere in this book) with a common cross‐domain capacity for policy analysis, high‐level sanctions, strategic direction, and clout in the higher reaches of government. Apart from combining the advantages of high and low relational distance simultaneously in a coherent organizational structure, such a structure obliges different specialists to work within a common overall policy framework. Applying a similar logic to regulation inside government would imply developing two‐tier structures in which a super‐regulator brought together a range of different regulatory specialists. There might well be a constitutional argument for developing different umbrella structures for regulation of central and local government, and for separating the parliamentary and executive umbrellas, but there seems no intrinsic reason why grievance‐handling and other forms of regulation should always be located on different institutional planets. 
Again, in utility regulation we can find examples of grievance‐handling combined with other forms of oversight under one roof, for instance in the case of OFTEL (since there is no separate telecommunications consumer council to share the grievance‐handling aspect of regulation).
2. A Competition Deficit?

Earlier chapters of this book have shown that regulators in government frequently employed competition as a device to improve performance among their charges, notably by fostering league‐table rivalry over performance indicators (Bentham's ‘tabular‐statement’ and ‘comparison and selection’ principles of public management (Hume 1981: 161) dressed up for the information age). By contrast the regulators themselves were exposed only to rather sporadic and ad hoc competition, as noted earlier. Like all bureaucracies they engaged in periodic ‘turf wars’ for policy responsibility (for example in the struggle between the Parliamentary Commissioner for Administration and the Data Protection Registrar for oversight of the provisions for ‘open government’ introduced in a 1993 White Paper, and a similar struggle that developed over responsibility for Freedom of Information legislation four years later) or less overt conflict (for example in the rivalry between the NAO and Audit Commission over parts of NHS audit (Bowerman 1994)). And, as shown in earlier chapters, they conflicted in the sort of pressure they placed on the bureaucracies they oversaw (for example, in the ‘input‐output’ cost‐based focus of the Audit Commission compared to the professional best‐practice focus of the professional inspectorates, and the conflicting pressures put on prison governors by different regulators, as described in Chapter 6). At first sight, competition between rival administrative values entrenched in different regulatory institutions might seem to approximate to some of the conditions for ‘collibration’ as identified by Andrew Dunsire (1978; 1990): that is, the maintenance of contradictory pressures within organizations as a condition of their effective control from the top or outside.
But there was rarely an obvious point at which these contradictory pressures could be manipulated by ‘selective inhibition’ of the rival forces (a crucial part of Dunsire's analysis of how ‘fingertip control’ can operate in large bureaucratic organizations). Much of the selective inhibition seemed to come from climates of opinion, a changing Zeitgeist, or even from the regulatees themselves rather than from identifiable high public officeholders.
Whatever may be the truth of that, regulators of government were not themselves systematically exposed to the sort of competition and rivalry they imposed on public‐service organizations, particularly in local government, education, and health. Indeed, the response of regulators to the considerable overlap among them was often to collaborate rather than compete, particularly over the debatable lands lying between audit, professional inspection, and funder‐regulation. As noted in Chapter 5, the Audit Commission, originally instituted as a counterweight to the putative ‘professional aggrandizement’ focus of the specialized inspectorates, increasingly came to do joint work with them on social services, fire, police, and education (to the point of being formally brigaded with the Social Services Inspectorate after 1996 in the review of local‐authority social‐services provision) and joined with the Housing Corporation in 1994 to review the value for money of housing associations (Audit Commission 1996: 14).
In interviews and discussions, regulators were often sceptical about the value of competition applied to their own activities as opposed to those of the organizations they oversaw. Indeed, one member of an organization particularly noted for its zeal in publishing league‐table lists of saints and sinners within its domain declared he was ‘not sure it was a good idea for public‐sector organizations to compete with one another’ over regulatory tasks. But there were exceptions. For instance, we were told that competition between the NAO and Audit Commission was valuable for motivating middle‐level staff and that the Audit Commission's techniques and in particular its style of presentation had been a major spur to the NAO (I42). Unless competition among regulators in government is to be dismissed as undesirable in principle (a doctrine which would be strikingly at odds with those applied elsewhere in the UK public service or in business regulation in the EU), the problem is to find forms of competition which do not impose excessive costs on the organizations being overseen. At least three forms of competition might be considered: more league‐table comparisons among regulators as a group and between regulators and their charges; more systematic sunset‐ and market‐testing of regulation in government; and more scope for an element of choice of regulator by public‐service organizations.
More systematic league‐table comparisons among regulators would simply extend the logic of publishing comparative performance indicators to regulators in government. Chapter 2 noted that performance standards for performance setters tended to be markedly less developed than those for the bodies the regulators oversaw. Regulators of government tended to measure their own performance in isolation from one another, and such performance standards as there were (in common with most public‐sector organizations) tended to refer to economy and workflow rather than effectiveness. Regulators' response to this issue when we raised it in discussions tended to be that no meaningful comparative measures of performance could be developed for their activities, and to stress the uniqueness of each regulatory organization. Certainly, there are difficulties in measuring the performance of bodies whose objectives are multiple and conflicting and where it is hard to measure outcome as opposed to output. But those characteristics are shared by most public‐sector bodies; they are not special to regulators of government. A possibly more distinctive feature of regulators of government that might make performance measurement problematic is that their effects are usually indirect, coming about at one remove through the improvement of other organizations' performance, which further complicates the identification of cause–effect links that besets any performance assessment. But even that problem is not unique to regulators (similar ‘value‐added’ measurement problems arise in teaching, social work, and medicine, for example), and indeed many regulator organizations in government seem more like what James Q. Wilson (1990: 159) terms ‘procedural’ organizations (whose work can be observed but not the outcomes of that work) than ‘coping’ organizations (where neither work nor outcomes can be observed).
Moreover, even if there is something that makes for special problems in measuring the performance of regulators in government as opposed to other public‐sector organizations, each regulator within government was by no means alone on the planet. As Chapter 2 showed, there were at least 130 ‘firms’ in the business, and from the analysis in Chapter 3 of the methods they employed it is not clear that their activities were markedly more diverse than the differences within local government or the higher education sector (for instance, between All Souls College and a school of nursing), yet that diversity has not prevented attempts by the regulators to rank those organizations on a common scale. Indeed, given the dramatic growth of investment in regulation within government, it seems all the more important that comparative information on efficiency improvements, speed of communication, and quality‐of‐management indicators (such as clerical‐grade absenteeism and turnover within the organization) should be routinely collected and published on a league‐table basis for regulators as well as for service providers, and not confined to gnomic sentences in a set of separate annual reports. In addition, following European Union practice (by DG XIX) as described in Chapter 8, there is a case for regulators to include themselves in the league tables they publish of relative performance or practice among their charges.
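The logic of such a composite league table is simple enough to sketch. The following Python fragment is a minimal illustration of the idea, not any actual ranking scheme: the regulator names and all indicator figures are invented, and the composite ordering (average rank across indicators, lower being better) is only one of many possible weightings.

```python
# Hypothetical indicators for a handful of regulators of government.
# All names and figures below are invented for illustration only.
indicators = {
    "Regulator A": {"cost_per_case": 120.0, "days_to_report": 45, "staff_turnover_pct": 8.0},
    "Regulator B": {"cost_per_case": 95.0, "days_to_report": 60, "staff_turnover_pct": 12.0},
    "Regulator C": {"cost_per_case": 150.0, "days_to_report": 30, "staff_turnover_pct": 5.0},
}

def rank_on(indicator):
    """Order regulators from best (lowest value) to worst on one indicator."""
    return sorted(indicators, key=lambda name: indicators[name][indicator])

def league_table():
    """Order regulators by their summed rank positions across all indicators."""
    rank_sum = {name: 0 for name in indicators}
    for indicator in next(iter(indicators.values())):
        for position, name in enumerate(rank_on(indicator)):
            rank_sum[name] += position
    return sorted(indicators, key=lambda name: rank_sum[name])

print(league_table())  # best overall performer first
```

With these invented figures Regulator C comes out on top overall despite being the most expensive per case, which illustrates the familiar point that a league table's verdict depends heavily on which indicators are included and how they are combined.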
A further step along the scale of competitiveness, more systematic exposure of regulation in government to sunset‐ and market‐testing, might also be expected to put regulators under more pressure to stay lean and achieve results. ‘Sunset‐testing’ exposes regulation in government to competition with other policy priorities and claims on resources, yet none of the regulators we examined worked under an explicit ‘sunset’ regime. It might be argued that true sunset arrangements for public policies or institutions can hardly apply in the UK's constitutional context (in contrast to that of the US), since legislative or policy settings can always be reversed by the government of the day. But it is debatable how far this argument should be pressed. After all, US sunset provisions typically stem from ordinary legislation rather than from any constitutional entrenchment, and such legislation can likewise be overturned by the enacting legislature or any of its successors. The idea that there is some British constitutional reason why regulation in government should be an empire on which the sun never sets seems unconvincing, and indeed there are examples of sunset provisions (in all but name) in other policy domains. For instance the utilities‐licensing regimes provided for price control to end after four or five years, requiring reformulation and licence modification if price control was to be retained.
In contrast to sunset provisions, market‐testing exposes regulation to periodic competition with other providers of the service. The scope for systematic market‐testing may be greatest for regulatory activities that involve multiple units doing the same thing and large‐scale structures capable of mobilizing resources for setting up a market. Both are features which characterize schools inspection in England, and may explain why OFSTED has developed the approach of franchising inspection activities to certified teams of inspectors, in a way that would not be so easily possible for small specialized inspectorates. But it is not clear why the same approach could not be applied to social‐services inspection, or to large areas of the work of the ombudsmen or public auditors.
A third and more radical application of competition to regulation in government would be to allow regulatees more scope than they currently have to choose their regulators. Such choice would need to be limited if it were not to lead to a ‘race to the bottom’ in regulatory standards (see McCahery et al. 1996), and earlier experience with the principle—as in the pre‐District Audit days when local ratepayers elected their own auditors, or the ability of local authorities to choose their own auditors up to 1982—shows up some of its limits.5 But the principle has been established in some domains of regulation in government, notably in state‐funded schools, where the opportunity to opt out of local‐authority oversight and choose instead to be regulated by a central body was established in 1992. Similarly, the bill which became the Education (Schools) Act 1992 originally provided that school governors should be allowed to choose their own inspectors.6 Likewise the Financial Services Act 1986 gave firms a limited choice on whether to be regulated directly by the Securities and Investment Board or by the appropriate self‐regulatory organization.7 It is at least possible to imagine a regime in which central and local authorities could decide—within defined limits—which public‐audit institutions they wished to be audited by, or whether they wanted to be subject to efficiency scrutiny from Whitehall's Efficiency Unit or the Audit Commission.
3. An Oversight Deficit?
Chapter 2 showed that regulation in government was a large and growing industry that was itself largely unregulated. In an era when codes of conduct in public service became commonplace, there was no code in this domain akin to the principles established for market‐testing. No unit in government was responsible for comprehensive oversight or review of the field, and none of the top mandarins we spoke to in the higher reaches of Whitehall was in a position to take a balanced overview. External reviews were rare, and such in‐depth external reviews as there were tended to be conducted on an isolated case‐by‐case basis, such as the review of the Audit Commission by an independent consultant (commissioned by the Commission itself) in 1995 or the review of the Commission for Local Administration by the Department of the Environment in 1995.8 As Chapter 2 showed, a billion‐pound‐plus industry appears to have grown up in a relatively ad hoc way, with no systematic attempt to take stock. No one appeared ever to have compared the amount government invested in regulating itself with the amount it spent on regulating business and society, or to have asked whether the balance made sense. It seems to be a dramatic case of ‘no‐one‐in‐charge‐government’ (Bryson and Crosby 1992: 4ff.).
No‐one‐in‐charge‐government can have its virtues. But they are not advantages that the various regulators of government typically extolled for their own regulatee clients. If there was an oversight deficit in this case, there is an argument for a more systematic approach to overseeing the overseers, without arriving at the infinite regress of who oversees the overseers' overseer. At least three related elements could be included in a more coherent system of oversight: a common set of principles applying to regulators in government which included systematic logging of compliance costs, coherent principles of investment in regulation within government, and appraisal of the balance between external and reflexive regulation.
Chapter 2 commented on the absence of any general code of conduct and of any coherent system of logging compliance costs for regulation within government, in sharp contrast to the regulatory impact analysis routinely adopted for most business regulation. Reading across the notion of compliance costs from business to government can be problematic, but the narrow view of compliance costs taken in Chapter 2—the direct costs of interacting with the regulator in the process of scrutiny—is both measurable and important, given what many public‐service practitioners said about the cost in expensive staff time of dealing with regulators. Regulators often tried to argue that there were in effect no compliance costs attached to their activities, since they were seeking only to generalize good practice. A typical regulator response was that ‘We only ask for information that any well‐run organization should have for its own internal management.’ We find this view disingenuous. Ratcheting all organizations up to the best‐practice standards at the top will inevitably impose extra costs on those below best practice, which will often be the majority; repeated demands for multiple forms of information, coming from different regulators or even from the same regulator, are far from costless to respond to; and best‐practice standards of management that may be easily achievable for large, well‐resourced organizations may be much more costly for smaller organizations to achieve (cf. Hogwood et al. 1998).
The regulators' view of compliance costs as an issue only for business firms, not public‐sector bodies, does not accord with what we learned from the selected regulatees we talked to about the costs of senior staff time in dealing with regulators (see Chapter 2). Indeed, the compliance‐cost problem seems likely to be of increasing importance in regulation of government, since regulators can ‘costlessly’ (that is, at no cost to their own budgets) impose ever greater reporting requirements on their charges, and general changes in public management have tended to produce a more fragmented structure of service‐delivery bodies facing increasing collective‐action problems in protesting against compliance costs imposed on them by regulators. Since many aspects of compliance costs comprise a fixed‐cost claim on resources regardless of organizational size, compliance costs in government regulation will tend to become relatively more burdensome as service‐delivery units become smaller. The parallel with the regulatory ‘squeeze’ on small firms thus becomes closer, and the absence of any official effort to monitor regulatory impact and compliance costs more glaring.
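The fixed‐cost arithmetic behind this ‘squeeze’ can be shown in a few lines. The figures below are invented assumptions for illustration, not estimates drawn from the study: a flat annual compliance cost per unit, spread over staff of different unit sizes.

```python
# Invented assumption: a fixed annual compliance cost of 50,000 (pounds)
# per service-delivery unit, regardless of the unit's size.
FIXED_COMPLIANCE_COST = 50_000

def cost_per_staff_member(unit_size):
    """Fixed compliance cost spread across a unit's staff."""
    return FIXED_COMPLIANCE_COST / unit_size

# The same fixed claim weighs 40 times more heavily, per head,
# on a 50-person unit than on a 2,000-person one.
print(cost_per_staff_member(50) / cost_per_staff_member(2_000))  # 40.0
```

However crude, the calculation captures why fragmentation of service delivery into smaller units mechanically raises the relative burden of any compliance demand that does not scale with organizational size.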
There were isolated exceptions to the ‘Nelson's‐eye’ vision of compliance costs among regulators in government, so different from the culture that has prevailed in the assessment of business regulation in recent years. And there were instances of best practice that could be built upon, albeit typically in the form of isolated ad hoc reviews rather than regular reporting. For instance, in the face of indignant protests from university Vice‐Chancellors and Principals about heavily increased regulatory compliance costs being imposed on their institutions with no additional public funding to cover those costs, the Higher Education Funding Councils assessed the costs of their surveys of academic standards in teaching and research.9 In the regulation of Training and Enterprise Councils a 1995 efficiency scrutiny of the contract between TECs and the Department for Education and Employment focused on the costs of reporting on TECs and suggested that the regulatory burden from DfEE should be reduced to lower those costs (DfEE 1996: 132). Perhaps surprisingly, local‐authority associations tended not to produce overall assessments of compliance costs imposed on local authorities by central‐government regulators (as shown in Chapters 2 and 5), but they have been critical of the failure to extend deregulation effectively into this domain:
To date the results of this [deregulation] initiative have been very disappointing. Significant controls such as CCT, capping and capital . . . were excluded. The initiative has concentrated on controls by local authorities (e.g. control of dogs on roads, for which the rules have been simplified) rather than those imposed on local authorities by central government. . . . Departments other than the DOE appear to be less than committed to the initiative. (Select Committee on Relations Between Central and Local Government 1996: ii. 116)

Responsibility for coordinating the collection of information about compliance costs of regulation in government could either be brigaded with the central units responsible for orchestrating regulatory impact analyses for business regulation (the simplest and quickest solution), or assigned to some super‐regulator unit, either in a central agency or free‐standing, with responsibility for overall policy development and general overview of regulation within government.
More coherent principles of investment in regulation within government need to be developed along with a more systematic analysis of compliance costs and benefits. At the time of writing there seemed to be no general or clearly stated principles governing such investment. If small ‘lean’ organizations are the most effective way to regulate large, complex, and controversial privatized utilities, why did regulation of government need an investment that is so many times larger? If, as it repeatedly claimed, the NAO saved seven times its own costs in any one year (compliance costs did not enter into this claim), why not massively expand it?10 The stock assumption seemed to be that the bottleneck to expansion was the limited capacity of the Parliamentary Public Accounts Committee to process the NAO's value‐for‐money reports; but if those reports were as effective in saving costs as the NAO claimed, a random PAC scrutiny system or some form of franchising could be contemplated. In the absence of any coherent look at investment levels in regulation of government, resourcing will be wholly governed by happenstance and special pleading, and will not be justifiable by any coherent investment criteria—an approach which most regulators would roundly condemn if they found it in the organizations they oversee.
Reflexivity or responsiveness in public‐sector regulation does not necessarily require a central‐oversight regime, but it is closely bound up with compliance costs and investment in regulatory bureaucracy. By reflexivity is meant oversight practice which builds heavily on organizational self‐regulation, aims to modify self‐regulation at the margin (for instance by making regulatees write down the principles by which they work or by operating regimes of ‘enforced self‐regulation’ (Ayres and Braithwaite 1992)), and only intervenes at the point where self‐regulatory processes have clearly broken down. Reflexivity in that sense was far from unknown within government (it appeared for example in the Treasury's post‐1994 system of general running‐cost controls together with provision to put erring departments ‘in the clinic’ by exposing them to a full oversight system of control) but it appeared to be at best patchily applied, particularly outside Whitehall, and EU practice, as discussed in Chapter 8, might have valuable lessons to teach national‐level regulators of government.
4. A Randomness Deficit?
This book has shown that regulators in government vary widely in the extent to which they consciously use random devices in their activities, like random tax audits. The Social Services Inspectorate used elements of randomness in planning its scrutiny programme and the Prisons Inspectorate followed a policy of unannounced snap inspections of prisons in addition to its programme of announced visits. As noted elsewhere in the book, while our research was in progress, OFSTED announced it planned to introduce a similar policy for school inspections. But randomness does not seem to be generally applied as a tool by regulators of government. We argued in the last chapter that many of the developments in public services in recent decades have the effect of displacing traditional elements of ‘contrived randomness’ in organizational control, and developments in regulation do not appear to be making up the deficit by sharply increased use of randomness allied with oversight.
Using lotteries in some form as a device of management or regulation is often controversial. It may be that the practice of public management has reached such an advanced level of development that the use of randomness can be relegated to the list of outmoded administrative devices along with wooden tally‐sticks, sealing‐wax, pipe‐rolls, and red‐tape itself. But we doubt it. More plausibly, it may be, as some Whitehall denizens commented when we raised this issue, that the environment of contemporary policy and bureaucracy, and the career paths of the people within it, are so widely perceived as turbulent and unpredictable that it is hardly necessary for further elements of randomness to be artificially contrived on top of the normal chaos. But if much of the village character of the central core of Whitehall remains (as claimed in Chapters 4 and 9), there may still be a randomness deficit at a personal level (where relational distance is low) while there is a randomness surplus in policy development. At least three devices could be used to ensure that the advantages of the use of chance were applied to regulation in government: random selection of regulatee units, random assignment of regulators to regulatees, and elements of randomness in managing regimes involving multiple performance indicators.
Random selection of regulatee units for scrutiny, where it was used at all, tended to be applied for the regulation of the lower‐status organizations in public administration—prisons, schools, social‐care institutions, and routine financial operations. But in some ways the greatest need for more randomness in oversight was at the top and the centre, where relational distance between regulator and regulatee tended to be lowest. There is accordingly a case for extending the random‐selection principle to the scrutiny of central departments by public auditors or budget overseers rather than negotiating audits eighteen months in advance or relying on strategic overall targets and predictable review schedules.
A closely related principle is that of random assignment of regulators to regulatees. As was noted in Chapter 1, William Niskanen (1971) has advocated the application of such a system to legislative oversight of bureaux and policy programmes, as a means of avoiding ‘capture’ of the oversight process by legislators who are not representative of the majority of users. For regulation of bureaucrats by other bureaucrats, a similar principle can be invoked—and indeed the tradition of limited tenure in any particular position by top public servants (stretching back to the standard three‐year term in post in the classical bureaucracy of imperial China) may help to bring it about. In fact, given the contradictory doctrines we noted at the outset of the book about best practice in regulation of government—whether regulation is best conducted by ‘poachers turned gamekeepers’ (low relational distance) or strangers and outsiders (high relational distance), or by some mixture of the two—a system of assignment of regulators to regulatees which made it unpredictable which of those principles would be invoked for any particular case could in principle achieve the advantages of both through the anticipated effects it would be likely to generate in the regulatee organization.
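The assignment device described above can be illustrated with a short sketch. The following code is purely illustrative and not drawn from the book or from any actual regulatory practice; the names (`assign`, `regulators`, `regulatees`) are our own. The point it captures is that a regulatee unit cannot predict which regulator—insider or outsider—it will face in any given review cycle:

```python
import random

def assign(regulators, regulatees, seed=None):
    """Randomly pair regulators with regulatee units, so that no
    unit can anticipate whether it will face a 'poacher turned
    gamekeeper' or a stranger for any given review cycle."""
    rng = random.Random(seed)
    pool = list(regulators)
    rng.shuffle(pool)
    # Cycle through the shuffled pool so every unit receives exactly
    # one regulator per cycle, reusing regulators if units outnumber them.
    return {unit: pool[i % len(pool)]
            for i, unit in enumerate(regulatees)}
```

Because the pairing is redrawn each cycle, the anticipated effect operates on both sides: the regulatee cannot cultivate its overseer, and the regulator cannot settle into a comfortable relationship with any one charge.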
Third, the same principle could be invoked for the design of the battery of performance indicators that so frequently figure in contemporary regulation of service‐delivery bureaucracies (but markedly less so, as we have observed, for regulators themselves). Ever since Peter Blau's (1955) classic study of the way performance measurement of a welfare agency produced counter‐productive outcomes (through caseload measures which led workers to encourage dependency rather than independence on the part of their clients) it has been well known that performance‐indicator regimes tend to skew organizational behaviour in unintended ways through managerial activity to hit the targets, typically producing side‐effects or even reverse effects. Such behaviour is commonly discussed in the context of the contemporary vogue for quantitative performance measures in public bureaucracies. One way to counter the distorting effect that such performance measures can generate is deliberately to build elements of randomness into the weighting placed on any one out of a set of predeclared performance indicators. If managers are aware of a wide‐ranging set of performance indicators by which they may be judged, but cannot predict what weight will be placed on each one, some of the grosser types of indicator‐induced distortion will have a much more uncertain payoff. This effect often seems to come about by accident (by managers being unable to predict the weightings of indicators by which they will be judged because of the vagaries of the political and bureaucratic climate and the ubiquity of ‘garbage can’ processes (Cohen et al. 1972) in public policy). But if dense multi‐indicator performance‐assessment regimes are here to stay in public management, the time for more deliberate and self‐conscious insertion of random terms into indicator weights may have arrived.
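The weighting device can likewise be sketched in a few lines. Again, this is a minimal illustration of our own devising, not a description of any existing assessment regime: the indicator set is predeclared, but the weights are drawn only at assessment time, so gaming a single indicator has an uncertain payoff.

```python
import random

def assess(scores, seed=None):
    """Score a regulatee on a predeclared set of indicators, with the
    weight on each indicator drawn at random only at assessment time."""
    rng = random.Random(seed)
    # Draw a random weight per indicator, then normalize the weights
    # to sum to one. Managers know the indicator set in advance but
    # cannot predict the weights.
    raw = {k: rng.random() for k in scores}
    total = sum(raw.values())
    weights = {k: v / total for k, v in raw.items()}
    return sum(weights[k] * scores[k] for k in scores)

# A manager who maximizes one indicator at the expense of the others
# no longer has a predictable payoff, while balanced performance does:
gamed    = {"caseload": 100, "client_independence": 10, "cost": 10}
balanced = {"caseload": 60, "client_independence": 60, "cost": 60}
```

Note the asymmetry this creates: a unit scoring evenly across all indicators receives the same assessment whatever weights are drawn, whereas a unit that has skewed its effort towards one indicator faces a lottery.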
As explained at the outset, the argument above has been based on the assumption that ‘what is sauce for the goose is sauce for the gander’—that is, the presumption that principles of regulation which are effective at one level or in one domain should normally be expected to be effective in other contexts unless the conditions can be shown to be radically different. From our analysis in Chapters 2 and 3, it is difficult to make a convincing case that regulators stand out from all other public‐sector organizations in terms such as multiplicity of objectives, internal diversity, or the indirect character of the ‘production’ process. It consequently seems hard to argue that the organizational complex of regulators of government should itself remain largely unregulated, or that the design principles for regulating that complex should deviate radically from the design principles used to regulate other public‐sector organizations.
In some cases, notably competition and oversight, we have found a marked instance of ‘do as I say but not as I do’ in the disparity between how regulators control and how they are controlled—a disparity which seems hard to justify. But the cases of mutuality and randomness seem different in two ways. First, mutuality and randomness applied at one level (control of clients by regulators) may also apply at the other level of control by regulators themselves. For example, random assignment of regulators to clients is as much a check on the regulator organization as the client organization, and mutuality can also be applied to face‐to‐face accountability of regulators to their clients. Second, the four deficits referred to earlier often seem to apply as much to regulation of their charges by the regulators as to regulation of the regulators. Even if mutuality remains strong (albeit supplemented by more formal paper‐driven oversight) at the core of Whitehall, it tends to be weak in much of the rest of the public sector, and deliberate randomness is the exception, not the rule, in relationships between regulators and regulatees.
The methods discussed above for filling the four deficits are to some degree in tension, like the stays of a mast. But it has often been observed that most principles of institutional design tend to have a two‐edged character (Goodin 1996: 40), and, as with the stays of a mast, even though they pull against one another to some degree, they may achieve in combination what no single one could do on its own. A system of regulating the regulators which relied exclusively on any one of the four approaches sketched out above would be likely to have major and predictable limitations. And any formal institutional arrangement for manipulating the tensions among these approaches (following Dunsire's (1978) principle of collibration) is unlikely to be devisable. Apart from the sheer engineering difficulties of institutionalizing collibration in other than an opportunistic way (cf. Hood 1996b: 216–25), there is constitutional difficulty in assigning any single institution to regulate the regulators, given the aspiration to make many of them independent of ‘politics’. Accordingly, any overall control of the four different approaches probably needs to be thought of as like the weather.11 That is, the balance among these approaches might vary as political and social climates alter, but could not go beyond a certain point, like a masthead which bends to stress but is brought back by its opposing stays—at least up to the point where an extremity of stress brings the whole structure down. A system which built the four approaches discussed earlier into countervailing tensions could be a more balanced approach to institutional design for regulators of government.
(1) See James (1997: ch. 9) for an extensive discussion of the Association. A meshing of public and private also applies to the audit family in so far as many public‐sector audits are carried out by private accounting firms—in some cases, indirectly exercising state power via surcharge actions. We are indebted to Martin Loughlin for reminding us of this point.
(2) e.g. since 1996 a joint Social Services Inspectorate and Audit Commission team has reviewed local‐authority social‐services provision and since 1994 the Housing Corporation has worked jointly with the Audit Commission to review the value for money of housing associations on the basis of the Commission's previous work (Audit Commission 1996: 14).
(4) The England and Wales authority which brought together several previously separate regulatory agencies such as the National Rivers Authority and HM Chief Inspectorate of Pollution.
(6) Source: Education (Schools) Act 1992 in Current Statutes Annotated 1992, 38‐2, Introduction and General Note.
(8) The review suggested that CLA was so ineffective it should be abolished, a finding rejected by the DOE, which had commissioned the review.
(10) The Audit Commission also made dramatic claims for the potential savings achieved through its activities, which it put at £4.3bn. between 1983 and 1991 (Audit Commission 1991: 17). To the extent that savings on this scale were indeed directly related to the Commission's activity, there would also seem to be a case for massive expansion.