Making Audits Work: Auditees and the Auditable Performance
Abstract
This chapter examines the manner in which auditing works by virtue of actively creating the external organizational environment in which it operates. It also addresses how the consequences of auditing can be schematically explored in terms of decoupling and colonization. The discussion begins by considering some general criticisms of the New Public Management (NPM) and their implications for auditing. Three cases are offered which show the tension between a concept of auditable performance derived from quality assurance systems and one rooted in the specialist judgement and knowledge base of different service professionals. The chapter also draws attention to the ‘regulatory paradoxes’ which surround audit. The discussion of auditable performance measurement indicates how ‘the anxious ruler tries to make his illusions come true by way of a mixture of minute controls and rigorous isolation’.
In Chapter 4 it was argued that economic pressures exist for new ‘techniques’ to be acceptable and that audit cloaks its fundamental epistemological obscurity in a wide range of procedures and routines. From time to time audit practitioners have worried about the erosion of judgement and the imposition of too much structure on the audit process, but the history of financial auditing and the recent history of other forms of audit suggest that codification and formalization are continuing, particularly where new programmatic demands are made of existing skills. Of special significance is the emergence and exportation of the self-auditing management system as an auditable object. The hopes and aspirations articulated by governments and delegated to auditing institutions are progressively specified and operationalized by the creation and maintenance of such a management system.
So far the manner in which audit technique is made to appear reasonable and is harnessed to programmatic intentions has been considered. However, this is only one sense in which audits are made to ‘work’. Of equal interest is the manner in which the audit process interacts with the audited domain. In this respect it is instructive to look beyond financial auditing and to consider the impact and consequences of some of the auditing practices stimulated and reorganized by the New Public Management agenda. The advantage of this focus is that these audit practices have often been introduced into organizational contexts where they have not previously existed or have existed only in an informal and undeveloped manner. Audit functions here less as a practice of verification and more as an explicit vehicle for change in the name of ideals such as ‘cost effectiveness’, ‘efficiency’, ‘quality’ and so on. This suggests a close relationship between the audit explosion and the need to install auditable performance measures. In short, audits work because organizations have literally been made auditable; audit demands the environment, in the form of systems and performance measures, which makes a certain style of verification possible.
The discussion begins by considering some general criticisms of the NPM and their implications for auditing. Two extreme analytical possibilities are considered—decoupling and colonization—as a basis for assessing the effects of auditing in three different contexts. Auditing within higher education, medicine and, ironically, the regulation of financial audit share operational features and raise similar difficulties. In particular, the three cases illustrate the tension between a concept of auditable performance derived from quality assurance systems and one which is rooted in the specialist judgement and knowledge base of different service professionals. This is essentially a tension between audit and evaluation and this theme is considered explicitly in the context of performance auditing. Overall it is argued that audits work if they are themselves not too closely subject to their own values of effectiveness. Furthermore, there is increasing, if unsystematic, evidence of the dysfunctional side-effects of auditing and this suggests that audit must itself be evaluated rather than audited.
New Public Management: Criticisms And Reactions
Chapter 3 suggested that the New Public Management was an assortment of ideas and orientations driving the reform of public administration. In different sectors and across different countries, elements of the NPM have been used selectively to suit particular needs. It is difficult to deny that enhanced financial control, the elimination of genuine waste and possibilities for fraud, and the creation of incentives to provide higher quality services are desirable as ends. No doubt there have been many benefits of this kind and there is a need for a balanced assessment of the impacts of the NPM in particular settings. However, the problem is that the NPM and the promotion of a private sector management style has been driven less by a sober empirical evaluation of consequences and more by faith in the presumed benefits of abstract management (Bogdanor, 1994) and a universalistic approach to administrative design (Hood, 1991:9–10). It is this insensitivity to side-effects and empirical consequences which has been the object of considerable criticism.
The attempt to separate service provision and governance arrangements, as described in Chapter 3, represents an intention to replace state bureaucracy with the managerial tools of accounting and audit (Self, 1993:159). In mimicking market structures, ‘management’ has emerged as a portable technical skill ‘divorced from specialised experience and knowledge about particular subjects, equally applicable to the private and public sectors, and primarily concerned with the efficient use of resources’ (Self, 1993:169). In the context of the criminal justice system it has been said that:
the attraction of managerialism as a political strategy is that it keeps the rhetoric of criminal justice intact while demolishing the structures which, however imperfectly, previously enabled their realization … there is now in the ascendant an ideology which wholly legitimates the pursuit of administratively rational ends over substantive justice goals … the auditing process has had a profound impact upon the practice of criminal justice in Britain …. At first sight … the traditional structures of accountability appear to be reinforced by the new system. The contrary view argues that auditing undermines these traditional structures. Instead of officials being responsible to ministers for their decisions, ministers are forced to rely upon the professional values of accountants and auditors …. Accountants are no longer simply providers of financial information; they are at the forefront of decision making. Policy making thus moves outside recognized political channels (Jones, 1993:192–9).
These and similar criticisms suggest that reforms in Britain, such as the Financial Management Initiative, pose a radical challenge to the culture of the organizations to which they are applied: ‘Systems of administration, control and evaluation, however technical they may appear, are also expressions of a series of underlying beliefs and values’ (Neave, 1988:18). Some of this challenge is undoubtedly intentional, such as the need to install greater awareness of the resource implications of organizational decision making. But equally there are unintended and undesired consequences. One of the ironies of the NPM is that while it insists that ‘public services must invest much more heavily in the currency of measurable outputs … some fundamental aspects of NPM reforms themselves appear to have remained almost immune from such requirements’ (Pollitt, 1995:135). Indeed, the NPM can be characterized by a lack of self-evaluation, except in relatively simple financial terms, and is committed more, in the style of Gosplan perhaps, to making sure reforms work, or at least look as if they do, than to self-evaluation. Hence there is a use of consultants to implement all-purpose reform rather than to evaluate its particular organizational effects (Laughlin and Broadbent, 1995).
This is not to say that such a process of evaluation of reforms, such as FMI or the separation of purchasers and providers in the NHS, would be an easy matter. Quite the opposite is true. Rather, the point is that the institutional foundations of such an evaluation are slim and are dominated by the need to show that things are working well, that objectives are being achieved. Not only can this lead to the continuation of certain practices irrespective of mounting evidence that they do not work, but transactions costs, a core element of the conceptual armoury of the NPM, actually rise: ‘Each side of this quasi-market relationship has to maintain staff with expertise in the technical aspects of contracting in a way that was not necessary under the previous administrative hierarchy, where trust and hierarchical authority substituted for detailed accountancy’ (Pollitt, 1995:147).
To conclude, the NPM is problematic because it puts itself as doctrine beyond question. A broader evaluation of the financial and non-financial consequences, sensitive to the social and organizational context of reforms, would be an intolerable policy burden, especially where longitudinal pilot studies and cost-benefit analyses introduce delay and discussion. As part of this policy commitment, NPM-based programmes also presume that the forms of audit on which they depend are efficacious and without unintended dysfunctional side-effects. The general argument of the sections which follow is that auditing works because it creates an environment of auditable performance, and this leads to questioning the effects of imposing such auditable measures. Before considering three illustrative cases in detail it is worth considering what may be at stake in these effects.
Auditable Performance: Decoupling or Colonization?
Accounting information systems do not simply describe a pre-existing economic domain but, to varying degrees, serve to constitute a realm of facts, to make a world of action visible and hence controllable in economic terms. Creative accounting practitioners have always known that profits can be ‘what you like’ and that for every financial accounting rule there is a way to frustrate the purpose of the rule while appearing to comply with it. Tax and accounting regulators engage in a constant struggle to catch up with these schemes. But the point about accounting and information systems goes even deeper than these preoccupations with ‘fiddling’ or discretion in rule interpretation. It concerns the mutual constitution of information systems and forms of behaviour, the multiplicity of ways in which accounting information can be used, appealed to and even ignored.
There is a growing literature on the consequences of attempts to measure and report performance in accounting terms and to develop practices of comparison and evaluation around such measures. For example, it is argued that internal accounting creates a factual domain in terms of cost which allows state activity to be conceived explicitly in economic terms. Performance can be disciplined through such measures which are ‘advanced in the name of their presumed potential rather than their practical possibility or actual consequences’ (Hopwood, 1984:176). A study of the introduction of diagnostic cost categories in medicine suggests the inherent ‘decision ladenness of accounting numbers’ (Chua, 1995:113) and their ability to influence what is regarded as significant. On this view economic reality is emergent and negotiated through figurative accounting practices which necessarily promise a control that they cannot deliver. Accounting becomes an expanding and self-preserving structure and builds the network of facts which make monitoring and audit possible. According to Chua, technically flawed accounting numbers command consent because they can hold together diverse purposes and because they have a kind of legitimacy which has little to do with their technical properties and much to do with the creation of a certain kind of window on operations, a window which allows one to trace the operational translations of political programmes into detailed costing systems.
At the most concrete level accounting practices depend on systems of classification embodied in books and records, manuals of instruction, computer printouts and so on. Transactions must be captured and recorded, invoices must be processed and, through selective assembly, aggregation and analysis, economic activity must be represented in a form which corresponds to programmatic demands for disclosure and performance measurement. In turn, these financial accounts can themselves provide a base for further calculation of performance and solvency ratios; ratios which often serve specific regulatory functions and which reinforce norms of performance. A certain kind of administrative objectivity is created whose logic is to standardize, which prefers rules over unfettered judgement and which hides its processes of selection. In this way, the factual environment of financial auditing is created:
To provide an account in … the auditing world … means adhering to descriptive devices (numerical and narrative) that are by and large conventional and arbitrary. They are neither right nor wrong but stand as coding or reporting standards that are ‘generally accepted’ as adequate for the task. They can be regarded as strategic representations, collectively validated by members, designed to put the organization's best foot forward (Van Maanen and Pentland, 1994:81).
If one accepts these broadly constructivist themes one must confront a simple question: how does this world of accounts and related forms of audit connect to other worlds, not just those for enhancing manufacturing performance but also those for teaching children, curing the sick, trading on derivative markets, policing the streets, prosecuting offences, enabling sustainable growth, and so on? Discussion can be organized around two extreme possibilities, both of which represent different kinds of audit ‘failure’ but which are never likely to be found in a pure form. The first type of failure is that the audit process becomes a world to itself, self-referentially creating auditable images of performance. The audit process is decoupled or compartmentalized in such a way that it is remote from the very organizational processes which give it its point. The second type of failure is that, regardless of intended changes to the audited organization, the audit world spills over and provides a dominant reference point for organizational activity. Organizations are in effect colonized by an audit process which disseminates and implants the values which underlie and support its information demands.1 The audit process can be said to fail because its side-effects may actually undermine performance.
In their classic article, Meyer and Rowan (1991) have suggested that formalized control systems have more to do with myths of control in the environment of organizations than with real improvements in operational efficiency. From this point of view the management system structure described in Chapter 4 is an ‘institutionalized product’ adopted primarily for external legitimation purposes; it rarely functions like its blueprint. The technical structure of such a system, its rules and procedures for ensuring the loop of self-observation, embodies norms which originate in the environment. The point is that while such building blocks of rational organization have an ‘explosive organizing potential’ (Meyer and Rowan, 1991:46) and can lead to new organizational structures, they can also be decoupled from core organizational activities in such a way that ‘evaluation and inspection systems are subverted or rendered so vague as to provide little coordination’. Through the creation of compartmentalized organizational units for dealing with external assessment, audit and evaluation can be rendered ceremonial in such a way as to deflect a rational questioning of organizational conduct.
Meyer and Rowan regard external evaluation primarily as a destabilizing and delegitimizing activity that organizations will wish to buffer from their core activities. In this way buffering sub-units manage an interface with external assessors which leads, in the case of quality assurance, to the certification of formal systems elements. From this point of view, audits are ‘rationalized rituals of inspection’ which produce comfort, and hence organizational legitimacy, by attending to formal control structures and auditable performance measures. Even though audit files are created, checklists get completed and performance is measured and monitored in ever more elaborate detail, audit concerns itself with auditable form rather than substance. From time to time these ritualized audits are perceived to fail, inquiries take place and new technical guidance is issued which, depending on the audience, represents a radical overhaul (for the ears of regulators) or simply a codification of what most auditors do anyway (for the ears of practitioners).
Notwithstanding the analytical attraction of the motif of decoupling, the issue is ultimately an empirical one. One prima facie sign of decoupling is the creation or enhancement of organizational sub-units explicitly to manage the external audit process (audit committees, internal auditors, audit officers, etc.). Another would be the extent to which managers ‘devote more time to articulating internal structures and relationships at an abstract or ritual level, in contrast to managing particular relationships among activities and interdependencies’ (Meyer and Rowan, 1991:61). But there are also reasons to doubt whether pure decoupling in the sense described above ever takes place. Explicit attempts to compartmentalize the external audit process are expensive and ‘efforts by clinicians to neutralize the impacts of reforms can be crude and counterproductive’ (Ezzamel and Willmott, 1993:125). This means that the external audit process is rarely sealed off from the rest of the auditee organization, despite strategies with that intention. Internal audit officers may ‘change sides’ and may use their new-found power to advance internal changes. The external audit may even be desired by parts of the organization to exercise leverage over other parts. And ways of talking around audit processes inevitably percolate into other areas of organizational life. In this way the external audit process cannot remain permanently buffered.
The other extreme to consider is that the values and practices which make auditing possible penetrate deep into the core of organizational operations, not just in terms of requiring energy and resources to conform to new reporting demands but in the creation over time of new mentalities, new incentives and perceptions of significance. In short, against the image of decoupling, audit processes may contribute to the construction of a new organizational actor (Laughlin, 1991). On this view the supposed, and perhaps questionable, distinction within organization theory between the front stage of formal performance and the back stage of informal process is eroded as formal elements, such as accounting operations, become ingrained in habits and classifications for control purposes.2 At the extreme, the organization aspires to be omniscient and ‘employees enjoy the loyalty of others. They welcome audits, reasonable monitoring, and documentary proof of their activities’ (Marx, 1990:5).
The diverse programmes which fall under the umbrella of NPM have a certain kind of colonization as an explicit goal. The intention is not only to remedy weaknesses in financial control practices but also to challenge the organizational power and discretion of relatively autonomous groups, such as doctors and teachers, by making these groups more publicly accountable for their performance. In this way VFM auditing is explicitly a vehicle for organizational change. However, colonization is rarely successful and monolithic for a number of reasons. The institutional environment of organizations is not usually homogeneous and consistent. Different institutional logics of evaluation exist. Financial and non-financial conceptions of performance live uneasily side by side, as do the governance demands of rules and of economic efficiency. In addition, different professionals articulate competing claims to expert problem solving (DiMaggio and Powell, 1991); accountants are naturally more comfortable with matters of economy and efficiency rather than effectiveness. Accordingly, the processes of organizational change demanded by the NPM and its auditing agencies produce varying forms of conflict and resistance, of which explicit decoupling strategies are one.
What is ultimately at stake in the question of colonization is the possibility that auditing is a ‘fatal remedy’ (Sieber, 1981). The point is not just that audit may be decoupled ritualistically or that it permeates the auditee organization totally. It is rather that the imposition of audit and related measures of auditable performance leads to the opposite of what was intended, i.e. creates forms of dysfunction for the audited service itself. This issue is ultimately an empirical matter. As the following three case studies show, much depends on the relation between relatively consensual internal structures of self-evaluation, which have always existed in varying degrees of formalization, and forms of external audit with a probative orientation towards external audiences (taxpayers, citizens, shareholders, and so on). In short, do auditable standards of performance come, or are they perceived as coming, from inside or outside the organization? The example of UK academics will be considered first.
Making Auditees: Researchers And Teachers
Since the mid-1980s a new theology of ‘quality, efficiency and enterprise’ has emerged in higher education. Explicit strategies of ‘financial compression’ (Neave, 1988:12) have been accompanied by a considerable number of institutional changes, most notably the replacement of the University Grants Committee (UGC) with the Universities Funding Council (UFC), subsequently renamed the Higher Education Funding Council (HEFC) (there is a separate agency for Scotland). The UGC had effectively acted as a buffer between the state and universities whereas the HEFC is more explicitly an agency of central government. Like the UGC before it, the HEFC provides funding to universities by way of a block grant and in the past this was calculated and notionally split between teaching and research. Since the mid-1980s, the HEFC has begun to introduce evaluative mechanisms to control the allocation and use of these (diminishing) funds.
These institutional changes signalled a fundamental shift in evaluative philosophy which may be found in many other areas, a shift from process-based local forms of self-evaluation to standardized measures of output. New evaluative mechanisms and indicators have been created to operationalize quality initiatives in teaching and research, and with new information demands new patterns of authority have emerged. For example, Vice Chancellors (VCs) in universities now assume the role of chief executive overseeing policy and resources committees, and academics can no longer dabble in managerial roles which fit uneasily with an older value base. Universities are being forced to be more entrepreneurial, and specialized educational consultants have emerged as part of a new market for advice. These changes reflect what Neave has called the ‘evaluative state’, an attempt to enhance the self-government of universities leaving state agencies with a monitoring role. The question is whether these measures are intended to build ‘consensus around those options that evaluation may reveal or whether the purpose of evaluation is to bend a recalcitrant academia to what the government deems to be “the new reality”’ (Neave, 1988:16). The evaluative mechanisms which exist at the time of writing can be broadly divided between research and teaching.
Auditing Academic Research
As part of its control over the element of the block grant for research, the HEFC conducts Research Assessment Exercises (RAEs). These ‘audits’ are intended to rate academic subject areas in universities on the basis of the quality of their research and to allocate central government funds accordingly. The first such exercise was conducted in 1986 and was subsequently repeated in 1989, 1992, and 1996. Over time, as the assessment process and its classifications have developed, increasing amounts of research money have been allocated in accordance with the results. However, the precise link between RAE results and funding depends on complex formulae with caps and safety nets to ensure that the results are not too drastic. This means that the apparent economic logic of RAEs is heavily compromised by other more pragmatic values. Many commentators regard the RAEs as creating explicit pressures for separating research and teaching universities and for providing incentives for raising private research funds. In particular, there seems to be a long term trend to shift the mix of funding from the block grant system towards organizations, such as the Economic and Social Research Council (ESRC), which are purchasers of research.
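The way in which caps and safety nets compromise a purely quality-driven allocation can be illustrated with a deliberately simplified sketch. The functions, figures, and the 10 per cent cap below are entirely hypothetical, invented for illustration only; the actual HEFC formulae are more complex and are not specified here. The point is merely that once year-on-year changes are bounded, the allocation ceases to track ratings in any straightforward way:

```python
def pure_allocation(ratings, pot):
    """Allocate the pot strictly in proportion to quality ratings."""
    total = sum(ratings.values())
    return {dept: pot * r / total for dept, r in ratings.items()}

def damped_allocation(ratings, pot, previous, max_change=0.10):
    """Start from the pure allocation, but cap each department's gain or
    loss at max_change of its previous grant (the 'safety net'), then
    rescale so that total spend still equals the pot."""
    target = pure_allocation(ratings, pot)
    bounded = {}
    for dept, prev in previous.items():
        lo, hi = prev * (1 - max_change), prev * (1 + max_change)
        bounded[dept] = min(max(target[dept], lo), hi)
    scale = pot / sum(bounded.values())  # restore the overall budget
    return {dept: amount * scale for dept, amount in bounded.items()}
```

With hypothetical ratings of 5 for department A and 2 for department B, equal previous grants of 50 each, and a pot of 100, the pure allocation gives A about 71.4 and B about 28.6, while the damped version gives 55 and 45: most of the rating differential is absorbed by the safety net, which is the sense in which the economic logic is ‘heavily compromised’.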
The programmatic intention of these arrangements is to focus on the accountability of research funds both in the sense of quality and of financial control. Regarding the need to control the use of research funds, the HEFC (1993) explored the possibility of timesheets for academics. The intention was to focus on the accountability of research monies but, as the report from a firm of management consultants suggested, accountability could be operationalized in a number of different ways. There was an explicit desire to make research funds auditable and the consultants, Coopers and Lybrand, proposed a scheme which was detailed enough to provide the HEFC with assurance but not so demanding as to be unacceptable to the academic community (Coopers and Lybrand, 1993).
The specific timesheet proposal reflects a more general trend: the need to measure at an appropriate level of detail to make auditability possible. This is a level of detail which has little to do with accuracy or even representational faithfulness, but which reflects a certain legitimized style of technical elaboration. Audit requires not just an auditable reality but one which reflects institutional myths about the appropriate level of formality (see Power, 1996b). Nothing came of the proposals for academic timesheets. Many VCs were dismissive of the prospects of minute monitoring and the HEFC itself was unenthusiastic. But the programmatic aspirations which drove the HEFC to explore this mechanism exist elsewhere in the system and show how rhetorics of auditability, measurement and accountability are intertwined.
One of the unintended but predictable consequences of the RAEs has been to create incentives to teach less and write more. According to Trow, the RAEs have created subtle ‘accountings’ for research staff with little knowledge or desire to understand their costs and impacts. Staff in the former polytechnic sector are now under pressure to engage in research for the first time, and there are visible transfer markets for research academics in the run-up to such exercises (which only increase the costs of research as a whole). More problematic still, it has been argued that the RAE is in fact a ‘fatal remedy’ in terms of its impact on existing research culture. Cycles of research have changed in favour of publication in prestigious journals rather than books. Scientists are changing research habits,3 and a whole menu of activities for which performance measures have not been devised has ceased to have official value. Editing books, organizing conferences and, paradoxically, reviewing and facilitating the publications efforts of others fall out of account.
In the context of natural scientific research, the perennial issue of harnessing science to wealth creation has taken a new turn. Science policy in the UK (White Paper, 1993) has imported many of the hard managerialist elements of the NPM (Trow, 1993). Sherman (1994) has shown how the patent has come to occupy a central position in the performance measurement of science, stimulating Intellectual Property and Technology Audits to maximize exploitative opportunities which have not been recognized by scientists themselves. In addition, the programmatic values of VFM have been applied to science with the imperative of maximizing returns on public funds. Such a manner of governing science and rendering it auditable represents a fundamental shift in the fulcrum of evaluation with consequent impacts on the conduct of science. In short, research must now be organized to be assessable.
A study of the implementation of Total Quality Management (TQM) in a North American scientific laboratory demonstrates the consequences of using inappropriately deterministic performance measures in contexts like fundamental research, where there is high uncertainty of outcomes. Defensive anxiety and escalating distrust of unscientific TQM experts are the product of ‘inappropriately detailed management systems’ (Sitkin and Stickel, 1996:210) leading to deskilling and heightened in-group/out-group perceptions. There is also a tendency towards ‘more boring but patentable paths in … research.’ However, Law and Akrich (1994) suggest that these images of colonization are too crude to capture the complexity of adjustments to outside forces. In their study of a laboratory subject to an increasingly commercial framework, in which users are replaced by customers and the good seller must control costs, ideals of VFM are not sufficient to displace traditional senses of getting the job done. The growth of courses in financial management for scientists suggests at least the longer term emergence of hybrid experts.
Overall, tensions remain about the role of science, the mix of applied and basic activity and the timescales for evaluation and accountability to peers, patrons, and publics. The UK white paper has stimulated discussion on the necessary conditions for innovation, on the relation of science to wealth creation and on the relation to the citizens whose taxes support scientific endeavour. In this sense the formerly self-evident values of a self-regulating ‘big science’ have been challenged by a new alliance of managerial ideals and radical populism (Fuller, 1994). Add to this related concerns with scientific fraud and one has all the conditions for making science into an auditable object.4
Teaching Quality Audits
At the time of writing the institutional arrangements for teaching quality assurance in higher education are, to say the least, confusing. The Committee of Vice Chancellors and Principals (CVCP) established an Academic Audit Unit (AAU) in 1990 which was subsequently transformed, in name at least, into quality audit under the control of the Higher Education Quality Council (HEQC), owned jointly by higher education colleges and universities. The HEQC has a broad management brief concerning itself not merely with teaching quality but with quality assurance more generally for the whole organization. These arrangements also overlap with another set of evaluative mechanisms, this time quality assessment conducted by HEFC bodies in England, Scotland, and Wales at the level of academic departments. Needless to say, there has been much discussion about these arrangements given such an obvious overlap. And behind theological debates about the difference between assurance and assessment, VCs seem to be reluctant to give up control of the evaluation process.5
Willmott (1995:1019) has noted that the AAU was originally concerned with internal systems of quality assurance6 and that its preoccupation with formal structure resembles that of the environmental auditing field. Even when academic standards were perceived as high by an AAU visit, there were criticisms of the internal system for controlling these standards, particularly where it depended too much on trust. Pressures were created to adopt a certain management style which, in Willmott's view, was corrosive of old ideals without itself being effective.
Overall, universities have created ‘buffer’ elements which mirror and track external regulatory institutions and which present a legitimate face. For example, in 1992 the London School of Economics created an internal (p. 102 ) academic audit unit to mirror the AAU. In 1994 this was subsumed within the teaching quality assurance committee to liaise with and respond to the HEQC on quality matters and on matters of teaching quality assessment under the HEFC. In short, auditing initiatives stimulate the creation of internal sub-organizations or compartments. From this point of view, audit is always a ‘layered’ activity organized around systems of control. For example, teaching quality assessments were initially self-assessment based and the HEFC emphasized the quality of systems in place to allow credible self-assessment. Such moves to stimulate consensual self-auditing are visible in other areas and the model is borrowed explicitly from BS 5750: standards are set by universities and colleges and quality is intended to be a measure of achieving those self-determined standards.
Unsurprisingly, this process of institutional adjustment and the realization that teaching quality relates to systems rather than to actual teaching has led to conflict. It has been suggested that ‘the British Government is motivated more by its desire to control the academic community than by its quest for top quality higher education’ (Trow, 1993). From this point of view, these self-auditing arrangements have been installed with a view to discipline rather than learning. A ‘hard’ managerialism has displaced trust and ‘elevates institutional and system management to a dominant position in higher education’. Performance criteria and mechanisms of continuous improvement have been created which are intended to operationalize abstract ideals of accountability as well as to provide a competitive element.7
VCs themselves have often been resistant to quality initiatives, expressing concern about league tables, and the HEQC, an agent of the new regulatory mood, is nevertheless hesitant about directly linking quality initiatives to funding, preferring instead to emphasize local diversity. There have also been complaints about the quality of auditors, appeals about assessments and growing resistance to the increased and uncosted bureaucratic demands (paper mountains and red tape) of quality auditing. Heated debate on these matters has taken place in the pages of the Times Higher Education Supplement.8
A fundamental problem concerns the ineffectiveness of quality assessment initiatives in a climate of fiscal restraint. Despite indicators of rising standards, the public success story, there is also evidence that staff are covering up falling standards9 consequent upon rising class sizes and associated administrative burdens. In short, the quality assurance process is decoupled as an expensive ritual. Equally, there is distrust about how performance indicators embodied in quality systems for teaching will be used, particularly those which emphasize student feedback and which simplify the significance of student reactions with a ‘dipstick approach’.10 Trow (1993) points to different orientations to teaching (delivery, challenge, scholarly exploration, and creation) which co-exist, whose diversity necessarily (p. 103 ) defeats single model assessment and whose effects cannot always be measured when students are still students. The drift towards delivery philosophies of teaching, supported by hard managerial assumptions, is transforming teaching from a relationship into a transaction which can be made auditable in isolation. Trow argues that while the creation of adequate procedures for complaints is important, teaching ultimately depends on appointing competent teachers whose motivation is independent of attempts to audit them. The performance culture of rewards and penalties is a refusal to trust this motivational guarantee, with the result that teaching will be orientated to the expectations of the customer rather than to shifting and transforming those expectations.
Academics as Auditees
Supporters of the new managerialism and auditing argue that complaints about bureaucracy tend to hide the fact that existing systems of control were always inadequate and that the new world refuses to defer to self-appointed professional authority. Audit is a legitimate demand and the requirement that universities impose mechanisms for self-review is not unreasonable. Against this positive view Parker and Jary (1995) have argued that there is a progressive McDonaldization of the university. Even ritualized compliance has a percolative second-order impact (Laughlin, 1991) in which orientations to teaching and research have been affected and colonized by a hard managerialism. However, academics in their new role as auditees are hardly ‘totalized’ and ‘docile’ subjects and Pritchard and Willmott (1997) argue that there is also resistance to easy imbibing of commercial discourse.
Between these extremes of managerial colonization and resistance there is a continuing struggle (‘debate’) about teaching and research quality as the auditable object. The concept of quality has hovered uneasily between definitions which emphasize outcomes and those which emphasize the processes for determining outcomes. Such arguments are not just semantic but have implications for the way in which quality can be monitored. Instead of superficial audits and hard managerialism there are demands for ‘real’ evaluation with a social scientific base in order to make the side-effects of audit visible. In contrast to myths of productivity gains, ‘it has been a case of academics reluctantly cutting per student class contact times, teaching much larger classes, reducing the number or length of written assignments, sacrificing time for research and scholarship, and so on’ (Pollitt, 1995:142).11
In conclusion, questions of audit colonization and decoupling in the field of higher education are complex and ongoing. There is much evidence for both. Assessment systems damage intra-departmental relations as crude languages of evaluation trickle into local administrative deliberations. At (p. 104 ) the same time a game is played and individual departments are coached to make themselves auditable.12 The mechanics of research and teaching assessments are instruments of central oversight and control which are simultaneously dependent on the creation of layered self-assessment structures. Assurance and comfort are passed upwards between internal and external auditors. Whether such arrangements really do build on existing consensual cultures of self-auditing, thereby growing from the bottom up, or whether auditing is perceived as externally and crudely imposed is a fundamental issue which recurs in the medical context.
Making Auditees: Doctors And Nurses
Medical auditing is hardly a new practice. Like many other elements of the audit explosion it is partly the product of reassembling existing routines and harnessing them to new programmatic intentions. Thus within medicine there is a long tradition of case reviews, data gathering and attempts to improve diagnostic practice and patient care (Pollitt, 1993a; 1993b). Formalized medical auditing seems to have originated in the USA although British doctors have tended to prefer ad hoc approaches to evaluation based on the interlinking of medical records (Dent, 1993:257). Although transatlantic influences have percolated into British medical practice, most audit work has been locally specific and institutionally indistinguishable from first order clinical practices. However, in the past ten years as particular elements of the National Health Service have been subjected to NPM style reforms, the meaning and practice of medical auditing has begun to change.
The most significant structural innovation in the field of health care provision has been the creation of quasi-markets for medical services. The intention is to stimulate more effective use of increasingly limited resources by creating an element of competition between those who supply medical services (hospitals and general practitioners) and those who must now explicitly purchase those services (health authorities and general practitioners). This separation of purchasing and providing institutions reflects NPM ideals of autonomization and disaggregation. It is through elaborate contracting arrangements that purchasers can exert control over the nature and quality of the medical services which they wish to purchase from newly constituted entities for provision (hospital trusts and fund holding general practices). It is also within this created space for contracting that medical auditing in the UK is evolving beyond its humble and obscure origins. For GPs some form of audit has become a condition for the payment of certain categories of income and regional Medical Audit Advisory Groups (MAAGs) have been created (out of audit enthusiasts) with a quasi-consulting role to visit practices and encourage the audit cycle.
(p. 105 ) It is hardly surprising that these reforms have been the subject of massive debate, criticism, and comment. The politics of health care provision has always been intense and fears persist that the creation of these new managerial markets will undermine the whole ethos of health care on which the NHS is based (e.g. Self, 1993:140–1). Furthermore, there is evidence of complaint by medical practitioners themselves and of resistance to market based changes which intrude on their professional autonomy (e.g. Broadbent et al., 1992) even though, as the manager-clinician interface evolves, distinctions are increasingly blurred (Ezzamel and Willmott, 1993). Since the late 1980s these broadly based reactions and reservations have begun to focus on medical auditing as it has started to assume a more prominent programmatic role.
Between 1989 and 1994, approximately £220 million was allocated specifically for the development of medical auditing practice broadly defined as ‘the systematic, critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources, and the resulting outcome and quality of life for the patient’ (White Paper, 1989).13 The proposed operational structure for auditing resembles that of the management system described in Chapter 4: objectives and standards must be defined, practice must be observed, results of observations must be compared with standards and there must be feedback with the aim of improvement. However, in the medical context, in contrast with environmental audit, the idea of audit is indistinguishable from this management process as a whole. And behind vague definitional issues, and the conceptions of quality assurance in medicine which they embody, lie complex tensions. Notably, these tensions are revealed in the form of hierarchical struggles between medical practitioners and emergent managers over the control of the evaluation process (Pollitt, 1993a:162). Who decides on objectives and who is the audit intended to serve (Nolan and Scott, 1993)?
Medical Autonomy and Decoupling Strategies
Professionalism may be characterized in part by the self-control of quality (Pollitt, 1990:435). Nowhere is this ideal of self-control so firmly entrenched as it is in medicine. As Pollitt has argued, the NHS context shows clearly the politics of quality assurance in which practitioners have sought to resist not only managerial definitions of quality but also managerial participation in the definition process. According to Pollitt, quality as an issue is diversely and ‘tribally’ interpreted by virtue of its vagueness and this has implications for the style of quality auditing considered to be appropriate.
Initial challenges posed by the possibility of external evaluation of medical practice were transformed and defused by the Royal Colleges who, in a series of publications, articulated a conception of medical audit which (p. 106 ) was voluntary, local and the preserve of medical specialists with no disciplinary implications. Even a formalized system of peer review like that existing in the USA was preempted (Pollitt, 1993a:163). This strategy can be interpreted as a form of ‘inverse decoupling’. Instead of defusing external evaluatory initiatives by ritualistic compliance, the mechanics of evaluation are co-opted into core practices and made invisible to external monitoring agencies, other than by assertions that audit has taken place. In short, medical audit was initially more the preservation of the internality of existing evaluation practice and less the internalization of external initiatives. The initial use of earmarked money for research purposes indicated a refusal to distinguish between auditing and learning processes more generally: ‘most of us are happy with … “low grade audit”: simply collecting interesting information and using it to help us to improve the care we give.’14
Decoupling strategies have also been driven by a willingness to delegate audit activity to medical practitioners. Without a tradition of knowing what to ‘do’ with audit results, purchasers have tended to be content with the fact that an audit was done rather than with knowing what exactly was done: ‘so long as some form of audit was being performed, nobody outside the clinical staff were interested in the results or the methods.’15 Decoupling also has its origins in practitioner suspicion about the new wave of audits, particularly given the perception that medical audit was already operating as a successful part of practice and did not require extension or formalization (Laughlin et al., 1994:104). There is also evidence of resistance by general practitioners (Broadbent et al., 1992), resulting in minimalist and formal responses to audit rather than its use as a vehicle for cultural change. However, these formal responses necessitate administrative changes and the creation of resource intensive absorptive functions (Laughlin et al., 1994).16
Is this a story of successful professional decoupling of audit initiatives? Or, even in this form, is the practice of medical audit a Trojan horse for more disciplinary machinery? There has been much discussion on the shape and development of medical auditing by practitioners. Much of this has been concerned to maintain the conception of audit as a learning process to facilitate professional development, a bottom-up, holistic process led by professional activity (Nolan and Scott, 1993). These views have been reinforced by general suspicion that a more abstract and standardized form of medical auditing would be nothing more than cost-cutting in disguise.
Audit could never continue to exist as a private clinical activity and ‘even soon after [medical] audit was established, the potential for audit to operate in a wider environment was recognised’ (Exworthy, 1995:31). Packwood et al. (1994) echo this view and draw attention to the growing encroachment (p. 107 ) of public accountability demands within medical auditing, building on a fragmented and messy, though largely consensual, base of existing routines and practices. Medical audit has begun to emerge as a management priority, particularly as purchasers and providers learn new roles, as hybrid doctor-managers are created and as pressure for multi-discipline representation increases. Of course, management involvement is explicitly and officially premissed on sensitivity to practitioner autonomy and workloads: ‘successful audit takes place in a culture that does not attribute blame’ (quoted in Exworthy, 1995:78). But the emergence of audit committee structures is a clear signal of a trend: greater management involvement in the purposes, processes, and results of medical audit. And many practitioners remain suspicious of this involvement and of the possibility of a routinization of medical practice (Black and Thompson, 1993:850).
The case of psychotherapy practitioners, as newly conceived providers of services to purchasers (health authorities and general practitioners), illustrates the disturbing potential of audit.17 The development of audit for psychotherapeutic services is consistent with developments in other specialties. However, where psychotherapy differs is that there is a long and extreme practitioner history of controversy about effectiveness. Different schools of thought and practice exist together with competing professional sensitivities about the nature of care and the meaning of outcomes. The introduction of audit has heightened tensions in the field for a number of reasons.
First, a suspicion of external management encroachment exists, as it does in medicine more generally, and there are the usual preoccupations with preserving autonomous self-audit structures. However, these reactions are amplified by concerns that, in the absence of demonstrable criteria of auditable performance based on successful clinical outcome, psychotherapeutic services may face a crisis of demand. Second, audit has intensified longstanding practitioner tensions about ‘measurable change’. There are general worries about an incompatibility between short term audit cycles and the longer term cycles of therapy. There is also another key issue: whoever gains control of the audit process can legitimate their concept of therapy over rivals by building it into accreditation processes. In short, the prospect of audit has heightened inter-professional rivalries, supporting Dezalay's (1995a) argument that regulation is a stake in professional competition.
Ultimately the question of colonization hangs on the management of tensions between managers and clinicians (Gain and Rosenhead, 1993:11), a process which is constantly developing. Although audit can be hijacked for internal purposes, anxieties persist about its uses in the future (Black and Thompson, 1993:854–5). Pollitt (1993a; 1993b) argues for an explicitly institutionalized decoupling or ‘insulation’ to provide a buffer between internal (p. 108 ) and external audit arrangements. In this way there can be a clear distinction between medical self-reflection and accountability. But it remains to be seen whether such a clear distinction can be maintained, especially as the evidence from other audit contexts suggests the existence of institutional pressures to combine these roles and to ensure the external visibility of internal control. However, at the time of writing it is too early to say whether decoupling or colonizing tendencies or, most likely, a mix of the two, will harden into institutionally stable arrangements. What can be said with some confidence is that, as with financial and other forms of audit, medical auditing remains a contested field.
Medical Audit: A Contested Terrain
There can be little doubt that the application of NPM based reforms, and the role of medical audit as an instrument of these reforms, has constituted an enormous environmental disturbance to health organizations. This much was probably intended. The problem for medical auditing is that, despite many attempts at definitional closure (Dent, 1993:263), it is likely that it will need to satisfy multiple expectations, particularly where it is coupled to external review processes (Exworthy, 1995:101). Operational definitions and textbooks have been created to develop the principles articulated by the Royal Colleges.18 Contracting functions continue to evolve and the normative and operational boundaries of medical auditing are far from being fixed. Indeed, as in the case of financial auditing discussed in Chapter 2, a certain lack of clarity about the role of medical audit allows it to express clinical and managerial aspirations simultaneously, aspirations which are themselves blurred. For example, Nolan and Scott (1993:760) point to the continuing ambiguity of the Department of Health definition of medical audit.
Some argue that medical audit serves accountability programmes imperfectly because so much information remains private. Again the parallels with other forms of auditing are striking: audit often does not coincide with information release, especially where practitioners control the process. Furthermore, even its internal use value has been criticized: audit cycles remain incomplete, it is impossible to attribute change to audit and patients’ (the customers?) perspectives play little role. All in all medical audit is a fragile practice which can be ‘readily ignored or omitted, its results argued away as idiosyncratic, its insights seen to be duplicated by other sources, its purposes conflicting, with no perceptions of any serious detriment to medical practice resulting from its absence’ (Packwood et al., 1994:310). But supporters of medical audit contend that such doubts about the operational role of audit are likely to be self-fulfilling, leading to further non-completion of cycles and increased medical complacency (Black (p. 109 ) and Thompson, 1993).19 Faced with evidence of operational weaknesses, enthusiasts are pressing for more resources and status (Thomson and Barton, 1994).
The field of medical audit is evolving and the meaning of the practice is being negotiated between increasingly overlapping medical and managerial concerns. Purchasers are learning to be ‘principals’ who must monitor their contracts, and hybrid professionals working in multi-discipline teams are increasingly in demand. The language of quality assurance, and the need for performance indicators which focus on outcomes, is also taking shape. It is here that the concerns of medical practitioners echo those of academics and critics of quality assurance. The medical profession, like all professions, tends to prefer evaluation orientated towards quality of process (Dent, 1993:262). As medical auditing becomes part of a quality assurance system, it concerns itself with the auditable object of managerial capability rather than directly with care itself, and this cannot but impact on medical autonomy. In addition there are operational worries about crude forms of performance measurement: ‘the need to introduce audit in short order will make easily collected, quantifiable data very appealing’ (Nolan and Scott, 1993:762).
Medical auditing was never initially intended as a public accountability device, and practitioners have worked hard to maintain its status as a heuristic tool to improve practice. As it is slowly moving into the orbit of public accountability new managerial demands are made of it with the expectation that older roles can be maintained. Inevitably, the medical quality assurance system is emerging as the primary auditable object: ‘the changing environment of CA [clinical audit] has meant that purchasers and provider managers are not necessarily concerned with quality of clinical care per se but increasingly with the systems established to ensure that quality is developed and maintained’ (Exworthy, 1995:95).
In conclusion, medical audit seems to be moving inexorably away from its local, ad hoc, bottom-up origins towards a more standardized, national framework, a process which necessarily weakens local professionals vis-à-vis newly created auditors: ‘Top down models generally apply a generic instrument administered by outside assessors, whereas bottom-up systems are generated and largely applied by practitioners themselves’ (Nolan and Scott, 1993:762). As bottom-up schemes make contact with, and are transformed by, top-down accountability requirements, the audit process may heighten conflict, especially where direct impacts on clinical decision making can be demonstrated. Yet, like other audits, the value of medical auditing becomes harder to demonstrate the more it is disengaged from local learning processes. It is rather a practice that must be made to work.
(p. 110 ) Making Auditees: Financial Auditors
In the two cases considered above, higher education and medicine, one can see something of the belief in the reforming and revelatory power of auditing (McSweeney, 1988). From this one might conclude that one cannot audit audit itself, that it puts itself beyond the possibility of evaluation of its costs and benefits. However, this is only partly the case. Even though the audit explosion has been accompanied by the a prioristic faith of the NPM, it is nevertheless the case that the audit of audit takes place. Indeed, it is perhaps an inevitable extension of the logic of making things auditable discussed in Chapter 4 that it should arrive back at the doors of accountants themselves, who are often accused of being the agents of the NPM. The audit of audit discussed below demonstrates that the logic of auditing cannot be easily identified with a conspiracy of the large accounting firms, and has more to do with the broader shifts in regulatory philosophy discussed in Chapter 3. Accountants are clearly agents of this shift; but they are subjects of it too.
The history of financial auditing presented in Chapter 2 tells a story of crisis-driven developments in the regulation of auditors, a dialectic of failure in which standards of technical practice have been codified and codes of ethics have been formalized. As accountants and the firms in which they operate have become more explicitly commercial in orientation, there have been greater demands for regulation and almost constant preoccupations with the problem of independence. Many institutional structures have been created as a consequence of these pressures for reform. Despite the myth that independence is a ‘state of mind’, there is now in the UK a Chartered Accountants Joint Ethics Committee (CAJEC) and a Joint Disciplinary Scheme (JDS) for determining and enacting policy in the sphere of ethics and professional behaviour. The JDS has always been controversial, criticized both for doing too much and too little.20 At the time of writing, the Chartered Association of Certified Accountants (ACCA) had signalled its intention to withdraw from it on grounds of cost and the fact that it was contributing to the discipline of members of other institutes. The feeling that these self-regulatory arrangements have been falling apart21 has led to the search for new structures with greater independence from the professional bodies and greater legitimacy and effectiveness (Mitchell et al., 1991).
It was argued in Chapter 2 that where a product or service is ambiguous, it is to be expected that practitioners will invest heavily in ethical codes and procedures. In other words, the guarantee of quality in ‘inscrutable markets’ is somehow to have trust in the people. But equally these people must be competent and there is a need for the knowledge base discussed in Chapters 2 and 4 to be credible and enforceable. While the UK Auditing Practices Board and its predecessor body have been concerned with articulating and (p. 111 ) codifying best audit practice, they have not been concerned with enforcing practitioner compliance with technical standards. In the UK this task has been entrusted to the Joint Monitoring Unit (JMU), essentially an inspectorate for audit quality.22 The JMU was created for the purpose of financial services regulation (Cooper et al., 1994) and was restructured and given further resources to fulfil its new role in ‘guarding the guards’. The ACCA created its own regulatory body.
Following the European Eighth Directive on the regulation of auditors, the UK adopted various monitoring requirements into the Companies Act 1989. Section 25 of this Act requires the auditor to be ‘registered’.23 The numerous requirements relating to the regulation of registered auditors’ competence and integrity were delegated to the professional bodies. An important effect of the Eighth Directive measures was to strengthen arrangements for the regulation of firms rather than individuals. It is the firms who must ensure that partners and staff are ‘fit and proper’ persons. This means that a layered regulatory system has been created and that the primary object of regulatory interest is the organizational control system at the level of the firm. This mix of enabling legislation and delegation to professional structures reflects more general commitments to private government coupled to state oversight (Chapter 3), exercised through the requirement that professional bodies report to the DTI on the operation of the scheme.
The Problem of Small Practitioners
The operational problem confronting the JMU resembles that of any audit: how can it be done economically and credibly? Initially a stratified sampling approach was adopted, reflecting the skewed nature of the population of firms engaged in auditing. Of nearly 9000 firms it was decided to make 250 visits to those having at least one listed client and 150 visits to the rest. However, it is in relation to this second category of smaller firms that the JMU has encountered most difficulty. From its earliest days under the Financial Services Act the JMU had encountered problems with the auditors of investment businesses regulated under the Financial Intermediaries and Money Brokers Regulatory Authority (FIMBRA). Following the fraud related collapses of Barlow Clowes and Dunsdale Securities (Chapter 2), concerns were raised about the competence of auditors to practise in a complex area when they only had one or two FIMBRA clients.
If there were operational problems for the JMU itself in dealing with small firms, given its own resource constraints, small firms nevertheless complained bitterly about the costs of regulation; costs which they argued were disproportionately high for them and for which the benefits were at best doubtful. Registration fees had been raised in the early 1990s to pay for the new regulatory system and to provide the £4 million budget for the (p. 112 ) JMU. Constant worries about rising membership costs and concerns about the nature of control exercised by the professional institutes demonstrate the heterogeneity of the UK accountancy profession as well as the distinctive and powerful position of the large firms. Not only did these regulatory initiatives heighten big firm-small firm tensions, they also created difficulties for professional institutes as organizations in their own right as they struggled to reconcile the dual role of regulator and trade association.24
The early years of the JMU appear on the surface to have been a regulatory success and paint a picture of a strong inspectorate unafraid to be critical. In its first annual report, the JMU reported that very few firms were up to the mark and overall there seemed to be a high level of unsatisfactory inspections.25 One might expect this from a body wishing to establish its legitimacy early on. But in the joint ICAEW/ICAS/ICAI report to the DTI of 30 September 1990, it was emphasized that the JMU is not just an inspector in the deterrence mode. It also plays an advisory and educational role in assisting firms to improve audit quality. The regulatory style is that of a cooperative process attempting both to win over practitioners to the benefits of being ‘inspected’ and also to convince the DTI of its regulatory credibility. Nevertheless, the JMU was never free of the criticism that the quality audit it promoted was based on the large firm model and was inappropriate and expensive for smaller firms.
There are numerous ironies surrounding the work of the JMU. First, while auditors themselves have always pushed the added value of the audit process, many small practitioners would not buy such stories from the JMU when they came to be audited themselves. Second, given that the JMU faced similar operational problems to auditors, it seemed reasonable to ask whether JMU inspections were themselves of sufficient quality. Third, the regulatory initiatives, of which the JMU was a part, showed up deficiencies in the professional institutes’ knowledge of their members. And fourth, the problem of auditing small practitioners may have been decisive in steering the UK to accept in principle the abolition of the small company audit against the longstanding resistance of the Inland Revenue. At a stroke the number of entities requiring JMU visits was cut. Overnight audit quality became more auditable.
Making Audit Quality Auditable
Some of the criticisms of the JMU by small audit practitioners are similar to those relating to quality assurance more generally. For the quality of auditing to be itself auditable, it was necessary for the JMU to inspect firms’ own quality assurance systems, i.e. their systems for inspecting their application of, and compliance with, auditing standards. This systems approach (p. 113 ) is endorsed by Holden (1995:21), the ex-head of the JMU: ‘the whole focus of future monitoring should be around the effectiveness of the firm's quality assurance review.’ Audit quality assurance is therefore an internal audit process which is occasionally validated by JMU inspectors. Auditing, like teaching in higher education, is only auditable if the quality systems approach is taken. Once this approach is adopted, documentary appearances become vital. One of the most common audit failings identified by the JMU was a failure to perform a ‘close down review’. Literally this means a failure to finish the audit, but in fact it is impossible for the JMU to distinguish between a failure to finish the audit and a failure to write up the audit as finished.
The bureaucratic excesses about which small practitioners complained were reflected in new markets in which they could buy standard working papers and advisory publications on how to manage a visit from the JMU. Some accountants, like academics and doctors, recognize the ritualistic and expensive nature of the process. The JMU makes practitioners get the file looking neat and one senior practitioner complained that ‘Countless hours are spent “upgrading” files, not to produce evidence of their audit opinion but to satisfy voracious JMU inspectors’. He or she went on to suggest that model files may be created for bad audits and called for a review of ‘actual work carried out’, rather than just the file.26 Ironically the quality assurance approach conflicts with auditors’ own self-image that good auditing is a function of experienced professional judgement, which by definition is self-policing. The JMU can only observe and audit this judgement process at one or two removes by observing the control systems. Audit judgements are made auditable by creating these compliance systems. The big question is: do audits really improve? It is difficult to know since the quality assurance of financial auditing does nothing to overcome the essential epistemic obscurity of audit. Criteria of effective auditing which are independent of procedures would be needed for this (Chapter 2).
To conclude, there is some evidence of costly ritualistic compliance for JMU quality assurance purposes. However, it is striking that whereas researchers and doctors and many others are being pushed towards outcome based auditable performance measurement, auditors themselves are being inspected in terms of their process because the outcome of the audit process, the production of assurance, is obscure and defies measurement.27 When Day and Klein (1987:232) argue that the notion of performance as outcome is ‘genuinely difficult and elusive’ resulting in a drift towards a process which is the preserve of the professional, they could have been describing a situation in which the audit of financial auditing currently finds itself. One could say that the audit of quality finds its natural home in the context of financial auditing.
(p. 114 ) In Search of The Auditable Performance: Audit, Evaluation, and Effectiveness
In all the specific cases discussed above a common pattern is visible. Existing structures of self-reflection on practice, which have traditionally been ad hoc, local and under the control of practitioners themselves, have been harnessed to regulatory initiatives in the environment. Despite explicit pleas to differentiate learning and accountability, internal and external auditing, one can discern the steady transformation of internal control cultures into externally auditable objects. Auditees have adopted strategies to deal with these developments but, formally at least, systems with very similar general features are being developed in diverse contexts to provide an auditable surface for the organization. It is not that self-auditing is giving way to external auditing but that both are being reshaped to ‘fit’ each other; audits must always work.
A crucial part of this reshaping is the construction of auditable performance which can be embodied in a management system and which may be reported to external parties. I have already suggested that financial auditing values have influenced the development of financial accounting. Is this direction of influence true more generally for the NPM related changes in the public sector? Does auditability drive performance measurement? Day and Klein (1990) have suggested that ‘hard data is a basic tool of inspection’ but the point can be turned on its head. Data which are inspected will seem hard, since ‘hardness’ is a function of institutionalized acceptance. To put the point another way, performance must be constructed in such a way that it can be measured, audited, and communicated to external agencies in a legitimate, rational and, yes, ‘hard’ form.28
As the case of timesheets for academics demonstrates, measures which conform to ideals of replicability, calculability, visibility, portability, and legitimacy can be constructed in many different and contestable ways. Day and Klein (1987:92–5) draw attention to the well recognized problem of defining objectives and performance for public services whose outputs are difficult to identify (e.g. education, research, policing, auditing). Where the specialist knowledge base of the practice itself is complex (medicine) and/or internally controversial (social work, psychoanalysis) these problems are compounded. Attempts to grade casualty departments in terms of the length of time patients wait to be seen, day care centres in terms of throughputs of the elderly, schools in terms of examination results and so on all have a certain plausibility. It is widely accepted that such factors should play some role in an evaluation of the organization. But as ‘measures of the measurable’ in abstraction from local complexity, there are problems. As Klein and Carter (1988:14) put it, ‘performance is a contestable notion’ and much depends on the particular characteristics of the service organization. The significance of a measure (p. 115 ) may also lie less ‘in the practical use that has been made of it so far than in the messages sent out by its production’ (Day and Klein, 1990:29). For example, are installed performance figures ends in themselves, a basis for nuanced internal debate, or a first step towards deep cultural change in an organization?
Klein and Carter (1988) distinguish between outputs and outcomes and define effectiveness as the relation between the two. Outputs are often service activities and can be problematic to measure. Outcomes are the impacts or consequences (intended and unintended) of these outputs and it is often equally problematic to identify them and connect them to outputs. In the examples considered so far one can discern a tendency to obliterate this distinction and to abandon the causal demands of outcome based measures. Accordingly, performance measurement gravitates towards outputs and the systems for producing them; it is around these measures that a certain style of management control can be exercised unencumbered by the contingencies of how such outputs might relate to desired outcomes. In other words, the difficult connection between service activities and outcomes can be ignored in favour of the (more auditable) intermediate outputs of the activities, such as examination results or average waiting times for patients. And where these outputs are also problematic, there is a further tendency to drift towards inputs, such as costs, which are readily auditable.
The distinction between outputs and outcomes and the tendency for ‘performance’ audit to drift towards outputs is a crucial issue. What is at stake is the compatibility of two logics, broadly that of auditing on the one hand and that of evaluation on the other. Coupled to these two logics are questions of turf and of power to define what counts as adequate performance. One logic has developed from a home base in input auditing, focusing on the regularity of transactions, towards the audit of measurable outputs. The other, though not without problems and much less coherent than audit as a practice, is traditionally more sensitive to the complexities of connecting service processes causally to outcomes. The audit explosion represents a systematic shift from the logic of evaluation to that of auditing, a shift which puts auditing itself beyond evaluation.
Audit and Evaluation
Preoccupations within public sector services with needs, inputs, and professionally supervised processes came under increasing criticism in the early 1980s. Managerial imperatives began to displace both professional evaluative structures, such as peer review, and methods rooted in the social sciences. In comparison with evaluative practice, which often generates conflict and ambiguity, audit is attractive for its apparent objectivity. However, audit in new contexts is not merely neutral verification but an (p. 116 ) agent of change which creates the organizational basis for internal and external verification of economy, efficiency, and effectiveness. The rise of the Audit Commission and its emphasis on performance measurement and audit is paradigmatic of this managerial agenda.
The Audit Commission has already been described in Chapter 3. Its role as an agency of change became explicit in the late 1980s as it began to encourage the development of internal performance measures and systems for financial management (Henkel, 1991:205). Over this period a more integrated audit concept emerged. Through programmes of special studies to develop subsequent VFM auditing guidance, the Commission attempted to install the possibility of auditability in target organizations. However, it was initially a weak body and ‘in its early years, the Commission was undertaking local “value for money” audits with substantially the same type of expertise (that of accountancy) as had been deployed on the much narrower functions carried out by its predecessor body’ (Henkel, 1991:183). Slowly the Audit Commission developed new advisory roles and acquired confidence in the area of effectiveness and performance measurement, applying variants of the conceptual toolbox of management consultants, such as McKinseys. Without doubt, the Audit Commission has thrived in an institutional environment in which questions of politics were converted into questions of resource management.
The economic order of local government had to be changed before it could be regulated and the development of performance indicators was essential to this. However, operational relations between the Audit Commission, local authority elected council members and non-elected executive officers were, and remain, variable and complex. Day and Klein (1990:56) emphasize the ‘dependence of the audit on the professional expertise of those being audited’, suggesting that audits are essentially collusive in nature and that ‘pure policing’ is never possible. Accordingly, relations with auditors are not always antagonistic and much depends on the internal political and administrative style within the local authority itself. High conflict tends to give a high profile to the audit report but equally such reports can be used selectively and are not stereotypical cost cutting documents.
Despite these qualifications and complexities, the work of the Audit Commission provides a powerful model for inspectorates such as the SSI and HAS which grew out of organizations oriented towards ‘professional enlightenment rather than … political accountability’ (Day and Klein, 1990:58). The accountancy base of the Audit Commission undoubtedly gives it a legitimacy which other evaluators lack: ‘the techniques of presentation, the deployment of argument and the choice of language and symbol played a significant part in the acquisition of authority on the part of the Audit Commission’ (Henkel, 1991:224). The Commission's unique ‘combination of technical modelling and political argument’ (Henkel, 1991:216) characterizes (p. 117 ) a dominant audit style which emphasizes the importance of clear and measurable objectives within a strong managerial system of control. In contrast, organizations like the SSI have to manage the tensions created by standards of performance for an area, social work, which is traditionally rather insecure about the professional basis of what it does and in which professional process based values often conflict with external assessments oriented towards outcomes.29
Although the SSI exists somewhat in the shadows of the Audit Commission and could never be transformed into a financial auditing body, its development suggests an emergent hierarchy between audit and evaluation, an issue which goes to the heart of value for money auditing and which casts the construction of auditable performance in terms of inter-professional struggle. Where outcomes, and hence effectiveness, are ambiguous or controversial for professionals themselves (such as in psychoanalysis or social work), cost imperatives and output measures tend to dominate the language of evaluation. In the climate of NPM the logic of performance audit encodes a hierarchical relation between cost considerations and non-financially based evaluation. In this way, the development of auditable performance measures is much more than a technical issue: it concerns the power to define the dominant language of evaluation within this hierarchy (Day and Klein, 1987:238). When Day and Klein (1990) argue that the essential tension between cost and quality could be solved by joint inspections, they underestimate the territorial issues at stake and this tendency for the three Es to be related hierarchically such that economy and efficiency values ‘oversee’ those of effectiveness.
As noted in Chapter 3, the operational relation between the three Es within the structure of value for money auditing attempts to steer a path between the financial logic of economy and efficiency and the more elusive set of skills required to formulate judgements about effectiveness, the performance auditing component of VFM audit. Initially it seemed that accountants were reluctant to stray beyond their expertise; in a VFM study of the care of children, there was no attempt made to draw conclusions on the relative merits of different methods of child care (e.g. fostering, residential care) even though these methods had clear cost implications (Kimmance, 1984:243). The relative merits of such methods, in terms of the objective of providing children with a chance to develop in a stable environment, were perceived as a matter for professional social workers to determine. Accordingly, accountants are not crudely unaware of problems of measurement in complex service organizations (e.g. Kimmance, 1984:236) and as Henkel (1991) has observed, many VFM auditors were caught between the restraints of their competence and a desire for impact.30
The question of the knowledge base of performance auditing has a long history, particularly in the USA where both audit and evaluation have been (p. 118 ) and are important resources for the GAO. Rather than trying to draw absolute analytical distinctions between audit and evaluation one should pay attention to how the distinction is used and to what end. For example, Chelimsky (1985) distinguishes between the two practices as follows. Audit focuses on verification, i.e. the correspondence between some operation and certain standards to which it should conform. In contrast, evaluation has two streams focusing on cost-effectiveness on the one hand and the assessment of programmes in influencing outcomes on the other. From this point of view audit is a normative check whereas evaluation provides empirical knowledge and addresses cause and effect issues; audit is orientated towards compliance as a normative outcome whereas evaluation seeks to explain the relationship between the changes that have been observed and the programme. Without normative standards of conduct, audit is undermined. Hence the importance of standards of performance which create a normative template to make an operation auditable. In contrast it is argued that evaluation is much less affected by ambiguity about standards of performance and objectives.
On this view performance can be audited when clear performance measures and standards of performance exist (Flint, 1988). Otherwise performance can be evaluated in the two ways described above. Furthermore, Chelimsky (1985:501) describes the ‘cost-effectiveness’ aspect of evaluation as an ‘auxiliary, not an alternative approach’ to the evaluation of performance. Accounting skills are important here but essentially subordinate. The Financial Controller and acting director general of DGXX in the European Commission offers a hierarchical view of the relation between audit and evaluation in direct contrast to that of Chelimsky. He uses the distinction slightly differently and reserves the term evaluation for the assessment of outcomes in contrast to the audit of cost-effectiveness. Although forms of self-evaluation are viewed as a necessary threshold for any spending to be taken seriously, cost effectiveness auditing sits above them and takes into account whether such self-evaluation programmes exist: ‘I'm not going to do the evaluation. I'm going to make sure that evaluation is being done’ (Pratley, 1995:261).
Even though Roberts and Pollitt (1994) draw attention to the crude nature of cost-effectiveness analysis, especially where it substitutes for a broader based evaluation of the relation between outcomes and intentions, it nevertheless satisfies the regulatory mood of the times.31 Conceiving of performance evaluation primarily in terms of cost-effectiveness overcomes the problems of an evaluation community whose epistemology is constructivist, and therefore less useful for policy purposes (Henkel, 1991:179), and brings evaluative practice closer to the domain of accountants. Equally, as Hepworth (1995) suggests, as performance auditing moves closer to evaluation, accountants will lose their hold on the work. Hence the contestability of performance evaluation has implications for different professional (p. 119 ) groups. In exploring the differences and commonalities between performance audit and evaluation, Pollitt and Summa (1997) draw attention to the greater rights of access and legitimacy of audit bodies like the NAO in the ‘web of power’ and the greater occupational distinctiveness of auditors as compared with evaluators. Although evaluators may have formal freedom to define objects of investigation, performance auditors increasingly work well beyond their original remits with an evidential style which is non-research based and procedural in form. Typically the audit feedback process embodied in a formal management control system reflects a naively mechanistic and self-corrective view of organizational change, with levels of often highly ritualized calculative specificity.32
To conclude, I have used the motif of the ‘auditable performance’ to suggest that the development of audit practices and the design of performance measures are not independent. This has always been true for financial auditing and accounting and in Chapter 4 the extreme case was suggested where it was the performance of the control system itself which was the relevant auditable performance. In the case of VFM auditing, and the audit of effectiveness in particular, the whole notion of performance, and hence of auditability, is contested and problematic. There are tendencies to favour the administrative objectivity of auditable measures of performance which are replicable and consistent even if they are essentially arbitrary. This is preferred to the nuances, ambiguities and qualifications which surround evaluation in all its guises. In the end, the problem has much to do with the nature, extent, and impact of management intervention in the operational judgements of service providers such as teachers and doctors (Pollitt, 1990:438). In the audit society the power to define and institutionalize auditable performance reduces evaluation to auditing.
Conclusions: Audit as Fatal Remedy?
In this chapter it has been argued that auditing and the development of concepts of performance are mutually constitutive. This is because performance is itself an ambivalent concept which can be anchored in terms of the functioning of a management control system or in terms of measures of output, which could figure in such a system, or in terms of outcomes which would remove assessment of performance from managerial and hence auditing control. The power of auditing is therefore to construct concepts of performance in its own image. The effects of this power were considered in three cases which suggest varying degrees of decoupling and colonization for the audit process. In the case of the audit of effectiveness, a part of value for money auditing, it was argued that concepts of performance were contestable in terms of competing primary orientations (p. 120 ) towards audit and cost control on the one hand and the evaluation of outcomes compared to intentions on the other. The mood of the NPM is such that audit tends to dominate evaluation and that performance tends to be measured in terms of auditable outputs.
The lesson of regulatory history is that, in the end, all experiments in control fail and lead to further reforms: ‘Good monitoring systems are hard to design. Getting the right information about the agent's performance, without drowning the principal in paper, is difficult. So too is developing a feedback system that gives the principal the needed information without interfering with the agent's work’ (Kettl, 1993:29). But auditing cannot just be understood in such formal and technical terms. Accountability and performance are constantly elusive, discipline-specific (Sinclair, 1995:221) and problematic as more elaborate, expensive and intrusive surfaces for control are constructed with little knowledge of their potential consequences. Accounting and audit practices exist in a process of near constant change, tossed to and fro between the demands of different and often contradictory programmes.
Decoupling and compartmentalization are the rule because individuals are infinitely more complex and adaptable than normalizing attempts to measure and control them; a substantive, messy rationality always reasserts itself over formal, technical rationality (Ezzamel and Willmott, 1993:127). And yet formal colonization is also always the rule because new forms of organizational language become institutionalized, percolate into domains even where active decoupling is pursued and become interpretive schemes which shift motivations (Laughlin, 1991). Somewhere between these extremes the gains and losses of the audit society must be evaluated. The key question is not just whether there are intended gains to be weighed against unintended side-effects but whether elements of decoupling and colonization lead to ‘reverse effects’ (Sieber, 1981) in which original goals of financial control and effectiveness are actually frustrated and undermined.33 In the context of ‘gesture politics’ with little interest in the longer term effects of intervention, systematic mechanisms do not exist to understand reverse effects, especially those which affect disenfranchised groups. Furthermore, where objectives are vague and unclear, and they usually are at the level of political programmes, the question of whether reverse effects have even occurred will itself be contestable.
In the case of auditing some reverse effects can be suggested. An increase in pointless information systems leads to ‘inspection overload’ (Day and Klein, 1990) and misdiagnosis (in contrast to official deregulatory and minimalist myths). Decoupling strategies are exacerbated but there is also a decline in organizational trust which creates an inhibiting and ‘anxious preoccupation with how one is seen by others’ (Roberts, 1991:366). It has also been said that the rise of performance related contracts has led to irrevocable damage to cultures of trust (Day and Klein, 1987:235). New (p. 121 ) games are created to demonstrate quality and substantive performance declines: ‘formal controls instituted to enhance trust by increasing performance reliability can undermine trust and thus deter achievement of the very goals they were put in place to serve’ (Sitkin and Stickel 1996:197). Superficial commitments to empowerment reinforce forms of exclusion and auditors are captured or co-opted into turf battles, thereby forgetting the original intention of the programmes they serve. As the means becomes the end, there is a continuing overcommitment to create politically acceptable images of control. ‘Off the shelf’ (Nolan and Scott, 1993) audits are used to represent the auditee organization so as to make its activities less heterogeneous, less complex, and less uncertain.
These adverse effects are constantly eclipsed by the programmatic imperative that audits must work. In this respect one might compare this imperative with that which informed the detailed output targets of the former Soviet Union. This was a situation characterized by pathologies of ‘creative compliance’ (McBarnet and Whelan, 1991), poor quality goods and the development of survival skills to show that targets, often impossible ones, were achieved. Games are played around an ‘indicator’ culture where auditable performance is an end in itself and real long term planning is impossible.34
To conclude, there is a need to recognize these ‘regulatory paradoxes’ which surround audit and a need for ways to evaluate the audit explosion which are sensitive to the incentive effects through which micro-rationalities can subvert macro-rationality (Sunstein, 1990:432). The creation of an academic transfer market by research assessment exercises is a good example: the RAEs may be a fatal remedy for the university sector by stimulating behaviour which increases the costs for the sector as a whole. Grabosky's (1995a) solution to such ‘counterproductive regulation’ is better monitoring but this does not help when it is the monitoring itself which is counterproductive. The solution, if any, lies in making the effects of auditing visible. This means that audit will need to be evaluated rather than audited, a move which requires aprioristic policy making to rediscover the complexities of cause and effect. Even though audit has been audited, the power of auditing is itself one of the institutional barriers to the evaluation of audit.
Overall, the discussion of auditable performance measurement suggests how ‘the anxious ruler tries to make his phantasies come true by way of a mixture of minute controls and rigorous isolation’ (Van Gunsterten, 1976:142). Undoubtedly the programmatic faith in auditing reflects wider social anxieties and a need to create images of control in the face of risk. This will be considered in greater depth in the concluding chapter.
(1.) This binary classification of colonization and decoupling is too simple and obscures many other possible auditor/auditee relations, such as cooperation, anomie, and collusion (see Day and Klein, 1990:39). However, it provides a useful analytical overview.
(2.) See Bourdieu (1990a) who argues against the idea of the ‘informal’ somehow lying behind the formal. Within codification there is a constant two way process in which zones of informality are assembled and disassembled.
(4.) See Power (1994c:375). See also Shapiro (1992) who analyses data audits conducted by the US Food and Drug Administration (FDA) on sub-contracted tests. Evidence of unperformed, falsified, and modified data and lack of audit trail is reported. More generally, these results increase the momentum for the introduction of quality systems into laboratories.
(5.) See ‘V-Cs Reject Quality Red-Tape’ Times Higher Education Supplement 22 January, 1993. It is likely that a new agency will emerge from the joint deliberations of the CVCP and the HEFC.
(7.) It is no surprise that environmental auditing can be grafted on to these arrangements. BS 7750 for universities is now a realistic option and it was reported that Warwick University were undertaking an environmental audit. See Times Higher Education Supplement 4 February, 1994.
(8.) See, for example ‘Tripping on Red Tape’, Times Higher Education Supplement 12 December, 1992; ‘Concern at Pointless Quality Rules’, Times Higher Education Supplement 9 April, 1993; ‘Universities Balk at Review Team Costs’, Times Higher Education Supplement 17 September, 1993. Howarth (1995) has argued that HEFC and CVCP ‘seem determined to impose time-consuming methods of “assurance” (based on short-term observations of practice) and “audit” (based on organizational paperwork).’
(9.) See ‘The Parts Assessors Can't Reach’, Times Higher Education Supplement 4 February, 1995.
(10.) See ‘Feedback that Is Hard to Swallow’, Times Higher Education Supplement 25 December, 1992.
(11.) It has also been argued that evaluative systems devalue and compress time for public policy intervention (Puxty et al, 1994; Willmott, 1995).
(12.) See ‘Beaten with a Yardstick’, Times Higher Education Supplement 11 February, 1994.
(13.) I shall use the term medical audit in a broad sense. However, there is a distinction to be drawn between clinical auditing, relating to the entire cycle of care, including nursing and out-patient arrangements, and medical auditing which has a narrower diagnostic focus (see Exworthy, 1995). The present discussion does not rely on these distinctions.
(14.) Private correspondence with a medical practitioner, October 1994.
(15.) Private correspondence with a medical practitioner, October 1994.
(16.) Similar absorptive, decoupling strategies are visible in schools' responses to the ‘Local Management for Schools’ (LMS) initiative, which was introduced in the wake of the 1988 Education Reform Act (See Broadbent et al, 1993; Shearn et al, 1995).
(17.) These observations are based on my experience and discussions as a participant at a workshop on ‘Audit and Psychotherapy’ at the Tavistock Clinic, London, January 1995.
(18.) An editorial in a medical journal is typical, calling for multidisciplinary practice and defining audit by distinguishing it from its financial namesake. See ‘Constructive audit’ Palliative Medicine 1990 4(1).
(p. 156 ) (19.) A study of 379 audits at the University of Newcastle Centre for Health Services Research showed that while 80 per cent of audits identified the need to alter practice, this was only attempted in 40 per cent of cases. Ninety per cent of audits were incomplete.
(20.) Practitioners have complained about high subscription costs and the JDS has found itself unable to conduct its own investigations following the closure of BCCI. See ‘Critical Smoke Obscures Disciplinary Reforms’ The Financial Times 8 August, 1992.
(21.) See ‘In the Twilight of Self-Regulation’, Accountant October 1994, 7.
(22.) Prior to the formation of the Audit Commission, there was an inspector of audit charged with developing local government audit practice. The Commission assumed this role (Kimmance, 1994:228).
(23.) Initially, 10,000 registrants were expected covering 900,000 companies. See ‘Auditors Pay for Registration’ Independent 16 October, 1990. On the technical problems of compiling the audit register, see Fearnley and Willet (1995).
(24.) See ‘Accountancy Body Faces Legal Threat’ The Financial Times 31 May, 1991; ‘The Institute Prepares to Flex its Muscles’ The Financial Times 4 July, 1991; ‘Touche attacks ICAEW audit role’ The Financial Times 1 May, 1992.
(25.) A similar story was told by the regulators for the ACCA and the Association for Authorised Public Accountants. See ‘Auditors Called to Account’, The Financial Times, 3 February 1993.
(26.) See ‘JMU Needs to Rethink Regulation Strategy’, Accountancy Age, 25 January 1996. Whether this view is representative of small practitioner sentiment is unclear.
(27.) In this respect it is interesting to note that the NAO audits the Audit Commission and the NAO is itself audited by the firm of Clark Whitehill (whose methods would be audited by JMU). Bowerman (1994b) has argued there is no in-depth VFM audit of the Audit Commission and little attention to effectiveness issues for these auditors other than in terms of identified savings. This is because audits are audited, not evaluated!
(28.) It could be argued that within UK financial reporting for private sector organizations there have been attempts to make auditable performance ‘softer’ by de-emphasizing single figure measures of performance and by encouraging experimentation with a new narrative form of reporting, the Operating and Financial Review.
(29.) It should be noted that questions of colonization apply equally to bodies like the SSI which, like the Audit Commission, was also able to define policy through the process of inspection (Day and Klein, 1990:30).
(30.) There has been much debate about the appropriateness of an accountancy background for effectiveness audit work, often with half an eye on the pluralistic skills base of the GAO (Pendelbury and Sherim, 1990:179). Whereas financially based auditors themselves felt they could do effectiveness audits, this view was not shared by senior management in the auditee (Pendelbury and Sherim, 1991). From a different perspective, Harden (1993:36) has suggested that the concept of ministerial responsibility encourages a focus on efficiency and economy.
(31.) Despite, and perhaps in reaction to, the audit explosion there has been a resurgence of interest in evaluation (see Laughlin and Broadbent, 1995). In the UK a new journal was established in 1995 to coincide with the inception of the UK Evaluation Society. The evaluation tradition is heterogeneous as compared with auditing and the establishment of such a society, which has a European counterpart, is an attempt to institutionalize, formalize, and legitimize the field. I am grateful to Christopher Pollitt for pointing out to me that at the first conference of the European Evaluation Society in the Hague in 1994, auditors were the largest single identifiable group. This suggests that auditors themselves will play a large role in the institutionalization of evaluation.
(32.) These criticisms of audit must be balanced against a naive enthusiasm for evaluation as an alternative. There is a longstanding tradition of criticizing evaluation. For example, in the context of schools and medicine, Ivan Illich has been a prominent critic. See also Sieber (1981, Chapter 2). So the contrast with auditing is not intended to imply unconditional support for evaluation against auditing. It has even been suggested that ‘Evaluation talk is not nonsense but meta-nonsense, a more or less random arrangement in chapters, boxes, arrows and flowcharts of phrases such as: data collection points, needs assessment scales; goal progress charts; the hierarchical pyramid of goals, subgoals, basic objectives and action objective; programme completion criteria; programmatic activity evaluation forms; follow up assessment; outcome comparisons’ (Cohen, 1985:179).
(33.) Marx (1981:236–7) provides one of the most dramatic examples of the ‘reverse effect’ of performance measurement: ‘When the police organizations' system of performance evaluation, reward and promotion emphasizes quantitatively measured productivity (as tends to be the case in more professional and bureaucratized departments), there may be a strong incentive for police facilitation of crime to meet monthly quotas.’
(34.) I am grateful to Professor R. Amman for alerting me to this sovietological comparison with the UK.