Between Truth and Power: The Legal Constructions of Informational Capitalism

Julie E. Cohen

Print publication date: 2019

Print ISBN-13: 9780190246693

Published to Oxford Scholarship Online: October 2019

DOI: 10.1093/oso/9780190246693.001.0001


6 The Regulatory State in the Information Age

Julie E. Cohen

DOI: 10.1093/oso/9780190246693.003.0007

Abstract and Keywords

This chapter explores changes in institutions and processes for economic regulation. The emergence of the platform as informational capitalism’s core organizational logic and of datafication as its principal logic of commodification have disrupted traditional, industrial-era approaches to defining both markets and harms, making it more difficult to articulate compelling accounts of what precisely should trigger regulatory oversight and enforcement. At the same time, settled ways of thinking about the appropriate modalities of administrative lawmaking have come under challenge. Emergent institutional models for oversight of information-economy activities are procedurally informal and emphasize ongoing compliance with performance-based standards intended to guide complex, interdependent sets of practices. Those models also reflect the influence of the managerial turn. They rely heavily on privatized self-regulation, compliance certification by professional auditors, and financialized review to minimize regulatory burdens and costs, and they have tended to be both opaque to external observation and highly prone to capture.

Keywords: algorithms, audit, compliance, financialization, governance, managerialism, platforms, regulation, risk, self-regulation

A European says: I can’t understand this, what’s wrong with me? An American says: I can’t understand this, what’s wrong with him? I make no suggestion that one side or other is right, but observation over many years leads me to believe it is true.

—Terry Pratchett, interview

In Chapter 5, we saw that the movement to informational capitalism and the emergence of neoliberal governmentality are reshaping legal institutions for dispute resolution; here, we will see that an equally significant reconstruction of the regulatory state is underway. The design of regulatory institutions reflects prevailing legal wisdom about fair and effective process, but it also responds—and indeed, is designed to respond—to problems created by prevailing modes of economic production and resulting alignments of economic and political power. The institutions that we now have were designed around the regulatory problems and competencies of an era in which industrialism was the principal mode of development. The ongoing shift from an industrial mode of development to an informational one, and to an informationalized way of understanding development’s harms, has created existential challenges for regulatory models and constructs developed in the context of the industrial economy. At the same time, the ascendancy of neoliberal managerialism has produced new strategies for disciplining the regulatory state, reshaping its constituent frameworks and processes in ways that align with overarching ideological commitments to privatization, financialization, and expert, informationalized oversight.

Consider the following example: In September 2015, the public learned that European automotive giant Volkswagen had designed the emissions-control software for its diesel engines to comply with prescribed emissions limits only when the software detected that a vehicle was being subjected to emissions testing. At all other times, the software employed a “defeat device” to disable emissions-control functionality, resulting in emissions that vastly exceeded applicable regulatory limits. The scandal resulted in the resignation of Volkswagen’s CEO, a precipitous drop in the company’s stock value, and a wave of fines and recalls spanning three continents.1
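
To make the mechanism concrete, the conditional logic at the heart of a defeat device can be sketched in a few lines of deliberately simplified, hypothetical code. The sensor names, thresholds, and mode labels below are invented for exposition and do not reproduce any manufacturer's actual software; the point is only that the cheat is a small, mundane branch buried in a much larger control program.

```python
# Hypothetical sketch of defeat-device logic, for exposition only.
# Sensor names, thresholds, and mode labels are invented.

def looks_like_emissions_test(steering_angles, ambient_temp_c,
                              wheel_speed_kph, gps_speed_kph):
    """Guess whether the vehicle is on a dynamometer running a standard test cycle.

    A test stand typically shows no steering input, a lab-range ambient
    temperature, and driven wheels spinning while the chassis stays put.
    """
    no_steering = all(abs(angle) < 1.0 for angle in steering_angles)
    lab_temperature = 20.0 <= ambient_temp_c <= 30.0
    wheels_spin_but_car_is_still = wheel_speed_kph > 10.0 and gps_speed_kph < 1.0
    return no_steering and lab_temperature and wheels_spin_but_car_is_still


def select_emissions_mode(steering_angles, ambient_temp_c,
                          wheel_speed_kph, gps_speed_kph):
    """Return the exhaust-treatment mode the engine controller will apply."""
    if looks_like_emissions_test(steering_angles, ambient_temp_c,
                                 wheel_speed_kph, gps_speed_kph):
        return "full_treatment"      # comply while being observed
    return "reduced_treatment"       # favor performance and fuel economy otherwise
```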

The Volkswagen scandal neatly encapsulates many of the challenges that the shift from industrialism to informationalism has presented for regulators. From one perspective, the automobile industry is a paradigmatic industrial-era formation. Modern regimes of emissions regulation, however, are the product of an information-era realignment in societal understanding of the harms flowing from economic development. Additionally, computer software resides at the core of the modern automobile and regulates nearly everything about its performance. The story of the defeat device revealed a regulatory apparatus pushed beyond its capabilities by the cumulative impact of these developments.

To begin with, the striking success of Volkswagen’s defeat device—which escaped detection for six years and ultimately was discovered not by regulators but by independent researchers—illustrates a large and troubling mismatch between regulatory goals and regulatory methods. Debates over emissions targets have long been deeply politicized, but the now-undeniable need for emissions regulators to move into the software audit business adds new and unfamiliar methodological and procedural problems into the mix. If regulation of automotive emissions—and thousands of other activities ranging from loan pricing to derivatives trading to gene therapy to insurance risk pooling to electronic voting—is to be effective, policymakers must devise ways of enabling regulators to evaluate algorithmically embedded controls that may themselves have been designed to detect and evade oversight.
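
One output-based audit strategy, sketched below with invented measurements and an invented flag threshold, mirrors what the independent researchers effectively did: compare certified test-cycle emissions against portable on-road measurements and flag vehicles whose real-world figures exceed the certified ones by an implausible margin.

```python
# Illustrative output-based audit: flag vehicle models whose on-road NOx
# emissions far exceed their certified laboratory results. All figures and
# the flag threshold are invented for exposition.

CERTIFIED_NOX_G_PER_KM = {"model_a": 0.08, "model_b": 0.08}   # lab certification
ON_ROAD_NOX_G_PER_KM = {"model_a": 0.09, "model_b": 1.10}     # portable measurement

def flag_suspect_models(certified, on_road, ratio_threshold=4.0):
    """Return models whose real-world emissions exceed certified values by
    more than ratio_threshold, a crude signal of possible test detection."""
    suspects = []
    for model, lab_value in certified.items():
        road_value = on_road.get(model)
        if road_value is not None and road_value > ratio_threshold * lab_value:
            suspects.append((model, road_value / lab_value))
    return suspects

print(flag_suspect_models(CERTIFIED_NOX_G_PER_KM, ON_ROAD_NOX_G_PER_KM))
# -> [('model_b', 13.75)]
```

The point is not that such a script would suffice, but that oversight of embedded controls has to rest on operational data gathered outside the conditions the software can recognize, not solely on certification-time outputs.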

The Volkswagen scandal also illustrates the pervasive institutional influence of economic power—and shows that influence operating on levels that are both political and ideological. In the weeks after the news broke, press coverage documented Volkswagen’s systematic efforts to stave off more intrusive regulation in the European Union and probed its close ties with the private European emissions testing laboratories that act as regulatory surrogates.2 Such efforts and ties are not unusual, however. Scholars and policymakers have long recognized that regulated industries are intensely interested in matters of regulatory capacity and institutional design. More noteworthy are Volkswagen’s apparent justifications for designing and installing the defeat device: it was deemed necessary to improve engine performance, which in turn enabled Volkswagen to maintain and burnish its glowing reputation as an innovator in the field of automotive design.3 Also noteworthy is European regulators’ choice to devolve primary responsibility for emissions testing to private entities that certified compliance.

The themes of innovative flexibility and privatized oversight have gained increasing traction within regulatory settings as the movement to informational capitalism has gathered speed. For the last several decades, advocacy emanating from Wall Street and Silicon Valley has pushed for deregulation and devolution of governance to the private sector, invoking asserted imperatives relating not only to market liberty but also and more fundamentally to innovation and economic growth. The particular formulations advanced often are more accurately characterized as capital’s imperatives, and yet the intertwined themes of liberty, innovation, and growth have proved extraordinarily powerful in structuring public debate about regulatory goals and methods.

The chapter begins by identifying some important areas of disconnect between information-era activities and industrial-era regulatory constructs. Industrial-era regimes of economic regulation presumed well-defined industries, ascertainable markets and choices, and relatively discrete harms amenable to clear description and targeted response. The shift to an informational political economy has disrupted those presumptions, making it more difficult to articulate compelling accounts of what precisely should trigger compliance obligations, enforcement actions, and other forms of regulatory oversight.

Next, the chapter explores the connections between information-era regulatory problems and ongoing changes in the design of regulatory institutions. Among U.S. legal scholars, there is fairly widespread consensus that administrative law is in crisis but substantially less agreement on the reason. Perhaps unsurprisingly, administrative law scholars focus primarily on the disintegration of the legal process paradigm that has animated the regulatory state since its inception.4 As is now widely recognized, much current regulatory activity follows nontraditional institutional models. Such activity may blend policymaking and enforcement, involve public-private partnerships in rulemaking and standard setting, and/or devolve responsibility for assessing compliance to private auditors. Nontraditional regulatory models are particularly prominent in areas such as privacy, telecommunications, health, food and drug regulation, and finance, all of which are information-intensive. This is (or should be) unsurprising. As we have just seen, auditing a compliance algorithm to detect embedded cheats is a different and more difficult task than simply assessing engine outputs. Similarly, auditing a credit rating algorithm, interrogating the health implications of a new food additive, or evaluating the competitive implications of a dominant platform firm’s acquisition of an information aggregator is a different and more difficult task than evaluating a proposed merger between two grocery chains or inspecting a factory assembly line. For these and other reasons, scholars in a variety of fields, including cyberlaw, telecommunications, information privacy, and finance, have argued that regulatory processes have failed to respond—and perhaps cannot in their nature respond adequately—to the regulatory problems created by information markets and networked information and communications technologies.5

As we will see, the information-era regulatory approaches that have begun to emerge reflect the distinctive imprint of neoliberal managerialism. They are procedurally informal, standard-based, mediated by expert professional networks, and increasingly financialized. Theoretically, at least some of those attributes might make the new models better suited to addressing information-era regulatory problems. In reality, institutional disruption has provided new points of entry for economic power. Emergent, nontraditional regulatory models have tended to be both opaque to external observation and highly prone to capture. New institutional forms that might ensure their legal and political accountability have been slow to develop.

Seeing Like a (Regulatory) State

Regulating information-era activities requires frameworks for making sense of the activities being regulated—for understanding how they work and identifying their legitimate and illegitimate modes of operation. The movement to informational capitalism, and especially the growing centrality of platforms and algorithms as modes of organization and governance, has disrupted many of the basic legibility rubrics that underlie and inform regulatory activity.

Generally speaking, economic regulation in the era of industrial capitalism has had two principal concerns: facilitating fair competition in markets and preventing harms to the public health and safety. Where markets are concerned, scholars and policymakers traditionally have defined impermissible results in terms of concepts such as market power, discrimination, and deception—benchmarks that are relatively easy to assess when markets are distinctly ascertainable, goods have fixed properties, and information about consumers is limited. In the platform-based, massively intermediated information economy, none of those things is true. Markets are fluid and interconnected and information ecologies have complex and often opaque path dependencies. Debates about how to address the health and safety harms flowing from economic activity similarly reflect both the pervasive influence and the disruptive effects of new informational capabilities. Beginning in the late twentieth century, networked digital information technologies supplied new tools for conceptualizing and modeling a wide variety of complex, systemic harms. At the same time, however, the displacement of preventive regulation into the realm of models and predictions has complicated, and unavoidably politicized, the projects of identifying and addressing such harms.

In each of these contexts, neoliberal governmentality interposes its own legibility rubrics, which revolve around the presumed existence of competition, the presumptively beneficial effects of private innovative activity, and the presumptive superiority of utilitarian methods for assessing costs, benefits, and trade-offs. Those rubrics in turn reflect the pervasive influence of the logics of performative enclosure, appropriative privilege, and innovative and expressive immunity that Part I explored.

From Market Power to Platform Power

A core concern of economic regulation is identifying the circumstances in which economic power requires oversight. Power in markets for goods or services can translate into predatory pricing or barriers to competitive entry, while power embedded in the structure of particular distribution channels or relationships can facilitate other types of inefficient or normatively undesirable behavior. Platform-based, massively intermediated media infrastructures introduce bewildering new variations on these themes. Understanding economic power and its abuses in the era of informational capitalism requires discussions about the new patterns of intermediation and disintermediation that platforms enable, and about the complexity and opacity of the services they provide.

The industrial-era regulatory toolkit includes a number of regulatory schemes that are concerned with the illegal acquisition and maintenance of market power. In the United States, that group includes most notably the antitrust laws but also other, more specific regimes, such as the unfair and deceptive acts and practices authority of the Federal Trade Commission (FTC). Both regimes presume the ability to define markets in the first instance, and both also presume the ability to isolate discrete practices that harm consumers in direct and observable ways. Finally, they traditionally have presumed that the ability to tie separate markets together is both rare and suspect.

Platform-based, massively intermediated media infrastructures disrupt conventional understandings of market power and market harm. Recall from Chapter 1 that the economics of two- and multi-sided markets differ in important ways from those of traditional, one-sided markets. Because platforms can define terms for each user group separately, pricing is not a reliable sign of market power, and secondary heuristics such as the competition regulator’s basic distinction between horizontal and vertical integration strategies also do not translate well to the platform-based environment.6 The complexity and opacity of platform-based, massively intermediated exchange structures have stymied courts and policymakers used to working with more traditional economic models.
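
A stylized numerical sketch, using entirely invented figures, illustrates why price is a poor proxy for platform power: a platform can charge its users nothing at all while extracting its return from the other side of the market, so the user-facing price never registers the exercise of power.

```python
# Stylized two-sided-market arithmetic with invented numbers. The user-facing
# price is zero, so a price-based test detects nothing, yet the platform's
# surplus comes from the advertiser side and scales with the user base.

users = 50_000_000
user_price = 0.0                 # users pay nothing in money
ads_served_per_user = 200
price_per_ad = 0.01              # dollars, charged to advertisers
cost_per_user = 1.50             # infrastructure and content costs

user_side_revenue = users * user_price
advertiser_side_revenue = users * ads_served_per_user * price_per_ad
total_cost = users * cost_per_user
profit = user_side_revenue + advertiser_side_revenue - total_cost

print(f"user-side revenue:       ${user_side_revenue:,.0f}")
print(f"advertiser-side revenue: ${advertiser_side_revenue:,.0f}")
print(f"profit:                  ${profit:,.0f}")
# Doubling the user base doubles the advertiser-side surplus even though the
# price charged to users never moves, which is why pricing tells the
# competition regulator so little here.
```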

An early harbinger of the conceptual difficulties that platforms create for antitrust law was the antitrust litigation against Microsoft Corporation for bundling the Internet Explorer browser with its operating system. Microsoft’s software licenses with its original equipment manufacturers (OEMs) required that personal computers be shipped with Internet Explorer preinstalled. Competing browser manufacturers argued that given Microsoft’s undisputed dominance in the personal computing market, that requirement created unfair barriers to entry. From the standpoint of antitrust doctrine formulated for the industrial era, however, the market for browsers was unusual. To begin with, it was hard to discover a price advantage that accrued to Microsoft because the leading competitors offered their software free of charge. Microsoft also asserted copyrights in its operating system and browser software, and it invoked traditions of rightholder control over licensing to bolster its defense. Finally, and importantly, although Microsoft’s licenses prohibited OEMs from removing Internet Explorer and its desktop icons, the licenses did not prohibit either OEMs or consumers from installing and using competing browsers. In the traditional language of antitrust law, they were vertical restrictions rather than horizontal restrictions and therefore less suspect. As those who developed and prosecuted the Microsoft case recognized, however, Microsoft’s desktop environment also created powerful path-dependencies.7 Although the government ultimately obtained a judgment against Microsoft requiring it to unbundle its licensed products, the judgment issued 10 years after the complaint had been filed, and the proceedings lumbered to their conclusion without the benefit of a coherent framework for determining harm.8

In the intervening years, platform-based information ecosystems have grown ever larger and more complex. Dominant platform firms such as Alphabet (Google), Facebook, and Amazon have thrived by developing ways to offer both individual and business users a wide variety of information services while controlling both advertising markets and the harvesting and processing of user data. Many platform services are available to users at no direct financial cost, but that does not make them costless. As Chapters 2 and 3 described, loss of control over personal information creates a variety of near-term and longer-term risks that are difficult for individuals to understand—and, importantly for antitrust purposes, therefore impossible for them to value.9 As Chapter 1 explained, business users of platform services—ranging from small mom-and-pop storefronts to businesses with nationwide name recognition—also cannot easily value or monitor the quality of the services that they have purchased. The doctrinal landscape also has become more complex; as we saw in Part I, platforms use contract and trade secrecy to create and sustain competitive advantages and assert free speech interests to avert regulatory oversight.

Although antitrust doctrine and theory traditionally have prided themselves on being attuned to economic realities, they have lagged behind these rapid changes in the economic landscape. Business scholars were quick to recognize the importance of the platform business model, but antitrust thinking is far more rigidly circumscribed by preexisting conventions for economic modeling that have become enshrined in case law and government enforcement practice, so it evolves more slowly. As of this writing, there is no generally accepted definition of platform power that might replace the antitrust construct of market power and no consensus on how to remedy the effects of platform-based manipulation of the competitive environment. Only within the last few years have antitrust economists and legal scholars started working to develop a methodology for identifying the competitive injuries that platform-based environments enable. Those efforts, moreover, have met with intense pushback from neoliberal economic thinkers and think tanks.10

Other industrial-era regulatory schemes address circumstances in which high fixed costs make monopoly provision of certain services more efficient. Public utility regulation and common carrier regulation are the two principal examples. It would be inefficient, for example, to install multiple sets of water pipes or electric cables to residential neighborhoods, or to build parallel sets of railroad tracks to move freight around the country. Instead, special regulatory regimes have emerged that take a different approach to the question of market structure. Such regimes, which date to the early twentieth century and reflect Progressive-Era and New Deal concerns about both market access and market fairness, typically incorporate both rate-setting restrictions and nondiscrimination obligations.11

Whether and when communications platforms and other information intermediaries should be subject to common-carrier or public-utility obligations is a controversial topic, and debates about those questions also have lagged behind the emergence of platform power. In the United States, the debate over “net neutrality”—or the obligation to “treat all content, sites, and platforms equally”12—has followed a tortuous path dictated in part by an obsolete statutory framework to which we will return later in this chapter. On a conceptual level, however, the net neutrality debate raises questions about the best way of adapting industrial-era notions of common-carriage and/or public utility provision to information services that are much more complex. Unlike earlier twentieth-century debates about public utility and common-carrier regulation, the net neutrality debate has not squarely addressed more general questions about the extent to which communications regulation should incorporate considerations of fairness and economic justice.13

To begin with, it is important to recognize that the “net neutrality” rubric is itself a neoliberally inflected way of answering questions about economic power and public access, because it assumes that market forces operating on an intraplatform basis will produce services of adequate variety and quality as long as access providers are prevented from blocking or throttling such services. Notably, each side in the net neutrality debate has attempted to claim for itself the mantle of innovative liberty and economic growth. Large telecommunications companies argue that freedom to experiment with high-bandwidth delivery of premium services will foster economically productive innovation. Supporters of a net-neutrality mandate, including internet companies, digital civil liberties groups, and consumer advocates, argue that price discrimination within closed platforms will threaten both widely distributed innovation and freedom of expression.

Provider incentives to engage in blocking and throttling are not the only factors affecting equal access to information services, however. There are many online services that privileged consumers take for granted, but that less privileged consumers struggle to obtain because they require higher bandwidth or more versatile platforms to be delivered effectively. The Federal Communications Commission (FCC) has long overseen a program to offer basic telephone service to the poorest consumers, and more recently developed a parallel program for broadband internet access. Both in the United States and globally, however, many lower-income users rely exclusively on mobile platforms that are less versatile, less amenable to user customization and control, and designed to maximize data sensing and harvesting.14

From the internet access provider’s perspective, the ability to discriminate among different types of network traffic is valuable in part because it enables access providers to compete more effectively in data harvesting economies. The logics of performative enclosure (described in Chapter 1) and productive appropriation (Chapter 2) reinforce arguments framing network traffic discrimination as a business and innovative necessity. The same logics also increasingly infuse discussions about provision of essential services. At least some wireless internet providers would prefer to handle the essential-services problem via the practice of zero rating, in which usage of a designated suite of applications is not counted for billing purposes. Such arrangements—which, from the platform perspective, simply represent another permutation of the access-for-data bargain—are more affordable, but they are not neutral. They incentivize consumers to use zero-rated applications more heavily, and providers may grant zero-rating designations in exchange for access to data about users’ in-app behavior or other favorable terms. Internet providers seeking to build their user bases in developing economies have relied heavily on zero-rating schemes, and the net neutrality rules briefly adopted in the United States permitted zero rating subject to case-by-case analysis of reasonableness.15 In U.S. communications policy circles, thinking about the connections between network control, platform power, and data harvesting is still rudimentary.
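
The billing mechanics of zero rating are simple to sketch; in the hypothetical below, the application names, data cap, and session sizes are invented. Traffic attributed to the zero-rated suite never draws down the subscriber's allowance, which is precisely what tilts usage toward those applications.

```python
# Minimal sketch of zero-rating billing logic. Application names, the monthly
# cap, and session sizes are invented for illustration.

ZERO_RATED_APPS = {"sponsored_social", "sponsored_video"}
MONTHLY_CAP_MB = 1_000

def bill_month(sessions, zero_rated=ZERO_RATED_APPS, cap_mb=MONTHLY_CAP_MB):
    """Return (billable MB, remaining allowance) for a month of sessions.

    Usage of zero-rated applications never counts against the cap, so heavy
    use of those applications is costless to the subscriber while all other
    usage depletes the allowance.
    """
    billable = sum(mb for app, mb in sessions if app not in zero_rated)
    return billable, max(cap_mb - billable, 0)

month = [("sponsored_video", 800), ("open_web", 300), ("rival_video", 250)]
print(bill_month(month))   # -> (550, 450): 800 MB of sponsored video was "free"
```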

By comparison, European regulators have engaged with the problem of platform power and its complex embeddedness in data harvesting arrangements far more aggressively. Over the past two decades, competition regulators have initiated several investigations of Microsoft, Google, and Facebook for alleged anticompetitive actions. In 2017, the European Commission announced that it had fined Google a staggering 2.4 billion euros for abusing its dominant position in search to the advantage of its own online shopping service, though whether that fine will survive review is yet to be determined.16 Data protection authorities, although willing to experiment with coregulation and to allow some leeway for consumer consent to data processing, have steadfastly insisted that guarantees of transparency and purpose limitation should be meaningful and that consumer consent has limits.17 As to network neutrality, European regulators generally have been inclined to view internet access as a type of public utility. They have exempted both certain high-bandwidth “specialised services” such as internet TV and zero-rating schemes deployed by European internet providers from the full force of neutrality obligations, but European telecommunications policy also incorporates strong privacy and data protection obligations.18

European policymakers also have made more concerted and thoughtful efforts to understand the economic logics underlying the platform business model and the various kinds of external costs that platform power can create. One component of the digital single market strategy announced by the European Commission in 2015 is a research initiative on online platforms that includes investigation of those questions.19 Although it is too early to judge the success of that effort, it reflects comparatively greater openness to new ways of seeing and describing logics of economic domination.

In U.S. legal and policy discussions, to offer European regulatory actions as valid alternative models is to risk vehement and at times nearly unhinged ridicule. The historical U.S. antipathy to European-style bureaucracy does not fully explain the level of contemporary vitriol, levied indiscriminately against actions that appear to privilege dominant information providers and those that seek to restrain them.20 A more accurate explanation simply may be that the behavior of European regulators contradicts the reigning neoliberal account of optimal regulatory behavior. As we saw in Chapter 3, that account paints “regulation” as innovation’s mortal enemy. U.S. critics also charge European regulators with attempting to institute a regime of economic protectionism that would give European businesses an unfair advantage. Protectionism can flow from underregulation as well as from overregulation, however. It is true that European regulators make no secret of their desire to see domestic businesses gain ground, but it is also true that U.S. stances on antitrust and data protection have permitted a race to the bottom, fueling the rise of the dominant U.S. platform giants and hastening the accumulation of platform power.

From Market Distortion to Infoglut and Intermediation

Regimes of economic regulation also include anti-distortion rules—rules intended to ensure that flows of information about the goods, services, and capabilities on offer are accurate and unbiased. Some anti-distortion rules are information-forcing; rules in that category include those requiring disclosure of material information to consumers or investors. Other anti-distortion rules are information-blocking; examples include prohibitions on discrimination, false advertising, and insider trading. The difficulty currently confronting regulators is that contemporary conditions of infoglut and pervasive intermediation disrupt traditional anti-distortion strategies. To achieve meaningful anti-distortion regulation under those conditions, a different regulatory toolkit is needed.

The rationales behind information-forcing and information-blocking rules are straightforward. According to standard microeconomic theory, transactions between willing buyers and sellers generally will produce prices that accurately reflect the characteristics of goods and services, including any nonprice terms that meaningfully affect the quality of the good or experience.21 Sometimes, however, goods and services may have latent, complex, or highly technical characteristics that consumers cannot understand fully or value accurately. In other cases, power imbalances or other structural imbalances may undercut or frustrate efforts to obtain more comprehensive and accurate information. Disclosure mandates represent attempts to correct for market failures by closing information gaps. Examples of such mandates include food and drug labeling requirements and truth-in-lending rules. Other kinds of information flows reflect or enable systematic bias or favoritism that society views as normatively undesirable. For example, discrimination in housing, lending, and employment violates foundational commitments to equal opportunity, and insider trading and false advertising undermine confidence in the overall fairness of markets. Modern systems of economic regulation typically include numerous rules targeting both the undesirable conduct and the information flows that facilitate it.

From a regulatory perspective, the problem with infoglut is that it makes information-forcing rules easy to manipulate and information-blocking rules easy to evade. Both information-forcing and information-blocking rules are premised on the assumptions that information is scarce and costly to obtain and convey and that regulatory mandates therefore can produce meaningful changes in the nature and quality of information available to economic actors. Information-forcing rules additionally presume that consumers and investors are in a position to benefit from required disclosures. Under conditions of infoglut and pervasive intermediation, all of those assumptions fail. Recall from Chapter 3 that infoglut—or information overload resulting from unmanageable, mediated information flows—creates a crisis of attention.22 The massively intermediated, platform-based environment promises solutions, offering network users tools and strategies for cutting through the clutter. At the same time, however, it enables information power to find new points of entry outside the reach of traditional anti-distortion regimes.

Consider first the problem of how to conduct meaningful antidiscrimination regulation and enforcement under conditions of infoglut and pervasive intermediation. To enforce existing antidiscrimination laws effectively, the various agencies with enforcement authority need the ability to detect and prove discrimination, yet that task is increasingly difficult when decisions about access to credit, employment, and housing are made via criteria deeply embedded in complex algorithms used to detect patterns in masses of data. Markers for protected class membership can be inferred with relative ease and near-impunity from other, seemingly neutral data. Data-intensive methods also may seem naturally to support arguments about legitimate business justifications for decisions denying access or offering it only on unfavorable terms.23

In an era when decision-making is mediated comprehensively by so-called big data, regulators seeking to fulfill antidiscrimination mandates must learn to contend with the methods by which regulated decisions are reached—with data and algorithms as instrumentalities for conducting (regulated) activity. In general, the existing regulatory toolkit is poorly adapted for scrutinizing data-driven algorithmic models. One rudimentary gesture toward algorithmic accountability is the Federal Reserve’s Regulation B, which lists criteria for the Consumer Financial Protection Bureau (CFPB) to use in determining whether credit scoring systems are “empirically derived [and] demonstrably and statistically sound.”24 The list relies heavily on “accepted statistical principles and methodology,” but leaves unexplained what those principles and methods might be and how they ought to translate into contexts involving automated, predictive algorithms with artificial intelligence or machine learning components.
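
One conventional, if crude, starting point for the scrutiny described here is an outcome audit of the model's decisions. The sketch below uses invented approval counts and the familiar four-fifths benchmark, which is an illustrative rule of thumb drawn from employment-discrimination practice rather than anything Regulation B itself prescribes.

```python
# Illustrative outcome audit of a scoring model: compare approval rates across
# groups using an adverse-impact ratio. Counts are invented; the 0.8 benchmark
# is a common rule of thumb, not a requirement of Regulation B.

def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to group B's approval rate."""
    return (approved_a / total_a) / (approved_b / total_b)

ratio = adverse_impact_ratio(approved_a=180, total_a=400,    # protected group
                             approved_b=600, total_b=1000)   # comparison group
print(f"approval-rate ratio: {ratio:.2f}")   # 0.75
if ratio < 0.8:
    print("flag for further review: disparity exceeds the four-fifths benchmark")
# An outcome test of this kind says nothing about why the disparity arises
# (proxy variables, training data, or legitimate underwriting factors), which
# is exactly the gap the existing regulatory toolkit struggles to close.
```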

Infoglut and pervasive intermediation also impair the ability to conduct effective consumer protection regulation. Consumer protection regulation typically involves both information-forcing and information-blocking strategies. Regulators seek to require disclosure of material information about quality and other nonprice terms, and they also attempt to prevent marketing practices that are deceptive or that prey upon vulnerable populations. In the era of information overload, however, more comprehensive disclosures do not necessarily enhance understanding. Market researchers and consumer advocates have long recognized that the increasing amounts of information associated with even basic consumer products such as prepackaged foods and over-the-counter pharmaceuticals can be bewildering. More complex goods and services often are amenable to versioning in ways that embed material nonprice terms—for example, access to technical support services, different processor speeds, amounts of digital storage, and so on—within price discrimination frameworks.25 Additionally, the traditional regulatory focus on the content of disclosures is far too limited. The way that choices are presented also matters. Techniques for nudging consumer behavior become even more powerful in platform-based, massively intermediated environments, which incorporate “choice architectures” favoring the decisions that platform or application designers want their users to make.26

Disclosures and choice architectures, moreover, are far from the only issues of concern to consumers. As Chapters 2 and 3 described, platform-based, massively intermediated online environments raise profound economic justice questions. Predictive profiles can convey valuable information about consumers’ priorities and reservation prices. Targeted advertising can ensure that consumers see only certain options, and cutting-edge behavioral microtargeting techniques that identify points of vulnerability can be used to shape and refine targeting strategies. Scholars and social justice advocates have begun to draw attention to the linkages between the new types of pattern-based discrimination enabled by data-intensive profiling and the emergence of a seemingly permanent economic underclass.27 Current consumer protection paradigms framed in terms of notice and choice are ill-suited to address these issues.

In similar fashion, infoglut and pervasive intermediation create barriers to effective financial regulation. As Chapter 1 described, financial markets have become increasingly complex in the networked information era. Networked information and communication technologies have greatly increased overall levels of access to investment-related information, and yet access also is mediated by a growing number and variety of information providers. The resulting increase in differential access to market information has prompted market regulators to push for more regularized transparency to investors in traditional areas of investor concern—hence, for example, the Securities and Exchange Commission’s (SEC) Regulation FD, which attempts to place all investors on an equal footing with regard to major corporate announcements and disclosures by publicly traded companies.28

Contemporary investors, however, have access to such a wealth of information and such a variety of investment vehicles that an equally pressing problem concerns how to make sense of it all. On that question, financial regulators have been silent. As we saw in Chapter 1, datafication and platformization have profoundly changed financial activity, disintermediating traditional points of exchange and catalyzing the emergence of markets for new, synthetic products invented by sophisticated institutional investors and traded amongst themselves. Putting investors on an equal footing with respect to data processing, analytic capacity, and access to private trading venues and investment vehicles is far less feasible—and many new financial instruments are so complex that they defy efforts to describe the associated risks.29 Increasingly, it has begun to seem as though there is one set of rules for the ordinary consumer and institutional investors serving that consumer and a very different set for the financial cognoscenti, and it also seems beyond dispute that piecemeal reforms simply encourage well-resourced investors to seek new opportunities for regulatory arbitrage.

Last and importantly, although economic regulators traditionally have not concerned themselves with election integrity, in the platform-based, massively intermediated information economy, the domains of economic and election regulation have converged. As Chapter 3 described, platform-based information environments optimized for behavioral microtargeting and user engagement have enabled new strategies for voter manipulation that exploit infoglut to sow and magnify distrust and discord. By and large, election regulators use the same information-forcing and information-blocking strategies that consumer protection regulators and financial regulators do. Candidates for office typically must make certain disclosures, and campaign finance laws set limits on campaign funding and political advertising. The ease with which campaigns of election manipulation have unfolded demonstrates that conditions of infoglut and pervasive intermediation disrupt those strategies as well. Proposals to double down on the same methods—for example, by devising new transparency requirements and extending prohibitions on advertising purchases by foreign entities to online ads—are vanishingly unlikely to achieve their stated aims.30

As with platform power, European policymakers have been far more open to comprehensive study of the challenges that infoglut and pervasive intermediation present to regulatory regimes that incorporate anti-distortion rules. The Commission’s ongoing research project on online platforms includes both technical research on mechanisms for achieving algorithmic accountability and exploration of possible new regulatory mechanisms for ensuring fairness and transparency for both individual and business users of online environments.31 In the United States, media attention to such initiatives has been notable chiefly for its absence, and the logics of productive appropriation and innovative and expressive immunity that Chapters 2 and 3 described have effectively foreclosed comparable domestic efforts. Even policy initiatives widely heralded as both progressive and too adventurous to be politically feasible lean heavily on traditional methods and gesture only timidly, if at all, toward the problem of unaccountable algorithmic intermediation.32

From Discrete Harms to Systemic Threats

A final major concern of economic regulation involves collective health and safety harms arising as byproducts of economic activity. Such harms are a long-standing concern, but new informational capabilities have gradually reshaped ways of both seeing harms and formulating possible responses. For example, at the turn of the twentieth century, the harms that concerned regimes of food and drug and workplace safety regulation were relatively clear and concrete—deaths caused by adulterated foods and medicines, dismemberments caused by industrial machinery, and the like. By the mid-twentieth century, policymakers had begun to recognize other types of complex and emergent harms—for example, environmental pollution caused by industrial waste and diseases caused by carcinogenic or teratogenic chemicals. They also had assimilated lessons from the 1929 stock market crash and the Great Depression about threats to the stability of financial systems.33 As societal understandings of harm evolved to encompass a wider range of systemic effects of industrial and informational development, regulatory methods evolved to include techniques for measuring, communicating about, and responding to those effects. Yet even as new informational resources and capabilities have oriented regulators toward systemic harms to be realized in the future—toward the problem of systemic threat—they also have exposed the extent to which regulatory models are politically constructed.

Systemic threats are accessible—to regulators, affected industries, and members of the public—only through modeling and representation, and techniques of modeling and representation are not neutral. Models depend on assumptions about variables and parameters that are open to contestation. Representation of a systemic threat as more or less threatening also requires the use of narratives and metaphoric frames to communicate the likelihood and magnitude of impending systemic changes. As threatened future harms have become more abstract, diffuse, and technologically complex, disputes about appropriate regulatory responses have become struggles for control over both modeling and representation. The contemporary condition of infoglut exacerbates those struggles. Finding firm regulatory footing amid a welter of conflicting models, frames, assertions and opinions—and, more recently, amid a growing torrent of misinformation, disinformation, and simulated citizen engagement—has become increasingly difficult.

In terms of regulatory methodology, the need to contend with systemic threats creates two problems. Both are well recognized within the legal literature on regulation, but neither has been systematically conceptualized as a potential lever for institutional change in response to shifting modes of economic and technological development.

The first problem arises because threats of future harm are inevitably probabilistic. Methods for modeling and assessing a range of possible future scenarios now inform regulatory approaches in fields ranging from environmental protection to financial regulation, but the shift to a probabilistic sensibility underscores a tension between two very different approaches to evaluating asserted dangers. One approach, based on the concept of risk, emphasizes formal modeling and quantification; the other, based on uncertainty, holds that not all factors bearing on the probability of future harm can be modeled and quantified.34 The discourse of risk is conceptually crisper than that of uncertainty, and supplies a way of both describing probabilistic future harms and quantifying—and sometimes pricing—acceptable risk thresholds. For that reason, it has won influential adherents in government and business circles.35 Sometimes, however, experts in the relevant technical fields argue that risk modeling is insufficient. For example, vulnerability to data security breaches depends in part on technical configuration, in part on organizational configuration, and in part on human error, and these factors are heterogeneous and incommensurable. Computer security experts therefore have developed threat modeling protocols that explicitly incorporate both risk and uncertainty.36

The tension between risk-based and uncertainty-based approaches to evaluating systemic threats is political as well as epistemological. When risk discourses dominate threat modeling, they can become ways of black-boxing areas of uncertainty, displacing contradictory or otherwise inconvenient scientific authority, and ratifying existing distributions of resources.37 Reliance on risk assessment and risk management discourses also can induce unwarranted complacency and encourage excessive risk-taking. The financial instruments and transactions that produced the economic bubble of the 2000s and the ensuing crash incorporated extensive risk calculations, but the calculations were based on self-serving assumptions and did not model the scenarios that could lead to systemic collapse. The sheer level of complexity also introduced new uncertainties and new possibilities for market failure.38 Post-crash disputes about the Federal Reserve’s protocols for administering stress tests to financial institutions have been disputes about precisely whether formal risk assessment tools can adequately model vulnerability to future catastrophic collapse.39
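
The modeling failure described here can be reproduced in miniature. In the toy simulation below, which uses invented default probabilities and an invented correlation structure, a pool of loans looks extremely safe when defaults are assumed independent, but introducing a single shared factor fattens the tail dramatically.

```python
# Toy simulation, with invented parameters, of how a correlation assumption
# changes the tail risk of a loan pool. Not a model of any actual instrument.
import random

random.seed(0)
N_LOANS, P_DEFAULT, TRIALS = 100, 0.05, 20_000

def simulate_losses(correlated):
    """Return the number of defaults in the pool for each simulated trial."""
    losses = []
    for _ in range(TRIALS):
        if correlated:
            # A shared "economic conditions" draw nudges every loan's default
            # probability up or down together.
            shared = random.gauss(0, 1)
            p = min(max(P_DEFAULT + 0.04 * shared, 0.0), 1.0)
        else:
            p = P_DEFAULT
        losses.append(sum(random.random() < p for _ in range(N_LOANS)))
    return losses

for label, correlated in (("independent", False), ("correlated", True)):
    losses = simulate_losses(correlated)
    tail = sum(loss >= 15 for loss in losses) / TRIALS
    print(f"{label:11s} P(at least 15 of 100 loans default) ~ {tail:.4f}")
# Under independence the 15-default tail is vanishingly rare; with a shared
# factor it becomes an event that a prudent model would have to price.
```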

If regulators are to develop effective tools for avoiding systemic breakdowns, comprehensive engagement with uncertainty-based threat modeling protocols is essential, but even good threat modeling protocols cannot tell regulators how to resolve the second problem, which arises when regulatory responses to systemic threats are crafted. The reorientation toward systemic threats underscores a tension between two very different approaches to identifying and analyzing the trade-offs that such models inevitably present: a cost-benefit approach, which assesses proposed regulations largely in terms of their concrete, monetizable impacts, and a precautionary approach, which holds that regulators seeking to avoid foreseeable, significant harms should err on the side of caution and should consider a broader range of factors relating to human wellbeing. To its adherents, cost-benefit analysis promises a neutral, rational discourse for evaluating regulatory proposals and for charting a course between the Scylla of regulatory capture and the Charybdis of bureaucratic inefficiency. Skeptics charge that cost-benefit analysis persistently undervalues threatened harms that are diffuse, cumulative, and difficult to describe in monetized, present-value terms and persistently overstates the costs of compliance with new obligations.40 Additionally, experts in the behavior of complex systems have begun to urge more careful attention to tipping points—points at which gradual change suddenly produces a discontinuous jump.41 Within a precautionary paradigm, it is easier to justify interventions designed to prevent the system from tipping.
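
The skeptics' complaint about present-value framing can be illustrated with a short calculation using invented figures: a harm of fixed size recedes toward insignificance as the discount rate and time horizon grow, which systematically disadvantages diffuse, long-horizon harms in cost-benefit comparisons.

```python
# Worked illustration, with invented figures, of how discounting shrinks a
# distant harm. A $1 billion harm expected in 50 or 100 years is weighed
# against a $40 million present-day compliance cost.

HARM = 1_000_000_000
COMPLIANCE_COST_TODAY = 40_000_000

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for years in (50, 100):
    for rate in (0.01, 0.03, 0.07):
        pv = present_value(HARM, rate, years)
        verdict = "regulate" if pv > COMPLIANCE_COST_TODAY else "do not regulate"
        print(f"{years:3d} yrs at {rate:.0%}: PV of harm = ${pv:,.0f} -> {verdict}")
# At a 7 percent discount rate the same billion-dollar harm is "worth" roughly
# $34 million after 50 years and under $1.2 million after 100, flipping the
# verdict even though the harm itself has not changed.
```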

The tension between cost-benefit and precautionary approaches to countering systemic threats also is political as well as epistemological. As with conventions for modeling and pricing risk, conventions for analyzing regulatory trade-offs can become ways of black-boxing the harshest consequences of systemic failure and justifying regulatory deference to the self-interested decisions of private economic actors. Cost-benefit analysis is associated with an era of notable deregulation, and in practice it tends to be inflected by a distinctively neoliberal vision of regulatory minimization that seeks to downplay collective or intangible harms arising from market activities. As Frank Ackerman, Lisa Heinzerling, and Rachel Massey show, many environmental regulations now regarded as foundational would not have been adopted under the approach to cost-benefit analysis currently ascendant.42

Contestation between adherents of cost-benefit and precautionary approaches—and of the different regulatory ideologies that each has come to signify—has emerged as a defining feature of the information-era regulatory landscape. Environmental law is a paradigmatic example: it is fundamentally concerned with systemic threats accessible only via information-intensive modeling. Over the past half century, growing awareness of the acute systemic threat presented by climate change has catalyzed calls for aggressively precautionary responses—and equally aggressive pushback by threatened interests.

Similar dynamics now infuse many other regulatory domains. Federal new drug approval processes are precautionary in character, but the regulatory stance toward software in medical devices has been different. The now-discredited separation between commercial and investment banks, instituted after the Great Depression to protect the financial system against the risk of catastrophic failure, was a precautionary safeguard, and in the wake of the global financial collapse of 2008, some banking and finance scholars have proposed reintroducing structural safeguards into financial markets.43 European data protection regulators have attempted to maintain a generally precautionary stance toward personal data protection, and some scholarly interventions call for explicit adoption of the precautionary paradigm. In the United States, where cost-benefit analysis and the logics of innovative and expressive immunity are more deeply entrenched, some participants in policy and scholarly debates about information privacy have begun to deploy environmental analogies as they seek to explain whether and how to regulate data harvesting economies more comprehensively.44 Meanwhile, expressing different regulatory and ideological sensibilities, the financial and internet industries and libertarian and neoliberal tech policy pundits have advanced a carefully crafted narrative that paints precautionary regulation as rigid, “Mother, may I?” policymaking that threatens to stifle both liberty and economic growth.45

Even when regulators can muster the will to make difficult trade-offs, however, the way to do so effectively may be less clear. In part the problem is methodological. In particular, regulatory schemes that rely on fixed targets for harmful private-sector activities—for example, dosage limits for prescription drugs, chemical contaminant thresholds for consumer products and industrial byproducts, and particulate emissions thresholds for factories and automobiles—sit in growing tension with accumulated learning about the behaviors of complex, networked systems. So, for example, existing regimes of threshold-based environmental regulation have failed to avert the continuing degradation of pollination networks, water systems, and marine ecologies, and fixed thresholds for capital adequacy and data deidentification have proved elusive. Other important dimensions of the problem concern institutional design. Countering systemic threats effectively requires more than just techniques for modeling complex, dynamic processes. As we are about to see, it also requires new thinking about implementation of such techniques and about oversight modalities.
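
Before turning to institutional design, the threshold problem can be made concrete with a toy model in which every individual source complies with its fixed limit and the system nonetheless tips. All numbers below are invented; the point is structural.

```python
# Toy illustration of per-source threshold regulation versus a system-level
# tipping point. Limits, loads, and the tipping threshold are invented.

PER_SOURCE_LIMIT = 10.0        # maximum discharge allowed per facility
SYSTEM_TIPPING_POINT = 120.0   # aggregate load at which the system flips state

def assess(loads):
    """Return (all sources compliant?, total load, system state)."""
    compliant = all(load <= PER_SOURCE_LIMIT for load in loads)
    total = sum(loads)
    state = "degraded" if total > SYSTEM_TIPPING_POINT else "healthy"
    return compliant, total, state

# Twelve facilities, each comfortably within its individual limit.
print(assess([9.5] * 12))   # -> (True, 114.0, 'healthy')

# Two more compliant facilities arrive, and the aggregate load tips the system.
print(assess([9.5] * 14))   # -> (True, 133.0, 'degraded')
```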

Enacting Governmentality: Evolving Practices for Oversight and Accountability

As the story of the Volkswagen defeat device illustrates, a regulatory state optimized for the informational economy requires not only new rubrics for making sense of the activities being regulated but also new institutional mechanisms for defining obligations and overseeing compliance. It is no coincidence that settled ways of thinking about the appropriate modalities of administrative lawmaking have come under challenge with increasing frequency over the past half century. And it is no coincidence that the regulatory problems of the emerging informational economy have proved particularly disruptive. Information-intensive fields of economic activity—telecommunications, privacy, health care delivery, and finance—have become especially active sites of experimentation with new institutional models.

Emergent institutional models share some important family resemblances. They are procedurally informal; they emphasize ongoing compliance with performance-based standards intended to guide complex, interdependent sets of practices; they are mediated by expert professional and technical networks that define relevant standards and manage compliance systems; and they are increasingly financialized. For all of these reasons, however, they also create new accountability challenges and afford new opportunities for powerful actors to shape institutional design.

Like the ongoing transformations in the judicial system studied in Chapter 5, the ongoing transformations within the regulatory state reflect the powerful shaping influence of neoliberal governmentality. Newly powerful information-economy actors and mushrooming professional constituencies dedicated to auditing and compliance monitoring have engaged in highly creative forms of institutional entrepreneurship, developing new frameworks for self-regulation and self-certification while resisting less congenial forms of institutional innovation. The ascendancy of neoliberal managerialism also has deterred legislative action to resolve the most urgent problems of regulatory mismatch, reinforcing political gridlock and investing structural obsolescence with deregulatory virtue.

The Regulatory State as Norm Entrepreneur

In the United States, the institutional model established by the Administrative Procedure Act of 1946 contemplates two general types of regulatory activity: rulemaking and adjudication. According to the modernist, legal process-based vision that animated the model at its creation, the two types are opposites: Rules of general application are to be promulgated in orderly, quasi-legislative proceedings and later applied to specific disputes in orderly, quasi-judicial proceedings. For quite a long time, however, it has been evident that rulemaking and adjudication represent endpoints on a continuum and that a great deal of activity occurs with markedly less formality in the space between them.46 The new informality is a particularly striking feature of regulatory oversight of highly informationalized activity. Regulators have worked to develop new methods of nudging and cajoling regulated entities toward more public-regarding behavior, while regulated entities have worked to shape the new informality to their own ends.

Over the last several decades, formal agency policymaking processes have become progressively hobbled by breakdown and interest group capture. The suite of rulemaking procedures available to administrative agencies is widely acknowledged to be insufficiently nimble for many types of regulatory problems created by networked information technologies and processes. Internet business models in particular evolve so rapidly that a proposed rule can be obsolete before the time period for submitting comments has closed (or even before the printed notice of proposed rulemaking has been published).47 In addition, processes initially envisioned as neutral fora for consideration of expert evidence have come to reflect the dominating influences of interest group participation and normative deep capture. Agencies too suffer the effects of infoglut; notices of proposed rulemaking on controversial issues can elicit many thousands of submissions—including, most recently, automated comments submitted using faked names and email addresses.48 We saw in Chapter 3 that for-profit actors supply regulators with a variety of information subsidies. One way for regulators to cut through the clutter is to focus on the relatively well-researched submissions by trade associations representing affected industries.49 Partly for these reasons, and partly because many information-age regulatory problems push the boundaries of existing, often decades-old statutory regimes, issued rules often bog down for years in legal challenges.

Within the space created by the limited utility and efficacy of rulemaking, scholars who study administrative governance have chronicled the emergence of other, relatively unstructured and often substantially privatized processes through which agencies make policy. Many U.S. federal agencies now routinely issue “guidances” that are intended to signal regulated entities about their interpretations of governing statutes and rules and their likely enforcement stances. Although courts are not required to defer to agency guidances, they may give them substantial weight. Some agencies also routinely publicize staff interpretations that are characterized as nonbinding but that have enormous practical impact on the conduct of regulated entities.50 Finally, regulated entities have enjoyed new types of informal input into agency policymaking. Increasingly, agencies make policy in ways that incorporate privatized information subsidies openly and directly, engaging regulated entities in dialogues intended to produce consensus on industry “best practices” and structuring some oversight functions as public-private partnerships.51

In scholarly and policy discourses, the turn toward privatization in policymaking has acquired a name—the “new governance”—and a set of justifications that express what Jodi Short has called the “paranoid style” in regulatory reform: an intense worry about the risks of state coercion and/or bumbling, combined with relative insensitivity to the ramifications of private power. The new governance paradigm distills neoliberal governmentality into “a regulatory reform discourse that is antithetical to the very idea of government regulation.”52 Particularly in highly informationalized domains, informal, coregulatory processes may produce regulatory standards more reflective of current technological practice and therefore less costly to implement and administer. But coregulatory processes also can emphasize inside baseball over participation by a broad range of affected interests, and at their most lopsided risk devolving into self-regulation with minimal oversight. Over time, such approaches have produced significant devolution of regulatory authority to the private sector.53

Even as agency policymaking activities devolve increasingly toward informality, enforcement activity is becoming more rule-ish. A leading example of this (p.188) phenomenon is the FTC’s practice of lawmaking through adjudication. In the domain of information privacy, the FTC has used its enforcement authority vigorously but unconventionally, filing unfair and deceptive acts and practices actions and then negotiating and publicizing consent decrees that include suites of ongoing compliance requirements. According to Daniel Solove and Woodrow Hartzog, the corpus of consent decrees constitutes a new common law jurisprudence of unfair and deceptive conduct.54 Institutionally speaking, the FTC’s enforcement posture reflects an especially complex calculus. The agency has no general Administrative Procedure Act rulemaking authority and no specific authority to issue general information privacy rules, so it relies on its consent decree practice to fill the regulatory gaps.55 But the FTC is not the only example of an agency creatively using its enforcement powers to engage in gap-filling and norm entrepreneurship on information-economy issues. For example, amidst an ongoing dispute over its jurisdiction to promulgate net neutrality regulation that spanned nearly a decade, the FCC used both its general enforcement authority and its merger review authority to advance net-neutrality-related goals.56

The turn to rule-ishness in enforcement, however, also incorporates a significant and largely unheralded privatization component. As Chapter 5 described, the consent decree has become an important vehicle for the managerial reconfiguration and privatization of dispute resolution, and that is true in regulatory enforcement contexts as well. So, for example, boilerplate provisions in the FTC’s data privacy and data security consent decrees impose new monitoring and reporting obligations that as a practical matter demand new managerial competencies. Consent decrees in agency enforcement actions also follow the temporal arc that Chapter 5 sketched: newly approved decrees are announced with great fanfare, but their longer term efficacy is less clear.57 In some regulatory domains, the increasing reliance on consent decrees also raises concerns about the accessibility of law that mirror those created by the shift to privatized and outsourced dispute resolution. Although privacy practitioners read the FTC’s privacy consent decrees carefully and regard published decrees as quasi-precedential, some entities, such as the Equal Employment Opportunity Commission, do not publish their consent decrees, so interested parties cannot easily monitor evolving enforcement practices.58

More informal enforcement strategies also reflect the shaping influence of private power. Ian Ayres and John Braithwaite coined the term “responsive regulation” to describe a range of extrajudicial sanctions available to regulators, beginning with persuasion and escalating through formal warnings to fines and other penalties.59 U.S. regulatory agencies make extensive use of the responsive regulation toolkit. For example, even when it chooses not to bring litigation, the SEC from time to time issues “reports of investigation” that it styles as providing it with an opportunity to “clarify” and “amplify” its views about various industry practices, and the FTC uses persuasion, investigation, threats of enforcement action, and the threat of fines for violations of existing orders to shape the ongoing evolution of industry best (p.189) practices regarding information privacy.60 Similar methods have long played central roles in European regulatory practice, which places relatively lower emphasis on litigation and relatively greater emphasis on other strategies.61 The success of responsive regulation methods, however, depends importantly on cooperation by regulated entities, and consistent resistance can force regulatory oversight into a holding pattern, unable to make meaningful headway in shifting patterns of industry behavior. As we saw in Part I, platforms and information businesses in particular have effectively stonewalled regulators seeking more detail about their information harvesting and processing practices.

The Regulatory State as Auditor

A more telling barometer of institutional disruption is the increasing prominence of regulatory activities that do not seem to fall on the rulemaking-to-enforcement continuum at all. William Simon identifies a set of emergent regulatory practices that he characterizes as “post-bureaucratic”: practices based on proactive planning rather than reactive rulemaking and on compliance monitoring rather than reactive enforcement.62 Notably, the new regulatory modalities are intensively informational and technical in character. From a political economic standpoint, they are not so much post-bureaucratic as they are postindustrial, products of the “control revolution” that began with the introduction of automated information systems into industrial-era factories and businesses and continued with the increasing informationalization of economic development.63 As implemented, they are also intensively managerial in orientation. They rely heavily on regulatory competencies such as auditing and technical standard-setting that involve specialized corps of professional experts and pose new technical challenges for public accountability.

Compliance monitoring and reporting play increasingly important roles in the contemporary regulatory landscape. Many regulatory schemes, including most notably those governing the financial system and the various markets for publicly traded securities, mandate periodic reporting on various matters. In other areas, including most notably consumer privacy, consent decrees requiring periodic reporting are an increasingly common component of enforcement practice. Compliance monitoring and reporting may entail demonstrated satisfaction of highly technical performance targets. For example, entities covered under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule that wish to release data sets to the public must demonstrate that the data sets have been deidentified in a way that ensures sufficiently low risk of reidentification. Compliance monitoring and reporting also frequently involve audits conducted by specialized, private-sector professionals. Professional financial auditors play central roles in the modern system of financial regulation, and privacy and data security audits are becoming increasingly routine.64
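
The flavor of such requirements can be conveyed with a deliberately simplified sketch. The Python fragment below, hypothetical in every particular, computes a k-anonymity score (the size of the smallest group of records sharing the same combination of quasi-identifiers) and withholds release when that score falls below a chosen threshold. It is not the method the HIPAA Privacy Rule prescribes, which instead contemplates either removal of enumerated identifiers or a case-specific expert determination; it merely illustrates the kind of quantitative benchmark around which deidentification assessments and audits are organized.

    from collections import Counter

    def k_anonymity(records, quasi_identifiers):
        """Size of the smallest group of records sharing the same quasi-identifier values."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return min(groups.values())

    # Hypothetical release policy: every quasi-identifier combination must be
    # shared by at least K individuals before the data set may be published.
    records = [
        {"zip3": "871", "age_band": "40-49", "diagnosis": "J45"},
        {"zip3": "871", "age_band": "40-49", "diagnosis": "E11"},
        {"zip3": "871", "age_band": "40-49", "diagnosis": "I10"},
        {"zip3": "902", "age_band": "30-39", "diagnosis": "J45"},
        {"zip3": "902", "age_band": "30-39", "diagnosis": "I10"},
    ]
    K = 3  # illustrative threshold, not a regulatory figure
    if k_anonymity(records, ["zip3", "age_band"]) < K:
        print("withhold release: reidentification risk too high under this policy")
    else:
        print("release permitted under this policy")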

A second strand in the emerging narrative of professionalized, informationalized regulation involves algorithmically mediated compliance with regulatory mandates. (p.190) As Kenneth Bamberger has detailed, regulatory regimes relying on information-systems mandates have become commonplace in a variety of information-intensive fields. Notably, most of the research and development activity surrounding algorithmic enforcement and software audit originates in the private sector, where so-called “governance, risk, and compliance” technologies and services constitute a large and growing market.65 Also notably, some industries have developed similar technologies and systems absent any regulatory mandate to do so. For example, as we saw in Chapter 4, large platform companies generally rely on automated detection and filtering systems to avoid liability for facilitating copyright infringement.
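
A stylized example conveys what such privately developed filtering systems do, albeit at a level of abstraction far removed from production systems, which rely on perceptual fingerprinting rather than exact hashes. Everything in the fragment below, from the reference index to the decision rule, is hypothetical.

    import hashlib

    # Hypothetical index mapping content fingerprints to claimed works. Real
    # platform systems match perceptual fingerprints; exact hashing is used
    # here only to keep the illustration short.
    REFERENCE_FINGERPRINTS = {
        hashlib.sha256(b"licensed master recording").hexdigest(): "Example Song (Label X)",
    }

    def screen_upload(data: bytes) -> str:
        """Block an upload whose fingerprint matches a claimed reference work."""
        match = REFERENCE_FINGERPRINTS.get(hashlib.sha256(data).hexdigest())
        return f"blocked: matches '{match}'" if match else "published"

    print(screen_upload(b"licensed master recording"))  # blocked
    print(screen_upload(b"original user content"))      # published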

A third type of regulatory activity that is increasingly common involves technical standard-setting. Both domestically and internationally, governments have long been involved in standards policy. In the United States, the National Institute of Standards and Technology (NIST) was established in 1901 to facilitate the development of measurement conventions that would enhance the global competitiveness of U.S. manufacturing and transportation industries. On a global scale, the International Telecommunication Union (ITU) has been active since 1865 in setting standards for telegraph interoperability.66 In the digital era, however, technical standards have gone mainstream: they are core components of many regulatory regimes and appear as agenda items in the work of multiple agencies and transnational entities. Here too, privatization plays an important role. In the United States, federal law mandates public-private collaboration in standards policy. Agencies must use “technical standards that are developed or adopted by voluntary consensus standards bodies” unless that course of action “is inconsistent with applicable law or otherwise impractical.”67 NIST coordinates agency interaction with private standards bodies, and its mission has expanded to encompass everything from climate change measurement and standards for alternative energy technologies to metrics for food and drug safety and data privacy and security standards. To similar effect on a global scale, the ITU (now a specialized agency of the United Nations) has seen its mission expand to encompass telephone, radio, and television broadcast technologies and internet telephony protocols, and it has been joined by an alphabet soup of other standards bodies and initiatives that we will consider more carefully in Chapter 7.

The new regulatory modalities offer techniques for structuring, supervising, and certifying information flows. They therefore have at least the potential to address information-economy regulatory problems more effectively than older, command-and-control modalities. Whether they can fulfill that potential, however, depends on the details of their institutional implementation.

Each of the regulatory modalities just described encompasses many different possible implementation approaches, but, as noted previously, there are often pervasive mismatches between regulatory goals and necessary methods. Regulators and the public may have particular results in mind, but adequate performance in (p.191) the realm of pollution control or financial accounting or data security or antibiotic resistance, for example, cannot simply be a matter of meeting discrete, fixed targets. To be effective, standards of adequacy must apply dynamically and must incorporate consideration of a wide and heterogeneous assortment of variables that bear on the behaviors of complex systems.68 In addition, such standards generally will be technically complex, requiring special kinds of expertise to decipher.

For these reasons, the new postindustrial regulatory modalities tend to defy standard ways of thinking about regulatory accountability.69 Terminology developed by Lauren Willis to describe new types of information products and services is useful as a way of summarizing the difficulty: traditional agency rules have dashboard complexity—they may consume many pages in the Code of Federal Regulations—but they are not especially complex under-the-hood.70 Their provisions are developed via open proceedings to which multiple parties have input and their key terms are defined to supply publicly available points of common reference. The new regulatory modalities, in contrast, have dashboard simplicity but are complex under-the-hood. Reporting conventions and standards-related nomenclature can make it easy to know at a glance whether a regulated entity has met performance benchmarks or produced technically compliant products or services. The considerations and judgments that those results reflect, however, are harder to translate into forms suitable for general public understanding.

Other types of opacity reflect the private-sector origins of many performance assessment standards and practices. Consensus regarding the requirements for satisfactory performance may develop among members of a professionalized auditor class, but regulators and members of the public typically lack good access to the processes by which private-sector professionals hold themselves accountable. In rapidly changing fields, agreed standards may be nonexistent or disputed, and so the idea of “best practices”—whether for regulated entities or for the auditors purporting to supervise them—may mean very little.71 In addition, private-sector compliance regimes may involve asserted trade secrets. Statutory open government regimes typically contain trade secrecy exceptions, so they are poorly adapted to ensuring transparency where a significant privatization component is involved.72

Two well-known examples of how things can go badly wrong in highly professionalized regulatory domains come from the financial context. Professional consensus on so-called “generally accepted accounting principles” (GAAP) and on criteria for issuing and revising credit ratings proved inadequate to constrain rapid changes in accounting practice that led ultimately to the 2001 bankruptcy of billion-dollar energy company Enron. The Enron scandal exposed the need for a mechanism to ensure the accountability of those providing audit and credit rating services to publicly traded companies to limit moral hazard and self-dealing.73 More recently, the global financial crisis of 2007–2008 exposed the inadequacy of mechanisms designed to ensure that large financial institutions participating in capital markets maintained adequate capital reserves. The applicable standards relied (p.192) on banks and credit rating agencies to conduct their own assessments of capital adequacy and creditworthiness using complex and often proprietary algorithms, and many components of the system were not subject to capital-adequacy requirements at all.74

Both crises triggered increased oversight, but the regimes that resulted still have been criticized for deferring too greatly to accounting and finance professionals. The Sarbanes-Oxley Act of 2002 created the Public Company Accounting Oversight Board (PCAOB) to oversee compliance with public accountancy standards, but the accounting profession retained its authority over the substance of the GAAP, and the processes by which regulators and industry representatives negotiate capital requirements remain largely opaque to the public.75 The Credit Rating Agency Reform Act of 2006 imposed a registration requirement on credit rating agencies, but that requirement failed to constrain the practice of issuing inflated ratings, including investment-grade ratings for securitizations of subprime mortgages in the run-up to the 2007–2008 crisis.76 Finally, the Dodd-Frank Act of 2010 imposed additional requirements on credit rating agencies and gave federal regulators authority to prescribe minimum capital requirements for entities that engage in swap transactions, but as of this writing, the effects of those reforms are unclear and their future is in jeopardy.77

Automation of critical compliance functions adds another layer of opacity that inheres in the code through which compliance is measured and enforced. Automated processes have obvious efficiency advantages, but such processes may not align well (or at all) with applicable legal requirements that are couched in shades of gray.78 As the example of the Volkswagen defeat device illustrates, the push to take human judgment out of the enforcement loop also raises a variety of difficult questions about how to define and audit compliance. Cary Coglianese and Jennifer Nash have observed that reliance on encoded, algorithmically enforced performance standards encourages gaming through a process they analogize to “teaching to the test”; firms have incentives to achieve a passing grade but not necessarily to do so in a way that fulfills the purpose the test was meant to serve.79 As noted at the start of this chapter, regulators charged with overseeing emissions-control regimes now must learn to audit software if they wish to do their jobs effectively.
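
A schematic fragment, hypothetical code rather than anything recovered from an actual vehicle, illustrates why auditing measured outcomes is not enough once compliance itself is encoded: the controller below behaves well whenever it recognizes that it is being tested.

    # Hypothetical emissions controller illustrating "teaching to the test."
    SCRIPTED_TEST_PROFILE = [0, 15, 32, 50, 35, 0]  # invented laboratory speed trace (km/h)

    def looks_like_test_cycle(speed_trace):
        """Guess whether the current drive matches the scripted laboratory cycle."""
        return speed_trace[: len(SCRIPTED_TEST_PROFILE)] == SCRIPTED_TEST_PROFILE

    def emissions_mode(speed_trace):
        # The encoded benchmark is met only when the software believes it is being measured.
        return "low-emission calibration" if looks_like_test_cycle(speed_trace) else "performance calibration"

    print(emissions_mode([0, 15, 32, 50, 35, 0]))   # audited test run: compliant behavior
    print(emissions_mode([0, 22, 47, 68, 80, 75]))  # ordinary driving: benchmark ignored

Nothing in the reported test result reveals the divergence; only inspection of the code, or measurement under conditions the controller cannot recognize as a test, would.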

Mastering the processes by which technical standards are developed also requires both new kinds of regulatory expertise and new public accountability mechanisms. The language of data security, digital content management, and the like is dense and technical. It resists both public comprehension and public input, and even regulatory personnel themselves may not understand the key issues well. Many U.S. agencies now employ technical experts in key positions, but their work must be translated adequately for other agency staff. In addition, anti-regulatory advocacy has coalesced around a narrative about the foolhardiness and futility of regulatory intervention in highly technical, rapidly evolving fields. As agencies such as the FCC and FTC have begun to take up more technically complex issues, industry (p.193) groups and pro-business think tanks have argued that direct government supervision of standards development will stifle innovation and slow economic development.80 Industry standard-making processes, meanwhile, are lengthy, secretive, and notoriously resistant to public interest oversight. To take just one example, the ongoing negotiations over digital copy protection standards for high-definition audiovisual content are conducted on an invitation-only basis. Groups not invited to the table have been forced to rely on black-box testing and complaints from disgruntled consumers to gain information about the protocols as implemented.81

As this brief summary suggests, however, the current regulatory landscape also includes important innovations with respect to accountability and oversight. From a traditional “administrative law” perspective, the new regulatory bodies and competencies mentioned in this Section—the PCAOB, the still-emerging constellation of rules governing credit rating agencies, the administrators at the Federal Reserve who oversee bank stress testing, the administrators within the Department of Health and Human Services’ Office for Civil Rights who oversee implementation of the HIPAA rules, and some components within NIST—seem to sit on the periphery of the regulatory state. Each oversees arcane and highly technical subject matter, and each sits within and is subject to the oversight authority of a larger and more traditionally configured administrative body. In terms of their core competencies, however, they are paradigmatic information-era regulatory bodies, with at least some amount of front-line authority over decisions that have enormous systemic impact.82 Each has important lessons to teach about the possible futures of the regulatory state, and for that reason they merit more careful study by administrative law scholars generally.

The Regulatory State as Manager

A functioning government requires a budget, and budgetary decisions therefore provide another locus for the exercise of regulatory authority. As the regulatory state has grown larger, more complex, and more expensive, budgetary controls have become more and more important. Once again, this should be unsurprising. Financial controls are another paradigmatic postindustrial regulatory technique: they are intensively informational and their effective implementation requires both constructed (informational) measures of soundness and technical information-processing capacity. Like audits and technical protocols, however, financial controls have generated unfamiliar public accountability challenges. In addition, their congeniality to concrete, cost-benefit modeling has provided new points of entry for managerial efforts directed first and foremost toward regulatory minimization.

In the United States, budgetary authority is centralized in the Office of Management and Budget (OMB), which was created in 1970 as the successor to the Bureau of the Budget, an office originally housed within the Treasury Department. Within administrative law scholarship, interest in the OMB’s activities and methods is a relatively recent development. (p.194) Beginning in the 1980s, scholars began to pay close attention to the role that the Office of Information and Regulatory Affairs (OIRA), a subdivision within OMB, plays in cost-benefit analysis of proposed regulations. As Eloise Pasachoff explains, however, OIRA is the tip of a much larger iceberg. A suite of activities, including not only cost-benefit analysis but also budget oversight, grant-making authority, and various other efficiency mandates, involves OMB pervasively in executive branch regulatory activities and enables it to assert new modes of financialized control over those activities. Some efficiency mandates, most notably the Paperwork Reduction Act, give OMB leverage over even formally independent agencies.83

Institutionally speaking, OMB’s expertise is non-topical. Although program officers in its resource management offices are assigned to particular substantive areas, appointment within OMB does not require, for example, detailed familiarity with climate science, spectrum policy, or consumer finance. Instead, it traditionally has required training in “public policy, public administration, business, economics, etc.”84 The issue here is not that OMB staff lack familiarity with the technical and policy issues that are specific to the particular activities being regulated. As Pasachoff’s description makes clear, OMB staff assigned to particular areas acquire expertise over time and reflect institutional memory the same way that staffers at agencies do. What is significant is simply that OMB’s mission calls for the involvement of a cadre of professionals whose expertise is principally oriented toward efficient management. OMB therefore has become an important fulcrum point for the ongoing managerial reconfiguration of the regulatory state.

Like the managerial reconfiguration of dispute resolution systems, the managerial reconfiguration of regulatory activity has tended to elevate values such as efficiency and technocratic oversight over others such as fairness and public-facing accountability. Accounting and management methodologies rest on sets of assumptions about how to describe, measure, and account for program costs and benefits. Those assumptions are neither transparent nor inherently neutral, and merit careful scrutiny based on both the values that they enshrine and those that they elide or omit. And, precisely because they rest on assumptions about the inherent neutrality of management-based approaches, statutory provisions designed to facilitate public oversight of government effectiveness do not join these methodological issues effectively.85 OMB’s often-technical review and approval processes also exacerbate the problems of differential access and cumulative opacity described in the previous sections. As a result, OMB oversight sometimes has seemed merely to provide additional opportunities for regulated entities to exert influence over agency outputs.86

The ongoing centralization of regulatory functions in the OMB has meshed especially well with the turn to cost-benefit analysis described earlier in this chapter, and here the ideological and political undercurrents become more powerful. Academic proponents tout cost-benefit analysis as a neutral tool for centralized, politically accountable oversight of regulatory activity, but cost-benefit rhetoric—and particularly rhetoric emphasizing the purportedly intractable conflict between (p.195) burdensome regulation on one hand and innovation and economic growth on the other—also has become a preferred mode of public policy discourse among scholars and policymakers who advocate regulatory minimization and privatization. Because cost-benefit analysis contemplates that even serious harms may be outweighed by higher levels of overall economic benefit, and because it tends to weigh the concrete costs of regulatory implementation more heavily than the more diffuse and often external benefits to be realized from compliance, it offers a particularly congenial technique for achieving that result. The increasingly tight conflation of cost-benefit review with regulatory rationality has meant that critics of regulatory minimization and privatization have found themselves placed in the unenviable role of Luddites, advancing complex conceptions of fairness and collective interdependence to counter a simpler, more accessible narrative about getting government out of the way.

The upshot is that the modern OMB has extended its influence over thinking about regulatory efficiency and efficacy in ways both institutional and cultural. In the absence of comprehensive scholarly and public scrutiny of the values encoded in government efficiency imperatives, the neoliberal hostility to regulation increasingly fashionable on both sides of the political aisle has enacted a kind of regulatory double movement, detaching regulatory authority from the various agencies to which it is assigned and re-embedding it under the oversight of a new, corporatized/managerial class concerned chiefly with minimizing the impact of regulation on economically productive activity. During the 2012 presidential campaign, a refrain oft-repeated by Republican candidate Mitt Romney concerned the business expertise that a former management consultant would bring to the executive branch. But Democrats also have gotten into the act: every administration for the last four decades has imposed new initiatives to be implemented within OMB.87

In the informational era, thinking about the proper relationship between government and management requires a more measured and constructively critical approach. The modes of financialized control practiced by OMB have not been embraced and systematically studied as core regulatory modalities—as much a part of the regulatory canon as the notice-and-comment rulemaking or the enforcement action. Put differently, financialized controls are not simply tools for achieving greater regulatory accountability. They represent a new information-era modality for the managerial exercise of regulatory power. In an enterprise as large and complex as the modern executive branch, developing the capacity for efficient management of taxpayer resources is important, but how exactly financialized controls should be incorporated within regulatory institutions attuned to the information economy is open to debate. Exercising financialized authority responsibly and fairly, and with appropriate attention to new rubrics constructed around the organizing problems of platform power, infoglut, and systemic threat, requires corresponding institutional innovation.

(p.196) The Regulatory State as Artifact

The increased emphasis on centralized management and budget oversight in contemporary U.S. regulatory practice also points to a final kind of mismatch between the regulatory state and information-era regulatory problems and practices, which is structural: The jurisdictional boundaries of the existing administrative framework were drawn with industrial-era activities in mind. Information-economy activities have developed in utter disregard of the executive branch organization chart, cascading around and across existing lines of authority. The resulting overlaps and gaps underscore the need for new oversight modalities but also the need for new approaches to matters of high-level structure and organization.

Many contemporary regulatory disputes are artifacts of outdated statutory grants of authority. Consider net neutrality again. The last set of major amendments to the statutory framework governing “telecommunications” dates from 1996, when internet services were still emergent and not yet understood as central components of modern communications architecture and policy. Initially, the FCC classified cable broadband services as information services under the statute, but after the D.C. Circuit ruled that the statute did not permit imposition of nondiscrimination obligations on such services and invalidated an initial set of net neutrality rules, it recharacterized broadband access providers as common carriers subject to regulation under a different title of the statute and then issued new rules, now themselves revoked.88

As we have already seen, however, the telephone-based communications paradigm is too narrow to encompass all of the service-related questions that digital networked communications raise. The parts of the statute that regulate designated common carriers were designed for basic telephone service; for example, common carriers must route all calls to their destinations without blocking or playing favorites.89 Internet access providers routinely engage in traffic management for a diverse set of purposes ranging from network optimization to spam control to network security, and some network uses require higher bandwidth than others. Commercial internet access providers typically have defined tiers of pricing based on network speed and data usage rather than on the services consumers plan to use, but they also have experimented by selectively slowing or prioritizing traffic in ways that serve their own narrower interests.90 Net neutrality regulation takes aim at the latter sort of conduct, but needs to say something about the former sort and to provide guidelines for distinguishing between the two, and the complexity of that project multiplies opportunities for rent-seeking. A modern enabling statute would not eliminate all of these problems but could address some of the more obvious difficulties.
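
The line-drawing problem can be made concrete with a toy scheduler; every name, weight, and classification below is invented. Prioritizing traffic by class of service is at least arguably reasonable network management, while prioritizing it by the commercial identity of the sender is the conduct at which neutrality rules take aim, yet from the outside both appear simply as differential treatment.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source: str
        traffic_class: str  # e.g., "voip", "video", "bulk"

    CLASS_WEIGHT = {"voip": 3, "video": 2, "bulk": 1}   # hypothetical congestion-management policy
    AFFILIATES = {"isp-owned-video.example"}            # hypothetical affiliated or paying senders

    def neutral_priority(p: Packet) -> int:
        """Prioritize by traffic class alone, regardless of who sent the packet."""
        return CLASS_WEIGHT.get(p.traffic_class, 1)

    def self_interested_priority(p: Packet) -> int:
        """Add a boost for affiliated senders, the conduct neutrality rules target."""
        return neutral_priority(p) + (10 if p.source in AFFILIATES else 0)

    queue = [Packet("rival-video.example", "video"), Packet("isp-owned-video.example", "video")]
    print(sorted(queue, key=neutral_priority, reverse=True))          # same class, treated alike
    print(sorted(queue, key=self_interested_priority, reverse=True))  # affiliate jumps the queue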

The Telecommunications Act of 1996 is hardly unusual; many agency enabling statutes are much older. The Food and Drug Administration regulates computerized medical devices pursuant to a statutory definition of “device” enacted in 1976; the (p.197) Copyright Act of 1976 contains separate broadcast retransmission provisions for cable and satellite systems and does not speak to internet retransmission at all; the Magnuson-Moss Warranty Act, which granted the Federal Trade Commission limited rulemaking authority relating to unfair and deceptive acts in commerce, dates from 1975, decades before anyone had contemplated the necessary components of privacy and data security policies; and so on.91 In many cases, political gridlock has defeated efforts to rethink obsolete statutory frameworks, but the cultural dynamic of deep capture (described in Chapter 3) also plays an important role in fostering regulatory stagnation. In a policy climate increasingly oriented toward neoliberal governmentality, even well-intentioned policymakers are relatively disinclined to embark on the complex task of designing new administrative structures better matched to information-economy activities.

Both new and old economic actors have treated obsolete laws as invitations to create business arrangements that route around existing points of regulatory control. As we saw in Chapter 1, high-profile, platform-based “disruptors” of existing work arrangements—including labor-matching sites such as Mechanical Turk and TaskRabbit and transportation-matching sites such as Uber and Lyft—argue that existing regulatory regimes do not apply to them. Instead, they recruit user-workers into arrangements that are styled as licenses to access the platform’s resources. As critics have detailed, however, provisions in those licenses cover matters more commonly addressed in employment agreements.92 And platforms’ self-interested description of their operations is incomplete; they are also structures for converting the labor of user-workers and their customers into flows of monetizable data and finance capital. Outdated statutory frameworks do not address these issues at all, and the logics of performative enclosure, productive appropriation, and innovative and expressive immunity work to make them seem both less salient and less important from a regulatory standpoint. Meanwhile, long-established industrial-economy employers also have relied on networked information and communications technologies to restructure patterns of work in ways that avoid protective regulation. So, for example, corporate giant FedEx uses sophisticated information systems to match drivers with parcels and delivery routes and structures its relationships with drivers as independent-contractor arrangements. Employers of all sizes use scheduling software to limit employees’ hours in order to avoid triggering various regulatory requirements.93

In other cases, business models designed around obsolete statutory frameworks have become entrenched in ways that systematically disadvantage new entrants. For example, in the field of music copyright, some entities administer reproduction rights in musical compositions while others administer public performance rights in those same compositions. Rights in sound recordings are held by a third group of actors. Some rights are subject to statutory licenses, but the technical details create many pitfalls for digital music services that must clear multiple rights for large numbers of works. Rent-seeking to preserve existing allocations of entitlements and (p.198) obligations has produced a powerful inertial effect, compromising efforts to clear the way for continued innovation in digital delivery of content.94

Each of the examples just given involves a single regulator, but in other cases, efforts to devise modern frameworks for regulating information-era activities confront jurisdictional difficulties that are even more intractable. Many kinds of information-related activities simply were not contemplated when the divisions of authority among the various executive branch, legislative, and independent agencies were established. Activities such as digital broadcast content protection, pharmaceutical patenting, data-driven predictive profiling, regulation of health-related information services, and regulation of financial services implicate the jurisdiction of multiple entities. Assertions of incomplete and poorly defined regulatory authority both add to the overall confusion and embolden critics of regulatory overreach.95

Consider net neutrality again. An ideal enabling statute for the modern FCC would acknowledge the full range of considerations that attend the provision of internet access and provide guidance on how to weigh them. But even if Congress managed to transcend its dysfunctions and enact such a statute, the current regulatory structure still does not permit any regulator to consider the full group of actors whose activities determine the neutrality or nonneutrality of access to networked digital communications capabilities and services. The FCC’s short-lived rules applied straightforwardly to broadband and wireless internet providers, with some exceptions for certain voice-over-internet services, and not at all to platforms such as Facebook and Twitter that do not provide internet access directly to U.S. consumers. If the question is whether an entity provides telecommunications services of the general sort contemplated by Congress in the most recent iteration of the Telecommunications Act, those distinctions may make sense. If the question is whether platforms’ self-interested mediation of the networked information environment ought to be subject to some basic nondiscrimination obligations, they seem both arbitrary and laughable.

Platforms and their government relations firms have exploited the incompleteness and fragmentation of regulatory authority over their activities, pointing to isolated instances of apparent unfairness. For example, Google has adopted the posture of a supplicant seeking nondiscriminatory access to connection points for its Google Fiber initiative, even though it dominates the market for search and, together with other dominant platform firms, “already benefit[s] from what are essentially internet fast lanes, and this has been the case for years.”96 Proposals to create a regulatory authority empowered to impose comparable neutrality obligations on search providers, meanwhile, have drawn criticism from commentators all along the political spectrum.97 The FCC’s net neutrality rulemakings also have systematically excluded privacy-related concerns. After the Obama-era FCC issued separate rules requiring internet access providers to protect the privacy of subscriber personal information, access providers argued that those obligations were duplicative (p.199) in light of the FTC’s enforcement activities. After the acting chair of the Trump-era FTC endorsed that position, Congress invoked a previously little-used anti-regulatory instrument, the Congressional Review Act, to rescind the privacy rules and bar their reissuance in substantially the same form. Meanwhile—and while challenging the Obama-era FCC’s authority to issue net neutrality rules—access provider AT&T attempted to defend against an FTC privacy enforcement action by arguing that, because it performs common carriage functions and the FTC lacks enforcement jurisdiction over issues relating to common carriage, the FTC had no authority over it at all.98

Patterns of regulatory oversight in some industries reflect a more considered commitment to regulatory fragmentation. That approach accords with the reigning neoliberal anti-regulatory ideology and its emphasis on bringing competition into government; as we have seen throughout this chapter, however, it has been notably ineffective at countering systemic threats. Consider, for example, financial regulation, which emanates from multiple agencies, commissions, and specialized boards. While fragmentation may have seemed sensible as a method of avoiding capture during the years when the financial industry also was subject to statutorily imposed fragmentation, the demise of those restrictions has opened the way for banks and other financial services firms to amass extraordinary power over both national and global financial systems. Through multiple rounds of reform legislation, commitments to fragmentation have persisted and have disabled regulators from mounting effective systemic responses.99 Similarly, regulatory fragmentation has become a salient attribute of food and drug law. Even as the range of considerations that affects systemic safety and sustainability has grown increasingly large and complex, regulators have worked to keep lines of authority distinct and limited.100

Notably, the executive branch sometimes has responded to structural obsolescence and jurisdictional overlap by creating interagency task forces and working groups. Those experiments have inspired a new subgenre of administrative law scholarship focused on developing procedures and lines of accountability for interagency entities. Such proposals, however, also promise to add new layers of regulatory complexity and magnify the scope for bureaucratic infighting.101 (The European Commission, meanwhile, has constituted a separate Directorate-General for the sole purpose of studying information-economy needs and attempting to coordinate solutions, but regulators in other directorates have not necessarily welcomed its interventions.102)

Like the new templates for large-scale dispute resolution studied in Chapter 5, interagency entities are best understood and evaluated as contingent institutional formations. They gesture toward the possibility of a different structure for the administrative state, but they are also stopgap measures that take existing structures for granted. True blueprints for a regulatory state optimized for the informational economy have yet to emerge.

(p.200) Reinventing Regulatory Practice for the Era of Algorithmic Governance

As the basis of our political economy shifts, corresponding shifts in the nature of regulatory concepts and processes are to be expected. From that standpoint, some of the changes I have described may usefully be understood through the lens of creative destruction; outdated regulatory formations are and should be vulnerable to the winds of change just as outdated products and irrelevant monopolists are. Regulatory institutions are stickier than market arrangements, however, and not only because so many aspects of their operation are codified. Regulatory institutions also have at least the potential to perform protective functions that market institutions cannot—to interpose friction where it is most needed and most justified. And for that reason, regulatory institutions also are important sites of innovation.

It is too soon to say precisely what a regulatory state optimized for the era of informational capitalism ought to look like, but it is nonetheless essential to understand current regulatory disputes as contests over that question. Transformation in political economy demands corresponding transformation in regulatory logics. Reinvigorating market oversight in the era of informational capitalism requires a willingness to rethink both competition law and public utility law from the ground up. Additionally, if protections against discrimination, fraud, manipulation, and election interference are to be preserved in the era of infoglut, regulators will need to engage more directly with practices of data-driven, algorithmic intermediation and their uses and abuses. Both projects demand more careful investigation of the kinds of power that information platforms wield and more open-minded consideration of possible corrective measures. Effective regulation in the information era also requires creative, interdisciplinary thinking about the design of regulatory methods for modeling and countering systemic threats.103

If the dysfunctions now confronting the regulatory state are to be addressed effectively, however, scholars and policymakers also must be willing to entertain the prospect of paradigm shifts in the design of regulatory institutions and in the deep structure of “administrative law” more generally. In that process, it will be important not to confuse the demands of informational capitalism, understood as a distinct system of political economy requiring effective oversight and guidance, with the demands of information capitalists. The pervasive mismatch between the regulatory instrumentalities that we have and the ones that we need does not simply call for new cadres of auditors and more cutting-edge managerial techniques but rather for creative thinking about how new structures for oversight and public accountability might develop. And even new enabling statutes for existing agencies will not necessarily address problems requiring deeper restructuring.

(p.201) In the current U.S. political climate, comprehensive overhaul of the regulatory state may seem infeasible. As of this writing, all branches of government seem intent instead on presiding over its destruction. If that approach has a silver lining, it may be that it clears the way for subsequent administrations and Congresses to rebuild and in the process to reimagine the regulatory state in a form better suited to the tasks at hand.

(40.) In support of cost-benefit analysis, see Michael A. Livermore & Richard L. Revesz, “Can Executive Review Help Prevent Capture?,” in Preventing Regulatory Capture: Special Interest Influence and How to Limit It, eds. Daniel Carpenter & David A. Moss (New York: Cambridge University Press, 2014), 439–44; Cass R. Sunstein, “The Limits of Quantification,” California Law Review 102 no. 6 (2014): 1369–422; Cass R. Sunstein, “The Real World of Cost-Benefit Analysis: Thirty-Six Questions (and Almost as Many Answers),” Columbia Law Review 114 no. 1 (2014): 167–212. For criticisms, see Frank Ackerman & Lisa Heinzerling, Priceless: On Knowing the Price of Everything and the Value of Nothing (New York: New Press, 2004); Lisa Heinzerling, “Quality Control: A Reply to Professor Sunstein,” California Law Review 102 no. 6 (2014): 1457–68 (arguing that the theoretical virtues of cost-benefit analysis are not realized in practice). On the history of cost-benefit analysis in U.S. government, see Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, N.J.: Princeton University Press, 1995), 148–89. On the history of precautionary regulation, see Tim O’Riordan & James Cameron, eds., Interpreting the Precautionary Principle (New York: Routledge, 1994); see also Boyd, “Genealogies of Risk,” 948–77.

(41.) P. Lamberson & Scott E. Page, “Tipping Points,” Working Paper No. 2012-02-002, Santa Fe Institute (2012), https://perma.cc/7FBM-5VGX. Many environmental threat models include tipping points. Timothy M. Lenton et al., “Tipping Elements in the Earth’s Climate System,” Proceedings of the National Academy of Sciences 105 no. 6 (2008): 1786–93; see also Haroon Siddique, “Disease Resistance to Antibiotics at Tipping Point, Expert Warns,” Guardian (Jan. 8, 2014), https://perma.cc/PL7E-NXXG.

(42.) Frank Ackerman, Lisa Heinzerling, & Rachel Massey, “Applying Cost-Benefit to Past Decisions: Was Environmental Protection Ever a Good Idea?,” Administrative Law Review 57 no. 1 (2005): 155–92.

(43.) Adam Levitin, “Safe Banking: Finance and Democracy,” University of Chicago Law Review 83 no. 1 (2016): 357–456; Saule T. Omarova, “License to Deal: Mandatory Approval of Complex Financial Products,” Washington University Law Review 90 no. 1 (2012): 63–140.

(44.) In Europe, the precautionary stance is best encapsulated in the purpose limitation principle, which dictates that data collected for one purpose should not be used for an unrelated purpose without consent. See Article 29 Data Protection Working Party, Opinion 03/2013 on Purpose Limitation (Apr. 2, 2013); see also Raphael Gellert, “Data Protection: A Risk Regulation? Between the Risk Management of Everything and the Precautionary Alternative,” International Data Privacy Law 5 no. 1 (2015): 3–19. For some examples from the United States, see Omri Ben-Shahar, “Data Pollution,” University of Chicago Public Law Working Paper No. 679 (2018), https://perma.cc/9HU6-XE8G; A. Michael Froomkin, “Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements,” University of Illinois Law Review 2015 no. 5 (2015): 1713–90; Dennis D. Hirsch, “The Glass House Effect: Big Data, the New Oil, and the Power of Analogy,” Maine Law Review 66 no. 2 (2014): 373–96.

(45.) See, for example, Adam Thierer, “The Problem with Obama’s ‘Let’s Be More Like Europe’ Privacy Plan,” Forbes (Feb. 23, 2012), https://perma.cc/9T37-3B5S; see generally Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (New York: Cambridge University Press, 2005).

(46.) On the rulemaking-adjudication dichotomy and what it leaves out, see Rubin, “It’s Time to Make the Administrative Procedure Act Administrative.”

(47.) For discussion of additional difficulties that the fast-moving internet industry has created for the FCC in particular, see Weiser, “The Future of Internet Regulation,” 531–48. There is a robust scholarly debate about the extent to which rulemaking processes have become ossified, on which I take no position. My point is different: it concerns the ability of rulemakers to move at speeds roughly commensurate with the pace of change in highly informationalized sectors of our political economy.

(48.) Issie Lapowsky, “How Bots Broke the FCC’s Public Comment System,” Wired (Nov. 28, 2017), https://perma.cc/PTZ9-AXQP.

(49.) Cynthia R. Farina, Dmitry Epstein, Josiah Heidt, & Mary J. Newhart, “Knowledge in the People: Rethinking ‘Value’ in Public Rulemaking Participation,” Wake Forest Law Review 47 no. 5 (2012): 1185–242; Lynn E. Blais & Wendy E. Wagner, “Emerging Science, Adaptive Regulation, and the Problem of Rulemaking Ruts,” Texas Law Review 86 no. 7 (2008): 1701–40. On information subsidies and deep capture, see Chapter 3, pp. 103–06.

(50.) Nicholas R. Parrillo, “Federal Agency Guidance and the Power to Bind: An Empirical Study of Agencies and Industries,” Yale Journal on Regulation 36 no. 1 (2019): 165–271; Todd D. Rakoff, “The Choice between Formal and Informal Modes of Administrative Regulation,” Administrative Law Review 52 no. 1 (2000): 159–74; Robert A. Anthony, “Interpretive Rules, Policy Statements, Guidances, Manuals, and the Like—Should Federal Agencies Use Them to Bind the Public?,” Duke Law Journal 41 no. 6 (1992): 1311–84. For examples of guidances and staff interpretations, see “Supervisory Policy and Guidance Topics,” Board of Governors of the Federal Reserve System, https://perma.cc/P5YL-5PAQ (last visited June 22, 2018); “Informal Interpretations,” Federal Trade Commission, https://perma.cc/LNA5-XJ6K (last visited June 22, 2018); “Staff Interpretations,” U.S. Securities & Exchange Commission, https://perma.cc/S2HL-TF4A (last visited June 22, 2018). For discussion of the deference question, see David L. Franklin, “Legislative Rules, Nonlegislative Rules, and the Perils of the Short Cut,” Yale Law Journal 120 no. 2 (2010): 276–327; John F. Manning, “Nonlegislative Rules,” George Washington Law Review 72 no. 5 (2004): 893–945; Peter L. Strauss, “The Rulemaking Continuum,” Duke Law Journal 41 no. 6 (1992): 1463–89.

(51.) Jody Freeman, “The Private Role in Public Governance,” New York University Law Review 75 no. 3 (2000): 543–675; Orly Lobel, “The Renew Deal: The Fall of Regulation and the Rise of Governance in Contemporary Legal Thought,” Minnesota Law Review 89 no. 2 (2004): 342–470; David Zaring, “Best Practices,” New York University Law Review 81 no. 1 (2006): 294–350. See, for example, “Cybersecurity Framework,” National Institute of Standards and Technology, https://perma.cc/M8S8-SUTE (last visited June 22, 2018); Joseph A. Siegel, “Collaborative Decision Making on Climate Change in the Federal Government,” Pace Environmental Law Review 27 no. 1 (2009): 257–312.

(52.) Jodi L. Short, “The Paranoid Style in Regulatory Reform,” Hastings Law Journal 63 no. 3 (2012): 633–94, 635.

(53.) See, for example, U.S. Gov’t Accountability Office, Opportunities Exist to Improve SEC’s Oversight of the Financial Industry Regulatory Authority (2012), https://perma.cc/7743-UC94; Natural Resources Defense Council, Generally Recognized as Secret: Chemicals Added to Food in the United States (2014), https://perma.cc/329T-D997; American Bar Ass’n, Section on Antitrust Law, Self-Regulation of Advertising in the United States: An Assessment of the National Advertising Division (2015), https://perma.cc/85WC-WS7E.

(54.) Daniel J. Solove & Woodrow Hartzog, “The FTC and the New Common Law of Privacy,” Columbia Law Review 114 no. 3 (2014): 583–676.

(55.) See Federal Trade Commission Act Amendments of 1994, 15 U.S.C. § 57a(b)(3) (2018); Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy (New York: Cambridge University Press, 2016), 55–56, 333–35.

(56.) In re Formal Complaint of Free Press & Public Knowledge Against Comcast Corp. for Secretly Degrading Peer-to-Peer Applications, 23 F.C.C.R. 13,028 (2008); Comcast v. FCC, 600 F.3d 642 (D.C. Cir. 2010) (vacating the enforcement order against Comcast on jurisdictional grounds); Gwen Lisa Shaffer & Scott Jordan, “Classic Conditioning: The Use of Merger Conditions to Advance Policy Goals,” Media, Culture & Society 35 no. 3 (2013): 392–403.

(57.) For an optimistic assessment, see Kenneth A. Bamberger & Deirdre K. Mulligan, “Privacy on the Books and on the Ground,” Stanford Law Review 63 no. 2 (2011): 247–316, 308–09. For more critical perspectives, see Megan Gray, “Understanding and Improving Privacy ‘Audits’ under FTC Orders,” Working Paper (May 5, 2018), https://perma.cc/8QNQ-ZJ3B; Ari Ezra Waldman, “Privacy Law’s False Promise,” Washington University Law Review 97 no. 3 (forthcoming 2019).

(58.) Solove & Hartzog, “The FTC and the New Common Law of Privacy,” 607, 624–25; Margo Schlanger, “Against Secret Regulation: Why and How We Should End the Practical Obscurity of Injunctions and Consent Decrees,” DePaul Law Review 59 no. 2 (2010): 515–28.

(59.) Ian Ayres & John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (New York: Oxford University Press, 1995).

(60.) Neal Perlman, “Section 21(A) Reports: Formalizing a Functional Release Valve at the Securities Exchange Commission,” N.Y.U. Annual Survey of American Law 69 no. 4 (2015): 877–936; Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe (Cambridge, Mass.: MIT Press, 2015), 187–91; William McGeveran, “Friending the Privacy Regulators,” Arizona Law Review 58 no. 4 (2016): 959–1026; see also Tim Wu, “Agency Threats,” Duke Law Journal 60 no. 8 (2011): 1841–58.

(61.) Francesca Bignami, “Comparative Legalism and the Non-Americanization of European Regulatory Styles: The Case of Data Privacy,” American Journal of Comparative Law 59 no. 2 (2011): 411–62.

(63.) James R. Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, Mass.: Harvard University Press, 1986), 291–435.

(64.) For the HIPAA deidentification rule, see 45 C.F.R. § 164.514 (2014). For some other examples of highly technical reporting requirements, see 40 C.F.R. § 63.10(d) (2015) (EPA general reporting requirements); 21 C.F.R. § 803.10 (2015) (FDA medical device reporting requirements); 47 C.F.R. § 64.606(g) (2014) (FCC common carrier reporting requirements); 17 C.F.R. § 230.257 (2015) (SEC periodic financial reporting requirements). On the centrality of audit within modern finance, see Michael Power, The Audit Society: Rituals of Verification (New York: Oxford University Press, 1997), 15–40. On privacy audit and reporting requirements, see sources cited in note 57 of this chapter.

(65.) Bamberger, “Technologies of Compliance,” 673–74, 689–702; see also Danielle Keats Citron, “Technological Due Process,” Washington University Law Review 85 no. 6 (2008): 1249–314.

(66.) “The Story of NIST,” National Institute of Standards & Technology, https://www.nist.gov/timeline#event-a-href-node-774226 (last visited Apr. 10, 2019); “Overview of ITU’s History,” International Telecommunication Union, https://perma.cc/87G5-XW5T (last visited Apr. 10, 2019).

(67.) National Technology Transfer and Advancement Act of 1995, Pub. L. No. 104-113, § 12(d), 110 Stat. 775 (1996), codified as amended at 15 U.S.C. § 272(d) (2018).

(68.) Cary Coglianese, “The Limits of Performance-Based Regulation,” University of Michigan Journal of Law Reform 50 no. 3 (2017): 525–63; Karen Yeung, “Algorithmic Regulation: A Critical Interrogation,” Regulation and Governance 12 no. 4 (2018): 505–23.

(69.) This point dovetails with Simon’s observation that post-bureaucratic regulatory activities generally escape judicial review, see Simon, “The Organizational Premises of Administrative Law,” 70–74, but it is slightly different: a court or regulator determined to review those activities more rigorously would first need to determine how to do so. See also Bamberger, “Regulation as Delegation.”

(71.) See, for example, Gray, “Understanding and Improving Privacy ‘Audits’ under FTC Orders”; Waldman, “Privacy Law’s False Promise.”

(72.) On protections for trade secrecy within open government laws, see Government in the Sunshine Act, 5 U.S.C. § 552b(c)(4) (2012); Freedom of Information Act, 5 U.S.C. § 552(b)(4) (2012); William Funk, “Public Participation and Transparency in Administrative Law—Three Examples as an Object Lesson,” Administrative Law Review 61(S) (2009): 171–98, 187–91; David Vladeck, “Information Access—Surveying the Current Legal Landscape of Federal Right-to-Know Laws,” Texas Law Review 86 no. 7 (2008): 1787–836.

(73.) William W. Bratton, “Enron and the Dark Side of Shareholder Value,” Tulane Law Review 76 nos. 5–6 (2002): 1275–352; John C. Coffee Jr., “Understanding Enron: ‘It’s about the Gatekeepers, Stupid’,” Business Lawyer 57 no. 4 (2002): 1403–20. For a brief summary of the largely unregulated audit landscape prior to the Enron scandal, see Michael V. Seitzinger, Marie B. Morris, & Mark Jickling, “Enron: Selected Securities, Accounting, and Pension Laws Possibly Implicated in Its Collapse,” in The Enron Scandal, ed. Theodore F. Sterling (New York: Nova Science, 2002), 103, 106–07. On the credit rating agencies, see Frank Partnoy, “The Siskel and Ebert of Financial Markets?: Two Thumbs Down for the Credit Rating Agencies,” Washington University Law Quarterly 77 no. 3 (1999): 619–712.

(74.) Chris Brummer, Soft Law and the Global Financial System: Rule Making in the 21st Century (New York: Cambridge University Press, 2012), 220–24; Daniel K. Tarullo, Banking on Basel: The Future of International Financial Regulation (New York: Columbia University Press, 2008), 166–72; Thomas J. Fitzpatrick, IV & Chris Sagers, “Faith-Based Financial Regulation: A Primer on Oversight of Credit Rating Agencies,” Administrative Law Review 61 no. 3 (2009): 557–610.

(75.) Sarbanes-Oxley Act of 2002, Pub. L. No. 107-204, Tit. I, § 101, 116 Stat. 745 (2002), codified as amended at 15 U.S.C. § 7211 (2018); see Saule T. Omarova, “Bankers, Bureaucrats, and Guardians: Toward Tripartism in Financial Services Regulation,” Journal of Corporate Law 37 no. 3 (2012): 621–74.

(76.) Credit Rating Agency Reform Act of 2006, Pub. L. No. 109-291, § 4, 120 Stat. 1329, codified as amended at 15 U.S.C. § 78o-7 (2018); see Jeffrey Manns, “Downgrading Rating Agency Reform,” George Washington Law Review 81 no. 3 (2013): 749–812.

(77.) See Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, Tit. I, § 171, 124 Stat. 1376 (2010), codified as amended at 12 U.S.C. § 5371 (2018).

(79.) Cary Coglianese & Jennifer Nash, “The Law of the Test: Performance-Based Regulation and Diesel Emissions Control,” Yale Journal on Regulation 34 no. 1 (2017): 33–90.

(80.) See, for example, Doug Brake, “5G and Next Generation Wireless: Implications for Policy and Competition,” Information Technology & Innovation Foundation, June 2016, 10–11, https://perma.cc/9QAD-3EXH; Adam Thierer, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, Revised and Expanded Edition,” Mercatus Center at George Mason University (2016), https://perma.cc/GZ9L-UNYW; Caleb Watney, “R Street Comments to FTC on Connected Vehicles” (Apr. 27, 2017), https://perma.cc/R2UN-VTEL.

(81.) Many results of such studies have been published at Freedom to Tinker, www.freedom-to-tinker.com.

(82.) This observation should not be taken as a comment on the efficacy of any of those bodies or competencies as currently constituted. It is simply a comment on their importance—and, therefore, on the importance of constituting them effectively.

(83.) Eloise Pasachoff, “The President’s Budget as a Source of Agency Policy Control,” Yale Law Journal 125 no. 8 (2016): 2182–291; Paperwork Reduction Act, 44 U.S.C. §§ 3501–3521 (2018). For a sampling of the literature on OIRA, see Thomas O. McGarity, “Presidential Control of Regulatory Agency Decisionmaking,” American University Law Review 36 no. 2 (1987): 443–90; Alan B. Morrison, “OMB Interference with Agency Rulemaking: The Wrong Way to Write a Regulation,” Harvard Law Review 99 no. 5 (1986): 1059–74; Lisa Heinzerling, “Inside EPA: A Former Insider’s Reflections on the Relationship Between the Obama EPA and the Obama White House,” Pace Environmental Law Review 31 no. 1 (2014): 325–96; Michael Livermore & Richard L. Revesz, “Regulatory Review, Capture, and Agency Inaction,” Georgetown Law Journal 101 no. 5 (2012): 1337–98; Cass R. Sunstein, “The Office of Information and Regulatory Affairs: Myths and Realities,” Harvard Law Review 126 no. 7 (2013): 1838–79.

(84.) “Careers with the Office of Management and Budget,” Obama White House Archives, https://perma.cc/LA53-RWAG (last visited June 22, 2018).

(85.) For the statutory framework, see Government Performance and Results Act of 1993, Pub. L. No. 103-62, 107 Stat. 285, codified as amended at 31 U.S.C. § 1115 (2018). On the complex relationship between accounting methodologies and economic development, see Trevor Hopper, “Cost Accounting, Control, and Capitalism,” in Critical Histories of Accounting: Sinister Inscriptions in the Modern Era, eds. Richard K. Fleischman, Warwick Funnell & Stephen P. Walker (New York: Routledge, 2013), 129–43. On accounting methodologies as technologies of governance, see Peter Miller, “Governing by Numbers: Why Calculative Practices Matter,” Social Research 68 no. 2 (2001): 379–96.

(86.) Center for Progressive Reform, “Behind Closed Doors at the White House: How Politics Trumps Protection of Public Health, Worker Safety, and the Environment,” White Paper No. 1111 (2011), https://perma.cc/8YE9-YEG4.

(87.) Megan McArdle, “Romney’s Business,” Atlantic (Dec. 2011), https://perma.cc/EXZ7-PX8T; Pasachoff, “The President’s Budget as a Source of Agency Policy Control,” 2198, 2201.

(88.) Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014) (invalidating the antiblocking and nondiscrimination rules in Preserving the Open Internet, 25 F.C.C.R. 17,905 (Dec. 21, 2010)); Protecting and Promoting the Open Internet: Final Rule, 80 Fed. Reg. 19,738 (Apr. 13, 2015), repealed by Restoring Internet Freedom, 83 Fed. Reg. 7852 (Feb. 22, 2018).

(89.) See 47 U.S.C. §§ 201(a), 202(a) (2018).

(90.) See, for example, Adam Liptak, “Verizon Blocks Messages of Abortion Rights Group,” New York Times (Sept. 27, 2007), https://perma.cc/ZUK9-AEZQ; Peter Svensson, “Comcast Blocks Some Internet Traffic,” Washington Post (Oct. 19, 2007), https://perma.cc/N8C6-NLAA; Kevin J. O’Brien, “Putting the Brakes on Web-Surfing Speeds,” New York Times (Nov. 13, 2011), https://perma.cc/4PUF-TSGU; Shalini Ramachandran, “Netflix to Pay Comcast for Smoother Streaming,” Wall Street Journal (Feb. 23, 2014), https://perma.cc/4LL6-YJA2; Edward Wyatt, “AT&T Accused of Deceiving Smartphone Customers with Unlimited Data Plans,” New York Times (Oct. 28, 2014), https://perma.cc/X8ZS-9DNK; Editorial, “Why Free Can Be a Problem on the Internet,” New York Times (Nov. 14, 2015), https://perma.cc/E6AF-BNXP; Ingrid Burrington, “How Mobile Carriers Skirt Net-Neutrality Rules,” Atlantic (Dec. 18, 2015), https://perma.cc/7YVX-EYPW; John D. McKinnon & Thomas Gryta, “YouTube Says T-Mobile Is Throttling Its Video Traffic,” Wall Street Journal (Dec. 22, 2015), https://perma.cc/2JGB-CPSX.

(91.) Medical Device Amendments Act of 1976, Pub. L. No. 94-295, § 513, 90 Stat. 539 (1976), codified as amended at 21 U.S.C. § 321 (2018); Copyright Act of 1976, Pub. L. 94-553, title 17, § 106, 90 Stat. 2541, codified as amended at 17 U.S.C. §§ 111, 119 (2018); Magnuson-Moss Warranty-Federal Trade Commission Improvement Act, Pub. L. No. 93-637, § 202, 88 Stat. 2183 (1975), codified as amended at 15 U.S.C. § 57a (2018).

(92.) Valerio de Stefano, “The Rise of the ‘Just-in-Time’ Workforce: On-Demand Work, Crowdwork, and Labor Protection in the ‘Gig-Economy,’” Comparative Labor Law & Policy Journal 37 no. 3 (2016): 471–504.

(93.) Julia Tomassetti, “From Hierarchies to Markets: Fedex Drivers and the Work Contract as Institutional Marker,” Lewis & Clark Law Review 19 no. 4 (2015): 1083–152; Charlotte E. Alexander & Elizabeth Tippett, “The Hacking of Employment Law,” Missouri Law Review 82 no. 4 (2017): 973–1021.

(94.) Lydia Pallas Loren, “Untangling the Web of Music Copyright,” Case Western Reserve Law Review 53 no. 3 (2003): 673–722; Mark H. Wittow, Katherine L. Staba, & Trevor M. Gates, “A Modern Melody for the Music Industry: The Music Modernization Act Is Now the Law of the Land,” K&L Gates (Oct. 11, 2018), https://perma.cc/M5VH-XERK.

(95.) An emerging scholarly genre within administrative law consists of articles exploring the consequences and implications of regulatory overlap. See James C. Cooper, “The Costs of Regulatory Redundancy: Consumer Protection Oversight of Online Travel Agents and the Advantages of Sole FTC Jurisdiction,” George Mason University Law & Econ. Research Paper Series, Working Paper No. 15-08 (2015), https://perma.cc/ML59-BBGS; Tejas N. Narechania, “Patent Conflicts,” Georgetown Law Journal 103 no. 6 (2015): 1483–542; Jacob S. Sherkow, “Administrating Patent Litigation,” Washington Law Review 90 no. 1 (2015): 205–70; Olivier Sylvain, “Disruption and Deference,” Maryland Law Review 74 no. 4 (2015): 715–76. Regulators’ attitudes about overlapping jurisdiction vary. See, for example, Lydia Beyoud, “FCC, FTC Promise to Work in Concert on Consumer Privacy Rules in Broadband,” Bloomberg BNA (Apr. 29, 2015), https://perma.cc/8D24-X4HG; Lydia Beyoud, “Ohlhausen: Congressional Action Needed to Define FCC, FTC Regulatory Spheres,” Bloomberg BNA (Apr. 2, 2015), https://perma.cc/EWN5-9LT2.

(96.) Robert McMillan, “What Everyone Gets Wrong in the Debate over Net Neutrality,” Wired (June 23, 2014), https://perma.cc/T7VZ-RW54; see Letter from Austin C. Schlick, Director of Communications Law, Google, to Marlene H. Dortch, Secretary, FCC (Dec. 30, 2014), WC 16-106, https://perma.cc/5FFH-XK6L; see also Ryan Singel, “Now That It’s in the Broadband Game, Google Flip-Flops on Network Neutrality,” Wired (July 30, 2013), https://perma.cc/P2H9-U39S; Alistair Barr, “Google Strikes an Upbeat Note with FCC on Title II,” Wall Street Journal (Dec. 31, 2014), https://perma.cc/W9HZ-ZPJW.

(97.) For one prominent proposal, see Oren Bracha & Frank Pasquale, “Federal Search Commission? Access, Fairness, and Accountability in the Law of Search,” Cornell Law Review 93 no. 6 (2008): 1149–210; for criticisms, see James Grimmelmann, “Don’t Censor Search,” Yale Law Journal Pocket Part 117 (2007): 48, https://perma.cc/DK8F-WXVQ; Berin Szoka, “First Amendment Protection of Search Algorithms as Editorial Discretion,” Technology Liberation Front (June 4, 2009), https://perma.cc/3U8U-NPJZ; Christopher S. Yoo, “Free Speech and the Myth of the Internet as an Unintermediated Experience,” George Washington Law Review 78 no. 4 (2010): 697–773.

(98.) For the short-lived privacy rules, see Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, 81 Fed. Reg. 87,274 (Dec. 2, 2016), rescinded by S.J. Res. 34, Pub. L. 115-22, 131 Stat. 88, 115th Cong. (2017). For the litigation challenging the FTC’s authority, see Federal Trade Commission v. AT&T Mobility LLC, 883 F.3d 848 (9th Cir. 2018) (en banc). For the challenge to the net neutrality rules, see United States Telecom Ass’n v. Federal Commc’ns Commission, 825 F.3d 674 (D.C. Cir. 2016), cert. denied, 139 S. Ct. 475 (2018).

(99.) Rory Van Loo, “Making Innovation More Competitive: The Case of Fintech,” UCLA Law Review 65 no. 1 (2018): 232–79.

(101.) Farber & O’Connell, “The Lost World of Administrative Law,” 1155–60; Jody Freeman & Jim Rossi, “Agency Coordination in Shared Regulatory Space,” Harvard Law Review 125 no. 5 (2012): 1131–211; Abbe R. Gluck, Anne Joseph O’Connell, & Rosa Po, “Unorthodox Lawmaking, Unorthodox Rulemaking,” Columbia Law Review 115 no. 7 (2015): 1789–866; Daphna Renan, “Pooling Powers,” Columbia Law Review 115 no. 2 (2015): 211–91.

(102.) European Commission, Directorate-General for Communications Networks, Content and Technology, https://perma.cc/64BH-59SY (last visited Apr. 4, 2019).

(103.) Promising starts include Kenneth A. Bamberger & Orly Lobel, “Platform Market Power,” Berkeley Technology Law Journal 32 no. 3 (2017): 1051–92; Joshua A. Kroll et al., “Accountable Algorithms,” University of Pennsylvania Law Review 165 no. 2 (2017): 633–705; Paul Ohm, “Regulating at Scale,” Georgetown Law Technology Review 2 no. 2 (2018): 546–56, https://perma.cc/W4Y3-C4NU; Rory Van Loo, “Rise of the Digital Regulator,” Duke Law Journal 66 no. 6 (2017): 1267–330; Rory Van Loo, “Digital Market Perfection,” Michigan Law Review 117 no. 5 (2019): 815–83.