Negotiating Internet Governance

Roxana Radu

Print publication date: 2019

Print ISBN-13: 9780198833079

Published to Oxford Scholarship Online: April 2019

DOI: 10.1093/oso/9780198833079.001.0001


4 Privatization and Globalization of the Internet

DOI: 10.1093/oso/9780198833079.003.0004

Abstract and Keywords

This chapter delves into the salient role of corporate actors in Internet policymaking during the decade of privatization and globalization of the Internet. Market dynamics drove the development of the field, and the digital economy shifted attention to the potential of the network as understood in neoliberal terms. From the mid-1990s to the mid-2000s, three major shifts occurred in Internet governance arrangements: they grew in size, scale, and scope. A number of rules for the technical management of the network were defined during this period, and the bodies in charge consolidated their institutional structure. The emergence of political contestation also dates back to this period, when the positions of developing countries on key Internet governance issues started to consolidate.

Keywords: internet governance, policymaking, privatization, globalization, technical bodies, ICANN, markets, private actors, multi-stakeholder

The ‘Internet governance’ discussions formally started around 1995, with the commercialization of the network that grew out of the National Science Foundation Network (NSFNET) (Sylvan 2014). Two key advancements dating back to this period underpin the current functioning of the Internet: the development of the infrastructure, on the one hand, and of web applications and dotcoms on the other. Multiple for-profit Internet backbone and access providers (e.g. the dial-up systems of CompuServe, America Online, Prodigy) emerged when the Internet was privatized. At first, Internet Service Providers (ISPs) preferred connections to the backbone networks in the United States, where more services and content were available. In many cases, such connections also meant charges lower than those asked by the telecommunication monopolies in many countries. As the number of providers grew, options for connection diversified and a variety of services became available at the regional and local levels, though unevenly spread across continents. The large amount of user-generated content, which set the Internet apart from other communications media, also became a competitive advantage. This fostered the emergence of an online market for domain names, a highly contentious issue for global regulation.

Mirroring the exponential growth of the network, the scale of Internet operations expanded at an unprecedented pace during the mid-1990s; international connections became the norm, rather than the exception. The driver of innovation throughout this period was the so-called ‘knowledge-creating company’ (Nonaka and Takeuchi 1995), which used the production of knowledge, including user-generated content, to develop or improve products and services. Aided by a permissive regulatory approach, advancements in website development, and the e-commerce boom, the Internet became an engine of economic growth in most developed countries in the 1990s.

This chapter investigates the privatization trends, the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) as a core institution for the technical management of the Internet, and the expansion of the global online market. It analyses the distinctive features of governance arrangements from the mid-1990s to the early 2000s, which epitomize ‘the search for governance models coherent with the neoliberal globalisation process’ (Chenou 2014, 338). Here, I analyse the emergence of what is sometimes referred to as the ‘private governance of the Internet’, growing from the national level to a global scale. Applying the theoretical framework presented in Chapter 2, the articulation of governance during this period is deconstructed empirically to reveal the tensions of regulation and institutionalization.

A Global Internet (Fairy) Tale: Market Emergence

Despite its importance, a critical history of the commercialization of the Internet is a rare occurrence in Internet governance (IG). The expansion of the Internet and the subsequent crisis of the late 1990s left an imprint: beyond shaking the enthusiasm of the early investors, it also shaped the very basics of the market and the modern structures of online advertising. Amid regulatory disputes, business practices became a dominant modality of governance, consonant with the classical definition of private authority as the ‘ability by non-state actors to cooperate across borders to establish rules and standards of behaviour accepted as legitimate by agents not involved in their definition’ (Noelke and Graz 2008, 2). Profit-seeking entities became co-creators of standards and norms and, in certain cases, held discretionary power for law enforcement, be it for criminal investigations or for the protection of intellectual property rights. The transformation of private intermediaries into law-enforcement institutions emerged as a reality early on, showing that the Internet space was hybrid by definition. Three main developments marked this period: the expansion of the infrastructure, the dotcom market, and the advent of web applications.

Infrastructure

The transition from a national network (with a few external links) to a global network of networks happened in a rather short time-span: from 1993 to 1995. The formal involvement of the private sector in the functioning of the network had started a few years earlier on the NSFNET. When the higher-speed upgrade was authorized in 1987, NSFNET delegated the task to a consortium known as Advanced Network and Services, led by Merit Network, Inc., IBM, and MCI. A number of commercial experiments, such as the MCI email service introduced by Vint Cerf, were tested and allowed on the NSFNET before the 1990s. When the Border Gateway Protocol was introduced in 1994, it improved the network architecture by facilitating external Internet routing (the movement of IP packets from sender to recipient across numerous autonomous networks) and decentralization. In reality, the Internet reached millions of homes before being placed on international agendas or discussed as a global issue. And, unlike the early Internet experiments, the globalization of the Internet no longer relied on state subsidies.
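To make the routing idea concrete, the sketch below shows, in Python, the core of a path-vector scheme of the kind the Border Gateway Protocol implements: networks exchange routes tagged with the list of autonomous systems (ASes) a route traverses, discard paths containing their own AS number to prevent loops, and prefer shorter AS paths. This is a minimal illustration written for this chapter, not the actual protocol; the AS numbers and prefix are invented examples.

    # Minimal path-vector routing sketch (illustration only, not real BGP).
    # An AS keeps, per destination prefix, the shortest advertised AS path
    # that does not already contain its own AS number (loop prevention).

    def receive_advertisement(routing_table, own_as, prefix, as_path):
        """Consider an advertised AS path for a prefix; keep it if best."""
        if own_as in as_path:
            return  # path loops back through us: discard it
        current = routing_table.get(prefix)
        if current is None or len(as_path) < len(current):
            routing_table[prefix] = as_path

    table = {}
    receive_advertisement(table, own_as=64500,
                          prefix="198.51.100.0/24", as_path=[64496, 64497])
    receive_advertisement(table, own_as=64500,
                          prefix="198.51.100.0/24", as_path=[64498])
    print(table)  # {'198.51.100.0/24': [64498]} -- the shorter path wins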

When the NSFNET was decommissioned in April 1995, the funds recovered from its dismantling were redistributed to regional networks to buy connectivity from the private providers that had started to mushroom. The transition to the commercial Internet was completed through the introduction of five NSF-designated and partially funded network access points. A few years later, the role of access points was minimized as the majority of ISPs started contracting directly with backbone and transit providers. Technical standardization, infrastructure, and interconnection remained concealed from the public eye for the largest part of the Internet's evolution, yet their profile on the political agenda rose when the Internet became global.

The shift from government-owned to privately owned infrastructure was overseen by Steve Wolff of the NSF, who retrospectively assessed that the transition had to be conducted in a coordinated fashion to maintain the network as a single Internet, rather than multiple separate networks. He also reiterated the commitment of the NSF to ensuring that the academic community would not remain aloof. In the words of Wolff:

There had to be commercial activity to help support networking, to help build volume on the network. That would get the cost down for everybody, including the academic community, which is what NSF was supposed to be doing. (NSF n.d.)

In 1995, responding to a solicitation made in 1993, NSF awarded contracts for three network access points to serve as links to commercial networks and for one routing arbiter for the exchange of traffic. It also signed a cooperative agreement for a new-generation Backbone Network Service, decommissioning NSFNET. But a global Internet required much more.

To begin with, the physical connection with other parts of the world needed to be improved; this expansion process was led by the private sector. The underlying infrastructure on which the Internet was growing rapidly relied on submarine cables, an interconnected network originally used for telegraphy. The first of the transatlantic cables, carrying telegraphic correspondence between the United States and the United Kingdom, was laid in 1858 by the Atlantic Telegraph Company. Ever since, underwater cables have evolved from electromagnetic copper cables to fibre optic to support the faster transmission of messages. Historically, the telegraph cables were owned by the operators. Later on, the consortium model dominated, splitting the cost of submarine cables among telecommunication operators while also sharing the risks and reducing competition. In the early 1990s, a strong push for adopting a common approach to the global Internet infrastructure came from the Telecommunication Standardization Sector of the International Telecommunication Union (ITU). The body has played an important role in drafting recommendations and guidelines for undersea cable development and deployment ever since. Earlier, in 1958, the International Cable Protection Committee (ICPC) had been established to serve as the undersea cable operators' association, providing representation and leadership in international policy processes. The ICPC also acted as a forum for information exchange, technical expertise, and legal and environmental advice for its members.

Submarine connections currently carry 99 per cent of transoceanic digital communication and represent a billion-dollar business (Chesnoy 2015; Starosielski 2015). The fastest growth of this industry was registered at the end of the 1990s during the dotcom bubble. Part of what made the latter possible was the expansion of the infrastructure to Europe, Asia, and Latin America. The cables circling the African continent were laid down later, the first three being placed between 2000 and 2004.

Over the years, three tiers of networks developed, differentiated according to whether they provided bandwidth (tiers 1 and 2) or Internet services (tier 3). Tier 1 networks, such as AT&T, Verizon, or NTT Communications, operate on the basis of peering agreements, generally covered by non-disclosure clauses. There are no overt settlements among tier 1 networks, meaning they do not charge one another for traffic sent over their networks. Tier 2 networks pay transit fees to tier 1 providers, while tier 3 ISPs pay tier 2 networks. Speaking to the high degree of informality that persists in the infrastructure layer, the largest part of these agreements was not, and is not to the present day, recorded in writing. A Packet Clearing House study from 2010 to 2011 provides evidence that of 142,210 interconnection and peering agreements (representing approximately 86 per cent of all Internet carriers at the time), only 0.49 per cent were written.
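The settlement logic described above can be summarized in a few lines of code. The sketch below is an invented, simplified model of tiered interconnection: networks of equal tier peer settlement-free, while a lower-tier network pays transit fees to the higher-tier one. The network names and tier assignments are hypothetical.

    # Simplified model of tiered interconnection settlements (illustrative).
    TIER = {"BackboneA": 1, "BackboneB": 1, "RegionalNet": 2, "LocalISP": 3}

    def settlement(a, b):
        """Describe who pays whom when two networks interconnect."""
        if TIER[a] == TIER[b]:
            return f"{a} and {b} peer settlement-free"
        payer, provider = (a, b) if TIER[a] > TIER[b] else (b, a)
        return f"{payer} pays transit fees to {provider}"

    print(settlement("BackboneA", "BackboneB"))   # peer settlement-free
    print(settlement("LocalISP", "RegionalNet"))  # LocalISP pays RegionalNet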

Domain Name Registrations

The most contentious issue in the 1990s was the creation of a market for domain names. The commercial impact of domain name system (DNS) registrations changed the outlook of the decade, with the emergence of companies whose activities and profits derived directly from the Internet, the so-called dotcoms. In the shaping of the nascent market, the NSF (under the leadership of Steve Wolff) continued to play a steering role. The Department of Defense (DoD) subsidies for domain registrations ended in the early 1990s, and responsibility for non-military domains was transferred to NSF, which continued to subsidize the civilian Internet and awarded, in 1993, a five-year contract for managing it to Network Solutions, Inc. (NSI), at the time supporting around 7,500 domains. It was the time of the ‘information superhighway’, and the potential of the Internet was just beginning to be explored.

A 1994 National Research Council report, ‘Realizing the information future: the Internet and beyond’, commissioned by NSF and prepared by a team chaired by Kleinrock, articulated the benefits of this evolution, but also pointed to a number of issues that became heatedly debated throughout the decade: intellectual property rights, regulation for the Internet, pricing, education, and ethics. After evaluating the performance of the InterNIC contractors through an expert panel, NSF followed the experts' recommendation and authorized NSI to begin charging for .com domain name registrations. In 1995, the NSF ‘acceptable use policy’ was ended, and NSI was authorized to charge annual fees for generic Top-Level Domain names (such as .com and .net) until 1998. In this period, NSI's profits increased from $5 million to approximately $94 million.

Initial registration cost $100 for the first two years (the minimum registration period), with a subsequent annual renewal fee of $50. The informal allocation of domains, rooted in the interactions of the small technical and academic network at the time of the DNS expansion, was perpetuated by NSI, which also assigned domains on a first come, first served basis.
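As a quick illustration of that fee schedule, the sketch below computes the total cost of holding a domain for a given number of years under the pricing just described; the function is invented for this example.

    # Cost of holding a domain under the 1995 NSI fee schedule described above:
    # $100 for the initial two-year registration, then $50 per renewal year.
    def registration_cost(years):
        assert years >= 2, "two years was the minimum registration period"
        return 100 + 50 * (years - 2)

    print(registration_cost(2))  # 100
    print(registration_cost(5))  # 250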

The NSF continued to subsidize the .edu registrations as well as, for a limited time, the .gov domains. The steep increase in registrations compelled InterNIC to adopt automatic processing and to no longer distinguish between different types of registrants for .com, .net, and .org (Mueller 2004, 112). Unlike manual processing, no reviews were performed under the new model, making the InterNIC domain registrations more popular than country-code domains, which were generally more restrictively allocated. Notably, by July 1996, the number of registrations under InterNIC was 3.96 million, compared to only 1.52 million for the seven largest country domains combined: the United Kingdom, Japan, Germany, Australia, Canada, the Netherlands, and France (Mueller 2004, 114). The interest in .com second-level domain names skyrocketed from 200 applications per month in January 1993 to more than 30,000 by late 1995, and to more than 200,000 by January 1998. For total registrations, that meant an increase from fewer than 15,000 .com second-level domains in 1992 to approximately one million in January 1995 and over 8 million by 1998 (Post and Kehl 2015). This high number of registrations drove the Internet browser developers to make .com the default (Mueller 2004) and, in turn, increased the value of the domain.

This gave rise to the formation of the so-called ‘dotcom bubble’, which burst around 2001. What became widespread during the 1990s was the practice of cybersquatting: the registration, trafficking in, or use of an Internet domain name with bad-faith intent to profit from a trademark belonging to someone else. Intellectual property disputes became a concern, bringing into sharper focus the DNS and the authority over it. In reaction to trademark lawsuits, NSI issued a Domain Dispute Resolution Policy Statement in July 1995 for InterNIC-operated domains, serving two purposes. The first was to make public the fact that it ‘had neither the legal resources nor the legal obligation to screen requested Domain Names to determine if the use of a Domain Name by an Applicant may infringe upon the right(s) of a third party’ (NSI 1995, § 1). The second purpose was to impose an obligation on registrants to certify that the proposed registration did not infringe or interfere with trademarks or intellectual property, and that there was bona fide intention in the use of the name. NSI reserved the right to withdraw or transfer a domain name if a court or an arbitration panel so decided. Under this policy, there was a massive increase in dispute resolution cases, going from around 200 in 1995 to 907 in 1997; by mid-1998, the number of cases already exceeded 750. The creation of ICANN later that year, which addressed this problem, is discussed further in this chapter.

Throughout this period, NSI had physical possession of the ‘A’ root, while Jon Postel (under contract with DoD, still physically based at the University of Southern California) continued to have policy authority over top-level domain name approval and allocation (Weinberg 2011). In 1995, the control of the main research backbone was entrusted to MCI Communications, operating alongside other commercial, academic, and non-profit networks.

Web Applications, Information Intermediaries, and E-commerce

Following the US administration's decision to allow commercial Internet activities, the network became the largest global market; the development of web applications facilitated the e-commerce boom and raised the political profile of the Internet. It was the time of the ‘information superhighway’, as the Clinton-Gore electoral campaign highlighted. In 1993, the WWW, publicly released two years before, was popularized through the broad adoption of the Mosaic browser, co-programmed by Marc Andreessen. Email and file sharing, video and audio streaming, web pages, voice telephony, and interactive multi-player games all became profitable areas of investment. To a large extent, this market boom tapped into the potential of the Internet architecture itself, with its separation of transport and application layers, as the Internet protocol remained indifferent to the content of the packets it carried.

Yahoo.com, eBay.com, and msn.com were all launched in 1995. Amazon, created one year earlier, started as an online bookstore, but soon diversified its offering into software, video, and music downloads, as well as commerce in tangible goods. Alongside the giants of the day, many businesses opening up online promised overnight success and attracted investors effortlessly. Cassidy (2002) revealed that the myth of ‘companies started in a garage’ was so strong at the time that Jeff Bezos, the founder of Amazon, rented a house with a garage in Seattle to keep it alive. In 1995, Netscape Navigator was launched featuring the Secure Sockets Layer (SSL), a protocol for encrypting information, used primarily for transactions. Microsoft introduced Internet Explorer at the end of that year.

Online advertisements also contributed to shaping this virtual market. The first online ad, dating back to 1994, was an art museum ‘banner ad’ sponsored by AT&T which appeared on HotWired.com. This gave rise to one of the most successful business models of the Internet era, also known as ‘third-party advertising’, based on indexing content and increasingly targeted marketing.

Information intermediation, closely linked to operations such as profiling, transactions, or advertising, offered more than what the infrastructure providers could promise, namely carrying information from point A to point B. The selection, ranking, aggregation, and sharing of content created by others (generally users) proved to be extremely profitable. Google, currently operating the most successful search engine worldwide, was formally registered in 1998, with the mission to ‘index the world's information’. This fast-expanding company originally used Stanford University's website, with the domain google.stanford.edu, for the development of the search engine, before being incorporated as a company in a garage in Menlo Park, California. In 2000, Google launched AdWords, a system for selling search advertising with real-time auctions for keywords.
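The auction mechanism mentioned here can be illustrated with a generic second-price keyword auction, sketched below. This is a textbook simplification, not Google's actual system (whose rules evolved over time and came to weight bids by ad quality); the bidder names and amounts are invented.

    # Generic second-price keyword auction (illustration, not AdWords itself).
    # The highest bidder wins the ad slot but pays just above the second bid.
    def run_auction(bids):
        """bids: dict of advertiser -> bid per click; returns (winner, price)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] + 0.01 if len(ranked) > 1 else ranked[0][1]
        return winner, round(price, 2)

    print(run_auction({"alpha-ads": 1.20, "beta-shop": 0.90, "gamma-inc": 0.75}))
    # ('alpha-ads', 0.91)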

Online social networks began developing in 1997, when Six Degrees facilitated contact with former schoolmates. They were inspired by the user-created Usenet discussion and bulletin boards of the 1980s. Friendster, launched in 2002, and MySpace, launched a year later, were among the first to target primarily young people and became more widely used as Internet penetration rates increased. Facebook was released in 2004, and it was built on the original Facemash platform that Mark Zuckerberg designed in 2003 for his fellow Harvard students. The development of social networking sites remained largely outside regulatory purview during their first years of operation, as they were encouraged to self-regulate.

The dotcom bubble was built on enthusiastic confidence and stock speculation for the extra profits sought by an increasing number of Internet-based companies (known as dotcoms) and venture capital investment firms. California's Silicon Valley and New York's financial district were at the centre of it. In the crash, many companies went bankrupt, while others suffered huge losses: the case of Amazon.com is emblematic, with shares dropping from $107 to $7 before starting to recover steadily. Relying on a ‘first mover advantage’ in a new market, company managers predominantly pursued fast business development strategies that required substantial financial backing. This mostly came via risk investment, as nearly 80 per cent of all venture capital resources went to Internet companies in 1999 and 2000. Investment grew from about $7 billion in 1995 to nearly $100 billion in 2000, dropping to less than $40 billion per year for the next decade (Zook 2008).

What remained unchanged before and after the dotcom bubble was the relatively strong position of information intermediaries. With data as their main asset, they started to set the technical constraints and guidelines for social behaviour online, defining the codes of conduct for activities ranging from defamation to cyberbullying and obscenity. The terms of service, the equivalent of the Acceptable Use Policy in the early days of the NSFNET, were commonly used to define the conduct of users. The permissive regulatory environment in the United States led to a concentration of key players in Silicon Valley, a region of California that became a leading innovation hub following the invention of the microprocessor and the microcomputer in the 1970s.

Regulatory Framework

The globalization of the commercial Internet around the mid-1990s coincided with the spread of hybrid governance, involving both the public and the private sector, often with blurry delimitations of their functions and attributions. Replacing the 1934 Communications Act, the Telecommunications Act signed into law by President Clinton in 1996 defined the regulatory regime for services using the same underlying infrastructure. It separated voice telephone services and cable television from information services, which remained value-added services (comprising services offering a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications). The law separated ‘telecommunications carriers’ from Internet service carriers, placing broadcasting and spectrum allotment for the Internet under a different regime. The first broadband service, starting at 256 Kbps, was introduced in the United States shortly after by @Home.

The Telecommunications Act of 1996 also included a controversial section: the Communications Decency Act (CDA) criminalized the knowing transmission of ‘obscene or indecent’ messages to any recipient under 18, as well as knowingly sending to a person under 18 anything ‘that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs’. That section of the CDA was declared unconstitutional on freedom of expression grounds in the first major Supreme Court ruling on the regulation of materials distributed online, Reno v. American Civil Liberties Union (1997), but that did not reduce the push to regulate online indecency.

Importantly, the CDA also introduced, in section 230, one of the most important provisions for information intermediaries in the history of the Internet, namely protection from liability for the online actions of their users. Accordingly, section 230, still in force today, states that ‘no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’. With only limited exceptions for criminal and intellectual property-based claims, this regime for the protection of intermediaries represented a cornerstone for private governing, endowing the intermediaries with rule-setting power after a long legal dispute.

Being the first country to introduce protections against liability for online platforms, the United States established itself as a safe haven for Internet services, attracting the majority of providers. An outcome of this favourable legal environment was the growth of Silicon Valley into a prominent hub for high-tech innovation. Moreover, at the international level, the CDA triggered John Perry Barlow's famous Declaration of the Independence of Cyberspace, written in Davos, Switzerland, hailing a space in which governments would have no role to play:

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions. (Barlow 1996)

The decade starting around the time Barlow wrote his declaration was marked by strong privatization tendencies, with governmental intervention providing the space for industry self-regulation in the ‘shadow of hierarchy’. The deregulatory agenda of the Clinton administration was advanced with the introduction of the ‘Framework for global electronic commerce’ in 1997, which proposed a strategy for promoting global commerce via the Internet and empowered the Department of Commerce to be the lead agency on this initiative. According to the document, ‘governments must adopt a non-regulatory, market-oriented approach to electronic commerce’ based on a decentralized, contractual model of law in order for the Internet to deliver its full economic potential (White House 1997). The first principle of that roadmap, that ‘the private sector should lead’, set the standard for the globalization of the Internet. It envisioned governmental oversight applicable only to nine areas of regulation, including customs and taxation, electronic payments, intellectual property protection, privacy, security, and technical standards.

Not only did the US government delegate a number of functions to businesses, it also actively shaped the future global market. The value and potential of the Internet were widely recognized when the Fourth Protocol to the General Agreement on Trade in Services entered into force in 1998. This Protocol spurred the liberalization of telecom markets through the privatization of national monopolies, the promotion of competitive services, and the establishment of national regulators.

New Rules under Construction

The framework for global electronic commerce was accompanied by a Presidential directive calling on the Department of Commerce to ‘support efforts to make the governance of the domain name system private and competitive and to create a contractually based self-regulatory regime that deals with potential conflicts between domain name usage and trademark laws on a global basis’ (Smith 2002). In response, the Department issued a Request for Comments on domain name administration, to which it received more than 650 comments.

What prefaced this move by the US administration was a heated debate around the creation of new top-level domains (TLDs). The matter was approached by the technical community on a dedicated mailing list, newdom, created in September 1995. Following discussions on this list, an Internet draft was issued, ‘New Registries and the Delegation of International Top-Level Domains’, widely known as ‘draft-postel’, which proposed the introduction of 150 new top-level domain names, multiple registries for .com and other domains, and the chartering of the Internet Assigned Numbers Authority (IANA) by the Internet Society (ISOC). Proposals for alternative TLDs were also popular at the time, backed by entities like AlterNIC, Iperdrome, or pgmedia, which intended to create and sell new TLDs such as .web, .arts, or .xxx. While technically feasible, these domain names were not authorized by IANA to be added to the root, and NSI refused to make them visible, meaning that they had minimal perceived value for potential buyers.

Most importantly, the draft-postel grouping faced the coalition led by the incumbent, NSI, which gathered support around a competing proposal, the International Forum for the White Paper (IFWP). NSI had been in charge of administering additions and deletions to the authoritative root server database since 1993, in accordance with the cooperative agreement it had signed first with the NSF, and later with the Department of Commerce (DoC). In 1998, the IFWP held public meetings in Reston (Virginia), Geneva, Singapore, and Buenos Aires. A smaller successor group to the IFWP, referred to as the Boston Working Group, put forward a proposal to the National Telecommunications and Information Administration (NTIA).

At the time, private sector consensus was also shaped in the framework of a different platform, the Global Internet Project (GIP), formed in 1996. The GIP was less vocal, but brought together a highly influential group of senior executives from sixteen Internet and e-commerce companies (including MCI WorldCom and IBM). It served as an advisory committee to the World Information Technology and Services Alliance (WITSA), a consortium of ICT industry associations worldwide. In 1998, the GIP was led by IBM's vice-president for Internet technology, John Patrick, and started working on IG by providing strategic direction for corporate interests, which would subsequently be implemented by the Information Technology Association of America, a Washington-based business lobby group hosting the secretariat of WITSA. The primary goal of the GIP was, in its own phrasing, ‘not to shape government regulation, but instead promote industry actions that will minimize the need for such regulation’. Alongside the GIP, the International Chamber of Commerce also contributed to the White Paper debates, presenting an Internet economy perspective.

In response to the harsh critiques of the Postel plan, ISOC announced, in mid-November 1996, the formation of the International Ad Hoc Committee (IAHC). The IAHC was chaired by ISOC's Don Heath and comprised representatives of standard-setting organizations such as the Internet Engineering Task Force (IETF), intellectual property organizations such as the World Intellectual Property Organization (WIPO), as well as the International Trademark Association (INTA) and civil society and business representatives. Joining them was also a delegate from the NSF. The proposal they came up with differed from the Postel plan in three regards. First, it projected the addition of seven new generic TLDs to the DNS, as opposed to the 150 suggested before. Second, it introduced the functional differentiation of the registry and registrar functions, which did not exist in the NSI model. The registry function referred to the collection, storage, and maintenance of data, alongside the operation of name servers that provided updated authoritative lists of domains. The registrar performed the retail function, managing the reservation of domain names (if not already taken).
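The registry/registrar split can be made concrete with a short sketch. The code below is a deliberately simplified model written for this chapter, not a description of real registry systems (which later standardized on shared registration protocols such as EPP); the class and variable names are invented. It shows competing registrars selling names out of a single authoritative registry database.

    # Simplified model of the registry/registrar functional split (illustrative).
    class Registry:
        """Wholesale function: the authoritative store of registered names."""
        def __init__(self, tld):
            self.tld, self.names = tld, {}
        def register(self, label, registrant):
            if label in self.names:
                return False  # name already taken
            self.names[label] = registrant
            return True

    class Registrar:
        """Retail function: takes customer orders, forwards them to the registry."""
        def __init__(self, registry):
            self.registry = registry
        def order(self, label, customer):
            ok = self.registry.register(label, customer)
            return f"{label}.{self.registry.tld}: " + ("registered" if ok else "taken")

    com = Registry("com")
    r1, r2 = Registrar(com), Registrar(com)  # competing registrars, one registry
    print(r1.order("example", "alice"))  # example.com: registered
    print(r2.order("example", "bob"))    # example.com: taken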

To manage the new governance system, the IAHC plan envisioned a global monopoly registry co-owned by multiple, competing registrars sharing access to the same TLDs. The total number of registrars was limited to twenty-eight companies selected by lottery, four for each of the seven geographical regions. Registrars would be incorporated in a not-for-profit Council of Registrars (CORE) headquartered in Geneva, Switzerland, upon an entry fee of $20,000 and a monthly payment of $2,000. For the stewardship of the domain-name space, a Policy Oversight Committee would be formed, grouping representatives of ISOC, IANA, the Internet Architecture Board (IAB), and CORE, but also the ITU, WIPO, and INTA (similar to the composition of the IAHC), supported by a Policy Advisory Board that could be joined by any signatory of this governance framework. The third key difference was the introduction of a complex mandatory arbitration system centred on domain name challenge panels placed under WIPO's Arbitration and Mediation Center. The IAHC later drafted the Generic Top-Level Domain Memorandum of Understanding (gTLD-MoU).

Glaringly, the issue of trademarks in Internet domain names had been neglected by the technical community despite its increasing commercial and political salience at the time. In contrast, the IAHC attempt to design a system that took the varied interests into account garnered more support, as did the NSI initiative bringing together private stakeholders under the IFWP. Both had a multi-stakeholder composition: the IAHC gathered representatives from intergovernmental organizations such as the ITU and WIPO, from the US NSF, trademark interest representatives, and intellectual property owners, whereas the IFWP, steered by NSI, involved the private sector and members of academia.

The most significant attempt to internationalize the debates and to include civil society actors, the IFWP, was bypassed in the final stages of the creation of ICANN. In March 1997, the gTLD-MoU, drafted by the IAHC and building on support from trademark owners, was signed by Heath and Postel in an official ceremony at the ITU in Geneva. In spite of the controversies it stirred, the document garnered the support of over 223 public and private organizations in the months after its announcement, in particular from entities outside the United States. The ITU Secretary-General presented the MoU as a form of ‘voluntary multilateralism’ (Tarjanne 1997) and his institution offered to serve as the official repository of the MoU. Opposing the MoU were US multinationals such as IBM and AT&T, the incumbent NSI, but also European companies such as British Telecom, trademark holders, and those concerned with a more prominent role of the ITU in Internet matters, including the US government.

At the time, more than half of Internet users were based in the United States. A proposal for creating a new governance system outside the United States prompted the formation of the Interagency Working Group in the US government. As the MoU was gaining momentum, Secretary of State Madeleine Albright confirmed, in a cable sent to the US mission in Geneva on 1 May 1997, that the US government ‘has not yet developed a position on any of the proposals to reform the Internet domain name system, including the gTLD-MoU, nor on the appropriate role, if any, of the ITU, WIPO, or other international organisations in the administration of the Internet’ (cited in Mueller 2004, 157). The day after, the Interagency Working Group announced that it opposed the plan. On 2 July 1997, the DoC issued a Request for Comments on DNS administration, soliciting public comments on four specific aspects: the overall framework, the creation of new TLDs, policies for domain name registrars, and trademark issues. More than 430 comments, totalling over 1,500 pages, were received during the comment period. This astonishing interest in the direction the DNS privatization would take is indicative of the high stakes in the process and the growing awareness around it.

Before the issue was settled, Jon Postel demonstrated his authority over the root through a redirection of the root server from NSI to IANA at the beginning of 1998. This was done by eight of the twelve operators of the Internet's regional root nameservers; the remaining four were under the direct control of the US government. Although the transfer did not affect the experience of users, Postel was forced by senior US officials to restore the management of the DNS and complied with the request. His action also prompted a formal response from the NTIA, which issued ‘A proposal to improve technical management of Internet names and addresses’ the week after, on 30 January 1998. The proposed rulemaking, or ‘Green Paper’, was published in the Federal Register on 20 February, and invited comments on the outline of the process by which the US government planned to privatize the DNS. Among the rationales listed were the need to move away from ad hoc decision-making by individuals and entities not formally accountable to the Internet community, and the inadequacy of continued funding from US research agencies (NSF and the Defense Advanced Research Projects Agency (DARPA)) for an increasingly commercial Internet. In the process, the DoC would coordinate the US government's policy role.

On 10 June 1998, the DoC announced in a ‘White Paper’ its readiness to sign an agreement with a new non-profit corporation formed by private sector Internet stakeholders. The policy statement mentioned that ‘overall policy guidance and control of the [TLDs] and the Internet root server system should be vested in a single organization that is representative of Internet users around the globe’ (NTIA 1998). The organization would be constituted based on the four principles put forward in the public consultation: Internet stability, competition, private bottom-up coordination, and global representation.

Postel continued to play a key role in the process. He had appointed the IAHC members and, together with IBM's Brian Carpenter (active in the GIP under IBM's leadership and chair of the IAB), also appointed an IANA Transition Advisory Group (ITAG) in 1998. With five of its six members holding for-profit affiliations, the ITAG reflected the strong position of the private sector at the time and throughout the negotiations. The Postel-led group subsequently established ICANN in September 1998 as a private, California-based, not-for-profit corporation. In September 1998, the DoC signed an MoU with NSF for transferring the responsibilities performed by NSI to the DoC, which then amended the agreement to keep Network Solutions as the operator of the authoritative root server, this time under the direction of the DoC (GAO 2000).

That month, ICANN was legally incorporated and indicated to the DoC that it would submit a bid in response to the policy statement. In October, the DoC signed three agreements with ICANN: (a) an MoU for a joint DNS project, (b) a cooperative research and development agreement, and (c) a sole-source contract for technical functions relating to the coordination of the DNS (GAO 2000). The four months that passed between the White Paper and the allocation of the decision to ICANN were marked by intense debates. The innovative approach adopted by the US administration was explicit in the DoC NTIA prioritization of ‘private sector leadership’ and ‘industry self-regulation’. According to Stuart Lynn, president and CEO of the Corporation between 2001 and 2003, ICANN was to serve as an alternative to the traditional multilateral treaty model pre-dating the Internet:

I have come to the conclusion that the original concept of a purely private sector body, based on consensus and consent, had been shown to be impractical … But I am also convinced that, for a resource as changeable and dynamic as the Internet, a traditional governmental approach as an alternative to ICANN remains a bad idea. (Lynn 2002)

At the end of 1998, ICANN was granted the authority to set policy for and manage the DNS, as well as the allocation and assignment of Internet Protocol (IP) addresses. It operated on a contractual basis, primarily with registries (in charge of operating and administering the master database of each TLD name registered) and accredited registrars (the companies or organizations from which consumers buy domain names). The delegation of authority from the US government was, however, not complete. The DoC retained ‘residual authority’ (Mueller 2004) over the DNS root via the IANA functions for longer than the two-year period originally envisioned, and this continued to be the focus of debates for the next two decades. The voice of developing countries was little heard in these initial negotiations establishing the governance framework for the technical administration of domain names and the related market formation.

The Creation of ICANN

Both the White Paper and the initial Board of ICANN presented the new organization as a ‘technical coordinator’. The narrowly delimited mandate would serve a legitimating purpose in the subsequent conflicts around the functions to be performed in the new context. The policymaking process within ICANN and the extensive influence of the US government remained a Gordian knot over the years. In ICANN's phrasing, ‘policy’ refers to guidelines for making technical decisions, for instance the way in which parameters for protocols are assigned, or the conditions under which IP address blocks are allocated or country-code top-level domains (cc-TLDs) are redelegated. Similarly, policies could concern the creation of new TLDs and related specifications, as discussed in the key proposals preceding the establishment of ICANN.

The 1998 White Paper made reference to bottom-up governance as a characteristic of Internet development and specified that ‘the US government policy applies only to management of Internet names and addresses and does not set out a system of Internet “governance” ’. In a written exchange with consumer advocates back in 1999, Esther Dyson, Chair of the ICANN Board of Directors, made the following statement:

The White Paper articulates no Internet governance role for ICANN, and the Initial Board shares that (negative) view. Therefore, ICANN does not ‘aspire to address’ any Internet governance issues; in effect, it governs the plumbing, not the people. It has a very limited mandate to administer certain (largely technical) aspects of the Internet infrastructure in general and the Domain Name System in particular. (Dyson 1999)

However, for almost a decade after, Internet governance was synonymous with the management of technical resources and the work of ICANN (Mueller 2004; Chenou and Radu 2015). The attention it attracted was mostly due to the legitimacy and accountability concerns around its new role. To overcome the tensions of a UN versus a US solution and to respond to the increasing number of cybersquatting cases, WIPO was invited to initiate a process for resolving domain name trademark disputes. Deeply involved in the consultations around the best institutional design for ICANN, WIPO also played a formal role in the creation of the organization, as envisioned in the NTIA 1998 White Paper. This early involvement was seen as a compromise for the Europeans and Australians, who sought to counterbalance the American oversight of ICANN (Mueller 1999, 505–6).

From the start, WIPO was represented in the Governmental Advisory Committee of ICANN, together with the ITU, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and fifty-nine national governments. Back in 1998 and 1999, the close relationship between WIPO and ICANN was built around creating new rights or expanding rights to names (Mueller 2010, 232). Early cooperation between the two organizations revealed fears regarding the use of pre-emptive regulation, for example through name exclusions (blocking the use of particular names or words). In this debate, WIPO originally asked for the marks of major trademark holders to be excluded from the first domain name process, but the proposal was turned down. Nonetheless, the other WIPO recommendation, the implementation of an alternative dispute resolution method for Internet cases, was accepted. This initiative, known as the Uniform Domain Name Dispute Resolution Policy (UDRP), became the ‘overwhelmingly preferred mechanism for domain name dispute resolution’ as early as 2001 (Sharrock 2001, 819).

The UDRP was stipulated in advance as a dispute resolution mechanism in all contracts involving the registration of gTLDs and some cc-TLDs. Uniquely, its arbitration awards were applied directly through changes in the DNS, without resorting to enforcement via national courts. Domain name disputes treated under this policy rarely reached the litigation phase. Binding all registrars, the UDRP stipulated that most types of trademark-based domain name disputes must be resolved through agreement, arbitration, or court action before a registrar would take action (cancelling, suspending, or transferring a domain name). In practice, a complainant must prove that (1) the disputed domain name is identical or confusingly similar to a trademark or a service mark in which the complainant has rights; (2) the registrant does not have any rights or legitimate interest in the domain name; and (3) the domain has been registered and is being used in bad faith. All disputes launched under the UDRP are submitted to independent panellists, primarily trademark lawyers selected from an expert list.
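Because the three elements are cumulative, a panel's threshold decision can be expressed as a simple conjunction. The sketch below is a toy checklist written for this chapter, not legal software; the field names are invented.

    # Toy checklist of the three cumulative UDRP elements described above.
    def udrp_elements_met(findings):
        """findings: dict of the three boolean determinations a panel makes."""
        return (findings["identical_or_confusingly_similar"]
                and not findings["registrant_rights_or_legitimate_interest"]
                and findings["registered_and_used_in_bad_faith"])

    complaint = {
        "identical_or_confusingly_similar": True,
        "registrant_rights_or_legitimate_interest": False,
        "registered_and_used_in_bad_faith": True,
    }
    print(udrp_elements_met(complaint))  # True: cancellation or transfer possible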

The creation of ICANN gave birth to multiple compromises, also visible in the initial structure of the organization. The IANA functions, previously performed by Jon Postel, remained separate in the ICANN structure. ICANN administered IANA under a contractual relationship with the NTIA, which continued to have an oversight role until 2016. The IANA functions have historically included: (1) the coordination of the assignment of technical IP parameters; (2) the administration of certain responsibilities associated with Internet DNS root zone management; (3) the allocation of Internet numbering resources; and (4) other services related to the management of the .arpa and .int TLDs. For IP addresses, as of 2003, IANA delegated the distribution of blocks of addresses to the Number Resource Organization (NRO).

Governments started being involved in ICANN in an advisory capacity via the Governmental Advisory Committee (GAC) as early as March 1999. The first GAC was chaired by Australia's Paul Twomey and comprised representatives of fifty-nine national governments and of the EU, OECD, ITU, and WIPO. At their first meeting in 1999, GAC members affirmed the status of the DNS as a ‘public resource’. Some governments subsequently started being involved via the country code supporting organizations, responsible for policies exclusively concerning national domains, whose ownership diversified over the years. The initial delegations made by Postel had relied on personal relations established in the technical community; the growth of the Internet and its increasing salience compelled a number of governments to ask for increased, and oftentimes formalized, control at the national level.

At the outset, ICANN comprised two constituencies, namely the Domain Name Supporting Organisation (DNSO) and the membership structure set to elect the nine At-Large board members. A ground-breaking attempt to enhance community participation was the 2000 global election for five At-Large Directors, one from each geographical region. It was the first time ICANN designed a process that would involve the global public by giving them a direct online vote in determining the governing structure. The At-Large Directors election was an attempt to implement broad inclusiveness, but only 30,000 out of 176,837 registered electors (anyone in the world could register with an email and postal address) cast a vote. The practice of global elections has since been dropped.

Recurrent questions of legitimacy and accountability undermined the work of ICANN over the years. In 2000, ICANN introduced a set of seven new TLDs, including specialized TLDs with a sponsor, which would generally be delegated a number of responsibilities over policy formulation for that defined group of interest. The newly introduced sponsored domains constituted a proof of concept for a larger expansion phase, started at the end of 2003 and completed in 2004, which brought eight new TLDs. As the DNS was expanding, an appropriate repartition of roles among stakeholders and a better inclusion of national governments seemed necessary. A report by ICANN president Stuart Lynn entitled ‘ICANN—The case for change’ (Lynn 2002) paved the way for a deep reform of the organization in order to strike a new balance between public and private governance and to gain more legitimacy at the global level. One of the main changes in the ICANN structure was the strengthening of the role of the GAC. Developing countries saw the GAC as the policy forum of IG, while the other bodies of the organization were to take charge of technical matters. The result of the reform process was embodied in the new by-laws of the corporation that entered into force on 15 December 2002. The resulting structure has been described as ‘ICANN 2.0’ and as a ‘public-private partnership’ (Kleinwaechter 2003). The reform also triggered the creation of an At-Large Advisory Committee designed to include the ‘community of individual Internet users’ and to strengthen the participation of civil society in the daily operations of the organization.

The reform was oriented towards enhancing the legitimacy of ICANN through a structured participation of affected parties in the policymaking process. To achieve that, ICANN divided participants based on the constituency they belonged to (ISPs, intellectual property owners, and commercial business users; non-commercial users, not-for-profits, registrars, and registries), the nature of their interests (commercial or non-commercial), and the function they intended to perform: supporting organizations for the recommendation of specific policies, or advisory groups to the Board of Directors on any ICANN-related issue. As these changes indicated, the ICANN stakes were not strictly market-related. In the construction of a new system of rules, the management of key Internet resources was a source of tension because of the power positions it allocated. The historical control of the Internet's name and address space was important not only for the technical community seeking to preserve its independence, but also for trademark owners looking to enforce claims about exclusive rights. Similarly, governments, the computer industry, and civil society groups expressed strong interest in ICANN policymaking processes and demanded a seat at the table.

Following the restructuring mentioned above, the business community and civil society organizations were offered two ways to participate in the ICANN policy development process: either through the Generic Names Supporting Organisation (GNSO), which made policy recommendations specifically related to gTLDs, or through the At-Large community, which advised the Board. Non-commercial interests subsequently found a home in the Non-Commercial Stakeholder Group (NCSG) of the GNSO, advancing policy objectives such as human rights, education, access to knowledge, freedom of expression, and privacy rights. End-users were invited to participate in ICANN in an individual capacity via the At-Large structures.

ICANN Negotiations: Political Stakes

The proposals for governing domain names prior to the creation of ICANN embedded the tenets of the time: the free-market, open competition ideology (the Postel plan) faced the public resource discourse, be it under a form of public-private oversight (IAHC) or under industry self-regulation (MoU). In the plan designed by Postel, the governance of the Internet was strictly understood as the technical allocation of unique domain names and IP addresses. Other aspects, mentioned in the 1996 draft-postel abstract, were bound to be ‘determined, and coordinated, by contractual agreements between private interests’. Tough reactions from intellectual property owners, intergovernmental organizations, and some states confirmed that a broader support base was needed in order to obtain legitimacy and acceptance for the new governance system, since a number of stakeholders had been marginalized throughout the process.

The debates that led to the establishment of ICANN involved only a small number of individuals and organizations, mostly from the United States. National governments from developing countries did not take part in the negotiations. Historically, civil society groups played a limited role in the founding of the institution and its early years (Gross 2011), gradually increasing their presence in different committees over time. Given the structure envisioned at the outset, civil society did not exert the same influence as the other three stakeholder groups (the Commercial Stakeholder Group, the Registrars Stakeholder Group, and the Registries Stakeholder Group) in the development of policies. Even after several reforms, ICANN's governance structure continued to be a unique hybrid power structure involving the technical community, businesses, governments, and different groupings of civil society. The restructuring of the organization in the early 2000s did not put an end to the debates around the core questions of legitimacy and the specific role of governments, in particular that of the United States, in global processes of IG. While greater autonomy was given to ICANN by the US DoC, the successive Memoranda of Understanding and the Affirmation of Commitments have come under heavy critique as illegitimate supervision by a single government (e.g. Singh 2009; Weber and Gunnarson 2012). The extensive scholarly literature critical of ICANN also played a role in raising awareness of this issue (Mueller 1999).

The creation of a private sector-led governance system outside traditional organizations for regulating a global network raised important transparency and accountability issues. If ICANN was to manage some of the critical resources of the Internet, who was it accountable to? And how could its operations be supervised? The ICANN structure gave rise to broader questions about the legitimacy of the entire governance system. Transparency was seen as an important aspect of the organization's legitimacy. One of the most discussed aspects of the ICANN by-laws before their adoption was the inclusion of transparency procedures and participation mechanisms. A suit was successfully filed against ICANN for the lack of transparency provisions in its original procedures (ICANN 2002). Broader concerns about democratic procedures within ICANN were also in focus (Koppell 2005).

As a result, participation in ICANN processes remained contested. Among the key questions were the following: Who were the stakeholders? How were they represented? And in what proportion compared to other stakeholders? Marginalized actors such as civil society organizations, individual users, and developing countries’ governments were part of peripheral bodies whereas the representatives of the Postel-led ‘dominant coalition’ controlled the core of the organization. A global membership was foreseen in the by-laws but was never implemented. The failure of the ICANN global elections in 2000 illustrated the difficulty of implementing meaningful participation mechanisms.

Finally, in retrospect, the tension stemming from the evolving role of the GAC did not diminish. The GAC—comprising representatives of national governments and intergovernmental organizations—was originally designed as an advisory body that would act only upon the request of the Board, in accordance with the US DoC vision (articulated in 1998) of a limited role of national governments in the management of the network. The US NTIA retained the oversight power over the IANA function, while other governments struggled to become more influential in the organization, gradually drawing attention to the political nexus in IG. Divergent views regarding the role of the state consolidated and ICANN became the symbolic locus of central IG struggles.

After the 2003 reform, the organization resembled a public–private partnership and continued to clash with the traditional ‘one state, one vote’ principle of intergovernmental organizations. The Board remained the central body of the California-based corporation and, in the view of many, ICANN largely escaped the control of national governments (Mathiason 2009). Thus, the institutional development of ICANN did not completely solve the issues of participation and legitimacy that had emerged at the time of its creation. The changing international context of the early 2000s accelerated the debate on the role of national governments in IG, making it apparent that the conflict of governance modes would enduringly shape the nascent issue domain. The dotcom crisis had undermined confidence in private-sector self-governance, and an organization that merged private initiative with public-interest functions, yet remained under US government oversight, only inspired more controversy.

Mechanisms of Governance

The decade leading up to 2005 marked a new turn in the interaction between the public and the private domains. Innovative forms of private governance, brought into being by governmental action, revealed a strong dimension of hybridity. The worldwide expansion of the Internet embedded neoliberal ideology and the trend towards deregulation, which in turn gave rise to power and legitimacy concerns relative to the new institutional creations. While the state did not disappear completely after defining the market conditions, rule-making processes for the Internet had to be positioned against property rights regimes, existing governance structures, and rules of exchange.

The incomplete privatization of the DNS was the exception, rather than the norm, in the dominant private ordering that solidified during the ‘Internet boom’ decade. Corporate strategies and policies, coupled with market-driven approaches to regulation, fostered the salient role of corporate actors, bringing about new online business models. Peer-to-peer agreements, exchange contracts, content development and management, and end-user contracts became the main instruments for the functioning of the newly created market. But there was more to it: the privatization trends also touched the technical standardization work of groups that presented themselves as independent and autonomous.

The policy development process in technical organizations such as ICANN or the World Wide Web Consortium (W3C), generally bottom-up, consensus-based, and open to anyone,24 underwent transformations. The focus shifted from protocols and standards for the smooth functioning of the Internet to a business-driven agenda. The IETF was not exempt, as remarked by one long-term contributor to its processes, Avri Doria:

It was quite blatant. At some point we needed to bring forward at least two companies willing to develop something before we could put a new tech project on the table. It has since softened a bit, but for a while it was quite absolute. (McCarthy 2016)

The strong self-regulation trend fostered by the US DoC brought to the fore competing logics, meanings, and practices associated with the Internet. They all converged in the negotiations for establishing a new organization for the management of the Internet’s ‘phone book’, known today as ICANN. The focus on trademarks and the litigation over domain names illustrate the concerns of the late 1990s, which intensified over the years. Pre-emptive regulation via name exclusions and sunrise procedures for trademark owners was rooted in the regulatory logic dominating at the beginning of the dotcom boom. The privileged position of the intellectual property industry—not least in the ICANN–WIPO dispute resolution system—expanded as the Internet spread worldwide.

Table 3 Governance mechanisms (global and regional) for the period 1994–2004 (based on a total of seventy-four instruments recorded in the database)

Legal enshrinement
  Treaties, conventions, agreements (16%):
    1996 WIPO Performances and Phonograms Treaty
    2000 Safe Harbour Agreement (USA–EU)
    2001 CoE Convention on Cybercrime
    2001 CIS Agreement on Cooperation in Combating Offences related to Computer Information
  Court judgments, directives, policies (11%):
    1999 WIPO–ICANN Uniform Domain Name Dispute Resolution Policy
    2000 E-commerce Directive
    2000 ECtHR Rotaru v. Romania
    2001 US v. Microsoft Corporation
    2002 Directive on Privacy and Electronic Communications
    2004 IPR Enforcement Directive

Institutional solidification
  Specialized bodies (15%):
    1995 Article 29 Working Party on Data Protection
    1995 Global Information Infrastructure Commission
    1997 APEC Intellectual Property Rights Expert Group
    1998 OAS Special Rapporteur for Freedom of Expression
  Strategic framework/agenda (12%):
    1998 WTO Work Programme for Electronic Commerce
    2000 UN Millennium Development Goals
    2002 APEC Shanghai Program of Action
    2004 London Action Plan
  Monitoring and benchmarking (3%):
    1998 Spamhaus—abuse tracking and notification
    1999 World Bank/UNESCO ICT Statistics in Education (WISE)

Modelling
  Discursive (20%):
    1998 OECD Ministerial Declaration on Authentication for Electronic Commerce
    1999 CoE Recommendation for the Protection of Privacy on the Internet
    2002 UNGA Combating the Criminal Misuse of Information Technologies (56/121)
    2003 UNESCO Charter on the Preservation of Digital Heritage
  Operative (23%):
    1996 UNCITRAL Model Law on Electronic Commerce
    1997 G8 24/7 Network of Contacts for High-Tech Crime
    2002 Guidelines for the Security of Information Systems and Networks
    2004 Arab League Model Law on Combating Information Technology Offences

The de facto institutionalization of IG happened with the participation of private actors. A similar evolution can be noted in the merging of technical, legal, and economic logics in establishing the priorities of IG. Based on the analysis of the governance instruments in the dataset—presented in Table 3—operative modelling dominated the decade (23 per cent of the instances recorded). E-commerce, cybercrime, and intellectual property rights were explicitly targeted in efforts to create new rules.

The majority of authoritative governance instruments deployed during this period fall into the modelling category. Legal enshrinement follows suit, representing the preferred solution in almost a third of instances. Absent the consensus needed for a cyberspace treaty—regularly called for in the 1990s—regionally established conventions and agreements defined a set of rules for state–state and state–private sector interactions. Illustrative of this were two important developments. The first was the fact that the Convention on Cybercrime, adopted by the Council of Europe (CoE) in November 2001, was open to non-CoE members to sign and ratify. The political process behind it revealed the need to act in a unified way to tackle challenges that could no longer be confined to a national or regional context.

The second key development was the landmark decision of the Tribunal de grande instance in Paris in the case Ligue contre le racisme et l’antisémitisme et Union des étudiants juifs de France v. Yahoo! Inc. et Société Yahoo! France (LICRA v. Yahoo!), reaffirming the anchoring of the Internet in geography and in existing law. LICRA complained that Yahoo! allowed its online auction service to be used for the sale of memorabilia from the Nazi period, contrary to Article R645-1 of the French Criminal Code. After establishing its competence to hear the case, the court concluded that the auctions of Nazi memorabilia were open to French residents, despite their prohibition under French criminal law, and that Yahoo! was aware of the location of users accessing these pages, as proved by targeted advertising in the national language for computers connecting from France. It subsequently ruled against Yahoo!, thus imposing a geo-location obligation. Yahoo! contested the decision through a declaratory relief action in a Californian court, which found that enforcing the geo-location filtering order would violate Yahoo!’s First Amendment rights and that the French judgment was therefore unenforceable in the United States. This case set a precedent and prompted important discussions about the links between freedom of expression, regulation of online content, and technical means for the selective display of items.
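The geo-location obligation at the heart of the ruling rests on a simple mechanism: map the visitor’s IP address to a jurisdiction and filter the display accordingly. The following is a minimal sketch of that logic in Python, offered purely for illustration; the prefix table (built from reserved documentation address ranges) and the blocked-country set are hypothetical stand-ins, not the filtering Yahoo! actually deployed.

```python
import ipaddress
from typing import Optional

# Illustrative prefix-to-country table using reserved documentation ranges;
# real geo-IP services maintain databases covering millions of routed prefixes.
PREFIX_TO_COUNTRY = {
    ipaddress.ip_network("192.0.2.0/24"): "FR",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}

# Jurisdictions in which a given listing may not be displayed (hypothetical).
BLOCKED_FOR = {"FR"}

def country_of(ip: str) -> Optional[str]:
    """Return the country code mapped to an IP address, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for prefix, country in PREFIX_TO_COUNTRY.items():
        if addr in prefix:
            return country
    return None

def may_display(ip: str) -> bool:
    """Show the listing unless the visitor maps to a blocked jurisdiction."""
    return country_of(ip) not in BLOCKED_FOR

print(may_display("192.0.2.17"))    # False: mapped to 'FR' in the toy table
print(may_display("198.51.100.5"))  # True: mapped to 'US'
```

Even in this toy form, the limits debated at trial are visible: an address missing from the table returns None and passes the filter, which is why geo-location filtering was discussed as a probabilistic rather than an absolute safeguard.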

Such tensions, testing the exceptionalism of the Internet against existing laws, also articulated an important legal debate extending into the mid-2000s. Was the Internet to be bound by new, cyber-sensitive laws? Or could it be governed by rules already in place, which remained applicable nonetheless? Data communication introduced a set of policies that were inherently global, as the identifier space was not bound by a territorial dimension. At the same time, the issues emerging in cyberspace were not entirely different from those regulated offline, although their salient features might have transformed in the process of moving online. Infringements of intellectual property rights and online fraud, in particular, expanded in scale and speed, favoured by the low costs associated with their commission.

The period leading up to 2003 marked the emergence of Internet-specific regulation at the global and regional levels. Unlike in the previous decade, more than half of the instruments negotiated during this period focused directly on the Internet, as opposed to covering it tangentially. Marking a key moment for the institutionalization of IG in the late 1990s, this turn towards further specialization also represented an act of politicization. With the delicate balance between formal and informal governance constantly evolving, formal mechanisms were no less the result of interactions with informal networks or lobbying. In the case of ICANN, this was obvious in the concerted attempts to establish a more accountable governance system, while supervision remained fragmented and enforcement powers narrow.

As the domain continued to mature, the links between the private sector and public authorities started to be formalized via regulation, gradually reducing the dependence on individuals and interpersonal networks. The common use of binding legislation attests to that. The lack of attention to sanctions and the limited efforts invested in monitoring exposed the fact that the new ordering was still in the making. The political stakes exposed in the negotiations indicated strong ideological and diplomatic tensions that materialized in threats of sanction.


Figure 1 Variation of governance mechanisms across subfields (1994–2004)

Note: total does not add up due to multiple cases in which an instrument covers several subfields

As Figure 1 shows, throughout this period the focus revolved around security concerns (in particular cybercrime and spam) and legal issues, targeting particularly jurisdiction, arbitration, and intellectual property rights (IPRs). For authors like Castells (2001, 177–8), it was the threat posed by cybercrime that restored state power during the early 2000s: ‘it became necessary for the most important governments to act together, creating a new global space of policing … a network of regulatory and policing agencies’. He linked this to the focus on surveillance, a tool which allowed for regulation and policing by traditional forms of state power. In connection with the focus on trade and e-commerce, another preoccupation arose: what risked jeopardizing the new market was the increase in cybercrime and the expansion of the ‘deep web’. Michael Bergman (2001) coined the term using the metaphor of dragging a net across the surface of the ocean, missing the information hidden in the underwater depths (outside the reach of search engines, and thus more difficult to find). In 2001, the deep web consisted of an estimated 7.5 petabytes.

The success and collapse of the indexed web and the online market it created brought into sharper focus the two main preoccupations of regulators at the end of the 1990s: ensuring security online and promoting trade. In its fight against cyberterrorism following the 9/11 attacks on the World Trade Center and the Pentagon, the US government introduced the Patriot Act of 2001, which made it illegal to advise or assist terrorists, including via online means. Many private initiatives, often in partnership with Internet service providers, emerged to curb the availability of terrorism-related materials online, with varying degrees of success. Four years later, in the aftermath of the London attacks, stricter content controls were also introduced by European countries, in particular in the area of hate speech. Together with child sexual abuse material, hate speech was one of the two focus areas for the illegal content monitored by INHOPE, the European network of hotlines established in 1999.

As early as 1998, discussions started in the UN General Assembly with a draft resolution on ‘information security’ proposed by Russia and iterated yearly, followed by the 2002 ‘culture of cybersecurity’ resolution sponsored by the United States (Radu 2013). Another significant 1998 development at the international level was the successful negotiation by the US delegation of an amendment to the World Trade Organization (WTO) agreements to treat the Internet as a duty-free trade zone. Accordingly, exchanges through the Internet were exempted from tariffs and duties on the basis of a temporary moratorium on taxes on cross-border data flows, which has been continuously renewed. This represented a competitive advantage for the American companies providing Internet-related goods and services. The United States thus became the first country to apply trade policies to govern cross-border information flows (Aaronson 2015).

Concomitantly, developmental aspects grew in salience in Internet governance discussions. The intensification of debates around information and communication technologies for development (ICT4D) deserves a separate discussion here. Driven by UN agencies, the OECD,25 the G8, or the World Economic Forum, a plethora of transnational efforts to reduce the global digital divide stemmed from the recognition of this problem as a multidimensional gap and an economic limitation with long-term consequences. The meaning of the ‘digital divide’—a term coined by Lloyd Morrisett, president of the Markle Foundation—came to incorporate more elements than the original divide between information haves and have-nots (Gunkel 2003). It was initially linked to the ownership of personal computers, but evolved to encompass more than just patterns of Internet access. Popularized by the 1998 publication of the NTIA report bearing the term ‘digital divide’ in its title, the concept attracted media attention in the United States and worldwide (Parker 2000). ‘Falling through the Net II: New Data on the Digital Divide’, a continuation of the ‘Falling through the Net’ project from 1995, found that variation in penetration levels was linked to income, education level, and race. The instruments capturing development aspects remained predominantly linked to modelling, rather than direct institutional involvement and resource allocation. ICT4D remained the focus of attention on a discursive level and a key dimension in subsequent UN negotiations.

Actors

With the creation of ICANN, the grouping of ‘technical bodies’ crystallized in IG. They represented stable arrangements in a shifting policy milieu and fostered recursive interactive relationships in a space of otherwise volatile institutional forms. The main activities they engaged in had no territorial binding: the development of IP standards; the administration, coordination, and allocation of IP addresses; the delegation of domain names; the coordination of the root server system and of related technical procedures. All were global. Mueller (2010, 4) argued that organically developed institutions provided ‘a new locus of authority for key decisions about standards and critical resources’. Yet their authority also derived from and consolidated via transversal links with other organizations, including those that were in competition in the early days.
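To make the scope of these coordination tasks concrete, the delegation of domain names can be pictured as a chain of zones running from the root (whose server system the bodies above coordinate) through the TLD registries down to individual registrations. The sketch below is a purely conceptual illustration in Python, not any organization’s actual tooling:

```python
# Conceptual sketch: list the zones whose operators are successively
# authoritative when a domain name is resolved, starting at the root.
def delegation_chain(name: str) -> list:
    """Return the delegation chain for a domain name, root zone first."""
    labels = name.rstrip(".").split(".")
    chain = ["."]  # the root zone, coordinated via the root server system
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

print(delegation_chain("www.example.org"))
# ['.', 'org.', 'example.org.', 'www.example.org.']
```

Each step in the chain corresponds to a different governance actor: the root zone to ICANN/IANA and the root server operators, ‘org.’ to a TLD registry, and the lower levels to registrars and registrants.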

From the beginning of the 1990s, the IETF and the ITU Telecommunication Standardization Sector (ITU-T) were already cooperating informally. In the context of the emergence of a specific institutional framework for the Internet in the second half of the 1990s, the cooperation between the IETF and other organizations was clarified and in some cases formalized. A Request for Comments (RFC 2436) on the ‘Collaboration between ISOC/IETF and ITU-T’ was published in October 1998. It introduced, among other things, a liaison position to foster exchange between the two bodies. The rationale behind this creation was the explosion in the growth of IP-based networks. It also foresaw cross-referencing and the use of IETF documents in ITU-T processes and vice versa. This procedure was designed to integrate the two standardization processes and enable cross-organizational work, particularly important as the operating logics of the two differed substantially.

The work of sui generis non-profit standard-setting organizations (such as ICANN, the IETF, and the W3C) was authoritative because they were internationally accepted as rule-makers from the early days. All these bodies continued to function alongside intergovernmental and private-sector initiatives. Yet, unlike the ITU-T, in which governments are full members and non-governmental organizations need to obtain ‘sector’ or ‘associate’ status, ‘native institutions’ (Mueller 2004) operated on an open membership model and did not generally limit participation to organizations or official structures.26 In their governing bodies, they all represented their primary constituencies, remaining, by definition, transnational. Table 4 offers an overview of the key bodies of the decade analysed in this chapter, including the governance structure in place for each, together with their status and function. As the comparative analysis shows, these organizations share a number of characteristics, such as their mission and a board to report to. The NRO represents an exception, as its ‘core community’ was made up of the RIRs, whose representatives constituted the Executive Council, with annually rotating officers. The NRO did not develop policy directly, but served as ICANN’s Address Supporting Organization and reviewed and developed recommendations on global IP address policy for ratification by ICANN’s Board of Directors.

Table 4 Overview of technical private bodies specific to the Internet and related decision-making procedures

ICANN
  Creation: 1998 (NTIA/DoC White Paper; Jon Postel)
  Location: Los Angeles (US)
  Function: Coordinate the Domain Name System ($169.9m general budget in 2015)
  Status: Non-profit corporation registered under California Nonprofit Public Benefit Corporation Law
  Sub-bodies/membership: Internet Assigned Numbers Authority (IANA); constituent entities: Supporting Organizations (ASO, GNSO, ccNSO) and Advisory Committees (Root Server, GAC, SSAC, ALAC) (a)
  Governing bodies: Board of Directors selected by the Supporting Organizations and the Nominating Committee (representatives of ICANN constituent entities)

Internet Society (ISOC)
  Creation: 1992 (Vint Cerf, Bob Kahn, Lyman Chapin)
  Location: Reston, Virginia (US); Geneva (CH)
  Function: Support the Internet standards development process; public policy leadership ($39m general budget in 2015)
  Status: Non-profit international membership association, organized globally and through national ‘chapters’
  Sub-bodies/membership: IAB (comprising the IETF and IRTF); organizational and individual members; national chapters
  Governing bodies: Board of Trustees (elected by ISOC organizational members, by chapters, and by the IAB)

Number Resource Organization (NRO)
  Creation: 2003, by the four RIRs existing at the time (the oldest created in 1992); AFRINIC added in 2005
  Location: Headquarters of each of the five Regional Internet Registries (RIRs)
  Function: Providing and promoting a coordinated Internet number registry system; coordination and representation of the activities of the RIRs
  Status: Association of the RIRs, which manage IP addresses in different regions
  Sub-bodies/membership: Five RIRs for Europe, Africa, America and Canada, Asia-Pacific, and Latin America and the Caribbean; some include National Internet Registries (b)
  Governing bodies: Executive Council (annually rotating offices held by representatives of the RIRs)

World Wide Web Consortium (W3C)
  Creation: 1994
  Location: Host institutions: MIT/CSAIL (US), ERCIM (FR), Keio University (JP)
  Function: Develop protocols and guidelines that ensure long-term growth for the Web (public–private funding, membership fees, individual donations)
  Status: International industry consortium and standard-setting organization
  Sub-bodies/membership: Advisory Committee (with representatives from each member organization); Technical Architecture Group; the Team (appointed staff)
  Governing bodies: Advisory Board (elected by the Advisory Committee)

(a) The acronyms stand for: Address Supporting Organization, Generic Names Supporting Organization, Country Code Names Supporting Organization, Governmental Advisory Committee, Security and Stability Advisory Committee, At-Large Advisory Committee.

(b) The total membership of the RIRs (including NIRs) as of 29 February 2016 is 35,519 (NRO 2016).
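The number-registry hierarchy summarized in Table 4 can likewise be sketched in miniature: a pool of addresses is carved into increasingly specific prefixes as it passes from IANA to the RIRs and on to their members. In the sketch below, the prefix is a reserved documentation range and the registry names are placeholders, so this illustrates the allocation principle only, not any registry’s actual holdings.

```python
import ipaddress

# Toy model of hierarchical number allocation: a (documentation) pool is
# split into blocks for regional registries, which subdivide them further.
iana_pool = ipaddress.ip_network("203.0.113.0/24")

# Four equal /26 blocks standing in for delegations to regional registries.
rir_blocks = dict(zip(["RIR-A", "RIR-B", "RIR-C", "RIR-D"],
                      iana_pool.subnets(new_prefix=26)))

for rir, block in rir_blocks.items():
    # Each registry subdivides its block into /28 allocations for members.
    first_member = next(block.subnets(new_prefix=28))
    print(f"{rir}: holds {block}, first member allocation {first_member}")
```

Run as-is, this prints one line per notional registry, showing how each level of the hierarchy narrows the prefix it received from the level above.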

Alongside the consolidated role of technical organizations, new private and state-led bodies started to profile their work throughout the deregulatory decade, in particular on the developmental aspects of IG. In 2000, the World Economic Forum initiated the Global Digital Divide Initiative (GDDI) to increase opportunities for private–public partnerships in diminishing the digital gap. It set in place three steering committees, dedicated to Entrepreneurship, Policies and Strategies, and Education. Similarly, the G8’s Digital Opportunities Task Force (DOT Force) was developed as an initiative to ‘bridge the global digital divide’ (Hart 2004, 2) in the aftermath of the 2000 Okinawa Summit, where the Charter on Global Information Society was signed. In paragraph 7, the Charter reaffirmed the strong lead of the private sector, relegating to governments the responsibility to ‘create a predictable, transparent and non-discriminatory policy and regulatory environment necessary for the information society’ and placing on them the onus of avoiding ‘undue regulatory interventions’.

With the recognition of the basic right of access to knowledge and information as ‘a prerequisite for modern human development’ (DOT Force 2002), the high-level task force aimed to facilitate discussions with developing countries, international organizations, and other stakeholders to promote international cooperation with a view to fostering policy, regulatory, and network readiness; improving connectivity, increasing access, and lowering cost; building human capacity; and encouraging participation in global e-commerce networks. Monitoring and fostering progress was delegated to four implementation teams: National e-Strategies, Human Capacities, Global Policy Participation, and Local Content and Application.

Notwithstanding its specialized fields of action, the DOT Force’s lifespan was short. The entity formally concluded its work after the meeting in Canada in June 2002. By 2002, the African continent had benefited the most from the programme, through the development of more than five transnationally funded projects and the renewal of the New Partnership for Africa’s Development (NEPAD), with a focus on the activities of the e-Africa Commission. For the latter, ambitious objectives were envisioned. Among these, increasing tele-density to two lines per 100 persons by 2005, achieving e-readiness for all African countries, and developing local content software were hardly realized within the set time frame. When the World Bank (WB) led the creation of InfoDev in June 2005, interest had moved from developing an action force to shaping and sustaining a global development financial programme, led by a Donors’ Committee and a Programme Manager, as well as organizing an InfoDev Symposium. Alongside the WB and the EU, ten governments contributed to its projects at the time. The focus was on three main directions: (1) enabling access for all; (2) mainstreaming ICT as tools for development and poverty reduction; and (3) innovation, entrepreneurship, and growth.

Among the most interesting bodies created at the time within the UN was the ICT Task Force (UNICT), which UN Secretary-General Kofi Annan established in November 2001 in response to a UN Economic and Social Council (ECOSOC) request from July 2000. This followed the recommendation of the high-level panel of experts convened from 17 to 20 April 2000. The UNICT had a three-year mandate, subsequently extended until 31 December 2005 to allow for its active participation in the World Summit on the Information Society (WSIS) process. The task force had a hybrid composition, with fifty-five members appointed by different sectors in a representative capacity, who elected the chair. IBM, Cisco, Hewlett-Packard, Siemens, Nokia, and Sun Microsystems were among the industry members, alongside civil society organizations, governments, and international organizations. It had a panel of thirty technical advisers and a secretariat of six at the UN headquarters in New York.

The UNICT acted as a platform for dialogue open to multiple stakeholders (Flyverbom 2011) and served a purpose similar to that of the World Economic Forum (WEF) and G8 initiatives. Unlike the latter, it enjoyed broader legitimation, in particular from developing countries. Its main activity was the organization of global forums, which enabled discussions around a wide set of concerns ranging from education to IG. Between 2001 and 2005, it held ten global forums: four in New York, three in Geneva, and one each in Berlin, Dublin, and Tunis. Epitomizing the drive for deregulation, the UNICT proposed in its ‘Business Plan’ that governments facilitate competition by opening markets, encouraging investment in infrastructure, and removing barriers to competition.

The rise of organizations dedicated to Internet-specific work, whether technical or developmental, attests to the way in which the field was institutionalized via operational measures. Specialized authorities, from ICANN to InfoDev, became subject to a number of formal requirements, such as transparency and a continual base of legitimate support. As IG was gradually recognized as the shared responsibility of multiple organizations and diverse stakeholders, a dominant practice developed around it: multi-stakeholder participation.

Anchoring Practice: Multi-stakeholder Participation

The intense negotiations around the creation of ICANN shed light on the number of stakeholders interested in the global governance of the Internet. Beyond what the NTIA identified as ‘key’ stakeholders (NTIA 1998), a broader group of enthusiasts and sceptics expressed opinions on the privatization of the DNS. Most of these engaged as part of a specific (interest) group and spoke on behalf of a constituency that they presented as global. In the design of the institution, several stakeholder splits became obvious: contracted and non-contracted parties, policymakers and implementers, or government and users. This multiplicity of entities helped to address some of the legitimacy concerns. Among these were the limitations stemming from a handful of people making decisions of worldwide relevance, which was seen as a disguise for agendas benefiting particular actors more than others, especially those with greater lobbying power. The inclusion of a limited number of stakeholders—with powerful voices—in the structure of ICANN became contested and led to the first reform of ICANN in 2003.

The dominant practice of multi-stakeholder involvement—no matter what definition was adopted for it—became a pillar for community formation during that decade. The constitutive rules of the community were reiterated in setting up the membership practices through the categorization of groups, officially institutionalized within ICANN. Defining how certain actors could participate was not, however, restrictive to newcomers, as long as they had the technical expertise, would take on board the volunteer culture, and could participate in the relevant processes. Although the involvement of actors belonging to different sectors was practised before the formation of ICANN, never before had the diversity of the community been celebrated so visibly. The different subgroups forming the ‘community’ gave it a new identity, uniting rather than dividing it. The public assertion of diversity became a strong community-building goal.

By the same token, the WEF and G8 initiatives also included a multi-stakeholder approach, with broad participation by stakeholders from industrialized and developing countries in public–private partnerships. Enjoying more legitimacy due to its origin in an intergovernmental agreement, the UNICT also operated on a multi-stakeholder logic. Despite its lack of binding power, its work was influential in agenda setting, as it was required to submit annual reports and recommendations to the UN Secretary-General. Starting in 2004, the general impetus for multi-stakeholder practices in the UN ambit was strengthened with the release of the Cardoso report, which endorsed the wider participation of civil society in UN activities.

In effect, unlike at ICANN, where the multi-stakeholder practice materialized in bottom-up policies, in the daily work of the UNICT or the DOT Force (which were not decision-making bodies) the practice remained primarily formative and mostly discursive. In their case, authority was vested in the organizations’ UN-appointed Chairs or Secretariats. The power asymmetries and the concerns for legitimacy were recognized in the functioning of the UNICT: stakeholders involved in the ICT Task Force had competing or conflicting aims, and there was a contest of influence and power among multilateral, bilateral, and civil society groups (Malcolm 2008).

In addition to being seen as an instrumental governance mode that gave the United States dominance over the Internet and its development, multi-stakeholderism also meant, for numerous developing countries, a move away from intergovernmental decision-making and international law. Negotiating new rules for an increasingly commercial Internet in the absence of an equal vote guaranteed via international processes dissatisfied many. As an anchoring practice for legitimacy, multi-stakeholder participation presented a strong normative dimension. It was based on the idea that those most affected by a policy change or measure should in some way be involved in its management, governance, and resolution (Gurstein 2013). In effect, though, not all affected interests were represented. This came under scrutiny from the supporters of a governmental leadership model. In December 2003, China, backed by developing countries, proposed the adoption of an Internet treaty and the creation of a global Internet organization (Kleinwaechter 2009).

But the multi-stakeholder practice was also under scrutiny by its adopters. In the early days of this practice, the principles of representativeness and openness were not reified in the decision-making processes for IG, according to Palfrey, Chen, Hwang, and Eisenkraft (2003). Their study on public participation in ICANN revealed that an insufficient assessment of public involvement and a limited collection of substantive comments from Internet users accounted for the organization’s failure to attract and incorporate ‘representative’ input (Palfrey et al. 2003). Gradually, the practice became an element of regulatory design (Drake and Wilson 2008; DeNardis 2009) and even an ‘-ism’ (Doria 2014), with a twofold significance: on the one hand, there was a claim for distinctiveness in ‘multi-stakeholderism’; on the other hand, its habitual use suggested that the principles and tenets behind it were held to be intuitive to everyone.

In the ICANN context, supporters of multi-stakeholderism saw the use of the principle within the UN as a legitimization of the ICANN model. Other IG bodies also labelled their practices as multi-stakeholder. Western governments saw in the multi-stakeholder model a way to promote the delegation of authority and to extend the self-regulation and limited state intervention they had been implementing in various sectors, especially telecommunications, since the 1980s. Civil society organizations and some governments interpreted multi-stakeholderism as participatory democracy and hailed the effort to improve democratic processes in global governance. Only some non-Western governments and some critical factions within civil society castigated multi-stakeholderism as a deterioration of democratic practices compared with traditional intergovernmentalism.

Nonetheless, the multi-stakeholder practice continued to be implemented and applied on a global scale. It quickly opened the door for the participation of marginalized actors, even if not in clearly defined terms. The enthusiasm for this practice gave rise to a ‘vernacular moment’ around multi-stakeholder involvement in IG, claiming a dominant position for it, beyond contestation. Over time, this resulted in the loss of the ability to look at the practice from the outside, to scrutinize it critically without being perceived as an ‘enemy’ of it. As an ideologically laden organizational principle (Mueller 2010), multi-stakeholderism had deep implications for community formation, further discussed in Chapter 6.

Synopsis

The evolution of the Internet has been profoundly shaped by the salient role of corporate actors, transforming the field into an economic and political contest. When digital markets began to prosper in the 1990s, the dominant US-driven ‘hands-off’ approach condemned any attempt to regulate the Internet and related markets. In 1997, the Clinton administration issued a ‘Framework for Global Electronic Commerce’, ingraining this vision in the future development of the field. However, at the beginning of the 2000s, e-commerce and other Internet-related markets were far from meeting the optimistic expectations of the 1990s. The bursting of the ‘dotcom bubble’ further reaffirmed the crucial role of social institutions in the creation, reproduction, and expansion of markets.

Deeply influenced by neoliberal ideology, the regulatory milieu that fostered the growth of the Internet left a global imprint. When the Internet became commercial, three major shifts occurred in governance arrangements: they grew in size, scale, and scope. First, the reach of the network extended, with connections established all over the world. Second, the scope of governance arrangements diversified over time, ranging from infrastructure and the management of technical resources to an array of broader societal issues spurred by the ‘dotcom’ boom, including legal and security issues (in particular around cybersquatting), e-commerce, and the digital economy more broadly. Third, in addition to the technical community dealing with standards and protocols, a new set of actors entered the governance arena at the regional and global levels. With this came a renewed interest in understanding the complex interaction of technology, society, and politics, played out in the digital divide debates.

The informal arrangements dominating the period prior to the commercialization of the Internet intensified and became routinized over time. The decisions of standard-setting bodies were generally archived and publicly accessible, following open meetings or mailing-list discussions (generally open for anyone to join). Importantly, initiatives for policy processes and standards development could come from anyone. The standards were open, resulting from collaborative work and placed under the organization’s name. This ethos, still present today, was slightly altered at the end of the 1990s with the rise of the for-profit ICT sector and its strong involvement in the policymaking of not-for-profit institutions.

The question of ‘who governed the Internet’—asked more and more frequently in the period leading up to 2005—remained difficult to answer. Regional differentiation appears as a key governance trend during that decade: 43 per cent of all governance instruments reflect partial agreements and limited consensus, but they also reveal the emergence of localized approaches on issues of global concern such as cybercrime. Certain governance mechanisms resulted from (long-standing) political agendas, others were driven by the private sector, and some developed ad hoc in reaction to these, presenting us with a fragmented picture. It is from this perspective that the articulation of governance in a heterogeneous, hybrid environment is explored in the next two chapters.

Notes:

(1) A notable exception here is Crain (2014).

(2) Cerf acted as vice-president of MCI Digital Information Services from 1982 to 1986, before joining Bob Kahn at the Corporation for National Research Initiatives (CNRI). In the beginning, CNRI hosted the secretariat of the Internet Engineering Task Force (IETF), before moving under the Internet Society (ISOC).

(3) This differentiation, frequently used by the industry, is a functional one that has not been formalized.

(4) Among the members were also Kahn and Clark.

(5) In the process, the practice of assigning one domain name per person was also renounced.

(6) In the beginning, no differentiation was made in the fees charged for domain names; all were charged the same amount.

(7) In academic parlance, the ‘dotcom bubble’ is generally seen as an economic cycle, between the late 1990s and the early 2000s, whereas the Internet boom refers to the constant growth of the Internet following the introduction of the World Wide Web (WWW) and search engines.

(8) This included Internet companies such as Netscape, Amazon, and Yahoo, but also online advertising companies such as DoubleClick.

(9) The CDA resulted from a longer legal battle, fought around two court cases decided in New York in the early 1990s, with conflicting outcomes. In the first of these, the 1991 case Cubby, Inc. v. CompuServe, Inc., the court concluded that CompuServe could not be held responsible for the defamatory comments posted by a special-interest forum columnist against a competitor, as it did not review forum content before publication. In the second, the 1995 case Stratton Oakmont, Inc. v. Prodigy Servs. Co., the court found that Prodigy, a web service company with more than 2 million subscribers and over 60,000 postings a day, did not act as a blind host. It had, in the past, moderated some of its online message boards and deleted posts for ‘offensiveness and bad taste’, which, for the court, meant that it acted as a publisher and was thus liable for defamatory postings.

(10) According to Héritier and Eckert (2008), industry self-regulation is more likely to appear when positive incentives are provided or when the threat of legislation is present.

(11) Harvard’s Berkman Center was originally involved in facilitating the meetings, and withdrew subsequently.

(12) The new gTLDs proposed were: .web, .rec, .info, .firm, .store, .nom, .arts.

(13) According to the US Census Bureau data, there were more than 40 million Internet users in the United States in 1997.

(14) The ITAG had six members: Brian Carpenter (IBM and IAB), Randy Bush (Verio, Inc.), David Farber (University of Pennsylvania), Geoff Huston (Telstra), John Klensin (MCI), and Steve Wolff (Cisco). Wolff formerly acted as director of NSF’s Computer and Information Sciences and Engineering Division, supervising the transition to the commercial Internet. He joined Cisco in 1994.

(15) This approach also faced harsh critiques, such as the one from the Harvard professor Lawrence Lessig: ‘we are creating the most significant jurisdiction since the Louisiana purchase … and we are building it outside the review of the Constitution’ (2006, 318).

(16) For Mueller (1999, 504), industry self-regulation was ‘an appealing label for a process that could be more accurately described as the US government brokering a behind-the-scenes deal among what it perceived as the major players—both private and governmental’ (in relation to the creation of ICANN).

(17) The cooperation between ICANN and WIPO was strengthened in the new gTLD programme launched in 2012, as the latter was deemed the exclusive provider of dispute resolution services for Legal Rights Objections (LRO) through its Arbitration and Mediation Center. The LRO allows trademark owners and intergovernmental organizations to file a formal objection to a third party’s application for a new TLD for infringement of an existing trademark, IGO name, or acronym in the pre-delegation phase.

(18) The number of domain names in WIPO cases in 2013 peaked at 6,191, a 22 per cent increase compared with the previous year. For the same year, 2,585 cybersquatting cases were filed with the WIPO Arbitration and Mediation Center, a 10.4 per cent decrease compared to 2012 (Chenou and Radu 2015).

(19) WIPO is one of the five UDRP service providers curating one such list of potential panellists.

(20) In turn, NRO assigned the blocks of addresses via its five member organizations, the Regional Internet Registries (RIRs), covering Europe, Africa, America and Canada, Asia-Pacific, and Latin America and the Caribbean.

(21) Among the seven new TLDs were: .biz, .info, .name, and .pro.

(22) The three sponsored TLDs were: .aero, .coop, and .museum.

(23) The following gTLDs were successfully introduced in 2004: .asia, .cat, .jobs, .mobi, .post, .tel, .xxx, and .travel.

(24) Exceptions include some closed meetings of the W3C, and the membership limited to RIRs in the NRO.

(25) The OECD (2001) defined the digital divide as: ‘the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard both to their opportunities to access ICT and to their use of the Internet for a wide variety of activities’.

(26) With the exception of W3C, which accepts both individual and institutional members (business and governmental).