Will the Revolution Be A/B-Tested?
Abstract and Keywords
This introductory chapter defines the key terms and concepts in the book, including analytic activism, the media theory of movement power, the culture of testing, and the analytics floor and analytics frontier, and provides narrative examples of each of these concepts to help clarify them for the reader. It also places the book within a broader set of debates about digital activism and citizen engagement. By digitally listening, monitoring, and testing, activist organizations can better define tactical success, and this pushes them to try new strategies. Innovative activist campaigns have seized public attention and created meaningful pressure on political targets. The focus of the book is on understanding the organizational processes that shape and produce these activist campaigns. The chapter concludes with an outline of the book.
This is a book about technology, civic associations, innovation, and power. I argue that some of the most important impacts of digital technology lie not in the capacity of disorganized masses to more easily speak, but in the capacity of new civil society organizations to more effectively listen. The book examines the organizational logic underlying key digital activist tools—distributed petition sites, automated tactical optimization systems, and online video platforms designed to promote “virality”—and the media logic that determines the power and effectiveness of these tools. It also examines the limitations of emerging technological platforms, raising a cautionary flag about the ways that putting too much faith in these new measures of online sentiment can lead us astray.
Current scholarship on digital politics often becomes mired in a pair of intellectual cul-de-sacs. Scholars either celebrate digital politics as supporting a new era of spontaneous, “bottom-up” civic engagement or bemoan the rise of “clicktivism,” which supposedly degrades citizen power by encouraging worthless online acts of simple solidarity. I find that the reach of both these intellectual camps tends to exceed their grasp. In virtually every case of large-scale, long-term citizen activism (digital, offline, or a mix of the two), organized political associations continue to play a key intermediary role. The “organizational layer” of politics is often ignored by academic interlocutors with an interest in digital politics.1 The spontaneous feel of a suddenly trending Twitter hashtag belies the organizational processes occurring just offstage, behind the scenes. Organizational concerns—fundraising, membership growth, reputation, strategy—are hidden variables that play a decisive role in how we ought to interpret the data. A “like” or retweet can be either powerful or pointless—it depends on the broader strategic context of these acts of digital communication.
My goal in this book is not to convince you that new communications technologies are good or bad for citizen political activism. I aim instead to suggest that this is fundamentally the wrong question. All powerful social movement tactics have always been premised on a strategic analysis of what the media environment affords. Like it or not, the era of broadcast television has ended, just as surely as the era of broadsheets and pamphlets did. The digital era supports different tactics. The most powerful citizen activist organizations will, once again, be the ones that leverage, innovate, and adapt. This book grapples with how activist organizations employ digital communications technologies to develop and refine new tactics and strategies that help them build power and win victories in the twenty-first century.
What Is Analytic Activism?
Analytic activism is a new approach to citizen-driven politics that makes use of the affordances of digital technologies to fashion new strategic interventions in the political arena. It is a change in organizational structure, processes, and work routines.2 Three prominent features distinguish it from other forms of activism.
First, analytic activism embraces a culture of testing, which guides organizational learning and shapes organizational practices. The culture of testing can be as simple as conducting A/B tests to determine which email headline is best (Karpf 2012a; Issenberg 2012; Kreiss 2012). But it can also be dizzyingly complex. Rather than relying on the way they’ve always done things, advocacy groups are now testing which tactics and techniques are most effective at engaging new supporters, forging deeper ties with existing supporters, and leveraging power over targeted decision makers (Han 2014). The culture of testing creates feedback loops that help analytic activists learn, innovate, adapt, and evolve within a fast-changing hybrid media system.
Second, analytic activism prioritizes listening through digital channels: social media, like Facebook and Twitter, or more established digital media like email and website traffic. As I discuss in chapters 2 and 6, digital listening does have its limitations; it is listening without conversation, and thus lacks many of the benefits that are derived from two-way interaction. But analytic activists place an emphasis on digital listening where many of their peers fail entirely to listen. All forms of present-day activism make use of (at least some) digital media to construct new forms of activist speech. But the distinguishing feature of analytic activism is that its practitioners are using these channels to listen in new ways, not solely for new forms of speech.
Third, analytic activism demands scale. As we have learned from Google, Facebook, Netflix, and Amazon, there is a tremendous digital advantage in raw numbers. These companies routinely glean valuable information about the interests, habits, and needs of their customers that smaller and/or start-up competitors simply cannot identify (Hindman forthcoming). They do so through testing and analytics, applied on a massive scale. The larger your numbers, the more precisely (and more frequently) you can turn to analytics for insights. Likewise, many of the tools and techniques of analytic activism provide substantial value to organizations with 1 million members while being virtually irrelevant to organizations with 10,000 members. Analytic activism is a tool of large, established advocacy groups.
One implication of these three features of analytic activism is that not all digital activism is analytic activism.3 Individual citizens and viral social movements also benefit from the affordances of the Internet. As Lance Bennett and Alexandra Segerberg (2013) have described, the lowered costs of communication and coordination online have given rise to new forms of “connective action,” in which disorganized masses of citizens can coalesce around a shared political interest without developing a stable organizational infrastructure. These connective episodes can have an important role in shaping the broader public consciousness and pressuring political elites. They draw on the new “culture of connectivity” that has become ubiquitous in our daily lives (Van Dijck 2013). But their minimal structure also limits their capacity for listening, testing, and learning. It is only the organizations with larger memberships, with their stable infrastructure and their routinized strategic practices, that are able to leverage digital media into this new style of analytic activism.
Likewise, the field of analytic activism is distinct from the radical “horizontalist” activist movements, such as Occupy Wall Street and worldwide “movements of the squares” that have captured both the public’s and academia’s imagination over the past few years. The horizontalist tradition has a substantial pedigree that predates the rise of digital communications networks. Radical anarchist activists, “black bloc” protesters, and the Yippies of the 1960s all preached and (to at least some degree) practiced horizontalism. Today, horizontalism is too often incorrectly treated as synonymous with digital activism. As Paolo Gerbaudo (2012) has demonstrated, leadership in these (primarily offline) movements takes the form of “activist choreography,” wherein key actors within activist networks employ social media to softly help coordinate activist activity and influence the direction of their radical movements. This is an important development in digital activism, but it is distinct from what I would term analytic activism. Occupy Wall Street and similar activist movements lack the culture of testing that we see in organizations like SumOfUs.org, Change.org, and MoveOn.org. Their lack of vertical leadership structure alters how they strategize, how they listen, and how they learn.
The “hacktivist” networks like Anonymous and LulzSec that Gabriella Coleman (2014) vibrantly describes in Hacker, Hoaxer, Whistleblower, Spy also represent a distinct form of digital activism that stands outside the sphere of analytic activism. Hacktivists are (obviously) engaging deeply with digital media and technology. But they are not focused on converting data and analytics into outputs that help them craft media interventions to move forward a specific political agenda. They are instead focused on directly exploiting vulnerabilities in software and hardware to create power outside the traditional boundaries of politics. Hacktivism is a distinct phenomenon that attracts different players with different skill sets, norms, beliefs, and goals. It deserves (and receives) detailed treatment and attention in its own right. Rather than attempt to shoehorn all these forms of digital activism into a single rubric, I choose to readily admit that they fall outside the scope of this study.
Analytic activism creates new strategic objects that can be iteratively used to develop new types of political intervention. Strategic objects are the shared reference points around which strategic conversations are constructed. Put plainly, new data matters only if it is presented to the right people, in the right context, and in the right format. If the only person looking at social media analytics is the communications intern, then those digital traces aren’t going to make much of a difference. If, instead, an advocacy organization incorporates analytics reports and experimental results into its weekly strategy meeting, then analytics gain the capacity to alter decision makers’ course of action. Strategy is about making choices, and strategists make choices on the basis of information that they have jointly agreed is relevant to the outcome. What makes analytic activism distinct from other forms of digitally enabled activism is its focus on turning new forms of digital listening into strategic inputs that, in turn, contribute to new forms of digital speech.
By digitally listening, monitoring, and testing, activist organizations are able to better define tactical success, and this pushes them to try out new strategies. This book is filled with examples of innovative activist campaigns that seized public attention and created meaningful pressure on political targets. The focus of the book is on understanding the organizational processes that shape and produce these activist campaigns.
And the strategic innovations that result have never been more necessary, because the media environment that supported old strategic assumptions has been radically reshaped. The power of social movements has always been premised on successful interventions with the dominant media of the day. As we have moved from a broadcast media system to a hybrid media system (Chadwick 2013), the most popular tactics and techniques have lost much of their bite.
A Media Theory of Movement Power
We often make two mistakes with regard to the interaction of media institutions and political activism. First, we still frequently treat “the media” as a unitary, stable, and undifferentiated system. This was a defensible assumption in 1993, when William Gamson and Gadi Wolfsfeld wrote their authoritative treatment of the subject, “Movements and Media as Interacting Systems.” Gamson and Wolfsfeld demonstrated that “social movements need the media far more than the media need them” (117). They did so by tracing the interests of social movements and of industrial media organizations that typified the broadcast news era. But in the decades that have elapsed since that classic work was published, the media system has undergone a continuous series of upheavals.
We can no longer simply state that some protest actions are inherently more media-friendly or newsworthy than others. We now have to specify which media and which news. Protest tactics are made media-friendly when they align with dominant media technologies. They become newsworthy when they fit the norms, incentives, and routines of the major news organizations of the day. When we talk about the “media system,” we still largely have in mind the broadcast media institutions that dominated twentieth-century American politics—the nightly news and the daily paper in particular. Today, those broadcast institutions remain relevant, but they are also facing new competitive pressures, adopting new journalistic routines, and making use of new media technologies. As Andrew Chadwick (2013) suggests, we have replaced the old media cycle with a new “political information cycle.” Stories unfold differently in the political information cycle. Social media buzz helps to determine the mainstream news agenda. Partisan news sites highlight different stories to appeal to their niche audiences (Jamieson and Cappella 2008; Arceneaux and Johnson 2013). If movements and media are interacting systems, then the dramatic changes to the media system must produce ripple effects that change the opportunity structure for social movements.
Second, we treat the media as though it were a mirror, held up to society and reflecting back the most important or prominent issues of the day. The dominant theories of policy change in political science, in fact, have long tended to ignore the role and interests of media institutions (Kingdon 1984; Baumgartner and Jones 1993). These theories draw empirical data from newspaper coverage, equating it with evidence of public opinion and public events. Media attention serves as a stand-in for public opinion in this tradition: If a topic makes the front page of the local paper or receives four minutes of coverage on the nightly news, we treat it as evidence of public interest and public will. As Susan Herbst (1998) demonstrates in Reading Public Opinion, both political activists and legislators treat the daily news agenda as evidence of public opinion.4
But a long research tradition maintains that media has never been merely a reflective technology. Kurt and Gladys Lang first offered this insight in their seminal 1953 study of the MacArthur Day parades: Media is a technology of refraction, not reflection. Introduce television cameras into an event, and you will manufacture a public spectacle. People will behave differently, performing roles for the cameras. Place newspaper reporters or bloggers at that event, and you will reveal different elements of the same spectacle. Media coverage is not a neutral arbiter or reflection of objective reality. It documents a performance that it is helping to co-create. As Gamson and Wolfsfeld (1993, 116) put it, “A demonstration with no media coverage at all is a nonevent, unlikely to have any positive influence either on mobilizing followers or influencing the target. No news is bad news.” Successful protest events are strategically designed to attract coverage from the dominant media of the day. And as the media system changes, so too must our understanding of successful protest events.
To think clearly about the opportunities that the changing media system presents to activist organizations, we must historically bracket successful movement tactics. Different media, dominant at different points in history, incentivize different forms of public spectacle. The release of a new policy report will be much more appealing to policy bloggers than to television journalists. Press conferences are an artifact of the broadcast era; bloggers see little value in a press release. The broadcast television era imparted great leverage to advocacy tactics that could make the six o’clock news. The current digital era, with its niche news programming, 24-hour cable stations, hashtag publics, and social sharing, creates leverage for a different set of tactics. The relative power of individual protest tactics—petitions and sit-ins, marches and boycotts—changes apace with the shifting media system. Whether we label these changes to the media system as indicative of changing “media regimes” (Williams and Delli Carpini 2011), “information regimes” (Bimber 2003), “hybrid media systems” (Chadwick 2013), or “civic information paradigms” (Wells 2015), the central point is that media technologies and media institutions play a role in determining the strategic value of various protest tactics. All movement power is, in part, premised on understanding and leveraging the interests of these changing media entities. Movement power is, in this sense, also media power.
Activism is adapting to the digital age (as are we all). Our expectations of activists, however, remain decidedly anchored in the preceding century. In particular, the era of grand US social movements (roughly the 1960s and early 1970s) often receives hagiographic treatment from scholars and practitioners alike. Those movements were powerful, their tactics successful. Present-day movements are frequently compared with movements of this era and found wanting. In making this comparison, we usually ignore how those earlier movements were strategically tailored to the emerging broadcast media environment of the day.
Let me animate this point with a celebrated example: the “Bloody Sunday” march in Selma, Alabama. Taeku Lee discusses the tremendous success of this action in his 2002 book, Mobilizing Public Opinion:
The movement strategy of provoking police brutality with nonviolent direct action fit well in Selma. Sheriff Jim Clark’s bigotry and short temper were notorious… . The activists marched uneventfully [on Bloody Sunday] through downtown Selma but barely crossed the murky Alabama River on the Edmund Pettus Bridge before they were met by a detachment of law enforcement officers. About fifty Alabama state troopers and several dozen of Sheriff Clark’s posse waited on horseback, fitted with gas masks, billy clubs, and blue hard hats… . Newsmen on hand captured the surreal chain of events with film and camera. By sundown, scenes from Selma were broadcast in living rooms throughout the nation. One television station, ABC, interrupted their evening movie, Judgment at Nuremberg, to air a film report on the assault. The raw footage ignited a firestorm of public outrage. (2–3, emphasis added)
Lee is describing a key moment in one of the most celebrated, successful social movements of the twentieth century. It was not the sheer number of protesters (approximately 600) that made this action so powerful. Nor was it the poetry or the righteousness of their cause. Central to the protesters’ strategy was a clear reading of the affordances provided by the broadcast-era media environment. If Sheriff Jim Clark had left those protesters alone, the march would have ended uneventfully. The protesters would have had tired limbs and not much else to show for it. If the cameras had not been present, Clark’s brutality would have gone unheralded, another chapter in the long history of violence against African Americans in the American South. But raw footage of police brutality was piped into living rooms across the nation. To borrow a phrase from Todd Gitlin (1980), “The whole world was watching.” And since this was 1965, a time when we had only three stations, there was nothing else on television.
Against tremendous odds, civil rights movement activists proudly and stridently forged a better society. Their personal courage was coupled with great strategic acumen. There are good reasons why present-day activists and scholars seek insight from the social movements of that era. But in the search for insight, scholars, public intellectuals, and practitioners alike tend to overlook how the tactics of that era were crafted to match the media system. If the Bloody Sunday march had occurred in 2015, it would have included hashtags and retweets, mash-ups and Vine clips. But it also would have reached a smaller, niche audience through the nightly news, and it would have been immediately reinterpreted, reframed, and denounced by partisan elites. The whole world would not have been subjected to the same images, and the resulting public mobilization would have unfolded along a different path.
Another example: In 1969, during the early years of the environmental movement, two galvanizing moments came when Time magazine ran a story about the Cuyahoga River catching fire and when an oil spill off the coast of Santa Barbara received national news coverage. This was not the first time that a major oil spill had happened, and it was the twelfth time the Cuyahoga had caught fire. But because of the limited viewing options of the broadcast media environment, these images were seen in living rooms throughout the nation. Rivers catching fire make for great television footage. The early leaders of the environmental movement seized upon the public attention generated by these broadcast tragedies and used it to galvanize media-friendly actions like the first Earth Day. As Ronald Shaiko (1993, 97) put it, “One might ask, philosophically, If Greenpeace activists hold a protest rally in the woods and the media are not there to cover it, do they really make a sound?” The birth of the environmental movement and its most iconic tactical successes were rooted in the affordances of the media system of that time.
The problem, however, is that this glamorized remembrance of past social movements inappropriately shades our perceptions of modern-day social movements. Consider, for instance, Nicholas Lemann’s (2013) indictment of 2010 environmentalists’ failure to pass climate legislation through the US Congress:
Today’s big environmental groups recruit through direct mail and the media, filling their rosters with millions of people who are happy to click “Like” on clean air. What the groups lack, however, is the Earth Day organizers’ ability to generate thousands of events that people actually attend – the kind of activity that creates pressure on legislators.
By Lemann’s reckoning, the environmental movement of 2010 was a failure because it did not generate the same “thousands of events that people actually attend” that the environmental movement of the broadcast era had generated. Now, in the simplest sense, Lemann is factually incorrect: Beginning in October 2006, seven students from Middlebury College worked with their professor, Bill McKibben, to launch the Step It Up day of action on climate. After six months of organizing, facilitated mostly through the Internet, the Step It Up day of action occurred on April 15, 2007. It included 1,410 events across the country (Fisher and Boekkooi 2010). Step It Up later changed its name to 350.org, a leading climate advocacy organization that regularly plans massive global days of action that feature 4,000–5,000 simultaneous events. The youth-led Energy Action Coalition has also repeatedly planned a series of citizen lobby days that have broken records as the largest in US history, bringing 15,000 young people in face-to-face contact with their congressional representatives. Present-day movements still plan plenty of “events that people actually attend.” But that attendance is no longer picked up and refracted through a broadcast-dominant media system. Without the amplifying power of the broadcast-era industrial media, the same tactics no longer produce the pressure that they once did.5
The difference between Step It Up and the original Earth Day was not in the quantity of simultaneous teach-ins. It was not in the power of their rhetoric or the resonance of their media frames. The difference was in how those mass protest events were refracted and amplified through the larger media apparatus (and, one might add, in the sclerotic state of US congressional politics).
The original Earth Day, like the Bloody Sunday march in Selma, was strategically tailored to take advantage of a media regime that no longer exists. The mere existence of the teach-ins was news. The Earth Day teach-ins attracted broadcast media attention. And the public political agenda was defined through that media attention. New media refracts at different angles. Recruitment for Step It Up/350.org actions occurs through email lists, Facebook shares, and blog posts. The fact of the 2010 day of action was hashtagged and retweeted. These digital actions defined a political agenda for a public. But they did not leave the same imprint on the broader public consciousness. The lesson gleaned from the successful social movements of the past cannot be to mimic exactly what they did. The leaders of the present must strategically adapt to this digital refraction, just as social movement leaders of the past adapted to the broadcast refraction.
The current hybrid media environment provides opportunities for activist movements and activist moments that would have gone missing in the older industrial broadcast media environment. As James Rucker, founder of ColorOfChange.org and cofounder of Citizen Engagement Lab, argues: “The media landscape twenty years ago would have prevented the stories driving the Movement for Black Lives today from breaking through. The voices we’re now hearing, reading, and seeing are all enabled by an open Internet that has largely avoided corporate or government filter. And they are shifting public dialogue, impacting culture, and building momentum to change policy” (Center for Media Justice et al. 2015). When we lionize the tactics of social movements from a bygone era, we blind ourselves to the opportunities and potential presented by current media technologies.6
Indeed, this appears to be a key ingredient in the success of present-day political movements. The Movement for Black Lives (a.k.a. #BlackLivesMatter) has directed national attention to the crisis of police violence against young African Americans. It has done so by adopting a distinctly hybrid media strategy, including the use of hashtags that connected the dots between a series of individual tragedies and place-based protests, which themselves became the topic of media coverage (Freelon, McIlwain, and Clark 2016). These activists are not choosing between broadcast media and social media. They are using the tools at their disposal—including social media accounts—to create leverage over their direct targets (public officials) and secondary targets (including mainstream media organizations). Broadcast media outlets sent reporters to Ferguson, Missouri, to cover protests surrounding the death of teenager Michael Brown because Twitter conversation signaled its newsworthiness (Tufekci 2014c). The presence of those same reporters then helped to co-create the unfolding political spectacle (Tau 2014). Both broadcast television cameras and cell phone cameras are technologies of refraction. Social movements of the 1960s developed their tactics for an industrial broadcast media environment. Social movements of the 2010s are modifying their tactics for a hybrid media environment.
There is no single “correct” strategy for leveraging digital media into movement power. There are, however, a set of practices that, when properly instituted, help activist organizations adapt to the rhythms of the digital age. This book is an exploration of the strengths, weaknesses, possibilities, and limitations of those new practices. In particular, the book focuses on the role that new digital listening tools have begun to play in fashioning new tactics and strategies that help large-scale political organizations create leverage in the hybrid media system. Analytics encompass a cluster of technologies that allow organizations to monitor online sentiment, test and refine communications, and quantify opinion and engagement. These are backend technologies, viewed by professional campaigners through internal “dashboards” and fashioned into strategic objects that are discussed at weekly staff meetings.
Properly harnessed, these technologies allow large organizations to engage in analytic activism. Improperly harnessed, they can send civil society organizations down a crooked path that leads to prioritizing issues, campaigns, and tactics that are more clickable over those that are more important. As I will discuss later, analytic activism supports new innovations in tactical optimization, computational management, and passive democratic feedback. It enables organizations to learn and listen in different ways and to capture the energy refracted through the hybrid media system. This book highlights leading examples of analytic activism and derives lessons about its promise, its potential, and its limitations.
The rest of this chapter delves further into the three features that distinguish analytic activism from other forms of digital activism: testing, listening, and scale. It concludes by outlining the remainder of the book.
Analytics-Based Activism and the Culture of Testing
What I’ve learned at MoveOn is that anything can be tested. And it probably is.
—Stefanie Faucher, MoveOn.org
If you’re not looking at your data, then you’re not listening to your members. And that probably makes you kind of an asshole.
—Senior analytics staffer
In the US political context, the use of analytics is commonly associated with fundraising—particularly in presidential campaigns. Michael Slaby, chief technology officer of the 2008 Obama presidential campaign, estimates that analytics-based website and email optimization netted the campaign an extra $57 million (Kreiss 2012, 145). In his 2012 book, Taking Our Country Back, Daniel Kreiss describes these optimization efforts as part of a broader practice of “computational management.” The Obama presidential campaign tested everything. It used analytics to build larger email lists, raise more money, and spend that money more efficiently than any previous electoral campaign in US history. If, as Jeffrey Alexander (2010) argues, the Obama campaign was equal parts social movement and discursive performance, then Kreiss highlights that both parts emerged through a complex sociotechnical apparatus.
The simplest form of this analytics-based optimization is known as A/B testing. An A/B test is a simple experiment: Website visitors or email recipients are randomly assigned to two groups. The two groups receive versions of the same message that differ in exactly one element. The variation can be an email subject line, a suggested donation level, or different images or campaign colors. Dan Siroker, a former product manager at Google who worked for Obama’s new media team and now runs a website optimization company called Optimizely, describes his experience on the Obama campaign thus:
I joined what was being called the “new media” team… . The team had competent bloggers, designers, and email copywriters; I wondered where I might be able to make an impact.
One thing stood out to me: a red button.
Online donations to the campaign came from subscribers to the email newsletter; subscriptions for this came from the campaign website’s signup form; and the signup form came as a result of clicking a red button that said “Sign Up.” This was the gateway through which all of Obama’s email supporters had to pass; it all came down to one button. So, one simple, humble question immediately became pivotal. Is This the Right Button?
—(Siroker and Koomen 2013, 3–4, emphasis in original)
Siroker and his colleagues tested every element of the online user experience. Fiddling with the shape, size, color, and language associated with the button (“Sign Up Now” vs. “Learn More”) and associated image improved their sign-up rate by an estimated 40.6% (Siroker and Koomen 2013, 7). This translated to 2.8 million additional email subscribers, 288,000 more volunteers, and $57 million in added donations. For the Obama presidential campaign, analytics-based optimization was very big business. Hallie Montoya Tansey, cofounder of Target Labs, argues that these processes can be used for a much wider range of data-driven decisions: “mailings, phone calls, field offices, any type of expensive contacts.”7 Brian Christian (2012) likewise suggests, “A/B testing is not simply a best practice—it’s also a way of thinking, and for some, even a philosophy. Once initiated into the A/B ethos, it becomes a lens that starts to color just about everything—not just online—but in the offline world as well.”
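The logic of such a test is simple enough to sketch in a few lines of code. The following is a toy simulation, not the campaign's actual tooling; the visitor count and the "true" conversion rates for the two hypothetical button variants are invented purely for illustration.

```python
import random

rng = random.Random(42)  # fixed seed so this toy example is reproducible

def run_ab_test(n_visitors, rate_a, rate_b):
    """Randomly assign each visitor to variant A or B and tally sign-ups.

    rate_a and rate_b are the hypothetical "true" conversion rates of the
    two button designs; a real test would observe outcomes, not simulate them.
    """
    signups = {"A": 0, "B": 0}
    visitors = {"A": 0, "B": 0}
    for _ in range(n_visitors):
        # Random assignment: roughly half of the traffic sees each variant.
        variant = "A" if rng.random() < 0.5 else "B"
        visitors[variant] += 1
        true_rate = rate_a if variant == "A" else rate_b
        if rng.random() < true_rate:
            signups[variant] += 1
    # Compare observed sign-up rates across the two groups.
    return {v: signups[v] / visitors[v] for v in ("A", "B")}

rates = run_ab_test(n_visitors=20_000, rate_a=0.08, rate_b=0.11)
```

In practice, an organization would also run a significance test on the two observed rates before adopting the winning variant, since small differences on small samples can easily be noise.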
This picture of website and email optimization hardly seems like a step closer to anyone’s vision of an ideal democratic society. Fifteen years ago, Steven Schier warned that an unhealthy mix of party polarization and new technology was (p.13) replacing citizen mobilization with citizen activation. “Activation employs telephones, direct mail, and Internet communication in a way that allows distinctively phrased messages of maximum possible impact. It does not seek to get most potential voters to participate in an election, as does mobilization, but instead fires up a small but potentially effective segment of the public to help a particular candidate at the polls or a particular interest as it lobbies government” (Schier 2000, 9). Schier worried that emerging niche marketing techniques were rendering mass citizen participation obsolete: “Mobilization encouraged popular rule. Activation impedes it” (9). The niche marketing that caused Schier’s alarm is quaint compared with the A/B testing and microtargeted segmentation performed by modern political campaigns. There is a reasonable argument to be made that A/B testing in political campaigns represents the supreme triumph of marketing in elections (Tufekci 2012).
The well-publicized work of the Obama campaign represents only one type of analytics, though. The Obama team’s use of A/B testing was a form of tactical optimization—their goals were already fixed (acquire supporter names, identify volunteers, raise money), and the question they were asking was fundamentally quite simple (“Is This the Right Button?”). Digital trace data about this question helped them improve tactical performance toward these goals. The Romney campaign, with less reliance on computational management, was no closer to our democratic ideals. It also engaged in niche marketing— but niche marketing that was simply less efficient.
Analytics (alternatively referred to as digital trace data) can be used for a wider range of purposes than we saw in the Obama campaign, however. The Obama campaign used analytics and testing to refine individual tactics and to evaluate competing strategies.8 But it did not use analytics to ask its members to set its priorities. The goal of an electoral campaign is simple and transparent: Win on Election Day. The goal of a social movement organization is far more complex and fluid: Build power to create a more just society. We can easily evaluate whether an electoral campaign has won or lost. But the near-term measures of activist success are indeterminate. And that creates space for activist organizations to employ digital listening for broader agenda-setting purposes. MoveOn.org, for instance, uses analytics to help gauge the will of its membership (what I termed passive democratic feedback in my previous book). Analytics for MoveOn is a governance input, a means of setting strategic direction and determining what the organization’s goals and priorities ought to be. Table 1.1 defines these three uses of analytics and describes their scale, purpose, and (p.14) limitations. Having already described tactical optimization, let me now describe each of the additional categories.
Table 1.1 Three Uses of Analytics in Activism

Tactical optimization: Improve efficiency/effectiveness of individual tactics. Limitation: Low durability, focus on “growthiness” (chapter 5).

Computational management: Evaluate competing tactics and strategies. Limitation: Analytics floor (chapter 5).

Passive democratic feedback: Obtain governance feedback from supporters/members; set the organization’s direction or priorities. Limitation: Listening without conversation (chapter 6).
A single A/B test can change the language of an email or phone script, but it cannot transform a campaign or organization. When the culture of testing is adopted more broadly, it can have a much more expansive impact. In Taking Our Country Back, Kreiss offers evidence of how the data-driven culture of the Obama campaign led to a new style of “computational management,” in which data and testing became key features in the big-picture strategy meetings. The Obama campaign developed new analytics tools to monitor the impact and return on investment of a variety of campaign practices. Rather than isolating A/B testing within the communications team or the fundraising team, Obama’s campaign looked to measure impacts across all channels and used those quantified results to direct future resource expenditures.
Likewise, as Sasha Issenberg (2012) discusses in The Victory Lab, the data-driven turn in political campaigning has led many organizations to challenge old consultant-driven assumptions about what works in politics. Since the publication of Alan Gerber and Don Green’s seminal 2000 paper in the American Political Science Review (engagingly titled “The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment”), a new generation of political campaigns has begun to take social science seriously. Issenberg identifies the Analyst Institute (AI) as a central hub of the Democratic Party network for using data (online and offline) to improve campaign techniques and bring millions more citizens to the polls on Election Day. Experiments run (p.15) by the AI are far more complex than the simple A/B tests used by the Obama new media team that Siroker discusses. They help answer larger questions about voter behavior and help guide serious conversations about the empirical state of citizen engagement.
The difference between computational management and tactical optimization lies in how widely experimentation and analytics are adopted. In some advocacy organizations, analytics and the culture of testing have taken root only in the online communications department. The result tends to be a “test and pray” model, in which campaign communicators routinely run A/B tests to optimize an individual tactic, but have no larger support structure for extracting useful lessons from those tests or for passing those lessons up to C-suite executives.9 And the problem with “test and pray” is that it limits the impact that testing can have on strategy.
There is an A/B testing story that is often told and retold at trainings on digital experiments and the culture of testing. It originates from Daniel Mintz, who currently is the director of data and analytics at Upworthy.com and previously held a similar position at MoveOn.org. The story begins with a fundraising experiment. MoveOn staffers were interested in finding out whether members would respond to zip code–based donation targets. Rather than sending national email blasts that included a text box saying (for example), “We need to raise $250,000 in the next 24 hours,” Mintz and his team wondered if they could more effectively motivate their members by providing a text box saying, “We need to raise $2,500 from [your city].” MoveOn tested the zip code–based donation frame and indeed found a statistically significant increase in donation rates. The group didn’t know why this change worked—it could be that geographic frames made the issues more relevant, or that smaller funding goals made an individual donation seem more impactful, or simply that any divergence from boilerplate fundraising language results in a novelty effect. Nonetheless, the data had spoken, and the new “from your zip code” style quickly became standard within the organization.
Within six months, many of MoveOn’s peer organizations had caught on to this new wrinkle and adopted the same language. Mintz and his team wondered whether the diffusion of this language had altered its impact. So they decided to run the experiment a second time, to see if the results held up. They did not. The zip code–based fundraising goals turned out to have had a short-run novelty value, and nothing more.
For analytic activist organizations like MoveOn, the null result of this retest is a small triumph rather than a disappointment. It represents the difference between short-term “test and pray” A/B testing and computational (p.16) management. The individual outcomes of A/B tests and analytics reports are far less important than the data-driven learning routines that analytic activists use to determine how they can operate most effectively. While tactical optimization narrowly improves the performance of individual tactics, computational management allows for new organizational learning routines that are applicable to a wider range of concerns, including how an activist group identifies priorities, defines its supporter base, and builds a shared narrative and political identity. The most important thing is not the results of any one test, but the habit of testing that encourages continued learning, debate, and experimentation.
Passive Democratic Feedback
I initially introduced the concept of passive democratic feedback in my previous book, The MoveOn Effect (2012). Particularly for civil society organizations, analytics can be used for strategic direction setting, not just for optimizing campaign communications toward a set outcome. The difference between passive democratic feedback and tactical optimization can be reduced to a simple question: “What are we trying to optimize?” The Obama campaign had a clear, fixed goal: Win a majority of the vote, in states representing a majority of the Electoral College, on November 4, 2008. By contrast, political advocacy organizations operate in a shifting environment. Their goals circle around galvanizing public support and building political power to solve some public problem or advance some issue agenda. The tactics for achieving these goals change alongside both the political system and the media system. There is no set end date for these organizations. When the McCain campaign lost in 2008, it ended. By contrast, when newly formed MomsDemandAction.org failed to pass gun legislation in the aftermath of the Sandy Hook school shooting, the organization kept working toward that goal.
One byproduct of these differences between electoral campaigns and political advocacy associations is that there is far more space within advocacy groups for legitimate debates over what they should do next. Simple tools like A/B testing and weekly member surveys can provide netroots advocacy organizations with clear indicators of member opinion on these crucial, agenda-setting questions. The process of obtaining passive democratic feedback is identical to the processes of tactical optimization. But here the organizations are using analytics tools to determine member priorities rather than to drive up member action rates. As I will discuss in chapter 2, passive democratic feedback may not approach our Habermasian ideal of an engaged, deliberative public. But it is a good deal better than the advocacy landscape that it is replacing.
Digital listening can also take a more sophisticated, active form through governance-related computational management practices. From a governance perspective, analytics can represent a powerful force when used as a routinized (p.17) structure for rough hypothesis testing and organizational learning. I witnessed this in the summer of 2013 during a site visit to the national office of 38 Degrees in London. 38 Degrees is the UK equivalent of MoveOn in the United States (Chadwick 2013; Chadwick and Dennis 2016). Both organizations are members of the international OPEN network (Online Progressive Engagement Networks) (Karpf 2013b). With 3 million members (approximately 4.6% of the national population), 38 Degrees is one of the largest and most active civil society organizations in the United Kingdom. During my visit to its office, I made note of a whiteboard that prominently displayed the question “How can we increase active membership by 30%?” This was the “Testing Whiteboard.” Every week, 38 Degrees staffers hold a brainstorming session on what questions are worth testing. They then formulate a set of rough metrics and hypotheses, spend the week running small tests, and then convene at the end of the week to summarize what they have learned.
The Testing Whiteboard does not display the same social science rigor as the experiments run by the Analyst Institute. But the quality of the research design is less important than the existence of the conversation itself. The Testing Whiteboard functions as a strategic object. It creates a space within 38 Degrees for a set of routinized strategic conversations that otherwise would not occur. It promotes a culture of testing, which leads activists to question old assumptions and try out novel strategies. Just like Mintz with his zip code–based fundraising retest, 38 Degrees has created a space for learning and experimentation that can challenge established campaign wisdom. And that learning and experimentation extends beyond political tactics to include a broader set of questions about how the organization can effectively engage with its members.
I will argue that this is an optimal adjustment to the demands of movement/media power in the digital age. The social media giants of our day are still tinkering with their platforms, and the dominant mainstream media organizations are still learning how best to interact with these digital institutions. Under these circumstances, the capacity to experiment, learn, and retest is a critical advantage. As I have previously suggested (Karpf 2012b), political organizations and social scientists alike are currently facing a unique set of challenges posed by “Internet Time.” Simply put, the Internet of 2016 is, in important respects, different from the Internet of 2012, or 2006, or 1996. The devices that we use to access the Internet, the sites that we frequent on the Internet, and the ways that we use those sites are all in a state of flux.10 And this is all happening while the medium itself diffuses to broader segments of the population.
(p.18) The Internet changes fast, and rough tools like the Testing Whiteboard promote a cultural habit of experimentation and measurement among netroots political organizations. The valuable output of the Whiteboard is not any specific lesson, but rather the weekly conversation that it invites. The Testing Whiteboard opens up new pathways for routinized organizational learning and strategic adjustment. Analytics, in this sense, become a tool that activist organizations can use to continually optimize their tactics for the current media system.
Like all tools for activism, digital listening and experimentation are imperfect. The changes I have described in this chapter represent crucial features of a new style of large-scale, reformist advocacy campaigning. It is a style that is particularly well aligned with the new hybrid media environment. But, as I will discuss, it is also a style that introduces its own biases and limitations.
“Growthiness” and the Analytics Floor
Analytic activism has its limits. Much of the final two chapters of this book is devoted to boundary conditions—to the areas and topics where digital traces can do more harm than good. It is, for instance, far easier to optimize tactics than to optimize strategy. (Consider: Should the environmental movement prioritize national climate legislation, or should it focus on the state and local levels? Which option will build more power and ultimately lead to success? Arguments can be made in both directions. Digital traces won’t conclusively settle the matter.) But there is also a lower boundary to consider: the analytics floor.
There is a good reason why all of the examples of analytics for political campaigns and activism come from large organizations. With a few noteworthy exceptions (discussed in chapter 5), small organizations simply cannot make use of internal analytics (chapter 2 discusses the difference between internal and external analytics). If you have an email list of 5 million people or a website that receives 500,000 visits per day, you can run regular A/B tests on random subsets of your list, then apply those lessons to the rest of the supporter base. If you have an email list of 500 or a website that receives 50 visits per day, then A/B testing will not provide statistically significant results within a useful time frame.11
(p.19) Three variables determine the exact dimensions of the analytics floor: (1) baseline action rate, (2) list/audience size, and (3) minimum detectable effect.12 For the purpose of illustration, imagine that you belong to an advocacy group seeking to raise money from small donors. Baseline action rate is the rate at which your members currently respond to an average fundraising email. List/audience size is the total population that you can sample from. Minimum detectable effect (MDE) is the threshold at which you would actually adopt a different fundraising email. For a fundraising email from a large, US-based nonprofit, the baseline action rate is usually in the single digits, and a 1% or 0.5% change would be above the MDE. To reliably detect an effect of this size, your organization would need a testing pool of approximately 15,000 individuals.13 For the test to be worth the effort, you would then need to apply the results to a large enough list to at least cover the costs of running the experiment itself (if your membership list is 16,000 individuals, the results of your 15,000-person test won’t be worth very much!). For MoveOn or the Obama campaign, this is a routine practice. But for your neighborhood Parent Teacher Association or for a national organization devoted to a small niche issue, it is a practical impossibility. Analytics are useful for dealing with massive amounts of data. Some politics is still decidedly local, though.
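A figure on the order of 15,000 can be recovered from the standard power calculation for comparing two proportions. The sketch below uses a generic statistical formula rather than anything from the book itself; the 4% baseline action rate, 1% minimum detectable effect, 5% significance level, and 80% power are illustrative assumptions consistent with the "single digits" baseline described above.

```python
from math import sqrt, ceil
from statistics import NormalDist

def required_pool(baseline, mde, alpha=0.05, power=0.80):
    """Total test-pool size for a two-arm A/B test comparing proportions.

    Standard two-proportion power formula. baseline and mde are rates,
    e.g. a 0.04 baseline action rate and a 0.01 minimum detectable effect.
    """
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n_per_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                  + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return 2 * ceil(n_per_arm)  # two arms: control plus variant

pool = required_pool(baseline=0.04, mde=0.01)
```

Under these assumptions the formula yields a pool in the low five figures; slightly different conventions for power and significance move the number toward the ~15,000 cited above. The key point is the denominator: halving the minimum detectable effect roughly quadruples the required pool, which is why small-list organizations sit below the analytics floor.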
It follows that analytic activism produces increasing returns to large-scale organizations. An organization with 1,000 members can barely make use of analytics. An organization with 1 million can incorporate digital feedback into its work routines. An organization with 10 million can detect even smaller effect sizes, run even more sophisticated experiments, and use these results to develop an even greater comparative advantage.
One result is that digital advocacy organizations have a powerful incentive to accumulate massive email lists. In The MoveOn Effect, I discussed how this change in communications technology is tied to a generational transition among advocacy organizations. Direct mail is a technology that requires narrow lists and high response rates because of the marginal cost incurred by each additional piece of mail. The marginal cost of an additional email recipient is close to zero,14 and this permits broad lists and low response rates, enabling the rise of (p.20) multi-issue progressive generalist organizations like MoveOn and Democracy for America.
In chapter 3, I will discuss a second result of the incentive for list growth. In the past few years, we have witnessed the rise of massive distributed petition platforms like Change.org and MoveOn Petitions. Change.org is a platform where anyone can create his or her own petition, sometimes sparking successful local, national, or global campaigns for change. It is also a for-profit “B-corp,” whose business model consists of generating larger email lists for existing advocacy groups through “sponsored petitions.” And this business model leads Change.org to prioritize list growth–friendly petitions over petitions that are tied to the central political issues of the day. Chapter 3 offers a comparison between Change.org and MoveOn.org’s distributed petition platform, MoveOn Petitions. It analyzes six months of data from these two sites, offering an indication of how their niches and business models affect what types of petitions they support and what types of campaigning they each encourage. Even though both organizations have embraced digital listening, the culture of testing, and the value of scale, we will see that they use analytics to promote entirely divergent models of political engagement. There is virtually no overlap in the petitions and issues that they feature on their sites or in the types of allies and partners that they work with. As we will see in chapter 3, analytic activism does not drive all organizations to a single homogeneous set of tactics or priorities. Analytics provide new tools for listening, experimenting, and monitoring. How an organization makes use of what it hears and learns will vary dramatically, based on its underlying mission, vision, and funding model.
Analytic Audiences and Extending Beyond Echo Chambers
It is time for a reassessment of what the Internet is good for.
We have tended for years to hold two distinct claims to be true: first, that the Internet is full of cat videos and celebrity photos—serious topics of public importance are ignored on the Web, while Kanye mash-ups and blooper videos go viral overnight; and second, that political talk online is composed of “echo chambers,” where motivated partisans push each other to even greater extremes while avoiding any information that might challenge their worldview. These two claims are far from mutually exclusive. As Markus Prior (2007) shows in Post-Broadcast Democracy, when we expand the range of media choices available to the public, citizen-consumers will act on their relative entertainment preferences. Those who enjoy politics will watch more political news and will seek out (p.21) partisan outlets that match their preferences. Those who don’t enjoy politics will turn to SportsCenter instead of the nightly news broadcast.
In writing The MoveOn Effect, I postulated that these two claims represented a fundamental stumbling block for Internet-driven social change campaigns. The Internet lowers the transaction costs for citizen participation. Lower transaction costs better reveal the demand curve for political news. But they do not reshape citizen preferences: “If the average citizen was not thirsting for political information, they will not develop a taste for it simply because it has been rendered more easily available” (Karpf 2012a, 158). Both the abundance of cat videos and the growth of echo chambers can be easily understood as evidence of citizens enacting their underlying preferences. Cats are funny! Poverty is depressing. And political crosstalk is far less enjoyable than talking with your fellow partisans (Wojcieszak 2010). The end result is that the Internet, circa 2012, was much better for political mobilization than for political persuasion.
But the Internet, and the “social web” (e.g., Facebook, Twitter, Tumblr, and Reddit) in particular, keeps changing. And since the publication of my first book, there have been some surprising developments in the area of political persuasion. Most notable is the rise of Upworthy.com, the behemoth content curation site dedicated to sharing “stuff that matters.” Founded by two former MoveOn.org staffers (one of whom, Eli Pariser, also authored The Filter Bubble), Upworthy is heavily invested in the culture of analytics and testing. It hires content curators who search the web for socially meaningful “seeds” of content (videos, stories, and infographics). Upworthy curators have developed a cottage industry of sorts, specializing in identifying the narrative qualities that make a story shareable (Critchfield 2013). Once a seed has been identified, those curators then brainstorm twenty-five potential headline frames, applying A/B testing and more advanced analytics to determine which one works best. The differences can be tremendous: Upworthy’s editor at large, Adam Mordecai, tells the story of a single video that he and Sara Critchfield published under competing headlines. Mordecai’s best headline generated 10,000 pageviews. Critchfield’s generated more than 1 million pageviews, drawing mainstream media coverage of the latest “viral video” sweeping the nation. Analytics and testing are baked into every aspect of Upworthy’s business plan.
Upworthy’s growth has been nothing short of astonishing. In November 2013, the site attracted more than 80 million unique viewers, most of them via Facebook newsfeed-driven social sharing. By comparison, CNN.com receives 12 to 15 million visitors per month, and NYTimes.com receives approximately 20 million visitors per month.15 Upworthy has become the (p.22) subject of numerous journalistic articles and parody sites. But, as I discuss in chapter 4, it has also demonstrated the very real potential for reaching beyond the online political echo chamber and attracting mass public attention to nontrivial topics.
Upworthy is representative of a broader trend in the hybrid media system in which a combination of aggressive testing and social media–based sharing has changed what types of stories reach a mass audience. Unlike the digital analytics discussed in chapter 3, which focus on advocacy groups using digital tools to listen to their members/supporters (analytics for mobilization), Upworthy applies many of the same techniques to listen, reach, and connect with the broader public (analytics for persuasion). Chapter 4 of this book discusses how advocacy organizations have, just in the past few years, begun to combine analytics and social media to reach much broader audiences than we (or at least I) previously believed to be feasible. Upworthy reaches beyond the traditional activist echo chamber specifically by optimizing its communications for network-based social media sharing. Advocacy organizations like New Era Colorado, the AFL-CIO, and the Gates Foundation have begun to partner with Upworthy to reach these wider audiences and leverage them into powerful political tactics that are augmented by increased mainstream media coverage.
Analytics for Organizing and the Analytics Frontier
That’s still the big problem we face: the places where it matters most [offline] are also the places where we have the least data.
—Senior analytics staffer
If the analytics floor defines a lower boundary for the use of analytics in political campaigning, then we can also conceive of the outer boundary as comprising an analytics frontier. While the floor is defined by scenarios in which data is too sparse or lists are too small, the frontier is defined by questions that are too complex for digital measurement. Put more simply, analytic activism can certainly help you mobilize a crowd, but (at least so far) it is less useful for organizing that crowd into a movement or converting that movement energy into long-term victories.
The simplest online interactions tend to be the ones that are most amenable to analytics. Tracking clicks and shares is easy. Tracking conversations is a bit trickier. Tracking online-to-offline participation is still quite hard. Tracking impacts on elite decision makers is nearly impossible. The more complex the task, the fewer people will engage in it and the more variables you need to simultaneously (p.23) account for. Think of the analytics frontier as an old map from a bygone era: It can be extended with time and effort, but until then, it defines the limits beyond which we can only scrawl hic sunt dracones (here be dragons).
As I discuss in chapter 5, two conceptual distinctions further complicate the analytics frontier. First is the difference between organizing and mobilizing (see Skocpol 2003; Ganz 2009; Han 2014). Mobilizing is about breadth—the number of bodies at a rally, signatures on a petition, or phone calls to a senator. Organizing is about depth—the number of volunteer leaders committed to your cause, the skills and relationships they have developed, and the hours and resources they are willing to give. Hahrie Han (2014) provides clear evidence that building deep grassroots volunteer capacity comes through time-consuming, relational work—from organizers rather than mobilizers. The organizations that invest in community organizing reap dividends in the form of a committed, engaged volunteer leadership base. But it is a steep investment and one that few advocacy groups, online or offline, have committed themselves to.
The problem is that organizing is fundamentally built on relational conversations, while analytic activism tends to rely on listening without conversation. Micah Sifry (2013) argues that this represents a major problem, limiting the long-term effectiveness of digital activist groups. In a 2013 article titled “You Can’t A/B Test Your Response to Syria,” Sifry writes, “It’s really striking that a decade into the emergence of online political organizing, there is still no commonly accepted and easy-to-use tool that would enable groups to conduct large-scale debate and deliberation aimed at producing a common pro-active policy on anything—despite the fact that collectively these groups have millions of email addresses and, at least in theory, the resources to put towards the problem. (It’s not for nothing, after all, that the Internet is much better at saying ‘stop’ than it is at saying ‘go.’)” As we will see in this book, analytics for mobilizing are more robust and well developed than analytics for organizing. There is a real danger that, in attempting to “listen to the data,” the current wave of analytic activist organizations will become fixated on the (mobilization) data that speaks the loudest and clearest. The analytics frontier is defined by efforts to expand analytic activism into these more challenging realms of organizing, mass conversation, and deliberation.
The second key distinction complicating the analytics frontier is that between organizing and campaigning. As Taren Stinebrickner-Kauffman (2013), executive director of SumOfUs.org, explains, “Campaigners are different from organizers. The fundamental mission of an organizer is to empower other people to create change. The fundamental mission of a campaigner, though, is to set their sights on a particular change they want to create in the world, and then go out and make it happen, whatever it takes. If that happens to involve empowering people along the way, then that’s great. But if you can make that change by (p.24) having drinks with the nephew of a Senator, so be it.” As we will see in chapters 5 and 6, the current wave of netroots political organizations is constructed mostly for campaigning, not organizing. The bias toward campaigning is not an immutable element of analytic activism, but it does help to define the shape of the current analytics frontier.
These are complex issues that Han, Sifry, and Stinebrickner-Kauffman are raising. They focus our attention on key concepts like movement power and strategy. As Stinebrickner-Kauffman puts it, “Who cares if you can get more people to make phone calls by picking the best subject line in your email if you don’t even know if the phone calls have an impact?” For both the leading academics and the leadership of political associations, these questions of depth, power, and effectiveness represent a vexing frontier, the tough puzzle that they continually attempt to solve. We did not have a clear solution to building powerful social movement organizations during the era of industrial broadcast media, and we quite certainly do not have one today, either.
The challenge of the analytics frontier is that blind reliance on analytics can exert a pressure on activist organizations to prioritize those objectives that are most easily quantifiable. The solution, as I detail in chapter 6, is to blend multiple inputs. Social media data and A/B tests can be combined with weekly member surveys, relational organizing conversations, and tough debates among coalition partners. Analytics and the culture of testing are not leading to a perfect, optimal strategy for social movement success. They are tools that help large organizations nimbly experiment, learn, and adapt to their changing surroundings. Strategizing remains hard, messy work. Analytic activists do not have all of the answers. They are just finding better ways to ask the right questions.
Outline of the Book
The chapters that follow provide a deeper dive into the concepts and themes that I have introduced thus far.
In chapter 2, I offer a detailed discussion of what we mean when we throw around terms like “analytics,” “algorithms,” and “big data.” These terms can mean different things when they are used by computer scientists and management gurus. Just as important, the practices of digital listening and online experimentation carry a wide range of divergent ethical implications, depending (for instance) on whether that listening is being conducted by governments, businesses, or voluntary associations. Chapter 2 helps to define these terms and places them in conversation with our traditional understanding of concepts like public opinion and revealed preferences. It also discusses five ethical (p.25) considerations that help us to differentiate the types of analytics used by activist organizations and the types of analytics used by data vendors, banks, and governments. For readers who have come this far in the book and are left wondering, “What do you mean by analytics?” or “Where are the data brokers and the NSA in all of this?” I encourage you to read just a bit further.
Chapter 3 turns to the digital petition industry. After assessing the various ways that online petitions act as powerful media objects within the hybrid media system, the chapter draws on a unique comparative dataset comprising the top 10 featured petitions at Change.org and MoveOn Petitions, collected daily over a six-month period. From this dataset, it becomes remarkably clear just how different these two analytics-reliant organizations are, despite featuring massive, open petition platforms where any citizen can create a petition with the potential to reach millions and galvanize social change. This chapter both offers examples of what analytic activism looks like in practice and highlights the importance of organizational variables for understanding digital advocacy and digital activism. Different organizational logics drive each of these petition sites: these logics determine how analytic tools are deployed and what types of petitions, issues, and campaign victories are promoted as a result.
Chapter 4 then uses the case of Upworthy.com to illuminate how persuasive political information now travels in new ways. The chapter offers a rejoinder to the “echo chamber” hypothesis that has been a fixture of theories of online politics for nearly 20 years. Through a longer look at the emergence, development, and growth of Upworthy.com, we can see how the shift from an Internet of search engines to an Internet of social sharing fundamentally affects the opportunity structure for social movement organizations. Upworthy itself is not an activist organization, but it represents a change in the media system, which, in turn, alters the available routes to movement power.
Chapter 5 takes up the themes of the analytics floor and the analytics frontier in greater detail. After more thoroughly defining each of these boundary conditions, the chapter offers multiple case examples of organized efforts to push against these current limitations. These include “big listening” projects that leverage external analytics data to help small activist organizations adapt to the digital environment, as well as list-pooling efforts that let small organizations run shared experiments and thereby navigate the analytics floor. They also include systematic efforts to define new campaigning and organizing metrics to help activist organizations optimize for power-building instead of “vanity metrics,” as well as pilot projects in governance gamification that represent a radical increase in the governance role that online supporters might one day play within their political associations. The purpose of chapter 5 is both to define the current limits of analytic activism and to illustrate how organizations are attempting to move beyond those limits.
(p.26) In the concluding chapter, I offer a broader assessment of what analytic activism is and is not currently capable of. This chapter returns to a theme from my previous book, the loss of beneficial inefficiencies. Beneficial inefficiencies are important social and institutional functions that were provided in the organizational ecology of the previous media regime by virtue of its very inefficiency.16 Analytic activism, like all digitally mediated institutions, is still in a state of becoming, and it can become better or worse over time, depending on how a constellation of practitioners and interested supporters choose to help it develop. So I conclude the book both by summarizing what we have (hopefully) learned and by highlighting the central problems that must be addressed moving forward.
(1) Notable exceptions include Kreiss (2012); Nielsen (2012); Karpf (2012a); Bimber, Flanagin, and Stohl (2012); Vromen and Coleman (2013); Chadwick (2013); Costanza-Chock (2014); Baldwin-Philippi (2015); and Wells (2015). As you might notice from these publication dates, we are collectively starting to do a better job of taking organizations seriously again.
(2) Darren Halpin (2014) raises a key point in this regard: there is a difference between treating groups as a unit of analysis and studying the structures, incentives, and pressures that shape organizations. When we treat organizational form as a black box, the impact of analytic activism is suddenly rendered invisible, left hiding in plain sight.
(3) Throughout this book, I will switch between the terms “activism” and “advocacy.” I’ll also describe the organizations in this study with a variety of terms: “activist groups,” “advocacy groups,” “political associations,” and “social movement organizations.” This is meant to signal that, though each of these terms is associated with its own divergent academic literature, in practice the boundaries between them are indistinct. Kenneth Andrews and Bob Edwards made this point in a 2004 article reviewing the “disconnected literatures on social movements, interest groups, and nonprofit organizations” (479). I find their argument persuasive, so I treat the terms as interchangeable.
(5) Incidentally, I was in Washington, DC, for the initial Step It Up day of action. Having heard a constant drumbeat about the event through listservs, discussion boards, blogs, and other niche media, I arrived at my parents’ home that weekend and told them why I was in town. My mother was a welfare rights organizer in the 1970s, and my father voted for Nader. Neither of them had heard about the event. In the post-broadcast media environment, you can efficiently target your message to the niche audience you seek to mobilize. But lost in the process is the beneficial inefficiency of spillover information, wherein untargeted individuals become generically aware that a social movement is under way.
(6) Dan Mercea and Marco Bastos (2015, 2016) have likewise traced the role of “serial activists” in transnational social movements—people who repeatedly use social media to help publicize, support, and orchestrate protest events. And Hadas Eyal (2016) has demonstrated that, among Israeli NGOs, “digital fit” is a key determinant of traditional media coverage. Though I focus mostly on American case examples in this book, there is strong evidence for similar changes on the global scale.
(7) Interview notes, Hallie Montoya Tansey, July 25, 2013.
(8) Not all of the Obama campaign’s data and experimentation were digital in nature. The campaign ran experiments to improve its phone call scripts and field contacts, for instance. The culture of testing and experimentation can be applied beyond the scope of digital media.
(9) Interview notes, Michael Grenetz, February 19, 2014.
(10) “Internet Time” is a phenomenon that occurs primarily at the content layer of the Internet. The physical layer of the Internet (the cables and wires and fiber) and the protocol layer (computer interoperability standards) are prone to stability and even anticompetitive behavior. See Zittrain (2008, ch. 4), for a discussion of the multiple layers of the medium.
(11) Technically, with a large enough effect size one might still observe statistically significant results with small lists. The New Organizing Institute has demonstrated this point through a small controlled experiment proving that reminder phone calls increase the response rate to online surveys. But in practice, there are very few nontrivial, nonobvious findings that an activist group can obtain with such small lists.
(12) Evan Miller has developed a free online tool that provides these calculations: http://www.evanmiller.org/ab-testing/sample-size.html (accessed June 25, 2016). His 2010 essay, “How Not to Run an A/B Test,” is also frequently cited by practitioners: http://www.evanmiller.org/how-not-to-run-an-ab-test.html (accessed June 25, 2016).
(13) http://www.evanmiller.org/ab-testing/sample-size.html#!5;80;5;1;0 (accessed June 25, 2016).
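The intuition behind notes 11–13 can be made concrete with a standard two-proportion power calculation, the same kind of arithmetic that underlies Evan Miller's online calculator. This is a minimal sketch, not the author's method or Miller's exact implementation; the 5% baseline action rate and one-point lift below are illustrative assumptions, but they show why a list of a few thousand addresses sits below the analytics floor for detecting modest differences between two email variants.

```python
from math import ceil, sqrt
from statistics import NormalDist

def ab_sample_size(p1, mde, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided, two-proportion A/B test.

    p1:  baseline action rate (e.g. 0.05 for a 5% click rate)
    mde: minimum detectable effect, in absolute terms (e.g. 0.01
         to detect a lift from 5% to 6%)
    """
    p2 = p1 + mde
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / mde ** 2)

# Detecting a one-point lift over a 5% baseline requires roughly
# eight thousand recipients *per variant* -- far more than a small
# activist group's list can supply.
print(ab_sample_size(0.05, 0.01))
```

Note that the required sample size shrinks rapidly as the effect grows, which is why note 11 observes that only very large effects are detectable with small lists.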
(14) I say “close” to zero because of two limiting factors. The first is email acquisition cost. This can be zero for organic growth, but many organizations either pay to acquire emails directly or hire staff members who are charged with list growth. The second is email deliverability. Lists with low response rates are in danger of being flagged as spammers, in which case their messages either will never reach supporters or will be automatically diverted to a spam box. Many organizations manually remove dormant email addresses specifically as a response to deliverability challenges.
(16) Think, for instance, of the strengthened social ties that come through operating a phone tree. The purpose of the phone tree is to distribute information. A spillover effect is that it builds relationships among neighbors. When we replace inefficient phone trees with more efficient neighborhood listservs, we lose the beneficial relationship-building that accrued through the inefficient medium.