Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics

Yochai Benkler, Robert Faris, and Hal Roberts

Print publication date: 2018

Print ISBN-13: 9780190923624

Published to Oxford Scholarship Online: October 2018

DOI: 10.1093/oso/9780190923624.001.0001


Mammon’s Algorithm

Marketing, Manipulation, and Clickbait on Facebook

Chapter:
9 Mammon’s Algorithm
Source:
Network Propaganda
Author(s):

Yochai Benkler

Robert Faris

Hal Roberts

Publisher:
Oxford University Press
DOI:10.1093/oso/9780190923624.003.0009

Abstract and Keywords

This chapter examines three main threats to American democracy related to Facebook: Facebook microtargeting and dark ads, behavioral manipulation by Cambridge Analytica, and political clickbait factories. The chapter considers the effectiveness of psychographically microtargeted advertising and the question of exposure to “fake news” on Facebook during the months leading up to the 2016 presidential election. The chapter explains why behaviorally informed, microtargeted dark ads are an important novel threat to democratic practice independent of the overall architecture of the media environment, even as it cautions that there is no evidence to support this theory. It also argues that both the Cambridge Analytica and political clickbait threats were overstated.

Keywords: democracy, Facebook, dark ads, Cambridge Analytica, political clickbait factories, microtargeted advertising, fake news, presidential election

Facebook sits at the center of the epistemic crisis after 2016. In the immediate aftermath of the election, Craig Silverman’s stories on BuzzFeed focused on the “fake news” clickbait fabricators that garnered more Facebook engagement than traditional media outlets. In the middle of 2017, Facebook’s willingness to sell advertising to Russian operatives and its hosting of a number of prominent Russian sockpuppet groups put it in the hot seat. By early 2018 the long-simmering story of Cambridge Analytica, the U.K.-based data analytics firm that had obtained tens of millions of Facebook profiles in order to develop techniques for manipulating voters, boiled over and spilled onto Facebook’s lap.

The fundamental problem is that Facebook’s core business is to collect highly refined data about its users and convert that data into microtargeted manipulations (advertisements, newsfeed adjustments) aimed at getting its users to want, believe, or do things. Actors who want to get people to do things—usually to spend money, sometimes to vote or protest—value that service. Describing this business as “advertising” or “behavioral marketing” rather than “microtargeted manipulation” makes it seem less controversial. But even if you think that microtargeted behavioral marketing is fine for parting people with their money, the normative considerations are acutely different in the context of democratic elections. That same platform-based, microtargeted manipulation used on voters threatens to undermine the very possibility of a democratic polity. That is true whether it is used by the incumbent government to manipulate its population or by committed outsiders bent on subverting democracy. The clickbait factories, the Russians, and Cambridge Analytica all took advantage of the intentional design of Facebook’s system. We dedicated Chapter 8 to the Russians and discuss the other two here. And while we remain unpersuaded by the evidence that any of these three distinct abusers made a significant impact on the election, the basic business of Facebook, when applied to political communication, presents a long-term threat to democracy. We dedicate a good bit of Chapter 13 to addressing how at least political advertising can be regulated to constrain the abusive potential of the model.

The Basic Threat: Platforms of Persuasion

Less than a month before Election Day, two reporters for Bloomberg, Joshua Green and Sasha Issenberg, were given access to the inner workings of the Trump digital campaign.1 The context, as they describe it, was that “[a]lmost every public and private metric suggests Trump is headed for a loss, possibly an epic one.” They wrote their story before James Comey’s October surprise—his announcement that the FBI was reopening an inquiry into Hillary Clinton’s use of a private email server—shifted the course of the election. Green and Issenberg followed Brad Parscale, the head of the Trump digital campaign. Parscale was authorized to tweet on Trump’s behalf, and Green and Issenberg describe him shooting off a message while Trump was onstage at a campaign event: “Crooked @HillaryClinton’s foundation is a CRIMINAL ENTERPRISE. Time to #DrainTheSwamp!”

In October 2016, the Trump campaign had what appeared to be a highly sophisticated digital campaign that was focused on 13.5 million voters in 16 battleground states that they believed were persuadable. As Green and Issenberg told it, the strategy was not to expand the electorate but to shrink it through targeted voter suppression campaigns. The campaign messaging was explicitly negative and focused on three target populations that the Clinton campaign hoped to win by a large margin: African Americans, young women, and “idealistic white liberals.” The Trump campaign had long sought to discourage supporters of Bernie Sanders from converting to Clinton voters and tried to exploit and exacerbate divisions on character and policy. For Sanders supporters, Clinton was weak on trade policy, having tried to thread the needle between prior support for the Trans-Pacific Partnership and the political climate that had soured on free trade agreements. The Trump campaign also sought to drive a wedge between Clinton and African American voters by highlighting Clinton’s use of the term “superpredators” in a 1996 speech. Trump had tweeted earlier in August “How quickly people forget that Crooked Hillary called African-American youth ‘SUPER PREDATORS’ - Has she apologized?”

The centerpiece of the digital campaign was named Project Alamo and included voter data used for fundraising and digital political advertising. The voter database, which included data provided by the Republican National Committee (RNC) and Cambridge Analytica, reportedly had 4,000 to 5,000 data points on 220 million Americans. By the end of the campaign, the digital platform had elicited several million small donations that totaled more than a quarter billion dollars. While prospects for the Trump campaign did not look promising in mid-October, the campaign had come a long way since June 2016, when it appeared to be dead in the water. Then, the campaign had little to no money in the bank, a total of 30 staffers on its payroll, and virtually no ad spending in swing states.2 Many thought at the time that Trump would outsource the campaign to the RNC, though relations between the Trump team and the RNC were tepid at best. A month prior to the July convention, Trump had replaced campaign manager Corey Lewandowski with Paul Manafort and had hired Parscale to lead his digital campaign. Parscale had no experience in politics but had built websites for the Trump family businesses. Whether by necessity or design, the campaign focused its attention on the digital campaign and allocated its available funding to buying ads on Facebook.

It is remarkable that the Trump campaign was able to spin up a competitive campaign over the next several months with so little experience and using a thoroughly heterodox approach. For the digital campaign, the answer lay in Facebook. The social media giant had built out capabilities specifically tailored to make it a powerful, affordable, and indispensable tool for political campaigns. By partnering with firms such as Acxiom, Facebook allowed campaigns to target voters drawing on multiple sources of data that linked together Facebook accounts with email addresses, postal addresses, phone numbers, and any number of data points on specific American voters.3 Facebook provided an interface that allowed campaigns to target specific voters, geographic regions, or demographics or to send ads to hyperspecific segments of the population based on this personal data. This capability was coupled with tools—designed first for commercial applications—to quickly evaluate how well different alternatives of the same message elicit engagement in the target audience. This A/B testing supported broad-scale experimentation and removed much of the guesswork from advertising. Green and Issenberg reported that during the campaign, the Trump campaign created “100,000 distinct pieces of creative content.” African American audiences were reminded of Clinton’s superpredator remark while likely Sanders supporters were told about how the DNC rigged the primaries for Clinton.
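The statistical comparison that such A/B testing automates at scale is straightforward. The sketch below is purely illustrative (the click counts are invented, not campaign data): a standard two-proportion z-test deciding whether one ad variant’s click-through rate is genuinely better than another’s, or whether the gap is plausibly chance.

```python
import math

def ab_z_score(clicks_a, shown_a, clicks_b, shown_b):
    """Two-proportion z-score comparing click-through rates of two ad variants."""
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    p = (clicks_a + clicks_b) / (shown_a + shown_b)   # pooled click rate
    se = math.sqrt(p * (1 - p) * (1 / shown_a + 1 / shown_b))
    return (p_a - p_b) / se

# Hypothetical test: variant B out-clicks variant A.
z = ab_z_score(clicks_a=540, shown_a=10_000, clicks_b=610, shown_b=10_000)
# |z| > 1.96 means the difference is unlikely to be chance at the 5% level,
# so an automated system would shift spending toward the winning variant.
print(f"z = {z:+.2f}, significant: {abs(z) > 1.96}")
```

With tens of thousands of creative variants, a platform simply runs comparisons like this continuously and reallocates impressions to the winners, which is what removes the guesswork the chapter describes.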

Data-rich behavioral microtargeting approaches to political campaigning are not new. As Politico reported on the work of Karl Rove in the 2004 campaign: “Microtargeting became the rage of the 2004 campaign after Rove green-lighted a project to use a wider array of databases to identify potential Bush voters. The campaign bought data that allowed it to cross-reference religious affiliations, shopping habits and club memberships to unearth pockets of Bush supporters in normally hostile or inscrutable areas.”4 More prominently, the Obama presidential campaign of 2008 and particularly the much-lauded “nerds” of Obama’s 2012 campaign used data-intensive methods to model voter behavior and target specific groups for their messaging and engagement efforts.5 Clinton hired some of those celebrated nerds for her campaign.6 And the success of the Obama campaign efforts drove the RNC to invest in their own data collection and analytics in preparation for the 2014 and 2016 elections. Those RNC efforts were complemented by the Koch brothers’ major investments in data mining to identify voters and test the effectiveness of various campaign strategies7 and by Robert Mercer’s investment in Cambridge Analytica.8

The ideas of using internet technologies to collect data and using as many data points as possible to deliver as microtargeted a message as possible are not new. But the dynamics that have increased the efficacy of big data analysis in general were at work here as well: Facebook’s massive footprint; the increased storage and processing capacity to allow major platforms to refine and scale data analysis; and the development of machine learning algorithms to extract meaning from ever larger data sets. And Facebook has carved out a uniquely powerful position in the world of political campaigning. Not only does the company provide access to tools that leverage fine-grained data on tens of millions of Americans, but, as Daniel Kreiss and Shannon McGregor showed, Facebook offered to embed company representatives within campaigns to work elbow to elbow with campaign staff.9 For Parscale, the transition from business to politics was an easy one: “I always wonder why people in politics act like this stuff is so mystical. It’s the same shit we use in commercial, just has fancier names.”10 In a “60 Minutes” interview on CBS after the campaign, Parscale told viewers, “I understood early that Facebook was how Donald Trump was going to win. Twitter is how he talked to the people. Facebook was going to be how he won.” As our analysis of mainstream media coverage in Chapter 6 and of offline media in Chapter 11 makes quite clear, Trump supporters primarily used some combination of Fox News, broadcast television, and talk radio as their primary media, and Trump’s candidacy benefited from an extraordinary amount of mainstream media coverage that had an estimated value in the billions of dollars.11 But there is no reason to doubt Parscale’s assertion that Facebook was the lifeblood of the Trump digital campaign and, more importantly, the focus of the majority of the overall campaign’s spending.

Writing in the New York Times just after Obama’s November 2012 re-election, Zeynep Tufekci sounded an early alarm on the downsides of big data-fueled political campaigns.12 She emphasized two primary concerns that remain in play today. The first concern is campaigns taking “persuasion into a private, invisible realm” where opponents and watchdogs have no ability to respond. The second concern is that the science of persuasion is getting better and the ability to manipulate voters through the digital realm is heightened, offering an inroad for influencing voters through emotion and irrational biases. Tufekci followed in 2014 with an expanded academic critique of “computational politics” that turns “political communication into an increasingly personalized, private transaction and thus fundamentally reshapes the public sphere, first and foremost by making it less and less public . . .”13 Many of the concerns she articulated then played out in spades during the 2016 campaign, particularly the application of microtargeting in combination with dog-whistle politics and fear mongering that tore at deep social divides in the United States. The infrastructure for harvesting behavioral data online continues to advance by linking together data gathered on a range of different services and platforms. Dipayan Ghosh and Ben Scott describe in detail the many layers and techniques used to collect, aggregate, and utilize user data to carry out propaganda on the internet.14 The range of approaches they describe were developed with commercial applications and advertising dollars in mind and repurposed for political ends. They say, “The simple fact that disinformation campaigns and legitimate advertising campaigns are effectively indistinguishable on leading internet platforms lies at the center of our challenge.”

The most extensive study published to date on microtargeted Facebook political advertising was done by a team led by Young Mie Kim of the University of Wisconsin. Kim and her collaborators recruited a nationally representative sample of about 9,500 subjects to install a browser extension that collected data about ads they had been served in the six or so weeks leading up to the 2016 election. They analyzed the roughly five million ads that their subjects had been served, of which about 1.6 million were political ads. They found that about 11 percent of the ads were from “suspicious groups,” meaning groups banned by Facebook, lacking content after Election Day, or lacking profile information. Another 5.6 percent were delivered by groups identified as Russian-operated by the House Intelligence Committee. This is clearly a lower bound on the prevalence of Russian Facebook advertising. Twenty percent of the ads were from “astroturf” or otherwise unregistered groups, and another 20 percent were from legitimate nonprofits that had not reported to the Federal Election Commission (FEC) as a political action committee (PAC) or reported any other electioneering activities or independent expenditures (the ads were issue ads, that is, ads supporting an issue rather than explicitly a candidate, so reporting was not necessarily required). Only about 11 percent were from groups actually registered with the FEC, and another 5 percent came from various questionable news sites. The largest category of ads, slightly over one-quarter, was “other”: sensationalist clickbait and links to meme generators. Examining who was targeted by these ads, Kim and her coauthors found that the ads concentrated on battleground states; perhaps the most troubling finding was that households earning less than $40,000 were most heavily targeted with advertisements focusing on immigration and racial conflict.
The study did not, however, report individual-level microtargeting, as opposed to broader geographic or demographic targeting.

Tufekci’s work and Ghosh and Scott’s work suggest that large-scale data analysis will eventually make microtargeting much more effective than it is now. Because manipulations will happen at the level of the individual user, campaigns will be able to sharpen messages, including those intended to elicit fear and loathing or to intimidate voters from turning out, free of the relative moderation enforced by public scrutiny. Kim’s study suggests that we have a little bit of time to address the concern but that solving the problem only for explicit formal electioneering will not solve the much more pervasive problem of “dark ads,” advertisements seen only by their narrowly-targeted intended recipients, and therefore unavailable for public scrutiny, and dark money, political funding whose sources are not disclosed. Indeed, the primary finding is that Facebook makes it easy and cheap for even unsophisticated, informal groups to leverage microtargeting techniques. It is precisely because of this longer-term threat that in Chapter 13 we emphasize the importance of disclosure requirements and even more so the creation of a comprehensive public record of all the ads, electioneering and issue ads, in a way that makes them accessible to third parties for continuous monitoring and exposure.

Plumbing the Political Soul: The Dirty Tricks Crew

The concern over the role of psychological profiling in political persuasion hit the headlines in 2018 over the tactics and role of Cambridge Analytica. Cambridge Analytica was founded in late 2013 by Robert Mercer and Steve Bannon. An offshoot of SCL, a British strategic communications company, Cambridge Analytica was created to utilize cutting-edge data analysis techniques to provide insights on audiences to inform public communication and persuasion efforts. The investment provided Mercer and Bannon with a key entity in the political communications and influence game. The company engaged in political campaigns around the globe and frequently touted its “special sauce”—psychographic profiling techniques meant to uncover political leanings of voters that would not otherwise be apparent, perhaps even to the voters themselves, and to offer guidance on how these voters might be more effectively persuaded by striking specific deep-seated emotional chords. The company derived this technique from research at the University of Cambridge. The researchers who conducted the underlying academic research (but were not associated with Cambridge Analytica), Michal Kosinski, David Stillwell, and Thore Graepel, combined personality surveys with Facebook data to show that “digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.”15 After Cambridge Analytica failed to convince these researchers to work with them, the company hired Alexander Kogan to replicate their methodology. Kogan created a Facebook app and recruited participants on Mechanical Turk to take a survey and install the app, which harvested their Facebook data and data from their networks of friends. This presumably enabled Cambridge Analytica to map personality traits against survey-takers’ behavior on Facebook (for example, what they “liked”) and then to make similar personality inferences about any Facebook user. Through Kogan’s app, the company ended up with data on 87 million Facebook users.
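The core of the Likes-to-traits inference can be sketched very simply. The toy example below is only a conceptual illustration (all page names and trait scores are invented, and the real study used far larger data and proper regression models): learn from surveyed users how strongly each Like is associated with a trait, then score unsurveyed users from their Likes alone.

```python
from collections import defaultdict

# Survey sample: (set of liked pages, extraversion score from a personality test).
# Entirely made-up data for illustration.
survey = [
    ({"skydiving", "parties"}, 0.9),
    ({"parties", "concerts"}, 0.8),
    ({"chess", "poetry"}, 0.2),
    ({"poetry", "libraries"}, 0.1),
]

# For each page, compute the average trait score of the users who liked it.
totals, counts = defaultdict(float), defaultdict(int)
for likes, score in survey:
    for page in likes:
        totals[page] += score
        counts[page] += 1
weights = {page: totals[page] / counts[page] for page in totals}

def predict(likes):
    """Predict the trait for an unsurveyed user as the mean weight of their Likes."""
    known = [weights[page] for page in likes if page in weights]
    return sum(known) / len(known) if known else None

print(predict({"parties", "skydiving"}))  # high score -> likely extravert
print(predict({"poetry", "chess"}))       # low score  -> likely introvert
```

The point of harvesting friends’ data as well as participants’ data is that once the page weights are learned from the surveyed sample, the model can be applied to anyone whose Likes are visible, which is how a few hundred thousand survey-takers could yield inferences about 87 million users.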

After the November 2016 election, Cambridge Analytica eagerly took credit for the “pivotal role” they played in the unexpected success of the Trump campaign. Parscale offered a different narrative that downplayed the role of Cambridge Analytica and reserved more of the credit for the Trump digital team. Many observers voiced great skepticism over the hype of psychometrics that Cambridge Analytica was selling,16 some referring to the product as snake oil.17 Ultimately, the company acknowledged that it had not used its psychographic special sauce for the Trump campaign. The company did reportedly contribute to the Trump campaign in other ways, however, by supplying data, coordinating ad spending, and providing strategic advice.

The saga of Cambridge Analytica took a downward turn in March 2018. A former employee and whistle-blower, Chris Wylie, revealed that Cambridge Analytica had collected Facebook data on false pretenses—forcing Facebook to publicly acknowledge this issue—and had not deleted the data when required to do so. A more damaging revelation occurred when executives of the company, including CEO Alexander Nix, were recorded by an investigative team from Channel 4 News in Britain claiming to have entrapped politicians using bribes and sex workers.18 The recorded conversations also captured the managing director, Mark Turnbull, describing the use of questionable tactics to surreptitiously influence public discourse by tapping into the deep-seated fears of voters: “You just put information into the bloodstream of the internet, and then you watch it grow, give it a little push every now and again, over time to watch it take shape.” This was done, he said, in a way that is “unattributable, untrackable.” In the taped conversation, the Cambridge Analytica team also took credit for crafting the “defeat crooked Hillary” campaign, and Nix reported that they provided for the Trump campaign “all the research, all the data, all the analytics, all the targeting, we ran all the digital campaign, the television campaign, and our data informed all the strategy.” In May 2018, the company closed down without ever demonstrating the effectiveness of its behavioral profiling approach.

A key outstanding question is how persuasive microtargeting tools based on social media usage actually are. This may seem like a silly question given all the money spent on online advertising and the value the stock market places on data-driven marketing potential. But it is in fact remarkable how little credible evidence there is that targeted political advertising based on social media usage works any better than techniques that already existed 15 years ago or even longer.19

The best publicly available scientific evidence that Facebook-based psychographic data is effective, and, indeed, possibly more effective than existing marketing techniques, comes from a pair of papers coauthored by two of the authors of the original 2013 paper using Facebook data to identify personality traits, Kosinski and Stillwell, along with other collaborators. In 2015 they published an article that showed that machine prediction of personality was more accurate than human prediction.20 In 2017, they published an article showing that advertising designed to fit the recipient’s personality attribute (e.g., giving an extrovert and an introvert a different tenor and frame to advance the same product or purpose) performed better for people who possess that attribute than advertising designed for people with its opposite (that is, showing an introvert an advertisement designed for an extrovert, or showing a person low on openness an advertisement designed to fit a person high on that measure).21 First, although the 2015 paper did show that machines could predict personality traits better than people, they could only do so slightly better, and both humans and machines did either a touch worse or a touch better than a coin toss. Second, the 2017 paper is the strongest published evidence supporting the idea that personality traits predicted from Facebook usage can be used to design advertising that affects behavior and, indeed, that it improves on existing marketing techniques.

The effect sizes in this 2017 study, however, suggest that the concerns about Cambridge Analytica’s impact are likely exaggerated. The 2017 paper reported on the results of three experiments. The first experiment matched the ad type to the personality type and did not significantly change click-through rates, but on average it increased actual purchases among those who did click through by a factor of 1.54. To get a sense of the order of the effect, in that experiment they ran the manipulation on over three million people and achieved 390 individual purchases, including purchases by people on whom the manipulation did not work as expected. The second experiment focused on openness. It showed an effect on low-openness users, but not on high-openness users, and had some effect on click-through and conversion, but again achieved only 500 app installs out of over 84,000 manipulations. The much higher conversion rate is likely due to the fact that installing the app was free. Voting, in this regard, is more likely to resemble actually paying money than installing a free app. The third campaign compared a company’s standard marketing message with a personality-fit message. The personality-informed ad improved the conversion rate from 0.32 percent to 0.37 percent, an increase of 0.05 percentage points; that is, for every 10,000 people exposed to the advertisement, 5 more people installed the app with the psychographically informed advertising than would have been predicted to install after seeing the standard advertising the company was already using.
In a state like Pennsylvania, which saw slightly over 6.1 million voters in 2016, if you correctly identified every single voter’s personality (and remember, the 2015 paper suggested an accuracy level between 50 and 60 percent), if every personality trait were as effective as the most effectively manipulated personality trait, if you reached every single potential voter, and if you got this level of improvement, you could shift about 3,000 votes more than standard techniques (assuming that, before Facebook psychographic data, political advertising and commercial marketing were roughly equivalent in effectiveness). Trump beat Clinton in Pennsylvania by about 44,000 votes. If, more plausibly, the effect size were like the effect the researchers saw when they actually asked people to buy something, the effect would be a few hundred votes across the entire state.
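The back-of-envelope arithmetic can be made explicit. The sketch below uses only the figures already cited in this chapter (the third campaign’s 0.32 to 0.37 percent conversion lift, Pennsylvania’s roughly 6.1 million voters, and Trump’s roughly 44,000-vote margin); it introduces no new data.

```python
# Best-case estimate of the extra "conversions" (votes) that
# psychographically tailored ads could produce in Pennsylvania,
# using the 2017 study's third-campaign lift.

PA_VOTERS_2016 = 6_100_000        # approx. Pennsylvania turnout, 2016
TRUMP_MARGIN = 44_000             # approx. Trump's winning margin

# Conversion rose from 0.32% to 0.37%: 5 extra installs per 10,000 exposed.
LIFT = 0.0037 - 0.0032            # 0.05 percentage points

# Assume every voter is correctly profiled, reached, and maximally responsive.
max_shift = PA_VOTERS_2016 * LIFT
print(f"max extra votes: {max_shift:,.0f}")              # about 3,000
print(f"share of winning margin: {max_shift / TRUMP_MARGIN:.0%}")
```

Even under these maximally generous assumptions, the shift is well under a tenth of the actual margin, which is the quantitative basis for the chapter’s skepticism.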

Even this low estimate vastly overstates the likely impact. Recall that the imperfect accuracy of identifying users’ personalities should roughly halve the effect. Worse, misidentified targeted appeals have been shown, in a study of the effect of canvassing in Wisconsin in the 2008 election, to actually turn off voters who are misidentified and who do not want to be contacted, and to turn them against a candidate.22 Another study suggested that narrowly targeted appeals turn off some voters while appealing to others, but it is unclear whether the gains among the latter outweigh the losses among the former.23 More generally, not everyone uses Facebook often enough to be a useful part of a target audience, and we should expect the efficiency gain from a microtargeted advertising campaign to decline as one moves from the most predictably sensitive potential voters, to those who are less clearly identifiable, to those who will actually respond negatively, creating an upper bound on the effectiveness of a campaign well below the maximum theoretically reachable population.24 All this ignores the fact that the advertising is conducted in the context of a blitz of advertising, backed up by door-to-door canvassing, phone banking, and a range of outreach strategies based on a wide range of prior data from real-world activity, from voting patterns and political donations, through memberships, mailing lists, and other commercially available transactional marketing data.

We are not arguing that psychographically microtargeted advertising can never work or will never work. Nothing in the available public record excludes it from becoming the kind of advertising juggernaut that it was hyped up to be in Nix’s public presentations. But unless Cambridge Analytica somehow succeeded within the span of two years in vastly improving on the techniques developed by the academics who invented the techniques on which the company was founded, it seems more likely that the Cambridge Analytica public announcement greatly exaggerated their role in the Trump campaign than that the Trump digital team dramatically underplayed the company’s contribution. The impact of the more sophisticated versions of psychographics that raised alarms also appear to be overblown.

What is clear is that Facebook was a far more important tool for the campaign and that its microtargeting functionality is considered by political consultants to be a potent and widely used campaign tool. Facebook’s initial public response to the Cambridge Analytica story emphasized the “breach of trust” on the part of Cambridge Analytica—the fact that the company harvested the data of millions of Facebook users without complying with the company’s terms of service. From the perspective of basic threat to democracy, however, the privacy violation was significantly less important than the fact that Facebook collects and uses all these data, and that it imposes on its users terms of service that certainly allow the company and its clients, if not outsiders like Cambridge Analytica, to use the data as manipulatively as they choose. For all the reasons we outlined in our skepticism about Cambridge Analytica’s impact, we should be similarly cautious to impute to Facebook magical powers of persuasion. Nonetheless, it is plausible that microtargeting will improve as the algorithms for identifying personal characteristics improve; that it will be more effectively targeted only at those subjects most likely to be affected as desired; and that voter suppression campaigns on social media in particular may put a candidate over the top in very close campaigns. There is no evidence to confirm that this is true. Using tailored advertisements to change hearts and minds, and more importantly voter behavior, is still primarily an act of faith, much like most of the rest of online advertising.25 In Chapter 13 we do, however, suggest that the basic risk of undermining voter autonomy and the almost certain erosion of our collective confidence in the legitimacy of election outcomes are sufficient grounds to recommend that individually tailored or too narrowly targeted advertising techniques be constrained in the political context.
At a minimum we argue that platforms like Facebook should be required to maintain all of their ads and ad experiments in a publicly accessible database, so that abusive practices can be exposed by opposing candidates or independent researchers.

Clickbait Fabricators Meet Facebook’s Algorithms

“If there’s one fundamental truth about social media’s impact on democracy it’s that it amplifies human intent—both good and bad. At its best, it allows us to express ourselves and take action. At its worst, it allows people to spread misinformation and corrode democracy.”

These words were written not by a technology pundit or new-media critic but by a Facebook product manager focused on civic engagement. He goes on to say: “I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t.”26

The success of Facebook is a story of scale. The company figured out how to maintain a platform that serves over two billion users as well as advertisers who generate tens of billions of dollars in revenue, and the only way to do this was to create automated interfaces that require few humans in the loop. Advertisers need not ask for permission to place an ad, and user interactions are governed by algorithm.

The Facebook News Feed algorithm, from which almost half of adult Americans get news at least sometimes,27 has altered the media landscape. In particular, it has given rise to a large number of new media organizations, many of which exist only thanks to the advertising revenue they receive from publishing on Facebook. Just as Web 2.0 did for the first generation of bloggers, Facebook dismantled the obstacles to participating in the news ecosystem for these entities; unlike Web 2.0, however, it also solved the revenue question on the same platform. Capital requirements are modest. The ticket to success is figuring out how to work with the algorithm that determines how many people will see your articles, and this depends in large part on the actions of readers. If you can figure out how to get people to click on your article, you are in business.

This has provided opportunities for smaller media organizations to compete for attention with large, well-funded, and established media sources. Because the News Feed serves up a stream of news stories coming from many media sources, the distinction between sketchy and reputable news sites is diminished. We see audiences that focus attention not only on the New York Times but also on the Palmer Report, and we see the overlap in audiences of Fox News and Truthfeed. This does not mean that hyperpartisan clickbait always wins on social media, or that all small media on Facebook are clickbait fabricators. But the economic rewards of producing media that use anger, outrage, ridicule, and tribal bonding are immediate and significant. Producing political clickbait is cheap enough to sustain a modest business—individuals can sit at home and make a living producing bullshit that gets people to click on it for the entertainment value—or a real business, as entrepreneurs have set up shop to produce many of these clickbait fabrication plants on both sides of the political divide.28 As a challenge to democracy, we can think of the political effects of clickbait factories as emissions of the commercial model of Facebook. They are external costs imposed on democracy as a side effect of the business model of matching advertising dollars to clicks, in the presence of human users whose cognitive apparatus includes the capacity for both system one (autonomous, rapid, unconscious) and system two (slow, reflective) decision-making.29 Unsurprisingly, clickbait fabricators have figured out that people are quicker to click on materials that trigger system one automatic unreflective responses than materials that appeal to system two controlled reflective decision-making.

Before we shake our heads at the upstarts, it is important to recognize that the tension between making money and writing high-quality news stretches back at least to the early penny presses and the need to sell copy. The corrupting influence of the need to sell to the widest possible audience has been a central and oft-repeated theme of media criticism at least since Harold Innis’s The Bias of Communication in 1951. At some level, Facebook clickbait is simply the grandchild of the tabloid headline and the Sun’s page 3, which, in turn, is simply the more frankly licentious cousin of the organizational pressures that drove the New York Times and Washington Post to choose the sensationalist and overblown headlines they did for the Clinton Foundation stories we described in Chapter 5. At another level, many of these sites, which often peddle pseudoscience conspiracy theories as gladly as they peddle political conspiracy theories, are merely the present-day incarnations of supermarket checkout-counter tabloids. And if we use those as a baseline, we should remember that in the 1992 election, the last fully pre-internet election, about 5 percent of the respondents to the Pew media survey reported that they regularly read the National Enquirer, the Sun, or the Star. That was roughly as many as those who said they regularly watched the “MacNeil/Lehrer NewsHour” on PBS or C-SPAN (6 percent each).30

Our own data support the proposition that the economic incentives and ease of reach that Facebook offered did, in fact, result in Facebook’s political content exhibiting more extreme partisanship than either Twitter or the open web. Political clickbait sites were most commonly found on the far edges of the political spectrum and were significantly more pronounced on the right. As we saw in Chapter 3, at the extremes, clickbait sites popular on Facebook were happy to traffic in pedophilia and rape stories on both the right and the left, but the pattern did not extend as far or as deep on the left as on the right. The political extremes are the habitat of the politically active, those who are most likely to engage in directionally motivated reasoning and willing to tolerate hyperpartisan media that often sacrifice accuracy and nuance in favor of sensational and politically biting content. When we view the most prominent media during the 2016 election by cross-media linking, Twitter shares, and Facebook shares (Tables 9.1–9.5), we see that in the center the top sites are generally consistent across these three measures. In the center-left and center-right, there are modest differences. When we move to the left and right, the differences are substantial. On the left, Occupy Democrats, Addicting Info, and Bipartisan Report, classic examples of hyperpartisan media, were popular on Facebook but not on Twitter or in the link economy. On the right, the top sites on Facebook included Conservative Tribune, Truthfeed, Western Journalism, Political Insider, and Ending the Fed.

Table 9.1 Most popular media on the left from May 1, 2015 to November 7, 2016.

Rank | Inlinks            | Twitter         | Facebook
1    | Huffington Post    | Huffington Post | Huffington Post
2    | MSNBC              | Politicus USA   | Politicus USA
3    | Vox                | Daily Kos       | MSNBC
4    | Daily Beast        | Raw Story       | Vox
5    | HillaryClinton.com | Salon           | Raw Story
6    | NPR                | MSNBC           | Daily Kos
7    | PolitiFact         | Mother Jones    | New Yorker
8    | Slate              | Think Progress  | Occupy Democrats
9    | Salon              | Daily Beast     | Addicting Info
10   | BernieSanders.com  | Vox             | Bipartisan Report

Table 9.2 Most popular media on the center-left from May 1, 2015 to November 7, 2016.

Rank | Inlinks         | Twitter         | Facebook
1    | Washington Post | CNN             | New York Times
2    | New York Times  | New York Times  | CNN
3    | CNN             | Washington Post | Washington Post
4    | Politico        | Politico        | NBC News
5    | Guardian        | Guardian        | US Uncut
6    | NBC News        | Mashable        | Guardian
7    | CBS News        | NBC News        | Time
8    | The Atlantic    | BuzzFeed        | Politico
9    | LA Times        | Time            | Daily News
10   | BuzzFeed        | BBC             | The Atlantic

Table 9.3 Most popular media on the center from May 1, 2015 to November 7, 2016.

Rank | Inlinks             | Twitter             | Facebook
1    | The Hill            | The Hill            | The Hill
2    | Wall Street Journal | Yahoo! News         | ABC News
3    | Business Week       | Wall Street Journal | Yahoo! News
4    | ABC News            | USA Today           | USA Today
5    | USA Today           | Business Week       | Wall Street Journal
6    | Reuters             | Reuters             | Business Insider
7    | Yahoo! News         | ABC News            | Mediaite
8    | Business Insider    | Business Insider    | Reuters
9    | Forbes              | Mediaite            | The Intercept
10   | CNBC                | Forbes              | Forbes

Table 9.4 Most popular media on the center-right from May 1, 2015 to November 7, 2016.

Rank | Inlinks         | Twitter             | Facebook
1    | National Review | Russia Today        | Real Clear Politics
2    | Observer        | Real Clear Politics | National Review
3    | RedState        | RedState            | Observer
4    | Weekly Standard | National Review     | TMZ
5    | Russia Today    | TMZ                 | Russia Today
6    | Reason          | Weekly Standard     | RedState
7    | The White House | Observer            | Federalist
8    | McClatchy DC    | Federalist          | Free Thought Project
9    | TMZ             | The Resurgent       | Patheos
10   | Morning Consult | Reason              | The Resurgent

Table 9.5 Most popular media on the right from May 1, 2015 to November 7, 2016.

Rank | Inlinks             | Twitter             | Facebook
1    | Breitbart           | Breitbart           | Breitbart
2    | Fox News            | Fox News            | Conservative Tribune
3    | DonaldJTrump.com    | Washington Examiner | Gateway Pundit
4    | New York Post       | Daily Caller        | Fox News
5    | Washington Times    | Gateway Pundit      | Daily Caller
6    | Daily Caller        | Right Scoop         | Truthfeed
7    | Daily Mail          | Daily Mail          | Western Journalism
8    | Washington Examiner | Infowars            | Political Insider
9    | WikiLeaks           | New York Post       | Ending the Fed
10   | Free Beacon         | Washington Times    | New York Post

Throughout this volume, we have treated hyperlinks as a core measure of authority and credibility among media producers. A large imbalance between that measure and Facebook shares seems to be highly predictive of a site being political clickbait. Among the 10 most popular sites on Facebook on the left were three media sources that were nearly invisible in the link economy: Addicting Info, Bipartisan Report, and Occupy Democrats. Though extremely popular on Facebook, these sites received 53, 37, and 25 inlinks, respectively, out of the 235,000 inlinks of this sample. More than a third of those links, in turn, were among these same three sources. Similarly, on the right, five sites among the top 10 had vanishingly small footprints in the link economy. The Conservative Tribune received just 45 inlinks from this sample, half of which came from Ending the Fed. The inlinks to Western Journalism number 130, half of which came from the Conservative Tribune. The Political Insider was the recipient of 66 links. Truthfeed received 46 inlinks, 40 of which came from the Gateway Pundit, and Ending the Fed had four inlinks. Sites that were low on inlinks but popular on Twitter, or on both Twitter and Facebook, were at least highly partisan and in many cases also among the worst offending conspiracy sites. During the election, low-inlink sites on the left included Politicus USA and Raw Story, both of which were popular on both Facebook and Twitter. On the right, the Gateway Pundit ranked third on Facebook and fifth on Twitter during the election period but was cited infrequently, with only a few hundred inlinks. Infowars ranked ninth on Twitter and eighteenth on Facebook, but received a small number of citations in the link economy. Another media source popular on social media but with very low traction in the link economy was the Right Scoop. It ranked seventh on Twitter and seventeenth on Facebook, but had only 221 links.
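The imbalance heuristic described above can be sketched in a few lines of Python. This is our own illustrative rendering, not the authors' method: the inlink counts for the low-inlink sites are the figures quoted in the text, while the 200-inlink cutoff and the Huffington Post count are hypothetical choices made for the example.

```python
# Illustrative sketch of the Facebook/inlink imbalance heuristic: a site
# that ranks in Facebook's top 10 for its partisan segment but collects
# almost no inlinks is a strong political-clickbait candidate.
INLINK_THRESHOLD = 200  # hypothetical cutoff, not the authors' choice

facebook_top10_inlinks = {
    # left (inlink counts quoted in the text)
    "Addicting Info": 53,
    "Bipartisan Report": 37,
    "Occupy Democrats": 25,
    # right (inlink counts quoted in the text)
    "Conservative Tribune": 45,
    "Western Journalism": 130,
    "Political Insider": 66,
    "Truthfeed": 46,
    "Ending the Fed": 4,
    # a well-linked site for contrast (hypothetical inlink count)
    "Huffington Post": 25_000,
}

# Flag every Facebook-popular site that falls below the inlink threshold.
clickbait_candidates = sorted(
    site
    for site, inlinks in facebook_top10_inlinks.items()
    if inlinks < INLINK_THRESHOLD
)
print(clickbait_candidates)
```

Under these assumptions, all eight low-inlink sites are flagged while the well-linked contrast site is not, which is exactly the separation the chapter observes between the link economy and Facebook popularity.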

Most sites that were particularly dependent on Facebook for attention, whether left or right, followed similar recipes. They engaged in little or no original reporting and freely borrowed from other sources, producing short posts or articles with provocative titles intended to drive social media traffic. Most of them were not only highly partisan but also featured the most questionable reporting, and have been frequently cited in discussions of fake news and criticized by fact-checking sites.31 It is important to emphasize that not all sites with this kind of profile are clickbait fabricators. The New Yorker has a much higher footprint on Facebook than on Twitter, largely due to the Facebook popularity of the Borowitz Report, a satirical take on current events and news. Its most popular story was titled “Stephen Hawking Angers Trump Supporters with Baffling Array of Long Words.” Similarly, the satirical publication The Onion has high Facebook sharing numbers but few tweets or inlinks.

By combining relative attention on Twitter, Facebook, and the link economy, we are able to place media sources in groups that are highly suggestive of their position in the larger media sphere. A distinct set of websites received a disproportionate amount of attention from Facebook compared with Twitter and media inlinks. From the set of media sources that were in the top 100 by inlinks or social media shares during the 2016 election, 13 sites fell into this category (Table 9.6). Many of these sites are identified by independent sources and media reporting as examples of inaccurate if not blatantly false reporting. Both in form and substance, the majority of this group of sites are aptly described as political clickbait. Again, this does not imply equivalency across these sites. The satirical site The Onion is an outlier in this group, in that it is explicitly satirical and ironic. The others engage in highly partisan and dubious reporting without explicit irony.

Table 9.6 13 media sources that received a disproportionate amount of attention from Facebook compared with Twitter and media inlinks during the 2016 presidential election.

Addicting Info
Bipartisan Report
Conservative Tribune
Daily Newsbin
Ending the Fed
Occupy Democrats
Political Insider
RedStateWatcher
The Onion
Truthfeed
US Uncut
Western Journalism
Young Cons

How important were these sites in the overall scheme of the election? Without quantifying how much of the overall communications environment was exposed to false stories produced by these clickbait factories, it is impossible to evaluate whether they had an impact, or whether they are even news. The most rigorous effort to quantify the question of exposure to “fake news” on Facebook during the months leading up to the 2016 U.S. presidential election was an article by economists Hunt Allcott and Matthew Gentzkow.32 Allcott and Gentzkow surveyed 1,208 adults about their media use and exposed them to a series of stories, some true, some false, that had been circulated during the three months before the election, and some that were “placebos”—stories they had made up for the experiment that had not in fact circulated at all. Their first finding was that about 70 percent of respondents recalled seeing the big true stories they included in their survey, and just under 60 percent of the respondents believed these stories. By contrast, only 15 percent of respondents reported remembering any of the fake news stories they asked about, and 8 percent reported that they had believed them. For comparison, 14 percent reported that they recalled having seen the placebo stories, and 8 percent reported that they had believed them. Allcott and Gentzkow calculate that a fake headline was about 1.2 percentage points more likely to be remembered than a placebo that was never published. Using these numbers, they calculated that an average American adult would have seen about 1.14 fake stories during the election cycle. They complement this survey data with traffic data that suggest that traffic to the 665 top news sites in their set was roughly 19 times as high as traffic to the 65 fake news sites they identified, and that even on those sites no more than 60 percent of the stories were fake.
The fundamental point of Allcott and Gentzkow’s work is that even if Facebook clickbait-type fake news had a large audience in absolute terms, on the order of hundreds of millions of visits, this still translates into a tiny fraction of the campaign news-related websites that people visited, and an even tinier fraction of the stories to which they were exposed. A later paper by Andrew Guess, Brendan Nyhan, and Jason Reifler confirmed Allcott and Gentzkow’s core finding that relative exposure across the American media ecosystem was likely low. They used data collected from the browsing habits of a representative sample of U.S. users and found that while quite a few Americans (1 in 4) “visited a fake news site” at least once, only about 2.6 percent of the articles Americans read about politics in September and October of 2016 were from such sites.33 This level of exposure is consistent with the 5 percent who responded in 1992 that they regularly got news from tabloids. Furthermore, Guess, Nyhan, and Reifler found that consumption of stories was much more concentrated than bare exposure to such sites. About 10 percent of the most highly partisan among the observed population accounted for 60 percent of the stories visited, further limiting the likely persuasive effect, as opposed to the identity-affirming and entertaining effect, of political clickbait.
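A back-of-envelope calculation using the traffic figures quoted above shows why the two estimates are mutually consistent. This is our own rough reading of the quoted numbers, not the authors' published method:

```python
# Rough consistency check using the figures quoted above (an assumption-
# laden sketch, not Allcott and Gentzkow's actual estimation procedure).
top_to_fake_traffic_ratio = 19   # top news sites drew ~19x the fake sites' traffic
fake_share_on_fake_sites = 0.60  # at most 60% of stories on fake sites were fake

# Of every 20 units of news-site traffic, 1 went to fake news sites,
# and at most 60% of that unit was actual fake stories.
fake_fraction = fake_share_on_fake_sites / (top_to_fake_traffic_ratio + 1)
print(f"Upper bound on fake-story share of visits: {fake_fraction:.1%}")  # 3.0%
```

The resulting 3 percent upper bound sits right next to the 2.6 percent share of political articles that Guess, Nyhan, and Reifler measured directly from browsing data.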

Despite their likely small effect on the election, political clickbait sites drew extensive public attention as the initial suspects in causing information disorder. The fact that they were so clearly prominent on Facebook put pressure on the company to do something. And because the company is so powerful relative to those who publish on it, it began to experiment over the course of 2017 and 2018 with a range of measures intended to tamp down on these clickbait sites. These included affordances that allowed users to flag suspicious stories, collaborations with fact-checking organizations to vet them, and “disputed” flags or related-information tags on articles those organizations had challenged. After these initial efforts proved less than entirely successful, the company changed the News Feed algorithm to promote materials preferred by users’ friends rather than those that relied on commercial promotion. If we look at the 13 sites listed in Table 9.6, we see that most of them in fact declined. Addicting Info, Bipartisan Report, and Truthfeed declined to almost no traffic over the course of 2017 to early 2018, according to data from SimilarWeb. The three sites that had the highest number of visits, Conservative Tribune and Western Journalism on the right and Occupy Democrats on the left, also suffered dramatic drops over the course of 2017. Occupy Democrats declined dramatically in May of 2017, relabeled itself Washington Journal and then Washington Press, and ultimately lost about two-thirds of its audience. Conservative Tribune and Western Journalism both dropped dramatically in September, losing about 60 percent of their audience, and Western Journalism continued to decline to near obscurity thereafter. Conservative Tribune stabilized at about 40 percent of its earlier audience and remains the most visited of this list of sites.
The Daily Newsbin also largely disappeared, but its author, Bill Palmer, switched to publishing the Palmer Report, which sustained about five million visits a month from about a million unique visitors over the course of 2017. The rest of the sites on this list have largely declined. Only The Onion maintained and even increased the number of visits it receives, consistent with the fact that it is a completely different kind of site.

What are we to make of these declines? On the one hand, the fact that they declined as much on the left as on the right argues against the complaints by right-wing sites that Facebook was discriminating against them because of their political viewpoint, and strongly suggests that the company was simply weeding its garden. It just so happens that, because of the propaganda feedback loop, there were a lot more weeds on the right than on the left. Consistent with this broadly symmetric effect, Figure 2.20 in Chapter 2 compared the relative prominence on Facebook of sites between the election period and 2017. It exhibited no discernible pattern of anti-right sentiment, with sites on the left and right equally receding over the course of 2017.

Despite these declines, some of the worst and most influential offenders have not only survived but increased their visibility and shares. Infowars and the Gateway Pundit both increased their Twitter footprint and maintained their overall visibility at around 20 and 15 million visits a month, respectively. True Pundit came out of nowhere to fill the shoes of disappearing sites like Ending the Fed, growing from a few hundred thousand monthly visits in early 2017 to over 3.5 million in early 2018. YourNewsWire maintained its position, hovering between three and five million visits despite the changes in algorithms and being denied access to the Google ad network. The Daily Newsbin was superseded by the Palmer Report without a blip.

It remains to be seen how well Facebook can continue to weed its garden. But one suspects that Facebook clickbait sites are here to stay, just like spammers and search engine optimizers. There is an opportunity to make a buck. There are publishers who will find ways to exploit the platforms and their affordances to go after that buck. Some of these publishers will do so in ways that pollute the system on which they feed. Criticism of the platform companies will lead these platforms to keep fighting the polluting publishers. Just like spam, clickbait fabricators will require continuous treatment. Just like spam, these polluters will adapt to avoid detection and suppression. Just like spam, the platforms will play catch-up, but will continue to weed. And, just like spam, although politically inflected commercial clickbait will continue to be an irritant, there is no reason to think that it will play a significantly more important role than it played in the 2016 election. An irritant. Not a crisis.

Notes:

(1.) Joshua Green and Sasha Issenberg, “Inside the Trump Bunker, With Days to Go,” Bloomberg, October 27, 2016, https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go.

(2.) Philip Bump, “Donald Trump’s Campaign Manager Is Out. Here Are the Brutal Numbers That Tell Us Why.,” Washington Post, June 20, 2016, sec. The Fix, https://www.washingtonpost.com/news/the-fix/wp/2016/06/19/the-brutal-numbers-behind-a-very-bad-month-for-donald-trump/.

(3.) Philip Bump, “How Facebook Plans to Become One of the Most Powerful Tools in Politics,” Washington Post, November 26, 2014, sec. The Fix, https://www.washingtonpost.com/news/the-fix/wp/2014/11/26/how-facebook-plans-to-become-one-of-the-most-powerful-tools-in-politics/.

(4.) Jeanne Cummings, “Rove’s Patented Strategies Will Endure,” Politico, August 13, 2007, https://www.politico.com/news/stories/0807/5375.html.

(5.) Alexis C. Madrigal, “When the Nerds Go Marching In,” The Atlantic, November 16, 2012, https://www.theatlantic.com/technology/archive/2012/11/when-the-nerds-go-marching-in/265325/.

(6.) Darren Samuelsohn, “Hillary’s Nerd Squad,” Politico, March 25, 2015, https://www.politico.com/story/2015/03/hillarys-nerd-squad-116402.html.

(7.) Mike Allen and Kenneth P. Vogel, “Inside the Koch Data Mine,” Politico, December 8, 2014, http://www.politico.com/story/2014/12/koch-brothers-rnc-113359.html.

(8.) Jane Mayer, “The Reclusive Hedge-Fund Tycoon Behind the Trump Presidency,” The New Yorker, March 27, 2017, https://www.newyorker.com/magazine/2017/03/27/the-reclusive-hedge-fund-tycoon-behind-the-trump-presidency.

(9.) Daniel Kreiss and Shannon C. McGregor, “Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle,” Political Communication 35, no. 2 (October 26, 2017): 155–177, https://doi.org/10.1080/10584609.2017.1364814.

(10.) Green and Issenberg, “Trump Bunker.”

(11.) Nicholas Confessore and Karen Yourish, “$2 Billion Worth of Free Media for Donald Trump,” New York Times, March 15, 2016, sec. The Upshot, https://www.nytimes.com/2016/03/16/upshot/measuring-donald-trumps-mammoth-advantage-in-free-media.html.

(12.) Zeynep Tufekci, “Beware the Big Data Campaign,” New York Times, November 16, 2012, sec. Opinion, https://www.nytimes.com/2012/11/17/opinion/beware-the-big-data-campaign.html.

(13.) Zeynep Tufekci, “Engineering the Public: Big Data, Surveillance and Computational Politics,” First Monday 19, no. 7 (July 7, 2014), https://doi.org/10.5210/fm.v19i7.4901.

(14.) Dipayan Ghosh and Ben Scott, “#DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet” (New America Foundation and the Shorenstein Center at Harvard Kennedy School, January 2018), https://na-production.s3.amazonaws.com/documents/digital-deceit-final-v3.pdf.

(15.) Michal Kosinski, David Stillwell, and Thore Graepel, “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior,” Proceedings of the National Academy of Sciences 110, no. 15 (April 9, 2013): 5802–5805, https://doi.org/10.1073/pnas.1218772110.

(16.) Nicholas Confessore and Danny Hakim, “Data Firm Says ‘Secret Sauce’ Aided Trump; Many Scoff,” New York Times, March 6, 2017, sec. Politics, https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html.

(17.) Will Oremus, “The Real Scandal Isn’t Cambridge Analytica. It’s Facebook’s Whole Business Model.,” Slate Magazine, March 20, 2018, https://slate.com/technology/2018/03/the-real-scandal-isnt-cambridge-analytica-its-facebooks-whole-business-model.html.

(18.) “Revealed: Trump’s Election Consultants Filmed Saying They Use Bribes and Sex Workers to Entrap Politicians,” Channel 4 News, March 19, 2018, https://www.channel4.com/news/cambridge-analytica-revealed-trumps-election-consultants-filmed-saying-they-use-bribes-and-sex-workers-to-entrap-politicians-investigation.

(19.) As the Cambridge Analytica story was exploding, a discussion on a listserv of academics working on fake news and disinformation raised several thoughtful considerations that suggest a good deal of skepticism about the likelihood that microtargeting based on Facebook data is in fact that much more efficient or (p.427) effective than the already-existing arsenal of tools at the disposal of political campaign operatives. The next few paragraphs are based on research we did to flesh out comments made on that list by Duncan Watts, Brendan Nyhan, David Lazer, and David Rand. Any errors here are our own.

(20.) Wu Youyou, Michal Kosinski, and David Stillwell, “Computer-Based Personality Judgments Are More Accurate than Those Made by Humans,” Proceedings of the National Academy of Sciences 112, no. 4 (January 27, 2015): 1036–1040, https://doi.org/10.1073/pnas.1418680112.

(21.) S.C. Matz et al., “Psychological Targeting as an Effective Approach to Digital Mass Persuasion,” Proceedings of the National Academy of Sciences 114, no. 48 (November 28, 2017): 12714–12719, https://doi.org/10.1073/pnas.1710966114.

(22.) Michael A. Bailey, Daniel J. Hopkins, and Todd Rogers, “Unresponsive and Unpersuaded: The Unintended Consequences of a Voter Persuasion Effort,” Political Behavior 38, no. 3 (September 2016): 713–746, https://doi.org/10.1007/s11109-016-9338-8.

(23.) Eitan D. Hersh and Brian F. Schaffner, “Targeted Campaign Appeals and the Value of Ambiguity,” Journal of Politics 75, no. 2 (April 1, 2013): 520–534, https://doi.org/10.1017/S0022381613000182.

(24.) David Nickerson and Todd Rogers, “Political Campaigns and Big Data,” HKS Working Paper No. RWP13-045 (February 25, 2014), https://doi.org/10.2139/ssrn.2354474.

(25.) Randall A. Lewis and Justin M. Rao, “The Unfavorable Economics of Measuring the Returns to Advertising,” SSRN Electronic Journal, September 18, 2014, https://doi.org/10.2139/ssrn.2367103.

(26.) Samidh Chakrabarti, “Hard Questions: What Effect Does Social Media Have on Democracy?,” Facebook Newsroom (blog), January 22, 2018, https://newsroom.fb.com/news/2018/01/effect-social-media-democracy/.

(27.) Elisa Shearer and Jeffrey Gottfried, “News Use Across Social Media Platforms 2017” (Pew Research Center, September 7, 2017), http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/.

(28.) Craig Silverman et al., “Inside The Partisan Fight For Your Facebook News Feed,” BuzzFeed, August 8, 2017, https://www.buzzfeed.com/craigsilverman/inside-the-partisan-fight-for-your-news-feed.

(29.) Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

(30.) “Campaign ‘92: Survey VIII” (Pew Research Center, July 8, 1992), http://www.people-press.org/1992/07/08/campaign-92-survey-viii/.

(31.) Craig Silverman et al., “Hyperpartisan Facebook Pages Are Publishing False And Misleading Information At An Alarming Rate,” BuzzFeed, October 20, 2016, https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis.

(32.) Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” Journal of Economic Perspectives 31, no. 2 (January 19, 2017): 211–236.

(33.) Andrew Guess, Brendan Nyhan, and Jason Reifler, “Selective Exposure to Misinformation: Evidence from the Consumption of Fake News during the 2016 U.S. Presidential Campaign,” Jan 9, 2018, 49, http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf.