Surveillance and the Examined Life
Cultivating the Technomoral Self in a Panoptic World
Abstract and Keywords
Chapter 8 explores the ethical challenges presented by today’s emerging technologies for digital surveillance and self-tracking. Keeping technomoral virtues such as honesty, self-control, flexibility, justice, and perspective in view, this chapter examines ideals of transparency and control in contemporary discourse on the new world of dataveillance that enables a sousveillance society, one in which even the watchers are watched. Moreover, our lives are increasingly shaped by self-surveillance, self-tracking, and nudging, practices born of wearable digital monitors and other ‘smart’ devices that let us analyze virtually every aspect of our bodies, our habits, and our days. These new modes of seeking the good life, crystallized in the emerging “Quantified Self” movement, are contrasted with traditional methods of self-examination and cultivation. The former are shown to promote a dangerously impoverished view of self-care and improvement, one that bypasses the genuine potential of surveillance technologies to promote human flourishing.
SURVEILLANCE TECHNOLOGIES ARE nothing new. For as long as there have been creative minds, people have found ingenious ways of watching, listening to, and tracking one another, and their motives have been as various as the means. The implications of the topic for questions of ethics, justice, and human nature have also magnetized the philosophical imagination. Among the most famous thought experiments concerning surveillance is Plato’s Ring of Gyges, which imagines how human character might be corrupted by perfect immunity from surveillance. At the other extreme is Jeremy Bentham’s notorious design for the Panopticon, a prison whose architecture would allow constant surveillance of its inmates in order to pacify them with the ever-present possibility of being watched. In the 20th century, Michel Foucault famously reflected in Discipline and Punish upon Bentham’s Panopticon as an illustration of a broader philosophy of social control: panopticism.1 Panopticism employs a range of political, material, psychological, and economic techniques to establish maximally effective and far-ranging forms of social discipline with minimal obtrusiveness, expenditure, force, and risk.
8.1 Virtue in the Panopticon: Challenging the New Cult of Transparency
Many decades into the digital computing revolution, networked devices for pervasive and unobtrusive surveillance are now well established in countless millions of homes, schools, cars, workplaces, parks, playgrounds, government buildings, shops, restaurants, airports, and sporting arenas. They are in phones we carry in our pockets, in medical devices implanted in our bodies, embedded via RFID chips in our credit cards and clothing tags, in the invisible geofences around our children’s schools, in the high-powered satellites that orbit us, and even in camera traps and webcams installed in the most remote swaths of terrestrial wilderness, allowing monkeys and rare mountain-dwelling Pallas’s cats to unwittingly record ‘selfies’ for our awe and amusement.
Today’s means, motives, forms, and uses of surveillance are, as they have always been, as varied as the human imagination. What is genuinely new is the way in which massively networked data storage banks and powerful algorithms now allow us to integrate, aggregate, compare, and extrapolate from the output of these diverse and globally distributed surveillance tools, producing an unprecedented and seemingly unfathomable ocean of discretely retrievable data about people, places, things, and events. What is more, most of this data is not originally created by surveillance mechanisms as such; it becomes material for surveillance simply by virtue of the potential commercial and security value of every piece of digital information your life generates. Since, relative to its potential value, data is cheap to collect, transmit, and store, your credit card purchases, turnstile exits, emails, web searches, phone contacts, social media ‘likes,’ personal photos, family health risks, prescription history, favorite vacation spots, college essays, movie rentals, music listening history, driving habits, and dental hygiene are most likely all stored somewhere, and potentially linkable to any other piece of information about you and others you know. This is the phenomenon known as dataveillance.
Dataveillance complicates Foucault’s account of a panoptic society in several ways. First, much of the data we surrender appears to be freely given. Even if we discount data unknowingly surrendered under opaque digital ‘terms of service’ or end-user license agreements (EULAs), many of us routinely post our physical locations, consumption habits, political and religious affiliations, voting intentions, health conditions, and romantic histories with the conscious belief that the overall benefits of sharing these data outweigh the privacy risks. We often fail to see, or care, that data which individually seems trivial and harmless can, when aggregated by powerful algorithms, be profoundly revealing of our selves. Is this the ultimate triumph of the panopticon’s invisible social discipline? Or an indicator that dataveillance is driven by spontaneous, bottom-up forces rather than political control? Additionally, in a world where our exposures to surveillance are so varied and many, it is not clear to what extent surveillance still functions as an effective means of control; perhaps we will become increasingly numbed to surveillance’s inhibitory effects.
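The aggregation worry can be made concrete with a toy sketch. Everything below is invented for illustration (the records, field names, and datasets are hypothetical, not drawn from any real data release); the point is only that two datasets that each seem harmless can, once joined on shared ‘quasi-identifiers,’ re-identify individuals:

```python
# Toy illustration of re-identification by linkage. All records are invented.

# An 'anonymized' health dataset: no names, just demographics plus a condition.
health_records = [
    {"zip": "02139", "birth_year": 1980, "sex": "F", "condition": "asthma"},
    {"zip": "02139", "birth_year": 1975, "sex": "M", "condition": "diabetes"},
]

# A public dataset (imagine a voter roll) with names and the same demographics.
public_roll = [
    {"name": "A. Jones", "zip": "02139", "birth_year": 1980, "sex": "F"},
    {"name": "B. Smith", "zip": "02139", "birth_year": 1975, "sex": "M"},
]

# Individually trivial attributes that become identifying in combination.
QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon, public):
    """Join the two datasets wherever all quasi-identifiers match."""
    matches = []
    for a in anon:
        for p in public:
            if all(a[k] == p[k] for k in QUASI_IDENTIFIERS):
                matches.append({"name": p["name"], "condition": a["condition"]})
    return matches

print(reidentify(health_records, public_roll))
# → [{'name': 'A. Jones', 'condition': 'asthma'},
#    {'name': 'B. Smith', 'condition': 'diabetes'}]
```

No single field here reveals anything sensitive on its own; it is the cheap, automatic join across datasets that turns ‘trivial’ data into a profile, which is the aggregation dynamic the paragraph above describes.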
Moreover, the lines between watchers and watched have never been so fluid. Fears of pervasive government surveillance, aka ‘Big Brother,’ endure; indeed, in the wake of WikiLeaks and Edward Snowden’s revelations about the NSA and GCHQ, they are more acute than ever. Yet global networks of ‘hacktivists’ and citizen watchdogs are using their own digital tools to watch the watchers from below. Also growing are practices of lateral surveillance: ordinary citizens watching and recording one another’s activities for any number of purposes, from the relatively benign to the criminal. When we add to these the growing movement of personal or self-surveillance discussed in the next section, it becomes clear that the lines of power, discipline, and control that Foucault tried to articulate in his account of a panoptic society have become increasingly tangled and diffuse.
Some employ the term sousveillance to encompass this contemporary culture of expanding, reflexive, and manifold forms of watching and being watched.2 The newest hardware innovations associated with the emerging sousveillance culture are wearable personal computing and recording devices. Examples include fitness and health trackers such as the Fitbit and Jawbone, the Apple Watch, and Google’s controversial Project Glass. Driving this culture are powerful advances in algorithms for geolocational tracking, facial and speech recognition, and aggregating and processing large datasets. A sousveillance society’s predominant cultural value is transparency. One of the earliest forecasters and most avid defenders of ‘the new transparency’ is futurist David Brin, whose 1998 book The Transparent Society called for a radical extension of liberal enlightenment philosophies of social and political openness. Arguing that the interests of justice and equality can only be served by a culture in which their violations can be openly observed by anyone, Brin admiringly cites J. Robert Oppenheimer’s remark that “We do not believe any group of men adequate enough or wise enough to operate without scrutiny or without criticism. We know that the only way to avoid error is to detect it, that the only way to detect it is to be free to inquire. We know that in secrecy error undetected will flourish and subvert.”3
Oppenheimer’s view echoes the oft-quoted sentiment of U.S. Supreme Court Justice Louis Brandeis that “Sunlight is said to be the best of disinfectants,” but all too often it is forgotten that the author of this quote was the Court’s most effective advocate of individual privacy, arguing in 1928 in the dissent to Olmstead v. United States that privacy, or “the right to be let alone,” is “the most comprehensive of rights and the right most valued by civilized men.”4 Brandeis’s remarks do not contradict one another, but acknowledge important asymmetries in power between individual citizens who require privacy as a means of preserving what liberty they have, and those political institutions and groups which require regular ‘disinfection’ by public inquiry. A similar point is made by Brin critic Bruce Schneier: “Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.”5 Brin responds, however, that this falsely presumes a society in which there is reliable information about who has the power that warrants watching, and who does not.6 In an increasingly oligarchic society where a small number of private citizens amass vastly disproportionate shares of wealth and quietly convert it to unprecedented social and political influence on commercial and government institutions, Brin may have a point.
Yet is the ethical question about surveillance only about power and public corruption? Or does the growing ‘cult of transparency’ that drives the vision of many futurist and tech leaders challenge the good life in other, more personal ways? When Brandeis remarked that privacy was the right “most valued by civilized men,” did he think this was simply because civilized men happen to be fondest of their political liberties? Perhaps becoming or remaining ‘civilized’ requires that I enjoy some creative license to experiment with thought and action in arenas that are not immediately subject to judgment by others, especially when those ‘others’ represent the cultural status quo. In her 2012 book Configuring the Networked Self, privacy law scholar Julie Cohen writes extensively on privacy’s role in preserving spaces for free moral and cultural play, a critical element in the development and maturation of selves and communities. Likewise, and in a manner that intersects with Jaron Lanier’s concerns about technological ‘lock-in,’ Evgeny Morozov warns of the profound risk of choosing technologies which, rather than encouraging deliberation and experimentation with new moral and political patterns of response, reify standing norms by enforcing and rewarding rote moral and political habits.7 Surveillance technologies that work too well in making us act ‘rightly’ in the short term may shortchange our moral and cultural growth in the long term.
If this seems to overstate the risks of a sousveillance culture, consider that many defenders of a transparent society anticipate and explicitly welcome its salutary ethical impact on private, not merely political conduct. Brin’s favorite example of the power of transparency is the way in which the visibility of other diners in a restaurant helps to ensure that they refrain from reading our lips, or leaning in to overhear our conversations. The moral ideal is that of having ‘nothing to hide,’ and thus nothing to lose from a transparent society. As Google CEO and Chairman Eric Schmidt famously said in a candid news interview, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”8 Aside from its outrageous conflation of discretion and vice, this philosophy overlooks the fact that information about me is also usually information about the others with whom I share my life, and thus to focus only on the question of whether I have something to hide is a profoundly solipsistic attitude to privacy concerns, one incompatible with the virtue of moral perspective.9 Still, there are many who would gladly impose this cost on others in order to promote a supposedly greater end: a perfectly good society, or as close as technology will let us get. David Friedberg, CEO of a big data analytics firm owned by Monsanto, recently summed up the emerging mantra of a morality of transparency: “What happens when every secret [is exposed], from who really did the work in the office, to sex, to who said what, is that we get a more truthful society. … Technology is the empowerment of more truth, and fewer things taken on faith.”10
Implicit in this statement is the unquestioned privilege of truth over other moral values, including trust, respect, compassion, humility, and flexibility. Better to always have the ‘truth’ about who on the team “really did the work” than to permit the guy who was up all night soothing a colicky child to quietly slack off at work one afternoon without risk of exposure. Better to have the ‘truth’ about the name your husband called you in the heat of anger than to allow him the protective shield of a friend’s discretion, or the liberty to substitute a gentler word in confessing the tirade to you later. Here what Aristotle, Kongzi, and the Buddha celebrated as practical wisdom, the ability of the virtuous person to perceive the moral world in richer, subtler tones than others, is turned on its head: the technocrats would have the moral world rendered in the stark black and white of binary thought—truth or falsehood, faith or fact, good or bad.
Thus even the moral value of truth is distorted here by a profoundly antihumanistic form of what Morozov calls information reductionism, rightly scorned by Nietzsche as the delusion that there is a “ ‘world of truth’ that can be mastered completely and forever with the aid of our square little reason.”11 One can grant the existence of objective truths while recognizing that reality, and especially moral reality, will always overflow any concrete human representation of it. The moral truth about a situation is always richer and more complex than our first glimpse reveals, and the ethical response to that limitation is not simply to gather more ‘bits’ of morally salient information but also to cultivate better ways of seeing, questioning, thinking about, and listening to it.
These are exactly the talents missing in the morally rigid and passively conformist ‘village honest man’ whom Kongzi held to be the ‘enemy of virtue.’ It goes without saying that the village honest man cannot remediate his deficient moral vision simply by donning a pair of Google glasses. Moreover, his (or her) rigid, binary moralism often ends in tragedy. Consider the indiscriminate data dumps by the hackers of the Ashley Madison website in 2015, in which the exposure of an unethical business model and legions of would-be marital cheaters was carried out with breathtaking hubris and recklessness—endangering the lives of closeted gay and lesbian citizens living under oppressive regimes, handing foreign intelligence agencies mountains of leverage for blackmail of military and government workers, and leaving a trail of unspeakable devastation in the collateral damage of suicides and family dissolutions. This is what a transparent society looks like when untempered by humility, justice, care, empathy, moral perspective, or wisdom.
This is why the moral and political challenges facing humanity cannot be met simply by equipping societies with more sophisticated tools or more advanced STEM skills, independently of the virtues of the people who will wield them. Just as you do not make me a good surgeon by handing me a fancy surgical robot, or a good soldier by sitting me at the controls of a remote UAV, you do not produce more honest or discerning human beings simply by supplying them with more powerful technologies for surveillance. Technology ethicists Evan Selinger and Woodrow Hartzog further illuminate this flaw in the logic of sousveillance culture by pointing to the many empirical studies showing that people do not, in fact, reliably respond to the supply of more information by making more ethical or rational judgments. Just as often, in ways that are highly context-dependent, they respond to increased information flows by becoming more passive, less responsible, more easily overwhelmed, and more likely to fall back on dangerous cognitive biases and shortcuts in ethical judgment.12 For example, the notorious ‘backfire effect,’ which psychologists have demonstrated in media, political, and healthcare contexts, causes people with entrenched but false beliefs to respond to ‘corrective information’—that is, reliable facts that undermine that belief—in an entirely counterintuitive and nonrational way. Such persons tend to respond to corrective information flows not by abandoning or revising their false belief, but by becoming even more confident in its truth!13
Newly data-driven powers of surveillance can also project dangerous illusions of technological neutrality. Consider the increasing reliance of law enforcement and other authorities on facial recognition software and biometric surveillance in public spaces, in combination with predictive algorithms for individual behavior and risk profiles. ‘Big Data’-driven ‘predictive policing’ is often cited as the future of law enforcement and anti-terrorism efforts, and as a more objective, less biased methodology for crime prevention.14 Yet the datasets upon which predictive algorithms are trained reflect many of the same blind spots, flawed assumptions, and biases of the humans who generated and labeled that data—biases all too easily disguised and reinforced by the power of data anonymization and aggregation.
Many defenders of the cult of digital transparency fall victim to a distinctive cognitive bias of their own, namely techno-fatalism, which preempts rational consideration of otherwise compelling social, political, or ethical critiques of emerging technological development. Brin expresses this bias repeatedly in The Transparent Society, telling readers to surrender the naïve fantasy of regulatory or cultural constraints on emerging surveillance technologies, or any effective steering of their course. This is a bizarre stance for a champion of Enlightenment liberalism, given that tradition’s hostility to unchecked powers. But Brin seems to have decided that 21st century technology, although massively dependent on political subsidy and regulatory and consumer support, has somehow acquired a quasi-magical power to defy the public will:
Those cameras on every street corner are coming, as surely as the new millennium. Nothing will stop them. Oh, we can try. We might agitate, demonstrate, legislate. But in rushing to pass so-called privacy laws, we will not succeed in preventing hidden eyes from peering into our lives. … As this was true in the past, so it will be a thousandfold in the information age to come, when cameras and databases will sprout like crocuses—or weeds—whether we like it or not.15
Such fatalism in emerging technology discourse is patently false as a claim of logical necessity, and it fails to be compelling even on historical grounds; after all, humans have managed through intensely cooperative efforts to keep germline engineering as well as nuclear, chemical, and biological weapons from proliferating like unregulated ‘weeds.’ Yet if enough people are convinced by such rhetoric to surrender any hope of intelligently steering the course of emerging technologies, it becomes truth.16 Moreover, without the virtues of technomoral character outlined in chapter 6, our attempts to steer the sousveillance culture and other emerging technosocial developments in ways compatible with our long-term flourishing will most likely fail—not out of fatal necessity, but out of careless indifference to our ongoing development as moral agents.
But this is not a counsel of despair. For two-plus millennia the human family has, in diverse cultural traditions spanning the globe, developed sound practices of moral self-cultivation. When properly nourished by individuals and institutions, and applied in the relevant domains, they yield remarkable, if humanly imperfect and inconsistent, results. No one can reasonably doubt that Confucian moral education—when habitually, sincerely and thoughtfully enacted—has been successful in cultivating stronger moral dispositions toward filial care and loyalty, even if these practices remain subject to extramoral influences and error. Only a wholly uncharitable mind could doubt that millions of Buddhists have, over the centuries, managed by means of their practices to expand their capacities for flexible tolerance, moral perspective, and compassion, even if these capacities remain limited and imperfect. And few would doubt that Aristotle’s philosophy, in those communities where its influence on moral education endured, shaped the virtues of civic affiliation and political friendship so cherished, if imperfectly realized, by Greco-Roman and later European cultures. If devoted, sincere Confucians have cultivated stronger filial respect; if devoted, sincere Buddhists have cultivated enlarged compassion and generosity; and if devoted, sincere Aristotelians have cultivated more active civic friendship, then why should we now doubt our own potential to cultivate greater technomoral virtues in ourselves?
The emergence of a digital sousveillance culture thus invites two questions. First, how might this culture influence, for better or for worse, the cultivation of technomoral virtues such as honesty, empathy, justice, and self-control? Second, and flipping the first in a way that should by now be familiar, how can cultivating the technomoral virtues outlined in chapter 6 help us to improve our chances of flourishing together in an increasingly panoptic technosocial environment? Let us begin by delving into an aspect of sousveillance culture that poses an especially strong challenge to traditional practices of moral self-cultivation: namely, the quest for the quantified self.
8.2 Technologies of the Examined Life
In the 4th century BCE, the Greek philosopher Socrates asserted that “the unexamined life is not worth living for a human being.”17 In that same century, the Chinese philosopher Mengzi quoted Kongzi (Confucius) as saying, “If I examine myself and am not upright, although I am opposed by a common fellow coarsely clad, would I not be in fear? If I examine myself and am upright, although I am opposed by thousands and tens of thousands, I shall go forward.”18 How might a 21st century sage pursue this age-old ideal of the ‘examined life’? Will she employ the very same methods of conscious reflection, meditation, and critical questioning used since classical antiquity? Or will she lean on new technologies to better understand who she is, and how well she is living? Consider this alternative vision of the examined life, proposed by contemporary futurist Kevin Kelly:
Through technology we are engineering our lives and bodies to be more quantifiable. We are embedding sensors in our bodies and in our environment in order to be able to quantify all kinds of functions. Just as science has been a matter of quantification—something doesn't count unless we can measure it—now our personal lives are becoming a matter of quantification. So the next century will be one ongoing march toward making nearly every aspect of our personal lives—from exterior to interior—more quantifiable. There is both a whole new industry in that shift as well as a whole new science, as well as a whole new lifestyle. There will be new money, new tools, and new philosophy stemming from measuring your whole life. ‘Lifelogging’ some call it. It will be the new normal.19
In chapter 4 we saw that a habit of reflective self-examination is an essential part of the practice of moral self-cultivation, a practice that cuts across cultural and historical boundaries.20 In addition to enabling us to periodically take stock of our character and assess the moral trajectory of our lives thus far, habits of self-examination can help us to respond appropriately in novel and unfamiliar circumstances, which increasingly define 21st century life. While a person of mature character moving through familiar moral territory can summon her virtues automatically and effortlessly, a practically wise person can also work her way through unfamiliar moral territory, in part by calling upon her explicit knowledge of her own moral competencies and limitations as they relate to the practical challenge confronting her.
For classical philosophers, the imperative of self-examination answered a practical rather than a theoretical question—namely, how can we best secure the good life for ourselves and others? Today a growing number of people embrace new technology-driven habits of self-examination in the hope of attaining the good life by means of the ‘quantified self.’ The quantified self is a picture of one’s own existence produced by perpetual surveillance via biometric and mobile sensors, which yield unprecedented volumes of streaming data about one’s psychological and physical states and activities. The movement’s motto? “Self-knowledge through numbers.” Is this simply the next phase of humanity’s historical quest for the examined life? Is it an entirely new vision of the good life and the path leading to it? Or does it embody a profound misconception of the human self and its potential to flourish?
The Examined Life and the Ethical Ideal
As we have seen, a person leading an examined life historically sought reflective self-knowledge not as an end in itself, but as a means to something else: the cultivated self. A cultivated self is one that has been improved by conscious, lifelong efforts to bring one’s examined thoughts, feelings, and actions nearer to some normative ideal. The examined life was thus part of a broader practice of self-care, what Plato called the ‘care of the soul.’ In classical thinking, this care involves philosophical habits of self-awareness that enable a gradual realignment of one’s actions, values, emotions, and beliefs with the Good. We explored three cultural variants of this element of self-care in chapter 4.
Yet leading an examined life is a struggle for most. We are frequently blind to our own failings, unable to distinguish between true and false or higher and lower goods, or distracted from virtuous aims by nonmoral needs and desires. As a result, says the Confucian philosopher Mengzi, “the multitude can be said never to understand what they practice, to notice what they repeatedly do, or to be aware of the path they follow all their lives.”21 If Mengzi is correct, then few of us can hope to live well without some kind of philosophical intervention. If our path to a good life is blocked by these obstacles to moral self-cultivation, then we require one or more practical remedies that can help us move over, around, or past them. Reflective self-examination is one such remedy.
Of course, the habit of self-examination must itself be carried out wisely, with appropriate flexibility and moderation. We should not indulge the kind of obsessive self-examination that produces paralyzing anxiety and endless self-recrimination, and/or a narcissistic overestimation of the worldly significance of one’s own moral being. That said, developing this habit in a measured way is an essential part of preserving individual and collective human moral character in new technosocial domains, and will help us to identify and cultivate the technomoral skills and virtues we need in order to flourish in increasingly unstable and opaque practical environments. Unfortunately, making this virtue a lasting habit requires more than simply recognizing its importance. As with all moral practices, it is vulnerable to disruption, neglect, corruption, and replacement by counterfeit. To see how this might be a particular danger in today’s technosocial environments, let us recall what is most essential to self-examination as a moral practice.
8.2.1 Technologies of the (Cultivated) Self
In his 1982 lectures of the same name, Foucault called such practices “technologies of the self” (techniques de soi): methods by which members of a given culture “effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.”22 ‘Technologies of the self’ commonly associated with philosophical practice include the Socratic habit of dialectical questioning; the careful study and emulation of exemplary persons (e.g., Aristotle’s phronimoi or the Confucian junzi); immersion in philosophical and moral education; the cultivation of narrative habits of confession, letter writing, and meditation; reflective examinations of conscience and memory; and the testing of one’s decisions according to general moral principles.
In talking about these philosophical practices as “technologies of the self,” we hearken back to the ancient Greek concept of techné as a craft by which a product (here, the cultivated ethical self) is gradually constructed and shaped. Yet to be authentic, techniques of self-care must be consciously and reflectively embraced by the moral agent being cultivated. While others may help, the agent must actively examine and evaluate her own conduct. Even Foucault, who emphasizes the construction of the self by impersonal cultural, historical, and political forces, holds that the very thing such forces ultimately produce is a self-conscious agent who assumes moral responsibility for herself.
In Discipline and Punish, Foucault uses Bentham’s design for the Panopticon prison tower to show how social, economic, and political structures can effectively internalize the normalizing power of surveillance, such that the subject of power willingly creates herself within the confines of her culture’s dominant ideals. Foucault’s account reflects a postmodern suspicion of such processes, which bring individual thought and behavior into alignment with cultural or political norms. Such suspicion is hardly groundless; we need not rehearse the many ways in which normalizing powers can inflict not only bodily but psychological violence upon human persons.
But it would be rash to assume that technologies of the self are inherently oppressive. To do so I would have to deny that actively striving to configure myself for a flourishing life could be an authentic choice, and this seems to unduly demean human agency. Furthermore, technologies of the self do not function as pure constraints on my development—they also reveal richer possibilities for living that may otherwise go unseen and unrealized. For example, by exposing myself to the moral questioning of others, or by comparing my actions with those of someone whose character I deeply admire, I might discover that I am far less forgiving of others than I imagined myself to be, and that I hold onto petty resentments far longer than is reasonable. In discovering this, I reopen the possibility of becoming a person less consumed by anger and judgment, freer to love, and to bond more deeply with other imperfect souls. My life is not restricted by this result of self-care—it is greatly enhanced and enlarged.
Thus while the accepted aims and means of self-cultivation are shaped by powerful cultural forces, these need not impose any single universal image of the ‘cultivated self,’ or any one best technique for pursuing it. To justify the use of philosophical technologies for an examined life, we need only assume that for any individual, the decision to pursue a developmental path by such means can be authentic and justifiable. Yet not every technology of the self is philosophical. The Quantified Self movement, embodied in Kevin Kelly’s assertion that life “doesn’t count unless we can measure it,” aims to use emerging technologies to transcend the limitations of philosophical and other qualitative methods of self-examination.
8.3 ‘Smart’ Surveillance and the Quantified Self
Adherents of the Quantified Self movement, as well as an increasing number of average consumers, employ mobile, wearable, and/or biometric sensors such as the FitBit and Jawbone devices, smartphone apps such as Moves and Chronos, video cameras, and a range of other devices to measure, track, analyze, and store volumes of recorded data concerning an ever-expanding list of personal variables. Weight, blood pressure, muscle mass, sleep patterns, mood, physical activity, energy levels, creativity, and social and cognitive performance are just a few variables of common interest. Among the more active and vocal adherents of the movement is statistician Konstantin Augemberg, author of a blog called Measured Me (www.measuredme.com) where he posts the results of his efforts to quantify and track everything from alertness and sleep efficiency to happiness and life satisfaction. Among his most ambitious projects is a quest to quantify ‘living well,’ under which he subsumes measures of physical activity expended, healthy eating, diversity and engaged ‘flow’ of his daily experiences, and time spent ‘recharging’ with sleep and meditation.
The Quantified Self movement claims thousands of followers across the world, with hundreds of local meetup groups and an annual global conference at which enthusiasts share information on the latest gadgets for self-surveillance and tracking. Yet these numbers are dwarfed by the millions of casual consumers who compose the market for affordable and user-friendly self-tracking devices. Is this just another phase in the evolution of technologies of the self, in Foucault’s sense? Or does the move from philosophical to digital methods of self-examination represent a more profound moral shift?
A satisfactory answer would require deeper inquiry, but let us weigh some preliminary considerations. As Foucault explains at length in his 1982 lectures, the concept of the examined life is historically fluid. Classical, medieval, and modern visions circulated between the ideal’s first incarnation as a form of self-care that maintains the overall excellence of one’s life activity, and an alternative conception of self-care as a way to preserve the integrity of the soul. Foucault observes that by the 20th century, the classical vision of self-care as a tool for setting youth on the right path to political life had evolved into a medical-psychological model, in which the task of self-care is seen as a lifelong duty to take continual stock of one’s moral health.
Nevertheless, certain core notions endure across these historical changes. One common thread is the idea that a good human life presupposes my ability and intentional choice to habitually reflect upon and attend to my own moral development or trajectory. A second commonality, implicit in the first, is the requirement that I take steps to actively cultivate or steer that trajectory in the desired direction. This is so even if I reject the idea of a fixed end or destination toward which my life must aim. For even if I may freely invent the form of life to which I aspire, that choice implies a commitment to actively shape my own person into the sort of being that can achieve such a life. Lastly, the idea of self-care has historically included a duty to be concerned with those aspects of my life and activity that are central to my moral character (i.e., my virtues and vices). All of these common habits of moral self-care appeared in our examinations of the practice of moral self-cultivation in chapters 3, 4, and 5. While Platonists, Confucians, Buddhists, Aristotelians, Stoics, medieval Christians, and moderns all had very different ideas of what one’s moral constitution ought to be like, none would have been satisfied with habits of self-surveillance that left the moral virtues largely unexamined.
While one might simply reject the notion that a good life is an examined one, let us assume for the sake of argument that it is and see what follows from it in relation to the Quantified Self movement. Let’s start with the requirement to pay conscious attention to one’s developmental trajectory. It may seem that the movement not only satisfies but exceeds this requirement by encouraging a kind of hyperattention to, even obsession with, that trajectory. A quick perusal of blog posts, websites, and essays produced by Quantified Self enthusiasts conveys an atmosphere of fevered excitement about finding ways to track ever more personal variables, with ever greater degrees of mathematical precision and reliability. Yet as the number of tracked variables expands, we must ask whether the act of ‘attending’ to oneself becomes more difficult and how many of the factors being tracked are actually meaningful. As any philosopher of perception or cognitive scientist knows, attention is as much about the ability to screen out information as it is about taking it in; in fact, the former capacity enables the latter.
Consider the Measured Me project mentioned earlier, which posted weekly ‘lifestream’ updates for years before taking a hiatus in Fall 2014. Its author’s published personal dataset for October 2012 included sixty-six daily datapoints, taken for twenty-one personal variables ranging from ‘mental energy’ to ‘charisma’ and ‘self-esteem’ (along with six variables for the weather). The previous month’s dataset included still other variables, such as calories consumed/expended and personal ‘entropy,’ a measure of how “chaotic” a particular morning, afternoon, or evening was. Now to be fair, these are early experiments in self-surveillance and quantification, rather than a honed practice of self-cultivation. But it is worth asking whether the latter can be served by habits that aim primarily at expanding the range of self-surveillance rather than selecting for and attending to the most salient features. Indeed, the obsessive quality of many Quantified Self habits evokes the philosopher Mengzi’s warning against hyperactive, overly self-conscious efforts at self-improvement, which he compared to the self-defeating habit of the farmer who tried to help his plants grow faster by pulling at their sprouts.23
This brings us to the second core element of an examined life: the decision to use the results of self-examination to cultivate or steer one’s own personal development in a desired direction. This is a form of philosophical ‘perfectionism,’ what Stanley Cavell has described (taking Emerson’s phrase) as a fundamentally human quest for my “further, next, unattained but attainable self.”24 There is little evidence that this is a central aim of devotees of the Quantified Self. It is true that they seek correlations between tracked variables precisely because these could aid prediction and control of one’s personal states. For example, if I discover that my happiness or resilience tends to decrease sharply as local temperature or humidity rises, I might think to move to a different climate and see if my well-being improves.
Yet few Quantified Self enthusiasts express a clear sense of what kind of self they wish to cultivate overall; nor is the ‘cultivated self’ an explicit and recognized goal of the movement. Indeed, the perceived value of the practice appears more epistemic than developmental, as indicated by their motto, “Self-knowledge through numbers.” The Quantified Self website claims that the movement’s aim is “to help people get meaning out of their personal data.” But this is profoundly vague; what sort of meaning? To what end, if any? While Quantified Self enthusiasts are developing techniques that could perhaps someday be placed in the service of a philosophical practice of self-cultivation, this is not presently their goal. Since it is not aimed in that direction, the movement in its current form neither offers nor even promises a new road to the cultivated self.
Does a quantified life display the third core feature of an examined life? Can it attend to the specifically moral features of one’s character and activity? On this score, we find the greatest tension between a quantified life and an examined one. The moral dimensions of the self are among the most difficult to translate into numbers. In contrast with variables such as calories, muscle mass, or sleep hours, the moral features of my person seem to be difficult to formalize into a tidy list of variables, much less variables that can be assigned discrete values. Yet as Kevin Kelly’s quote makes clear, the Quantified Self movement is committed to a form of self-surveillance that is both encompassing and scientific, where ‘scientific’ entails ‘quantifiable.’ Its enthusiasts readily assign numbers to philosophically rich concepts associated with the good life, such as ‘happiness’ and ‘living well.’ Yet even these are not interpreted in moral terms; rather, Quantified Self enthusiasts use them to describe a subjective sense of well-being and engaged ‘flow’ that is quite compatible with morally weak character or practice.
These preliminary analyses suggest that Quantified Self practices do not currently constitute technologies of the self in Foucault’s sense, nor do they fulfill the aims of an examined life needed to cultivate virtue and promote sustained human flourishing. It remains to be seen how the movement will evolve. It may prove to be a technological fad with transient appeal. Or it may grow and endure as a distinct cultural practice of self-surveillance unrelated to the aims of an examined life. In that case, however, it would likely be viewed as an alternative to moral technologies of the self, as it is difficult to see how the habits of self-tracking promoted by the Quantified Self movement can coexist happily with the philosophical and spiritual habits of an examined life. After all, each practice demands a considerable investment of time and mental energy; can one imagine a Quantified Self devotee faithfully cataloguing and analyzing dozens of daily datapoints on their behavior and mental states, while also exercising the daily habits of narrative self-examination practiced by the likes of Marcus Aurelius or Emerson?
Still, why not just conclude that philosophical technologies of the self are hopelessly outdated, destined to be replaced with more ‘objective,’ ‘scientific,’ and user-friendly tools for living the examined life? Indeed, a central aim of the movement is to rely more and more on technological devices to record, store, and analyze these datapoints for us. Its advocates expect that within a few short decades, nearly everyone will use such devices to collect unprecedented amounts of data about ourselves in real-time, including data that are inaccessible to conscious reflection. Furthermore, such data will be far more precise and less vulnerable to subjective distortion than the contents of reflective self-examination. Feedback from artificially intelligent life coaches will be able to tell us what adjustments we need to make to our behavior or thinking. Technological practices of self-quantification may thus appear to offer a superior replacement for moral technologies of the self. So why reject this thinking?
The answer is straightforward enough: this thinking commits a profound category mistake. The most accurate and comprehensive recording of your past and present states would not constitute an examined life, because a dataset is not a life at all. As Aristotle reminds us, my life includes my future, and thus the examined life is always a project, never an achievement. It is the future toward which we live, and for the sake of which we examine and cultivate ourselves. We prize the examined life not for the ‘data’ that reflective philosophical practice yields, but for the transformative nature of the practice itself and the dignity it confers upon those who take it up. The examined life is worth living because it embodies those chosen habits of mind and conscience that constitute a person who takes responsibility for her own being, and in particular her moral being. This capacity may or may not be uniquely human (can a dolphin reflect upon what kind of dolphin it is, and how far it is from the dolphin it wants to be?); but it is the capacity that most uniquely defines a moral agent, and a person in the fullest sense.
8.4 Surveillance and Moral ‘Nudges’
Converging with sousveillance culture is another emerging technosocial alternative to the conscious practice of moral self-cultivation. This is the phenomenon known as nudging. The idea behind ‘nudges’ is to use technology to foster prosocial behavior in ways that require little or no conscious effort from users. Some nudges are built into physical environments: just as speedbumps promote safe driving, one might design a building’s stairwell to be more accessible and attractive than the elevator as a means to promote residents’ exercise.25 Today, digital surveillance and self-tracking technologies afford far more personalized and responsive forms of nudging. Apps on your phone can now remind you to text your girlfriend, call your mother, and eat more fiber. Your ‘smart’ pill bottle can ask you to take your diabetes medicine. Your ToneCheck software can tell you to write a nicer email to your boss, and your Apple Watch can tell you to calm the hell down during a family argument. Ten or twenty years from now, it is hard to imagine a domain of practice in which you won’t be able to have an artificially intelligent behavioral monitor if you want one. In many places, such as the workplace, it is unlikely you will be offered the choice.
These are astonishing possibilities, and used selectively and in moderation, they could contribute a great deal to human flourishing. Unfortunately, we are not today cultivating ourselves, or educating our children, to attain the technomoral virtues needed to steer such practices wisely and intentionally. It is essential for human agency that our moral practice, whether supported by philosophical or digital technologies of the self, remain our own conscious activity and achievement rather than passive, unthinking submission. Imagine if Aristotle’s fellow Athenians had been ruled by such perfect laws that they almost never erred in moral life, either in individual or collective action. Imagine also that Athens’ laws operated in such an unobtrusive and frictionless manner that the citizens largely remained unaware of their content, their aims, or even their specific behavioral effects. Our fictional Athenians are reliably prosocial, but they cannot begin to explain why they act in good ways, why the ways they act are good, or what the good life for a human being or community might be. For they have never given a moment’s thought to any of these things, yet they go on every day behaving reliably justly to one another. Would we want to say these hypothetical Athenians are moral beings? Do they live well? Do they flourish in the way that we would most want our own children and friends to flourish?
How do our hypothetical Athenians differ from the ‘pod people’ or ‘Stepford Wives’ described in all manner of modern science fiction dystopias, where moral conformity is achieved through a process of social transmission that operates independently of deliberation and choice? In such fictions, the pod people often retain intellect or consciousness—they converse with one another about everything from sports to business to war to childrearing. It is what we cannot imagine them talking about that makes them dystopian horrors. They cannot talk about how they themselves might become better people, what obstacles, trade-offs, and choices they face in doing so, or what that project of moral self-cultivation would mean to them or the world.
The real Athenians were not pod people, and fortunately neither are we. Nor will apps, wearable computing, or ‘smart’ environments make us into mindless moral zombies. Yet this does not make them harmless. From the risks of ‘helicopter parenting,’ in which the agency of children and young adults is tightly constrained to prevent them from making mistakes or missing opportunities, to the seductive pull of political and religious groupthink, humans already struggle to habitually exercise their own moral agency, a struggle that is millennia old. If surveillance and nudging technologies are marketed and embraced as yet one more social license to relinquish this struggle, so that our moral lives may be quietly and seamlessly molded into the shapes programmed by Silicon Valley software engineers and technocrats, we may not become pod people—but will we become more or less like the human beings we wish to be?
8.5 Flourishing with Emerging Surveillance Technologies
The practices of moral self-cultivation we articulated in Part II—and human flourishing itself—face serious challenges from the rapid emergence of a global sousveillance culture. If one were a techno-pessimist, one might conclude that the technologies driving this phenomenon promise only to magnify asymmetries of political and economic power; to diminish the space for moral play and authentic development; to render trust in human relations superfluous; to reduce embodied moral truth to decontextualized information; and to replace examined lives with datasets. On top of this, we hear from futurists and technocrats that the course of our sousveillance society is already out of our hands, and that our only hope is to adapt to whatever shapes new surveillance technologies impose upon our lives.
Fortunately, the latter claim is entirely false, and not one of the pessimistic scenarios we have mentioned is fated. Nor is there any reason to think that the only alternative to those scenarios is to simply reject all new surveillance and ‘smart’ technologies, or to resist every one of the cultural transformations they engender. Of course, effective resistance very probably would be impossible in the absence of more widespread global cultivation of the technomoral virtues that enable prudent civic deliberation and collective action in contemporary life. But let us imagine that through improved technomoral education and practice such virtues were to be more widely cultivated. In that case, their use in prudent technosocial action would be far more discerning, subtle, and flexible than any indiscriminate refusal of new technologies. Possessors of technomoral virtues would recognize, for example, the ways in which emerging surveillance technologies can be redesigned, experimented with, applied differently, restricted to appropriate contexts, made accountable and responsive to social and political critique, and used as extensions and reflections of our moral virtues rather than substitutes for them.
Let us consider an example. Even in a society where technomoral education for civic virtue is reasonably effective, political corruption and abuses of power will exist. The need for some degree of transparency in government and other centers of power is real, and new technologies of sousveillance can be helpful means of holding those powers accountable—if and only if they are used wisely. As Morozov notes, it is naive to think that government and other large civic institutions would be improved by recording every detail of their operations and making them available to all on demand. Not only would the glut of data produce more confusion among citizens than clarity, but it would likely change institutional behaviors for the worse. If representatives and leaders knew that every word and act would be made available for public inspection and criticism, they would be discouraged from reaching difficult but necessary compromises, voicing uncertainty, taking important risks, and making the painful trade-offs that prudent governance requires.26 Just as privacy secures an important space of free play and experimentation for individuals, it also secures this for institutions.
Thus human flourishing requires cultivating technomoral honesty, a respect for truth in its diverse appearances across the range of technosocial contexts, rather than the flattened, grossly reductive account of truth offered by the defenders of a ‘transparent society.’ An institution, politician, or citizen who records or shares information without discernment or contextual sensitivity is not virtuous but vicious. Imagine a politician who records sensitive details of diplomatic negotiations and then disseminates the unedited footage on social media, with disastrous results for global civic flourishing. This is not an honest politician, because she is not exemplary in her treatment of political truth. Compare this contextual role with that of an academic, where in general the norms of exemplary conduct will be more conducive to plain truth-telling. Yet even here, people must be discerning in the treatment of information; an honest scientist must know her audiences and venues, and there are times when excessive precision or transparency will obscure the truth rather than reveal it. Should we imagine that it would be exemplary or truth-serving for a scientist to embed in her articles a computer-generated transcript of every mundane minute of laboratory activity leading up to publication, including that morning when she, due to an error in preparing some slides for dissection, spent hours analyzing and remarking upon a completely contaminated sample?
Still, there are circumstances in the life of government, academia, business, and other institutions—even the private lives of citizens—that justify opening the books for inspection, so to speak. We need the technomoral wisdom to make more intelligent and practically discerning use of the new technologies that can make opening the books in such circumstances easier. For example, we might insist upon the maintenance of encrypted electronic recordings of certain legislative sessions or high-level government meetings, but require these records to remain encrypted unless the courts declare a compelling public interest in the information. We might expand the rights of private citizens to safely record their interactions with law enforcement, but look toward designing technologies for doing so that are less intrusive than having multiple cell phone cameras held up in the face of every officer trying to do her job. We might require that wearable recording devices audibly announce when a photo or video is being captured. We might discourage or ban their use in public recreation or wilderness areas.
To balance the interests of child safety with the developmental importance of parent-child trust, those who wish to use RFID tags or other geo-enabled devices to track the locations of their children could be encouraged to have the data stored in third-party systems that are inconvenient to access, lowering the temptation to monitor children in non-emergency circumstances. Children in state custody or new foster homes could be supplied by courts with digital means of documenting and reporting abuse or neglect. To tackle the cognitive biases and blindspots that commonly impede accurate moral self-assessment, future self-surveillance technologies could be designed to provide new forms of moral biofeedback, allowing us to better discern unflattering, destructive, or antisocial habits and traits that evade traditional introspective methods of self-examination. We might also look to design the outputs of self-surveillance technologies in ways that resist reductive quantification; instead of yielding sets of raw, decontextualized numbers or graphs, we might design such technologies to display their results in a more descriptive or holistic form that requires integration within a moral narrative of the good life.
Of course, such proposals are conditioned upon cultural and technical feasibility, and some may just be bad ideas; but the point is that without technomoral virtues, we will lack the ability to determine individually or collectively which goals and means of surveillance are wise and worth pursuing. In particular, the global goods of human flourishing described in chapter 2 will be virtually impossible to secure in these virtues’ absence. We have already spoken of technomoral honesty, which is of clear relevance to the prudent guidance of surveillance technologies, but let us consider a few others:
Living well in the 21st century will require enhancing our capacities for deliberative self-control in technosocial contexts. Which specific surveillance and nudging practices will aid us in this effort? Which are more likely to induce greater moral passivity in the manner foretold by Orwell’s 1984 or Huxley’s Brave New World, weakening our ability to deliberate upon and mold our own desires? Which modes of education and cultural expression can strengthen our self-control in technosocial contexts, allowing us to intelligently steer rather than passively adapt to the emerging sousveillance phenomenon?
Additionally, could cultivating technomoral humility help us to acquire more modest and realistic expectations of how rationally and effectively humans respond to surveillance practices? How might surveillance technologies be redesigned with this in mind, and used to help us to grasp the limits of, and form more reasoned hopes for, our informational capacities? Could such practices help us to more honestly confront, accept, and adapt to a technosocial future that will continue to outstrip human mastery?
With respect to the virtue of technomoral justice, how can we use surveillance technologies to enhance our awareness of unjust asymmetries in the global benefits and risks of technosocial developments, and the impositions of technosocial power on the basic rights, dignity, or welfare of others? Could we then put this new awareness to work in educational and cultural practices that foster more just uses of surveillance technology? Can more discerning applications of surveillance technologies foster greater technomoral empathy with the suffering and joys of others outside our immediate view, and better moral care and loving service of their needs? Can discerning surveillance practices foster more flexible responses to local and global technosocial change and opacity? Or more holistic and less fragmented moral perspectives?
Of course, considerable technomoral virtue is already required in order to frame and execute ethically constructive applications of surveillance technologies. We must ask ourselves where such exemplary virtues can be found today. If they are widely lacking in the relevant institutions and cultures, then the most pressing question for us is this: How can we begin to design and implement now educational and cultural projects to enhance the cultivation of technomoral virtue? For this is the only way to ensure that new surveillance technologies are wisely deployed to promote human flourishing in an increasingly opaque future—as opposed to the indiscriminate, extreme, and reductive uses of these technologies all too often favored by defenders of a new, radically transparent surveillance society.
(4.) Olmstead v. United States, 277 U.S. 438 (1928).
(8.) See video at http://www.huffingtonpost.com/2009/12/07/google-ceo-on-privacy-if_n_383105.html. March 18, 2010, updated May 25, 2011.
(17.) Plato, Apology, 37e-38a.
(21.) Mencius 7A5.
(23.) Mencius 2A2.