Society and the Internet: How Networks of Information and Communication are Changing Our Lives

Mark Graham and William H. Dutton

Print publication date: 2019

Print ISBN-13: 9780198843498

Published to Oxford Scholarship Online: September 2019

DOI: 10.1093/oso/9780198843498.001.0001


The Political Economy of Digital Health

Chapter:
16 The Political Economy of Digital Health
Source:
Society and the Internet
Author(s):

Gina Neff

Publisher:
Oxford University Press
DOI:10.1093/oso/9780198843498.003.0017

Abstract and Keywords

The Internet and digital media are increasingly seen as having enormous potential for solving problems facing healthcare systems. This chapter traces emerging “digital health” uses and applications, focusing on the political economy of data. For many people, the ability to access their own data through social media and connect with people with similar conditions holds enormous potential to empower them and improve healthcare decisions. For researchers, digital health tools present new forms of always-on data that may lead to major discoveries. Technology and telecommunications companies hope their customers’ data can answer key health questions or encourage healthier behavior. At the same time, Gina Neff argues that digital health raises policy and social equity concerns regarding sensitive personal data, and risks being treated as a silver bullet, a form of mere technological solutionism.

Keywords:   political economy, digital health, data

The broad label of “digital health” combines several distinct emerging social and technological innovations and trends for tracking and managing symptoms and health conditions and for using digital tools for encouraging healthy behaviors and discouraging unhealthy ones. Increasingly, people look to digital health to solve the problems facing healthcare systems in both developed and developing economies, to improve the health and wellness of individuals and populations, and to lower healthcare costs. This chapter traces emerging uses and applications of digital health, with a particular focus on the political economy of the data that such programs generate. For many people, digital health can mean the ability to use social media to connect with people with similar conditions and the power to access, analyze, and query their own data. These activities hold enormous potential to improve decisions about healthcare, expand access to health information, increase social support, and empower patients in their conversations with their healthcare providers. For researchers, digital-health tools present new forms of always-on mobile and Web-use data that may enable new discoveries about how daily behaviors are connected to particular health conditions—and one day may even help in their diagnosis. For technology and telecommunications companies, digital health provides another access point for customers’ potentially valuable data, and the opportunity to create goods and services to address a market demand.

The challenge of digital health for both patients and citizens is that the development of digital-health technologies outpaces the creation of ethical and legal frameworks for mitigating social equity concerns and for protecting people’s privacy. Without such frameworks, digital-health innovations may inadvertently lead to new modes of discrimination. For these reasons, the emergence of digital health is a useful case study for Internet scholars of how social institutions, legal frameworks, and professional norms shape the production, circulation, uses, and meanings of digital data.

What follows is an overview of how individuals and healthcare systems alike are marshaling digital technologies to improve health and wellness. I then outline four key ways that digital data changes the terms by which people participate in decisions and choices about their own health and wellness. I conclude by addressing four challenges to confront in the political economy of digital-health data. For practitioners, I argue that the lack of strong legal and regulatory norms around such data prevents it from being utilized to its full potential. For researchers of Internet studies, I argue that digital health provides a rich arena for furthering our understanding of the everyday practices and emerging norms of digital technologies.

Personal Data as a Resource for Health

Smartphones and wearable devices can help people to self-track: hours slept, steps taken, calories consumed, and medications administered. Technology producers shipped well over 125 million wearable sensors globally in 2018. The implications for how this data is used are urgent and important. In Self-Tracking, my co-author Dawn Nafus and I look at the emergence of digital health as a social and technological assemblage of the tools, practices, and communities of self-tracking data. Digital health relies on all three. We look at how to bring the tools and methods of social theory to understanding how self-tracking data is recorded, analyzed, reflected upon, and acted on. Communities form around digital self-tracking data, advocates argue over how the data should and could be used, and industries create new ways to buy, sell, and share it.1

It is yet to be seen how this process of turning everyday experience into digital data might affect healthcare delivery and healthcare systems. Advocates see enormous possibility for using data from wearable devices, Internet searches, smartphones, and other sources of so-called personal data for health purposes. While there is a growing consumer-oriented market in digital wellness, exemplified by fitness trackers, smart watches, and Internet-connected devices such as scales and heart monitors, the applications of this data in medicine are not yet clear. Such emerging trends have been called “digital medicine.” According to one definition, digital medicine means

using digital tools to upgrade the practice of medicine to one that is high-definition and far more individualized. It encompasses our ability to digitize human beings using biosensors that track our complex physiologic systems, but also the means to process the vast data generated via algorithms, cloud computing, and artificial intelligence. It has the potential to democratize medicine, with smartphones as the hub, enabling each individual to generate their own real world data and being far more engaged with their health. Add to this new imaging tools, mobile device laboratory capabilities, end-to-end digital clinical trials, telemedicine, and one can see there is a remarkable array of transformative technology which lays the groundwork for a new form of healthcare.2

Currently, healthcare protocols rely instead on data collected or confirmed by healthcare providers, and few physicians can imagine how they would ethically or professionally deal with an onslaught of daily individualized tracking data about each of their patients within their current workloads. In one of our first interviews, a researcher who was studying sensing technologies used for elder care said that he was surprised at doctors’ resistance to accepting the enormous amount of data generated from “smart homes” for their aging-in-place patients. One of the doctors expressed her problem as “I don’t need more data; I need more resources.”3

As such, different stakeholders in digital health and wellness contest the meanings and uses of the term “data.” Researchers, clinicians, patients, and citizens all voice distinct meanings for what they consider data. For example, from the point of view of the doctor quoted above, digital self-tracking data requires extra interpretive, clerical, and managerial labor, and provides little benefit to clinicians making decisions. Presumably such data costs more in time and money per patient and increases the clinician’s exposure to liability risk.

Still, narratives about the power of personal data to utterly transform healthcare hold captive the popular imagination about such tools. Three broad categories of the use of digital data for health and wellness are commonly discussed. First is the discovery of new insights, either for awareness of the self or for understanding the health of populations. A key example is how wide-scale direct-to-consumer genomic testing, such as 23andMe, might lead to novel discoveries. The second is that the persistence and prevalence of our smartphones and other digital devices, coupled with a seemingly magical power of data, can help “nudge” people toward healthier behaviors. Fitness trackers, sleep trackers, and food-tracking apps make such promises, at least in their marketing materials if not through research trials. Third is that digital tools can supplement the diagnosis and treatment of disease, perhaps by using digital habits, smartphone sensors, and Web search information as digital indicators of the onset or early symptoms of disease in individuals, or for digital disease detection, or “infodemiology,” in public health. Google Flu Trends was one of the most visible of these endeavors, but researchers have worked toward using digital trace data to indicate the onset of an episode of disordered eating, movement disorders such as Parkinson’s disease, or mood disorders such as depression and bipolar disorder.4
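
As an illustration of this third category, the sketch below shows, in the most schematic terms, how a researcher might look for a sustained change in one self-tracked behavior (daily step counts) as a possible signal worth investigating. It is a hypothetical example only: the window sizes, threshold, and function names are my own assumptions, not a validated clinical method or any specific project described in this chapter.

```python
# Hypothetical sketch: flagging a sustained drop in self-tracked daily activity,
# the kind of behavioral signal researchers have explored as a possible early
# indicator of mood or movement disorders. Thresholds and windows here are
# illustrative assumptions, not clinically validated values.

from statistics import mean

def flag_activity_drop(daily_steps, baseline_days=28, recent_days=7, drop_ratio=0.5):
    """Return True if the recent average falls below drop_ratio times the baseline average."""
    if len(daily_steps) < baseline_days + recent_days:
        return False  # not enough history to compare against
    baseline = mean(daily_steps[-(baseline_days + recent_days):-recent_days])
    recent = mean(daily_steps[-recent_days:])
    return baseline > 0 and recent < drop_ratio * baseline

# Example: a month of roughly 8,000 steps a day followed by a week near 3,000.
history = [8000] * 28 + [3000] * 7
print(flag_activity_drop(history))  # True: recent activity is well below the person's baseline
```

Even a toy example like this makes visible why context matters: a flagged week could reflect illness, a holiday, or simply a phone left at home, which is one reason such signals remain research leads rather than diagnoses.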

In the US, the National Institutes of Health have increased research funding for the health applications of “mobile, imaging, pervasive sensing, social media and location tracking,” or MISST technologies, almost fourfold over the last decade and twofold over the last five years.5 In the UK, the embrace of digital technologies has led the National Health Service to create NHS Digital to “transform” healthcare within the country, including through managing a trusted library of apps and creating best practices for online personal health records.

Such digital-health efforts often lack attention to the ethical and social implications of amassing personal data. One review found that digital health needs “technologists, ethicists, regulators, researchers, privacy experts and research participants [to be] included in shaping & iterating the ethical frameworks intended to inform participants and protect [their] privacy.”6 Without such concerted efforts to think about privacy and discrimination, digital health programs may risk reproducing social inequality and inadvertently supporting discrimination.7 For example, qualitative research with digital app users found that people expressed surprisingly little concern when their health-related data was shared with corporations, and that a “widespread rhetoric of personalization and social sharing in ‘user-generated culture’ appears to facilitate an understanding of user-generated health data that deemphasizes the risk of exploitation in favor of loosely defined benefits to individual and social well-being.”8

The lines between medical devices and consumer goods and between health and wellness are blurring in ways that have enormous potential for both benefit and harm to people. In terms of benefits, digital data has the capacity to link and connect people in unprecedented ways, and trace data from smartphones and Web searches is being researched for its potential as a health indicator. However, the aggregation and collection of widespread personal data that may reveal health information carries potential for social and cultural harms. Large, population-wide data sets highlight the impossibility of anonymity. Popular mobile phone apps for health and wellness that are less effective at what they claim to do may present more risks to privacy than the benefits they offer. Lack of transparency and clarity about who has access to data and for what purposes it will be used may present real harm to people. Without clear lines to separate consumer goods from medical devices, the potential for harmful unintended consequences is real.

With this context for the political economy of personal data in mind, I look below at four key factors that shape how people can use their own digital data for improving their health and wellness. Each factor is accompanied by a real-world case study.

Data Plays Multiple Roles

Research that I undertook with Brittany Fiore-Gartland showed the radically different expectations that different groups of people had for data across digital health. We called these expectations “valences” and mapped different valences onto the different actions that stakeholders in digital health take.9

An example of the value of digital health data having different meanings in different communities is the Nightscout project. Started by parents of children with type 1 diabetes, Nightscout is an open-source, do-it-yourself modification of a medical device (a particular brand of continuous blood-glucose monitor, or CGM) to display its data on a smartwatch or smartphone. When the Nightscout project began, none of the major medical device makers offered the feature of displaying data on anything other than their own hardware. Nightscout shows what kinds of innovation are possible when users, patients, and their loved ones are involved in design. It also shows some of the complications that arise from the distinction between clinical and nonclinical data. For example, Nightscout changes which screen the data is displayed on—from a single-purpose monitor display, to a display that the user is more likely to see. It does not recalculate that data, or make a medical recommendation, but the US Food and Drug Administration (FDA) asserts authority over medical device data displayed in different formats. Regulators want to be sure that readings are not lost in translation and that they are safe and reliable. At the project’s beginning, the designers behind it wrote on Twitter #WeAreNotWaiting “for others to decide if, when, and how we access and use data from our own bodies.” In this case, the data has real usefulness to communities because of its timeliness and its ability to be shared. Before a commercial solution was available, parents took it into their own hands to make sure that they could have multiple displays of data from their children’s CGMs. These kinds of innovative, DIY projects may run ahead of the pace of formal medical processes. But they show what patients and their loved ones can do with their own data—and why they might want to have it.
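
The general pattern behind Nightscout-style projects can be sketched in a few lines: read the latest glucose value from wherever the monitor makes it available, then republish it somewhere a parent or caregiver can see it, without altering the reading. The sketch below is only an illustration of that pattern, not Nightscout’s actual implementation (which is a Node.js web application); the file path, field names, and upload URL are hypothetical placeholders.

```python
# Illustrative sketch of a Nightscout-style relay: poll a local export of CGM
# readings and forward the most recent one, unchanged, to a remote display.
# Not Nightscout's real code; paths, field names, and the endpoint are invented.

import json
import time
import urllib.request

READINGS_FILE = "cgm_readings.json"                    # hypothetical local export from a CGM
DISPLAY_ENDPOINT = "https://example.org/api/entries"   # hypothetical remote display service

def read_latest_entry(path):
    """Read the most recent glucose entry from a local JSON export."""
    with open(path) as f:
        entries = json.load(f)
    return max(entries, key=lambda e: e["timestamp"])

def push_to_display(entry, url):
    """Forward the reading as-is: no recalculation, no medical recommendation."""
    data = json.dumps(entry).encode("utf-8")
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        latest = read_latest_entry(READINGS_FILE)
        push_to_display(latest, DISPLAY_ENDPOINT)
        time.sleep(300)  # poll every five minutes, roughly a CGM's reporting interval
```

Simple as it is, the sketch shows why regulators take an interest: the point of intervention is the display of medical data, not its calculation, which is exactly the boundary behind the concern about readings being lost in translation.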

Data as a Starting Point for Connection

The word “data,” Daniel Rosenberg wrote, is rhetorically used to mean “that which is given prior to an argument.”10 Change the givens, and you change the conversation. Data, then, can be seen as the input for people to craft their own stories, the opportunity to start new conversations, and the reason for connections with others. In other words, many people look to data as what Taylor and Van Every have called a “site for conversation,”11 and people may not prize accuracy as scientists and clinicians do, instead valuing data’s ability to help them connect with others. For example, the effectiveness of fitness trackers may be due less to their ability to accurately measure calories burned than to their ability to offer people the occasion for social support for their goals.

Healthcare Providers No Longer Have a Monopoly on Health Data

New kinds of digital data about health will fundamentally change the roles of doctors and healthcare providers in patient care. Data in the hands of more people could maintain the cultural script that already plays out in doctors’ offices (i.e., “Have you been measuring your glucose like I asked?”) or it could change that script by making different “givens” the start of the conversation. The move toward more people having detailed data about their everyday lives means, as my colleague Dr Anthony Back has said, that the physician and care team are but single points in an ever-widening stream of information.12 This change in the health data ecology may lead to new roles for physicians and healthcare providers: those of adviser and coach. While clinicians and researchers may have expertise in the results of randomized controlled trials, individuals will come to be seen as the experts in their own conditions and experience. Coupled with data, there may in the future be new understandings of what is considered normal. Front-line healthcare providers will need to grapple with these new inputs and make sense of them together with patients.

Data Grants the Power to Ask New Questions

Sensors on every smartphone open up the possibility for people who are not professional medical researchers to collect data about themselves and ask questions about it. The possibility of having access to and control over data about themselves changes how people can be involved not only in their own healthcare decisions but also in the co-creation of new insights at the boundaries of knowledge. With data, one has the ability to ask the next question.

For example, Aisling Ann O’Kane and her co-authors found that the information practices around shared data in online patient communities show how people “look at their own personal health information, information about the condition in general, and what their peers have experienced. This information seeking contributes to a more grounded understanding of what is normal for them, what is normal for the condition, and what is normal for peers who are similar to them.”13 These kinds of critical engagement with data happen in Quantified Self meetings, in support groups for people with chronic conditions, in rare disease communities, and in communities for people trying to “hack” or understand their symptoms and triggers in conditions that Western medicine does not yet fully understand, such as autoimmune disorders, migraines, and food allergies and sensitivities. These shifts toward individualized relationships to data may point to a future in which individuals are either empowered to take responsibility for the self-management of their own health14 or become an excuse for shifting the burden of care away from healthcare systems and onto patients and their families. Critical engagements with digital health data have been seen as “soft resistance” to the power of big data, as “participants assume multiple roles as project designers, data collectors, and critical sense-makers who rapidly shift priorities,” pushing back on idealized notions of unified, authoritative data sets capable of being understood in the absence of personal insight and deep contextual knowledge.15 In this sense, working with data, interrogating data, and raising critical questions afterwards is part of “doing” the self in the modern digital era: not uncovering the self as some sort of preexisting entity, but making and remaking the self with digital technologies and the practices of working with the data they generate.16

Thus, the values of digital data have enormous potential to help people engage in wellness activities and healthcare and improve their lives. However, self-tracking practices and tools are not quick fixes for the problems of healthcare. The data collected about people’s lives touches on their future in the workplace, in the marketplace, and as citizens. Whether digital data will empower or harm people ultimately depends on who can access and control that data. The future of data-driven health and wellness depends on decisions made during design about privacy, data flows, and business models. The future of digital data about health is fundamentally about what say people have in how knowledge about them is created and disseminated.

Challenges

Unfortunately, there are several challenges that corporations, regulators, policy-makers, and citizens must resolve before people can use their digital data to improve their health. These include improved individual access to and control over digital data about themselves and their health; better privacy and security at the device level; better legal protections for this data, including protection from harms that might arise; and clearly demonstrated and researched benefits and efficacy.

Access to, Use of, and Control over Data

When digital data can mediate so many different relationships, control over the meanings of data is a type of power. For too long, data about people’s health has been seen as a stable entity only interpretable by experts. When people have access to and use of their own data and control over decisions regarding who else may use data about them, they have enormous power to shape the narratives about themselves. If society does not address the questions of digital health data for whom, when, and why, then it will be a failure of social justice and an abuse of the trust that people have placed in the institutions of healthcare. The power to access this data leads to the power to query it and to pose the next questions, and that is power that should be in people’s hands as part of their decision-making process about their own lives.

Contextual Privacy, Security, and Transparency

As more varied types of digital data emerge as potential indicators of health, challenges of contextual privacy become more urgent. Philosopher Helen Nissenbaum offers contextual integrity as a model of how our expectations for privacy are closely coupled to the contexts in which our data is shared and generated.17 What might feel like an appropriate thing to share among my Facebook friends may seem like a violation to me when shared in my doctor’s office (or with my insurance company) and vice versa. For the moment, digital apps encourage their users to feel that there are no privacy violations when sharing information with corporations, especially in a market that sees personal information as the inevitable “cost” of otherwise free digital services.18 The challenge is that digital data from our Web searches and smartphones that has not typically been seen as so-called health data may soon be usable to predict health-related information about us. When this happens, the contextual integrity of that information is challenged, and people may experience the shift in the context of their data as a violation, regardless of the legal or technical definitions that apply to the situation. When information that has not been widely seen as health data is used to make decisions about our health—without our involvement—that will be seen as a violation of privacy and autonomy.
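
Nissenbaum’s point can be made concrete with a deliberately simplified sketch: whether a flow of personal information feels appropriate depends on the context and the recipient, not on the information type alone. The contexts, norms, and field names below are invented for illustration and are not drawn from her own formalization.

```python
# Simplified illustration of contextual integrity: the same information type can
# be appropriate in one context and a violation in another. The norms listed here
# are invented examples, not Nissenbaum's own framework.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # who transmits the information
    recipient: str   # who receives it
    info_type: str   # e.g. "step_count", "glucose_reading"
    context: str     # the social context in which the flow occurs

# Hypothetical norms: (context, info_type, recipient) combinations treated as appropriate.
APPROPRIATE_FLOWS = {
    ("clinical_care", "glucose_reading", "physician"),
    ("friendship", "step_count", "friend"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow violates contextual integrity if no norm of its context permits it."""
    return (flow.context, flow.info_type, flow.recipient) not in APPROPRIATE_FLOWS

# The same step-count data: fine among friends, a potential violation when it
# reaches an insurer outside any norm the person expects.
print(violates_contextual_integrity(Flow("user", "friend", "step_count", "friendship")))      # False
print(violates_contextual_integrity(Flow("user", "insurer", "step_count", "clinical_care")))  # True
```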

Freedom from Harms

Data from wearable self-tracking devices has enormous potential to help inform people about their day-to-day habits. Quantified Self communities have found creative ways to repurpose, revamp, and reimagine how people might use this data. Health data privacy policies, though, have been based upon protecting certain classes of information about individuals that have been collected in a healthcare setting. How might using data from our smartphones leave us open to potential harms? Data from wearable devices is emerging as a category of data that can be used in courts in the US. What happens when the risks of gathering and aggregating this data in the first place outweigh the benefits? Might digital data implicate family members? Without legal and cultural assurance that we have protection from the potential harms of digital data, the benefits for our health of using such devices in the first place can never be clear.

Demonstrated Benefits and Efficacy

There is still much work that needs to be done to qualitatively, quantitatively, and experimentally demonstrate the benefits and efficacy of digital data used for improving health. As we argued in Self-Tracking, making this data work often means some form of hacking or modification, so that the devices and their data work for you. But it means more than hacking software and hardware—it also means creating communities where you can make sense of the data and modifying your practices with and around the data. Most commercially available technologies are not helping people ask the right questions, and these devices and their interfaces come pre-loaded with a whole host of assumptions about their users and what those users might want out of their data. Some of the demonstrated success of using digital data from health wearables will come from the randomized controlled trials favored by researchers. Other demonstrations of the benefits of using digital data for health will come from people’s own stories and experience in examining and testing their own conditions and their own “normals.” Regardless, much more work needs to be done by activists, citizens, researchers, and regulators alike to show what works and what doesn’t in digital health. Otherwise, the data offers little more than false hope and promises in exchange for the risk of wide-scale surveillance. So much more is possible.

Conclusion

The science of digital health is emerging. There is an urgent need for Internet studies scholars to contribute. First, always-on mobile and Web data require different ethical and legal frameworks when they are used as health indicators. The work of figuring these out needs social scientists to help design and deploy systems that can empower people to have better control over and access to their digital health data. Second, digital health is social health, as people make sense of data together and use their data as a starting point for connection and further conversation. For researchers this means that digital health tools present opportunities and challenges for studying the social nature of networked data and for applying insights on distributed privacy, networks and behavior change, and sense-making. Third, new forms of always-on mobile and Web use data may enable new discoveries about how daily behaviors are connected to particular health conditions—and one day may even help in their diagnosis or management. For researchers of Internet studies, this means that digital health is a domain of everyday life where theories and insights on the emerging practices and norms of digital data can have significant impact.

Pragmatically, this means working with policy-makers, healthcare providers, and technology and telecommunications companies to ensure that digital health develops in a manner that protects customers’ potentially valuable data and takes seriously the healthcare maxim of first do no harm when it comes to the potential impact on people of their digital data. Without legal and regulatory norms and policies that prevent bias, discrimination, and harm, and without accountability, transparency, and fairness designed into the collection, analysis, and use of such data by technology companies, digital health will never be utilized to its full potential.

References


Back, A. and Neff, G. (2014). “New Roles for Physicians in the Era of Connected E-Patients.” Medicine X, Oral Presentation, September. Stanford University.

Dudhwala, F. (2017). “Doing the Self: An Ethnographic Analysis of the Quantified Self.” University of Oxford, PhD thesis. Available at https://ora.ox.ac.uk/objects/uuid:34b6097e-3568-4d81-ae79-7d65d2875177. (Accessed September 5, 2018).

Dunseath, S. et al. (2017). “NIH Support of Mobile, Imaging, Pervasive Sensing, Social Media and Location Tracking (MISST) Research: Laying the Foundation to Examine Research Ethics in the Digital Age,” npj Digital Medicine, 1(1). Available at https://doi.org/10.1038/s41746-017-0001-5. (Accessed September 5, 2018).

Ferryman, K. and Pitcan, M. (2018). “Fairness in Precision Medicine,” Data & Society, February. Available at https://datasociety.net/output/fairness-in-precision-medicine/. (Accessed March 28, 2019).

Fiore-Gartland, B. and Neff, G. (2015). “Communication, Mediation, and the Expectations of Data: Data Valences Across Health and Wellness Communities,” International Journal of Communication, 9(0): 1466–84. Available at http://ijoc.org/index.php/ijoc/article/view/2830. (Accessed September 5, 2018).

Nafus, D. and Sherman, J. (2014). “This One Does Not Go Up To 11: The Quantified Self Movement as an Alternative Big Data Practice,” International Journal of Communication, 8: 11.

Neff, G. and Nafus, D. (2016). Self-Tracking. Cambridge, MA: MIT Press.

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.

O’Kane, A. A. et al. (2016). “Turning to Peers: Integrating Understanding of the Self, the Condition, and Others’ Experiences in Making Sense of Complex Chronic Conditions,” Computer Supported Cooperative Work (CSCW) 25(6): 477–501. Available at https://doi.org/10.1007/s10606-016-9260-y. (Accessed September 5, 2018).

Ostherr, K. et al. (2017). “Trust and Privacy in the Context of User-Generated Health Data,” Big Data & Society, 4(1) (April/June): 1–11. Available at https://doi.org/10.1177/2053951717704673. (Accessed September 5, 2018).

Rosenberg, D. (2013). “Data before the Fact,” in Lisa Gitelman (ed.), Raw Data Is an Oxymoron. Cambridge, MA: MIT Press, 15–40.

Steinhubl, S. R. and Topol, E. J. (2017). “Digital Medicine, on Its Way to Being Just Plain Medicine,” npj Digital Medicine, 1(1). Available at https://doi.org/10.1038/s41746-017-0005-1. (Accessed September 5, 2018).

Swan, M. (2012). “Scaling Crowdsourced Health Studies: The Emergence of a New Form of Contract Research Organization,” Personalized Medicine 9(2): 223–4. Available at https://doi.org/10.2217/pme.11.97. (Accessed September 5, 2018).

Taylor, J. R. and Van Every, E. J. (2000). The Emergent Organization: Communication as Its Site and Surface. Mahwah, NJ: Lawrence Erlbaum Associates.

Yom-Tov, E. (2016). Crowdsourced Health: How What You Do on the Internet Will Improve Medicine. Cambridge, MA: MIT Press.

Notes:

(1) Neff and Nafus (2016).

(2) Steinhubl and Topol (2017).

(3) Fiore-Gartland and Neff (2015).

(4) See Yom-Tov (2016) for an overview.

(5) Dunseath et al. (2017).

(6) Ibid.

(7) Ferryman and Pitcan (2018).

(8) Ostherr et al. (2017).

(9) Fiore-Gartland and Neff (2015).

(10) Daniel Rosenberg (2013). “Data before the Fact,” in Lisa Gitelman (ed.), Raw Data Is an Oxymoron. Cambridge, MA: MIT Press, 15–40.

(11) James R. Taylor and Elizabeth J. Van Every (2000). The Emergent Organization: Communication as Its Site and Surface. Mahwah, NJ: Lawrence Erlbaum Associates.

(12) Anthony Back and Gina Neff (September 2014). “New Roles for Physicians in the Era of Connected E-Patients,” Medicine X, Stanford University.

(13) O’Kane et al. (2016).

(14) Swan (2012).

(15) Nafus and Sherman (2014).

(16) Dudhwala (2017).

(17) Nissenbaum (2009).

(18) Ostherr et al. (2017).