Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

Co-design and ethical artificial intelligence for health: An agenda for critical research and practice:

Co-design and ethical artificial intelligence for health: An agenda for critical research and... Applications of artificial intelligence/machine learning (AI/ML) in health care are dynamic and rapidly growing. One strat- egy for anticipating and addressing ethical challenges related to AI/ML for health care is patient and public involvement in the design of those technologies – often referred to as ‘co-design’. Co-design has a diverse intellectual and practical his- tory, however, and has been conceptualized in many different ways. Moreover, AI/ML introduces challenges to co-design that are often underappreciated. Informed by perspectives from critical data studies and critical digital health studies, we review the research literature on involvement in health care, and involvement in design, and examine the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health care. We suggest that AI/ML technologies have amplified and modified existing challenges related to patient and public involvement, and created entirely new challenges. We outline three pitfalls associated with co-design for ethical AI/ML for health care and conclude with suggestions for addressing these practical and conceptual challenges. Keywords Co-design, participatory design, artificial intelligence, health care, design ethics, data ethics approaches to technology development that aim to Introduction involve end-users as meaningful participants in the design The contemporary field of artificial intelligence/machine process, co-design is often mobilized as a strategy to learning (AI/ML) is dynamic and rapidly growing, charac- improve fairness, accountability, and transparency of algo- terized as central to a ‘4th industrial revolution’ that com- rithmic systems (Aizenberg and van den Hoven, 2020; mentators suggest will impact virtually all aspects of our Malizia and Carta, 2020; Whitman et al., 2018). lives (Couldry and Mejias, 2019; Schwab, 2017; Zuboff, Co-design is also closely allied to other trends in health 2019). Although AI/ML technologies are multi-purpose, and health care, including patient engagement, PPI and they are particularly consequential in health care, where patient and family-centred care (PFCC). Co-design and its concerns range from the changing nature of the patient–pro- variants have a diverse intellectual and practical history, vider relationship (Goldhahn et al., 2018; Topol, 2019), to however, and have been conceptualized in many different the ways in which AI/ML technologies exacerbate existing societal inequities (Benjamin, 2019; D’Ignazio and Klein, 2020; Eubanks, 2018; Noble, 2018). As a result, there has been an increased acknowledgement by corporate, govern- Institute of Health Policy, Management & Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada ment, and academic actors alike that AI needs ‘ethics’. How Joint Centre for Bioethics, Dalla Lana School of Public Health, University these ‘ethics’ are meant to be established and applied, of Toronto, Toronto, ON, Canada however, has led to significant debate (we discuss our 3 Institute for Health System Solutions and Virtual Care, Women’s College own notion of ethics in this paper in ‘Theoretical Hospital, Toronto, ON, Canada approach’). Corresponding author: One strategy for anticipating and addressing the poten- Joseph Donia, Institute of Health Policy, Management & Evaluation, Dalla tial benefits and harms of AI/ML for health is patient and Lana School of Public Health, University of Toronto, 155 College St, public involvement (PPI) in the design of those technolo- Toronto, ON, Canada gies, often referred to as co-design. As a category of Email: joseph.donia@mail.utoronto.ca Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https:// creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). 2 Big Data & Society ways. Moreover, the meaning and value of co-design are ML technologies, one must acknowledge how deeply inter- challenged by AI/ML systems, where users will always twined they are with the human and material realities that play some role in the production of those systems, for shape their existence in the world. example in producing data used to train models. As such, The second concept is ‘surveillance’, which has come to the extent to which co-design should be considered a suita- signify the consequences of mass data collection on human ble approach to ethical AI/ML has recently come into ques- experience and action, spurring the development of an tion (Sloane et al., 2020). entire field of research referred to as surveillance studies Informed by perspectives from critical data studies (Cheney-Lippold, 2017; Lyon, 2010). The notion at the (CDS; boyd and Crawford, 2012; Dalton and Thatcher, root of studies of surveillance is that the act of collecting 2014; Kitchin and Lauriault, 2014) and critical digital data about peoples’ activities has significant influence on health studies (CDHS; Lupton, 2016, 2017a), in this the activities in which they engage. This is true for both paper we outline three pitfalls associated with co-design individuals and populations and has novel implications in for ethical AI/ML for health based on common assumptions health and health care contexts. arising from health care and co-design discourse. We start The final concept influencing our analysis is that of the by presenting our theoretical approach in some detail, out- ‘political economy’ of data and digital technologies, refer- lining three concepts from CDS and CDHS that inform our ring to the particular economic assumptions and institutions analysis. We then present a brief description of practices of that are supported by AI/ML technologies and the organiza- involvement in design, and involvement in health care, tions by which they are developed and used. The concept is leading into a summary of overarching risks and pitfalls more accurately described as ‘political economy’ as for consideration, and conclude by outlining important opposed to just ‘economy’ to represent the inevitable exist- directions for future research and practice in this area. ence of competition for control over resources that comes along with the capitalist economic system in which we find ourselves (Couldry and Mejias, 2019; Zuboff, 2019). Theoretical approach We also acknowledge the importance of clarifying how Our analysis of involvement in the design of AI/ML for ‘ethics’ is approached in our paper. We mobilize ethics in health care is shaped by perspectives from CDS and two distinct senses. In the first sense, ethics refers to the CDHS. CDS is an interdisciplinary field, bringing together principles, values and frameworks that are used in the AI methods and perspectives from across media studies, industry to guide the development of AI technologies in sociology, anthropology, human geography, and design, ways that are deemed by stakeholders allied to the industry among others. While the field is diverse, CDS is united to be morally good (Ananny, 2016). A prime example of by a concern with the social, cultural, ethical and political such principles, values and frameworks is the growing challenges posed by data, including how they are consti- attention to fairness, accountability and transparency in tuted within wider data assemblages (Iliadis and Russo, ML as a strategy to enhance the ethical status of AI technol- 2016; Kitchin and Lauriault, 2014). ogies. This sense of ethics aligns with the study of ‘practical A related field is CDHS. While a number of scholars ethics’ or the actions and decisions that people perceive to have engaged critically with how health technologies constitute ethical practice in their everyday work related to (including health information technologies) have influenced AI (Ananny, 2016; Metcalf and Moss, 2019). health and illness (Clarke et al., 2003; Mol, 2008; Rose, However, we also employ the concept of ethics in a 2007), Lupton (2014b, 2016, 2017a, 2017b) outlines the second sense. This second sense refers to the normative unique challenges posed by digital health technologies, commitments associated with CDS and CDHS, which are including the ways in which they contribute to evolving fundamentally oriented towards advancing social justice. notions of ‘health’, ‘illness’ and ‘care’ (Lupton, 2014a, For this reason, these fields of work emphasize the cluster 2016). of concepts we have outlined (socio-materiality, surveil- Three concepts in particular from these interdisciplinary lance and political economy) in normatively motivated ana- domains inform the analysis of involvement in health- lyses that contribute to the achievement of a more just world related AI/ML development presented in this paper. The for all. Such an approach to ethics is explicitly focused on first is ‘socio-materiality’, which indicates that AI/ML tech- the operations of power and the redistribution of goods to nologies are not simply digital algorithms that happen to be those in positions of relative disadvantage. This second embedded in a variety of devices. Rather, AI/ML technolo- sense of ethics motivates the critical analysis we bring to gies are better understood as a collection of digital algo- co-design in this paper and the practical strategies we rithms, technological devices, telecommunications outline in our concluding sections. infrastructures, human goals and human rules that cohere Finally, in addition to the normative and theoretical posi- together into ‘assemblages’ that represent specific AI/ML tions outlined here, we also intend to clarify our view on the technologies (Kitchin and Lauriault, 2014). If one is to concept of co-design. It is important to note that co-design understand the ethical significance of co-design for AI/ has been represented in the research literature in a variety of Donia 3 ways (Sanders and Stappers, 2008). For example, it may typically mobilized in support of improving the acceptance refer to any form of involvement in design or be used to or usability of technology. describe a particular form of involvement distinct from More recent work has significantly expanded the topics related approaches such as participatory design. and questions with which co-design engages, with contribu- Involvement may occur throughout the design process, or tions spanning sociology, anthropology, design studies, only at particular stages. It may be employed as a strategy human–computer interaction (HCI) and computer- to improve usability and acceptance of technology or to supportive cooperative work, among others. While a full elicit stakeholder values. These differences indicate a review is beyond the scope of this paper, contributions diverse field of research and practice, where scholarly com- most relevant to our argument in this paper tend to fall munities are concerned with similar topics, but enjoy only into one of the following categories. partial overlap of assumptions and motivations upon First are contributions that explicitly attend to power and which they are based. Nonetheless, we believe there is a the social–cultural–political contexts that give rise to and clear conceptual benefit to critically examining the field shape co-design. For example, design justice of research and practice as a whole. This paper, therefore, (Costanza-Chock, 2020) is an intersectional approach to uses ‘co-design’ as an umbrella term for approaches that design that engages with how designed artefacts impact involve end-users, patients or publics in any stage of the upon dominant and oppressed groups in society, emphasiz- design process. ing mechanisms for community accountability and control. Similarly, Escobar, in Designs for the Pluriverse (2018), argues for the decolonization of design through collabora- Involvement in design tive practices that are place-based, resist dependence on Within design scholarship, formal involvement in design is markets, and are more accountable to the needs of commu- most commonly attributed to Scandinavian approaches in nities. Mainsah and Morrison (2014) and Harrington et al. the late 1970s and early 1980s which attempted to address (2019) apply a post-colonial lens to design, with workplace transformations brought about by computers. Harrington et al. (2019) identifying considerations for Inspired by action research, these early examples involved more equitable co-design with marginalized groups, such very little ‘design’ per se, but rather emphasized the import- as emphasizing attention to historical context, community ance of providing workers and union officials with the requi- access, and unintentional harms of design. site knowledge and skills to understand the potential impacts The second relevant category of contributions focuses of computer systems on their work, with the ultimate aim of specifically on advancing new conceptual or methodological strengthening collective bargaining strategies (Vines et al., approaches to co-design. These include the related domains 2013). This is perhaps best exemplified by the collective of futures design, design fiction and speculative design, resource approach, which convened ‘independent study which contain core participatory elements (Forlano and groups’ comprised of union members and academic Mathew, 2014; Harrington and Dillahunt, 2021; Lupton, researchers (Kraft and Bansler, 1994). These earliest forms 2017b; Ollenburg, 2019; Tran O’Leary et al., 2019; of co-design were explicitly politically engaged, emphasiz- Tsekleves et al., 2017; Zaidi, 2019). Approaches inspired ing productive tension over immediate consensus in arriving by actor-network theory are also highly relevant (Latour, at a decision (Björgvinsson et al., 2012b). Worker control 2005), where co-design is conceptualized as a site for ‘infra- and agency were explicit aims (Vines et al., 2013), and structuring’ or forming publics around ‘matters of concern’ most Scandinavian-inspired co-design today is characterized (Andersen et al., 2015; Björgvinsson et al., 2012a; Dantec by two core assumptions: that those affected by a decision and DiSalvo, 2013; DiSalvo, 2012; Hillgren, 2013; should have a say in its making, and that stakeholders’ Pedersen, 2020; Rossitto, 2021; Storni, 2015). Finally, tacit knowledge is essential to the success of a design approaches drawing on the related field of values in design project (Björgvinsson et al., 2012b). (VID) to leverage co-design as a strategy for discovery, ana- Today, iterations of co-design methods and principles lysis and integration of values in technology design, also are reflected in many different but related approaches. provide important context for our work (Flanagan et al., User-centred design, for example, is an approach to 2005; Halloran et al., 2009). design that focuses on eliciting users’‘real needs’ to The third relevant category is a small but growing body improve the ‘fit’ between a user and a technology of literature specific to the design of data-intensive and (Norman and Draper, 1986). User experience design emerging digital technologies such as AI/ML. Sloane focuses on a user’s expected emotions and attitudes when et al. (2020) for example distinguish participation as work engaging with designed artefacts (Cooper et al., 2014). (e.g. in generating data), participation as consultation (e.g. Human-centred design similarly emphasizes the incorpora- in providing feedback) and participation as justice (e.g. tion of a ‘human perspective’ in all phases of the design longer-term partnerships, collaboration and capacity build- process (Giacomin, 2014). While not exhaustive, these ing) in AI/ML projects, outlining various conceptualiza- represent more established approaches, where PPI is tions of what participation might entail in co-design. 4 Big Data & Society Bødker and Kyng (2018) critique what they see as the dom- policy (Abelson et al., 2004); and quality improvement and inant view of participation as a goal in itself, and outline the innovation (Donetto et al., 2015). ‘big issues’ of participatory design which have been high- PPI is also closely linked to other influential ideas about lighted by advanced digital technologies. Shifting attention how health care should be organized, and to whom health to the data that are so crucial for AI/ML technologies, care decision-makers should be accountable. PFCC has Seidelin et al. (2020) focus specifically on how data been defined as: ‘The experience (to the extent the might be better represented through co-design activities. informed, individual patient desires it) of transparency, This third category of contributions emphasizes practical and individualization, recognition, respect, dignity, and choice conceptual challenges in the co-design of AI/ML and other in all matters, without exception, related to one’s person, digital technologies specifically and is most closely linked to circumstances, and relationships in health care’ (Berwick, the issues we address in our paper. We build especially on 2009, p. 560). The basic ideas underpinning PFCC, insights from this latter domain of work that are directly however, are much older. Hippocrates urged physicians to linked to health and health care. One salient example is the ‘investigate the entire patient’ (Boivin, 2012). At the turn observation that even health-specific AI/ML technologies, and of the 20th century, Canadian physician William Osler is the resources and infrastructures upon which they rely, are typi- noted for orienting medical education towards the needs cally generated outside of traditional health and medical settings of the patient rather than the disease. and rely on logics not solely associated with the maintenance or As with co-design, conceptualizations of PPI and PFCC improvement of health (Bot et al., 2019; Sharon, 2018). Other vary considerably. Conceptual discussions of PPI have for insights we glean from this body of work are more general, example distinguished between democratic and consumer- but take on a unique meaning in health-related contexts. For ist rationales (Wait and Nolte, 2006); direct/indirect and example, that patients or publics ‘participate’ in the design of proactive/reactive forms of involvement (Tritter, 2009); algorithmic systems in ways that are unwitting or involuntary outcome-oriented versus process-oriented involvement (Vines et al., 2013), such as in producing data upon which AI/ (Ives et al., 2013); and domains of involvement, such as ML algorithms are trained (Sloane et al., 2020); that AI/ML direct care, organization or policy (Carman et al., 2013). technologies can be modified or re-purposed after deployment Others view PPI as existing on a continuum (Gibson to accomplish new health and non-health-related goals et al., 2012), or as an ongoing process of organizing, (Kitchin, 2017); the ‘black box’ nature of AI/ML algorithms where patient roles and identities are constantly being (Pasquale, 2015), which limits what can be known and formed and negotiated (Rowland and Kumagai, 2018). addressed through co-design and has particular salience in Notwithstanding these practical and conceptual chal- understanding decisions about medical care; and the challenge lenges, interest in PPI has increased, and as information of accounting for how data and insights associated with AI/ technologies have matured and become more deeply ML technologies will be used in future health and embedded in health care, strategies and perspectives from non-health-related applications (Ruckenstein and Schüll, design and related fields have also increased in prominence. 2017). These challenges inform our co-design risks and pitfalls The fields of health and biomedical informatics (HI), for presented in‘Involvement in health care’ and‘Pitfalls associated example, increasingly engage with methods and theoretical with co-design and ethical AI/ML for health’, respectively. perspectives from HCI, despite the paradigmatic differences that have historically made collaboration difficult. While HI and HCI share an interest in the variety of ways people Involvement in health care engage with technologies in diverse use-contexts, they PPI in health care has an equally long and complex history, often do so via different methods (e.g. experimental however, formal involvement arrangements can be traced to research designs vs. design-based methods); publication social movements initiated by feminist, queer and disability venues (e.g. peer-reviewed journals vs. conferences); and rights activists in the 1970s and 1980s (Brown and topics (e.g. clinical settings vs. consumer applications) Zavestoski, 2004; Busfield, 2017). These movements (Kim, 2019). Some of these divides are narrowing, rebuked medical paternalism and sought to legitimate experi- however, as health services researchers seek new ential or embodied knowledge in bringing about changes to approaches capable of addressing complex design, imple- the institutions of medicine. In 1974, the United Kingdom’s mentation and evaluation challenges posed by advanced National Health Services established Community Health digital technologies (Pham et al., 2016; Shaw et al., 2018). Councils as the first example of institutionally supported Today, the focus of health-related technology design is PPI, with a mandate to improve local service delivery and shifting once again, as information systems and the goals accountability (Hogg, 2007). While formal PPI has since they are intended to accomplish, continue to evolve. Some, taken on different profiles around the world, it continues to for example, propose that HCI and related fields find them- hold interest in many areas of health research and practice, selves in a new wave concerned primarily with persuasion including health professions education (Rowland et al., (Fogg et al., 2007). AI/ML applications in health are broad, 2018); health care research (Greenhalgh et al., 2019); health butinmanyinstances ‘nudge’ attitudes or behaviours either Donia 5 through direct intervention or by providing tailored informa- impact of involvement on decision-making) do not imply a tion (Yeung, 2017). At the individual/patient level, for stronger focus on the entirety of a sociotechnical system, example, research in digital behaviour change incorporates much of which is out of view for both users and designers methods and perspectives from design and psychology to of AI/ML technologies. Attending to this broader sociotech- accomplish self-management of medical conditions, or nical system is especially important when considering out- health promotion via behaviour modification (Michie et al., comes related to AI/ML technologies for health, where 2017). AI/ML has also been used in epidemiological model- novel forms of health surveillance, combined with the ling and forecasting (Lalmuanawma et al., 2020), clinical increasing value of health-related data, introduce ethically decision support (Montani and Striani, 2019), and health salient issues. care operations and logistics (Obermeyer et al., 2019). Scholarship and practice related to co-design and PPI Involvement of patients or publics in the design of advanced tend to emphasize the procedures and qualities of participa- digital technologies often emphasizes the inherent patient- tion or involvement, for example attending to the import- centred or empowering qualities of co-design approaches ance of processual and contextual characteristics of (Capecci et al., 2018; Enshaeifar et al., 2018; Triberti and involvement, ‘moments’ or ‘stages’ of involvement, Barello, 2016) or AI/ML technologies (Topol, 2019), espe- patient and public latitude in decision-making, organiza- cially when directed to health-related goals. As such, tional support for involvement, and the proximate impacts co-design, PPI, and PFCC afford legitimacy to AI/ML tech- of those characteristics on designed artefacts (Abelson nologies for health, though the extent to which they always et al., 2010; Frauenberger et al., 2015; Kensing and should, remains a topic of debate. Blomberg, 1998). The implicit assumption advanced by We see the affordances of AI/ML technologies for health these viewpoints is that better involvement strategies will and the challenges they pose to co-design presenting three result in better design outcomes, as evaluated by the main risks that give rise to the pitfalls presented in the fol- impacts of those strategies on design products. This per- lowing section. First, co-design risks adding new harms to spective is complicated by normative and epistemic chal- health systems as a result of putting forward innovations lenges related to AI/ML for health that result in an overly that have not been designed with unintended consequences narrow view of ethically salient issues for health-related in mind. These include the ways in which AI/ML technol- co-design, of which we outline just three below. We ogies can be instantly adapted or modified to suit new goals, contend that processes of co-design that encourage a nar- for which patients and publics have no input once the tech- rower emphasis on practices of involvement risk losing nology has been deployed. Second, co-design risks instru- sight of the broader sociotechnical system, and the crucial mentalizing patients, using their involvement in the normative issues embedded in those systems that surround design of an AI/ML technology to make advances the co-design process. towards achieving pre-existing goals established by those First, AI/ML technologies are capable of analysing in positions of power. In AI/ML for health, this power is increasingly large volumes of data, introducing forms of increasingly distributed among a diverse range of private surveillance not previously possible. AI/ML technologies actors. Third, co-design that is explicitly focused on the by definition discriminate between ‘measurable types’ or design of technologies risks obfuscating societal injustices classifications of meaning based on available data when the involvement of patients or publics focuses only (Cheney-Lippold, 2017), where classifications are largely on those problems which can be solved by technologies. invisible to those they are applied to, and determined by We now shift to a description of three main pitfalls asso- those with the power to know their significance. The diver- ciated with co-design for ethical AI/ML for health. We sity of actors and interests in digital health mean that clas- suggest that attention to these pitfalls is essential to determining sifications implicate ‘health’ in many different ways, the appropriateness and feasibility of AI/ML co-design for however. For example, Sharon (2018), drawing on health andthatbyaddressingthem, it maybepossibleto Boltanski and Thévenot (1999), identifies five different advance approaches to co-design that better equip it to orders of worth animating conceptualizations of the engage with the normative issues raised by those technologies. common good in health-related research led by large tech- nology companies: ‘civic’ (doing good for society), ‘market’ (enhancing wealth creation), ‘industrial’ (increas- Pitfalls associated with co-design ing efficiency), ‘project’ (innovation and experimentation), and ethical AI/ML for health and ‘vitalist’ (proliferating life). Their work illustrates the importance of considering the presence and strength of Pitfall #1: The tendency to place disproportionate influence of some orders of worth over others in different emphasis on procedures and qualities of involvement health-related co-design settings. The central point advanced with Pitfall #1 is that ‘better’ Second, and related, AI/ML technologies for health (and involvement strategies (which even in more critical the data upon which they rely) are of value to actors increas- approaches is often indicated by breadth, depth, or ingly distal to formal health and health care systems. While 6 Big Data & Society the everyday consequences of mostly invisible ‘measurable need to be more explicitly accounted for when considering types’ are often appreciated in terms of targeted advertising, the ethical salience of co-design. search recommendations, or dynamic pricing strategies, Implicit in any undertaking of co-design is the belief that they may also form the basis for insurance coverage and the approach is inherently more ethical than other design premium decisions. Credit rating companies, too, offer strategies not involving patients and publics. In contrast, crit- medical adherence risk-scoring products which allow ical scholarship has focused on ‘levelling the playing field’ in payers and providers to identify patients who may be at co-design processes, by articulating strategies for shared lan- higher risk for ‘non-compliance’ with medical treatments guage in design (Burrows et al., 2016), or studying how (Hogle, 2016). Fourcade and Healy (2017) have described co-design methods might ‘distort’ participation in favour of these developments in terms of an expanding ‘economy designers’ interests (Compagna and Kohlbacher, 2014). of moral judgement’, where health outcomes are experi- While these theoretical and practical developments are enced as morally deserved, based on prior ‘good’ or ‘bad’ crucial for enhancing the agency of patients and publics to health behaviours. participate more effectively in design, what is not explicitly Third, these logics can have the effect of responsibilizing acknowledged in many of these perspectives is how limita- health care, pushing monitoring and management further tions imposed on designers also influence design outcomes. into the domain of individual patients and caregivers This especially bears relevance in co-design, where (Rich et al., 2019), often through design decisions that designers are conventionally expected to move from a posi- nudge health-related behaviours. While frequently lauded tion of expert to ‘facilitator’ (Björgvinsson et al., 2012a; for their potential to more effectively engage patients in Farrington, 2016; Sanders and Stappers, 2008), ‘stager of their own care, these perspectives have been critiqued for negotiations’ (Pedersen, 2020), ‘agonistic Prometheus’ oversimplifying the meaning and value of engagement in (Storni, 2015), or ‘creator of a third space’ where knowledge digital health (Burr and Morley, 2020). Some, for exchange can occur (Muller, 2009). example, point to the unrecognized ‘repair work’ that The practices, goals and perspectives of designers are often accompanies the use of digital health technologies diverse, however, and influenced by a broad range of inter- (Forlano, 2020; Schwennesen, 2019). Moreover, as ests and values. These include other project stakeholders, Prainsack (2020) notes: ‘The very instrument of nudging professional norms, workplace culture, financial incentives, contains value judgements: It assumes that addressing the shareholders and broader economic trends. In health, these practices of people directly is better than changing struc- also crucially include the social and professional norms tural factors. It has been shown, however, that a focus on associated with biomedicine, and the epistemic privilege individual practices directs attention and resources away of evidence-based medicine (Chin-Yee and Upshur, 2019; from tackling the more structural, systemic characteristics Schwennesen, 2019), where ‘evidence-based’ is conven- that shape the problem in the first place’ (p. 11). tionally linked to the epistemic criteria of truth, validity, These evolving geographies of responsibility and foundationalism, and therefore especially quantitative (Schwennesen, 2019), asymmetries of knowledge and evidence (Upshur, 2001) – of which AI/ML is expected logics of efficiency would suggest that any claims to the to be transformative. ethical standing of co-design should be evaluated against Similarly, AI/ML systems are not static objects, but con- a much broader set of sociotechnical relations. This tingent and dynamic. For example, in a study of a physical would require that co-design not only attend to how AI/ rehabilitation algorithm intended to reduce in-person clinic ML technologies contribute to ‘medicalization’ or ‘com- visits, Schwennesen (2019) notes that crucially important modification’ as discrete outcomes of individual technolo- parameters used to assess the bodily movements of patients gies, but also the ways in which those technologies, once were not only determined by patients and physiotherapists, embedded in health and health care systems, transform but also the capabilities of the algorithmic system itself. A broader social, political, and economic fields. physiotherapist on the project notes: ‘We had to sit down and be pretty tough in setting priorities… If there were some parameters that dealt with what one does with the arms or something else, then of course, we could say, Pitfall #2: The tendency to focus attention primarily “The sensors can’t say anything about that”. So of course, on the agency of patients and publics in co-design that was automatically discarded’ (p. 181). The central point advanced with Pitfall #2 is that ‘better’ Acknowledging limits on designers’ agency underscores involvement does not mean that people are entirely free the importance of also attending to the agential capacities of from agential constraints that inevitably shape their parti- those leading design and development processes. By focusing cipation in design activities. These constraints not only attention only on enabling or empowering patients and apply to patients and publics, but others implicated in publics in isolated design events, strategies to improve the design processes too. Health, like other sectors, presents processes and outcomes of co-design risk are ineffective by its own unique constraints which continue to evolve, and failing to attend to the broader range of influences on the Donia 7 activities that take place during the design process. While import in the first place. For example, in their study of tech- some scholars have acknowledged these limitations on nological futures with youth participants in a Chicago designers’ agency (Bødker and Kyng, 2018; Hepworth, summer design program, Harrington and Dillahunt (2021) 2019; Vines et al., 2013), this consideration has yet to take a report that the primary challenges students described were more central role in co-design discourse, likely as a result of racism, police brutality, segregation, poverty and unfair a historical focus on empowering end-users of technologies. housing policies. Technological solutions solely targeting Avoiding the pitfall of attending only to the agency of the proximate issue of a health care outcome will therefore patients and publics at the expense of the agency of always be partial, where co-design of health-related AI/ML designers requires engagement with this broader ecosystem risks perpetuating institutional injustices. of design, expanding the view of who and what is consid- Second, the aim of representation in design is never ered relevant. Attending to this expanded ecosystem may completeness or objectivity, but practical usefulness illuminate strategies for co-design that go beyond the prox- (Asaro, 2000). The ways in which representation and inclu- imate issue of patient or public agency in artefact design, to sion are operationalized in design processes – typically in consideration of the institutional arrangements, technical the form of ‘average’ users or community members – artefacts, infrastructures, norms and social goals that have raises questions about exactly what is practically useful made the particular design event possible in the first place. and to whom. Where representation and inclusion obfuscate fundamental questions relating to power and privilege, there is a risk of entrenching the same problematic relations that technologies are intended to resolve. These biases take Pitfall #3: The tendency to neglect the broader on new forms when produced algorithmically. Will users contexts of representation & inclusion have the ability to contest categorizations such as The central point advanced with Pitfall #3 is that the inclu- ‘healthy’ or ‘not healthy’, ‘compliant’ or ‘non-compliant’? sion of communities in design processes does not necessa- Will they even be aware of them? rily address problems that lead to marginalization in the To avoid the tendency to neglect broader contexts of first place. Indeed, it rarely does, and instead risks sup- representation and inclusion, co-design can include provi- planting consideration of the causes of marginalization sions for reflecting on why particular individuals or with easy-to-use technological solutions that may actually groups are being pursued, what they are expected to stand exacerbate health inequities. in for, which upstream causes of health-related ‘problems’ Representation and inclusion of communities or indivi- might exist, and how co-design and AI/ML can or cannot duals presumed to be affected by AI/ML are often positioned mitigate those consequences. Where representation of as a strategy to reduce potential harms associated with patient and public interests is at stake, co-design strategies designer bias, ignorance or neglect; the more accurately can better account for the plurality of values that underpin co-design processes represent the perspectives of particular interest in, and expressions of, representation. individuals or groups in society, the more technologies will reflect their interests. However, this view obscures two core challenges posed by AI/ML technologies to involvement Avoiding the pitfalls: opportunities for and representation. While this is true of other sectors, we critical research and practice discuss these especially as they relate to health. First, not all groups benefit equally from AI/ML technol- This review has elucidated some of the challenges posed by ogies, even where representation and inclusion are mobi- AI/ML technologies to the patient and public co-design of lized as a strategy to improve access or reduce bias. Just those technologies. In some cases, AI/ML for health has as an emphasis on the agential capacities of users risks amplified existing challenges, such as questions of repre- ignoring limitations placed on designers, so too does an sentation and purpose. In others, AI/ML technologies emphasis on inclusion risk ignoring the systemic nature have presented new challenges, such as the capability of of injustice (Bell and Hartmann, 2007; Hoffmann, 2019). co-design to address questions relating to data extraction Making claims to ethical co-design demands designers and the future uses of those technologies. These risks and engage with the social determinants of health – or the obstacles apply not only to PPI in the design of individual social, political, and economic bases of individual and col- technologies, but also to health services and systems, and lective health and well-being. While some scholars have society more broadly. We suggest that many of the made important advances in attending to intersectionality methods and perspectives necessary to address these chal- (Bauer and Lizotte, 2021; Lizotte et al., 2020) and the lenges already exist, but would benefit from being social determinants of health (Kreatsoulas and brought into conversation with each other more fully. Subramanian, 2018; Pierson et al., 2021) in AI/ML We have also argued in this paper that co-design, espe- models, there remains a risk of failing to account for the cially in health care, operates as an ambiguous and broader sociotechnical context that affects their ethical diverse concept that variously includes different ideas: 8 Big Data & Society involvement, participation, representation, empowerment, insights as a basis for situating the normatively desirable patient-centredness, democracy, ethics and so on. futures that arise from co-design practice. To offer some conceptual clarity in response to these issues, we outline three areas we consider most salient to advancing the goal of co-design for ethical AI/ML for Re-conceptualizing representation in light health. While these suggestions arise from an analysis of algorithmic assemblages centred around discussions of AI/ML for health, they may We have also argued in this paper that where co-design is also serve as a call to other domains where norms and mobilized as a strategy for representation (and it often is) other incentives tend to privilege PPI in design. it is important for designers and others to recognize that co-design should not necessarily claim epistemic legitimacy or moral authority solely based on the composition of its Clarifying co-design’s commitment to values patient or public participants, and any claims to the repre- sentation of patient or public interests in co-design should In this paper, we have argued that accomplishing ethical AI/ be scrutinised in light of the technology’s broader societal ML for health requires more explicit engagement with the impacts. This is especially important with respect to AI/ broader social, political and economic fields that give rise ML technologies for health, which entail representational to both co-design and AI/ML, and relatedly, more explicit forms themselves linked to both health and non-health- engagement with the values they hope to advance. While related goals. some may argue that all co-design involves the illumination While this paradox is not easily reconcilable, we suggest of values (through the involvement of diverse publics), or that this is a challenge with which co-design scholars and that co-design itself necessarily advances particular values practitioners can more fully engage in future work: that (such as democratic values), we suggest that co-design representation of public interests is often a key rationale nevertheless would benefit from more explicit engagement for co-design, but that AI/ML technologies themselves with its normative foundations. produce representations that are partial, opaque and tempor- First, with respect to the claim that all co-design involves ary. Co-design is in a unique position to forge new ways of the illumination of values (through the involvement of conceptualizing representation in the design of AI/ML patients or publics), we echo the cautions of others who systems. For example, which forms of representation are raise the related practical and conceptual challenges of (1) inherent to AI/ML (e.g. statistical), and which does identifying relevant direct and indirect stakeholders and co-design attempt to advance (e.g. political or democratic)? (2) ensuring that the elicitation of values through the parti- When and how might these be in conflict, and which trade- cipation of those stakeholders does not run the risk of com- offs do they involve? mitting the naturalistic fallacy (i.e. conflating descriptions Chasalow and Levy (2021) argue that like co-design, of individual values preferences with normatively desirable ‘representation’ and ‘inclusion’ are ‘suitcase’ words that endpoints) (Manders-Huits, 2011). Indeed, some of the cri- can carry many different meanings which are not merely tiques associated with VID (Donia and Shaw, 2021) may semantic, but normative (e.g. political legitimacy) and epi- also be productively applied to co-design practice. stemic (e.g. tacit or inclusive knowledge). As such, we Second, with respect to the claim that co-design itself suggest that the co-design community engaged with AI/ explicitly advances particular values, we suggest that ML can be more precise when employing them, and expli- those employing co-design attend to exactly which values citly recognize the broader range of values that underpin co-design advances. For example, the earliest forms of these concepts in their different forms. co-design could be said to be broadly committed to the values of ‘workplace democracy’, ‘autonomy’ or ‘quality of work life’ (Iversen et al., 2010). However, those values Mapping sociotechnical relations commitments arose in the context of an expanding science of organization management, and also advanced Part of committing to values in co-design, including those interests related to work quality, productivity and innova- associated with representation and inclusion, involves sur- tion (Kelty, 2020). In AI/ML for health, we might ask facing the actors and institutions upon which they rely. which values are implicitly carried forward with a commit- Doing so is crucial not only to accountability, but relatedly, ment to co-design, and how those interact with different to illuminating the ability of designers and others involved views on what the ethical status of co-design and AI/ML in co-design to realize any positive vision for their work. should be. Referring back to the importance of more Here we suggest that co-design may benefit from further strongly linking the sociotechnical context of design with engagement with ‘theory-methods packages’ capable of the participation of relevant stakeholders, we suggest that explicating those relations and deriving strategies for inter- co-design would benefit from more explicitly attending to vening on the sociotechnical system in which a technology the circumstances that have made it so, and using those and design process is embedded. Donia 9 Methodological approaches in the social sciences and Funding humanities, for example, may help equip co-design with The authors received no financial support for the research, author- valuable approaches to account for this complexity. ship and/or publication of this article. Institutional ethnography (Smith, 2005) has been taken up in sociological studies of health for its explicit focus on identifying the materialized social relations that coordinate people’s everyday activities – whether patients or designers References (Webster, 2020). Other methods already used in the design, Abelson J, Forest P-G, Casebeer A, et al. (2004) Will it make a dif- such as stakeholder maps and prototypes, may also be ference if I show up and share? A citizens’ perspective on useful when they focus on the broader sociotechnical improving public involvement processes for health system system of which design and AI/ML are a part. When com- decision-making. Journal of Health Services Research & Policy 9(4): 205–212. bined with an explicit and reflexive commitment to values, Abelson J, Montesanti S, Li K, et al. (2010) Effective Strategies for these may better equip co-design to understand how differ- Interactive Public Engagement in the Development of ent contexts affect the agency of designers and other stake- Healthcare Policies and Programs. Ottawa: Canadian Health holders to actually realize the futures being envisioned. Services Research Foundation. Aizenberg E and van den Hoven J (2020) Designing for human rights in AI. Big Data & Society 7(2): 2053951720949566. Ananny M (2016) Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, Conclusion: recognizing the limits & Human Values 41(1): 93–117. of co-design Andersen LB, Danholt P, Halskov K, et al. (2015) Participation as Our summary reflection on the content we have provided a matter of concern in participatory design. CoDesign 11(3–4): 250–261. here is a call for design humility (i.e. consistently attending Ansari A (2019) Decolonizing design through the perspectives of to what professional design cannot do for a problem). While cosmological others: Arguing for an ontological turn in design humility in science and technology has been proposed by research and practice. XRDS: Crossroads, The ACM Magazine other commentators (Jasanoff, 2007; Selbst et al., 2019), for Students 26(2): 16–19. this same consideration is at risk of being overlooked in Asaro PM (2000) Transforming society by transforming technol- co-design as a result of assuming that by letting patients ogy: The science and politics of participatory design. or publics inform development, co-design is itself an Accounting, Management and Information Technologies expression of humility. As Irani (2018) notes, all design 10(4): 257–290. entails privileged sites and conceptual frames deserving of Bauer GR and Lizotte DJ (2021) Artificial intelligence, intersec- scrutiny, and as such, any co-design humility might ask: tionality, and the future of public health. American Journal when does co-design substitute other expressions of of Public Health 111(1): 98–100. Bell JM and Hartmann D (2007) Diversity in everyday discourse: public interest and action? What are the epistemic limits The cultural ambiguities and consequences of “happy talk”. of design research as it is conventionally practised? Who American Sociological Review 72(6): 895–914. is sidelined by professional design and why? And perhaps Benjamin R (2019) Race After Technology: Abolitionist Tools for most importantly, when should we not design? the New Jim Code. Medford, MA: John Wiley & Sons. Moreover, co-design discourse itself is primarily rooted Berwick DM (2009) What ‘patient-centered’ should mean: in 20th and 21st-century Euro-North American thought. Confessions of an extremist Health Affairs (Project Hope) Ansari (2019) for example, asks: ‘What does it mean to 28(4): w555–w565. design for people who are not like us, even before we ask Björgvinsson E, Ehn P and Hillgren P-A (2012a) Agonistic parti- whether we should design for people who are not like us? cipatory design: Working with marginalised social movements. What does it mean to design for people who have different CoDesign 8(2–3): 127–144. histories, different backgrounds, and different commit- Björgvinsson E, Ehn P and Hillgren P-A (2012b) Design things and design thinking: Contemporary participatory design chal- ments from us? What does it mean to design for people lenges. Design Issues 28: 101–116. who might relate to the world differently from the way Bødker S and Kyng M (2018) Participatory design that matters— we do?’ (p. 3). Attending to these questions ought to be facing the big issues. ACM Transactions on Computer–Human the starting point for any designer enacting judgement of Interaction 25(1): 1-31. DOI: 10.1145/3152421. the ability of co-design to achieve ethical AI/ML for health. Boivin A (2012) Patient and Public Involvement in Healthcare Improvement. Montréal, Québec: University of Montréal. Boltanski L and Thévenot L (1999) The sociology of critical capacity. European Journal of Social Theory 2(3): 359–377. Declaration of conflicting interests Bot BM, Wilbanks JT and Mangravite LM (2019) Assessing the The authors declared no potential conflicts of interest with respect consequences of decentralizing biomedical research. Big to the research, authorship and/or publication of this article. Data & Society 6(1): 2053951719853858. 10 Big Data & Society boyd D and Crawford K (2012) Critical questions for big data: Donetto S, Pierri P, Tsianakas V, et al. (2015) Experience-based Provocations for a cultural, technological, and scholarly phenom- co-design and healthcare improvement: Realizing participatory enon. Information, Communication & Society 15(5): 662–679. design in the public sector. The Design Journal 18(2): 227– Brown P and Zavestoski S (2004) Social movements in health: An 248. introduction. Sociology of Health & Illness 26(6): 679–694. Donia J and Shaw JA (2021) Ethics and values in design: A struc- Burr C and Morley J (2020) Empowerment or engagement? Digital tured review and theoretical critique. Science and Engineering health technologies for mental healthcare. In: Burr C and Milano Ethics 27(5): 57. S(eds) The 2019 Yearbook of the Digital Ethics Lab. Digital Enshaeifar S, Barnaghi P, Skillman S, et al. (2018) The internet of Ethics Lab Yearbook. Cham: Springer International things for dementia care. IEEE Internet Computing 22(1): Publishing, 67–88. DOI: 10.1007/978-3-030-29145-7_5. 8–17. Burrows A, Gooberman-Hill R and Coyle D (2016) Shared Language Escobar A (2018) Designs for the Pluriverse: Radical and the Design of Home Healthcare Technology. In: Proceedings of Interdependence, Autonomy, and the Making of Worlds. the 2016 CHI Conference on Human Factors in Computing Durham, NC: Duke University Press. Systems, New York, NY, USA, 2016. pp. 3584–3594. CHI ‘16. Eubanks V (2018) Automating Inequality: How High-Tech Tools ACM. DOI: 10.1145/2858036.2858496. Profile, Police, and Punish the Poor. New York, NY: Busfield J (2017) The concept of medicalisation reassessed. St. Martin’s Press. Sociology of Health & Illness 39(5): 759–774. Farrington CJ (2016) Co-designing healthcare systems: Between Capecci M, Ciabattoni L, Ferracuti F, et al. (2018) Collaborative transformation and tokenism. Journal of the Royal Society of design of a telerehabilitation system enabling virtual second Medicine 109(10): 368–371. opinion based on fuzzy logic. IET Computer Vision 12(4): Flanagan M, Howe DC and Nissenbaum H (2005) Values at 502–512. Play: Design Tradeoffs in Socially-oriented Game Design. Carman KL, Dardess P, Maurer M, et al. (2013) Patient and family In: Proceedings of the SIGCHI Conference on Human engagement: A framework for understanding the elements and Factors in Computing Systems,New York,NY, USA, developing interventions and policies. Health Affairs (Project 2005, pp.751–760. CHI’05. ACM. DOI: 10.1145/1054972. Hope) 32(2): 223–231. 1055076. Chasalow K and Levy K (2021) Representativeness in Statistics, Fogg BJ, Cueller G and Danielson D (2007) Motivating, influen- Politics, and Machine Learning. In: ACM Conference on cing, and persuading users: An introduction to captology. In: Fairness, Accountability, and Transparency (FAccT’21),3 The Human–Computer Interaction Handbook. Boca Raton, March 2021. FL: CRC Press, 159–172. Cheney-Lippold J (2017) We Are Data: Algorithms and the Forlano L (2020) The Danger of Intimate Algorithms. In: Public Making of Our Digital Selves. New York, NY: NYU Press. Books Blog. Available at: https://www.publicbooks.org/the- Chin-Yee B and Upshur R (2019) Three problems with big data danger-of-intimate-algorithms/ (accessed 13 November and artificial intelligence in medicine. Perspectives in 2020). Biology and Medicine 62(2): 237–256. Forlano L and Mathew A (2014) From design fiction to design Clarke AE, Shim JK, Mamo L, et al. (2003) Biomedicalization: friction: Speculative and participatory design of values- Technoscientific transformations of health, illness, and US bio- embedded urban technology. Journal of Urban Technology medicine. American Sociological Review 68(2): 161–194. 21(4): 7–24. Compagna D and Kohlbacher F (2014) The limits of participatory Fourcade M and Healy K (2017) Seeing like a market. Socio- technology development: The case of service robots in care Economic Review 15(1): 9–29. facilities for older people. Technological Forecasting and Frauenberger C, Good J, Fitzpatrick G, et al. (2015) In pursuit of Social Change 93: 19–31. rigour and accountability in participatory design. International Cooper A, Reimann R, Cronin D, et al. (2014) About Face: The Journal of Human–Computer Studies 74: 93–106. Essentials of Interaction Design. Hoboken, NJ: John Wiley Giacomin J (2014) What is human centred design? The Design & Sons. Journal 17(4): 606–623. Costanza-Chock S (2020) Design Justice: Community-Led Gibson A, Britten N and Lynch J (2012) Theoretical directions for Practices to Build the Worlds We Need. Cambridge, MA: an emancipatory concept of patient and public involvement. MIT Press. Health 16(5): 531–547. Couldry N and Mejias UA (2019) The Costs of Connection: How Goldhahn J, Rampton V and Spinas GA (2018) Could artificial Data is Colonizing Human Life and Appropriating it for intelligence make doctors obsolete? BMJ 363. DOI: 10.1136/ Capitalism. Redwood City, CA: Stanford University Press. bmj.k4563. D’Ignazio C and Klein LF (2020) Data Feminism. Cambridge, Greenhalgh T, Hinton L, Finlay T, et al. (2019) Frameworks for MA: MIT Press. supporting patient and public involvement in research: Dalton C and Thatcher J (2014) What does a critical data studies Systematic review and co-design pilot. Health Expectations look like, and why do we care? Seven points for a critical 22(4): 785-801. DOI: 10.1111/hex.12888. approach to ‘big data’. Society and Space 29. Halloran J, Hornecker E, Stringer M, et al. (2009) The value of Dantec CAL and DiSalvo C (2013) Infrastructuring and the forma- values: Resourcing co-design of ubiquitous computing. tion of publics in participatory design. Social Studies of Science CoDesign 5(4): 245–273. 43(2): 241–264. Harrington C and Dillahunt TR (2021) Eliciting Tech Futures DiSalvo C (2012) Revealing Hegemony: Agonistic Information Among Black Young Adults: A Case Study of Remote Design. Cambridge, MA: MIT Press. Speculative Co-Design. In: Proceedings of the 2021 CHI Donia 11 Conference on Human Factors in Computing Systems, 2021, for covid-19 (SARS-CoV-2) pandemic: A review. Chaos, pp.1–15. Solitons & Fractals 139: 110059. Harrington C, Erete S and Piper AM (2019) Deconstructing com- Latour B (2005) Reassembling the Social: An Introduction to munity-based collaborative design: Towards more equitable Actor-Network-Theory. New York, NY: OUP. participatory design engagements. Proceedings of the ACM Lizotte DJ, Mahendran M, Churchill SM, et al. (2020) Math versus on Human–Computer Interaction 3(CSCW): 1–25. meaning in MAIHDA: A commentary on multilevel statistical Hepworth K (2019) A panopticon on my wrist: The biopower of models for quantitative intersectionality. Social Science & big data visualization for wearables. Design and Culture Medicine 245: 112500. 11(3): 323–344. Lupton D (2014a) Apps as artefacts: Towards a critical per- Hillgren P-A (2013) Participatory design for social and public spective on mobile health and medical apps. Societies innovation: Living labs as spaces for agonistic experiments 4(4): 606–622. and friendly hacking. In Public and Collaborative: Exploring Lupton D (2014b) Critical perspectives on digital health technol- the Intersection of Design, Social Innovation and Public ogies. Sociology Compass 8(12): 1344–1359. Policy:75–88. Malmö, Sweden: DESIS Network United Lupton D (2016) Towards critical digital health studies: States. Reflections on two decades of research in health and the way Hoffmann AL (2019) Where fairness fails: Data, algorithms, and forward. Health 20(1): 49–61. the limits of antidiscrimination discourse. Information, Lupton D (2017a) Digital Health: Critical and Cross-Disciplinary Communication & Society 22(7): 900–915. Perspectives. New York, NY: Routledge. Hogg CNL (2007) Patient and public involvement: What next for Lupton D (2017b) Digital health now and in the future: Findings the NHS? Health Expectations 10(2): 129–138. from a participatory design stakeholder workshop. DIGITAL Hogle L (2016) Data-intensive resourcing in healthcare. HEALTH 3: 2055207617740018. BioSocieties 11. DOI: 10.1057/s41292-016-0004-5. Lyon D (2010) Surveillance, power and everyday life. In: Iliadis A and Russo F (2016) Critical data studies: An introduction. Emerging Digital Spaces in Contemporary Society. London, Big Data & Society 3(2): 2053951716674238. UK: Springer, 107–120. Irani L (2018) “Design thinking”: Defending silicon valley at the Mainsah H and Morrison A (2014) Participatory Design Through apex of global labor hierarchies. Catalyst: Feminism, Theory, a Cultural Lens: Insights from Postcolonial Theory. In: Technoscience 4(1): 1-19. Proceedings of the 13th Participatory Design Conference: Iversen OS, Halskov K and Leong TW (2010) Rekindling values Short Papers, Industry Cases, Workshop Descriptions, in participatory design. In: Proceedings of the 11th biennial Doctoral Consortium Papers, and Keynote Abstracts – participatory design conference, 2010, pp.91–100. Volume 2, New York, NY, USA, 2014, pp.83–86. PDC’14. Ives J, Damery S and Redwod S (2013) PPI, paradoxes and Plato: ACM. DOI: 10.1145/2662155.2662195. Who’s sailing the ship? Journal of Medical Ethics 39(3): 181–185. Malizia A and Carta S (2020) Co-Creation and Co-Design Jasanoff S (2007) Technologies of humility. Nature 450(7166): Methodologies to address Social Justice and Ethics in 33–33. Machine Learning. ACM SIGCHI Italy. Kelty CM (2020) The Participant: A Century of Participation in Manders-Huits N (2011) What values in design? The challenge of Four Stories. Chicago, IL: University of Chicago Press. incorporating moral values into design. Science and Kensing F and Blomberg J (1998) Participatory design: Issues and Engineering Ethics 17(2): 271–287. concerns. Computer Supported Cooperative Work 7(3): 167–185. Metcalf J and Moss E (2019) Owning ethics: Corporate logics, Kim S (2019) Comparative Review of Research on Health silicon valley, and the institutionalization of ethics. Social Information Technology in Biomedical Informatics and Research 86(2): 449–476. Human-Computer Interaction. In: International Conference Michie S, Thomas J, Johnston M, et al. (2017) The human behav- on Human-Computer Interaction, 2019, pp.16–32. Springer. iour-change project: Harnessing the power of artificial intelli- Kitchin R (2017) Thinking critically about and researching gence and machine learning for evidence synthesis and algorithms. Information, Communication & Society 20(1): interpretation. Implementation Science 12(1): 21. 14–29. Mol A (2008) The Logic of Care: Health and the Problem of Kitchin R and Lauriault T (2014) Towards Critical Data Studies: Patient Choice. New York, NY: Routledge. Charting and Unpacking Data Assemblages and Their Work. Montani S and Striani M (2019) Artificial intelligence in clinical The Programmable City Working Paper 2 ID 2474112, decision support: A focused literature survey. Yearbook of SSRN Scholarly Paper, 30 July. Rochester, NY. Available at: Medical Informatics 28(1): 120. https://papers.ssrn.com/abstract = 2474112 (accessed 27 Muller MJ (2009) Participatory design: The third space in HCI. In: August 2019). Human–Computer Interaction. Boca Raton, FL: CRC Press, Kraft P and Bansler J (1994) The collective resource approach: 181–202. The Scandinavian experience. Scandinavian Journal of Noble SU (2018) Algorithms of Oppression: How Search Engines Information Systems 6(1): 71-84. Available at: https://aisel.ais- Reinforce Racism. New York, NY: NYU Press. net.org/sjis/vol6/iss1/4. Norman DA and Draper SW (1986) User Centered System Kreatsoulas C and Subramanian SV (2018) Machine learning in Design: New Perspectives on Human–Computer Interaction. social epidemiology: Learning from experience. SSM - Boca Raton, FL: Taylor & Francis. Population Health 4: 347–349. Obermeyer Z, Powers B, Vogeli C, et al. (2019) Dissecting racial Lalmuanawma S, Hussain J and Chhakchhuak L (2020) bias in an algorithm used to manage the health of populations. Applications of machine learning and artificial intelligence Science 366(6464): 447–453. 12 Big Data & Society Ollenburg SA (2019) A futures-design-process model for partici- Sloane M, Moss E, Awomolo O, et al. (2020) Participation is not a patory futures. Journal of Futures Studies 23(4): 51–62. Design Fix for Machine Learning. In: Proceedings of the 37th Pasquale F (2015) The Black Box Society. Cambridge, MA: International Conference on Machine Learning, Vienna, Harvard University Press. Austria, 2020. Pedersen S (2020) Staging negotiation spaces: A co-design frame- Smith DE (2005) Institutional Ethnography: A Sociology for work. Design Studies 68: 58–81. People. Lanham, MD: Rowman Altamira. Pham Q, Wiljer D and Cafazzo JA (2016) Beyond the rando- Storni C (2015) Notes on ANT for designers: Ontological, meth- mized controlled trial: A review of alternatives in mHealth odological and epistemological turn in collaborative design. clinical trial methods. JMIR mHealth and UHealth 4(3): CoDesign 11(3–4): 166–178. e107. Topol E (2019) Deep Medicine: How Artificial Intelligence Can Pierson E, Cutler DM, Leskovec J, et al. (2021) An algorithmic Make Healthcare Human Again. New York, NY: Basic Books. approach to reducing unexplained pain disparities in under- Tran O’Leary J, Zewde S, Mankoff J, et al. (2019) Who gets to served populations. Nature Medicine 27(1): 136–140. future? Race, representation, and design methods in Prainsack B (2020) The value of healthcare data: To nudge, or not? Africatown. In: Proceedings of the 2019 CHI Conference on Policy Studies 41(5): 547–562. Human Factors in Computing Systems, 2019, pp.1–13. Rich E, Miah A and Lewis S (2019) Is digital health care more Triberti S and Barello S (2016) The quest for engaging AmI: equitable? The framing of health inequalities within Patient engagement and experience design tools to promote England’s digital health policy 2010–2017. Sociology of effective assisted living. Journal of Biomedical Informatics Health & Illness 41: 31–49. 63: 150–156. Rose N (2007) Molecular biopolitics, somatic ethics and the spirit Tritter JQ (2009) Revolution or evolution: The challenges of con- of biocapital. Social Theory & Health 5(1): 3–29. ceptualizing patient and public involvement in a consumerist Rossitto C (2021) Political ecologies of participation: Reflecting world. Health Expectations 12(3): 275–287. on the long-term impact of civic projects. Proceedings of the Tsekleves E, Darby A, Whicher A, et al. (2017) Co-designing ACM on Human-Computer Interaction 5(CSCW1): 1–27. design fictions: A new approach for debating and priming ACM, New York, NY, USA. future healthcare technologies and services. Archives of Rowland P and Kumagai AK (2018) Dilemmas of representation: Design Research 30(2): 5–21. Patient engagement in health professions education. Academic Upshur RE (2001) The status of qualitative research as evidence. Medicine 93(6): 869–873. In J. M. Morse, J. M. Swanson & A. J. Kuzel (Eds.), The Rowland P, Anderson M, Kumagai AK, et al. (2018) Patient invol- Nature of Qualitative Evidence:5–26. Sage Thousand Oaks, vement in health professionals’ education: A meta-narrative CA: SAGE. review. Advances in Health Sciences Education 24(3): 595- Vines J, Clarke R, Wright P, et al. (2013) Configuring participa- 617. DOI: 10.1007/s10459-018-9857-7. tion: on how we involve people in design. In: Proceedings of Ruckenstein M and Schüll ND (2017) The datafication of health. the SIGCHI Conference on Human Factors in Computing Annual Review of Anthropology 46(1): 261–278. Systems, 2013, pp.429–438. Sanders EB-N and Stappers PJ (2008) Co-creation and the new Wait S and Nolte E (2006) Public involvement policies in health: landscapes of design. CoDesign 4(1): 5–18. Exploring their conceptual basis. Health Economics Policy and Schwab K (2017) The Fourth Industrial Revolution. Redfern, Law 1: 149. NSW: Currency. Webster F (2020) The Social Organization of Best Practice: An Schwennesen N (2019) Algorithmic assemblages of care: Institutional Ethnography of Physicians’ Work. Cham, Imaginaries, epistemologies and repair work. Sociology of Switzerland: Springer. Health & Illness 41: 176–192. Whitman M, Hsiang C and Roark K (2018) Potential for participa- Seidelin C, Dittrich Y and Grönvall E (2020) Foregrounding data tory big data ethics and algorithm design: a scoping mapping in co-design–an exploration of how data may become an object review. In: Proceedings of the 15th Participatory Design of design. International Journal of Human–Computer Studies Conference: Short Papers, Situated Actions, Workshops and 143: 102505. Tutorial - Volume 2, New York, NY, USA, 20 August 2018, Selbst AD, Boyd D, Friedler SA, et al. (2019) Fairness and pp.1–6. PDC’18. Association for Computing Machinery. abstraction in sociotechnical systems. In: Proceedings of the DOI: 10.1145/3210604.3210644. Conference on Fairness, Accountability, and Transparency, Yeung K (2017) ‘Hypernudge’: Big data as a mode of regulation 2019, pp.59–68. by design. Information, Communication & Society 20(1): 118– Sharon T (2018) When digital health meets digital capitalism, how 136. many common goods are at stake? Big Data & Society 5(2): Zaidi L (2019) Worldbuilding in science fiction, foresight and 2053951718819032. design. Journal of Futures Studies 23(4): 15–26. Shaw J, Agarwal P, Desveaux L, et al. (2018) Beyond “implemen- Zuboff S (2019) The Age of Surveillance Capitalism: The Fight for tation”: Digital health innovation and service design. npj a Human Future at the New Frontier of Power. New York, Digital Medicine 1(1): 48. NY: Profile Books. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Big Data & Society SAGE

Co-design and ethical artificial intelligence for health: An agenda for critical research and practice:

Big Data & Society , Volume 8 (2): 1 – Dec 17, 2021

Loading next page...
 
/lp/sage/co-design-and-ethical-artificial-intelligence-for-health-an-agenda-for-PU0w2WkGTV
Publisher
SAGE
Copyright
Copyright © 2022 by SAGE Publications Ltd, unless otherwise noted. Manuscript content on this site is licensed under Creative Commons Licenses.
ISSN
2053-9517
eISSN
2053-9517
DOI
10.1177/20539517211065248
Publisher site
See Article on Publisher Site

Abstract

Applications of artificial intelligence/machine learning (AI/ML) in health care are dynamic and rapidly growing. One strat- egy for anticipating and addressing ethical challenges related to AI/ML for health care is patient and public involvement in the design of those technologies – often referred to as ‘co-design’. Co-design has a diverse intellectual and practical his- tory, however, and has been conceptualized in many different ways. Moreover, AI/ML introduces challenges to co-design that are often underappreciated. Informed by perspectives from critical data studies and critical digital health studies, we review the research literature on involvement in health care, and involvement in design, and examine the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health care. We suggest that AI/ML technologies have amplified and modified existing challenges related to patient and public involvement, and created entirely new challenges. We outline three pitfalls associated with co-design for ethical AI/ML for health care and conclude with suggestions for addressing these practical and conceptual challenges. Keywords Co-design, participatory design, artificial intelligence, health care, design ethics, data ethics approaches to technology development that aim to Introduction involve end-users as meaningful participants in the design The contemporary field of artificial intelligence/machine process, co-design is often mobilized as a strategy to learning (AI/ML) is dynamic and rapidly growing, charac- improve fairness, accountability, and transparency of algo- terized as central to a ‘4th industrial revolution’ that com- rithmic systems (Aizenberg and van den Hoven, 2020; mentators suggest will impact virtually all aspects of our Malizia and Carta, 2020; Whitman et al., 2018). lives (Couldry and Mejias, 2019; Schwab, 2017; Zuboff, Co-design is also closely allied to other trends in health 2019). Although AI/ML technologies are multi-purpose, and health care, including patient engagement, PPI and they are particularly consequential in health care, where patient and family-centred care (PFCC). Co-design and its concerns range from the changing nature of the patient–pro- variants have a diverse intellectual and practical history, vider relationship (Goldhahn et al., 2018; Topol, 2019), to however, and have been conceptualized in many different the ways in which AI/ML technologies exacerbate existing societal inequities (Benjamin, 2019; D’Ignazio and Klein, 2020; Eubanks, 2018; Noble, 2018). As a result, there has been an increased acknowledgement by corporate, govern- Institute of Health Policy, Management & Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada ment, and academic actors alike that AI needs ‘ethics’. How Joint Centre for Bioethics, Dalla Lana School of Public Health, University these ‘ethics’ are meant to be established and applied, of Toronto, Toronto, ON, Canada however, has led to significant debate (we discuss our 3 Institute for Health System Solutions and Virtual Care, Women’s College own notion of ethics in this paper in ‘Theoretical Hospital, Toronto, ON, Canada approach’). Corresponding author: One strategy for anticipating and addressing the poten- Joseph Donia, Institute of Health Policy, Management & Evaluation, Dalla tial benefits and harms of AI/ML for health is patient and Lana School of Public Health, University of Toronto, 155 College St, public involvement (PPI) in the design of those technolo- Toronto, ON, Canada gies, often referred to as co-design. As a category of Email: joseph.donia@mail.utoronto.ca Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https:// creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). 2 Big Data & Society ways. Moreover, the meaning and value of co-design are ML technologies, one must acknowledge how deeply inter- challenged by AI/ML systems, where users will always twined they are with the human and material realities that play some role in the production of those systems, for shape their existence in the world. example in producing data used to train models. As such, The second concept is ‘surveillance’, which has come to the extent to which co-design should be considered a suita- signify the consequences of mass data collection on human ble approach to ethical AI/ML has recently come into ques- experience and action, spurring the development of an tion (Sloane et al., 2020). entire field of research referred to as surveillance studies Informed by perspectives from critical data studies (Cheney-Lippold, 2017; Lyon, 2010). The notion at the (CDS; boyd and Crawford, 2012; Dalton and Thatcher, root of studies of surveillance is that the act of collecting 2014; Kitchin and Lauriault, 2014) and critical digital data about peoples’ activities has significant influence on health studies (CDHS; Lupton, 2016, 2017a), in this the activities in which they engage. This is true for both paper we outline three pitfalls associated with co-design individuals and populations and has novel implications in for ethical AI/ML for health based on common assumptions health and health care contexts. arising from health care and co-design discourse. We start The final concept influencing our analysis is that of the by presenting our theoretical approach in some detail, out- ‘political economy’ of data and digital technologies, refer- lining three concepts from CDS and CDHS that inform our ring to the particular economic assumptions and institutions analysis. We then present a brief description of practices of that are supported by AI/ML technologies and the organiza- involvement in design, and involvement in health care, tions by which they are developed and used. The concept is leading into a summary of overarching risks and pitfalls more accurately described as ‘political economy’ as for consideration, and conclude by outlining important opposed to just ‘economy’ to represent the inevitable exist- directions for future research and practice in this area. ence of competition for control over resources that comes along with the capitalist economic system in which we find ourselves (Couldry and Mejias, 2019; Zuboff, 2019). Theoretical approach We also acknowledge the importance of clarifying how Our analysis of involvement in the design of AI/ML for ‘ethics’ is approached in our paper. We mobilize ethics in health care is shaped by perspectives from CDS and two distinct senses. In the first sense, ethics refers to the CDHS. CDS is an interdisciplinary field, bringing together principles, values and frameworks that are used in the AI methods and perspectives from across media studies, industry to guide the development of AI technologies in sociology, anthropology, human geography, and design, ways that are deemed by stakeholders allied to the industry among others. While the field is diverse, CDS is united to be morally good (Ananny, 2016). A prime example of by a concern with the social, cultural, ethical and political such principles, values and frameworks is the growing challenges posed by data, including how they are consti- attention to fairness, accountability and transparency in tuted within wider data assemblages (Iliadis and Russo, ML as a strategy to enhance the ethical status of AI technol- 2016; Kitchin and Lauriault, 2014). ogies. This sense of ethics aligns with the study of ‘practical A related field is CDHS. While a number of scholars ethics’ or the actions and decisions that people perceive to have engaged critically with how health technologies constitute ethical practice in their everyday work related to (including health information technologies) have influenced AI (Ananny, 2016; Metcalf and Moss, 2019). health and illness (Clarke et al., 2003; Mol, 2008; Rose, However, we also employ the concept of ethics in a 2007), Lupton (2014b, 2016, 2017a, 2017b) outlines the second sense. This second sense refers to the normative unique challenges posed by digital health technologies, commitments associated with CDS and CDHS, which are including the ways in which they contribute to evolving fundamentally oriented towards advancing social justice. notions of ‘health’, ‘illness’ and ‘care’ (Lupton, 2014a, For this reason, these fields of work emphasize the cluster 2016). of concepts we have outlined (socio-materiality, surveil- Three concepts in particular from these interdisciplinary lance and political economy) in normatively motivated ana- domains inform the analysis of involvement in health- lyses that contribute to the achievement of a more just world related AI/ML development presented in this paper. The for all. Such an approach to ethics is explicitly focused on first is ‘socio-materiality’, which indicates that AI/ML tech- the operations of power and the redistribution of goods to nologies are not simply digital algorithms that happen to be those in positions of relative disadvantage. This second embedded in a variety of devices. Rather, AI/ML technolo- sense of ethics motivates the critical analysis we bring to gies are better understood as a collection of digital algo- co-design in this paper and the practical strategies we rithms, technological devices, telecommunications outline in our concluding sections. infrastructures, human goals and human rules that cohere Finally, in addition to the normative and theoretical posi- together into ‘assemblages’ that represent specific AI/ML tions outlined here, we also intend to clarify our view on the technologies (Kitchin and Lauriault, 2014). If one is to concept of co-design. It is important to note that co-design understand the ethical significance of co-design for AI/ has been represented in the research literature in a variety of Donia 3 ways (Sanders and Stappers, 2008). For example, it may typically mobilized in support of improving the acceptance refer to any form of involvement in design or be used to or usability of technology. describe a particular form of involvement distinct from More recent work has significantly expanded the topics related approaches such as participatory design. and questions with which co-design engages, with contribu- Involvement may occur throughout the design process, or tions spanning sociology, anthropology, design studies, only at particular stages. It may be employed as a strategy human–computer interaction (HCI) and computer- to improve usability and acceptance of technology or to supportive cooperative work, among others. While a full elicit stakeholder values. These differences indicate a review is beyond the scope of this paper, contributions diverse field of research and practice, where scholarly com- most relevant to our argument in this paper tend to fall munities are concerned with similar topics, but enjoy only into one of the following categories. partial overlap of assumptions and motivations upon First are contributions that explicitly attend to power and which they are based. Nonetheless, we believe there is a the social–cultural–political contexts that give rise to and clear conceptual benefit to critically examining the field shape co-design. For example, design justice of research and practice as a whole. This paper, therefore, (Costanza-Chock, 2020) is an intersectional approach to uses ‘co-design’ as an umbrella term for approaches that design that engages with how designed artefacts impact involve end-users, patients or publics in any stage of the upon dominant and oppressed groups in society, emphasiz- design process. ing mechanisms for community accountability and control. Similarly, Escobar, in Designs for the Pluriverse (2018), argues for the decolonization of design through collabora- Involvement in design tive practices that are place-based, resist dependence on Within design scholarship, formal involvement in design is markets, and are more accountable to the needs of commu- most commonly attributed to Scandinavian approaches in nities. Mainsah and Morrison (2014) and Harrington et al. the late 1970s and early 1980s which attempted to address (2019) apply a post-colonial lens to design, with workplace transformations brought about by computers. Harrington et al. (2019) identifying considerations for Inspired by action research, these early examples involved more equitable co-design with marginalized groups, such very little ‘design’ per se, but rather emphasized the import- as emphasizing attention to historical context, community ance of providing workers and union officials with the requi- access, and unintentional harms of design. site knowledge and skills to understand the potential impacts The second relevant category of contributions focuses of computer systems on their work, with the ultimate aim of specifically on advancing new conceptual or methodological strengthening collective bargaining strategies (Vines et al., approaches to co-design. These include the related domains 2013). This is perhaps best exemplified by the collective of futures design, design fiction and speculative design, resource approach, which convened ‘independent study which contain core participatory elements (Forlano and groups’ comprised of union members and academic Mathew, 2014; Harrington and Dillahunt, 2021; Lupton, researchers (Kraft and Bansler, 1994). These earliest forms 2017b; Ollenburg, 2019; Tran O’Leary et al., 2019; of co-design were explicitly politically engaged, emphasiz- Tsekleves et al., 2017; Zaidi, 2019). Approaches inspired ing productive tension over immediate consensus in arriving by actor-network theory are also highly relevant (Latour, at a decision (Björgvinsson et al., 2012b). Worker control 2005), where co-design is conceptualized as a site for ‘infra- and agency were explicit aims (Vines et al., 2013), and structuring’ or forming publics around ‘matters of concern’ most Scandinavian-inspired co-design today is characterized (Andersen et al., 2015; Björgvinsson et al., 2012a; Dantec by two core assumptions: that those affected by a decision and DiSalvo, 2013; DiSalvo, 2012; Hillgren, 2013; should have a say in its making, and that stakeholders’ Pedersen, 2020; Rossitto, 2021; Storni, 2015). Finally, tacit knowledge is essential to the success of a design approaches drawing on the related field of values in design project (Björgvinsson et al., 2012b). (VID) to leverage co-design as a strategy for discovery, ana- Today, iterations of co-design methods and principles lysis and integration of values in technology design, also are reflected in many different but related approaches. provide important context for our work (Flanagan et al., User-centred design, for example, is an approach to 2005; Halloran et al., 2009). design that focuses on eliciting users’‘real needs’ to The third relevant category is a small but growing body improve the ‘fit’ between a user and a technology of literature specific to the design of data-intensive and (Norman and Draper, 1986). User experience design emerging digital technologies such as AI/ML. Sloane focuses on a user’s expected emotions and attitudes when et al. (2020) for example distinguish participation as work engaging with designed artefacts (Cooper et al., 2014). (e.g. in generating data), participation as consultation (e.g. Human-centred design similarly emphasizes the incorpora- in providing feedback) and participation as justice (e.g. tion of a ‘human perspective’ in all phases of the design longer-term partnerships, collaboration and capacity build- process (Giacomin, 2014). While not exhaustive, these ing) in AI/ML projects, outlining various conceptualiza- represent more established approaches, where PPI is tions of what participation might entail in co-design. 4 Big Data & Society Bødker and Kyng (2018) critique what they see as the dom- policy (Abelson et al., 2004); and quality improvement and inant view of participation as a goal in itself, and outline the innovation (Donetto et al., 2015). ‘big issues’ of participatory design which have been high- PPI is also closely linked to other influential ideas about lighted by advanced digital technologies. Shifting attention how health care should be organized, and to whom health to the data that are so crucial for AI/ML technologies, care decision-makers should be accountable. PFCC has Seidelin et al. (2020) focus specifically on how data been defined as: ‘The experience (to the extent the might be better represented through co-design activities. informed, individual patient desires it) of transparency, This third category of contributions emphasizes practical and individualization, recognition, respect, dignity, and choice conceptual challenges in the co-design of AI/ML and other in all matters, without exception, related to one’s person, digital technologies specifically and is most closely linked to circumstances, and relationships in health care’ (Berwick, the issues we address in our paper. We build especially on 2009, p. 560). The basic ideas underpinning PFCC, insights from this latter domain of work that are directly however, are much older. Hippocrates urged physicians to linked to health and health care. One salient example is the ‘investigate the entire patient’ (Boivin, 2012). At the turn observation that even health-specific AI/ML technologies, and of the 20th century, Canadian physician William Osler is the resources and infrastructures upon which they rely, are typi- noted for orienting medical education towards the needs cally generated outside of traditional health and medical settings of the patient rather than the disease. and rely on logics not solely associated with the maintenance or As with co-design, conceptualizations of PPI and PFCC improvement of health (Bot et al., 2019; Sharon, 2018). Other vary considerably. Conceptual discussions of PPI have for insights we glean from this body of work are more general, example distinguished between democratic and consumer- but take on a unique meaning in health-related contexts. For ist rationales (Wait and Nolte, 2006); direct/indirect and example, that patients or publics ‘participate’ in the design of proactive/reactive forms of involvement (Tritter, 2009); algorithmic systems in ways that are unwitting or involuntary outcome-oriented versus process-oriented involvement (Vines et al., 2013), such as in producing data upon which AI/ (Ives et al., 2013); and domains of involvement, such as ML algorithms are trained (Sloane et al., 2020); that AI/ML direct care, organization or policy (Carman et al., 2013). technologies can be modified or re-purposed after deployment Others view PPI as existing on a continuum (Gibson to accomplish new health and non-health-related goals et al., 2012), or as an ongoing process of organizing, (Kitchin, 2017); the ‘black box’ nature of AI/ML algorithms where patient roles and identities are constantly being (Pasquale, 2015), which limits what can be known and formed and negotiated (Rowland and Kumagai, 2018). addressed through co-design and has particular salience in Notwithstanding these practical and conceptual chal- understanding decisions about medical care; and the challenge lenges, interest in PPI has increased, and as information of accounting for how data and insights associated with AI/ technologies have matured and become more deeply ML technologies will be used in future health and embedded in health care, strategies and perspectives from non-health-related applications (Ruckenstein and Schüll, design and related fields have also increased in prominence. 2017). These challenges inform our co-design risks and pitfalls The fields of health and biomedical informatics (HI), for presented in‘Involvement in health care’ and‘Pitfalls associated example, increasingly engage with methods and theoretical with co-design and ethical AI/ML for health’, respectively. perspectives from HCI, despite the paradigmatic differences that have historically made collaboration difficult. While HI and HCI share an interest in the variety of ways people Involvement in health care engage with technologies in diverse use-contexts, they PPI in health care has an equally long and complex history, often do so via different methods (e.g. experimental however, formal involvement arrangements can be traced to research designs vs. design-based methods); publication social movements initiated by feminist, queer and disability venues (e.g. peer-reviewed journals vs. conferences); and rights activists in the 1970s and 1980s (Brown and topics (e.g. clinical settings vs. consumer applications) Zavestoski, 2004; Busfield, 2017). These movements (Kim, 2019). Some of these divides are narrowing, rebuked medical paternalism and sought to legitimate experi- however, as health services researchers seek new ential or embodied knowledge in bringing about changes to approaches capable of addressing complex design, imple- the institutions of medicine. In 1974, the United Kingdom’s mentation and evaluation challenges posed by advanced National Health Services established Community Health digital technologies (Pham et al., 2016; Shaw et al., 2018). Councils as the first example of institutionally supported Today, the focus of health-related technology design is PPI, with a mandate to improve local service delivery and shifting once again, as information systems and the goals accountability (Hogg, 2007). While formal PPI has since they are intended to accomplish, continue to evolve. Some, taken on different profiles around the world, it continues to for example, propose that HCI and related fields find them- hold interest in many areas of health research and practice, selves in a new wave concerned primarily with persuasion including health professions education (Rowland et al., (Fogg et al., 2007). AI/ML applications in health are broad, 2018); health care research (Greenhalgh et al., 2019); health butinmanyinstances ‘nudge’ attitudes or behaviours either Donia 5 through direct intervention or by providing tailored informa- impact of involvement on decision-making) do not imply a tion (Yeung, 2017). At the individual/patient level, for stronger focus on the entirety of a sociotechnical system, example, research in digital behaviour change incorporates much of which is out of view for both users and designers methods and perspectives from design and psychology to of AI/ML technologies. Attending to this broader sociotech- accomplish self-management of medical conditions, or nical system is especially important when considering out- health promotion via behaviour modification (Michie et al., comes related to AI/ML technologies for health, where 2017). AI/ML has also been used in epidemiological model- novel forms of health surveillance, combined with the ling and forecasting (Lalmuanawma et al., 2020), clinical increasing value of health-related data, introduce ethically decision support (Montani and Striani, 2019), and health salient issues. care operations and logistics (Obermeyer et al., 2019). Scholarship and practice related to co-design and PPI Involvement of patients or publics in the design of advanced tend to emphasize the procedures and qualities of participa- digital technologies often emphasizes the inherent patient- tion or involvement, for example attending to the import- centred or empowering qualities of co-design approaches ance of processual and contextual characteristics of (Capecci et al., 2018; Enshaeifar et al., 2018; Triberti and involvement, ‘moments’ or ‘stages’ of involvement, Barello, 2016) or AI/ML technologies (Topol, 2019), espe- patient and public latitude in decision-making, organiza- cially when directed to health-related goals. As such, tional support for involvement, and the proximate impacts co-design, PPI, and PFCC afford legitimacy to AI/ML tech- of those characteristics on designed artefacts (Abelson nologies for health, though the extent to which they always et al., 2010; Frauenberger et al., 2015; Kensing and should, remains a topic of debate. Blomberg, 1998). The implicit assumption advanced by We see the affordances of AI/ML technologies for health these viewpoints is that better involvement strategies will and the challenges they pose to co-design presenting three result in better design outcomes, as evaluated by the main risks that give rise to the pitfalls presented in the fol- impacts of those strategies on design products. This per- lowing section. First, co-design risks adding new harms to spective is complicated by normative and epistemic chal- health systems as a result of putting forward innovations lenges related to AI/ML for health that result in an overly that have not been designed with unintended consequences narrow view of ethically salient issues for health-related in mind. These include the ways in which AI/ML technol- co-design, of which we outline just three below. We ogies can be instantly adapted or modified to suit new goals, contend that processes of co-design that encourage a nar- for which patients and publics have no input once the tech- rower emphasis on practices of involvement risk losing nology has been deployed. Second, co-design risks instru- sight of the broader sociotechnical system, and the crucial mentalizing patients, using their involvement in the normative issues embedded in those systems that surround design of an AI/ML technology to make advances the co-design process. towards achieving pre-existing goals established by those First, AI/ML technologies are capable of analysing in positions of power. In AI/ML for health, this power is increasingly large volumes of data, introducing forms of increasingly distributed among a diverse range of private surveillance not previously possible. AI/ML technologies actors. Third, co-design that is explicitly focused on the by definition discriminate between ‘measurable types’ or design of technologies risks obfuscating societal injustices classifications of meaning based on available data when the involvement of patients or publics focuses only (Cheney-Lippold, 2017), where classifications are largely on those problems which can be solved by technologies. invisible to those they are applied to, and determined by We now shift to a description of three main pitfalls asso- those with the power to know their significance. The diver- ciated with co-design for ethical AI/ML for health. We sity of actors and interests in digital health mean that clas- suggest that attention to these pitfalls is essential to determining sifications implicate ‘health’ in many different ways, the appropriateness and feasibility of AI/ML co-design for however. For example, Sharon (2018), drawing on health andthatbyaddressingthem, it maybepossibleto Boltanski and Thévenot (1999), identifies five different advance approaches to co-design that better equip it to orders of worth animating conceptualizations of the engage with the normative issues raised by those technologies. common good in health-related research led by large tech- nology companies: ‘civic’ (doing good for society), ‘market’ (enhancing wealth creation), ‘industrial’ (increas- Pitfalls associated with co-design ing efficiency), ‘project’ (innovation and experimentation), and ethical AI/ML for health and ‘vitalist’ (proliferating life). Their work illustrates the importance of considering the presence and strength of Pitfall #1: The tendency to place disproportionate influence of some orders of worth over others in different emphasis on procedures and qualities of involvement health-related co-design settings. The central point advanced with Pitfall #1 is that ‘better’ Second, and related, AI/ML technologies for health (and involvement strategies (which even in more critical the data upon which they rely) are of value to actors increas- approaches is often indicated by breadth, depth, or ingly distal to formal health and health care systems. While 6 Big Data & Society the everyday consequences of mostly invisible ‘measurable need to be more explicitly accounted for when considering types’ are often appreciated in terms of targeted advertising, the ethical salience of co-design. search recommendations, or dynamic pricing strategies, Implicit in any undertaking of co-design is the belief that they may also form the basis for insurance coverage and the approach is inherently more ethical than other design premium decisions. Credit rating companies, too, offer strategies not involving patients and publics. In contrast, crit- medical adherence risk-scoring products which allow ical scholarship has focused on ‘levelling the playing field’ in payers and providers to identify patients who may be at co-design processes, by articulating strategies for shared lan- higher risk for ‘non-compliance’ with medical treatments guage in design (Burrows et al., 2016), or studying how (Hogle, 2016). Fourcade and Healy (2017) have described co-design methods might ‘distort’ participation in favour of these developments in terms of an expanding ‘economy designers’ interests (Compagna and Kohlbacher, 2014). of moral judgement’, where health outcomes are experi- While these theoretical and practical developments are enced as morally deserved, based on prior ‘good’ or ‘bad’ crucial for enhancing the agency of patients and publics to health behaviours. participate more effectively in design, what is not explicitly Third, these logics can have the effect of responsibilizing acknowledged in many of these perspectives is how limita- health care, pushing monitoring and management further tions imposed on designers also influence design outcomes. into the domain of individual patients and caregivers This especially bears relevance in co-design, where (Rich et al., 2019), often through design decisions that designers are conventionally expected to move from a posi- nudge health-related behaviours. While frequently lauded tion of expert to ‘facilitator’ (Björgvinsson et al., 2012a; for their potential to more effectively engage patients in Farrington, 2016; Sanders and Stappers, 2008), ‘stager of their own care, these perspectives have been critiqued for negotiations’ (Pedersen, 2020), ‘agonistic Prometheus’ oversimplifying the meaning and value of engagement in (Storni, 2015), or ‘creator of a third space’ where knowledge digital health (Burr and Morley, 2020). Some, for exchange can occur (Muller, 2009). example, point to the unrecognized ‘repair work’ that The practices, goals and perspectives of designers are often accompanies the use of digital health technologies diverse, however, and influenced by a broad range of inter- (Forlano, 2020; Schwennesen, 2019). Moreover, as ests and values. These include other project stakeholders, Prainsack (2020) notes: ‘The very instrument of nudging professional norms, workplace culture, financial incentives, contains value judgements: It assumes that addressing the shareholders and broader economic trends. In health, these practices of people directly is better than changing struc- also crucially include the social and professional norms tural factors. It has been shown, however, that a focus on associated with biomedicine, and the epistemic privilege individual practices directs attention and resources away of evidence-based medicine (Chin-Yee and Upshur, 2019; from tackling the more structural, systemic characteristics Schwennesen, 2019), where ‘evidence-based’ is conven- that shape the problem in the first place’ (p. 11). tionally linked to the epistemic criteria of truth, validity, These evolving geographies of responsibility and foundationalism, and therefore especially quantitative (Schwennesen, 2019), asymmetries of knowledge and evidence (Upshur, 2001) – of which AI/ML is expected logics of efficiency would suggest that any claims to the to be transformative. ethical standing of co-design should be evaluated against Similarly, AI/ML systems are not static objects, but con- a much broader set of sociotechnical relations. This tingent and dynamic. For example, in a study of a physical would require that co-design not only attend to how AI/ rehabilitation algorithm intended to reduce in-person clinic ML technologies contribute to ‘medicalization’ or ‘com- visits, Schwennesen (2019) notes that crucially important modification’ as discrete outcomes of individual technolo- parameters used to assess the bodily movements of patients gies, but also the ways in which those technologies, once were not only determined by patients and physiotherapists, embedded in health and health care systems, transform but also the capabilities of the algorithmic system itself. A broader social, political, and economic fields. physiotherapist on the project notes: ‘We had to sit down and be pretty tough in setting priorities… If there were some parameters that dealt with what one does with the arms or something else, then of course, we could say, Pitfall #2: The tendency to focus attention primarily “The sensors can’t say anything about that”. So of course, on the agency of patients and publics in co-design that was automatically discarded’ (p. 181). The central point advanced with Pitfall #2 is that ‘better’ Acknowledging limits on designers’ agency underscores involvement does not mean that people are entirely free the importance of also attending to the agential capacities of from agential constraints that inevitably shape their parti- those leading design and development processes. By focusing cipation in design activities. These constraints not only attention only on enabling or empowering patients and apply to patients and publics, but others implicated in publics in isolated design events, strategies to improve the design processes too. Health, like other sectors, presents processes and outcomes of co-design risk are ineffective by its own unique constraints which continue to evolve, and failing to attend to the broader range of influences on the Donia 7 activities that take place during the design process. While import in the first place. For example, in their study of tech- some scholars have acknowledged these limitations on nological futures with youth participants in a Chicago designers’ agency (Bødker and Kyng, 2018; Hepworth, summer design program, Harrington and Dillahunt (2021) 2019; Vines et al., 2013), this consideration has yet to take a report that the primary challenges students described were more central role in co-design discourse, likely as a result of racism, police brutality, segregation, poverty and unfair a historical focus on empowering end-users of technologies. housing policies. Technological solutions solely targeting Avoiding the pitfall of attending only to the agency of the proximate issue of a health care outcome will therefore patients and publics at the expense of the agency of always be partial, where co-design of health-related AI/ML designers requires engagement with this broader ecosystem risks perpetuating institutional injustices. of design, expanding the view of who and what is consid- Second, the aim of representation in design is never ered relevant. Attending to this expanded ecosystem may completeness or objectivity, but practical usefulness illuminate strategies for co-design that go beyond the prox- (Asaro, 2000). The ways in which representation and inclu- imate issue of patient or public agency in artefact design, to sion are operationalized in design processes – typically in consideration of the institutional arrangements, technical the form of ‘average’ users or community members – artefacts, infrastructures, norms and social goals that have raises questions about exactly what is practically useful made the particular design event possible in the first place. and to whom. Where representation and inclusion obfuscate fundamental questions relating to power and privilege, there is a risk of entrenching the same problematic relations that technologies are intended to resolve. These biases take Pitfall #3: The tendency to neglect the broader on new forms when produced algorithmically. Will users contexts of representation & inclusion have the ability to contest categorizations such as The central point advanced with Pitfall #3 is that the inclu- ‘healthy’ or ‘not healthy’, ‘compliant’ or ‘non-compliant’? sion of communities in design processes does not necessa- Will they even be aware of them? rily address problems that lead to marginalization in the To avoid the tendency to neglect broader contexts of first place. Indeed, it rarely does, and instead risks sup- representation and inclusion, co-design can include provi- planting consideration of the causes of marginalization sions for reflecting on why particular individuals or with easy-to-use technological solutions that may actually groups are being pursued, what they are expected to stand exacerbate health inequities. in for, which upstream causes of health-related ‘problems’ Representation and inclusion of communities or indivi- might exist, and how co-design and AI/ML can or cannot duals presumed to be affected by AI/ML are often positioned mitigate those consequences. Where representation of as a strategy to reduce potential harms associated with patient and public interests is at stake, co-design strategies designer bias, ignorance or neglect; the more accurately can better account for the plurality of values that underpin co-design processes represent the perspectives of particular interest in, and expressions of, representation. individuals or groups in society, the more technologies will reflect their interests. However, this view obscures two core challenges posed by AI/ML technologies to involvement Avoiding the pitfalls: opportunities for and representation. While this is true of other sectors, we critical research and practice discuss these especially as they relate to health. First, not all groups benefit equally from AI/ML technol- This review has elucidated some of the challenges posed by ogies, even where representation and inclusion are mobi- AI/ML technologies to the patient and public co-design of lized as a strategy to improve access or reduce bias. Just those technologies. In some cases, AI/ML for health has as an emphasis on the agential capacities of users risks amplified existing challenges, such as questions of repre- ignoring limitations placed on designers, so too does an sentation and purpose. In others, AI/ML technologies emphasis on inclusion risk ignoring the systemic nature have presented new challenges, such as the capability of of injustice (Bell and Hartmann, 2007; Hoffmann, 2019). co-design to address questions relating to data extraction Making claims to ethical co-design demands designers and the future uses of those technologies. These risks and engage with the social determinants of health – or the obstacles apply not only to PPI in the design of individual social, political, and economic bases of individual and col- technologies, but also to health services and systems, and lective health and well-being. While some scholars have society more broadly. We suggest that many of the made important advances in attending to intersectionality methods and perspectives necessary to address these chal- (Bauer and Lizotte, 2021; Lizotte et al., 2020) and the lenges already exist, but would benefit from being social determinants of health (Kreatsoulas and brought into conversation with each other more fully. Subramanian, 2018; Pierson et al., 2021) in AI/ML We have also argued in this paper that co-design, espe- models, there remains a risk of failing to account for the cially in health care, operates as an ambiguous and broader sociotechnical context that affects their ethical diverse concept that variously includes different ideas: 8 Big Data & Society involvement, participation, representation, empowerment, insights as a basis for situating the normatively desirable patient-centredness, democracy, ethics and so on. futures that arise from co-design practice. To offer some conceptual clarity in response to these issues, we outline three areas we consider most salient to advancing the goal of co-design for ethical AI/ML for Re-conceptualizing representation in light health. While these suggestions arise from an analysis of algorithmic assemblages centred around discussions of AI/ML for health, they may We have also argued in this paper that where co-design is also serve as a call to other domains where norms and mobilized as a strategy for representation (and it often is) other incentives tend to privilege PPI in design. it is important for designers and others to recognize that co-design should not necessarily claim epistemic legitimacy or moral authority solely based on the composition of its Clarifying co-design’s commitment to values patient or public participants, and any claims to the repre- sentation of patient or public interests in co-design should In this paper, we have argued that accomplishing ethical AI/ be scrutinised in light of the technology’s broader societal ML for health requires more explicit engagement with the impacts. This is especially important with respect to AI/ broader social, political and economic fields that give rise ML technologies for health, which entail representational to both co-design and AI/ML, and relatedly, more explicit forms themselves linked to both health and non-health- engagement with the values they hope to advance. While related goals. some may argue that all co-design involves the illumination While this paradox is not easily reconcilable, we suggest of values (through the involvement of diverse publics), or that this is a challenge with which co-design scholars and that co-design itself necessarily advances particular values practitioners can more fully engage in future work: that (such as democratic values), we suggest that co-design representation of public interests is often a key rationale nevertheless would benefit from more explicit engagement for co-design, but that AI/ML technologies themselves with its normative foundations. produce representations that are partial, opaque and tempor- First, with respect to the claim that all co-design involves ary. Co-design is in a unique position to forge new ways of the illumination of values (through the involvement of conceptualizing representation in the design of AI/ML patients or publics), we echo the cautions of others who systems. For example, which forms of representation are raise the related practical and conceptual challenges of (1) inherent to AI/ML (e.g. statistical), and which does identifying relevant direct and indirect stakeholders and co-design attempt to advance (e.g. political or democratic)? (2) ensuring that the elicitation of values through the parti- When and how might these be in conflict, and which trade- cipation of those stakeholders does not run the risk of com- offs do they involve? mitting the naturalistic fallacy (i.e. conflating descriptions Chasalow and Levy (2021) argue that like co-design, of individual values preferences with normatively desirable ‘representation’ and ‘inclusion’ are ‘suitcase’ words that endpoints) (Manders-Huits, 2011). Indeed, some of the cri- can carry many different meanings which are not merely tiques associated with VID (Donia and Shaw, 2021) may semantic, but normative (e.g. political legitimacy) and epi- also be productively applied to co-design practice. stemic (e.g. tacit or inclusive knowledge). As such, we Second, with respect to the claim that co-design itself suggest that the co-design community engaged with AI/ explicitly advances particular values, we suggest that ML can be more precise when employing them, and expli- those employing co-design attend to exactly which values citly recognize the broader range of values that underpin co-design advances. For example, the earliest forms of these concepts in their different forms. co-design could be said to be broadly committed to the values of ‘workplace democracy’, ‘autonomy’ or ‘quality of work life’ (Iversen et al., 2010). However, those values Mapping sociotechnical relations commitments arose in the context of an expanding science of organization management, and also advanced Part of committing to values in co-design, including those interests related to work quality, productivity and innova- associated with representation and inclusion, involves sur- tion (Kelty, 2020). In AI/ML for health, we might ask facing the actors and institutions upon which they rely. which values are implicitly carried forward with a commit- Doing so is crucial not only to accountability, but relatedly, ment to co-design, and how those interact with different to illuminating the ability of designers and others involved views on what the ethical status of co-design and AI/ML in co-design to realize any positive vision for their work. should be. Referring back to the importance of more Here we suggest that co-design may benefit from further strongly linking the sociotechnical context of design with engagement with ‘theory-methods packages’ capable of the participation of relevant stakeholders, we suggest that explicating those relations and deriving strategies for inter- co-design would benefit from more explicitly attending to vening on the sociotechnical system in which a technology the circumstances that have made it so, and using those and design process is embedded. Donia 9 Methodological approaches in the social sciences and Funding humanities, for example, may help equip co-design with The authors received no financial support for the research, author- valuable approaches to account for this complexity. ship and/or publication of this article. Institutional ethnography (Smith, 2005) has been taken up in sociological studies of health for its explicit focus on identifying the materialized social relations that coordinate people’s everyday activities – whether patients or designers References (Webster, 2020). Other methods already used in the design, Abelson J, Forest P-G, Casebeer A, et al. (2004) Will it make a dif- such as stakeholder maps and prototypes, may also be ference if I show up and share? A citizens’ perspective on useful when they focus on the broader sociotechnical improving public involvement processes for health system system of which design and AI/ML are a part. When com- decision-making. Journal of Health Services Research & Policy 9(4): 205–212. bined with an explicit and reflexive commitment to values, Abelson J, Montesanti S, Li K, et al. (2010) Effective Strategies for these may better equip co-design to understand how differ- Interactive Public Engagement in the Development of ent contexts affect the agency of designers and other stake- Healthcare Policies and Programs. Ottawa: Canadian Health holders to actually realize the futures being envisioned. Services Research Foundation. Aizenberg E and van den Hoven J (2020) Designing for human rights in AI. Big Data & Society 7(2): 2053951720949566. Ananny M (2016) Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, Conclusion: recognizing the limits & Human Values 41(1): 93–117. of co-design Andersen LB, Danholt P, Halskov K, et al. (2015) Participation as Our summary reflection on the content we have provided a matter of concern in participatory design. CoDesign 11(3–4): 250–261. here is a call for design humility (i.e. consistently attending Ansari A (2019) Decolonizing design through the perspectives of to what professional design cannot do for a problem). While cosmological others: Arguing for an ontological turn in design humility in science and technology has been proposed by research and practice. XRDS: Crossroads, The ACM Magazine other commentators (Jasanoff, 2007; Selbst et al., 2019), for Students 26(2): 16–19. this same consideration is at risk of being overlooked in Asaro PM (2000) Transforming society by transforming technol- co-design as a result of assuming that by letting patients ogy: The science and politics of participatory design. or publics inform development, co-design is itself an Accounting, Management and Information Technologies expression of humility. As Irani (2018) notes, all design 10(4): 257–290. entails privileged sites and conceptual frames deserving of Bauer GR and Lizotte DJ (2021) Artificial intelligence, intersec- scrutiny, and as such, any co-design humility might ask: tionality, and the future of public health. American Journal when does co-design substitute other expressions of of Public Health 111(1): 98–100. Bell JM and Hartmann D (2007) Diversity in everyday discourse: public interest and action? What are the epistemic limits The cultural ambiguities and consequences of “happy talk”. of design research as it is conventionally practised? Who American Sociological Review 72(6): 895–914. is sidelined by professional design and why? And perhaps Benjamin R (2019) Race After Technology: Abolitionist Tools for most importantly, when should we not design? the New Jim Code. Medford, MA: John Wiley & Sons. Moreover, co-design discourse itself is primarily rooted Berwick DM (2009) What ‘patient-centered’ should mean: in 20th and 21st-century Euro-North American thought. Confessions of an extremist Health Affairs (Project Hope) Ansari (2019) for example, asks: ‘What does it mean to 28(4): w555–w565. design for people who are not like us, even before we ask Björgvinsson E, Ehn P and Hillgren P-A (2012a) Agonistic parti- whether we should design for people who are not like us? cipatory design: Working with marginalised social movements. What does it mean to design for people who have different CoDesign 8(2–3): 127–144. histories, different backgrounds, and different commit- Björgvinsson E, Ehn P and Hillgren P-A (2012b) Design things and design thinking: Contemporary participatory design chal- ments from us? What does it mean to design for people lenges. Design Issues 28: 101–116. who might relate to the world differently from the way Bødker S and Kyng M (2018) Participatory design that matters— we do?’ (p. 3). Attending to these questions ought to be facing the big issues. ACM Transactions on Computer–Human the starting point for any designer enacting judgement of Interaction 25(1): 1-31. DOI: 10.1145/3152421. the ability of co-design to achieve ethical AI/ML for health. Boivin A (2012) Patient and Public Involvement in Healthcare Improvement. Montréal, Québec: University of Montréal. Boltanski L and Thévenot L (1999) The sociology of critical capacity. European Journal of Social Theory 2(3): 359–377. Declaration of conflicting interests Bot BM, Wilbanks JT and Mangravite LM (2019) Assessing the The authors declared no potential conflicts of interest with respect consequences of decentralizing biomedical research. Big to the research, authorship and/or publication of this article. Data & Society 6(1): 2053951719853858. 10 Big Data & Society boyd D and Crawford K (2012) Critical questions for big data: Donetto S, Pierri P, Tsianakas V, et al. (2015) Experience-based Provocations for a cultural, technological, and scholarly phenom- co-design and healthcare improvement: Realizing participatory enon. Information, Communication & Society 15(5): 662–679. design in the public sector. The Design Journal 18(2): 227– Brown P and Zavestoski S (2004) Social movements in health: An 248. introduction. Sociology of Health & Illness 26(6): 679–694. Donia J and Shaw JA (2021) Ethics and values in design: A struc- Burr C and Morley J (2020) Empowerment or engagement? Digital tured review and theoretical critique. Science and Engineering health technologies for mental healthcare. In: Burr C and Milano Ethics 27(5): 57. S(eds) The 2019 Yearbook of the Digital Ethics Lab. Digital Enshaeifar S, Barnaghi P, Skillman S, et al. (2018) The internet of Ethics Lab Yearbook. Cham: Springer International things for dementia care. IEEE Internet Computing 22(1): Publishing, 67–88. DOI: 10.1007/978-3-030-29145-7_5. 8–17. Burrows A, Gooberman-Hill R and Coyle D (2016) Shared Language Escobar A (2018) Designs for the Pluriverse: Radical and the Design of Home Healthcare Technology. In: Proceedings of Interdependence, Autonomy, and the Making of Worlds. the 2016 CHI Conference on Human Factors in Computing Durham, NC: Duke University Press. Systems, New York, NY, USA, 2016. pp. 3584–3594. CHI ‘16. Eubanks V (2018) Automating Inequality: How High-Tech Tools ACM. DOI: 10.1145/2858036.2858496. Profile, Police, and Punish the Poor. New York, NY: Busfield J (2017) The concept of medicalisation reassessed. St. Martin’s Press. Sociology of Health & Illness 39(5): 759–774. Farrington CJ (2016) Co-designing healthcare systems: Between Capecci M, Ciabattoni L, Ferracuti F, et al. (2018) Collaborative transformation and tokenism. Journal of the Royal Society of design of a telerehabilitation system enabling virtual second Medicine 109(10): 368–371. opinion based on fuzzy logic. IET Computer Vision 12(4): Flanagan M, Howe DC and Nissenbaum H (2005) Values at 502–512. Play: Design Tradeoffs in Socially-oriented Game Design. Carman KL, Dardess P, Maurer M, et al. (2013) Patient and family In: Proceedings of the SIGCHI Conference on Human engagement: A framework for understanding the elements and Factors in Computing Systems,New York,NY, USA, developing interventions and policies. Health Affairs (Project 2005, pp.751–760. CHI’05. ACM. DOI: 10.1145/1054972. Hope) 32(2): 223–231. 1055076. Chasalow K and Levy K (2021) Representativeness in Statistics, Fogg BJ, Cueller G and Danielson D (2007) Motivating, influen- Politics, and Machine Learning. In: ACM Conference on cing, and persuading users: An introduction to captology. In: Fairness, Accountability, and Transparency (FAccT’21),3 The Human–Computer Interaction Handbook. Boca Raton, March 2021. FL: CRC Press, 159–172. Cheney-Lippold J (2017) We Are Data: Algorithms and the Forlano L (2020) The Danger of Intimate Algorithms. In: Public Making of Our Digital Selves. New York, NY: NYU Press. Books Blog. Available at: https://www.publicbooks.org/the- Chin-Yee B and Upshur R (2019) Three problems with big data danger-of-intimate-algorithms/ (accessed 13 November and artificial intelligence in medicine. Perspectives in 2020). Biology and Medicine 62(2): 237–256. Forlano L and Mathew A (2014) From design fiction to design Clarke AE, Shim JK, Mamo L, et al. (2003) Biomedicalization: friction: Speculative and participatory design of values- Technoscientific transformations of health, illness, and US bio- embedded urban technology. Journal of Urban Technology medicine. American Sociological Review 68(2): 161–194. 21(4): 7–24. Compagna D and Kohlbacher F (2014) The limits of participatory Fourcade M and Healy K (2017) Seeing like a market. Socio- technology development: The case of service robots in care Economic Review 15(1): 9–29. facilities for older people. Technological Forecasting and Frauenberger C, Good J, Fitzpatrick G, et al. (2015) In pursuit of Social Change 93: 19–31. rigour and accountability in participatory design. International Cooper A, Reimann R, Cronin D, et al. (2014) About Face: The Journal of Human–Computer Studies 74: 93–106. Essentials of Interaction Design. Hoboken, NJ: John Wiley Giacomin J (2014) What is human centred design? The Design & Sons. Journal 17(4): 606–623. Costanza-Chock S (2020) Design Justice: Community-Led Gibson A, Britten N and Lynch J (2012) Theoretical directions for Practices to Build the Worlds We Need. Cambridge, MA: an emancipatory concept of patient and public involvement. MIT Press. Health 16(5): 531–547. Couldry N and Mejias UA (2019) The Costs of Connection: How Goldhahn J, Rampton V and Spinas GA (2018) Could artificial Data is Colonizing Human Life and Appropriating it for intelligence make doctors obsolete? BMJ 363. DOI: 10.1136/ Capitalism. Redwood City, CA: Stanford University Press. bmj.k4563. D’Ignazio C and Klein LF (2020) Data Feminism. Cambridge, Greenhalgh T, Hinton L, Finlay T, et al. (2019) Frameworks for MA: MIT Press. supporting patient and public involvement in research: Dalton C and Thatcher J (2014) What does a critical data studies Systematic review and co-design pilot. Health Expectations look like, and why do we care? Seven points for a critical 22(4): 785-801. DOI: 10.1111/hex.12888. approach to ‘big data’. Society and Space 29. Halloran J, Hornecker E, Stringer M, et al. (2009) The value of Dantec CAL and DiSalvo C (2013) Infrastructuring and the forma- values: Resourcing co-design of ubiquitous computing. tion of publics in participatory design. Social Studies of Science CoDesign 5(4): 245–273. 43(2): 241–264. Harrington C and Dillahunt TR (2021) Eliciting Tech Futures DiSalvo C (2012) Revealing Hegemony: Agonistic Information Among Black Young Adults: A Case Study of Remote Design. Cambridge, MA: MIT Press. Speculative Co-Design. In: Proceedings of the 2021 CHI Donia 11 Conference on Human Factors in Computing Systems, 2021, for covid-19 (SARS-CoV-2) pandemic: A review. Chaos, pp.1–15. Solitons & Fractals 139: 110059. Harrington C, Erete S and Piper AM (2019) Deconstructing com- Latour B (2005) Reassembling the Social: An Introduction to munity-based collaborative design: Towards more equitable Actor-Network-Theory. New York, NY: OUP. participatory design engagements. Proceedings of the ACM Lizotte DJ, Mahendran M, Churchill SM, et al. (2020) Math versus on Human–Computer Interaction 3(CSCW): 1–25. meaning in MAIHDA: A commentary on multilevel statistical Hepworth K (2019) A panopticon on my wrist: The biopower of models for quantitative intersectionality. Social Science & big data visualization for wearables. Design and Culture Medicine 245: 112500. 11(3): 323–344. Lupton D (2014a) Apps as artefacts: Towards a critical per- Hillgren P-A (2013) Participatory design for social and public spective on mobile health and medical apps. Societies innovation: Living labs as spaces for agonistic experiments 4(4): 606–622. and friendly hacking. In Public and Collaborative: Exploring Lupton D (2014b) Critical perspectives on digital health technol- the Intersection of Design, Social Innovation and Public ogies. Sociology Compass 8(12): 1344–1359. Policy:75–88. Malmö, Sweden: DESIS Network United Lupton D (2016) Towards critical digital health studies: States. Reflections on two decades of research in health and the way Hoffmann AL (2019) Where fairness fails: Data, algorithms, and forward. Health 20(1): 49–61. the limits of antidiscrimination discourse. Information, Lupton D (2017a) Digital Health: Critical and Cross-Disciplinary Communication & Society 22(7): 900–915. Perspectives. New York, NY: Routledge. Hogg CNL (2007) Patient and public involvement: What next for Lupton D (2017b) Digital health now and in the future: Findings the NHS? Health Expectations 10(2): 129–138. from a participatory design stakeholder workshop. DIGITAL Hogle L (2016) Data-intensive resourcing in healthcare. HEALTH 3: 2055207617740018. BioSocieties 11. DOI: 10.1057/s41292-016-0004-5. Lyon D (2010) Surveillance, power and everyday life. In: Iliadis A and Russo F (2016) Critical data studies: An introduction. Emerging Digital Spaces in Contemporary Society. London, Big Data & Society 3(2): 2053951716674238. UK: Springer, 107–120. Irani L (2018) “Design thinking”: Defending silicon valley at the Mainsah H and Morrison A (2014) Participatory Design Through apex of global labor hierarchies. Catalyst: Feminism, Theory, a Cultural Lens: Insights from Postcolonial Theory. In: Technoscience 4(1): 1-19. Proceedings of the 13th Participatory Design Conference: Iversen OS, Halskov K and Leong TW (2010) Rekindling values Short Papers, Industry Cases, Workshop Descriptions, in participatory design. In: Proceedings of the 11th biennial Doctoral Consortium Papers, and Keynote Abstracts – participatory design conference, 2010, pp.91–100. Volume 2, New York, NY, USA, 2014, pp.83–86. PDC’14. Ives J, Damery S and Redwod S (2013) PPI, paradoxes and Plato: ACM. DOI: 10.1145/2662155.2662195. Who’s sailing the ship? Journal of Medical Ethics 39(3): 181–185. Malizia A and Carta S (2020) Co-Creation and Co-Design Jasanoff S (2007) Technologies of humility. Nature 450(7166): Methodologies to address Social Justice and Ethics in 33–33. Machine Learning. ACM SIGCHI Italy. Kelty CM (2020) The Participant: A Century of Participation in Manders-Huits N (2011) What values in design? The challenge of Four Stories. Chicago, IL: University of Chicago Press. incorporating moral values into design. Science and Kensing F and Blomberg J (1998) Participatory design: Issues and Engineering Ethics 17(2): 271–287. concerns. Computer Supported Cooperative Work 7(3): 167–185. Metcalf J and Moss E (2019) Owning ethics: Corporate logics, Kim S (2019) Comparative Review of Research on Health silicon valley, and the institutionalization of ethics. Social Information Technology in Biomedical Informatics and Research 86(2): 449–476. Human-Computer Interaction. In: International Conference Michie S, Thomas J, Johnston M, et al. (2017) The human behav- on Human-Computer Interaction, 2019, pp.16–32. Springer. iour-change project: Harnessing the power of artificial intelli- Kitchin R (2017) Thinking critically about and researching gence and machine learning for evidence synthesis and algorithms. Information, Communication & Society 20(1): interpretation. Implementation Science 12(1): 21. 14–29. Mol A (2008) The Logic of Care: Health and the Problem of Kitchin R and Lauriault T (2014) Towards Critical Data Studies: Patient Choice. New York, NY: Routledge. Charting and Unpacking Data Assemblages and Their Work. Montani S and Striani M (2019) Artificial intelligence in clinical The Programmable City Working Paper 2 ID 2474112, decision support: A focused literature survey. Yearbook of SSRN Scholarly Paper, 30 July. Rochester, NY. Available at: Medical Informatics 28(1): 120. https://papers.ssrn.com/abstract = 2474112 (accessed 27 Muller MJ (2009) Participatory design: The third space in HCI. In: August 2019). Human–Computer Interaction. Boca Raton, FL: CRC Press, Kraft P and Bansler J (1994) The collective resource approach: 181–202. The Scandinavian experience. Scandinavian Journal of Noble SU (2018) Algorithms of Oppression: How Search Engines Information Systems 6(1): 71-84. Available at: https://aisel.ais- Reinforce Racism. New York, NY: NYU Press. net.org/sjis/vol6/iss1/4. Norman DA and Draper SW (1986) User Centered System Kreatsoulas C and Subramanian SV (2018) Machine learning in Design: New Perspectives on Human–Computer Interaction. social epidemiology: Learning from experience. SSM - Boca Raton, FL: Taylor & Francis. Population Health 4: 347–349. Obermeyer Z, Powers B, Vogeli C, et al. (2019) Dissecting racial Lalmuanawma S, Hussain J and Chhakchhuak L (2020) bias in an algorithm used to manage the health of populations. Applications of machine learning and artificial intelligence Science 366(6464): 447–453. 12 Big Data & Society Ollenburg SA (2019) A futures-design-process model for partici- Sloane M, Moss E, Awomolo O, et al. (2020) Participation is not a patory futures. Journal of Futures Studies 23(4): 51–62. Design Fix for Machine Learning. In: Proceedings of the 37th Pasquale F (2015) The Black Box Society. Cambridge, MA: International Conference on Machine Learning, Vienna, Harvard University Press. Austria, 2020. Pedersen S (2020) Staging negotiation spaces: A co-design frame- Smith DE (2005) Institutional Ethnography: A Sociology for work. Design Studies 68: 58–81. People. Lanham, MD: Rowman Altamira. Pham Q, Wiljer D and Cafazzo JA (2016) Beyond the rando- Storni C (2015) Notes on ANT for designers: Ontological, meth- mized controlled trial: A review of alternatives in mHealth odological and epistemological turn in collaborative design. clinical trial methods. JMIR mHealth and UHealth 4(3): CoDesign 11(3–4): 166–178. e107. Topol E (2019) Deep Medicine: How Artificial Intelligence Can Pierson E, Cutler DM, Leskovec J, et al. (2021) An algorithmic Make Healthcare Human Again. New York, NY: Basic Books. approach to reducing unexplained pain disparities in under- Tran O’Leary J, Zewde S, Mankoff J, et al. (2019) Who gets to served populations. Nature Medicine 27(1): 136–140. future? Race, representation, and design methods in Prainsack B (2020) The value of healthcare data: To nudge, or not? Africatown. In: Proceedings of the 2019 CHI Conference on Policy Studies 41(5): 547–562. Human Factors in Computing Systems, 2019, pp.1–13. Rich E, Miah A and Lewis S (2019) Is digital health care more Triberti S and Barello S (2016) The quest for engaging AmI: equitable? The framing of health inequalities within Patient engagement and experience design tools to promote England’s digital health policy 2010–2017. Sociology of effective assisted living. Journal of Biomedical Informatics Health & Illness 41: 31–49. 63: 150–156. Rose N (2007) Molecular biopolitics, somatic ethics and the spirit Tritter JQ (2009) Revolution or evolution: The challenges of con- of biocapital. Social Theory & Health 5(1): 3–29. ceptualizing patient and public involvement in a consumerist Rossitto C (2021) Political ecologies of participation: Reflecting world. Health Expectations 12(3): 275–287. on the long-term impact of civic projects. Proceedings of the Tsekleves E, Darby A, Whicher A, et al. (2017) Co-designing ACM on Human-Computer Interaction 5(CSCW1): 1–27. design fictions: A new approach for debating and priming ACM, New York, NY, USA. future healthcare technologies and services. Archives of Rowland P and Kumagai AK (2018) Dilemmas of representation: Design Research 30(2): 5–21. Patient engagement in health professions education. Academic Upshur RE (2001) The status of qualitative research as evidence. Medicine 93(6): 869–873. In J. M. Morse, J. M. Swanson & A. J. Kuzel (Eds.), The Rowland P, Anderson M, Kumagai AK, et al. (2018) Patient invol- Nature of Qualitative Evidence:5–26. Sage Thousand Oaks, vement in health professionals’ education: A meta-narrative CA: SAGE. review. Advances in Health Sciences Education 24(3): 595- Vines J, Clarke R, Wright P, et al. (2013) Configuring participa- 617. DOI: 10.1007/s10459-018-9857-7. tion: on how we involve people in design. In: Proceedings of Ruckenstein M and Schüll ND (2017) The datafication of health. the SIGCHI Conference on Human Factors in Computing Annual Review of Anthropology 46(1): 261–278. Systems, 2013, pp.429–438. Sanders EB-N and Stappers PJ (2008) Co-creation and the new Wait S and Nolte E (2006) Public involvement policies in health: landscapes of design. CoDesign 4(1): 5–18. Exploring their conceptual basis. Health Economics Policy and Schwab K (2017) The Fourth Industrial Revolution. Redfern, Law 1: 149. NSW: Currency. Webster F (2020) The Social Organization of Best Practice: An Schwennesen N (2019) Algorithmic assemblages of care: Institutional Ethnography of Physicians’ Work. Cham, Imaginaries, epistemologies and repair work. Sociology of Switzerland: Springer. Health & Illness 41: 176–192. Whitman M, Hsiang C and Roark K (2018) Potential for participa- Seidelin C, Dittrich Y and Grönvall E (2020) Foregrounding data tory big data ethics and algorithm design: a scoping mapping in co-design–an exploration of how data may become an object review. In: Proceedings of the 15th Participatory Design of design. International Journal of Human–Computer Studies Conference: Short Papers, Situated Actions, Workshops and 143: 102505. Tutorial - Volume 2, New York, NY, USA, 20 August 2018, Selbst AD, Boyd D, Friedler SA, et al. (2019) Fairness and pp.1–6. PDC’18. Association for Computing Machinery. abstraction in sociotechnical systems. In: Proceedings of the DOI: 10.1145/3210604.3210644. Conference on Fairness, Accountability, and Transparency, Yeung K (2017) ‘Hypernudge’: Big data as a mode of regulation 2019, pp.59–68. by design. Information, Communication & Society 20(1): 118– Sharon T (2018) When digital health meets digital capitalism, how 136. many common goods are at stake? Big Data & Society 5(2): Zaidi L (2019) Worldbuilding in science fiction, foresight and 2053951718819032. design. Journal of Futures Studies 23(4): 15–26. Shaw J, Agarwal P, Desveaux L, et al. (2018) Beyond “implemen- Zuboff S (2019) The Age of Surveillance Capitalism: The Fight for tation”: Digital health innovation and service design. npj a Human Future at the New Frontier of Power. New York, Digital Medicine 1(1): 48. NY: Profile Books.

Journal

Big Data & SocietySAGE

Published: Dec 17, 2021

Keywords: Co-design; participatory design; artificial intelligence; health care; design ethics; data ethics

References