

Establishing a social licence for Financial Technology: Reflections on the role of the private sector in pursuing ethical data practices

Abstract

Current attention directed at ethical dimensions of data and Artificial Intelligence has led to increasing recognition of the need to secure and maintain public support for uses (and reuses) of people's data. This is essential to establish a "Social Licence" for current and future practices. The notion of a "Social Licence" recognises that there can be meaningful differences between what is legally permissible and what is socially acceptable. Establishing a Social Licence entails public engagement to build relationships of trust and ensure that practices align with public values. While the concept of the Social Licence is well-established in other sectors – notably in relation to extractive industries – it has only very recently begun to be discussed in relation to digital innovation and data-intensive industries. This article therefore draws on existing literature relating to the Social Licence in extractive industries to explore the potential approaches needed to establish a Social Licence for emerging data-intensive industries. Additionally, it draws on well-established literature relating to trust (from psychology and organisational science) to examine the relevance of trust, and trustworthiness, for emerging practices in data-intensive industries. In doing so the article considers the extent to which pursuing a Social Licence might complement regulation and inform codes of practice to place ethical and social considerations at the heart of industry practice. We focus on one key industry: Financial Technology. We demonstrate the importance of combining technical and social approaches to address ethical challenges in data-intensive innovation (particularly relating to Artificial Intelligence) and to establish relationships of trust to underpin a Social Licence for Financial Technology.
Such approaches are needed across all areas and industries of data-intensive innovation to complement regulation and inform the development of ethical codes of practice. This is important to underpin culture change and to move beyond rhetorical commitments to develop best practice putting ethics at the heart of innovation. Keywords Financial Technology, data, social licence, ethics, responsible artificial intelligence, trust Introduction the UK Government’s Centre for Data Ethics and Innovation; a House of Lords Select Committee on Recent years have witnessed a dramatic increase in Artificial Intelligence which proposed an ethical attention directed at ethical dimensions of data practi- ces and Artificial Intelligence (AI). Increasingly momentum for innovation is being met with interest Newcastle University Business School, Newcastle upon Tyne, UK in related ethical considerations and a number of School of Computing, Newcastle University, Newcastle upon Tyne, UK high profile institutes and bodies have been established Corresponding author: to focus on this area. These include the European Mhairi Aitken, Newcastle University Business School, Urban Sciences Commission’s High-Level Expert Group on Artificial Building, Newcastle University, 1 Science Square, Newcastle upon Tyne Intelligence whose mandate included drafting a set of NE4 5TG, UK. AI Ethics Guidelines (European Commission, 2019); Email: mhairi.aitken@newcastle.ac.uk Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https:// creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). 
2 Big Data & Society framework for AI; the Ada Lovelace Institute and; and inform codes of practice to place ethical and social DeepMind Ethics and Society. Internationally, the considerations at the heart of industry practice. World Economic Forum has proposed nine ethical In order to illustrate this, we focus on one key indus- questions to ask of AI systems and the Fairness, try: Financial Technology (FinTech). We argue that as Accountability, and Transparency in Machine a data-intensive industry FinTech requires ethical data Learning forum developed Principles for Accountable practices to be developed and demonstrated in order to Algorithms and a Social Impact Statement for establish and maintain a SL. While there is industrial Algorithms (Stahl and Wright, 2018). Such bodies advocacy surrounding the potential benefits of data focus on developing Responsible AI (e.g. PWC, 2018) science and AI in banking, it is not yet clear whether and trustworthy approaches to AI and digital innova- there is a SL for these practices. Therefore, FinTech tion. While questions remain as to whether, in practice, provides a timely example through which to examine this goes further than “ethics washing” (Hasselbalch, the opportunities and potential approaches to develop 2019), this has led to a proliferation of guidance and ethical data practices beyond compliance with principles relating to ethical AI (Fjeld et al., 2019) and regulation. within the private sector there is emerging interest in The article draws together the multi-disciplinary the concept of Corporate Digital Responsibility (CDR; perspectives of the authors to reflect on how a SL for Lobschat et al., 2020). data-intensive industries might be realised. 
It begins by This trend is in part a response to high profile public providing some background to the concept of the SL controversies around data misuse as well as the intro- before discussing the ways in which this has been duction of new regulation through the EU General applied to innovation in data practices. The article Data Protection Regulation (GDPR), which gives indi- will then focus on FinTech and discuss the relevance viduals greater control over their own data (Politou and implications of a SL for the FinTech industry. et al., 2018). In the wake of such developments ques- In particular, the article draws on literature from com- tions have arisen as to whether compliance with regu- puter science, organisational science and science and lation is sufficient to ensure ethical data practices and technology studies to consider the importance of devel- to what extent ethical codes of practice are also needed oping relationships of trust and to set out some of the (Hasselbalch, 2019). This has resulted in increasing rec- technical and social approaches used to facilitate this. ognition of the need to secure and maintain public sup- port for uses (and reuses) of people’s data in order to Notes on terminology establish a “Social Licence” (SL) for current and future Data-driven industry is becoming an established and practices. commonly used term to describe industries whose oper- The notion of a “Social Licence” recognises that ations are underpinned by data practices (e.g. data col- there can be meaningful differences between what is lection, storage, linkage, analysis or sharing), this term legally permissible and what is socially acceptable implies that the data that is used pre-exists the opera- (Carter et al., 2015). 
A SL is granted by a community tions of the industry and that its existence (in an objec- of stakeholders and is intangible and unwritten but tive and “real” sense) shapes the operations and may be essential for the sustainability and legitimacy practices of industry. However, this positivist position of particular practices or industries. Developing and overlooks the ways in which data are created through maintaining a SL requires public engagement incorpo- the operations and practices of industry. Indeed the rating diverse perspectives and interests, beyond those creation and curation of data are equally important of professional communities, to ensure that current and functions of these industries, and it is through these future practices are aligned with the values of society. functions that data comes to have value (Sadowski, While the concept of the “Social Licence” is well- 2019). Therefore throughout this article we refer to established in other sectors it has only very recently data dependent or data-intensive industries. The term begun to be discussed in relation to digital innovation “data dependant” acknowledges that access to data and data-intensive industries. This article therefore and the continuous creation of new data is fundamen- draws on existing literature relating to the SL in extrac- tal to the operations of these industries. “Data- tive industries to explore the potential approaches intensive” acknowledges the central role of data in needed to establish a SL for emerging data-intensive the operations and practices of these industries (both industries. Additionally, it draws on well-established creating and using data). literature relating to trust (from psychology and organ- The use of the term AI emphasizes the algorithms isational science) to examine the relevance of trust, and that operate on the data as well as the associated auto- trustworthiness, for emerging data practices in the pri- vate sector. 
In doing so the article considers the extent mated or autonomous decision-making carried out by to which pursuing a SL might complement regulation a computer. The ethical concerns in AI extend beyond Aitken et al. 3 concerns about data into the implications of machines approaches which attempt to manage (rather carrying out tasks and making decisions autonomous- than address) community opposition to developments ly. A specific class of algorithms is that of machine (Owen and Kemp, 2012; Parsons and Moffat, 2014). learning algorithms, in which (large) sets of data Yet on the other hand, the flexibility in the definition allow the machine to learn certain trades or character- and operationalisation of the concept of the SL is con- istics, for instance allowing one to classify people, sidered one of its strengths. pictures or activities. As will be discussed below, learn- While legal licences are fixed and clearly defined to ing from data introduces a particular flavour of ethical permit certain practices over a designated time period, concerns, including issues such as perpetuating bias, a SL is unwritten and tacit in nature (Moffat et al., discrimination and exclusion present in the used data, 2016). Rather than being granted by governments or which contains judgements from the past. official bodies, a SL is granted by the relevant commu- nity of stakeholders or citizens without written agree- ments or formal procedures (Warhurst, 2001). Hence, Social Licence the SL is ‘a dynamic and changing reflection of the The term “Social Licence” or “Social Licence to quality and strength of the relationship between an Operate” has been in use since the 1990s and has industry and a community of stakeholders’ (Moffat been most influential in relation to extractive industries et al., 2016: 480–481). Importantly, relationships such as mining and forestry (Moffat et al., 2016). 
The between industry and stakeholders are not static, but emergence of SL in the 1990s reflected changing rather are continuously evolving and adapting. community-industry relationships in response to Therefore, the nature of the SL will also evolve and increasing pressure and scrutiny surrounding environ- adapt and ‘business must have regard for evolving mental impacts and social performance of extractive social attitudes and expectations if it is to maintain industries as well as increasing expectations for com- its “social licence”’ (Brown and Fraser, 2006: 108). munity/public participation in decision-making about industry development (Conrad, 2018). This led to the A social licence for data practices concept of the SL being developed as a means of pur- In response to high profile public controversies around suing new relationships between industry and commu- data use (and misuse), recent years have brought calls nities to reflect public values and ensure community to establish a SL for data practices (e.g. Allen et al., support for projects. Given the increasing attention 2019, Carter et al., 2015; Lawler et al., 2018; Leonard, directed at ethical dimensions of technological innova- 2018). For example, the New Zealand Government tion (particularly around data science and AI) and established the Data Futures Partnership to develop increasing consumer interest in these issues (Brusoni guidelines that public and private organisations can and Vaccaro, 2017) there is now a similar strong ratio- use to develop a SL for data use (Data Futures nale to apply the concept of SL to new data-intensive Partnership, 2017). The resulting guidelines ‘aim to industries. enable organisations to maximise the value of data There is no single, agreed upon definition of the SL through building the trust of their clients and develop- or how it should be pursued in practice. 
Some conceive ing wider community acceptance’ (Data Futures SL as representing a set of demands and expectations Partnership, 2017). They consider a SL to be estab- regarding how a business should operate (e.g. lished ‘when people trust that their data will be used Gunningham et al., 2004), while others emphasise as they have agreed, and accept that enough value will that establishing a SL requires industry to adapt to be created’ (Data Futures Partnership, 2017). social norms and values and to change its practices to Interest in the SL has been particularly evident in reflect these (e.g. Harvey, 2011). Moffat et al. (2016: relation to digital health and data-intensive health 480) summarised that the SL ‘tends to be regarded as the ongoing acceptance or approval of an operation by research. For example, Carter et al. (2015) contended those local community stakeholders who are affected that the failed implementation of the Care.Data digital health platform demonstrated that the programme had by it and those stakeholders who can affect its failed to adequately secure a SL. A recent international profitability.’ consensus statement set out the importance of public The lack of fixed definition has enabled considerable engagement to establishing a SL for data-intensive flexibility in how SL is pursued. On the one hand, this health research (Aitken et al., 2019); recent research has led to criticisms that SL can be employed ‘oppor- into public attitudes towards research uses of health tunistically to serve the particular objectives and goals data have been framed in terms of understanding the of companies, activists and governments’ (Moffat et al., 2016: 480). Indeed, previous studies have found that conditions needed to ensure a SL for secondary uses of companies have referred to the SL in pursuing data (e.g. Paprica et al., 2019) and Allen et al. 
(2019) 4 Big Data & Society have discussed the role of data custodians in establish- far-reaching – and often unpredicted – impacts across ing and maintaining a SL for the use of personal infor- society a broad conception of stakeholders acknowl- mation in health research. edges the importance of wide public engagement In applying the concept of SL to emerging data- beyond potential service-users. As such wide public intensive industries comparisons are invited between engagement with broad publics is vital to ensure that these new industries and extractive industries in current and future practices reflect public values and which SL has previously been applied. Data is often interests and has an important role to play in strength- depicted as the “new oil” or a “goldmine” (e.g. CTO, ening wider science–society relations (Aitken et al., 2019; The Economist, 2017): such analogies conceptu- 2019). The extent to which such broad approaches alise data as a commodity and a resource to be are likely to be adopted by industry is considered fur- exploited. However, this analogy has been contested ther below. as it is noted that data does not exist in a natural It is noteworthy that while the concept of the SL form ready to be extracted, but rather it is created or originates in controversies around private sector activ- manufactured and given value through the ways in ities, to date, the SL for data practices is largely being which it is used (Sadowski, 2019). Yet while data may examined in relation to public sector uses/reuses of not be the new oil, data dependant industries neverthe- data (e.g. relating to health data). Important questions less have much to learn from previous (good and bad) therefore remain relating to the role of private compa- experiences of extractive industry engagement with nies in establishing a SL for current and future data stakeholders to address practical, ethical and social practices. 
As technologies are increasingly developed dimensions of their operations. In particular, processes and deployed in the private sector, and ethical consid- of data accumulation through the creation of new mar- erations arise in their application, greater consideration kets and services in previously under-served regions of should be given to the role of private companies in the world creating new forms of dependence and lead- addressing these issues. We aim to address this gap in ing to “data colonialism” (Sadowski, 2019) are remi- the literature by examining the ways in which a SL niscent of the history of exploitative relationships in might be established in emerging data-intensive indus- extractive industries where natural resources have tries, using FinTech as an illustrative case study. been extracted and removed from local areas to profit overseas corporations. The concept of SL was estab- FinTech lished as a means of addressing such injustices and as FinTech has been defined as ‘a new financial industry such may play an important role in ensuring data jus- that applies technology to improve financial activities’ tice (Taylor, 2017). (Schueffel, 2016: 45) and FinTech firms have been This brief history of SL in extractive industries high- described as ‘firms that are combining innovative busi- lights the importance of building and maintaining rela- ness models and technology to enable, enhance and tionships with stakeholders to maintain stakeholder disrupt financial services’ (Gulamhuseinwala et al., support. The ways in which stakeholders are defined 2015: 4). Through innovation in FinTech, the financial and identified are crucial considerations in pursuing a industry as a whole is evolving and adopting new tech- SL. Typically extractive industries have sought to nologies and data-intensive practices. 
Innovations engage with local communities (as “communities of which have played a role in creating this impact include place”; Moffat et al., 2016) whereas approaches based internet banking, mobile payments, crowdfunding, on physical proximity are likely to be irrelevant to the peer-to-peer lending, Robo-Advisory and online iden- activities of most data-intensive industries such as tification (Schueffel, 2016). Some of the technologies FinTech. Narrow definitions might conceive stakehold- used by FinTechs include: User-facing web-based tech- ers as being customers, investors or regulators, whereas nologies including applications for mobile phones and broader approaches would include all people from web browsers; back-end technologies, such as cloud whom data is derived and used in developing or imple- and blockchain; data collection, processing and analyt- menting data-dependent services and/or anyone who is potentially affected by the industry’s activities. While ics. AI in particular is used for a range of purposes GDPR gives individuals greater control over their data, including developing automated chatbots for customer in many instances individuals do not explicitly consent services, efficient processes for detecting fraud and to their data being used in developing or implementing money laundering and improving automated processes new services (e.g. where data is used in aggregate form), that utilise large volumes of data (e.g. client risk pro- or know how their data might be used. Moreover, filing or credit scoring; Maskey, 2018). people increasingly have limited choices regarding Customer uptake of FinTech products is rapidly whether or not to use a service which requires access expanding: In 2015, the EY FinTech Adoption Index to their data. Given that data practices are having reported that ‘a weighted average of 15.5% of digitally Aitken et al. 
5 active consumers are FinTech users’ according to their to attract and retain customers; Adopting ethical and definition of a FinTech user using at least two FinTech transparent approaches to business in order to prepare products (e.g. mobile payments, apps or insurance tele- FinTechs to anticipate and respond to regulatory and matics; Gulamhuseinwala et al., 2015: 6). Just two policy developments or; Efforts to demonstrate a SL in years later this number had doubled with 33% of order to set FinTechs apart from traditional banking survey respondents being FinTech users institutions. However, following a purely instrumental (Gulamhuseinwala et al., 2017). Thus, FinTech repre- rationale can lead to approaches which pay ‘lip service’ sents a fast-developing industry underpinned by data- to public concerns through enacting purely cosmetic dependent technologies. Given that finance is an area forms of public engagement without genuine intentions that affects most – if not all – members of society the to address concerns or reflect public values in a com- potential impacts of this industry are significant. Such pany’s operation. impacts might include transforming the way people A final set of motivations are underpinned by sub- access and use money, creating cashless societies stantive rationales that regard the development and (Teigland et al., 2018), opening up financial services maintenance of a SL as being aimed at creating wider to unbanked or underbanked populations (World positive outcomes across society ‘from this point of Bank, 2017) and enhancing competition in the market view, citizens are seen as subjects, not objects, of the (Bank of England, 2019). However, simultaneously the process. 
They work actively to shape decisions, rather reliance on data-intensive technologies and processes than having their views canvassed by other actors to can risk creating new opaque systems through which inform decisions that are then taken’ (Wilsdon and access to finance is determined or to increasingly neces- Willis, 2004: 39). Following this approach engaging sitate citizens’ participation in the Big Data society. As with public values and interests aims to establish a such, the emergence of FinTech represents a timely SL for current and future practices while also offering opportunity to examine the relevance of SL for inform- opportunities to “do things better” and maximise ben- ing ethical data practices in private sector organisations efits not only for the FinTechs concerned but also for within data-intensive industries. wider society. Here it is important to emphasise that To date literature around FinTech has not explicitly establishing a SL is not simply about avoiding or mit- engaged with the concept of a SL. Studies which have igating potential negative impacts but equally about examined public attitudes or responses have typically maximising the benefits of FinTech. focused on customer uptake of FinTech products (e.g. In the following sections, we consider ways through Chuang et al., 2016; Gulamhuseinwala et al., 2015, which FinTechs could pursue a SL, focussing on the 2017). In doing so they have tended to focus on cus- importance of developing relationships of trust and the tomers’ motivations for using FinTech services, and role of public engagement in establishing and maintain- largely neglected non-customers’ reasons for not using ing a SL. FinTech services, or the reasons why some FinTech offerings have been unsuccessful (Kavuri and Milne, Trust and trustworthiness in FinTech 2019). 
There is a lack of public deliberation or engage- Previous studies have considered consumer trust in ment to examine the extent to which FinTech practices relation to FinTech. For example, Gulamhuseinwala align with public values and interests. In short, wider et al. (2015) described that while many potential cus- issues around public acceptability of FinTech or its role tomers look positively at FinTech offerings, more than in society remain largely overlooked. 25% preferred traditional providers and another 11% There are a number of reasons why FinTechs might do not trust new FinTech companies. Similarly, be motivated to pursue a SL. These reasons in turn Chuang et al. (2016) concluded that brand and service reflect different underpinning rationales which can be trust significantly affect attitudes and willingness to use normative, instrumental and/or substantive (Fiorino, FinTech services. However, this attention to consumer 1990; Wilsdon and Willis, 2004). First, a normative trust has often overlooked considerations of what it rationale leads to moral positions that suggest that means for a FinTech to be trustworthy. While trust is FinTech firms should engage with stakeholders and reflect public values as ‘it’s the right thing to do’ at the heart of a SL, this is established through mutual (Wilsdon and Willis, 2004). Second, more practically relationships enabling all interests and perspectives to minded approaches follow instrumental rationales be reflected and addressed (Moffat et al., 2016). which view efforts to establish and demonstrate a SL Customers – and the wider public – do not passively as means to achieve an organisation’s own objectives receive information about technologies, products or (Wilsdon and Willis, 2004). Instrumental rationales services (Wynne, 2006). 
In emphasising the importance might lead to a variety of potential approaches includ- of aligning with public values, the SL requires dialogue ing: Efforts to build and maintain public trust in order and public engagement to identify and address public 6 Big Data & Society values, concerns and interests (Moffat et al., 2016). suggest Predictability is crucial in relation to openness, Therefore, establishing a SL requires reflection not transparency and commitments to meaningful engage- just on ways of building public trust but also of estab- ment (Moffat and Zhang, 2014). Moffat and Zhang lishing and demonstrating trustworthiness (Aitken (2014) found that procedural fairness and good quality et al., 2016a, 2019). engagement were key to sustaining relationships of trust underpinning a SL. This suggests that predictabil- ity in terms of an organisation’s approaches and fair- Establishing trustworthiness ness may be more important than predictability of A number of authors have proposed frameworks to particular actions or activities which should be able examine perceived trustworthiness. For example, to adapt in response to engagement processes (Moffat Butler and Cantrell (1984) suggested that trust was et al., 2016). Therefore, in pursuing a SL, FinTechs based on perceptions of: integrity; competence; consis- need to ensure consistency in engagement approaches. tency; loyalty and; openness. Butler (1991) later expanded this list to include: discreetness; fairness; Relationships of trust promise fulfilment; availability; receptivity and; overall trustworthiness. Mayer et al. (1995) suggested that per- Relationships of trust can take many forms. For exam- ceived trustworthiness is based on judgements of an ple, there are direct relationships between a FinTech entity’s: Ability, Benevolence and Integrity (ABI). 
firm and its customers (or other stakeholders), but This ABI framework has since been widely used and there are also networks of indirect relationships further developed to examine relationships of trust in through which assessments of trustworthiness are organisational settings. made. Andras et al. (2018) note that trust can be devel- Ability refers to the extent to which the entity is oped through awareness of others’ experiences or inter- perceived to have the skills and competencies to carry actions with the trustee, if someone we know (and out the particular tasks relevant to the situation in trust) uses a particular service we assume that it is trust- which they would be trusted. Benevolence is described worthy ‘the main idea, however, is that we did not gen- as ‘the extent to which a trustee is believed to want to erate trust in the [service] per se but trust a person that do good to the trustor’ (Mayer et al., 1995: 718). trusts the [service]’ (Andras et al., 2018: 6). Moreover, Integrity requires confidence that the entity will act in trust behaviours can reflect trust in a third party whose accordance with a set of principles and that those prin- association with the trustee gives the trustor confidence ciples align with the values of the trustor (Mayer et al., to take such behaviours – this, Andras et al. (2018) 1995). If an entity is perceived to possess each of these describe as Second Order Trust. For example, when attributes they are likely to be trusted, whereas if ‘any making purchases online, a shopper trusts the online of these attributes [are] called seriously into question, review system (and the anonymous reviewers), which this makes us wary’ (Dietz and Gillespie, 2012: 6). Each enables them to have confidence in the product or ser- of the attributes are related and may reinforce one vice they are buying and gives them Second Order another; however, it is also possible for someone to Trust in the seller. 
Similarly, Second Order Trust may trust an entity when one or more of the attributes is be established based on trust in financial regulators considered to be lacking as ‘each of the three factors whose approval may be perceived to give FinTechs can vary along a continuum’ (Mayer et al., 1995: 721). “legitimacy”. More recently authors have expanded on The concept of Second Order Trust is particularly Mayer et al.’s framework suggesting that including salient around new technologies and services, where Predictability or Reliability is important. For example, early adopters are likely to have either high technical Dietz and Den Hartog (2006) suggested that the four knowledge or propensity to risk-taking. Wider adop- key characteristics on which judgements of trustworthi- tion of the technology or service will depend on trust ness are based are: Ability; Benevolence; Integrity; and building up through social networks emanating from Predictability (referred to as the ABIþ model). these early adopters. Therefore, while early adopters’ Predictability will reinforce perceptions of the Ability; experiences may depend more on confidence in techni- Benevolence; and Integrity of the trustee. cal competencies and first-order trust in new services, Considering the role of Predictability or Reliability subsequent adopters’ relationships with those services draws attention to the importance of trust being are likely to be founded on Second Order Trust. sustained overtime through ongoing relationships. Second Order Trust draws attention to the impor- Importantly these relationships should be able to tance of multiple relationships and the ways in which adapt to changing contexts and social dynamics individuals assess the trustworthiness of an entity – (Moffat et al., 2016). Studies which have examined such as a FinTech firm. These relationships are both the conditions needed to establish and maintain a SL direct and indirect and never static, rather ‘the level of Aitken et al. 
trust will evolve as the parties interact' (Mayer et al., 1995: 727). This highlights the relevance of perceived Predictability as a factor influencing assessments of trustworthiness (Dietz and Den Hartog, 2006). Dietz and Den Hartog (2006) note that regularity of behaviour over time will strengthen trust, whereas unpredictability or unreliability will weaken it. Furthermore, trust can be either strengthened or weakened through interactions between trustors and trustees, as well as through indirect relationships in social networks. As these relationships evolve, assessments of an entity's ABI will also adapt.

Trustworthy technology

Given the importance of innovation in data practices for FinTech, a SL for FinTech operations will depend on the perceived trustworthiness not only of FinTech firms but also of the technologies underpinning new financial services. As new market entrants, the issue of trustworthiness presents a pivotal challenge for FinTechs, most of which are yet to establish strong brand reputations. Moreover, both the financial sector and data-dependent technologies (such as AI) have been the subject of public controversies in recent years. The public image of the financial industry is still recovering from the effects of the financial crash and mortgage crisis of 2008 (Dietz and Gillespie, 2012), and recent years have brought considerable press coverage of scandals relating to the mishandling, misuse or abuse of data. Since perceptions of an organisation's trustworthiness are shaped by context and awareness of related events, these factors will be significant in influencing public perceptions of FinTech. Given increasing attention directed at the social and ethical dimensions of new data-dependent technologies, FinTechs – whose services rely on these technologies – will need to anticipate and address the challenges this presents.

In developing and implementing new financial services underpinned by data-dependent technologies, a variety of practical and ethical challenges are encountered. Practical considerations include developing mechanisms to ensure the security of logins when using banking applications on mobile phones, and minimising risks of privacy breaches. Ethical considerations include ensuring fairness in algorithmic decision-making; avoiding unjust outcomes; ensuring equal access across society to the benefits of technology; considering the potential impacts of automation on perceived responsibility for outcomes (on the part of both professionals and customers); and ensuring that automated processes do not reduce customer autonomy (Scott, 2017). Simultaneously, there are wider ethical issues around data and AI that are of relevance to FinTech; these include concerns regarding surveillance, bias in data, and risks of unemployment through increased automation (e.g. O'Neil, 2016; Stahl and Wright, 2018).

Technical approaches

While developing trustworthy data practices remains an emerging field of interest, technical approaches are increasingly being developed with this aim (Toreini et al., 2019). For example, IBM (n.d.) has set out approaches towards 'building and enabling AI solutions people can trust' through four key features of "Trustworthy AI": Robustness, Fairness, Explainability and Lineage. The ways in which these four key features might establish trustworthiness are summarised below.

Robustness

The European Commission (2019: 16) notes that technical robustness is a crucial component of trustworthy AI and states that 'technical robustness requires that AI systems be developed with a preventative approach to risks and in a manner such that they reliably behave as intended while minimising unintentional and unexpected harm, and preventing unacceptable harm.'

Across literatures there are varying definitions of robustness. Robustness refers to 'dependability of a system with respect to external faults, which characterizes a system reaction to a specific class of faults' (Avizienis et al., 2004: 23). Robustness, then, is a state in which an algorithm functions normally in the presence of accidental faults or a malicious intruder (attacker), while the attacker actively or passively manipulates the operation of the algorithm. In the literature around machine learning (Bhagoji et al., 2018; Rauber et al., 2017), robustness includes the security as well as the privacy issues of the algorithm, along with likely barriers to its performance (including errors caused by implementation faults or the algorithm's accuracy limitations). We use the term robustness in this general sense.

The bottom line for all defensive approaches is the need for realistic analysis of a potential attacker's goals, knowledge, capabilities and strategy. Security and privacy in a machine learning based system have two aspects: safe data and a safe model (Liu et al., 2018). The first focuses on the security and privacy of the data, which is vulnerable to different attacks – most importantly the injection of invalid or malicious input by adversaries, or the leakage of sensitive information. The second, on the other hand, addresses the security and privacy concerns of the model in terms of reliable functioning and trustworthy performance. Mathematical modelling of the behaviour of the system is undertaken to identify and mitigate the unpredictable causes of faults in machine learning performance (whether due to security issues or to implementation and accuracy errors). In these approaches (Hein and Andriushchenko, 2017; Raghunathan et al., 2018), the system is modelled mathematically and its behaviour is analysed in different situations. Such approaches aim to guarantee the predictability of the machine learning system and its resilience to different categories of faults.

Technical approaches aimed at ensuring robustness provide a diverse range of methods to avoid disclosing users' private information, maintain the functioning integrity of the AI and remain resistant to attack. As such, these approaches aim to demonstrate Ability through their technical competence to safeguard data, while also demonstrating the organisation's Benevolence and Integrity in taking measures to protect individuals' privacy. We posit that demonstrating such characteristics consistently over time will also enhance perceived Predictability or Reliability.

Fairness

Avoiding unfair bias in algorithmic decision-making is a crucial element of trustworthiness. This is particularly relevant for FinTechs that rely on AI to improve efficiency and accuracy in decision-making processes. The European Commission (2019) notes that unfair bias can arise through the inclusion of inadvertent historic bias, incomplete data or a lack of good governance models. If such bias persists in the algorithm it can 'lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation' (European Commission, 2019: 18).

The baseline assumption in fairness-based approaches is that data is biased and should be moderated. Fairness can be addressed at one of the following stages in a model's operation cycle: pre-processing, algorithm modification and post-processing (d'Alessandro et al., 2017; Friedler et al., 2019). Pre-processing fairness resolutions focus on mitigating bias in the data itself, and tend to be independent of the AI model. They either re-label the data samples to make the results fair (Jiang and Nachum, 2019) or assign a weight to each one, whereby samples that are more likely to be discriminated against receive more attention (Calmon et al., 2017; Kamiran and Calders, 2012). Algorithm modification methods aim to propose AI models that are inherently fair. Such fairness is fulfilled either in a model that is designed to be statistically fair (Kamishima et al., 2012), or through the deployment of an auditor that enforces fairness on the model when it is processing the results (Agarwal et al., 2018; Zhang et al., 2018). Post-processing solutions detect discrimination in the outcome of the algorithm.

While there are numerous definitions of fairness in the literature, these approaches rely strongly on a mathematical definition of fairness: they measure the fairness of an algorithm by assessing the disparity between privileged and unprivileged groups in the algorithm's results. Kusner et al. (2017) categorised post-processing solutions into four groups. In each, the data contains one or more protected features that identify the privileged and unprivileged groups (e.g. gender or ethnicity).

1. Fairness through Unawareness discards protected features in the decision-making process. However, this is not a robust solution because it does not consider the correlation between protected features and other features of the data (Chen et al., 2019).
2. Individual Fairness considers an algorithm fair if it gives similar predictions for similar individuals.
3. Demographic Parity is satisfied if the prediction results for a group would be the same with or without considering protected features.
4. Equality of Opportunity requires the accuracy of an algorithm to be equal between privileged and unprivileged samples.

There is no comprehensive solution to eliminate discrimination. Therefore, the technical approaches addressing fairness aim to detect or prevent bias in the output of an AI model. As such, they aim to demonstrate Ability through technical competence, Benevolence through avoiding harm to minority or vulnerable groups, and Integrity through taking approaches that reflect the values of society (which is pivotal to establishing a SL).

Explainability

AI algorithms are often considered "black box" models (Michie et al., 1994); thus the processes through which outputs are derived lack transparency. The European Commission (2019: 18) states that 'whenever an AI system has a significant impact on people's lives, it should be possible to demand a suitable explanation of the AI system's decision-making process.' Moreover, the right to an explanation is a key feature of GDPR (Kaminski, 2019). As such, explainability plays an important role in building relationships of trust to underpin a SL. Ensuring that the ways in which AI is used, and the decisions that are made based on data, are understood is crucial to facilitate the good communication and dialogue needed to establish a SL (Moffat et al., 2016).

Technical approaches to ensure explainability aim to demonstrate Ability through technical competence, while also enabling assessments of an organisation's Benevolence and Integrity. Indeed, transparency is crucial to enable insights into an organisation's motivations or values. Therefore, explainability may not directly demonstrate Benevolence or Integrity, but might constitute an important feature enabling assessments of these characteristics. Moreover, explainability may be vital to facilitate the public/stakeholder engagement and dialogue essential for establishing a SL.

Lineage

As AI models evolve and adapt, transparency can be problematic and the "black box" nature of AI is amplified. Approaches focusing on lineage aim to make the inner components and the history of the AI algorithm traceable by logging the necessary details and keeping track of the interactions occurring between components. The European Commission (2019) advocates traceability of AI algorithms to include documenting all the data sets and processes involved in the data gathering and data labelling phases. Traceability is regarded as essential to enable 'identification of the reasons why an AI-decision was erroneous which, in turn, could help prevent future mistakes [while also] facilitat[ing] auditability as well as explainability' (European Commission, 2019: 18).

Technical approaches enabling traceability of the lineage of AI models provide insights into how these processes have developed, placing an emphasis on transparency of development as compared with explainability of current processes or outcomes. However, traceability also enhances explainability (European Commission, 2019) and perceived reliability/predictability. As with explainability, these approaches aim to demonstrate Ability through technical competence, while also enabling assessments of an organisation's Benevolence and Integrity through deeper forms of transparency.

Trade-offs

These four broad classifications illustrate the range of different technical approaches being developed and used to address ethical challenges relating to data practices and AI. While each of these may be important for developing trustworthy practices to underpin a SL, it may not always be possible to achieve all four aims simultaneously. Indeed, even within each of the approaches trade-offs may be necessary: for example, fairness can have different meanings and be assessed via different measurements, so ensuring fairness may require prioritising certain methods which in turn prioritise different dimensions of fairness (Chouldechova, 2017). Additionally, the approaches outlined above are not always complementary to one another. There is significant interest in Explainability given the requirements brought in by GDPR; however, this poses challenges in relation to many AI applications (Goebel et al., 2018). Where a technology can be developed to be robust and fair but is not fully explainable, trade-offs may be necessary. Such trade-offs might have important implications for trustworthiness and for establishing or maintaining a SL. Therefore, understanding stakeholders' interests and values in relation to the way their data is used, or the ways in which technologies are deployed, will be valuable to guide decision-making in these instances. Moreover, transparency around these trade-offs, and around the ways in which particular features of technologies have been prioritised, may be important to maintain relationships of trust with stakeholders.

Social approaches

Developing technical approaches to address ethical challenges will be an important component in underpinning a SL for the current and future practices of FinTech organisations; however, technical solutions alone are insufficient to achieve this outcome (European Commission, 2019). As outlined above, a proliferation of frameworks and discourse surrounds the technical approaches through which to pursue trustworthy data practices; conversely, there is considerably less discussion of the social approaches needed to complement these – or of how social approaches might be undertaken in data-intensive industries. Yet social approaches are central in establishing a SL. The following section considers the implications of this for FinTech.

Public engagement

A SL is established and maintained through ongoing relationships between a community of stakeholders and an industry/organisation. This entails ongoing engagement and dialogue to identify and respond to stakeholders' values, interests and concerns (Moffat et al., 2016).

Current levels of interest in public engagement with data practices are high (particularly relating to AI), reflected in the growing number of bodies working in this area (including Google DeepMind Ethics and Society, the UK Government's Centre for Data Ethics and Innovation, the New Zealand Government's Data Futures Partnership and the Royal Society). It is now widely recognised that Big Data analytics and AI bring significant social and economic impacts and necessitate both regulatory supervision and ethical and social assessment (Stahl and Wright, 2018).

Consideration of the social and ethical dimensions of data practices reflects a longer history of public engagement with science and technology. In the past, public engagement has been promoted as a means to address contentious areas of innovation and to build or restore public trust and mitigate controversy (Aitken et al., 2016a). This is deemed important as 'science and technology demand assenting publics to maintain their hold on the collective imagination, not to mention purse-strings' (Jasanoff, 2011: 248). However, public engagement goes beyond communicating the value of science and technology, and instead requires engaging in dialogue with the public to understand and reflect public values in innovation, governance and policy.

Previous studies in public engagement with science and technology have demonstrated the limitations of approaches aimed at gaining public trust through improving public understanding. Such approaches treat members of the public as 'passive recipients of scientific knowledge' (Cunningham-Burley, 2006: 206), overlooking how members of the public critically assess, deconstruct and evaluate claims to scientific knowledge in line with their own ideologies, experiences and the contexts in which the information is received (Hagendijk and Irwin, 2006). Thus, demonstrating technical competence or communicating the robustness of technical responses to ethical challenges will not automatically lead to public trust and support. Rather, technical approaches need to be combined with social responses that build relationships of trust, through which claims to technical competence, and demonstrations of ABI, will be evaluated. As such, aligned with the approaches taken to establish a SL, rather than aiming to manufacture public trust in science and technology, the focus of public engagement is to ensure that the trustworthiness of science and technology evolves through efforts to address and reflect public values (Aitken et al., 2016a; Wynne, 2006).

To date, deliberative public engagement relating to data has typically been undertaken by research organisations or public sector bodies (e.g. Data Futures Partnership, 2017; RSA, 2018). Important questions arise regarding whether private sector (non-research) organisations, such as FinTechs, can, or should, facilitate these processes. Community engagement is a key component of establishing and maintaining a SL in extractive industries (e.g. mining and forestry); however, this has been undertaken with varying degrees of commitment and quality (Moffat et al., 2016). In some cases, community engagement has been largely cosmetic due to companies retaining control over the process, restricting the range of possible outcomes and setting the terms for community participation: 'even when all key stakeholders are explicitly invited into a conversation [...] asymmetric power relations between parties, and differences in value sets, worldviews and perspectives are still likely to create opportunity for mistrust and conflict' (Moffat et al., 2016: 483). These remain persistent challenges in public participation across a variety of domains, and ones which are fundamental to address in order for public engagement surrounding AI, and data-intensive industries such as FinTech, to be meaningful and impactful.

The potential motivations for FinTechs to undertake public engagement will shape the aims of engagement, the approaches taken and the range of potential outcomes. As noted above, there are a variety of reasons for FinTech firms to pursue a SL: these may range from purely instrumental perspectives, which regard the SL and related public engagement as a mechanism through which to attract and retain customers and increase profits, through to substantive perspectives, which focus on bringing wider benefits for society and meaningfully involving members of the public to address ethical considerations. Clearly such rationales lead to different approaches being taken and different ideas of what it would mean for public engagement to be successful (Aitken et al., 2016b). Experience in other industries suggests that approaches informed by instrumental rationales may have the most appeal to private sector organisations, yet those informed by substantive rationales are more likely to be effective (Aitken et al., 2016b). For example, a review of community engagement practices by wind farm developers found that, while most developers took an instrumental approach to community engagement (using methods which restricted the ways people could participate or the range of potential outcomes), those that followed more substantive approaches (opening up engagement processes and devolving some control over the process and outcomes to public participants) were most successful in generating public support – which was, ironically, the primary objective of companies following instrumental approaches (Aitken et al., 2016b). Therefore, while companies may be reluctant to share power in decision-making or planning processes, evidence suggests that doing so leads to positive outcomes in terms of generating wider public support and establishing a SL.

As noted above, a SL is granted through engagement with 'local community stakeholders who are affected by [a project or development] and those stakeholders who can affect its profitability' (Moffat et al., 2016: 480). In extractive industries, identifying 'local community stakeholders' may be relatively straightforward given the physical location of projects. For FinTechs, a 'local community' defined by geographic proximity is in most cases irrelevant to their operations. Instead, while in extractive industries local communities have been identified based on physical proximity to the locations from which resources are extracted, in data-intensive industries affected communities might be conceptualised as those from whom data is derived. This creates a much wider set of relevant stakeholders. Moreover, taking a broader approach to stakeholders as people who are "affected by a project" necessitates consideration of the impacts of data practices on society. Such impacts might include potentially transformative effects on financial systems which affect people's access to finance (either positively or negatively), or contributory effects on the increasing role of data in society and the reduction in opportunities to participate fully in society without allowing one's data to be collected or used. Such a broad conceptualisation suggests that stakeholders might include the whole of society.

Considering the second group of stakeholders, those 'who can affect profitability', may also invite either narrow or broad definitions. Narrow definitions might focus on potential and actual customers as those who can affect profitability. Broad definitions would consider the role of the wider public as potentially affecting profitability through their support for, or opposition to, data practices more broadly, as well as those used specifically in FinTech. Indeed, as previous scandals have demonstrated, public controversies around data use and misuse have the potential to significantly affect data-dependent industries (as was evidenced in the case of Care.Data (Carter et al., 2015)). Therefore, establishing and maintaining a SL entails going beyond a narrow focus on stakeholders as a company's customer base, towards a more inclusive conceptualisation of the wider public as stakeholders. Yet the extent to which a private sector organisation – such as a FinTech – will be willing, or adequately resourced, to engage with such broad stakeholders is questionable. Instrumental approaches to engagement are likely to lead to a narrow focus on stakeholders as existing or potential customers (those considered to have the most immediate impact on profitability); however, overlooking wider stakeholders risks practices leading to unanticipated negative impacts, or to opposition to approaches which are not aligned with public values. Therefore, a FinTech may not be granted a SL for its operations if it overlooks the interests of broader stakeholders. This highlights that while a FinTech may define its stakeholders in particular ways, others (including stakeholders themselves) might define them differently, and it is the stakeholders, rather than the FinTech, who have the authority to grant, refuse or withdraw a SL for its operations. Thus, taking a narrow approach to defining stakeholders may be a short-sighted and risky strategy.

Nevertheless, taking a broad approach presents further challenges. While organisations such as the New Zealand Government, the Royal Society or DeepMind Ethics and Society have substantial budgets and resources which they can use to fund large-scale public engagement projects that reach out to diverse groups across society, FinTech companies are unlikely to have significant resources (or expertise) for these activities. Furthermore, given that the SL for FinTech is interdependent with a SL for broader data-intensive industries and innovation, questions arise as to who is responsible for facilitating engagement activities. Individual FinTech firms have an incentive to develop a SL for their own operations; yet it may be that wider industry-level engagement is needed to establish a broader SL for the FinTech sector.

Indeed, public engagement can occur at a range of levels, reflecting different aims and objectives and requiring different approaches. For example, public engagement relating to data-intensive health research takes place at many different scales, including: 'wide-scale public conversations about uses or potential uses of data in health research; [public engagement] to inform or co-design the development of policies or governance practices relating to uses of data in health research; engagement or involvement of members of the public in governance decisions about data access and use; engagement or involvement of members of the public at different phases in particular research projects; analysing and disseminating the results of research using data in ways which will support improvements in healthcare and systems' (Aitken et al., 2019: 2). Moving this approach into the FinTech context suggests public engagement might valuably serve a similar range of purposes: at times being undertaken at policy or industry level to inform the development of policies, governance mechanisms and industry practices, and at other times being undertaken by individual FinTech firms to address ethical dimensions in developing, implementing and evaluating new products, services or areas of innovation.

Conclusions

Despite the substantial and growing rhetoric around ethical and trustworthy data practices (in all sectors), there is limited evidence of how this is being put into practice. As this article has discussed, this is important for FinTechs, who increasingly employ these technologies to underpin new financial services. While there is industrial advocacy surrounding the potential benefits of data science and AI in banking, it is not yet clear whether there is a SL for these practices.

This has wider implications for developing ethical data practices. The proliferation of ethical codes of practice and guidance well illustrates that ethical practice requires more than just regulation. However, it is debatable whether the growing number of codes of practice is in reality leading to meaningful change. In particular, given that such guidance goes beyond regulation, it is enacted voluntarily with little or no enforcement. This means such codes depend on organisational culture change to realise their value. Such culture change in turn requires meaningful commitments to ethical practice from senior levels of management. There may be a range of motivations for organisations to adopt ethical codes of practice; however, we argue that framing this in terms of pursuing a SL for the operations of data-intensive industries provides a clear rationale and set of approaches to underpin emerging ethical best practices.

A SL is distinct from approaches such as CDR (Cooper et al., 2019; Lobschat et al., 2020) in that it places public – or stakeholder – engagement at its heart. Since a SL is granted or refused by external stakeholders (rather than secured internally), it focuses attention on the importance of aligning with public values through public engagement. Enforcement does not come through formal sanctions or penalties but rather through the loss of public trust, legitimacy or credibility, which can have substantial and far-reaching implications for an organisation and industry.

Establishing a SL underpinned by relationships of trust requires FinTechs to combine a range of technical and social approaches, and to reflect continually on ethical dilemmas as well as on the extent to which practices align with social values. In this regard, the growing body of guidance and best practice regarding responsible or trustworthy AI, ethical data practices and CDR represents a valuable set of resources to draw upon; yet it is important that this goes beyond rhetorical commitments and leads to practical and meaningful action. In particular, establishing trustworthiness requires not just demonstrating technical competence (or Ability) but also Benevolence and Integrity in the ways that data is used and technologies are deployed. Moreover, in order to align with public values, it is vital that ethical approaches are informed by the views and interests of broad stakeholders. In the case of FinTech, establishing a SL for these technologies and subsequent services may prove vital to the ongoing success and sustainability of this sector.

FinTech firms face a number of challenges in establishing relationships of trust: first, the damaged reputation of the financial sector as a whole (Dietz and Gillespie, 2012); second, the unfamiliarity of the technologies driving FinTech products and services; third, increasing public awareness of controversies around data misuse; and fourth, that as new entrants to the financial marketplace, FinTechs have yet to establish widely recognised brand reputations. On the one hand, this 'newness' may offer FinTechs a competitive advantage as an alternative to traditional banking incumbents (King, 2018). On the other hand, it means that there may be substantial work required to establish relationships of trust with the wider public. Yet there is also an opportunity to develop new approaches which might further enhance competitive advantage. As has been noted by Brusoni and Vaccaro (2017: 223), 'the ethical standing of an organization—that is represented by its internal practices, products and services—clearly provides a unique way to differentiate from competitors'.

This article has not aimed to identify public interests or concerns relating to data practices in FinTech, or to set out what is required for FinTech to align with public values. Since there is a paucity of public engagement or deliberation examining public values around FinTech practices, further research (including through public engagement methods) is needed to examine what this means in practice. Therefore, this article focuses on setting out the approaches needed to achieve this. We posit that such approaches are needed across all areas and industries whose operations depend on data, to complement regulation and inform the development of ethical codes of practice. This is important to underpin culture change and to move beyond rhetorical commitments, developing best practice that puts ethics at the heart of innovation.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the EPSRC, grant reference: EP/R033595/1.

ORCID iDs
Mhairi Aitken https://orcid.org/0000-0002-4654-9803
Karen Elliott https://orcid.org/0000-0002-2455-0475

References
Agarwal A, Beygelzimer A, Dudík M, et al. (2018) A reductions approach to fair classification. arXiv preprint arXiv:1803.02453.
Aitken M, Cunningham-Burley S and Pagliari C (2016a) Moving from trust to trustworthiness: Experiences of public engagement in the Scottish Health Informatics Programme. Science & Public Policy 43(5): 713–723.
Aitken M, Haggett C and Rudolph D (2016b) Practices and rationales of community engagement with wind farms: Awareness raising, consultation, empowerment. Planning Theory & Practice 17(4): 557–576.
Aitken M, Tully MP, Porteous C, et al. (2019) Consensus statement on public involvement and engagement with data-intensive health research. International Journal of Population Data Science 4(1): 1–6.
Allen J, Adams C and Flack F (2019) The role of data custodians in establishing and maintaining social licence for health research. Bioethics 33(4): 502–510.
Andras P, Esterle L, Guckert M, et al. (2018) Trusting intelligent machines: Deepening trust within socio-technical systems. IEEE Technology and Society Magazine 37(4): 76–83.
Cooper T, Siu J and Wei K (2019) Corporate digital responsibility: Doing well by doing good. Available at: https://www.accenture.com/au-en/insight-outlook-doing-well-doing-good (accessed 22 January 2020).
CTO (2019) The Big Data Goldmine. Available at: https://ctoboost.com/the-big-data-goldmine/ (accessed 9 May 2019).
Cunningham-Burley S (2006) Public knowledge and public trust. Public Health Genomics 9(3): 204–210.
d'Alessandro B, O'Neil C and LaGatta T (2017) Conscientious classification: A data scientist's guide to discrimination-aware classification. Big Data 5(2): 120–134.
Avizienis A, Laprie JC, Randell B, et al.
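The attacker-aware view of robustness discussed here can be made concrete with a toy worst-case perturbation test. The linear "credit-scoring" model, the epsilon budget and the FGSM-style attack below are illustrative assumptions of ours, not anything specified in the article and not a production defence:

```python
import numpy as np

# Hypothetical linear scoring model standing in for a FinTech credit classifier.
def predict(w, X):
    """Return class labels (1 = approve) for a linear model with weights w."""
    return (X @ w > 0).astype(int)

def fgsm_perturb(w, X, y, eps):
    """Worst-case L-infinity perturbation of size eps against a linear score.

    For a linear score s = x.w, the gradient of s w.r.t. x is w, so moving
    each input by eps * sign(w) against its true label shifts the score by
    the maximum possible amount for a bounded attacker.
    """
    direction = np.sign(w)[None, :] * np.where(y[:, None] == 1, -1.0, 1.0)
    return X + eps * direction

def empirical_robustness(w, X, y, eps):
    """Fraction of points still classified correctly after the attack."""
    return float(np.mean(predict(w, fgsm_perturb(w, X, y, eps)) == y))
```

For this linear model the decision flips exactly when the attack budget exceeds the point's score margin divided by the L1 norm of the weights, so robustness can be read as a margin check; real ML robustness evaluations (e.g. against the attacks surveyed by Bhagoji et al. or implemented in adversarial-attack toolkits) follow the same accuracy-under-attack pattern.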
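Two of the group-fairness notions in this categorisation – Demographic Parity and Equality of Opportunity – reduce to simple rate comparisons and can be audited post hoc from predictions alone. This is a minimal sketch; the function names and the synthetic example are ours, not the article's:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    def tpr(g):
        # Among truly positive members of group g, how many were predicted positive?
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return float(abs(tpr(0) - tpr(1)))
```

An audit might require both gaps to sit under some threshold; note that, as the trade-offs discussion below indicates, driving one gap to zero can widen another, since the different fairness criteria are generally not satisfiable at once.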
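The pre-processing route of assigning weights to samples can be illustrated with the reweighing scheme of Kamiran and Calders (2012); the implementation details here are our own sketch, and it assumes every group/label combination occurs in the data:

```python
import numpy as np

def reweighing_weights(y, group):
    """Instance weights that make label and protected group independent.

    Each (group g, label c) cell receives weight
    w(g, c) = P(group=g) * P(label=c) / P(group=g, label=c),
    so that in the weighted data the protected feature carries no
    information about the outcome (after Kamiran and Calders, 2012).
    """
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            p_joint = mask.sum() / n  # assumed non-zero for every cell
            w[mask] = ((group == g).mean() * (y == c).mean()) / p_joint
    return w
```

Because the weights only rescale the training data, this approach stays independent of the downstream model, which is exactly the appeal of pre-processing resolutions noted above.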
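One simple, model-agnostic way to probe a "black box" of the kind discussed here is permutation importance: shuffle one input at a time and see how far predictive accuracy falls. This is an illustrative sketch of ours, not a full explainability toolkit (the explainable-AI literature cited above covers far richer methods):

```python
import numpy as np

def permutation_importance(predict_fn, X, y, seed=0):
    """Accuracy drop when each feature column is shuffled in turn.

    A large drop means the model leans heavily on that feature; a zero
    drop means the feature is ignored. This gives a post-hoc, model-agnostic
    explanation that works even when the model itself is opaque.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict_fn(X) == y)
    drops = []
    for j in range(X.shape[1]):
        X_shuffled = X.copy()
        X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
        drops.append(baseline - np.mean(predict_fn(X_shuffled) == y))
    return np.array(drops)
```

Reporting such attributions alongside a decision is one concrete way an organisation could support the "suitable explanation" that the European Commission guidance calls for, though whether it satisfies a legal right to explanation is a separate question.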
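The logging-and-traceability idea behind lineage can be supported with something as small as an append-only, hash-chained record of data and model events. `LineageLog` below is a hypothetical sketch of ours (real deployments would use a proper provenance or model-registry system), but it shows how tampering with a recorded history becomes detectable:

```python
import hashlib
import json

class LineageLog:
    """Append-only record of the datasets, code and decisions behind a model.

    Each entry's digest covers the previous entry's digest, so altering
    any historical entry breaks the chain and is caught by verify().
    """
    def __init__(self):
        self.entries = []

    def record(self, event, payload):
        """Append an event (e.g. dataset ingested, model trained) to the chain."""
        prev = self.entries[-1]["digest"] if self.entries else ""
        body = json.dumps({"event": event, "payload": payload, "prev": prev},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"event": event, "payload": payload,
                             "prev": prev, "digest": digest})
        return digest

    def verify(self):
        """Recompute the whole chain; True only if no entry has been altered."""
        prev = ""
        for e in self.entries:
            body = json.dumps({"event": e["event"], "payload": e["payload"],
                               "prev": prev}, sort_keys=True)
            if e["prev"] != prev or \
               e["digest"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["digest"]
        return True
```

A log like this, recording the data sets, labelling steps and training runs behind each deployed model, is one lightweight way to support the auditability that the European Commission's traceability requirement envisages.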
(2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing 1(1): 11–33.
Bank of England (2019) Quarterly Bulletin: Topical article, Embracing the promise of FinTech. Available at: https://www.bankofengland.co.uk/-/media/boe/files/quarterly-bulletin/2019/embracing-the-promise-of-fintech (accessed 20 February 2020).
Bhagoji AN, Cullina D, Sitawarin C, et al. (2018) Enhancing robustness of machine learning systems via data transformations. In: 52nd annual conference on information sciences and systems (CISS), pp. 1–5. Piscataway, NJ: IEEE.
Brown J and Fraser M (2006) Approaches and perspectives in social and environmental accounting: An overview of the conceptual landscape. Business Strategy and the Environment 15: 103–117.
Brusoni S and Vaccaro A (2017) Ethics, technology and organizational innovation. Journal of Business Ethics 143(2): 223–226.
Butler JK Jr. and Cantrell RS (1984) A behavioral decision theory approach to modeling dyadic trust in superiors and subordinates. Psychological Reports 55(1): 19–28.
Butler JK Jr. (1991) Toward understanding and measuring conditions of trust: Evolution of a conditions of trust inventory. Journal of Management 17(3): 643–663.
Calmon F, Wei D, Vinzamuri B, et al. (2017) Optimized pre-processing for discrimination prevention. In: Advances in Neural Information Processing Systems, pp. 3992–4001. Cambridge, MA: MIT Press.
Carter P, Laurie GT and Dixon-Woods M (2015) The social licence for research: Why care.data ran into trouble. Journal of Medical Ethics 41(5): 404–409.
Chen J, Kallus N, Mao X, et al. (2019) Fairness under unawareness: Assessing disparity when protected class is unobserved. In: Proceedings of the conference on fairness, accountability, and transparency, pp. 339–348. New York, NY: ACM.
Chouldechova A (2017) Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5(2): 153–163.
Chuang L, Liu C and Kao H (2016) The adoption of fintech service: TAM perspective. International Journal of Management and Administrative Sciences 3(7): 1–15.
Conrad J (2018) The social licence to operate and social contract theory: Themes and relations of two concepts – A literature analysis. University of Iceland, Iceland.
Data Futures Partnership (2017) A Path to Social Licence: Guidelines for Trusted Data Use. Available at: https://trusteddata.co.nz/wp-content/uploads/2017/08/Background-Trusted-Data.pdf (accessed 3 May 2019).
Dietz G and Den Hartog DN (2006) Measuring trust inside organisations. Personnel Review 35(5): 557–588.
Dietz G and Gillespie N (2012) Recovery of Trust: Case Studies of Organisational Failures and Trust Repair. Vol. 5. London: Institute of Business Ethics.
European Commission (2019) Ethics Guidelines for Trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 20 February 2020).
Fiorino DJ (1990) Citizen participation and environmental risk: A survey of institutional mechanisms. Science, Technology, & Human Values 15(2): 226–243.
Fjeld J, Hilligoss H, Achten N, et al. (2019) Principled Artificial Intelligence: A map of ethical and rights-based approaches. Available at: https://ai-hr.cyber.harvard.edu/primp-viz.html (accessed 4 July 2019).
Friedler SA, Scheidegger C, Venkatasubramanian S, et al. (2019) A comparative study of fairness-enhancing interventions in machine learning. In: Proceedings of the conference on fairness, accountability, and transparency, pp. 329–338. New York, NY: ACM.
Goebel R, Chander A, Holzinger K, et al. (2018) Explainable AI: The new 42? In: International cross-domain conference for machine learning and knowledge extraction. Cham: Springer, pp. 295–303.
Gulamhuseinwala I, Bull T and Lewis S (2015) FinTech is gaining traction and young, high-income users are the early adopters. Journal of Financial Perspectives 3(3): 1–20.
Gulamhuseinwala I, Hatch M and Lloyd J (2017) EY FinTech Adoption Index 2017: The rapid emergence of FinTech. Available at: https://www.ey.com/Publication/vwLUAssets/ey-fintech-adoption-index-2017/$FILE/ey-fintech-adoption-index-2017.pdf (accessed 3 May 2019).
Gunningham N, Kagan RA and Thornton D (2004) Social licence and environmental protection: Why businesses go beyond compliance. Law & Social Inquiry 29(2): 307–341.
Hagendijk R and Irwin A (2006) Public deliberation and governance: Engaging with science and technology in contemporary Europe. Minerva 44(2): 167–184.
Harvey B (2011) SIA from a developers perspective: Foreword. In: Vanclay F and Esteves AM (eds) New Directions in Social Impact Assessment. Cheltenham: Edward Elgar Publishing, pp. xxvii–xxxiii.
Hasselbalch G (2019) Making sense of data ethics. The powers behind the data ethics debate in European policymaking. Internet Policy Review 8(2). Available at: https://policyreview.info/articles/analysis/making-sense-data-ethics-powers-behind-data-ethics-debate-european-policymaking (accessed 1 October 2019).
Mayer RC, Davis JH and Schoorman FD (1995) An integrative model of organizational trust. The Academy of Management Review 20(3): 709–734.
Michie D, Spiegelhalter DJ, Taylor CC, et al. (eds) (1994) Machine Learning, Neural and Statistical Classification. Upper Saddle River, NJ: Prentice Hall. ISBN: 0-13-106360-X.
Moffat K, Lacey J, Zhang A, et al. (2016) The social licence to operate: A critical review. Forestry: An International Journal of Forest Research 89(5): 477–488.
Hein M and Andriushchenko M (2017) Formal guarantees on the robustness of a classifier against adversarial manipulation.
In: Advances in Neural Information Moffat K and Zhang A (2014) The paths to social licence to Processing Systems, pp. 2266–2276. Cambridge, MA: operate: An integrative model explaining community MIT Press. acceptance of mining. Resources Policy 39: 61–70. IBM (n.d.) Trusted AI. Available at: https://www.research. O’Neil C (2016) Weapons of math destruction: How big data ibm.com/artificial-intelligence/trusted-ai/ (accessed 3 increases inequality and threatens democracy. Broadway May 2019). Books. Jasanoff S (2011) Designs on Nature: Science and Democracy Owen J and Kemp D (2012) Social licence and mining: in Europe and the United States. Princeton, NJ: Princeton A critical perspective. Resources Policy 38: 29–35. University Press. Paprica PA, de Melo MN and Schull MJ (2019) Social licence Jiang H and Nachum O (2019) Identifying and correcting and the general public’s attitudes toward research based label bias in machine learning. arXiv preprint on linked administrative health data: a qualitative study. arXiv:1901.04966. CMAJ open 7(1): E40. Kaminski ME (2019) The right to explanation, explained. Parsons R and Moffat K (2014) Constructing the meaning of Berkeley Tech LJ 34: 189. ‘social licence’. Social Epistemology 28: 340–363. Kamiran F and Calders T (2012) Data preprocessing techni- Politou E, Alepis E and Patsakis C (2018) Forgetting person- ques for classification without discrimination. Knowledge al data and revoking consent under the GDPR: and Information Systems 33(1): 1–33. Challenges and proposed solutions. Journal of Kamishima T, Akaho S, Asoh H, et al. (September 2012) Cybersecurity 4(1): p.tyy001. Fairness-aware classifier with prejudice remover regular- PWC (2018) 2018 AI predictions: 8 insights to shape business izer. In: Joint European conference on machine learning and strategy. Available at: https://www.pwc.es/es/home/assets/ knowledge discovery in databases. Berlin: Springer, pp. 35– ai-predictions-2018-report.pdf (accessed 3 May 2019). 50. 
Raghunathan A, Steinhardt J and Liang P (2018) Certified Kavuri AS and Milne A (2019) FinTech and the future of defenses against adversarial examples. arXiv preprint financial services: What are the research gaps? papers.ssrn. arXiv:1801.09344. com Rauber J, Brendel W and Bethge M (2017) Foolbox: King B (2018) Bank 4.0: Banking Everywhere, Never at a A python toolbox to benchmark the robustness of Bank. Marshall Cavendish International, Asia. machine learning models. arXiv preprint arXiv:1707.04131. Kusner MJ, Loftus J, Russell C, et al. (2017) Counterfactual RSA (2018) Artificial Intelligence, Real Public Engagement. fairness. In: Advances in Neural Information Processing Available at: https://www.thersa.org/discover/publica Systems, pp. 4066–4076. Cambridge, MA: MIT Press. tions-and-articles/reports/artificial-intelligence-real- Lawler M, Morris AD, Sullivan R, Birney E, Middleton A, public-engagement (accessed 3 May 2019). Makaroff L, Knoppers BM, Horgan D and Eggermont A Sadowski J (2019) When data is capital: Datafication, accu- (2018) A roadmap for restoring trust in Big Data. The mulation, and extraction. Big Data & Society 6(1). Lancet Oncology 19(8): 1014. Schueffel P (2016) Taming the beast: A scientific definition of Leonard PG (2018) Social licence and digital trust in data- Fintech. Available at SSRN 3097312. driven applications and AI: A problem statement and possi- Scott B (2017) Hardcoding Ethics into FinTech. Ethics & ble solutions. Available at SSRN 3261228. Trust in Finance Global edition 2016–2017 http://www.eth Liu Q, Li P, Zhao W, et al. (2018) A survey on security icsinfinance.org/wp-content/uploads/2018/01/Brett-Scott- threats and defensive techniques of machine learning: Hard-coding-ethics-into-fintech.pdf accessed on 20/02/20 A data driven view. IEEE Access 6: 12103–12117. Stahl BC and Wright D (2018) Ethics and privacy in AI and Lobschat L, Mueller B, Eggers F, et al. 
(2020) Corporate big data: Implementing responsible research and innova- digital responsibility. Journal of Business Research. Epub tion. IEEE Security & Privacy 16(3): 26–33. ahead of print 2020. DOI:10.1016/j.jbusres.2019.10.006 Taylor L (2017) What is data justice? The case for Maskey S (2018) How Artificial Intelligence is Helping connecting digital rights and freedoms globally. Big Data Financial Institutions. Available at: https://www.forbes. & Society 4(2). com/sites/forbestechcouncil/2018/12/05/how-artificial-in Teigland R, Siri S, Larsson A, et al. (eds) (2018) The Rise and telligence-is-helping-financial-institutions/#7cdd45b4460a Development of FinTech: Accounts of Disruption from (accessed 3 May 2019). Sweden and beyond. London: Routledge. Aitken et al. 15 The Economist (2017) The world’s most valuable resource is no World Bank (2017) The Global Findex Database 2017. longer oil, but data. Available at: https://www.economist. Available at: https://globalfindex.worldbank.org/ (accessed com/leaders/2017/05/06/the-worlds-most-valuable- 20 February 2020). resource-is-no-longer-oil-but-data (accessed 9 May 2019). Wynne B (2006) Public engagement as a means of restoring Toreini E, Aitken M, Coopamootoo K, et al. (2019) The public trust in science – Hitting the notes, but missing the relationship between trust in AI and trustworthy machine music?’ Community Genetics 9(3): 211–220. learning technologies. arXiv preprint arXiv:1912.00782. Zhang BH, Lemoine B and Mitchell M (2018) December. Warhurst A (2001) Corporate citizenship and corporate Mitigating unwanted biases with adversarial learning. In: social investment: Drivers of tri-sector partnerships. Proceedings of the 2018 AAAI/ACM conference on AI, Journal of Corporate Citizenship 1: 57–73. Ethics, and Society, pp. 335–340. New York, NY: ACM. Wilsdon J and Willis R (2004) See-through Science: Why Public Engagement needs to move Upstream. London: Demos. 
Big Data & Society (SAGE)

Establishing a social licence for Financial Technology: Reflections on the role of the private sector in pursuing ethical data practices

Publisher: SAGE
Copyright: © 2022 by SAGE Publications Ltd, unless otherwise noted. Manuscript content on this site is licensed under Creative Commons licences.
ISSN / eISSN: 2053-9517
DOI: 10.1177/2053951720908892

Abstract

Current attention directed at ethical dimensions of data and Artificial Intelligence has led to increasing recognition of the need to secure and maintain public support for uses (and reuses) of people’s data. This is essential to establish a “Social Licence” for current and future practices. The notion of a “Social Licence” recognises that there can be meaningful differences between what is legally permissible and what is socially acceptable. Establishing a Social Licence entails public engagement to build relationships of trust and ensure that practices align with public values. While the concept of the Social Licence is well-established in other sectors – notably in relation to extractive industries – it has only very recently begun to be discussed in relation to digital innovation and data-intensive industries. This article therefore draws on existing literature relating to the Social Licence in extractive industries to explore the potential approaches needed to establish a Social Licence for emerging data-intensive industries. Additionally, it draws on well-established literature relating to trust (from psychology and organisational science) to examine the relevance of trust, and trustworthiness, for emerging practices in data-intensive industries. In doing so the article considers the extent to which pursuing a Social Licence might complement regulation and inform codes of practice to place ethical and social considerations at the heart of industry practice. We focus on one key industry: Financial Technology. We demonstrate the importance of combining technical and social approaches to address ethical challenges in data-intensive innovation (particularly relating to Artificial Intelligence) and to establish relationships of trust to underpin a Social Licence for Financial Technology. Such approaches are needed across all areas and industries of data-intensive innovation to complement regulation and inform the development of ethical codes of practice.
This is important to underpin culture change and to move beyond rhetorical commitments to develop best practice putting ethics at the heart of innovation.

Keywords: Financial Technology, data, social licence, ethics, responsible artificial intelligence, trust

Newcastle University Business School, Newcastle upon Tyne, UK
School of Computing, Newcastle University, Newcastle upon Tyne, UK

Corresponding author: Mhairi Aitken, Newcastle University Business School, Urban Sciences Building, Newcastle University, 1 Science Square, Newcastle upon Tyne NE4 5TG, UK. Email: mhairi.aitken@newcastle.ac.uk

Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Introduction

Recent years have witnessed a dramatic increase in attention directed at ethical dimensions of data practices and Artificial Intelligence (AI). Increasingly momentum for innovation is being met with interest in related ethical considerations and a number of high profile institutes and bodies have been established to focus on this area. These include the European Commission’s High-Level Expert Group on Artificial Intelligence, whose mandate included drafting a set of AI Ethics Guidelines (European Commission, 2019); the UK Government’s Centre for Data Ethics and Innovation; a House of Lords Select Committee on Artificial Intelligence, which proposed an ethical framework for AI; the Ada Lovelace Institute; and DeepMind Ethics and Society. Internationally, the World Economic Forum has proposed nine ethical questions to ask of AI systems and the Fairness, Accountability, and Transparency in Machine Learning forum developed Principles for Accountable Algorithms and a Social Impact Statement for Algorithms (Stahl and Wright, 2018). Such bodies focus on developing Responsible AI (e.g. PWC, 2018) and trustworthy approaches to AI and digital innovation. While questions remain as to whether, in practice, this goes further than “ethics washing” (Hasselbalch, 2019), this has led to a proliferation of guidance and principles relating to ethical AI (Fjeld et al., 2019) and within the private sector there is emerging interest in the concept of Corporate Digital Responsibility (CDR; Lobschat et al., 2020).

This trend is in part a response to high profile public controversies around data misuse as well as the introduction of new regulation through the EU General Data Protection Regulation (GDPR), which gives individuals greater control over their own data (Politou et al., 2018). In the wake of such developments questions have arisen as to whether compliance with regulation is sufficient to ensure ethical data practices and to what extent ethical codes of practice are also needed (Hasselbalch, 2019). This has resulted in increasing recognition of the need to secure and maintain public support for uses (and reuses) of people’s data in order to establish a “Social Licence” (SL) for current and future practices.

The notion of a “Social Licence” recognises that there can be meaningful differences between what is legally permissible and what is socially acceptable (Carter et al., 2015). A SL is granted by a community of stakeholders and is intangible and unwritten but may be essential for the sustainability and legitimacy of particular practices or industries. Developing and maintaining a SL requires public engagement incorporating diverse perspectives and interests, beyond those of professional communities, to ensure that current and future practices are aligned with the values of society.

While the concept of the “Social Licence” is well-established in other sectors it has only very recently begun to be discussed in relation to digital innovation and data-intensive industries. This article therefore draws on existing literature relating to the SL in extractive industries to explore the potential approaches needed to establish a SL for emerging data-intensive industries. Additionally, it draws on well-established literature relating to trust (from psychology and organisational science) to examine the relevance of trust, and trustworthiness, for emerging data practices in the private sector. In doing so the article considers the extent to which pursuing a SL might complement regulation and inform codes of practice to place ethical and social considerations at the heart of industry practice.

In order to illustrate this, we focus on one key industry: Financial Technology (FinTech). We argue that as a data-intensive industry FinTech requires ethical data practices to be developed and demonstrated in order to establish and maintain a SL. While there is industrial advocacy surrounding the potential benefits of data science and AI in banking, it is not yet clear whether there is a SL for these practices. Therefore, FinTech provides a timely example through which to examine the opportunities and potential approaches to develop ethical data practices beyond compliance with regulation.

The article draws together the multi-disciplinary perspectives of the authors to reflect on how a SL for data-intensive industries might be realised. It begins by providing some background to the concept of the SL before discussing the ways in which this has been applied to innovation in data practices. The article will then focus on FinTech and discuss the relevance and implications of a SL for the FinTech industry. In particular, the article draws on literature from computer science, organisational science and science and technology studies to consider the importance of developing relationships of trust and to set out some of the technical and social approaches used to facilitate this.

Notes on terminology

“Data-driven industry” is becoming an established and commonly used term to describe industries whose operations are underpinned by data practices (e.g. data collection, storage, linkage, analysis or sharing). This term implies that the data that is used pre-exists the operations of the industry and that its existence (in an objective and “real” sense) shapes the operations and practices of industry. However, this positivist position overlooks the ways in which data are created through the operations and practices of industry. Indeed the creation and curation of data are equally important functions of these industries, and it is through these functions that data comes to have value (Sadowski, 2019). Therefore throughout this article we refer to data-dependent or data-intensive industries. The term “data-dependent” acknowledges that access to data and the continuous creation of new data is fundamental to the operations of these industries; “data-intensive” acknowledges the central role of data in the operations and practices of these industries (both creating and using data).

The use of the term AI emphasizes the algorithms that operate on the data as well as the associated automated or autonomous decision-making carried out by a computer. The ethical concerns in AI extend beyond concerns about data into the implications of machines carrying out tasks and making decisions autonomously. A specific class of algorithms is that of machine learning algorithms, in which (large) sets of data allow the machine to learn certain traits or characteristics, for instance allowing one to classify people, pictures or activities. As will be discussed below, learning from data introduces a particular flavour of ethical concerns, including issues such as perpetuating bias, discrimination and exclusion present in the data used, which contains judgements from the past.

Social Licence

The term “Social Licence” or “Social Licence to Operate” has been in use since the 1990s and has been most influential in relation to extractive industries such as mining and forestry (Moffat et al., 2016). The emergence of SL in the 1990s reflected changing community-industry relationships in response to increasing pressure and scrutiny surrounding environmental impacts and social performance of extractive industries as well as increasing expectations for community/public participation in decision-making about industry development (Conrad, 2018). This led to the concept of the SL being developed as a means of pursuing new relationships between industry and communities to reflect public values and ensure community support for projects. Given the increasing attention directed at ethical dimensions of technological innovation (particularly around data science and AI) and increasing consumer interest in these issues (Brusoni and Vaccaro, 2017) there is now a similar strong rationale to apply the concept of SL to new data-intensive industries.

There is no single, agreed upon definition of the SL or how it should be pursued in practice. Some conceive SL as representing a set of demands and expectations regarding how a business should operate (e.g. Gunningham et al., 2004), while others emphasise that establishing a SL requires industry to adapt to social norms and values and to change its practices to reflect these (e.g. Harvey, 2011). Moffat et al. (2016: 480) summarised that the SL ‘tends to be regarded as the ongoing acceptance or approval of an operation by those local community stakeholders who are affected by it and those stakeholders who can affect its profitability.’

The lack of fixed definition has enabled considerable flexibility in how SL is pursued. On the one hand, this has led to criticisms that SL can be employed ‘opportunistically to serve the particular objectives and goals of companies, activists and governments’ (Moffat et al., 2016: 480). Indeed, previous studies have found that companies have referred to the SL in pursuing approaches which attempt to manage (rather than address) community opposition to developments (Owen and Kemp, 2012; Parsons and Moffat, 2014). Yet on the other hand, the flexibility in the definition and operationalisation of the concept of the SL is considered one of its strengths.

While legal licences are fixed and clearly defined to permit certain practices over a designated time period, a SL is unwritten and tacit in nature (Moffat et al., 2016). Rather than being granted by governments or official bodies, a SL is granted by the relevant community of stakeholders or citizens without written agreements or formal procedures (Warhurst, 2001). Hence, the SL is ‘a dynamic and changing reflection of the quality and strength of the relationship between an industry and a community of stakeholders’ (Moffat et al., 2016: 480–481). Importantly, relationships between industry and stakeholders are not static, but rather are continuously evolving and adapting. Therefore, the nature of the SL will also evolve and adapt and ‘business must have regard for evolving social attitudes and expectations if it is to maintain its “social licence”’ (Brown and Fraser, 2006: 108).

A social licence for data practices

In response to high profile public controversies around data use (and misuse), recent years have brought calls to establish a SL for data practices (e.g. Allen et al., 2019; Carter et al., 2015; Lawler et al., 2018; Leonard, 2018). For example, the New Zealand Government established the Data Futures Partnership to develop guidelines that public and private organisations can use to develop a SL for data use (Data Futures Partnership, 2017). The resulting guidelines ‘aim to enable organisations to maximise the value of data through building the trust of their clients and developing wider community acceptance’ (Data Futures Partnership, 2017). They consider a SL to be established ‘when people trust that their data will be used as they have agreed, and accept that enough value will be created’ (Data Futures Partnership, 2017).

Interest in the SL has been particularly evident in relation to digital health and data-intensive health research. For example, Carter et al. (2015) contended that the failed implementation of the Care.Data digital health platform demonstrated that the programme had failed to adequately secure a SL. A recent international consensus statement set out the importance of public engagement to establishing a SL for data-intensive health research (Aitken et al., 2019); recent research into public attitudes towards research uses of health data has been framed in terms of understanding the conditions needed to ensure a SL for secondary uses of data (e.g. Paprica et al., 2019); and Allen et al. (2019) have discussed the role of data custodians in establishing and maintaining a SL for the use of personal information in health research.

In applying the concept of SL to emerging data-intensive industries comparisons are invited between these new industries and extractive industries in which SL has previously been applied. Data is often depicted as the “new oil” or a “goldmine” (e.g. CTO, 2019; The Economist, 2017): such analogies conceptualise data as a commodity and a resource to be exploited. However, this analogy has been contested as it is noted that data does not exist in a natural form ready to be extracted, but rather it is created or manufactured and given value through the ways in which it is used (Sadowski, 2019). Yet while data may not be the new oil, data-dependent industries nevertheless have much to learn from previous (good and bad) experiences of extractive industry engagement with stakeholders to address practical, ethical and social dimensions of their operations. In particular, processes of data accumulation through the creation of new markets and services in previously under-served regions of the world, creating new forms of dependence and leading to “data colonialism” (Sadowski, 2019), are reminiscent of the history of exploitative relationships in extractive industries where natural resources have been extracted and removed from local areas to profit overseas corporations. The concept of SL was established as a means of addressing such injustices and as such may play an important role in ensuring data justice (Taylor, 2017).

This brief history of SL in extractive industries highlights the importance of building and maintaining relationships with stakeholders to maintain stakeholder support. The ways in which stakeholders are defined and identified are crucial considerations in pursuing a SL. Typically extractive industries have sought to engage with local communities (as “communities of place”; Moffat et al., 2016) whereas approaches based on physical proximity are likely to be irrelevant to the activities of most data-intensive industries such as FinTech. Narrow definitions might conceive stakeholders as being customers, investors or regulators, whereas broader approaches would include all people from whom data is derived and used in developing or implementing data-dependent services and/or anyone who is potentially affected by the industry’s activities. While GDPR gives individuals greater control over their data, in many instances individuals do not explicitly consent to their data being used in developing or implementing new services (e.g. where data is used in aggregate form), or know how their data might be used. Moreover, people increasingly have limited choices regarding whether or not to use a service which requires access to their data. Given that data practices are having far-reaching – and often unpredicted – impacts across society, a broad conception of stakeholders acknowledges the importance of wide public engagement beyond potential service-users. As such, wide public engagement with broad publics is vital to ensure that current and future practices reflect public values and interests and has an important role to play in strengthening wider science–society relations (Aitken et al., 2019). The extent to which such broad approaches are likely to be adopted by industry is considered further below.

It is noteworthy that while the concept of the SL originates in controversies around private sector activities, to date, the SL for data practices is largely being examined in relation to public sector uses/reuses of data (e.g. relating to health data). Important questions therefore remain relating to the role of private companies in establishing a SL for current and future data practices. As technologies are increasingly developed and deployed in the private sector, and ethical considerations arise in their application, greater consideration should be given to the role of private companies in addressing these issues. We aim to address this gap in the literature by examining the ways in which a SL might be established in emerging data-intensive industries, using FinTech as an illustrative case study.

FinTech

FinTech has been defined as ‘a new financial industry that applies technology to improve financial activities’ (Schueffel, 2016: 45) and FinTech firms have been described as ‘firms that are combining innovative business models and technology to enable, enhance and disrupt financial services’ (Gulamhuseinwala et al., 2015: 4). Through innovation in FinTech, the financial industry as a whole is evolving and adopting new technologies and data-intensive practices. Innovations which have played a role in creating this impact include internet banking, mobile payments, crowdfunding, peer-to-peer lending, Robo-Advisory and online identification (Schueffel, 2016). Some of the technologies used by FinTechs include: user-facing web-based technologies, including applications for mobile phones and web browsers; back-end technologies, such as cloud and blockchain; and data collection, processing and analytics. AI in particular is used for a range of purposes including developing automated chatbots for customer services, efficient processes for detecting fraud and money laundering, and improving automated processes that utilise large volumes of data (e.g. client risk profiling or credit scoring; Maskey, 2018).

Customer uptake of FinTech products is rapidly expanding: in 2015, the EY FinTech Adoption Index reported that ‘a weighted average of 15.5% of digitally active consumers are FinTech users’ according to their definition of a FinTech user as using at least two FinTech products (e.g. mobile payments, apps or insurance telematics; Gulamhuseinwala et al., 2015: 6). Just two years later this number had doubled, with 33% of survey respondents being FinTech users (Gulamhuseinwala et al., 2017). Thus, FinTech represents a fast-developing industry underpinned by data-dependent technologies. Given that finance is an area that affects most – if not all – members of society, the potential impacts of this industry are significant. Such impacts might include transforming the way people access and use money, creating cashless societies (Teigland et al., 2018), opening up financial services to unbanked or underbanked populations (World Bank, 2017) and enhancing competition in the market (Bank of England, 2019). However, simultaneously, the reliance on data-intensive technologies and processes can risk creating new opaque systems through which access to finance is determined, or increasingly necessitating citizens’ participation in the Big Data society. As such, the emergence of FinTech represents a timely opportunity to examine the relevance of SL for informing ethical data practices in private sector organisations within data-intensive industries.

To date literature around FinTech has not explicitly engaged with the concept of a SL. Studies which have examined public attitudes or responses have typically focused on customer uptake of FinTech products (e.g. Chuang et al., 2016; Gulamhuseinwala et al., 2015, 2017). In doing so they have tended to focus on customers’ motivations for using FinTech services, and largely neglected non-customers’ reasons for not using FinTech services, or the reasons why some FinTech offerings have been unsuccessful (Kavuri and Milne, 2019). There is a lack of public deliberation or engagement to examine the extent to which FinTech practices align with public values and interests. In short, wider issues around public acceptability of FinTech or its role in society remain largely overlooked.

There are a number of reasons why FinTechs might be motivated to pursue a SL. These reasons in turn reflect different underpinning rationales which can be normative, instrumental and/or substantive (Fiorino, 1990; Wilsdon and Willis, 2004). First, a normative rationale leads to moral positions that suggest that FinTech firms should engage with stakeholders and reflect public values as ‘it’s the right thing to do’ (Wilsdon and Willis, 2004). Second, more practically minded approaches follow instrumental rationales which view efforts to establish and demonstrate a SL as means to achieve an organisation’s own objectives (Wilsdon and Willis, 2004). Instrumental rationales might lead to a variety of potential approaches including: efforts to build and maintain public trust in order to attract and retain customers; adopting ethical and transparent approaches to business in order to prepare FinTechs to anticipate and respond to regulatory and policy developments; or efforts to demonstrate a SL in order to set FinTechs apart from traditional banking institutions. However, following a purely instrumental rationale can lead to approaches which pay ‘lip service’ to public concerns through enacting purely cosmetic forms of public engagement without genuine intentions to address concerns or reflect public values in a company’s operation.

A final set of motivations are underpinned by substantive rationales that regard the development and maintenance of a SL as being aimed at creating wider positive outcomes across society: ‘from this point of view, citizens are seen as subjects, not objects, of the process. They work actively to shape decisions, rather than having their views canvassed by other actors to inform decisions that are then taken’ (Wilsdon and Willis, 2004: 39). Following this approach, engaging with public values and interests aims to establish a SL for current and future practices while also offering opportunities to “do things better” and maximise benefits not only for the FinTechs concerned but also for wider society. Here it is important to emphasise that establishing a SL is not simply about avoiding or mitigating potential negative impacts but equally about maximising the benefits of FinTech.

In the following sections, we consider ways through which FinTechs could pursue a SL, focussing on the importance of developing relationships of trust and the role of public engagement in establishing and maintaining a SL.

Trust and trustworthiness in FinTech

Previous studies have considered consumer trust in relation to FinTech. For example, Gulamhuseinwala et al. (2015) described that while many potential customers look positively at FinTech offerings, more than 25% preferred traditional providers and another 11% do not trust new FinTech companies. Similarly, Chuang et al. (2016) concluded that brand and service trust significantly affect attitudes and willingness to use FinTech services. However, this attention to consumer trust has often overlooked considerations of what it means for a FinTech to be trustworthy. While trust is at the heart of a SL, this is established through mutual relationships enabling all interests and perspectives to be reflected and addressed (Moffat et al., 2016). Customers – and the wider public – do not passively receive information about technologies, products or services (Wynne, 2006). In emphasising the importance of aligning with public values, the SL requires dialogue and public engagement to identify and address public values, concerns and interests (Moffat et al., 2016). Therefore, establishing a SL requires reflection not just on ways of building public trust but also of estab- […]

[…] suggest Predictability is crucial in relation to openness, transparency and commitments to meaningful engagement (Moffat and Zhang, 2014).
Moffat and Zhang lishing and demonstrating trustworthiness (Aitken (2014) found that procedural fairness and good quality et al., 2016a, 2019). engagement were key to sustaining relationships of trust underpinning a SL. This suggests that predictabil- ity in terms of an organisation’s approaches and fair- Establishing trustworthiness ness may be more important than predictability of A number of authors have proposed frameworks to particular actions or activities which should be able examine perceived trustworthiness. For example, to adapt in response to engagement processes (Moffat Butler and Cantrell (1984) suggested that trust was et al., 2016). Therefore, in pursuing a SL, FinTechs based on perceptions of: integrity; competence; consis- need to ensure consistency in engagement approaches. tency; loyalty and; openness. Butler (1991) later expanded this list to include: discreetness; fairness; Relationships of trust promise fulfilment; availability; receptivity and; overall trustworthiness. Mayer et al. (1995) suggested that per- Relationships of trust can take many forms. For exam- ceived trustworthiness is based on judgements of an ple, there are direct relationships between a FinTech entity’s: Ability, Benevolence and Integrity (ABI). firm and its customers (or other stakeholders), but This ABI framework has since been widely used and there are also networks of indirect relationships further developed to examine relationships of trust in through which assessments of trustworthiness are organisational settings. made. Andras et al. (2018) note that trust can be devel- Ability refers to the extent to which the entity is oped through awareness of others’ experiences or inter- perceived to have the skills and competencies to carry actions with the trustee, if someone we know (and out the particular tasks relevant to the situation in trust) uses a particular service we assume that it is trust- which they would be trusted. 
Benevolence is described worthy ‘the main idea, however, is that we did not gen- as ‘the extent to which a trustee is believed to want to erate trust in the [service] per se but trust a person that do good to the trustor’ (Mayer et al., 1995: 718). trusts the [service]’ (Andras et al., 2018: 6). Moreover, Integrity requires confidence that the entity will act in trust behaviours can reflect trust in a third party whose accordance with a set of principles and that those prin- association with the trustee gives the trustor confidence ciples align with the values of the trustor (Mayer et al., to take such behaviours – this, Andras et al. (2018) 1995). If an entity is perceived to possess each of these describe as Second Order Trust. For example, when attributes they are likely to be trusted, whereas if ‘any making purchases online, a shopper trusts the online of these attributes [are] called seriously into question, review system (and the anonymous reviewers), which this makes us wary’ (Dietz and Gillespie, 2012: 6). Each enables them to have confidence in the product or ser- of the attributes are related and may reinforce one vice they are buying and gives them Second Order another; however, it is also possible for someone to Trust in the seller. Similarly, Second Order Trust may trust an entity when one or more of the attributes is be established based on trust in financial regulators considered to be lacking as ‘each of the three factors whose approval may be perceived to give FinTechs can vary along a continuum’ (Mayer et al., 1995: 721). “legitimacy”. More recently authors have expanded on The concept of Second Order Trust is particularly Mayer et al.’s framework suggesting that including salient around new technologies and services, where Predictability or Reliability is important. For example, early adopters are likely to have either high technical Dietz and Den Hartog (2006) suggested that the four knowledge or propensity to risk-taking. 
Wider adop- key characteristics on which judgements of trustworthi- tion of the technology or service will depend on trust ness are based are: Ability; Benevolence; Integrity; and building up through social networks emanating from Predictability (referred to as the ABIþ model). these early adopters. Therefore, while early adopters’ Predictability will reinforce perceptions of the Ability; experiences may depend more on confidence in techni- Benevolence; and Integrity of the trustee. cal competencies and first-order trust in new services, Considering the role of Predictability or Reliability subsequent adopters’ relationships with those services draws attention to the importance of trust being are likely to be founded on Second Order Trust. sustained overtime through ongoing relationships. Second Order Trust draws attention to the impor- Importantly these relationships should be able to tance of multiple relationships and the ways in which adapt to changing contexts and social dynamics individuals assess the trustworthiness of an entity – (Moffat et al., 2016). Studies which have examined such as a FinTech firm. These relationships are both the conditions needed to establish and maintain a SL direct and indirect and never static, rather ‘the level of Aitken et al. 7 trust will evolve as the parties interact’ (Mayer et al., bias in data and; risks of unemployment through 1995: 727). This highlights the relevance of perceived increased automation (e.g. O’Neil, 2016; Stahl and Predictability as a factor influencing assessments of Wright, 2018). trustworthiness (Dietz and Den Hartog, 2006). Dietz and Den Hartog (2006) note that regularity of behav- Technical approaches iour over time will strengthen trust whereas unpredict- While developing trustworthy data practices remains ability or unreliability will weaken trust. 
Furthermore, an emerging field of interest, increasingly technical trust can be either strengthened or weakened through approaches are being developed with this aim interactions between trustors and trustees as well as (Toreini et al., 2019). For example, IBM (n.d.) has set through indirect relationships in social networks. As out approaches towards ‘building and enabling AI these relationships evolve assessments of an entity’s solutions people can trust’ through four key ABI will also adapt. features of “Trustworthy AI”: Robustness, Fairness, Explainability and Lineage. The ways in which these Trustworthy technology four key features might establish trustworthiness are Given the importance of innovation in data practices summarised below: for FinTech, a SL for FinTech operations will depend on perceived trustworthiness not only of FinTech firms Robustness but also of the technologies underpinning new financial The European Commission (2019: 16) notes that tech- services. As new market entrants, the issue of trustwor- nical robustness is a crucial component of trustworthy thiness presents a pivotal challenge for FinTechs where AI and states that ‘technical robustness requires that most are yet to establish strong brand reputations. AI systems be developed with a preventative approach Moreover, both the financial sector and data depen- to risks and in a manner such that they reliably behave dant technologies (such as AI) have been the subject as intended while minimising unintentional and unex- of public controversies in recent years. The public pected harm, and preventing unacceptable harm.’ image of the financial industry is still recovering from Across literatures there are varying definitions of the effects of financial crash and mortgage crisis of Robustness. 
Robustness refers to ‘dependability of a 2008 (Dietz and Gillespie, 2012) and recent years system with respect to external faults, which character- have brought considerable press coverage of scandals izes a system reaction to a specific class of faults’ relating to mishandling, misuse or abuse of data. Since (Avizienis et al., 2004: 23). Robustness then is a state perceptions of an organisation’s trustworthiness are in which an algorithm functions normally in the pres- shaped by context and awareness of related events ence of accidental faults or a malicious intruder these factors will be significant in influencing public (attacker), while the attacker actively or passively perceptions of FinTech. Given increasing attention manipulates the operation of the algorithm. In litera- directed at social and ethical dimensions of new data ture around machine learning (Bhagoji et al., 2018; dependant technologies, FinTechs – whose services rely Rauber et al., 2017), robustness includes security but on these technologies – will need to anticipate and also privacy issues of the algorithm as well as likely address the challenges this presents. barriers to its performance (including errors caused In developing and implementing new financial serv- by implementation faults or the algorithm’s accuracy ices underpinned by data dependant technologies a limitations). We use the term robustness in such a gen- variety of practical and ethical challenges are encoun- eral sense. tered. Practical considerations include developing The bottom line for all defensive approaches is the mechanisms to ensure security of logins when using need for realistic analysis of the potential attackers for banking applications on mobile phones and; minimis- goal, knowledge, capability and strategy. Security and ing risks of privacy breaches. 
Ethical considerations Privacy aspects of a machine learning based system include ensuring fairness in algorithmic decision- have two aspects: safe data and safe model making; avoiding unjust outcomes; ensuring equal (Liu et al., 2018). The first focuses on the security access across society to the benefits of technology; con- and privacy issues of the data which is vulnerable sidering the potential impacts of automation on per- against different attacks, more importantly the injec- ceived responsibility for outcomes (on the part of tion of invalid/malicious input from adversaries or both professionals and customers) and ensuring that leakage of sensitive information. The second, on the automated processes do not reduce customer autono- other hand, resolves the security and privacy concerns my (Scott, 2017). Simultaneously, there are wider eth- ical issues around data and AI that are of relevance to of the model in terms of reliable functioning and trust- FinTech, these include concerns regarding surveillance; worthy performance. 8 Big Data & Society Mathematical modelling of the behaviour of the auditor that enforces fairness to the model when it is system is undertaken to identify and mitigate the processing the results (Agarwal et al., 2018; Zhang unpredictable causes of faults in machine learning per- et al., 2018). Post-processing solutions detect discrimi- formance (either due to security issues or implementa- nation in the outcome of the algorithm. While there are tion and accuracy errors). In these approaches (Hein numerous definitions of fairness in the literature, these and Andriushchenko, 2017; Raghunathan et al., 2018), approaches strongly rely on a mathematical definition the system is modelled mathematically and its of fairness. These solutions measure the fairness of an behaviour is analysed in different situations. 
Such algorithm by assessing the disparity between privileged approaches aim to guarantee the predictability of the and unprivileged groups in the algorithm results. machine learning system and its resilience to different Kusner et al. (2017) categorised post-processing solu- categories of faults. tions into four groups. In each the data contains one or Technical approaches aimed at ensuring robustness more protected features that identify the privileged and provide a diverse range of methods to avoid disclosing unprivileged groups (e.g. gender or ethnicity). users’ privacy, maintain the functioning integrity of the AI and remain resistant against attack. As such, these 1. Fairness through Unawareness discards protected approaches aim to demonstrate Ability through their features in the decision-making process. However, technical competence to safeguard data, while also this is not a robust solution because it does not con- demonstrating the organisation’s Benevolence and sider the correlation between protected features and Integrity in taking measures to protect individuals’ pri- other features of data (Chen et al., 2019). vacy. We posit that demonstrating such characteristics 2. Individual Fairness considers an algorithm fair if it consistently over time will also enhance perceived gets similar predictions for similar individuals. Predictability or Reliability. 3. Demographic Parity is satisfied if the prediction results of a group would be the same with or without considering protected features. Fairness 4. Equality of Opportunity requires the accuracy of an Avoiding unfair bias in algorithmic decision-making is algorithm to be equal between privileged and unpri- a crucial element of trustworthiness. This is particularly vileged samples. relevant for FinTechs that rely on AI to improve effi- ciency and accuracy in decision-making processes. The There is no comprehensive solution to eliminate dis- European Commission (2019) notes that unfair bias crimination. 
Therefore, the choice of the technical can arise through the inclusion of inadvertent historic approaches addressing fairness aim to detect or prevent bias, incomplete data or a lack of good governance bias in the output of an AI model. As such they aim to models. If such bias persists in the algorithm it can demonstrate Ability through technical competence, ‘lead to unintended (in)direct prejudice and discrimina- Benevolence through avoiding harm to minority or tion against certain groups or people, potentially exac- vulnerable groups and Integrity through taking erbating prejudice and marginalisation’ (European approaches that reflect the values of society (which is Commission, 2019: 18). pivotal to establishing a SL). The baseline assumption in fairness-based approaches is that data is biased and should be Explainability moderated. Fairness can be addressed in one of the following stages in a model’s operation cycle: AI algorithms are often considered “black box” models pre-processing, algorithm modification and post- (Michie et al., 1994); thus the processes through which processing (d’Alessandro et al., 2017; Friedler et al., outputs are derived lack transparency. The European 2019). Pre-processing fairness resolutions are focused Commission (2019: 18) states that ‘whenever an AI system has a significant impact on people’s lives, it on the mitigation of bias in the data itself. Resolutions tend to be independent of the AI model. should be possible to demand a suitable explanation They either re-label the data samples to make the of the AI system’s decision-making process.’ results fair (Jiang and Nachum 2019) or assign a Moreover, the right to an explanation is a key feature weight to each one where samples that are more of GDPR (Kaminski, 2019). As such, explainability likely to be discriminated against receive more atten- plays an important role in building relationships of tion (Calmon et al., 2017; Kamiran and Calders, 2012). trust to underpin a SL. 
Ensuring that the ways in Algorithm modification methods aim to propose AI which AI is used and decisions that are made based models that are inherently fair. Such fairness is fulfilled on data are understood, is crucial to facilitate the in either a model that is designed to be statistically fair good communication and dialogue needed to establish (Kamishima et al., 2012), or through deployment of an a SL (Moffat et al., 2016). Aitken et al. 9 Technical approaches to ensure explainability aim to (Chouldechova, 2017). Additionally, the approaches demonstrate Ability through technical competence, outlined above are not always complementary to one while also enabling assessments of an organisation’s another. There is significant interest in Explainability Benevolence and Integrity. Indeed, transparency is cru- given the requirements brought in by GDPR; however, this poses challenges in relation to many AI applica- cial to enable insights into an organisation’s motiva- tions (Goebel et al., 2018). Where a technology can be tions or values. Therefore, explainability may not directly demonstrate Benevolence or Integrity but developed to be robust and fair but is not fully explain- might constitute an important feature to enable assess- able, trade-offs may be necessary. Such trade-offs might have important implications for trustworthiness ments of these characteristics. Moreover, explainability and for establishing or maintaining a SL. Therefore, may be vital to facilitate public/stakeholder engage- understanding stakeholders’ interests and values in ment and dialogue essential for establishing a SL. relation to the way their data is used or the ways in which technologies are deployed will be valuable Lineage to guide decision-making in these instances. As AI models evolve and adapt, transparency can be Moreover, transparency around these trade-offs and problematic and the “black box” nature of AI is ampli- the ways in which particular features of technologies fied. 
Approaches focusing on lineage aim to make the have been prioritised may be important to maintain inner components and the history of the AI algorithm relationships of trust with stakeholders. traceable by logging the necessary details and keeping track of the interactions occurring between compo- Social approaches nents. The European Commission (2019) advocate traceability of AI algorithms to include documenting Developing technical approaches to address ethical all the data sets and processes involved in the data challenges will be an important component to underpin gathering and data labelling phases. Traceability is a SL for current and future practices of FinTech organ- regarded as essential to enable ‘identification of the isations; however, technical solutions alone are insuffi- reasons why an AI-decision was erroneous which, in cient to achieve this outcome (European Commission, turn, could help prevent future mistakes [while also] 2019). As outlined above, a proliferation of frame- facilitat[ing] auditability as well as explainability’ works and discourse surrounds the technical (European Commission, 2019: 18). approaches through which to pursue trustworthy data Technical approaches enabling traceability of the practices, conversely, there is considerably less discus- lineage of AI models provide insights into how these sion of the social approaches needed to complement processes have developed, placing an emphasis on trans- these – or how social approaches might be undertaken parency compared to explainability of current processes in data-intensive industries. Yet, social approaches are or outcomes. However, traceability also enhances central in establishing a SL. The following section con- explainability (European Commission, 2019) and per- siders the implications of this for FinTech. ceived reliability/predictability. 
As with explainability, these approaches aim to demonstrate Ability through Public engagement technical competence while also enabling assessments A SL is established and maintained through ongoing of an organisation’s Benevolence and Integrity through relationships between a community of stakeholders and deeper forms of transparency. an industry/organisation. This entails ongoing engage- ment and dialogue to identify and respond to stake- Trade-offs holders’ values, interests and concerns (Moffat et al., These four broad classifications illustrate the range of 2016). different technical approaches being developed and Current levels of interest in public engagement with used to address ethical challenges relating to data prac- data practices are high (particularly relating to AI), tices and AI. While each of these may be important for reflected via the growing number of bodies working developing trustworthy practices to underpin a SL, it in this area (including Google DeepMind Ethics and may not always be possible to achieve each of the four Society, the UK Government’s Centre for Data Ethics aims simultaneously. Indeed, even within each of the and Innovation, the New Zealand Government’s Data approaches trade-offs may be necessary, for example Futures Partnership and the Royal Society). It is now fairness can have different meanings and be assessed widely recognised that Big Data analytics and AI bring via different measurements, therefore ensuring fairness significant social and economic impacts and necessitate may require prioritising certain methods which in both regulatory supervision and ethical and social turn prioritise different dimensions of fairness assessment (Stahl and Wright, 2018). 
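As an aside on the preceding technical discussion, the group-fairness definitions summarised above (Kusner et al., 2017) reduce to simple disparity measures over a model's outputs. The sketch below is our own illustration – the toy credit decisions, groups and function names are hypothetical, not drawn from the article or from any particular fairness toolkit – showing how Demographic Parity and Equality of Opportunity can each be expressed as a gap that is close to zero when the two groups are treated similarly:

```python
# Illustrative only: toy audit of two group-fairness criteria for a
# hypothetical credit-scoring model. All names and data are invented.

def rate(flags):
    """Fraction of positive (1) entries in a list of 0/1 flags."""
    return sum(flags) / len(flags)

def demographic_parity_gap(y_pred, group):
    """Approval-rate difference between group 1 and group 0.
    Zero means both groups are approved at the same rate."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return rate(g1) - rate(g0)

def equal_opportunity_gap(y_true, y_pred, group):
    """True-positive-rate difference: among applicants who would in fact
    repay (y_true == 1), how often is each group correctly approved?"""
    tpr = {}
    for g in (0, 1):
        tpr[g] = rate([p for y, p, gg in zip(y_true, y_pred, group)
                       if gg == g and y == 1])
    return tpr[1] - tpr[0]

# Eight hypothetical applicants split across a protected feature.
y_true = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = applicant actually repays
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]  # 1 = model approves the loan
group  = [0, 0, 0, 0, 1, 1, 1, 1]  # e.g. two demographic groups

print(demographic_parity_gap(y_pred, group))         # -0.5
print(equal_opportunity_gap(y_true, y_pred, group))  # ~ -0.67
```

In practice such gaps would be computed over held-out decision data against agreed tolerance thresholds; as the trade-offs discussion notes, the two measures can disagree, since they formalise different intuitions about what fairness requires.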
Consideration of the social and ethical dimensions of data practices reflects a longer history of public engagement with science and technology. In the past, public engagement has been promoted as a means to address contentious areas of innovation and to build or restore public trust and mitigate controversy (Aitken et al., 2016a). This is deemed important as 'science and technology demand assenting publics to maintain their hold on the collective imagination, not to mention purse-strings' (Jasanoff, 2011: 248). However, public engagement goes beyond communicating the value of science and technology, and instead requires engaging in dialogue with the public to understand and reflect public values in innovation, governance and policy.

Previous studies in public engagement with science and technology have demonstrated the limitations of approaches aimed at gaining public trust through improving public understanding. Such approaches treat members of the public as 'passive recipients of scientific knowledge' (Cunningham-Burley, 2006: 206), overlooking how members of the public critically assess, deconstruct and evaluate claims to scientific knowledge in line with their own ideologies, experiences and the contexts in which the information is received (Hagendijk and Irwin, 2006). Thus, demonstrating technical competence or communicating the robustness of technical responses to ethical challenges will not automatically lead to public trust and support. Rather, technical approaches need to be combined with social responses that build relationships of trust through which claims to technical competence, and demonstrations of ABI, will be evaluated. As such, aligned with the approaches taken to establish a SL, rather than aiming to manufacture public trust in science and technology, the focus of public engagement is to ensure that the trustworthiness of science and technology evolves through efforts to address and reflect public values (Aitken et al., 2016a; Wynne, 2006).

To date, deliberative public engagement relating to data has typically been undertaken by research organisations or public sector bodies (e.g. Data Futures Partnership, 2017; RSA, 2018). Important questions arise regarding whether private sector (non-research) organisations, such as FinTechs, can, or should, facilitate these processes. Community engagement is a key component of establishing and maintaining a SL in extractive industries (e.g. mining and forestry); however, this has been undertaken with varying degrees of commitment and quality (Moffat et al., 2016). In some cases, community engagement has been largely cosmetic due to companies retaining control over the process, restricting the range of possible outcomes and setting the terms for community participation: 'even when all key stakeholders are explicitly invited into a conversation [...] asymmetric power relations between parties, and differences in value sets, worldviews and perspectives are still likely to create opportunity for mistrust and conflict' (Moffat et al., 2016: 483). These remain persistent challenges in public participation across a variety of domains, and ones which are fundamental to address in order for public engagement surrounding AI, and data-intensive industries such as FinTech, to be meaningful and impactful.

The potential motivations for FinTechs to undertake public engagement will shape the aims of engagement, the approaches taken and the range of potential outcomes. As noted above, there are a variety of reasons for FinTech firms to pursue a SL: these may range from purely instrumental perspectives, which regard the SL and related public engagement as a mechanism through which to attract and retain customers and increase profits, through to substantive perspectives, which focus on bringing wider benefits for society and meaningfully involving members of the public to address ethical considerations. Clearly, such rationales lead to different approaches being taken and different ideas of what it would mean for public engagement to be successful (Aitken et al., 2016b). Experience in other industries suggests that approaches informed by instrumental rationales may have the most appeal to private sector organisations, yet those informed by substantive rationales are more likely to be effective (Aitken et al., 2016b). For example, a review of community engagement practices by wind farm developers found that while most developers took an instrumental approach to community engagement (using methods which restricted the ways people could participate or the range of potential outcomes), those that followed more substantive approaches (opening up engagement processes and devolving some control over the process and outcomes to public participants) were most successful in generating public support – which was, ironically, the primary objective of companies following instrumental approaches (Aitken et al., 2016b). Therefore, while companies may be reluctant to share power in decision-making or planning processes, evidence suggests that doing so leads to positive outcomes in terms of generating wider public support and establishing a SL.

As noted above, a SL is granted through engagement with 'local community stakeholders who are affected by [a project or development] and those stakeholders who can affect its profitability' (Moffat et al., 2016: 480). In extractive industries, identifying 'local community stakeholders' may be more straightforward given the physical location of projects. For FinTechs, a 'local community' defined by geographic proximity is in most cases irrelevant to operations. Instead, while in extractive industries local communities have been identified based on physical proximity to the locations from which resources are extracted, in data-intensive industries affected communities might be conceptualised as those from whom data is derived. This creates a much wider set of relevant stakeholders. Moreover, taking a broader approach to stakeholders as people who are "affected by a project" necessitates consideration of the impacts of data practices on society. Such impacts might include potentially transformative effects on financial systems which affect people's access to finance (either positively or negatively), or contributory effects to the increasing role of data in society and the reduction in opportunities to participate fully in society without allowing one's data to be collected or used. Such a broad conceptualisation suggests that stakeholders might include the whole of society.

Considering the second group of stakeholders, as those 'who can affect profitability', may also invite either narrow or broad definitions. Narrow definitions might focus on potential and actual customers as those who can affect profitability. Broad definitions would consider the role of the wider public as potentially affecting profitability through their support for, or opposition to, data practices more broadly, as well as those used specifically in FinTech. Indeed, as previous scandals have demonstrated, public controversies around data use and misuse have the potential to significantly affect data-dependent industries (as was evidenced in the case of Care.Data (Carter et al., 2015)). Therefore, establishing and maintaining a SL entails going beyond a narrow focus on stakeholders as a company's customer base to a more inclusive conceptualisation of the wider public as stakeholders. Yet the extent to which a private sector organisation – such as a FinTech – will be willing, or adequately resourced, to engage with such broad stakeholders is questionable. Instrumental approaches to engagement will be likely to lead to a narrow focus on stakeholders as existing or potential customers (those considered to have the most immediate impacts on profitability); however, overlooking wider stakeholders risks practices leading to unanticipated negative impacts or opposition to approaches which are not aligned with public values. Therefore, a FinTech may not be granted a SL for its operations if it overlooks the interests of broader stakeholders. This highlights that while a FinTech may define its stakeholders in particular ways, others (including stakeholders themselves) might define them differently, and it is the stakeholders, rather than the FinTech, who have the authority to grant, refuse or withdraw a SL for its operations. Thus, taking a narrow approach to defining stakeholders may be a short-sighted and risky strategy.

Nevertheless, taking a broad approach presents further challenges. While organisations such as the New Zealand Government, the Royal Society or DeepMind Ethics and Society have substantial budgets and resources which they can use to fund large-scale public engagement projects to reach out to diverse groups across society, FinTech companies are unlikely to have significant resources (or expertise) for these activities. Furthermore, given that the SL for FinTech is interdependent with a SL for broader data-intensive industries and innovation, questions arise as to who is responsible for facilitating engagement activities. Individual FinTech firms have an incentive to develop a SL for their own operations; yet it may be that wider industry-level engagement is needed to establish a broader SL for the FinTech sector.

Indeed, public engagement can occur at a range of levels, reflecting different aims and objectives and requiring different approaches. For example, public engagement relating to data-intensive health research takes place at many different scales, including: 'wide-scale public conversations about uses or potential uses of data in health research; [public engagement] to inform or co-design the development of policies or governance practices relating to uses of data in health research; engagement or involvement of members of the public in governance decisions about data access and use; engagement or involvement of members of the public at different phases in particular research projects; analysing and disseminating the results of research using data in ways which will support improvements in healthcare and systems' (Aitken et al., 2019: 2). Moving this approach into the FinTech context suggests public engagement might valuably serve a similar range of purposes: at times being undertaken at policy or industry level to inform the development of policies, governance mechanisms and industry practices, and at other times being undertaken by individual FinTech firms to address ethical dimensions in developing, implementing and evaluating new products, services or areas of innovation.

Conclusions

Despite the substantial and growing rhetoric around ethical and trustworthy data practices (in all sectors), there is limited evidence of how this is being put into practice. As this article has discussed, this is important for FinTechs, who increasingly employ these technologies to underpin new financial services. While there is industrial advocacy surrounding the potential benefits of data science and AI in banking, it is not yet clear whether there is a SL for these practices.

This has wider implications for developing ethical data practices. The proliferation of ethical codes of practice and guidance well illustrates that ethical practice requires more than just regulation. However, it is debatable whether the growing number of codes of practice is in reality leading to meaningful changes. In particular, given that such guidance goes beyond regulation, it is enacted voluntarily with little or no enforcement. This means such codes depend on organisational culture change to realise their value. Such culture change in turn requires meaningful commitments to ethical practice from senior levels of management. There may be a range of motivations for organisations to adopt ethical codes of practice; however, we argue that framing this in terms of pursuing a SL for the [...]

[...] advantage as an alternative to traditional banking incumbents (King, 2018). On the other hand, it means that there may be substantial work required to establish relationships of trust with the wider public. Yet there is also an opportunity to develop new approaches which might further enhance competitive advantage. As has been noted by Brusoni and Vaccaro (2017: 223), 'the ethical standing of an organization—that is represented by its internal practices, products and services—clearly provides a unique way to differentiate from competitors'.
operations of data-intensive industries provides a This article has not aimed to identify public interests clear rationale and set of approaches to underpin or concerns relating to data practices in FinTech, or to emerging ethical best practices. set out what is required for FinTech to align with SL is distinct from approaches such as CDR public values. Since there is a paucity of public engage- (Cooper et al., 2019; Lobschat et al., 2020) in that it ment or deliberation examining public values around places public – or stakeholder – engagement at its FinTech practices, further research (including through heart. Since a SL is granted or refused by external public engagement methods) is needed to examine what stakeholders (rather than secured internally) it focuses this means in practice. Therefore, this article focusses attention at the importance of aligning with public on setting out the approaches needed to achieve this. values through public engagement. Enforcement does We posit that such approaches are needed across all not come through formal sanctions or penalties but areas and industries whose operations are dependent rather through the loss of public trust, legitimacy or on data to complement regulation and inform the credibility which can have substantial and far- development of ethical codes of practice. This is impor- reaching implications for an organisation and industry. tant to underpin culture change and to move beyond Establishing a SL underpinned by relationships of rhetorical commitments to develop best practice put- trust requires FinTechs to combine a range of technical and social approaches and continually reflect on ethical ting ethics at the heart of innovation. dilemmas as well as the extent to which practices align with social values. 
In this regard, the growing body of Declaration of conflicting interests guidance and best practice regarding responsible or The author(s) declared no potential conflicts of interest with trustworthy AI, ethical data practices and CDR repre- respect to the research, authorship, and/or publication of this sent a valuable set of resources to draw upon, yet it is article. important that this goes beyond rhetorical commit- ments and leads to practical and meaningful action. Funding In particular, establishing trustworthiness requires not The author(s) disclosed receipt of the following financial sup- just demonstrating technical competence (or Ability) port for the research, authorship, and/or publication of this but also Benevolence and Integrity in the ways that article: This research was funded by the EPSRC, grant refer- data is used and technologies are deployed. ence: EP/R033595/1. Moreover, in order to align with public values, it is vital that ethical approaches are informed by the ORCID iDs views and interests of broad stakeholders. In the case of FinTech, establishing a SL for these technologies Mhairi Aitken https://orcid.org/0000-0002-4654-9803 and subsequent services may prove vital to the ongoing Karen Elliott https://orcid.org/0000-0002-2455-0475 success and sustainability of this sector. FinTech firms face a number of challenges in estab- References lishing relationships of trust: First, the damaged repu- Agarwal A, Beygelzimer A, Dudık M, et al. (2018) A reduc- tation of the financial sector as a whole (Dietz and tions approach to fair classification. arXiv preprint Gillespie, 2012). Second, the unfamiliarity of technolo- arXiv:1803.02453. gies driving FinTech products and services. 
Third, Aitken M, Cunningham-Burley S and Pagliari C (2016a) increasing public awareness of controversies around Moving from trust to trustworthiness: Experiences of data misuse and fourth, that as new entrants to the public engagement in the Scottish Health Informatics financial marketplace FinTechs have yet to establish Programme. Science & Public Policy 43(5): 713–723. widely recognised brand reputations. On the one Aitken M, Haggett C and Rudolph D (2016b) Practices hand, this ‘newness’ may offer FinTechs a competitive and rationales of community engagement with wind Aitken et al. 13 farms: Awareness raising, consultation, empowerment. Cooper T, Siu J and Wei K (2019) Corporate digital respon- Planning Theory & Practice 17(4): 557–576. sibility: Doing well by doing good. Available at: https:// Aitken M, Tully MP, Porteous C, et al. (2019) Consensus www.accenture.com/au-en/insight-outlook-doing-well- statement on public involvement and engagement with doing-good (accessed 22 January 2020). data-intensive health research. International Journal of CTO (2019) The Big Data Goldmine. Available at: https:// Population Data Science 4(1): 1–6. ctoboost.com/the-big-data-goldmine/ (accessed 9 May Allen J, Adams C and Flack F 2019. The role of data custo- 2019). dians in establishing and maintaining social licence for Cunningham-Burley S (2006) Public knowledge and public health research. Bioethics 33(4): 502–510. trust. Public Health Genomics 9(3): 204–210. Andras P, Esterle L, Guckert M, et al. (2018) Trusting intel- d’Alessandro B, O’Neil C and LaGatta T (2017) ligent machines: Deepening trust within Socio-Technical Conscientious classification: A data scientist’s guide to systems. IEEE Technology and Society Magazine 37(4): discrimination-aware classification. Big Data 5(2): 76–83. 120–134. Avizienis A, Laprie JC, Randell B, et al. (2004) Basic con- Data Futures Partnership (2017) A Path to Social Licence: cepts and taxonomy of dependable and secure computing. 
Guidelines for Trusted Data Use. Available at: https://trust IEEE Transactions on Dependable and Secure Computing eddata.co.nz/wp-content/uploads/2017/08/Background- 1(1): 11–33. Trusted-Data.pdf (accessed 3 May 2019). Bank of England (2019) Quarterly Bulleting: Topical article, Dietz G and Den Hartog DN (2006) Measuring trust inside Embracing the promise of FinTech. Available at: https:// organisations. Personnel Review 35(5): 557–588. www.bankofengland.co.uk/-/media/boe/files/quarterly- Dietz G and Gillespie N (2012) Recovery of Trust: Case bulletin/2019/embracing-the-promise-of-fintech (accessed Studies of Organisational Failures and Trust Repair. Vol. 20 February 2020) 5. London: Institute of Business Ethics. Bhagoji AN, Cullina D, Sitawarin C, et al. (2018) Enhancing European Commission (2019) Ethics Guidelines for robustness of machine learning systems via data transfor- Trustworthy AI. In: ec.europa.eu. https://ec.europa.eu/dig mations. In: 52nd annual conference on information scien- ital-single-market/en/news/ethics-guidelines-trustworthy- ces and systems (CISS), pp. 1–5. Piscataway, NJ: IEEE. ai (accessed 20 February 2020). Brown J and Fraser M (2006) Approaches and perspectives in Fiorino DJ 1990 Citizen participation and environmental social and environmental accounting: An overview of the risk: A survey of institutional mechanisms. Science, conceptual landscape. Business Strategy and the Technology, & Human Values 15(2): 226–243. Environment 15: 103–117. Fjeld J, Hilligoss H, Achten N, et al. (2019) Principled Brusoni S and Vaccaro A (2017) Ethics, technology and orga- Artificial Intelligence: A map of ethical and rights-based nizational innovation. Journal of Business Ethics 143(2): approaches. Available at: https://ai-hr.cyber.harvard.edu/ 223–226. primp-viz.html (accessed 4 July 2019). Butler JK Jr. and Cantrell RS (1984) A behavioral decision Friedler SA, Scheidegger C, Venkatasubramanian S, et al. 
theory approach to modeling dyadic trust in superiors and (January 2019) A comparative study of fairness- subordinates. Psychological Reports 55(1): 19–28. enhancing interventions in machine learning. In: Butler JK Jr. (1991) Toward understanding and measuring Proceedings of the conference on fairness, accountability, conditions of trust: Evolution of a conditions of trust and transparency, pp. 329–338. New York, NY: ACM. inventory. Journal of Management 17(3): 643–663. Goebel R, Chander A, Holzinger K, et al. (2018) August Calmon F, Wei D, Vinzamuri B, et al. (2017) Optimized pre- Explainable AI: The new 42? In: International cross- processing for discrimination prevention. In: Advances in domain conference for machine learning and knowledge Neural Information Processing Systems, pp. 3992–4001. extraction. Cham: Springer, pp. 295–303. Cambridge, MA: MIT Press. Gulamhuseinwala I, Bull T and Lewis S (2015) FinTech is gain- Carter P, Laurie GT and Dixon-Woods M (2015) The social ing traction and young, high-income users are the early licence for research: Why care data ran into trouble. adopters. Journal of Financial Perspectives 3(3): 1–20. Journal of Medical Ethics 41(5): 404–409. Gulamhuseinwala I, Hatch M and Lloyd J (2017) EY FinTech Chen J, Kallus N, Mao X, et al. (January 2019) Fairness Adoption Index 2017: The rapid emergence of FinTech. under unawareness: Assessing disparity when protected Available at: https://www.ey.com/Publication/vwLU class is unobserved. In: Proceedings of the conference on Assets/ey-fintech-adoption-index-2017/$FILE/ey-fintech- fairness, accountability, and transparency, pp. 339–348. adoption-index-2017.pdf (accessed 3 May 2019). New York, NY: ACM. Gunningham N, Kagan RA and Thornton D (2004) Social Chouldechova A (2017) Fair prediction with disparate licence and environmental protection: Why businesses impact: A study of bias in recidivism prediction instru- go beyond compliance. Law & Social Inquiry 29(2): ments. 
Big Data 5(2): 153–163. 307–341. Chuang L, Liu C and Kao H (2016) The adoption of fintech Hagendijk R and Irwin A (2006) Public deliberation and gov- service: TAM perspective. International Journal of ernance: Engaging with science and technology in contem- Management and Administrative Sciences 3(7): 1–15. porary Europe. Minerva 44(2): 167–184. Conrad J (2018) The social licence to operate and social con- Harvey B (2011) SIA from a developers perspective: tract theory: Themes and relations of two concepts – A lit- erature analysis. University of Iceland, Iceland. Foreword. In: Vanclay F and Esteves AM (eds) New 14 Big Data & Society Directions in Social Impact Assessment. Cheltenham: Mayer RC, Davis JH and Schoorman FD (1995) An integra- Edward Elgar Publishing, pp. xxvii–xxxiii. tive model of organizational trust. The Academy of Hasselbalch G (2019) Making sense of data ethics. The Management Review 20(3): 709–734. powers behind the data ethics debate in European policy- Michie D, Spiegelhalter DJ, Taylor CC, et al. (eds) (1994) making. Internet Policy Review 8(2). Available at: https:// Machine Learning, Neural and Statistical Classification. policyreview.info/articles/analysis/making-sense-data-eth Upper Saddle River, NJ: Prentice Hall. ISBN: 0-13- ics-powers-behind-data-ethics-debate-european-policy 106360-X making (accessed 1 October 2019). Moffat K, Lacey J, Zhang A, et al. (2016) The social licence Hein M and Andriushchenko M (2017) Formal guarantees to operate: A critical review. Forestry: An International on the robustness of a classifier against adversarial Journal of Forest Research 89(5): 477–488. manipulation. In: Advances in Neural Information Moffat K and Zhang A (2014) The paths to social licence to Processing Systems, pp. 2266–2276. Cambridge, MA: operate: An integrative model explaining community MIT Press. acceptance of mining. Resources Policy 39: 61–70. IBM (n.d.) Trusted AI. Available at: https://www.research. 
O’Neil C (2016) Weapons of math destruction: How big data ibm.com/artificial-intelligence/trusted-ai/ (accessed 3 increases inequality and threatens democracy. Broadway May 2019). Books. Jasanoff S (2011) Designs on Nature: Science and Democracy Owen J and Kemp D (2012) Social licence and mining: in Europe and the United States. Princeton, NJ: Princeton A critical perspective. Resources Policy 38: 29–35. University Press. Paprica PA, de Melo MN and Schull MJ (2019) Social licence Jiang H and Nachum O (2019) Identifying and correcting and the general public’s attitudes toward research based label bias in machine learning. arXiv preprint on linked administrative health data: a qualitative study. arXiv:1901.04966. CMAJ open 7(1): E40. Kaminski ME (2019) The right to explanation, explained. Parsons R and Moffat K (2014) Constructing the meaning of Berkeley Tech LJ 34: 189. ‘social licence’. Social Epistemology 28: 340–363. Kamiran F and Calders T (2012) Data preprocessing techni- Politou E, Alepis E and Patsakis C (2018) Forgetting person- ques for classification without discrimination. Knowledge al data and revoking consent under the GDPR: and Information Systems 33(1): 1–33. Challenges and proposed solutions. Journal of Kamishima T, Akaho S, Asoh H, et al. (September 2012) Cybersecurity 4(1): p.tyy001. Fairness-aware classifier with prejudice remover regular- PWC (2018) 2018 AI predictions: 8 insights to shape business izer. In: Joint European conference on machine learning and strategy. Available at: https://www.pwc.es/es/home/assets/ knowledge discovery in databases. Berlin: Springer, pp. 35– ai-predictions-2018-report.pdf (accessed 3 May 2019). 50. Raghunathan A, Steinhardt J and Liang P (2018) Certified Kavuri AS and Milne A (2019) FinTech and the future of defenses against adversarial examples. arXiv preprint financial services: What are the research gaps? papers.ssrn. arXiv:1801.09344. 
com Rauber J, Brendel W and Bethge M (2017) Foolbox: King B (2018) Bank 4.0: Banking Everywhere, Never at a A python toolbox to benchmark the robustness of Bank. Marshall Cavendish International, Asia. machine learning models. arXiv preprint arXiv:1707.04131. Kusner MJ, Loftus J, Russell C, et al. (2017) Counterfactual RSA (2018) Artificial Intelligence, Real Public Engagement. fairness. In: Advances in Neural Information Processing Available at: https://www.thersa.org/discover/publica Systems, pp. 4066–4076. Cambridge, MA: MIT Press. tions-and-articles/reports/artificial-intelligence-real- Lawler M, Morris AD, Sullivan R, Birney E, Middleton A, public-engagement (accessed 3 May 2019). Makaroff L, Knoppers BM, Horgan D and Eggermont A Sadowski J (2019) When data is capital: Datafication, accu- (2018) A roadmap for restoring trust in Big Data. The mulation, and extraction. Big Data & Society 6(1). Lancet Oncology 19(8): 1014. Schueffel P (2016) Taming the beast: A scientific definition of Leonard PG (2018) Social licence and digital trust in data- Fintech. Available at SSRN 3097312. driven applications and AI: A problem statement and possi- Scott B (2017) Hardcoding Ethics into FinTech. Ethics & ble solutions. Available at SSRN 3261228. Trust in Finance Global edition 2016–2017 http://www.eth Liu Q, Li P, Zhao W, et al. (2018) A survey on security icsinfinance.org/wp-content/uploads/2018/01/Brett-Scott- threats and defensive techniques of machine learning: Hard-coding-ethics-into-fintech.pdf accessed on 20/02/20 A data driven view. IEEE Access 6: 12103–12117. Stahl BC and Wright D (2018) Ethics and privacy in AI and Lobschat L, Mueller B, Eggers F, et al. (2020) Corporate big data: Implementing responsible research and innova- digital responsibility. Journal of Business Research. Epub tion. IEEE Security & Privacy 16(3): 26–33. ahead of print 2020. DOI:10.1016/j.jbusres.2019.10.006 Taylor L (2017) What is data justice? 
The case for Maskey S (2018) How Artificial Intelligence is Helping connecting digital rights and freedoms globally. Big Data Financial Institutions. Available at: https://www.forbes. & Society 4(2). com/sites/forbestechcouncil/2018/12/05/how-artificial-in Teigland R, Siri S, Larsson A, et al. (eds) (2018) The Rise and telligence-is-helping-financial-institutions/#7cdd45b4460a Development of FinTech: Accounts of Disruption from (accessed 3 May 2019). Sweden and beyond. London: Routledge. Aitken et al. 15 The Economist (2017) The world’s most valuable resource is no World Bank (2017) The Global Findex Database 2017. longer oil, but data. Available at: https://www.economist. Available at: https://globalfindex.worldbank.org/ (accessed com/leaders/2017/05/06/the-worlds-most-valuable- 20 February 2020). resource-is-no-longer-oil-but-data (accessed 9 May 2019). Wynne B (2006) Public engagement as a means of restoring Toreini E, Aitken M, Coopamootoo K, et al. (2019) The public trust in science – Hitting the notes, but missing the relationship between trust in AI and trustworthy machine music?’ Community Genetics 9(3): 211–220. learning technologies. arXiv preprint arXiv:1912.00782. Zhang BH, Lemoine B and Mitchell M (2018) December. Warhurst A (2001) Corporate citizenship and corporate Mitigating unwanted biases with adversarial learning. In: social investment: Drivers of tri-sector partnerships. Proceedings of the 2018 AAAI/ACM conference on AI, Journal of Corporate Citizenship 1: 57–73. Ethics, and Society, pp. 335–340. New York, NY: ACM. Wilsdon J and Willis R (2004) See-through Science: Why Public Engagement needs to move Upstream. London: Demos.

Journal: Big Data & Society (SAGE)

Published: 4 March 2020

Keywords: Financial Technology; data; social licence; ethics; responsible artificial intelligence; trust