One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI. Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions: requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.
Purpose: There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected in many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail.
Design/methodology/approach: In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is required not only for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.
Findings: The authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.
Originality/value: The authors believe that they have compiled the most comprehensive document collecting existing guidance, one which can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.
Self-driving vehicles (SDVs) offer great potential to improve efficiency on roads, reduce traffic accidents, increase productivity, and minimise our environmental impact in the process. However, they have also seen resistance from different groups claiming that they are unsafe, pose a risk of being hacked, will threaten jobs, and will increase environmental pollution from increased driving as a result of their convenience. In order to reap the benefits of SDVs, while avoiding some of the many pitfalls, it is important to effectively determine what challenges we will face in the future and what steps need to be taken now to avoid them. The approach taken in this paper is the construction of a likely future, through a policy scenario methodology, should we continue certain trajectories over the coming years. The purpose of this is to articulate the issues we currently face and to construct a foresight analysis of how these may develop in the next six years. It will highlight many of the key facilitators and inhibitors behind this change and the societal impacts caused as a result. This paper will synthesise the wide range of ethical, legal, social and economic impacts that may result from SDV use and implementation by 2025, such as issues of autonomy, privacy, liability, security, data protection, and safety. It will conclude by providing steps that we need to take to avoid these pitfalls, while ensuring we reap the benefits that SDVs bring.
The ethics of artificial intelligence (AI) is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance needed to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected using a set of ten case studies and providing an account of the cross-case analysis. The paper reviews the discussion of ethical issues of AI as well as the mitigation strategies that have been proposed in the literature. Using this background, the cross-case analysis categorises the organisational responses that were observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subsection of the mitigation strategies proposed in the literature. These insights are of importance to organisations deploying or using AI and to the academic AI ethics debate, but are perhaps most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI.
This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, using qualitative tools to analyse findings from ten targeted case studies drawn from a range of domains. The analysis coalesces the identified singular ethical issues into clusters to offer a comparison with the classification proposed in the literature. The results show that, despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in AI and Big Data is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature.
Agricultural Big Data analytics (ABDA) is being proposed to ensure better farming practices, decision-making, and a sustainable future for humankind. However, the use and adoption of these technologies may bring about potentially undesirable consequences, such as exercises of power. This paper will analyse Brey’s five distinctions of power relationships (manipulative, seductive, leadership, coercive, and forceful power) and apply them to the use of agricultural Big Data. It will be shown that ABDA can be used as a form of manipulative power to initiate cheap land grabs and acquisitions. Seductive power can be exercised by pressuring farmers into situations they would not have otherwise chosen (such as installing monitors around their farm and limiting access to their farm and machinery). It will be shown that agricultural technology providers (ATPs) demonstrate leadership power by getting farmers to agree to use ABDA without informed consent. Coercive power is exercised when ATPs threaten farmers with the loss of ABDA if they do not abide by the policies and requirements of the ATP, or when farmers are coerced into remaining with the ATP because of fear of legal and economic reprisal. ATPs may use ABDA to determine willingness-to-pay rates from farmers, using this information to force farmers into precarious and vulnerable positions. Altogether, this paper will apply these five types of power to the use and implementation of ABDA to demonstrate that it is being used to exercise power in the agricultural industry.
As with other fields of applied ethics, philosophers engaged in business ethics struggle to carry out substantive philosophical reflection in a way that mirrors the practical reasoning that goes on within business management itself. One manifestation of the philosopher’s struggle is the field’s division into approaches that emphasize moral philosophy and those grounded in the methods of social science. I claim here that the task for those who come to business ethics with philosophical training is to avoid unintentionally widening the gap between philosophical theory and business management, by emphasizing the centrality of practical wisdom both to good management and to the moral life. Distinguishing my own approach from recent emphases on phronesis in the management literature, I draw on the concepts of social practice and of narrative to tie practical reasoning to a company’s unique story. Practical reason, social practices and narrative are employed together to give an account of the art of management at Patagonia. The essay hopes both to provide a way for philosophers engaged with business ethics to see family resemblances between their practices and those of business management, and to offer a pedagogical example useful for those in any discipline interested in viewing businesses ethically.
This paper will examine the social and ethical impacts of using artificial intelligence (AI) in the agricultural sector. It will identify some of the most prevalent challenges and impacts found in the literature, how these correlate with those discussed in the domain of AI ethics, and how they are being implemented in AI ethics guidelines. This will be achieved by examining published articles and conference proceedings that focus on the societal or ethical impacts of AI in the agri-food sector, through a thematic analysis of the literature. The thematic analysis will be divided based on the classifications outlined through 11 overarching principles, from an established lexicon. While research on AI in agriculture is still relatively new, this paper aims to map the debate and illustrate what the literature says in the context of social and ethical impacts. Its aim is to analyse these impacts, based on these 11 principles. This research will contrast which impacts discussed in AI ethics guidelines are not being discussed in agricultural AI, and which issues are not being discussed in AI ethics guidelines but are discussed in relation to agricultural AI. The aim of this is to identify gaps within the agricultural literature, and gaps in AI ethics guidelines, that may need to be addressed.
We point out a simple but hitherto ignored link between the theory of updates, the theory of counterfactuals, and classical modal logic: update is a classical existential modality, counterfactual is a classical universal modality, and the accessibility relations corresponding to these modalities are inverses. The Ramsey Rule (often thought esoteric) is simply an axiomatisation of this inverse relationship. We use this fact to translate between rules for updates and rules for counterfactuals. Thus, Katsuno and Mendelzon’s postulates U1–U8 are translated into counterfactual rules C1–C8 (Table VII), and many of the familiar counterfactual rules are translated into rules for updates (Table VIII). Our conclusions are summarised in Table V.
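A minimal sketch of this correspondence, in illustrative notation of our own rather than necessarily that of the paper (writing \circ for Katsuno–Mendelzon update and > for the counterfactual conditional over formulas A, B, C):

\[
  A \circ B \vdash C
  \quad\Longleftrightarrow\quad
  A \vdash B > C
  \qquad \text{(Ramsey Rule)}
\]

Read \circ B as a diamond (existential) modality and B > as a box (universal) modality; the equivalence above holds precisely when the two modalities are interpreted over mutually inverse accessibility relations, which is the link the abstract describes.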
The Egli-Milner power-ordering is used to define verisimilitude orderings on theories from preference orderings on models. The effects of the definitions on constraints such as stopperedness and soundness are explored. Orderings on theories are seen to contain more information than orderings on models. Belief revision is defined in terms of both types of orderings, and conditions are given which make the two notions coincide.
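For illustration, the standard Egli-Milner lift of a preorder \le on models to an ordering on sets of models S, T (and hence, via their classes of models, on theories) is commonly defined as below; this is the textbook form, and the paper's exact variant may differ:

\[
  S \sqsubseteq_{EM} T
  \;\Longleftrightarrow\;
  (\forall s \in S)(\exists t \in T)\; s \le t
  \;\wedge\;
  (\forall t \in T)(\exists s \in S)\; s \le t
\]

The two quantified clauses are the lower (Hoare) and upper (Smyth) halves of the power ordering, respectively.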
Should we care about the environment because it is economically valuable or because nature has intrinsic value? This book gives a clear overview of some of the main theoretical problems within environmental ethics and offers definitive solutions and alternatives.
Artificial intelligence (AI) ethics requires a united approach from policymakers, AI companies, and individuals in the development, deployment, and use of these technologies. However, discussions can sometimes become fragmented because of the different levels of governance, or because of the different values, stakeholders, and actors involved. Recently, these conflicts became very visible, with examples such as the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between the organisation’s economic and business interests and the morals of its employees. This paper will examine tensions between the ethics of AI organisations and the values of their employees by providing an exploration of the AI ethics literature in this area and a qualitative analysis of three workshops with AI developers and practitioners. Common ethical and social tensions will be discussed, along with proposals on how to avoid or reduce these conflicts in practice. Altogether, we suggest the following steps to help reduce ethical issues within AI organisations: improved and diverse ethics education and training within businesses; internal and external ethics auditing; the establishment of AI ethics ombudsmen, AI ethics review committees and an AI ethics watchdog; as well as access to trustworthy AI ethics whistle-blower organisations.