Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. We argue that, as a society, we must now transition to a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo’s concept of “e-trust,” we discuss all the components of the proposed model and the reasons to trust in human-AI interactions in an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human-AI interactions and with a discussion of kinds of normativity in the trustworthiness of AIs.
In this paper we argue that transparency of machine learning algorithms, just like explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black-box algorithms with making their decisions (post hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency that consists in explaining an algorithm as an intentional product that serves a particular goal, or multiple goals (Daniel Dennett’s design stance), in a given domain of applicability, and that provides a measure of the extent to which such a goal is achieved, together with evidence about the way that measure has been reached. We call this idea of algorithmic transparency “design publicity.” We argue that design publicity can be more easily linked with the justification of the use and of the design of the algorithm, and of each individual decision following from it. In comparison to post-hoc explanations of individual algorithmic decisions, design publicity meets a different demand of the explainee: the demand for impersonal justification. Finally, we argue that when models that pursue justifiable goals (which may include fairness as the avoidance of bias towards specific groups) to a justifiable degree are used consistently, the resulting decisions are all justified, even if some of them are (unavoidably) based on incorrect predictions. For this argument, we rely on John Rawls’s idea of procedural justice applied to algorithms conceived as institutions.
In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations usually recognised as relevant for interpersonal trust must apply to interactions between humans and medical artificial intelligence, then these systems do not appear to be appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence, provided one refrains from simply assuming that trust describes only human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. On this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI’s predictions to support his or her decision making.
Recent epidemiological reports of associations between socioeconomic status and epigenetic markers that predict vulnerability to diseases are bringing to light substantial biological effects of social inequalities. Here, we begin the discussion of the moral consequences of these findings. We first highlight their explanatory importance in the context of the research program on the Developmental Origins of Health and Disease (DOHaD) and the social determinants of health. In the second section, we review some theories of the moral status of health inequalities. Rather than giving a complete outline of the debate, we single out those theories that rest on the principle of equality of opportunity and analyze the consequences of DOHaD and epigenetics for these particular conceptions of justice. We argue that DOHaD and epigenetics reshape the conceptual distinction between natural and acquired traits on which these theories rely and might provide important policy tools to tackle unjust distributions of health.
The concept of the digital phenotype has been used to refer to digital data prognostic or diagnostic of disease conditions. Medical conditions may be inferred from the time pattern in an insomniac’s tweets, the Facebook posts of a depressed individual, or the web searches of a hypochondriac. This paper conceptualizes digital data as an extended phenotype of humans, that is, as digital information produced by humans and affecting human behavior and culture. It argues that there are ethical obligations to persons affected by generalizable knowledge of a digital phenotype, not only to those who are personally identifiable or involved in data generation. This claim is illustrated by considering the health-related digital phenotypes of precision medicine and digital epidemiology.
Luciano Floridi was not the first to discuss the idea of group privacy, but he was perhaps the first to discuss it in relation to the insights derived from big data analytics. He has argued that it is important to investigate the possibility that groups have rights to privacy that are not reducible to the privacy of individuals forming such groups. In this paper, we introduce a distinction between two concepts of group privacy. The first, the “what happens in Vegas stays in Vegas” privacy (in the following: WHVSV privacy), deals with confidential information shared with the members of a group and inaccessible to (all or a specific group of) outsiders. The second, to which we shall refer as inferential privacy, deals with the inferences that can be made about a group of people defined by a feature, or combination thereof, shared by all individuals in the group. We show why we unreservedly agree with Floridi that groups can have a form of privacy that amounts to more than the mere fact of being sets of individuals each of whom has individual privacy; moreover, like Floridi, we find it plausible that at least some groups (those satisfying our definition of type-a groups) may have a right to a species of group privacy (that is, WHVSV privacy) as groups (and not just as individuals who belong to those groups). However, by turning our attention to the context of big data analytics, we show that the relevant, new notion of group privacy is one of inferential privacy. We argue that an absolute right (either of individuals or groups) to inferential privacy is implausible. We also show that many groups generated algorithmically (those satisfying our definition of type-b groups) cannot be right holders as groups (unless they become type-a groups).
Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors. We focus on the trade-off between the extent to which one can pursue indirect non-discrimination and predictive accuracy. The moral assessment of this trade-off is related to the context of application—to the consequences of inaccurate risk predictions in the insurance domain.
Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any such thing: the measures merely ensure that certain population-level ratios hold.
Digital apps using Bluetooth to log proximity events are increasingly supported by technologists and governments. By and large, the public debate on this matter focuses on privacy, with experts from both law and technology offering very concrete proposals and participating in a lively debate. Far less attention is paid to effective incentives and their fairness. This paper aims to fill this gap by offering a practical, workable solution for a promising incentive, justified by the ethical principles of non-maleficence, beneficence, autonomy and justice. This incentive is a free phone optimised for running such an app.
Clients may feel trapped into sharing their private digital data with insurance companies to get a desired insurance product or premium. However, private insurance must collect some data to offer products and premiums appropriate to the client’s level of risk. This situation creates tension between the value of privacy and common insurance business practice. We argue for three main claims: first, coercion to share private data with insurers is pro tanto wrong because it violates the autonomous choice of a privacy-valuing client. Second, we maintain that irrespective of being coerced, the choice of accepting digital surveillance by insurers makes it harder for the client to protect his or her autonomy. The violation of autonomy also makes coercing customers into digital surveillance pro tanto morally wrong. Third, having identified an economically plausible process involving no direct coercion by insurers, leading to the adoption of digital surveillance, we argue that such an outcome generates further threats against autonomy. This threat provides individuals with a pro tanto reason to prevent this process. We highlight the freedom dilemma faced by regulators who aim to prevent this outcome by constraining market freedoms and argue for the need for further moral and empirical research on this question.
Intensified and extensive data production and data storage are characteristics of contemporary western societies. Health data sharing is increasing with the growth of Information and Communication Technology platforms devoted to the collection of personal health and genomic data. However, the sensitive and personal nature of health data poses ethical challenges when data is disclosed and shared even if for scientific research purposes. With this in mind, the Science and Values Working Group of the COST Action CHIP ME ‘Citizen's Health through public-private Initiatives: Public health, Market and Ethical perspectives’ identified six core values they considered to be essential for the ethical sharing of health data using ICT platforms. We believe that using this ethical framework will promote respectful scientific practices in order to maintain individuals’ trust in research. We use these values to analyse five ICT platforms and explore how emerging data sharing platforms are reconfiguring the data sharing experience from a range of perspectives. We discuss which types of values, rights and responsibilities they entail and enshrine within their philosophy or outlook on what it means to share personal health information. Through this discussion we address issues of the design and the development process of personal health data and patient-oriented infrastructures, as well as new forms of technologically-mediated empowerment.
In this article, we defend a normative theory of prenatal equality of opportunity, based on a critical revision of Rawls's principle of fair equality of opportunity (FEO). We argue that if natural endowments are defined as biological properties possessed at birth and the distribution of natural endowments is seen as beyond the scope of justice, Rawls's FEO allows for inequalities that undermine the social conditions of a property-owning democracy. We show this by considering the foetal programming of disease and the possibility of germ-line modifications. If children of lower socioeconomic background are more likely to develop in a poor foetal environment and germ-line enhancements are available only to the rich, initial inequalities between the rich and the poor would grow, and yet FEO would be satisfied. In order to avoid the problem, we propose a revised FEO principle omitting any reference to the comparison of natural endowments. Our revised FEO requires that institutions mitigate the effects of social class on reproduction and gestation to the greatest extent compatible with parental freedoms and the value of the family.
Purpose: Cybersecurity in healthcare has become an urgent matter in recent years due to various malicious attacks on hospitals and other parts of the healthcare infrastructure. The purpose of this paper is to provide an outline of how core values of the health systems, such as the principles of biomedical ethics, stand in a supportive or conflicting relation to cybersecurity.
Design/methodology/approach: This paper claims that it is possible to map the desiderata relevant to cybersecurity onto the four principles of medical ethics, i.e. beneficence, non-maleficence, autonomy and justice, and to explore value conflicts in that way.
Findings: With respect to the question of how these principles should be balanced, there are reasons to think that the priority of autonomy relative to beneficence and non-maleficence in contemporary medical ethics could be extended to value conflicts in health-related cybersecurity.
Research limitations/implications: However, the tension between autonomy and justice, which relates to the desideratum of usability of information and communication technology systems, cannot be ignored even if one assumes that respect for autonomy should take priority over other moral concerns.
Originality/value: In terms of value conflicts, most discussions in healthcare deal with balancing efficiency and privacy, given the sensitive nature of health information. In this paper, the authors provide a broader and more detailed outline.
This paper discusses the concept of “human disenhancement”, i.e. the worsening of human individuals’ abilities and expectations through technology. The goal is to provoke ethical reflection on technological innovation outside the biomedical realm, in particular the substitution of human work with computer-driven automation. According to some widely accepted economic theories, automation and computerization are responsible for the disappearance of many middle-class jobs. I argue that, if that is the case, a technological innovation can be a cause of “human disenhancement”, globally and all things considered, even when its local and immediate effect is to increase the demand for more sophisticated human skills than the ones it replaces. The conclusion is that current innovations in the ICT sector are objectionable from a moral point of view, because they disenhance more people than they enhance.
Recent evidence of intergenerational epigenetic programming of disease risk broadens the scope of public health preventive interventions to future generations, i.e. non-existing people. Due to the transmission of epigenetic predispositions, lifestyles such as smoking or an unhealthy diet might affect the health of populations across several generations. While public policy for the health of future generations can be justified through impersonal considerations, such as maximizing aggregate well-being, in this article we explore whether there are rights-based obligations supervening on intergenerational epigenetic programming despite the non-identity argument, which challenges this rationale in the case of policies that affect the number and identity of future people. We propose that rights-based obligations grounded in the interests of non-existing people might fall upon existing people when generations overlap. In particular, if environmental exposure in F0 will affect the health of F2 through epigenetic programming, then F1 might face increased costs to address F2's condition in the future: this might generate obligations upon F0 from various distributive principles, such as the principle of equal opportunity for well-being.
This paper explores the analogy between food label information and genetic information, in order to defend the right not to know judgmental nutritional information, such as that conveyed by traffic light labels and other, more aggressive, recent proposals. Traffic light labeling judges the nutritional quality of food by means of colored flags on the front of the pack. It involves a simplification of the link between food quality and health outcomes. Unlike GDAs, it does not present the consumer with neutral nutritional information, but conveys an interpretation of the link between nutritional qualities and ..
Does biomedical enhancement challenge justice in health care? This paper argues that health care justice based on the concept of normal functioning is inadequate where enhancements are widespread. Two different interpretations of normal functioning are distinguished, the “species typical” vs. the “normal cooperator” account, and each version of the theory is shown to fail to account for certain egalitarian intuitions about the help and assistance owed to people with health needs when enhancements are widespread.
Enhancements of the human germ-line introduce further inequalities in the competition for scarce goods, such as income and desirable social positions. Social inequalities, in turn, amplify the range of genetic inequalities that access to germ-line enhancements may produce. From an egalitarian point of view, inequalities can be arranged to the benefit of the worst-off group (for instance, through general taxation), but the possibility of an indefinite growth of social and genetic inequality raises legitimate concerns. It is argued that inequalities produced by markets of germ-line enhancements are just if they are embedded in a framework of social institutions that satisfies two conditions: (i) Rawls's Difference Principle, which states that inequalities of income and wealth should benefit the worst-off group; (ii) the lexically prior 'principle of rough equality', which states that citizens’ initial life-chances should be similar enough that extreme inequalities in income, wealth and power are not produced or accumulated through institutions justified by the Difference Principle. The principle of rough equality replaces the Rawlsian principles of the Fair Value of the Political Liberties and Fair Equality of Opportunity in a post-genomic society and expresses a concern with background political equality, which is argued to be a condition of the freedom and equality of citizens that should not be traded off with material benefits. Extreme inequalities are defined in terms of political equality.
Norman Daniels argues that health is important for justice because it affects the distribution of opportunities. He claims that a just society should guarantee fair opportunities by promoting and restoring the “normal functioning” of its citizens, that is, their health. The scope of citizens' mutual obligations with respect to health is defined by a reasonable agreement that, according to Daniels, should be based on the distinction between normal functioning and pathology drawn by the biomedical sciences. This paper deals with the question of whether it is legitimate to ascribe the responsibility of defining this important moral boundary to the biomedical sciences, which Daniels regards as value neutral. Daniels appeals to Christopher Boorse's sophisticated bio-statistical theory (BST) to show the plausibility of a value-neutral distinction between normal functioning and pathology. Here I argue that a careful analysis of the concept of normal functioning, such as the one offered by Elselijn Kingma's recent critique, shows that it depends on evaluative assumptions. This, I argue, implies that Daniels's theory must give up its naturalistic commitments. In conclusion, the paper offers a detailed discussion of, and an objection to, one of Daniels's arguments in favor of a moderate form of normativism that remains too close to Boorse's naturalism.
Innovations in science and technology are often a source of public concern, but few have generated debates as intense, or exerted such popular fascination, as those surrounding genetic technologies. Unequal access to preimplantation diagnosis could give some individuals the opportunity to select children with more advantageous predispositions.
Introduction to the Ethical Perspectives Theme Issue (19/1) on Genetics and Justice, with contributions by Greg Bognar, David Hunter, Michele Loi, Oliver Feeney, Vilhjálmur Arnason, Durnin et al.
1. I am grateful to the respondents for the opportunity they have provided to clarify the concept of a libertarian right to test (LRT) and its normative implications. To sum up, I concede that genomes have a normatively salient informational aspect, that exercising the LRT may cause informational harm and violate the rights of genetically related individuals, and that this is relevant to the regulation of genetic testing. But such considerations are logically compatible with a non-absolute LRT and its libertarian justification. The LRT is practically relevant because it inverts the burden of justification, and recognising an LRT may affect the way in which other rights are protected in a conflict-of-rights case. I will try to clarify this further in what follows. 2. Consider B, an individual who is, and is aware of being, genetically related to A. Admittedly, while person A has an LRT, the interests of person B should also be protected.1 ….
This paper articulates a careful and detailed objection to the moral permissibility of postnatal abortion. Giubilini and Minerva claim that if being unable to nurture one’s newborn child without significant burdens to oneself, family or society, is a proper moral ground for the demand that the life of a fetus be terminated, then ‘after-birth abortion should be considered a permissible option for women who would be damaged by [rearing the child or] giving up their newborns for adoption.’ It will be shown that the permissibility of postnatal abortion does not follow from the argument’s premises, in particular, the premise that the newborn is not a person in the morally relevant sense.