“I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease

BMC Medical Ethics 23 (1):1-14 (2022)

Abstract

Background: As the use of AI becomes more pervasive and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping (CP) to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools that employ machine learning.

Methods: Semi-structured qualitative interviews were conducted with stakeholders who design and/or work with computational phenotyping systems. The method of constant comparison was used to analyse the interview data.

Results: Interviewees emphasized the importance of establishing trust in the use of CP technology for identifying rare diseases. Trust was formulated in two interrelated ways in these data. First, interviewees talked about the importance of using CP tools within the context of a trust relationship, arguing that patients will need to trust clinicians who use AI tools, and that clinicians will need to trust AI developers, if this technology is to be adopted. Second, they described a need to establish trust in the technology itself, or in the knowledge it provides: epistemic trust. Interviewees suggested that CP tools used for the diagnosis of rare diseases might be perceived as more trustworthy if the user is able to vouch for the technology's reliability and accuracy, and if the person using or developing them is trusted.

Conclusion: This study suggests we need to take deliberate and meticulous steps to design reliable or confidence-worthy AI systems for use in healthcare. In addition, we need to devise reliable or confidence-worthy processes that would give rise to reliable systems; these could take the form of randomised controlled trials and/or systems of accountability, transparency, and responsibility that would signify the epistemic trustworthiness of these tools.

Links

PhilArchive

Similar books and articles

A Metacognitive Approach to Trust and a Case Study: Artificial Agency. Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
Trust: self-interest and the common good. Marek Kohn - 2008 - New York: Oxford University Press.
The Structure of Trust in China and the U.S. Ho-Kong Chan, Kit-Chun Joanna Lam & Pak-Wai Liu - 2011 - Journal of Business Ethics 100 (4):553-566.
