Negotiating cultural sensitivity in medical AI

Journal of Medical Ethics (forthcoming)

Abstract

Ugar and Malele argue that generic machine learning (ML) technologies for mental health diagnosis would be challenging to implement in sub-Saharan Africa because of cultural specificities in how mental health conditions are diagnosed. For example, they note that in South Africa the appearance of ‘schizophrenia’ may be understood as a form of spiritual possession rather than as a mental disorder caused by brain dysfunction. A generic ML system is therefore likely to ‘misdiagnose’ persons whose symptomatology matches that of schizophrenia in the South African context. The authors thus claim that ‘a generic or universal design cannot be effective given the heterogeneity of value judgements in defining what mental health disorders are in different contexts’.1 Should we take this to mean that ML systems ‘should not be designed with a generalised perception of mental disorders’,1 as the authors suggest? On the contrary, my view is that generic ML can be useful, with the caveat that issues of cultural sensitivity may pose translational challenges in particular national and cultural contexts. If a generic ML system can reliably and accurately pick up on shared symptomatologies across cultures, regardless of what any individual or community believes about their actual causes, then that capacity is surely the function and value of diagnostic ML that should matter to us. Indeed, cultural gaps in aetiological understandings of mental health conditions make it all the more important to advance generic ML as a calibrating diagnostic tool. In this commentary, therefore, I make a case for generic ML in mental healthcare.

Aetiological (mis)understanding across cultures

The fundamental complexity to which Ugar and Malele direct our attention is the fact that mental health conditions are attributed to variable causal explanations, rather than a …





Author's Profile

Ji-Young Lee
University of Copenhagen
