Abstract
When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns: algorithms trained on data drawn predominantly from majority populations can produce less accurate or less reliable results for minorities and other disadvantaged groups.
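As a minimal illustration of this failure mode (not drawn from the paper itself), the sketch below trains a classifier on synthetic data dominated by one subgroup and then evaluates it separately on each subgroup. All data, variable names, and the scikit-learn setup are hypothetical assumptions, used only to show how under-representation in training data can translate into lower accuracy for the under-represented group.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature "patients"; the relationship between features and
    # label differs between groups (a distribution shift), so a model fit
    # mostly to one group will fit the other poorly.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: the majority group vastly outnumbers the minority group.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh samples from each group separately: accuracy is markedly
# lower for the group that was scarce in the training data.
X_maj_t, y_maj_t = make_group(1000, shift=0.0)
X_min_t, y_min_t = make_group(1000, shift=1.5)
print("majority accuracy:", accuracy_score(y_maj_t, model.predict(X_maj_t)))
print("minority accuracy:", accuracy_score(y_min_t, model.predict(X_min_t)))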
DOI 10.1017/jme.2022.13

Similar books and articles

On Algorithmic Fairness in Medical Practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
Concept Representation Analysis in the Context of Human-Machine Interactions.Farshad Badie - 2016 - In 14th International Conference on e-Society. pp. 55-61.
Detecting Racial Bias in Algorithms and Machine Learning.Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
Model Theory and Machine Learning.Hunter Chase & James Freitag - 2019 - Bulletin of Symbolic Logic 25 (3):319-332.
