Analysing and organising human communications for AI fairness assessment

AI and Society: 1-21 (forthcoming)

Abstract

Algorithms used in the public sector, e.g., for allocating social benefits or predicting fraud, often require involvement from multiple stakeholders at various phases of the algorithm’s life-cycle. This paper focuses on the communication issues between diverse stakeholders that can lead to misinterpretation and misuse of algorithmic systems. Ethnographic research was conducted via 11 semi-structured interviews with practitioners working on algorithmic systems in the Dutch public sector, at local and national levels. Through qualitative coding analysis, we identify key elements of the communication processes that underlie fairness-related human decisions. More specifically, we analyze the division of roles and tasks, the required skills, and the challenges perceived by diverse stakeholders. Three general patterns emerge from the coding analysis: (1) policymakers, civil servants, and domain experts are less involved than developers throughout a system’s life-cycle, which leads to developers taking on the roles of decision-maker and policy advisor while potentially lacking the required skills; (2) end-users and policymakers often lack the technical skills to interpret a system’s output and rely on actors in a developer role for decisions concerning fairness issues; (3) citizens are structurally absent throughout a system’s life-cycle, which may lead to unbalanced fairness assessments that miss key input from relevant stakeholders. We formalize the underlying communication issues within such networks of stakeholders and introduce the phase-actor-role-task-skill (PARTS) model. PARTS can both (i) represent the communication patterns identified in the interviews and (ii) explicitly outline missing elements in those patterns, such as actors who lack skills or collaborators for their tasks, or tasks without qualified actors. The PARTS model can be used to analyze and design the human organizations responsible for assessing fairness in algorithmic systems, and can be extended to explore communication issues in other use cases, design potential solutions, and organize accountability with a common vocabulary.
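
Illustrative sketch: the paper presents PARTS as a conceptual model, not software. The minimal Python sketch below is a hypothetical encoding of our own (the names Actor, Task, and find_gaps are invented for illustration, not taken from the paper) showing how phases, actors, roles, tasks, and skills could be represented so that the two gap types the abstract names, tasks without qualified actors and actors lacking required skills, can be flagged automatically.

```python
from dataclasses import dataclass

# Hypothetical encoding of the PARTS model: each task belongs to a
# life-cycle phase, requires certain skills, and is carried out by
# actors acting in particular roles.

@dataclass(frozen=True)
class Actor:
    name: str
    roles: frozenset[str]    # e.g. {"developer", "policy advisor"}
    skills: frozenset[str]   # e.g. {"statistics", "domain knowledge"}

@dataclass(frozen=True)
class Task:
    name: str
    phase: str               # life-cycle phase, e.g. "design", "evaluation"
    required_skills: frozenset[str]
    assigned_actors: tuple[Actor, ...]

def find_gaps(tasks: list[Task]) -> list[str]:
    """Flag the two gap types the abstract names: tasks with no
    qualified actor, and actors who lack skills their tasks require."""
    gaps = []
    for task in tasks:
        qualified = [a for a in task.assigned_actors
                     if task.required_skills <= a.skills]
        if not task.assigned_actors:
            gaps.append(f"{task.phase}/{task.name}: no actor assigned")
        elif not qualified:
            gaps.append(f"{task.phase}/{task.name}: no assigned actor has "
                        f"all required skills {sorted(task.required_skills)}")
        for a in task.assigned_actors:
            missing = task.required_skills - a.skills
            if missing:
                gaps.append(f"{a.name} lacks {sorted(missing)} "
                            f"for {task.phase}/{task.name}")
    return gaps

# Toy example: a fairness assessment staffed only by a developer,
# echoing the abstract's pattern of developers standing in for
# less-involved domain experts.
dev = Actor("developer", frozenset({"developer"}),
            frozenset({"ML", "statistics"}))
assessment = Task("fairness assessment", "evaluation",
                  frozenset({"domain knowledge", "statistics"}), (dev,))
for gap in find_gaps([assessment]):
    print(gap)
# Prints that no assigned actor has all required skills, and that
# "developer" lacks ['domain knowledge'] for evaluation/fairness assessment.
```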


Links

PhilArchive

Similar books and articles

Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2): 225-244.
