Reuben Binns
University of Southampton
  1.
    Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
    The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to (...)
    31 citations
  2.
    Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance. Robert Gorwa, Reuben Binns & Christian Katzenbach - 2020 - Big Data and Society 7 (1):1–15.
    As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being deployed to conduct content moderation at scale by major platforms for user-generated content such as Facebook, YouTube and Twitter. This article provides an accessible technical primer on how algorithmic moderation works; examines some (...)
    5 citations
  3.
    Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data. Michael Veale & Reuben Binns - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such (...)
    15 citations