  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • Enhancing Deliberation with Digital Democratic Innovations. Anna Mikhaylovskaya - 2024 - Philosophy and Technology 37 (1).
    Democratic innovations have been widely presented by both academics and practitioners as a potential remedy to the crisis of representative democracy. Many argue that deliberation should play a pivotal role in these innovations, fostering greater citizen participation and political influence. However, it remains unclear how digitalization affects the quality of deliberation—whether digital democratic innovations (DDIs) undermine or enhance deliberation. This paper takes an inductive approach in political theory to critically examine three features of online deliberation that matter for deliberative democracy: (...)
  • Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • The fabrics of machine moderation: Studying the technical, normative, and organizational structure of Perspective API. Yarden Skop & Bernhard Rieder - 2021 - Big Data and Society 8 (2).
    Over recent years, the stakes and complexity of online content moderation have been steadily raised, swelling from concerns about personal conflict in smaller communities to worries about effects on public life and democracy. Because of the massive growth in online expressions, automated tools based on machine learning are increasingly used to moderate speech. While ‘design-based governance’ through complex algorithmic techniques has come under intense scrutiny, critical research covering algorithmic content moderation is still rare. To add to our understanding of concrete (...)
  • “AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Christian Katzenbach - 2021 - Big Data and Society 8 (2).
    Technologies of “artificial intelligence” and machine learning are increasingly presented as solutions to key problems of our societies. Companies are developing, investing in, and deploying machine learning applications at scale in order to filter and organize content, mediate transactions, and make sense of massive sets of data. At the same time, social and legal expectations are ambiguous, and the technical challenges are substantial. This is the introductory article to a special theme that addresses this turn to AI as a technical, (...)
  • The Authority to Moderate: Social Media Moderation and its Limits. Bhanuraj Kashyap & Paul Formosa - 2023 - Philosophy and Technology 36 (4):1-22.
    The negative impacts of social media have given rise to philosophical questions around whether social media companies have the authority to regulate user-generated content on their platforms. The most popular justification for that authority is to appeal to private ownership rights. Social media companies own their platforms, and their ownership comes with various rights that ground their authority to moderate user-generated content on their platforms. However, we argue that ownership rights can be limited when their exercise results in significant harms (...)
  • Content moderation, AI, and the question of scale. Tarleton Gillespie - 2020 - Big Data and Society 7 (2):2053951720943234.
    AI seems like the perfect response to the growing challenges of content moderation on social media platforms: the immense scale of the data, the relentlessness of the violations, and the need for human judgments without wanting humans to have to make them. The push toward automated content moderation is often justified as a necessary response to the scale: the enormity of social media platforms like Facebook and YouTube stands as the reason why AI approaches are desirable, even inevitable. But even (...)
  • An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-Regulation. Thomas Ferretti - 2022 - Moral Philosophy and Politics 9 (2):239-265.
    This article explores the cooperation of government and the private sector to tackle the ethical dimension of artificial intelligence. The argument draws on the institutionalist approach in philosophy and business ethics defending a ‘division of moral labor’ between governments and the private sector. The goal and main contribution of this article is to explain how this approach can provide ethical guidelines to the AI industry and to highlight the limits of self-regulation. In what follows, I discuss three institutionalist claims. First, (...)
  • The “neo-intermediation” of large on-line platforms: Perspectives of analysis of the “state of health” of the digital information ecosystem. Isabella de Vivo - 2023 - Communications 48 (3):420-439.
    The key role played by online platforms in the neo-intermediation of the public debate requires a review of current tools for mapping the digital information ecosystem, highlighting the political nature of such an analysis: Starting from a synoptic overview of the main models of platform governance, we try to understand whether the ongoing European shift towards the Limited Government Regulation (LGR) model will be able to counterbalance the “systemic opinion power” of the giant platforms and restore the “health” of the (...)
  • Algorithmic Censorship by Social Platforms: Power and Resistance. Jennifer Cobbe - 2020 - Philosophy and Technology 34 (4):739-766.
    Effective content moderation by social platforms is both important and difficult; numerous issues arise from the volume of information, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation, social platforms are increasingly adopting automated approaches to suppressing communications that they deem undesirable. However, this brings its own concerns. This paper examines the structural effects of algorithmic censorship by social platforms to assist in developing a fuller understanding of the risks of (...)
  • Informational Quality Labeling on Social Media: In Defense of a Social Epistemology Strategy. John P. Wihbey, Matthew Kopec & Ronald Sandler - manuscript.
    Social media platforms have been rapidly increasing the number of informational labels they are appending to user-generated content in order to indicate the disputed nature of messages or to provide context. The rise of this practice constitutes an important new chapter in social media governance, as companies are often choosing this new “middle way” between a laissez-faire approach and more drastic remedies such as removing or downranking content. Yet information labeling as a practice has, thus far, been mostly tactical, reactive, (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.