12 found
  1. The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
    51 citations
  2. The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi - 2021 - AI and Society 36 (1):59–77.
    In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on (...)
    29 citations
  3. Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US. Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (6):1-25.
    Over the past few years, there has been a proliferation of artificial intelligence strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union and the United States’ AI strategies and considers the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, the extent to which the implementation of each vision is living up to (...)
    6 citations
  4. Digital Sovereignty, Digital Expansionism, and the Prospects for Global AI Governance. Huw Roberts, Emmie Hine & Luciano Floridi - 2023 - In Marina Timoteo, Barbara Verri & Riccardo Nanni (eds.), Quo Vadis, Sovereignty?: New Conceptual and Regulatory Boundaries in the Age of Digital China. Springer Nature Switzerland. pp. 51-75.
    In recent years, policymakers, academics, and practitioners have increasingly called for the development of global governance mechanisms for artificial intelligence (AI). This paper considers the prospects for these calls in light of two other geopolitical trends: digital sovereignty and digital expansionism. While calls for global AI governance promote the surrender of some state sovereignty over AI, digital sovereignty and expansionism seek to secure greater state control over digital technologies. To demystify the tensions between these trends and their potential consequences, we (...)
    2 citations
  5. The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
    1 citation
  6. Submarine Cables and the Risks to Digital Sovereignty. Abra Ganz, Martina Camellini, Emmie Hine, Claudio Novelli, Huw Roberts & Luciano Floridi - 2024 - Minds and Machines 34 (3):1-23.
    The international network of submarine cables plays a crucial role in facilitating global telecommunications connectivity, carrying over 99% of all internet traffic. However, submarine cables challenge digital sovereignty due to their ownership structure, cross-jurisdictional nature, and vulnerabilities to malicious actors. In this article, we assess these challenges, current policy initiatives designed to mitigate them, and the limitations of these initiatives. The nature of submarine cables curtails a state’s ability to regulate the infrastructure on which it relies, reduces its data security, (...)
    1 citation
  7. Artificial intelligence in support of the circular economy: ethical considerations and a path forward. Huw Roberts, Joyce Zhang, Ben Bariach, Josh Cowls, Ben Gilburt, Prathm Juneja, Andreas Tsamados, Marta Ziosi, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-14.
    The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of (...)
    1 citation
  8. Correction: Submarine Cables and the Risks to Digital Sovereignty. Abra Ganz, Martina Camellini, Emmie Hine, Claudio Novelli, Huw Roberts & Luciano Floridi - 2025 - Minds and Machines 35 (1):1-1.
  9. Digital sovereignty and artificial intelligence: a normative approach. Huw Roberts - 2024 - Ethics and Information Technology 26 (4):1-10.
    Digital sovereignty is a term increasingly used by academics and policymakers to describe efforts by states, private companies, and citizen groups to assert control over digital technologies. This descriptive conception of digital sovereignty is normatively deficient as it centres discussion on how power is being asserted rather than evaluating whether actions are legitimate. In this article, I argue that digital sovereignty should be understood as a normative concept that centres on authority (i.e., legitimate control). A normative approach to digital sovereignty (...)
  10. Correction: The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2025 - AI and Society 40 (3):2003-2003.
  11. The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2025 - AI and Society 40 (3):1469-1484.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
  12. Correction: The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-1.