  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • A history of AI and Law in 50 papers: 25 years of the International Conference on AI and Law. [REVIEW] Trevor Bench-Capon, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges, Daniele Bourcier, Paul Bourgine, Jack G. Conrad, Enrico Francesconi, Thomas F. Gordon, Guido Governatori, Jochen L. Leidner, David D. Lewis, Ronald P. Loui, L. Thorne McCarty, Henry Prakken, Frank Schilder, Erich Schweighofer, Paul Thompson, Alex Tyrrell, Bart Verheij, Douglas N. Walton & Adam Z. Wyner - 2012 - Artificial Intelligence and Law 20 (3):215-319.
    We provide a retrospective of 25 years of the International Conference on AI and Law, which was first held in 1987. Fifty papers have been selected from the thirteen conferences and each of them is described in a short subsection individually written by one of the 24 authors. These subsections attempt to place the paper discussed in the context of the development of AI and Law, while often offering some personal reactions and reflections. As a whole, the subsections build into (...)
  • Accountability in a computerized society. Helen Nissenbaum - 1996 - Science and Engineering Ethics 2 (1):25-42.
    This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • How Bioethics Can Shape Artificial Intelligence and Machine Learning. Junaid Nabi - 2018 - Hastings Center Report 48 (5):10-13.
    Artificial intelligence and machine learning have the potential to revolutionize the delivery of health care. But designing machine learning‐based decision support systems is not a merely technical challenge. It also requires attention to bioethical principles. As AI and machine learning advance, bioethical frameworks need to be tailored to address the problems that these evolving systems might pose, and the development of these automated systems also needs to be tailored to incorporate bioethical principles.
  • Computer knows best? The need for value-flexibility in medical AI. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system creates (...)
  • Groundhog Day for Medical Artificial Intelligence. Alex John London - 2018 - Hastings Center Report 48 (3):inside back cover.
    Following a boom in investment and overinflated expectations in the 1980s, artificial intelligence entered a period of retrenchment known as the “AI winter.” With advances in the field of machine learning and the availability of large datasets for training various types of artificial neural networks, AI is in another cycle of halcyon days. Although medicine is particularly recalcitrant to change, applications of AI in health care have professionals in fields like radiology worried about the future of their careers and have (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Legal personhood for artificial intelligences. Lawrence B. Solum - 1992 - North Carolina Law Review 70:1231.
    Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) (...)