  1.
    The Puzzle of Evaluating Moral Cognition in Artificial Agents. Madeline G. Reinecke, Yiran Mao, Markus Kunesch, Edgar A. Duéñez-Guzmán, Julia Haas & Joel Z. Leibo - 2023 - Cognitive Science 47 (8):e13315.
    In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like‐for‐like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the (...)
  2.
    Building machines that learn and think for themselves. Matthew Botvinick, David G. T. Barrett, Peter Battaglia, Nando de Freitas, Darshan Kumaran, Joel Z. Leibo, Timothy Lillicrap, Joseph Modayil, Shakir Mohamed, Neil C. Rabinowitz, Danilo J. Rezende, Adam Santoro, Tom Schaul, Christopher Summerfield, Greg Wayne, Theophane Weber, Daan Wierstra, Shane Legg & Demis Hassabis - 2017 - Behavioral and Brain Sciences 40.
  3.
    What is the simplest model that can account for high-fidelity imitation? Joel Z. Leibo, Raphael Köster, Alexander Sasha Vezhnevets, Edgar A. Duéñez-Guzmán, John P. Agapiou & Peter Sunehag - 2022 - Behavioral and Brain Sciences 45:e261.
    What inductive biases must be incorporated into multi-agent artificial intelligence models to get them to capture high-fidelity imitation? We think very little is needed. In the right environments, both instrumental- and ritual-stance imitation can emerge from generic learning mechanisms operating on non-deliberative decision architectures. In this view, imitation emerges from trial-and-error learning and does not require explicit deliberation.
  4.
    Negotiating team formation using deep reinforcement learning. Yoram Bachrach, Richard Everett, Edward Hughes, Angeliki Lazaridou, Joel Z. Leibo, Marc Lanctot, Michael Johanson, Wojciech M. Czarnecki & Thore Graepel - 2020 - Artificial Intelligence 288 (C):103356.
  5.
    Learning agents that acquire representations of social groups. Joel Z. Leibo, Alexander Sasha Vezhnevets, Maria K. Eckstein, John P. Agapiou & Edgar A. Duéñez-Guzmán - 2022 - Behavioral and Brain Sciences 45.
    Humans are learning agents that acquire social group representations from experience. Here, we discuss how to construct artificial agents capable of this feat. One approach, based on deep reinforcement learning, allows the necessary representations to self-organize. This minimizes the need for hand-engineering, improving robustness and scalability. It also enables “virtual neuroscience” research on the learned representations.