7 results found.
  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
  2. A computational framework for conceptual blending. Manfred Eppe, Ewen Maclean, Roberto Confalonieri, Oliver Kutz, Marco Schorlemmer, Enric Plaza & Kai-Uwe Kühnberger - 2018 - Artificial Intelligence 256 (C):105-129.
  3. Two Approaches to Ontology Aggregation Based on Axiom Weakening. Daniele Porello, Nicolas Troquard, Oliver Kutz, Rafael Penaloza, Roberto Confalonieri & Pietro Galliani - 2018 - In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden. pp. 1942-1948.
    Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of (...)
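    The repair strategy sketched in the abstract above can be illustrated with a small, self-contained toy. The sketch below is an assumption-laden approximation, not the paper's implementation: the concept names, the reference hierarchy, the disjointness constraint, and the greedy choice of which axiom to weaken are all invented for illustration, and real axiom weakening operates over description-logic refinement operators rather than a flat parent map.

```python
# A toy sketch of axiom weakening as ontology repair (illustrative assumptions throughout,
# not the paper's implementation). Axioms are atomic subsumptions (sub, sup); a concept is
# unsatisfiable if it ends up below two declared-disjoint concepts; repair generalises the
# right-hand side of an offending axiom along a fixed reference hierarchy instead of
# deleting the axiom.

# Reference hierarchy used by the (very crude) refinement operator: child -> parent.
PARENT = {"Penguin": "Bird", "Bird": "Animal", "Animal": "Top", "Vehicle": "Top"}

DISJOINT = {frozenset({"Animal", "Vehicle"})}

def superclasses(concept, tbox):
    """Reflexive-transitive closure of the subsumption axioms starting from `concept`."""
    seen, frontier = {concept}, [concept]
    while frontier:
        current = frontier.pop()
        for sub, sup in tbox:
            if sub == current and sup not in seen:
                seen.add(sup)
                frontier.append(sup)
    return seen

def unsatisfiable(concept, tbox):
    sups = superclasses(concept, tbox)
    return any(frozenset({a, b}) in DISJOINT for a in sups for b in sups if a != b)

def weaken(axiom):
    """Generalise the right-hand side one step; at 'Top' the axiom becomes trivial."""
    sub, sup = axiom
    return (sub, PARENT.get(sup, "Top"))

def repair(tbox):
    """Greedy repair loop: while some concept is unsatisfiable, weaken one of its axioms."""
    tbox = set(tbox)
    while True:
        bad = [c for c, _ in tbox if unsatisfiable(c, tbox)]
        if not bad:
            return tbox
        culprit = next(a for a in tbox if a[0] == bad[0] and a[1] != "Top")
        tbox.remove(culprit)
        weakened = weaken(culprit)
        tbox.add(weakened)
        print(f"weakened {culprit} -> {weakened}")

# Union of two agents' views: Penguin ends up below both Animal and Vehicle, which clash.
aggregated = {("Penguin", "Bird"), ("Bird", "Animal"),
              ("Penguin", "Flyer"), ("Flyer", "Vehicle")}
print(repair(aggregated))
```

    The point of the technique is visible even in this toy: the offending axiom is generalised rather than deleted, so more of the aggregated content survives the repair.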
  4. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Troquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18-19, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
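    As a rough illustration of the second kind of interpretable model mentioned above, the sketch below distils a black-box classifier into a shallow decision tree by plain surrogate training in scikit-learn. This is not Trepan Reloaded itself, which additionally grows the tree with sampled membership queries and domain knowledge from an ontology; the dataset, model choices, and hyperparameters are assumptions made for the example.

```python
# Simplified surrogate-tree distillation: a shallow decision tree is fitted on the
# *predictions* of a black-box model so that the tree serves as a global post-hoc
# explanation of the black box's behaviour.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The opaque model we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 2. Distil it: fit an interpretable surrogate on the black box's own labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))
```

    Reporting fidelity (agreement with the black box) rather than accuracy on the true labels is the usual way to judge such a surrogate, since its job is to explain the model, not the data.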
  5. Repairing Socially Aggregated Ontologies Using Axiom Weakening. Daniele Porello, Nicolas Troquard, Roberto Confalonieri, Pietro Galliani, Oliver Kutz & Rafael Penaloza - 2017 - In PRIMA 2017: Principles and Practice of Multi-Agent Systems - 20th International Conference, Nice, France, October 30 - November 3, 2017, Proceedings. Lecture Notes in Computer Science 10621. pp. 441-449.
    Ontologies represent principled, formalised descriptions of agents’ conceptualisations of a domain. For a community of agents, these descriptions may differ among agents. We propose an aggregative view of the integration of ontologies based on Judgement Aggregation (JA). Agents may vote on statements of the ontologies, and we aim at constructing a collective, integrated ontology, that reflects the individual conceptualisations as much as possible. As several results in JA show, many attractive and widely used aggregation procedures are prone to return inconsistent (...)
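    The failure mode that motivates the paper, namely that propositionwise majority voting can turn individually consistent views into an inconsistent collective one, fits in a few lines of code. The sketch below is a generic discursive-dilemma toy over propositional formulas, not the paper's ontology-based setting; the agenda and the agents' judgements are assumptions chosen to trigger the paradox.

```python
# A minimal judgement-aggregation toy: every agent holds a consistent set of judgements
# on an agenda, yet propositionwise majority voting yields an inconsistent collective view.
# Consistency is checked by brute force over truth assignments to the atoms p and q.

from itertools import product

# Agenda: each item is (name, evaluation function over an assignment to the atoms).
AGENDA = {
    "p":            lambda v: v["p"],
    "q":            lambda v: v["q"],
    "not(p and q)": lambda v: not (v["p"] and v["q"]),
}

def consistent(accepted):
    """True iff some assignment to the atoms satisfies every accepted agenda item."""
    return any(all(AGENDA[name]({"p": p, "q": q}) for name in accepted)
               for p, q in product([True, False], repeat=2))

# Each agent accepts a subset of the agenda; every individual view is consistent.
agents = [
    {"p", "q"},             # rejects "not(p and q)"
    {"p", "not(p and q)"},  # rejects "q"
    {"q", "not(p and q)"},  # rejects "p"
]
assert all(consistent(a) for a in agents)

# Propositionwise majority: accept an item iff a strict majority of agents accepts it.
majority = {name for name in AGENDA
            if sum(name in a for a in agents) > len(agents) / 2}

print("collective view:", majority)
print("consistent?", consistent(majority))   # False: aggregation created an inconsistency
```

    In the ontology setting of the paper the agenda items are axioms and consistency is checked by a description-logic reasoner, but the aggregation step and the resulting clash have the same shape, which is what the axiom-weakening repair is then applied to.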
  6. Towards Even More Irresistible Axiom Weakening. Roberto Confalonieri, Pietro Galliani, Oliver Kutz, Daniele Porello, Guendalina Righetti & Nicolas Troquard - 2020 - In Proceedings of the 33rd International Workshop on Description Logics (DL 2020), co-located with the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020), Online Event, Rhodes, Greece.
    Axiom weakening is a technique that allows for a fine-grained repair of inconsistent ontologies. Its main advantage is that it repairs ontologies by making axioms less restrictive rather than by deleting them, employing the use of refinement operators. In this paper, we build on previously introduced axiom weakening for ALC, and make it much more irresistible by extending its definitions to deal with SROIQ, the expressive and decidable description logic underlying OWL 2 DL. We extend the definitions of (...)
  7. Using ontologies to enhance human understandability of global post-hoc explanations of black-box models. Roberto Confalonieri, Tillman Weyde, Tarek R. Besold & Fermín Moscoso del Prado Martín - 2021 - Artificial Intelligence 296 (C):103471.