Explainable Answer-set Programming

Tobias Geibinger
(TU Wien)

Interest in explainability in artificial intelligence (AI) is growing rapidly due to the near-ubiquitous presence of AI in our lives and the increasing complexity of AI systems. Answer-set Programming (ASP) is applied in many areas, among them industrial optimisation, knowledge management, and the life sciences, and is thus of great interest in the context of explainability. To ensure the successful application of ASP as a problem-solving paradigm in the future, it is crucial to investigate explanations for ASP solutions. Such an explanation generally tries to answer the question of why something is, or is not, part of the produced decision or the solution to the formulated problem. Although several explanation approaches for ASP exist, almost all of them lack support for certain language features that are used in practice. Most notably, this encompasses the various ASP extensions developed in recent years to enable reasoning over theories, external computations, or neural networks. This project aims to fill some of these gaps and contribute to the state of the art in explainable ASP. We tackle this by extending the language support of existing approaches and by developing novel explanation formalisms, such as contrastive explanations.
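As a minimal illustration (not taken from the paper) of the kind of "why" question such explanations address, consider the following clingo-style ASP program; the predicate names are hypothetical:

```
% Birds fly unless they are known to be abnormal.
flies(X) :- bird(X), not ab(X).

bird(tweety). bird(pingu).
ab(pingu).    % pingu is an abnormal bird (a penguin)
```

The single answer set contains flies(tweety) but not flies(pingu). An explanation for flies(tweety) would point to the fact bird(tweety) together with the absence of ab(tweety), while a contrastive explanation for "why flies(tweety) rather than flies(pingu)" would additionally cite the fact ab(pingu), which blocks the rule for pingu.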

In Enrico Pontelli, Stefania Costantini, Carmine Dodaro, Sarah Gaggl, Roberta Calegari, Artur D'Avila Garcez, Francesco Fabiano, Alessandra Mileo, Alessandra Russo and Francesca Toni: Proceedings 39th International Conference on Logic Programming (ICLP 2023), Imperial College London, UK, 9th July 2023 - 15th July 2023, Electronic Proceedings in Theoretical Computer Science 385, pp. 423–429.
Published: 12th September 2023.

ArXived at: https://dx.doi.org/10.4204/EPTCS.385.52