Published: 13th February 2025
DOI: 10.4204/EPTCS.416
ISSN: 2075-2180

EPTCS 416

Proceedings 40th International Conference on
Logic Programming
University of Texas at Dallas, Dallas, Texas, USA, October 14-17, 2024

Edited by: Pedro Cabalar, Francesco Fabiano, Martin Gebser, Gopal Gupta and Theresa Swift

Preface

Logic Programming and Neural Models

On LLM-generated Logic Programs and their Inference Execution Methods
Paul Tarau
1
Visual Graph Question Answering with ASP and LLMs for Language Parsing
Jakob Johannes Bauer, Thomas Eiter, Nelson Higuera Ruiz and Johannes Oetsch
15
LLM+Reasoning+Planning for Supporting Incomplete User Queries in Presence of APIs
Sudhir Agarwal, Anu Sreepathy, David H. Alonso and Prarit Lamba
29
Logical Lease Litigation: Prolog and LLMs for Rental Law Compliance in New York
Sanskar Sehgal and Yanhong A. Liu
59
LP-LM: No Hallucinations in Question Answering with Logic Programming
Katherine Wu and Yanhong A. Liu
69
Neuro-Symbolic Contrastive Learning for Cross-domain Inference
Mingyue Liu, Ryo Ueda, Zhen Wan, Katsumi Inoue and Chris G. Willcocks
78

Autonomy and Agents

Architecture for Simulating Behavior Mode Changes in Norm-Aware Autonomous Agents
Sean Glaze and Daniela Inclezan
95
Policies, Penalties, and Autonomous Agents (Extended Abstract)
Vineel Tummala and Daniela Inclezan
108
Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for Autonomous Vehicles
Galileo Sartor, Adam Wyner and Giuseppe Contissa
111
Simulating Supply-Chain Contract Execution: A Multi-Agent Approach (Extended Abstract)
Long Tran, Tran Cao Son, Dylan Flynn and Marcello Balduccini
125
Modular Stochastic Rewritable Petri Nets
Lorenzo Capra
128

Explanation and Argumentation

Semantic Argumentation using Rewriting Systems (Extended Abstract)
Esteban Guerrero and Juan Carlos Nieves
135
Data2Concept2Text: An Explainable Multilingual Framework for Data Analysis Narration
Flavio Bertini, Alessandro Dal Palù, Federica Zaglio, Francesco Fabiano and Andrea Formisano
139
Counterfactual Explanations as Plans
Vaishak Belle
153

Answer Set Programming

Abduction of Domain Relationships from Data for VQA
Al Mehdi Saadat Chowdhury, Paulo Shakarian and Gerardo I. Simari
168
Graphical Conditions for the Existence, Unicity and Number of Regular Models
Van-Giang Trinh, Belaid Benhamou, Sylvain Soliman and François Fages
175
Efficient OWL2QL Meta-reasoning Using ASP-based Hybrid Knowledge Bases
Haya Majid Qureshi and Wolfgang Faber
188
Pearce's Characterisation in an Epistemic Domain
Ezgi Iraz Su
201
ASP-driven User-interaction with Clinguin
Alexander Beiser, Susana Hahn and Torsten Schaub
215

Prolog, Types, and Unification

A Prolog Program for Bottom-up Evaluation
David S. Warren
229
Regular Typed Unification
João Barbosa, Mário Florido and Vítor Santos Costa
236
Order-Sorted Intensional Logic: Expressing Subtyping Polymorphism with Typing Assertions and Quantification over Concepts
Đorđe Marković and Marc Denecker
253
A Coq Formalization of Unification Modulo Exclusive-Or
Yichi Xu, Daniel J. Dougherty and Rose Bohrer
267

Applications

Geospatial Trajectory Generation via Efficient Abduction: Deployment for Independent Testing
Divyagna Bavikadi, Dyuman Aditya, Devendra Parkar, Paulo Shakarian, Graham Mueller, Chad Parvis and Gerardo I. Simari
274
Towards Mass Spectrum Analysis with ASP (Extended Abstract)
Nils Küchenmeister, Alex Ivliev and Markus Krötzsch
288
Monitoring and Scheduling of Semiconductor Failure Analysis Labs (Extended Abstract)
Elena Mastria, Domenico Pagliaro, Francesco Calimeri, Simona Perri, Martin Pleschberger and Konstantin Schekotihin
291

Recently Published Research

Declarative AI design in Unity using Answer Set Programming (Extended Abstract)
Denise Angilica, Giovambattista Ianni, Francesco Pacenza and Jessica Zangari
295
stableKanren: Integrating Stable Model Semantics with miniKanren (Extended Abstract)
Xiangyu Guo, James Smith and Ajay Bansal
298

System Demo

Alda: Integrating Logic Rules with Everything Else, Seamlessly (System Demonstration)
Yanhong A. Liu, Scott D. Stoller, Yi Tong and Bo Lin
301

Doctoral Consortium

Generating Causally Compliant Counterfactual Explanations using ASP
Sopam Dasgupta
306
Bridging Logic Programming and Deep Learning for Explainability through ILASP
Talissa Dreossi
314
Computational methods for Dynamic Answer Set Programming
Susana Hahn
324
Relating Answer Set Programming and Many-sorted Logics for Formal Verification
Zachary Hansen
332
Answer Set Counting and its Applications
Mohimenul Kabir
345
Logical foundations of Smart Contracts
Kalonji Kalala
351
Commonsense Reasoning-Aided Autonomous Vehicle Systems
Keegan Kimbrell
358
A Category-Theoretic Perspective on Approximation Fixpoint Theory
Samuele Pollaci
365
Hybrid Answer Set Programming: Foundations and Applications
Nicolas Rühling
374
Autonomous Task Completion Based on Goal-directed Answer Set Programming
Alexis R. Tudor
381
Early Validation of High-level Requirements on Cyber-Physical Systems
Ondřej Vašíček
390
Reliable Conversational Agents under ASP Control that Understand Natural Language
Yankai Zeng
398

Preface

The 40th International Conference on Logic Programming (ICLP 24) was held in Dallas, Texas, on October 14-17, 2024, and was co-located with the 17th International Conference on Logic Programming and Non-monotonic Reasoning, held on October 11-14, 2024.

This volume contains Technical Communications in Section 1, papers from the affiliated Doctoral Consortium in Section 2, and, in Section 3, abstracts from the ICLP 24 invited talks and tutorials.

Technical Communications

ICLP 24 technical communications include several types of contributions: regular papers, short papers, and extended abstracts of regular papers. Under the rubric of technical communications we also include extended abstracts of system demos and of recently published research. The high quality of all contributions has been ensured by triple reviewing; apart from extended abstracts of recently published research, all contributions in this volume are original work.

Technical communications often represent research that is very new, and longer versions of these papers are sometimes published later in other venues. Nonetheless, the impact of technical communications can be quite high. In fact, the Alain Colmerauer 10-year Test of Time Award for ICLP 2024 was given to a technical communication, "Clingo = ASP + Control: Preliminary Report", by Martin Gebser, Roland Kaminski, Benjamin Kaufmann and Torsten Schaub, originally published as part of the Proceedings of the Thirtieth International Conference on Logic Programming in 2014.

We loosely group the technical communications in this volume as follows.

Logic Programming and Neural Models

Given the phenomenal capabilities of recent Large Language Models (LLMs) such as ChatGPT and Llama, a number of technical communications in ICLP 24 explored how logic programming can be combined with LLMs in meaningful ways, or included within neuro-symbolic approaches.

Autonomy and Agents

Another active research topic investigates the role of logic programming in autonomous, distributed and adaptive dynamic systems.

Explanation and Argumentation

An important advantage of logic compared to neural models is that logical reasoning can be explained in human terms using different techniques, including causality and argumentation. This was the topic of three papers in ICLP 24.

Answer Set Programming

As with other recent ICLPs, the topic of Answer Set Programming (ASP) was well-represented. This year, the ASP topics for technical communications included the abductive capabilities of ASP for Visual Query Answering (VQA), the regular models of an ASP program, the use of ASP for meta-reasoning with ontologies, the relations among epistemic extensions of ASP, and using ASP to drive interfaces to other ASP systems.

Prolog, Types, and Unification

New perspectives on types and unification algorithms played a part in several technical communications. These papers concerned Prolog as well as other logic-oriented systems and frameworks, including the Coq proof assistant.

Applications

Several technical communications described applications that were developed using annotation logic, abduction, constraint programming, and ASP.

Recently Published Research and Demos

Doctoral Consortium Papers

The Doctoral Consortium of ICLP 24 was jointly organized with the 17th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 24). Applications were submitted by 12 PhD students from universities in the US (5), Germany (2), Belgium, Canada, Czechia, Italy and Singapore. Their research summaries received three reviews each by senior members of the ICLP 24 and LPNMR 24 research communities, giving critical yet constructive feedback on the student contributions.

The Autumn School on Logic Programming, held in conjunction with the Doctoral Consortium, featured four tutorials presented by senior researchers.

These Autumn School presenters also shared their experiences and ideas with the PhD students during a mentoring lunch event on October 13.

Abstracts of Invited Talks and Tutorials

ICLP 24 had four invited talks and two invited tutorials.

Invited Talk

Logic Programming and Logical Algorithmics

Moshe Vardi

Rice University. Houston, TX.

Moshe Vardi presented his talk in a plenary session attended by ICLP 24 participants together with participants of the 17th International Conference on Logic Programming and Non-monotonic Reasoning.

Abstract

Logic programming, born around 1972, expresses a program as a set of Horn clauses. Computation is then performed by applying logical reasoning to that set of clauses. The approach was eventually described as "Algorithm = Logic + Control". Another approach to logic and algorithms was developed by Codd in the early 1970s. In his approach, the problem domain is expressed as a relational database, and the problem is expressed as a first-order formula, called a "query". Computation is performed by a meta-algorithm, the query-evaluation algorithm. In this talk, I will describe this approach, which I call Logical Algorithmics, in detail. I will show how this approach yielded multi-variate computational-complexity theory, which offers a more nuanced approach to complexity analysis. It also enabled the development of model-checking algorithms, which are today used in industrial semiconductor design tools.

Invited Tutorial

Logic rules and commonsense in uncertain times:
A simple unified semantics for reasoning with assurance and agreement

Y. Annie Liu

Stony Brook University. Stony Brook, NY.

Abstract

Complex reasoning problems are most clearly and easily specified using logical rules, but require recursive rules with aggregation such as count and sum and more for practical applications. Unfortunately, the meaning of such rules has been a significant challenge, with many disagreeing semantics, baffling commonsense for rigorous logic.

This tutorial examines a simple unified semantics for reasoning with assurance and agreement, and consists of three main parts:

  1. An introduction to complex reasoning problems expressed using logic rules, with recursion, negation, quantification, and aggregation; the key idea of a simple unified semantics, supporting simple expression of different assumptions; and how it unifies different prior semantics.

  2. An overview of the precise rule language; the formal semantics, called Founded Semantics and Constraint Semantics, or Founded+Constraint Semantics (FCS) for short here, supporting efficient and precise inference over aggregation even with approximation; and the properties of the semantics.

  3. An exploration of a wide range of challenging examples, including the well-known problem of company control and extended win-not-win games. FCS is simple and matches the desired results in all cases.

Additionally, we discuss how to combine logic/rigorous languages and LLMs/ML for problem solving and question answering with assurance, where a simple unified semantics is critical for its generality, power, and ease of use.

Invited Talk

Linear Algebraic Approaches to Logic Programming

Katsumi Inoue

National Institute of Informatics. Chiyoda, Japan.

Abstract

Integration of symbolic reasoning and machine learning is important for robust AI. Realizing symbolic reasoning by algebraic methods is a promising way to bridge symbolic reasoning and machine learning, since algebraic data structures have been used in machine learning.

To this end, Sakama, Inoue and Sato have defined notable relations between logic programming and linear algebra and have proposed algorithms to compute logic programs numerically using tensors. This work has been extended in various ways, to compute supported and stable models of normal logic programs, to enhance the efficiency of computation using sparse methods, and to enable abduction for abductive logic programming. A common principle in this approach is to formulate logical formulas as vectors/matrices/tensors and to apply linear algebraic operations on these elements to perform the computations of logic programming. Partial evaluation can be realized in parallel and by self-multiplication, showing the potential for exponential speedup.
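To give a flavor of this idea, consider a minimal worked example (ours, purely illustrative; the actual encodings of Sakama, Inoue and Sato also handle conjunctive bodies and normal programs). The definite program \(P = \{p \leftarrow q.\; q \leftarrow r.\; r.\}\), with atoms ordered \((p, q, r)\), can be encoded as a matrix with a 1 in the row of each rule head, and its least model is reached by iterating a thresholded matrix-vector product:

$ M = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \quad v_0 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad v_{k+1} = H(M v_k), $

where \(H\) thresholds each entry at 1 and the diagonal entry keeps the fact \(r\) true. This gives \(v_1 = (0,1,1)^T\) and \(v_2 = (1,1,1)^T\), a fixpoint encoding the least model \(\{p, q, r\}\).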

Furthermore, the idea of representing logic programs as tensors and matrices and of transforming logical reasoning into numeric computation can form the basis of differentiable methods for learning logic programs.

Invited Talk

The Anatomy of the SICStus Finite-Domain Constraint Solver

Mats Carlsson

RISE Research Institutes of Sweden. Kista, Sweden.

Abstract

Constraint programming (CP) is a powerful problem-solving paradigm with roots in combinatorics, linear programming, logic programming, and AI. A notable development in the 1980s was the fusion of constraint solving with logic programming into constraint logic programming (CLP), which extended Prolog by integrating constraints into its framework. To extend a Prolog system with constraints, a large number of algorithms and data structures must be added for tasks like domain representation, constraint propagation, search, a whole menagerie of filtering algorithms for specific constraints, and peaceful coexistence with the Prolog virtual machine and runtime system.

This talk focuses on the constraint programming support in SICStus Prolog: the key extensions to Prolog that were necessary, details of the solver architecture, and a discussion of design choices. I will try to put the work in a historical perspective and also say something about programming interfaces, use cases, and my outlook on the future of CP.

Invited Tutorial

Encoding High-Level Constraints into SAT and MIP

Neng-Fa Zhou

CUNY Brooklyn College and Graduate Center. Brooklyn, NY.

Abstract: Picat provides four solver modules (CP, SAT, MIP, and SMT) for modeling and solving constraint satisfaction and optimization problems (CSPs). This tutorial introduces the inner workings of Picat's SAT and MIP modules. PicatSAT encodes constraints into SAT based on unary and binary encodings of domain variables. PicatSAT adopts many optimizations from CP systems, language compilers, and hardware design systems for encoding primitive constraints into compact and efficient SAT encodings. PicatSAT also employs some novel algorithms for decomposing global constraints, especially graph constraints. PicatMIP, while generally not as competitive as PicatSAT on MiniZinc and XCSP benchmarks, is a good supplementary solver for CSPs. PicatMIP adopts the well-known big-M method for linearizing nonlinear constraints, and employs some optimizations for translating special nonlinear constraints into linear ones.
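To illustrate the big-M method mentioned above (a textbook sketch, not necessarily PicatMIP's exact translation): a reified bound \(b \Rightarrow (x \le c)\), with a 0/1 variable \(b\), becomes the linear constraint

$ x \le c + M\,(1 - b), $

where the constant \(M\) is chosen large enough (e.g., \(M \ge \max(x) - c\)) that the inequality is vacuous when \(b = 0\).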

Invited Talk

How Structure Shapes Logic Programming and Counting-Based Reasoning

Markus Hecher

Massachusetts Institute of Technology. Cambridge, MA.

Abstract

When can we efficiently solve combinatorially hard problems? In practice, state-of-the-art solvers can tackle instances with millions of variables, creating a significant gap between empirical performance and theoretical limits. A key factor in bridging this gap is understanding the structural properties of problem instances. In this talk, we explore how to efficiently leverage these structures, with a particular focus on the role of treewidth and the answer set programming framework. We establish tight runtime upper and lower bounds, grounded in reasonable complexity-theoretic assumptions. Special attention is given to counting-based reasoning, a computationally intensive task where structure plays a critical role. Through empirical results, we demonstrate how structural insights can drastically improve the efficiency of counting in combinatorial problem-solving. This emphasizes the importance of theoretical studies and their practical applications, showcasing how we bring theory and practice closer together.

Acknowledgements

We would like to begin by thanking everyone who together organized ICLP 24 and contributed to its success.

The ICLP 24 program committee consisted of representatives of logic programming groups in Australia, Austria, Canada, China, the Czech Republic, Finland, Germany, Greece, Israel, Italy, Japan, Netherlands, Portugal, Spain, Sweden, Turkey, the United Kingdom, and the United States. The issue editors feel fortunate for the close involvement of a responsive and energetic program committee. The program committee members were as follows.

Salvador Abreu

Mario Alviano

Nicos Angelopoulos

Joaquín Arias

Marcello Balduccini

Mutsunori Banbara

Chitta Baral

Roman Barták

Elena Bellodi

Bart Bogaerts

Roberta Calegari

Francesco Calimeri

Manuel Carro

Angelos Charalambidis

Michael Codish

Stefania Costantini

Alessandro Dal Palù

Marina De Vos

Marc Denecker

Agostino Dovier

Thomas Eiter

Esra Erdem

Wolfgang Faber

Jorge Fandinno

Johannes K. Fichte

Fabio Fioravanti

Andrea Formisano

Gerhard Friedrich

Marco Gavanelli

Martin Gebser

Laura Giordano

Ricardo Gonçalves

Gopal Gupta

Markus Hecher

Giovambattista Ianni

Daniela Inclezan

Tomi Janhunen

Matthias Knorr

Joao Leite

Michael Leuschel

Yuliya Lierler

Vladimir Lifschitz

Francesca Alessandra Lisi

Yanhong A. Liu

Marco Maratea

Viviana Mascardi

Laurent Michel

Jose F. Morales

Johannes Oetsch

Manuel Ojeda-Aciego

Simona Perri

Enrico Pontelli

Francesco Ricca

Fabrizio Riguzzi

Ricardo Rocha

Chiaki Sakama

Vitor Santos-Costa

Zeynep G. Saribatur

Torsten Schaub

Konstantin Schekotihin

Tom Schrijvers

Tran Cao Son

Mohan Sridharan

Paul Tarau

Hans Tompits

Mirek Truszczynski

German Vidal

Concepcion Vidal

Alicia Villanueva

Kewen Wang

David Warren

Felix Weitkämper

Jan Wielemaker

Stefan Woltran

Roland Yap

Jia-Huai You

Zhizheng Zhang

Yuanlin Zhang

Neng-Fa Zhou

We also thank our external reviewers:

João Barbosa

Manuel Borroto

Geoffrey Churchill

Michele Collevati

Daniele Theseider Dupre

Müge Fidan

Tobias Geibinger

Michael Gelfond

Kenji Hashimoto

Antonio Ielo

Ryan Kepler

Matthias König

Elena Mastria

Ciaran McCreesh

Samuele Pollaci

Maurizio Proietti

Abhiramon Rajasekharan

Javier Romero

Takehide Soh

Gioacchino Sterlicchio

Robbe Van den Eede

Linde Vanbesien

Huaduo Wang

We are grateful to the University of Texas at Dallas, on whose campus the conference was organized. The following organizations within the University provided direct financial support: The Department of Computer Science, The Erik Jonsson School of Engineering and Computer Science, The Office of the Vice President for Research, and The Center for Applied AI and Machine Learning.

Additionally, ICLP 2024 received funds for student support from the Artificial Intelligence Journal and the National Science Foundation.


Policies, Penalties, and Autonomous Agents (Extended Abstract)

Vineel Tummala (Miami University)
Daniela Inclezan (Miami University)

We introduce a framework for enabling policy-aware autonomous agents to reason about penalties for non-compliant behavior, and act accordingly. We use the AOPL language for policy specification and ASP for reasoning about policies and penalties. We build upon existing work by Harders and Inclezan on simulating the behavior of policy-aware autonomous agents and run tests on two different domains. We conclude that our framework produces higher quality plans than the previous approach.

Introduction

In this paper we explore autonomous agents that operate in dynamic environments governed by policies or norms. We introduce a framework that enables these policy-aware agents to reason about potential penalties for non-compliance and generate suitable plans for their goals. Unlike some of the previous work that only focused on compliant agents (e.g., [7,1]), we believe that it is important to study the different nuances of non-compliant agent behavior for two reasons. First, autonomous agents may be tasked to accomplish high-stakes goals (e.g., assist in rescue operations) that may only be achievable if one or more non-compliant actions are executed as part of the plan. Second, this can be useful to policy makers if policy-aware agents are used to model human behavior, since humans do not always comply with norms. In our proposed framework, policies are specified in the Authorization and Obligation Policy Language (AOPL) by Gelfond and Lobo [3], and implemented in Answer Set Programming (ASP). We expand AOPL to enable the representation of, and reasoning about, penalties that may be incurred for non-compliance with a policy. Previous work on policy-aware autonomous agents [6,4] defined various attitudes towards compliance with policies, called behavior modes. Non-compliant behavior was acceptable as part of the Risky behavior mode. However, in that framework the different plans containing non-compliant actions were ranked solely based on the length of the plan. Instead, our framework is capable of distinguishing between different non-compliant plans via penalties, so that an agent that disobeys certain policies to achieve its goal can do so with minimum repercussions. Based on our experiments, we conclude that our framework produces better quality plans. At least for some domains, our approach is also more efficient.

Background: Policy-Specification Language AOPL

Gelfond and Lobo [3] designed the Authorization and Obligation Policy Language AOPL for specifying policies for an intelligent agent acting in a dynamic environment. A policy is a collection of authorization and obligation statements. An authorization indicates whether an agent's action is permitted or not, and under which conditions. An obligation describes whether an agent is obligated or not obligated to perform a specific action under certain conditions. An AOPL policy assumes that the agent's environment is described in an action language such as ALd [2]. An ALd system description defines the domain's transition diagram, whose states are complete and consistent sets of fluent literals and whose arcs are labeled by actions. The signature of an AOPL policy includes (a) the signature of the dynamic system description (which consists of predicates for sorts, fluents, and actions) and (b) predicates permitted for authorizations, obl for obligations, and prefer for specifying preferences between authorizations or obligations. An AOPL policy P is a finite collection of statements of the form:

$ \begin{array}{llll} & \ \ permitted(e) & \textbf{ if } \ cond & (1a)\\ & \neg permitted(e) & \textbf{ if } \ cond & (1b)\\ & \ \ obl(h) & \textbf{ if } \ cond & (1c)\\ & \neg obl(h) & \textbf{ if } \ cond & (1d)\\ d: \textbf{normally } & \ \ permitted(e) & \textbf{ if } \ cond & (1e)\\ d: \textbf{normally } & \neg permitted(e) & \textbf{ if } \ cond & (1f)\\ d: \textbf{normally } & \ \ obl(h) & \textbf{ if } \ cond & (1g)\\ d: \textbf{normally } & \neg obl(h) & \textbf{ if } \ cond & (1h)\\ & \ \ \textit{prefer}(d_i, d_j) & & (1i) \end{array} $

where e is an elementary action; h is a happening (i.e., an elementary action or its negation); cond is a set of atoms of the signature, except for atoms containing the predicate prefer; d appearing in (1e)-(1h) denotes a defeasible rule label; and \(d_i\), \(d_j\) in (1i) refer to distinct defeasible rule labels from P. Rules (1a)-(1d) encode strict policy statements, while rules (1e)-(1h) encode defeasible statements (i.e., statements that may have exceptions). Rule (1i) captures priorities between defeasible statements only.

Penalization Framework for Policy-Aware Agents

We extend the AOPL syntax by a new type of statement for penalties:

$penalty(r, p)\ \textbf{if}\ cond_p$

where r is the label of the prohibition or obligation rule for which the penalty is specified, p stands for the number of penalty points imposed if the rule r is broken, and \(cond_p\) is a collection of static literals. As in Inclezan's work [5], we assume that all AOPL rules are labeled, including strict ones, which is not the case in the original definition of the language. For instance, here is a strict policy rule labeled r6(L) and its associated 3-point penalty. The rule says that the agent is obligated to stop at a location L whenever there are pedestrians crossing at that location:

$ \begin{array}{l} r6(L): obl(stop(L)) \ \ \textbf{if} \ \ pedestrians\_are\_crossing(L)\\ penalty(r6(L), 3) \end{array} $
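For illustration, one straightforward way to render this rule and its penalty in ASP could look as follows (a minimal sketch with assumed predicates such as holds/2 for fluents and step/1 for time steps; the paper's actual encoding reifies policy rules, as shown further below):

% Hypothetical ASP rendering of the strict rule r6(L) and its penalty.
obl(stop(L), I) :- holds(pedestrians_are_crossing(L), I), location(L), step(I).
penalty(r6(L), 3) :- location(L).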

Multiple penalty values can be associated with the same rule, based on different gravity levels. Let us consider the following defeasible policy rule \(r1(L_1, L_2, S, S_1)\), saying that normally one is not permitted to exceed the speed limit by more than 5 mph if the speed limit is under 55 mph:

$ \begin{array}{lrl} r1(L_1, L_2, S, S_1) : & \textbf{normally} & \neg permitted(drive(L_1, L_2, S)) \\ & \textbf{if} & speed\_limit(S_1), S > S_1 + 5, S_1 < 55 \end{array} $

The various levels of penalties for 1, 2, and 3 points respectively are assigned to the rule in AOPL as:

$ \begin{array}{lll} penalty(r1(L_1, L_2, S, S_1),1) & \textbf{if} & S - S_1 < 10.\\ penalty(r1(L_1, L_2, S, S_1),2) & \textbf{if} & S - S_1 \geq 10, \ S - S_1 < 20.\\ penalty(r1(L_1, L_2, S, S_1),3) & \textbf{if} & S - S_1 \geq 20. \end{array} $

Next, we develop ASP rules that can identify the penalties an agent incurs at each time step and specify which policy rule has been violated. We do so by introducing a predicate add_penalty(r, p, i), which says that a penalty of p points should be added to the total for breaking policy rule r at time step i. This predicate's definition addresses three scenarios: (1) when a prohibited action is included in the plan; (2) when a required action is missing from the plan; and (3) when an action that the agent is obligated not to execute is included in the plan.

$ \begin{array}{l} add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, \neg permitted(E)), \ occurs(E, I), \ penalty(R, P)\\ add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, obl(E)), \ \neg occurs(E, I), \ penalty(R, P)\\ add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, obl(\neg E)), \ occurs(E, I), \ penalty(R, P) \end{array} $

where R is a rule, I is a time step, E is an elementary agent action, and P is a penalty.

The overall penalty for a plan is captured by the predicate cumulative_penalty defined using the #sum aggregate of CLINGO, and similarly for cumulative time to complete a plan:

$ \begin{array}{lll} cumulative\_penalty(N) & \leftarrow & \#sum\{P, R, I: add\_penalty(R, P, I)\} = N \end{array} $

An agent can either prioritize minimizing the cumulative penalty or minimizing the cumulative time when searching for an optimal plan.
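In clingo syntax, this choice can be sketched as follows (illustrative; add_penalty/3 and the #sum aggregate are taken from the paper, but the priority levels @1/@2 are our assumption):

% Total penalty of a plan, via the #sum aggregate.
cumulative_penalty(N) :- N = #sum{ P,R,I : add_penalty(R,P,I) }.
% Minimize penalties first (@2), then plan length (@1).
#minimize{ P@2,R,I : add_penalty(R,P,I) }.
#minimize{ 1@1,A,I : occurs(A,I) }.

Swapping the two priority levels yields an agent that prioritizes completion time over penalties.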

Findings from Experiments

Our framework enhances the specification and simulation of agent behavior compared to the previous work by Harders and Inclezan (HI framework) [4], which lacked the concept of penalties. We tested our framework on the Rooms Domain [4] and observed significant improvements in time efficiency. When testing it on a newly developed Traffic Norms Domain, in which selecting an appropriate driving speed was fundamental, our framework produced higher quality plans. The HI framework chose random speeds, as it was penalty-agnostic. The choice of appropriate speeds resulted in longer execution times for our framework, as it required more optimization cycles compared to the HI framework.

References

  1. Natasha Alechina, Mehdi Dastani & Brian Logan (2012): Programming norm-aware agents. In: Proceedings of AAMAS'12 - Volume 2, Richland, SC, pp. 1057-1064, doi:10.5555/2343776.2343848.
  2. Michael Gelfond & Daniela Inclezan (2013): Some properties of system descriptions of ALd. J. Appl. Non Class. Logics 23(1-2), pp. 105–120, doi:10.1080/11663081.2013.798954.
  3. Michael Gelfond & Jorge Lobo (2008): Authorization and Obligation Policies in Dynamic Systems. In: Maria Garcia de la Banda & Enrico Pontelli: Logic Programming, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 22–36, doi:10.1007/978-3-540-89982-2_7.
  4. Charles Harders & Daniela Inclezan (2023): Plan Selection Framework for Policy-Aware Autonomous Agents. In: Sarah Alice Gaggl, Maria Vanina Martinez & Magdalena Ortiz: Proceedings of JELIA'23, Lecture Notes in Computer Science 14281. Springer, pp. 638–646, doi:10.1007/978-3-031-43619-2_43.
  5. Daniela Inclezan (2023): An ASP Framework for the Refinement of Authorization and Obligation Policies. Theory and Practice of Logic Programming 23(4), pp. 832-847, doi:10.1017/S147106842300011X.
  6. John Meyer & Daniela Inclezan (2021): APIA: An Architecture for Policy-Aware Intentional Agents. In: Proceedings of ICLP'21 (Tech. Comm.), EPTCS 345, pp. 84–98, doi:10.4204/EPTCS.345.23.
  7. Nir Oren, Wamberto Vasconcelos, Felipe Meneguzzi & Michael Luck (2011): Acting on Norm Constrained Plans. In: João Leite, Paolo Torroni, Thomas Ågotnes, Guido Boella & Leon van der Torre: Computational Logic in Multi-Agent Systems. Springer, pp. 347–363, doi:10.1007/978-3-642-22359-4_24.

Simulating Supply-Chain Contract Execution: A Multi-Agent Approach (Extended Abstract)

Long Tran (New Mexico State University)
Tran Cao Son (New Mexico State University)
Dylan Flynn (Saint Joseph's University)
Marcello Balduccini (Saint Joseph's University)

Supply chains exhibit complex dynamics and intricate dependencies among their components, whose understanding is crucial for addressing the challenges highlighted by recent global disruptions. This paper presents a novel multi-agent system designed to simulate supply chains, linking reasoning about dynamic domains and multi-agent systems to reasoning about the high-level primitives of the NIST CPS Framework. Our approach synthesizes existing research on supply chain formalization and integrates these insights with multi-agent techniques, employing a declarative approach to model interactions and dependencies. The simulation framework models a set of autonomous agents within a partially observable environment, whose interactions are dictated by contracts. The system dynamically reconciles agents' actions, assessing their feasibility and consequences. Based on the state of the domain, the simulation framework also draws conclusions about the high-level notions of requirements and concerns of the NIST CPS Framework, which provide a uniform and domain-agnostic vocabulary for the understanding of such complex systems as supply chains.

1 Introduction

Building on our previous work on formalizing supply chains and the underlying contracts [2], this paper introduces a simulation of supply-chain dynamics that relies on a view of the supply chain as a multi-agent system. Central to our approach is the use of formal contracts, which define the obligations and interactions between agents within the simulation environment. Our simulation integrates an additional knowledge layer based on the National Institute of Standards and Technology's Cyber-Physical Systems (NIST CPS) Framework [1]. The CPS Framework makes it possible to link the low-level view of contracts to the high-level aspects of stakeholders' requirements and concerns about the supply chain as a whole. By integrating a multi-agent simulation with the structured approach of the NIST CPS Framework, we aim to capture the nuanced interplay of contractual obligations, agent behaviors, and their ramifications on the supply chain, providing insights into the potential points of failure and enabling strategies to improve resilience. The agents and the simulation environment are modeled using declarative techniques through Answer Set Programming (ASP) [3]. This approach allows for a clear specification of the logic governing agent behaviors and their contractual interactions. This paper presents not only the theoretical underpinnings of our simulation model but also a practical use case that illustrates its potential. Through these simulations, stakeholders can better understand the critical dependencies within their supply chains, evaluate the robustness of contractual arrangements, and explore strategies to enhance overall resilience.

2 A Distributed Multi-Agent Simulator

Two main components of the system are the environment simulator and the agent program. The code of the system is available on GitHub at github.com/ltran1612/Research_CPS_SupplyChain.

Environment Simulator. The environment simulator consists of two main parts: the controller and the reasoning engine. The reasoning engine is a logic program that takes a state (of the environment) and a set of actions and determines whether the actions can be executed. If they can, the reasoning engine computes the resulting state of the world. The environment simulator works with the global domain, denoted by Denv, which is the union of the domains of all agents. The main portion of the code of the reasoning engine is for computing the effects of a set of actions on a state of the environment and is similar to the code for computing a plan in [5]. The controller is responsible for all the communication between the environment and the agents (e.g., receiving actions that agents execute, informing agents about the changes in their local states). Besides disallowing the execution of incompatible actions, the simulator can also disrupt the execution of a supply chain by randomly disallowing the execution of some action. This forces the agents whose actions are not executed to replan with the new local state.
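The flavor of this effect computation can be sketched in clingo as follows (a minimal, hypothetical fragment with simplified predicates; the actual engine follows the encoding style of [5] and the richer causes-style facts shown in the case study below):

% Direct effects: an executed action sets the fluent value it causes.
holds(F, V, I+1) :- occurs(A, I), causes(A, F, V), step(I).
% Inertia: fluents not affected at step I keep their values.
holds(F, V, I+1) :- holds(F, V, I), not changed(F, I), step(I).
changed(F, I) :- occurs(A, I), causes(A, F, _).
% Reject sets of actions that are not executable together.
:- occurs(A, I), impossible(A, I).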

Agent Program. Each agent program consists of the following components: the action domain, the set of rules for reasoning about concerns of the agent, the set of rules for reasoning about the agent's actions as well as planning, and a controller. The controller is responsible for the communication of the agent with the environment. The controller's behaviour is described by the pseudocode given in Algorithm 1. The set of rules for reasoning about the agent's actions and planning is similar to the code for planning as described in [5].

Algorithm 1: Overall Behavior of the Agent Program
Require: agent ID, action domain, set of contracts
  register ID with the environment and wait for acknowledgement
  send the action domain to the environment and wait for acknowledgement
  step = 0
  generate a plan p for the set of contracts
  while true do
     send p[step] (the step-th action of p) to environment
     wait for response (locals - the local state) from the environment
     if locals ≠ ⊥ then      ▷ all actions were executed successfully
         update the current state with locals
         step = step + 1
     else      ▷ some action cannot be executed
         generate a new plan p for the agent
         step = 0
     end if
  end while
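The plan generation invoked by this controller ("generate a plan p") follows the standard answer-set-planning pattern of [5]; a minimal clingo sketch (with illustrative names and horizon) is:

#const h = 10.   % illustrative planning horizon
% Guess at most one agent action per step.
{ occurs(A, I) : action(A) } 1 :- step(I), I < h.
% The contract-derived goal must hold at the horizon.
:- not goal(h).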

Communications. We used the publish/subscribe architecture (specifically the MQTT protocol; see [6] for details) to facilitate the communication between agents and the environment. Each entity sends and receives messages by topics through a publish/subscribe broker component (a middleman) [4]. Communication between an agent (with a unique AgentID) and the environment is done using two topics: "env/AgentID" and "for/AgentID". The former is used by the agent to send messages to the environment, and the latter to receive messages from it.

3 Case Study: Supply Chain Simulation

As a case study, we leveraged a dataset developed by the Observatory of Economic Complexity (OEC, https://oec.world/en), an organization that collects and analyzes international trade data. The dataset contains data about international imports and exports, from which we extracted part of an automotive supply chain. The scenario involves 8 agents with 7 contracts containing a total of 21 clauses, 8 supply chain requirements, and 5 concerns. An example of a contract clause from the scenario is: C1: A responsible_for produced(vehicle_parts, 9K) when by_week 4. Agent A is called speedy_auto_parts. The private parts of the contract map C1 to various requirements, addressing concerns of the CPS ontology. An example is: C1: material-safe-for-production. In this scenario, each agent can perform 4 types of actions: produce, deliver, pay, and receive. Each also keeps track of relevant fluents, like the total amount of payments made. The corresponding domains are encoded using a template. For example, the action produce of A and related fluents are specified by rules including:

% Agent speedy_auto_parts can produce the item vehicle_parts.
produce_item(speedy_auto_parts, vehicle_parts).
% An agent has a produce action for every producible item and amount.
action(Agent, produce, (Item, Amount)) :- produce_item(Agent, Item), number(Amount).
% Executing produce increases the agent's "produced" fluent by the given value.
causes(Agent, produce, Value, produced, increase, ()) :- action(Agent, produce, Value).

A sample run of the simulator with the OEC agents progresses as follows. At first, the agents register with the environment and send their information (action domains and initial state) to the environment. At each step i, the agents send to the environment the actions they plan to execute. The environment computes the next state, sending back to the agents their local states, and waits for another set of actions. For example, at step 0, A has 0 vehicle_parts and plans to take the action that produces 9,000 parts. The global state of the environment records at step 1 that A now has 9,000 vehicle_parts, which is reflected in the local state of A as well. At the end of each iteration, the system determines which clauses are satisfied. For example, at step 0 no clauses are satisfied; at step 1, clause C1 is satisfied. In turn, the system uses this information to infer which requirements and concerns are satisfied.

Conclusion

In this paper, we presented a novel multi-agent simulation framework designed to address the complexities and challenges inherent in modern supply chains. We demonstrate that standard ASP programs such as planning or diagnosis modules can be integrated into a system for monitoring contract executions. By integrating the NIST CPS Framework with advanced contract formalization techniques, we have developed a robust system capable of simulating diverse supply chain scenarios and assessing the impact of various types of disruptions. Our evaluation on a realistic case study demonstrates that the framework not only enhances the understanding of supply chain dynamics but also provides actionable insights into improving resilience and reliability. Understandably, a system utilizing ASP technology will inherit all of its problems (e.g., grounding, scalability). However, the prototype works fine with the use cases and appears acceptable to the owner of the use cases. Identifying the limits of the current system (e.g., in terms of the complexity of the domains or contracts it can handle) is an interesting issue and is our immediate future work.

Acknowledgement. Portions of this publication and research effort are made possible through the help and support of NIST via cooperative agreement 70NANB21H167.

References

  1. Edward Griffor, Christopher Greer, David A. Wollman & Martin J. Burns (2017): Framework for cyber-physical systems: volume 1, overview. doi:10.6028/NIST.SP.1500-201.
  2. Dylan Flynn, Chasity Nadeau, Jeannine Shantz, Marcello Balduccini, Tran Cao Son & Edward Griffor (2023): Formalizing and Reasoning about Supply Chain Contracts between Agents. In: Proc. 25th PADL, LNCS 13880, doi:10.1007/978-3-031-24841-2_10.
  3. Michael Gelfond & Vladimir Lifschitz (1991): Classical Negation in Logic Programs and Disjunctive Databases. New Generation Computing 9, pp. 365-385, doi:10.1007/BF03037169.
  4. B. Reselman (2021): The pros and cons of the Pub-Sub architecture pattern. Red Hat. Available at https://www.redhat.com/architect/pub-sub-pros-and-cons.
  5. Tran Cao Son, Enrico Pontelli, Marcello Balduccini & Torsten Schaub (2022): Answer Set Planning: A Survey. Theory and Practice of Logic Programming, pp. 1-73, doi:10.1017/S1471068422000072.
  6. M. Yuan (2021): Getting to know MQTT. IBM. Available at https://developer.ibm.com/articles/iot-mqtt-why-good-for-iot/.

Semantic Argumentation using Rewriting Systems (Extended Abstract)

Esteban Guerrero (Department of Computing Science, Umeå University, Sweden)
Juan Carlos Nieves (Department of Computing Science, Umeå University, Sweden)

Abstract

In this article, we introduce a general framework for structured argumentation that provides consistent and well-defined justifications both for conclusions that can be inferred with certainty and for conclusions that cannot, which we call semantic and NAF-arguments, respectively. We propose so-called semantic argumentation, guaranteeing well-known principles for quality in structured argumentation, with the ability to generate semantic and NAF-arguments: those where the conclusion atoms are semantically interpreted as true, and those where the conclusion is assumed to be false. This framework is defined on the set of all logic programs in terms of rewriting systems based on a confluent set of transformation rules, the so-called Confluent Logic Programming Systems, making this approach a general framework. Our implementation, a semantic argumentation solver, is available as open source.

Motivation

We propose a general argument construction based on the partial interpretation of programs using different families of logic programming semantics induced by rewriting systems [6]. Rewriting rules are used to replace parts of a logic program based on the concept of a normal form, which is the least expression of a program that cannot be rewritten any further [9]. For example, given a program with the only rule \(innocent(x)\) \(\leftarrow \ not\ guilty(x)\), current structured argumentation approaches [10] generate the only consistent argument: \(\langle \underbrace{\{ innocent(x) \leftarrow \ not \ guilty(x) \}}_{Support}, \ \underbrace{innocent(x)}_{Conclusion} \rangle\), expressing that person \(x\) is innocent if \(x\) cannot be proved guilty. However, domain applications that need argument-based explanations also require structured and well-defined reasons why \(x\) is not guilty (\(not \ guilty(x)\)). We emphasize the role of investigating computational mechanisms that can also build arguments justifying conclusions based on the atoms that are inferred as false, e.g., to state that there is certainty in affirming that the guiltiness of \(x\) is false (there is no evidence), and therefore \(x\) must be innocent, i.e., \(\langle \underbrace{\{ innocent(x) \leftarrow \ not \ guilty(x) \}}_{Support}, \ \underbrace{not \ guilty(x)}_{Conclusion} \rangle\). These types of arguments have been less explored in formal argumentation theory, except for assumption-based argumentation (ABA) [8].

Syntax and semantics

We use propositional logic with the connectives \(\wedge\), \(\leftarrow\), \(\;not\), and \(\top\), where \(\wedge\) and \(\leftarrow\) are 2-place connectives, \(\;not\) is a 1-place connective, and \(\top\) is a 0-place connective. The negation symbol not is regarded as the so-called negation as failure (NAF). We follow standard logic programming syntax, e.g., [5]; for lack of space we do not include some basic and well-established syntax.

An interpretation of the signature \(\mathcal{L}_P\) is a function from \(\mathcal{L}_P\) to {false, true}. A partial interpretation of \(\mathcal{L}_P\) is a pair \(\langle I_1, I_2\rangle\) of sets with \(I_1 \cup I_2 \subseteq \mathcal{L}_P\). We use \(\texttt{SEM}(P) = \langle P^{true}, P^{false} \rangle\), where \(P^{true} := \{ p \mid p \leftarrow \top \in P\}\) and \(P^{false} := \{ p \mid p \in \mathcal{L}_P \setminus \texttt{HEAD}(P)\}\); \(\texttt{SEM}(P)\) is also called the model of \(P\) [6]. We use three-valued semantics that are characterized by rewriting systems following a set of Basic Transformation Rules for Rewriting Systems (see details in [6]); those rules are named RED+, RED-, Success, Failure, Loop, SUB, and TAUT. Two rewriting systems (\(\mathcal{CS}\)) can then be defined based on these basic transformations: \(CS_0 = \{RED^+, RED^-, Success, Failure, Loop\}\), which induces the WFS [3], and \(CS_1 = CS_0 \cup \{SUB,TAUT,LC\}\), which induces WFS+ [6]. The normal form of a normal logic program \(P\) with respect to a rewriting system \(\mathcal{CS}\) is denoted by \(\mathit{norm}_\mathcal{CS}(P)\). Every rewriting system \(\mathcal{CS}\) induces a 3-valued semantics \(\texttt{SEM}_{\mathcal{CS}}\) defined as \(\texttt{SEM}_{\mathcal{CS}}(P):= \texttt{SEM}(\mathit{norm}_\mathcal{CS}(P))\). To simplify the presentation, the entailment \(\models_{\texttt{SEM}_{\mathcal{CS}}}\) applied to a logic program \(P\) is defined by \(\texttt{SEM}_{\mathcal{CS}}(P)=\langle T,F\rangle\), in which \(P \models_{\texttt{SEM}_{\mathcal{CS}}^{T}} a\) if and only if \(a \in T\), and similarly \(P \models_{\texttt{SEM}_{\mathcal{CS}}^{F}} a\) if and only if \(a \in F\). We use the entailments \(\models_{\texttt{SEM}_{\mathcal{CS}_0}}\) and \(\models_{\texttt{SEM}_{\mathcal{CS}_1}}\) for the confluent rewriting systems \(CS_0\) and \(CS_1\), respectively, and the form \(\models_{\texttt{SEM}_{\mathcal{CS}}}\) to indicate that any rewriting system can be used.
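As a small worked example (ours), take the program from the motivation, \(P = \{innocent(x) \leftarrow not \ guilty(x)\}\), with \(\mathcal{L}_P = \{innocent(x), guilty(x)\}\). Since \(guilty(x)\) occurs in no rule head, the transformation \(RED^+\) deletes \(not \ guilty(x)\) from the body, giving the normal form \(\mathit{norm}_{\mathcal{CS}_0}(P) = \{innocent(x) \leftarrow \top\}\), and hence

$ \texttt{SEM}_{\mathcal{CS}_0}(P) = \langle \{innocent(x)\}, \{guilty(x)\} \rangle, $

i.e., \(P \models_{\texttt{SEM}_{\mathcal{CS}_0}^{T}} innocent(x)\) and \(P \models_{\texttt{SEM}_{\mathcal{CS}_0}^{F}} guilty(x)\).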

Semantic and NAF-arguments

Let us introduce a formal definition of semantic arguments.

Definition 1 (Semantic argument). Given a normal logic program \(P\) and \(S \subseteq P\). \(Arg_P = \langle S, \;g \rangle\) is a semantic argument under \(\texttt{SEM}_{\mathcal{CS}}\) w.r.t. \(P\), if the following conditions hold true:

  1. \(S \models_{\texttt{SEM}_{\mathcal{CS}}^{T}} g\)

  2. \(S\) is minimal w.r.t. the set inclusion satisfying 1.

We denote these semantic arguments by \(\mathcal{A}rg^{+}\). Condition 1 states that the interpretation of the conclusion \(g\) is true w.r.t. \(T\) in \(\texttt{SEM}_{\mathcal{CS}}(S)\). Condition 2 in Definition 1 guarantees the minimality of the support.

Now, let us define NAF-arguments as follows:

Definition 2 (NAF-arguments).

Given a normal logic program \(P\) and \(S \subseteq P\). \(Arg_P = \langle S, \; not \;g \rangle\) is a NAF-argument under the \(\texttt{SEM}_{\mathcal{CS}}\) w.r.t. \(P\), if the following conditions hold true:

  1. \(S \models_{\texttt{SEM}_{\mathcal{CS}}^{F}} g\),

  2. \(S\) is minimal w.r.t. the set inclusion satisfying 1.

Condition 1 in Definition 2 is the interpretation of the conclusion w.r.t. \(\models_{\texttt{SEM}_{\mathcal{CS}}^{F}}\); the set of all NAF-arguments is denoted by \(\mathcal{A}rg^{-}\). The addition of \(\; not\) in the conclusion of a NAF-argument stresses that such an atom is interpreted as false by \(\texttt{SEM}_{\mathcal{CS}}\).
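Continuing the worked example above: with \(S = \{innocent(x) \leftarrow not \ guilty(x)\}\) and \(\texttt{SEM}_{\mathcal{CS}_0}(S) = \langle \{innocent(x)\}, \{guilty(x)\} \rangle\), Definition 1 yields the semantic argument \(\langle S, \ innocent(x) \rangle \in \mathcal{A}rg^{+}\), while Definition 2 yields the NAF-argument \(\langle S, \ not \ guilty(x) \rangle \in \mathcal{A}rg^{-}\); the singleton support is trivially minimal in both cases.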

Example 1.

Let us consider a program \(P_3\) for building semantic and NAF-arguments considering \(\mathcal{CS}_0\) and \(\mathcal{CS}_1\).

We build semantic and NAF-arguments as follows: 1) get the related clauses of atoms (\(S_i\)); 2) for every set of related clauses compute \(\texttt{SEM}_{\mathcal{CS}_{0}}(S_i)\) and \(\texttt{SEM}_{\mathcal{CS}_{1}}(S_i)\); 3) the support (every \(S_i\)) is joined to the conclusion.\(^1\) Then, the following sets of arguments are built considering \(\mathcal{CS}_0\) and \(\mathcal{CS}_1\):

Case \(\mathcal{CS}_0\): \(\mathcal{A}rg_{P_{3}} = \{A_1^{+},A_2^{+},A_3^{+},A_1^{-},A_2^{-},A_3^{-},A_4^{-},A_6^{-}\}\).

Case \(\mathcal{CS}_1\): \(\mathcal{A}rg_{P_{3}} = \{A_1^{+},A_2^{+},A_3^{+},A_5^{+},A_1^{-},A_2^{-},A_3^{-},A_4^{-},A_6^{-}\}\).

An effect of interpreting argument supports under \(\texttt{SEM}_{\mathcal{CS}}\) is that some atoms (or sets of them) are evaluated in opposition to other arguments (e.g., \(A_1^{+} = \langle S_2,a \rangle\) and \(A_1^{-} = \langle S_1,\; not \; a \rangle\) in Example 1), suggesting a semantic attack relationship.

Definition 3 (Semantic attack). Let \(A = \langle S_A, a\rangle \in \mathcal{A}rg^{+}\), \(B = \langle S_B, not \; b \rangle \in \mathcal{A}rg^{-}\) be two semantic arguments, where \(\texttt{SEM}_{\mathcal{CS}}(S_A) = \langle T_A, F_A \rangle\) and \(\texttt{SEM}_{\mathcal{CS}}(S_B) = \langle T_B, F_B \rangle\). We say that \(A\) attacks \(B\) if there is an atom \(x\) such that \(x \in T_A\) and \(x \in F_B\), denoted \(\texttt{attacks}(A, B)\).

Lemma 1. Semantic and NAF-arguments built from any normal logic program are always consistent.

Definition 4 (Semantic Argumentation Framework (SAF)). Let \(P\) be a normal program. Then, a semantic argumentation framework is the tuple: \(SAF_{P}= \langle \mathcal{A}rg_P, \mathcal{A}tt\rangle\)

We can straightforwardly extend the definitions of argumentation semantics in [7] as follows:

Definition 5. Let \(SAF_{P}= \langle \mathcal{A}rg_P, \mathcal{A}tt\rangle\) be a semantic argumentation framework. An admissible set of arguments \(S \subseteq AR\) is:

Example 2.

Let us consider \(P_5= \{a \leftarrow not \ b; \ b \leftarrow not \ a; \ c \leftarrow not \ c, not \ a; \ d \leftarrow not \ d, not \ b\}\). \(\texttt{SEM}_{\mathcal{CS}}\) will remove the rules involving atoms \(c\) and \(d\). Then, applying Definition 4, we have the framework: \(SAF_{P_5} =\) \(\langle \{A_{6}^{-} = \langle \{a \leftarrow not \ b\}, \; not \ a \rangle\), \(A_{6}^{+} = \langle \{b \leftarrow not \ a\}, \; b \rangle\), \(A_{5}^{-} = \langle \{b \leftarrow not \ a\}, \; not \ b \rangle\), \(A_{5}^{+} = \langle \{a \leftarrow not \ b\}, \; a \rangle \}, \ \texttt{attacks}(A_{5}^{+},A_{6}^{+}),\) \(\texttt{attacks}(A_{5}^{+},A_{6}^{-}),\) \(\texttt{attacks}(A_{6}^{+},A_{5}^{+}),\) \(\texttt{attacks}(A_{6}^{+},A_{5}^{-})\rangle\). When we apply Definition 5 to \(SAF_{P_5}\), we obtain the following extensions:

Conclusions

The main contributions are: 1) Semantic Argumentation Frameworks (SAF) can be used for justifying both true and false interpreted conclusions. 2) SAF is based on families of confluent rewriting systems. 3) The framework satisfies the well-known argumentation postulates [1, 4]. Future work will involve the exploration of our framework under other Confluent Logic Programming Systems, the satisfaction of other argumentation principles, and the investigation of commonalities between ABA and semantic argumentation.

References

  1. Leila Amgoud (2014): Postulates for logic-based argumentation systems. International Journal of Approximate Reasoning 55(9), pp. 2028-2048, doi:10.1016/j.ijar.2013.10.004.
  2. Pietro Baroni, Martin Caminada & Massimiliano Giacomin (2011): An introduction to argumentation semantics. The Knowledge Engineering Review 26(4), pp. 365-410, doi:10.1017/S0269888911000166.
  3. Stefan Brass, Ulrich Zukowski & Burkhard Freitag (1997): Transformation-based bottom-up computation of the well-founded model. In: Non-Monotonic Extensions of Logic Programming, Springer, pp. 171-201, doi:10.1007/BFb0023807.
  4. Martin Caminada & Leila Amgoud (2007): On the evaluation of argumentation formalisms. Artificial Intelligence 171(5-6), pp. 286-310, doi:10.1016/j.artint.2007.02.003.
  5. Jürgen Dix (1995): A Classification Theory of Semantics of Normal Logic Programs: I. Strong Properties. Fundam. Inform. 22(3), pp. 227-255, doi:10.3233/FI-1995-2234.
  6. Jürgen Dix, Mauricio Osorio & Claudia Zepeda (2001): A general theory of confluent rewriting systems for logic programming and its applications. Annals of Pure and Applied Logic 108(1-3), pp. 153-188, doi:10.1016/S0168-0072(00)00044-0.
  7. Phan Minh Dung (1995): On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence 77(2), pp. 321-358, doi:10.1016/0004-3702(94)00041-X.
  8. Phan Minh Dung, Robert A. Kowalski & Francesca Toni (2009): Assumption-Based Argumentation. In: Argumentation in Artificial Intelligence, Springer, pp. 199-218, doi:10.1007/978-0-387-98197-0_10.
  9. Juan Carlos Nieves & Mauricio Osorio (2016): Ideal extensions as logical programming models. Journal of Logic and Computation 26(5), pp. 1361-1393, doi:10.1093/logcom/exu014.
  10. Henry Prakken (2010): An abstract framework for argumentation with structured arguments. Argument and Computation 1(2), pp. 93-124, doi:10.1080/19462160903564592.

  1. We implemented this procedure; the sources are open and can be found at https://github.com/esteban-g/semantic_argumentation

  2. It has been shown that the grounded extension can be characterized in terms of complete extensions.


Towards Mass Spectrum Analysis with ASP (Extended Abstract)

Nils Küchenmeister (TU Dresden)
Alex Ivliev (TU Dresden)
Markus Krötzsch (TU Dresden)

We present a new use of Answer Set Programming (ASP) to discover the molecular structure of chemical samples based on mass spectrometry data. To constrain the exponential search space for this combinatorial problem, we develop canonical representations of molecular structures and an ASP implementation that uses these definitions. This implementation forms the core of our new tool, Genmol, designed to enumerate potential molecular structures given a specified composition of fragments.

Introduction

Mass spectrometry is a powerful technique to determine the chemical composition of a substance. However, the mass spectrum of a substance does not reveal its exact molecular structure, but merely the possible ratios of elements in the compound and its fragments. To identify a sample, researchers may use commercial databases (for common compounds), or software tools that can discover molecular structures from the partial information available. The latter leads to a combinatorial search problem that is a natural fit for answer set programming (ASP). Molecules can be modeled as undirected graphs, representing the different elements and atomic bonds as node and edge labels, respectively. ASP is well-suited to encode chemical domain knowledge (e.g., the possible number of bonds for carbon) and extra information about the sample (e.g., that it has an OH group), so that each answer set encodes a valid molecular graph.

Unfortunately, this does not work: a direct ASP encoding yields exponentially many answer sets for each molecular graph due to the large number of symmetries (automorphisms) in such graphs. For example, C6H12O admits 211 distinct molecular structures but leads to 111,870 answer sets. Removing redundant solutions and limiting the search to unique representations are common techniques in the ASP community, where they have motivated research on symmetry breaking. Related approaches work by adding additional rules to ASP [1, 4, 9], by rewriting the ground program before solving [5, 3, 2], or by introducing dedicated solvers [7]. However, our experiments with some of these approaches still produced 10-10,000 times more answer sets than molecules even in simple cases.

In this extended abstract, we summarize our new approach that prevents symmetries in graph representations already during grounding [8]. This technique forms the basis of our ASP-driven prototype implementation for enumerating molecular structures using partial chemical information. Our ASP source code, evaluation helpers, and data sets are available online at https://github.com/knowsys/eval-2024-asp-molecules. The sources of our prototype application are at https://gitlab.com/nkuechen/genmol/.

Analysis of Mass Spectra with Genmol

Many mass spectrometers break up samples into smaller fragments and measure their relative abundance. The resulting mass spectrum forms a characteristic pattern, enabling inferences about the underlying sample. High-resolution spectra may contain information such as "the molecule has six carbon atoms" or "there is an OH group", but cannot reveal the sample's full molecular structure. In chemical analysis, we are looking for molecular structures that are consistent with the measured mass spectrum.

Figure 1: User interface of Genmol

To address this task, we have developed Genmol, a prototype application for enumerating molecular structures for a given composition of fragments. It is available as a command-line tool and as a progressive web application (PWA), shown in Fig. 1. Genmol is implemented in Rust, with the web front-end using the Yew framework on top of a JSON-API, whereas the search for molecular structures is implemented in Answer Set Programming (ASP) and solved using clingo [6]. An online demo of Genmol is available for review at https://tools.iccl.inf.tu-dresden.de/genmol/.

The screenshot shows the use of Genmol with the sum formula C6H5ON and two fragments as input. Specifying detected fragments and restricting bond types help to reduce the search space. Alternatively, users can provide a molecular mass or a complete mass spectrum, which will then be associated with possible chemical formulas using, e.g., information about the abundance of isotopes. The core task of Genmol then is to find molecules that match the given input constraints. Molecules in this context are viewed as undirected graphs of atoms, linked by covalent bonds that result from sharing electrons. (This graph does not always determine the spatial configuration of molecules, which cannot be determined by mass spectrometry alone, yet it suffices for many applications.) Many chemical elements admit a fixed number of bonds, the so-called valence, according to the number of electrons available for binding (e.g., carbon has a valence of 4). Bonds may involve several electrons, leading to single, double, triple bonds, etc. The graph structure of molecules, the assignment of elements, and the possible types of bonds can lead to a large number of possible molecules for a single chemical formula, and this combinatorial search task is a natural match for ASP.

Canonical Representation of Molecules

Our experiments show (https://github.com/knowsys/eval-2024-asp-molecules/tree/main/results) that the existing approaches to symmetry breaking outlined in the introduction do not achieve adequate performance for real-world use cases. Our dedicated encoding therefore relies on a canonical representation of molecules, which restricts the overall search space. It is based on SMILES, a widely used serialization format for molecular graphs. A SMILES string starts from an (arbitrary) spanning tree of the molecular graph, serialized in depth-first order, with subtrees enclosed in parentheses. Edges not covered by the spanning tree are indicated by pairs of numeric cycle markers.
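For example, ethanol can be written as the SMILES string CCO, while cyclohexane is C1CCCCC1: the two occurrences of the digit 1 mark the endpoints of the ring-closing bond that is not part of the spanning tree.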

Our ASP implementation can then be summarized as follows: (1) guess the spanning tree, defined by the element of each node, the parent-relation between nodes, and the bond type of the parent edge of each node, (2) guess the cycle markers to complete the molecule, (3) eliminate non-canonical solutions.
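As an illustration of step (1), the guess of the spanning tree could look roughly as follows. This is a schematic fragment under our own hypothetical naming, not the actual Genmol rules, and steps (2) and (3) are omitted.

  #const n = 7.                          % number of non-hydrogen atoms
  node(1..n).
  element(c,4).  element(o,2).           % elements with their valences
  1 { elem(N,E) : element(E,_) } 1 :- node(N).            % element of each node
  1 { parent(N,P) : node(P), P < N } 1 :- node(N), N > 1. % parent relation
  1 { ptype(N,T) : T = 1..3 } 1 :- node(N), N > 1.        % bond type of parent edge

Requiring parents to have smaller indices already fixes one traversal order of the tree; the canonicity conditions described next restrict the search further.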

Canonical Molecular Trees We first consider the acyclic case. We compare tree nodes by considering (1) the maximum depth and number of vertices of the node's subtree, (2) the number of children, and (3) the bond type of the parent edge. This ordering extends to trees by comparing their roots and recursively descending on ties. The minimal tree according to this ordering yields a unique representation.
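In symbols (our paraphrase of the above), each node $v$ with subtree $T_v$ is mapped to the descriptor $(\mathrm{depth}(T_v), |T_v|, \#\mathrm{children}(v), \mathrm{bond}(v))$; descriptors are compared lexicographically, and ties are broken by recursively comparing the children in order.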

Canonical Molecular Graphs We reduce the general case to the acyclic case by introducing edges to fresh nodes for each cycle marker. Since we cannot compare every such representation in practice, we implement a heuristic that disallows shortening cycle edges. Intuitively, such cycles occur if another assignment of cycle markers would lead to a deeper tree. The ASP implementation detects shortening cycle edges by a pattern-matching approach.

References

  1. Michael Codish, Alice Miller, Patrick Prosser & Peter J. Stuckey (2019): Constraints for symmetry breaking in graph representation. Constraints 24(1), pp. 1-24, doi:10.1007/s10601-018-9294-5.
  2. Jo Devriendt & Bart Bogaerts (2016): BreakID: Static Symmetry Breaking for ASP (System Description). CoRR abs/1608.08447. Available at http://arxiv.org/abs/1608.08447.
  3. Jo Devriendt, Bart Bogaerts, Maurice Bruynooghe & Marc Denecker (2016): Improved Static Symmetry Breaking for SAT. In: Proc. 19th Int. Conf. Theory and Applications of Satisfiability Testing (SAT'16), LNCS 9710, Springer, pp. 104-122, doi:10.1007/978-3-319-40970-2_8.
  4. Jo Devriendt, Bart Bogaerts, Maurice Bruynooghe & Marc Denecker (2016): On local domain symmetry for model expansion. Theory Pract. Log. Program. 16(5-6), pp. 636-652, doi:10.1017/S1471068416000508.
  5. Christian Drescher, Oana Tifrea & Toby Walsh (2011): Symmetry-breaking answer set solving. AI Communications 24(2), pp. 177-194, doi:10.3233/AIC-2011-0495.
  6. Martin Gebser, Benjamin Kaufmann, Roland Kaminski, Max Ostrowski, Torsten Schaub & Marius Schneider (2011): Potassco: The Potsdam Answer Set Solving Collection. AI Commun. 24(2), pp. 107-124, doi:10.3233/AIC-2011-0491.
  7. Tarek Khaled & Belaid Benhamou (2018): Symmetry breaking in a new stable model search method. In: LPAR-22 Workshop and Short Paper Proceedings, Kalpa Publications in Computing 9, EasyChair, pp. 58-74, doi:10.29007/1l5r.
  8. Nils Küchenmeister, Alex Ivliev & Markus Krötzsch (2024): Towards Mass Spectrum Analysis with ASP. In: Proc. Int. Conf. on Logic Programming and Nonmonotonic Reasoning (LPNMR'24), Springer. To appear.
  9. Alice Tarzariol, Martin Gebser, Konstantin Schekotihin & Mark Law (2023): Learning to Break Symmetries for Efficient Optimization in Answer Set Programming. In: Proc. 35th AAAI Conf. on Artificial Intelligence (AAAI'23), AAAI Press, pp. 6541-6549, doi:10.1609/aaai.v37i5.25804.

Monitoring and Scheduling of Semiconductor Failure Analysis Labs (Extended Abstract)

Elena Mastria (University of Calabria, Italy)
Domenico Pagliaro (KAI GmbH, Austria)
Francesco Calimeri (University of Calabria, Italy)
Simona Perri (University of Calabria, Italy)
Martin Pleschberger (KAI GmbH)
Konstantin Schekotihin (University of Klagenfurt, Austria)

Abstract

Identifying and locating non-conformities, such as physical failures causing electrical malfunctioning of a device, in modern semiconductor devices is a challenging task. Typically, highly qualified employees in a failure analysis (FA) lab use sophisticated and expensive tools, like scanning electron microscopes, to identify and locate such non-conformities. Given the increasing complexity of the investigated devices and very limited resources, labs may struggle to deliver analysis results in time. This paper suggests an approach to optimize the usage of FA lab resources by adaptive scheduling and monitoring. In particular, we combine constraint programming for the computation of a schedule with stream reasoning to monitor the lab's conditions and maintain the schedule depending on the situation. Evaluation results indicate that our system can significantly reduce the tardiness of a real-world FA lab, and all its computational tasks can be finished in an average time of 3.6 seconds, with a maximum of 15.2 seconds, which is acceptable for the lab's workflows.

Background and Proposal

Failure Analysis (FA) of semiconductor devices is an important activity aiming at the identification of non-conformities during various stages of the device's lifecycle. For instance, an FA lab might provide qualifications for new designs, execute tests for quality management tasks, or identify failures in malfunctioning devices returned by customers. FA labs process a large number of jobs daily and, given their complexity, often face issues with delivering all analysis results on time. To minimize the likelihood of violating deadlines, a lab must optimally utilize all available resources, such as equipment and personnel. At the moment, First-In-First-Out (FIFO) in combination with job prioritization is the most widely used strategy for assigning FA resources. Although FIFO can easily be implemented in a lab, its long-term performance is unsatisfactory, since it poorly handles situations in which jobs have different priorities or some resources have limited availability. To solve this issue, researchers have suggested various methods for modeling and solving the scheduling problem for different types of labs, such as medical or industrial ones. The literature addresses different aspects of laboratory scheduling, reflecting the complex and dynamic nature of the environment. The presented scheduling models are usually derived from two classic problems: job-shop scheduling (JSS), e.g., [7, 5], or resource-constrained project scheduling (RCPS), e.g., [4, 3]. In addition to modeling aspects, designers of these approaches investigated efficient solving methods able to compute high-quality schedules within a given time budget, thus avoiding delays while interacting with lab employees over graphical interfaces. For instance, the system of [4] lets engineers precisely plan their daily tasks and provide updates to the schedule in case of unforeseen events; its high-performance scheduling algorithms ensure responsiveness during this interaction.

The main drawback of existing solutions is that they focus solely on the scheduling aspect of the problem and require users to monitor the lab's dynamic environment, which is characterized by a large number of stochastic events, such as equipment breakdowns, personnel emergencies, or unexpected task delays. All these events must be timely and correctly communicated to the scheduling system to ensure its correct operation. However, manual maintenance of the work schedule by lab engineers is time-consuming and might be a source of erroneous data causing incorrect scheduling results. We therefore propose a novel approach based on automatic monitoring, repair, and (re)scheduling of lab tasks. Our approach relies on two components: a stream reasoner based on DP-sr [2] and a scheduler using the constraint programming approach of [5].

DP-sr supports an input language that properly extends Answer Set Programming (ASP) with ad-hoc means for dealing with Stream Reasoning (SR) scenarios, while maintaining the highly declarative nature and ease of use that are signature features of ASP. Along with typical ASP features (e.g., aggregates, disjunction, strong/weak constraints, unstratified negation), DP-sr programs may contain specific constructs that are evaluated over a stream of events. The scheduler is implemented using IBM ILOG CPLEX Optimization Studio. This constraint programming framework is widely used for scheduling production processes in the semiconductor industry. The framework provides a versatile knowledge representation language, with specific constraints designed for efficient encoding of scheduling problems, and a powerful solver able to efficiently find optimal solutions for a given instance.

Intuitively, in our solution the stream reasoner monitors the lab's events to ensure that the current situation corresponds to the existing schedule. Whenever violations are detected, the reasoner tries to automatically repair the schedule or, otherwise, notifies the scheduler about the violation. Given the current situation in a lab, the scheduler computes an assignment of tasks to engineers and machines optimizing the selected measure, such as tardiness or makespan. The former is often preferred since it corresponds to a popular key performance indicator of FA labs. We evaluated the system on simulated re-scheduling scenarios derived from logs of real-world FA labs. The obtained results show that our method averages 6.8 seconds per reasoning cycle, ranging from 1 to 15 seconds, including the monitoring (0.8-3.3 seconds) and scheduling (2.1-13.4 seconds) stages. This is significantly faster than the standard tasks in an FA lab, which usually take over 20 minutes on average [8].

Problem and Data

Scheduling refers to the process of assigning and managing the execution of tasks, given a set of resources. Following [5], we model the problem as a variant of the JSS problem with the following extensions: multi-resource, i.e., more than one resource type, such as engineers or machines [6], and flexible, i.e., there are multiple instances of every resource [1]. The resulting JSS variant is an NP-hard optimization problem, where the aim is to find an optimal allocation of resources to tasks and time points such that: (i) each resource instance is assigned to only one task at a time; (ii) every task gets all required resources; (iii) execution of tasks is non-preemptive; (iv) no tasks of a job are executed in parallel; and (v) tasks must be scheduled w.r.t. their order. Two possible objective functions for evaluating the effectiveness of a scheduling system are tardiness, which sums up all job delays given their deadlines, and makespan, which measures the total time required to complete the jobs. In this work, we encode the scheduling problem using constraint programming and employ tardiness as the optimization criterion, since FA labs often use the fulfillment of deadlines as a key performance indicator. Nevertheless, we also report the makespan in the evaluation results.
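In standard notation, with $C_j$ the completion time and $d_j$ the deadline of job $j$, the two objectives read $\mathrm{tardiness} = \sum_j \max(0, C_j - d_j)$ and $\mathrm{makespan} = \max_j C_j$.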

Our main data source is an anonymized slice of an FA database comprising 1646 jobs over 3 years, from January 2020 to December 2022. The dataset provides all the information needed by the scheduler to compute the optimal task assignments and make recommendations in a simulated environment.

Figure 1: General architecture of the system.

General Architecture

In the following, we provide an overview of the system; the general architecture is depicted in Figure 1. At first, the initial dataset is pre-processed and fed to the scheduler for the computation of a new schedule. Along with the schedule, the system produces plots of the scheduled tasks and information about the overall tardiness. The monitoring component is set up as soon as the schedule execution starts. In more detail, the stream reasoner DP-sr is fed with the DP-sr program corresponding to the monitoring task to be performed over the FA events, along with the background knowledge representing the available resources, task-resource compatibility, and job deadlines. From then on, DP-sr waits for schedule information and/or timestamped events coming from the database, and an interaction loop starts between the scheduler and the monitor, managed by the Python application. At the first iteration, DP-sr is provided with the previously computed schedule along with any events occurring at the beginning; subsequent iterations are executed at a predefined frequency. At each iteration, all events that occurred after the previous iteration are fed to DP-sr, which may also be fed a new schedule if a reschedule was needed. Each time DP-sr receives data, it performs its computation across all timestamps featured by the events of the current iteration, providing an answer for all included timestamps. The DP-sr answer is analyzed and classified into one of the following categories.

Proposal of new resource(s) in case of conflicts. When there is no need for a full reschedule, DP-sr suggests a resource replacement based on the conflicts that could arise if one or more tasks are delayed. The proposal of a new machine and/or a new worker is considered before changing the whole schedule. The new resources are inserted into the current schedule and an acknowledgment is sent back to DP-sr, so that the scheduled events are updated. Plots are also generated, highlighting the changes in the schedule.

Irreparable conflicts. When conflicts are not solvable, DP-sr identifies this and notifies the need for a complete reschedule. A new schedule is then computed and sent back to DP-sr, so as to update its knowledge base. The system evaluates the new schedule, determining a new tardiness, and new plots are generated.

No conflicts found. In this case, there are no errors or conflicts and the system keeps running with no exceptions.

References

  1. P. Brucker & R. Schlie (1990): Job-shop scheduling with multi-purpose machines. In: Computing 45(4), pp. 369–375, doi:10.1007/BF02238804.
  2. Francesco Calimeri, Elena Mastria & Simona Perri (2024): DP-SR: a purely Declarative Programming framework for Stream Reasoning. Available at https://sites.google.com/unical.it/dp-sr.
  3. Mariana M. Cunha, Joaquim L. Viegas, Miguel S.E. Martins, Tiago Coito, Andrea Costigliola, João Figueiredo, João M.C. Sousa & Susana M. Vieira (2019): Dual Resource Constrained Scheduling for Quality Control Laboratories. IFAC-PapersOnLine 52(13), pp. 1421–1426, doi:10.1016/j.ifacol.2019.11.398.
  4. Philipp Danzinger, Tobias Geibinger, David Janneau, Florian Mischek, Nysret Musliu & Christian Poschalko (2023): A System for Automated Industrial Test Laboratory Scheduling. ACM Trans. Intell. Syst. Technol. 14(1), doi:10.1145/3546871.
  5. Giulia Francescutto, Konstantin Schekotihin & Mohammed M. S. El-Kholany (2021): Solving a Multi-resource Partial-Ordering Flexible Variant of the Job-Shop Scheduling Problem with Hybrid ASP. In: Wolfgang Faber, Gerhard Friedrich, Martin Gebser & Michael Morak, editors: Logics in Artificial Intelligence - 17th European Conference, JELIA 2021, Virtual Event, May 17-20, 2021, Proceedings, Lecture Notes in Computer Science 12678, Springer, pp. 313–328, doi:10.1007/978-3-030-75775-5_21.
  6. V. B. Gargeya & R. H. Deane (1996): Scheduling research in multiple resource constrained job shops: a review and critique. International Journal of Production Research 34(8), pp. 2077–2097, doi:10.1080/00207549608905015.
  7. Takeshi D. Itoh, Takaaki Horinouchi, Hiroki Uchida, Koichi Takahashi & Haruka Ozaki (2021): Optimal Scheduling for Laboratory Automation of Life Science Experiments with Time Constraints. SLAS Technology 26(6), pp. 650–659, doi:10.1177/24726303211021790.
  8. Domenico Pagliaro, Martin Pleschberger, Olivia Pfeiler, Thomas Freislich & Konstantin Schekotihin (2023): Working Time Prediction and Workflow Mining at Failure Analysis. In: ISTFA, Phoenix, Arizona, USA, pp. 121–130, doi:10.31399/asm.cp.istfa2023p0121.

Declarative AI design in Unity using Answer Set Programming (Extended Abstract)

Denise Angilica (University of Calabria, Italy)
Giovambattista Ianni (University of Calabria, Italy)
Francesco Pacenza (University of Calabria, Italy)
Jessica Zangari (University of Calabria, Italy)

Abstract

Declarative methods such as Answer Set Programming show potential for cutting down development costs in commercial videogames and, more generally, in real-time applications. Several shortcomings, however, such as performance and integration gaps, prevent their adoption. In this work we illustrate ThinkEngine, a framework that makes a tight integration of declarative methods into the typical game development workflow possible in the context of the Unity game engine.

Introduction.

The AI research field shares a past and a present of reciprocal exchange with the game design industry, whether one considers inductive, machine-learning-based techniques or knowledge-based, deductive ones. In this context, machine learning is useful in several respects but still has limitations: it is not easily tunable or configurable at will, and it has non-negligible design-time costs. To overcome these limits, one can consider introducing declarative knowledge representation techniques, which offer better explainability potential and, especially, are elaboration tolerant in the sense of McCarthy [4]. However, known performance and integration shortcomings have so far limited the usage of these techniques in videogames. Among declarative approaches, Answer Set Programming (ASP) has been experimentally used in videogames to various extents, such as for declaratively generating level maps [6] and for defining artificial players [2] for the Angry Birds AI competition. ASP is no exception with respect to the shortcomings that restrict the utility of declarative approaches. Two obstacles are of concern: performance in real-time contexts and integration, i.e., ease of wiring with other standard parts of the game logic.

ThinkEngine is an asset for the popular Unity game engine, aiming at narrowing the above gaps. Our tool enables the introduction of ASP-based reasoning modules within Unity-made games; its key ingredients are outlined in the following overview.

Overview of ThinkEngine.

At design time, Unity lets developers work on Game Objects (GOs), whereas the game logic can be defined by attaching scripted code to specific game events. In this context, one can use ThinkEngine by adding, wiring, and programming brains containing declarative specifications written in ASP.

Concerning run time, ThinkEngine interacts with the game scene at hand according to the architecture shown in Figure 1. During the game, Unity iteratively updates the game scene in the canonical game loop. ThinkEngine makes use of some code parts running in the Unity main loop, but it also uses additional threads for offloading decision tasks. Intuitively, one or more programmable brains (i.e., modules embedding ASP logic programs) are in charge of decision-making tasks and run in separate ThinkEngine threads. Brains are connected to the game scene using sensors, actuators, and plans. Sensors allow read access to desired parts of the current game state, whereas actuators and plans allow changes to the game scene itself.

Figure 1: General run-time architecture of the ThinkEngine framework

Reactive brains operate immediately on actuators, while planning brains make it possible to declaratively program more complex, deliberative-like plans. A plan $P$ is a list of actions $[a_1, \dots, a_n]$ to be executed in the given sequence. One or more planning brains, each capable of generating a plan, can be associated with a given GO. Plans are associated with a priority value and, once generated, are submitted for execution to a scheduler. A designer can decide how and when to abort a plan and how and when to trigger re-planning. Re-planning can produce new plans that replace older plan versions, so as to cope with game state changes.

Planning brains are grouped by GO. Each group has its own planning scheduler. A planning scheduler $PS$ has a current plan $R$ and manages a queue of available plans $Q = [ P_1, \dots, P_m ]$, where subscripts denote priority (lower subscript values denote higher priority). $PS$ periodically checks if $R$ has to be interrupted and, possibly, substituted with a different plan in $Q$.

Performance.

We tested ThinkEngine in several classic games like Space Invaders, Pacman, and Frogger, in which we added an automated player whose artificial intelligence is managed via both planning brains and reactive brains. Concerning performance, we considered the following measures: (i) the frame rate, the most common metric in the videogame field, representing the number of screen updates per second that can be achieved given the computational burden of the game implementation at hand; (ii) the number of frames required to achieve a complete sensor update cycle, which depends on the number of sensors in the scene; and (iii) the time required to compute a new plan, which affects the delay between the event triggering the generation of a plan and the execution of the plan's first action. The frame rate was not affected by the introduction of ThinkEngine. Concerning the sensor update cycle, the higher the number of sensors in the scene, the more frames the update-cycle coroutine is spread over, resulting in a moderate delay before the reasoning task starts. As for the time required to compute a new plan, plan size clearly needs to be kept under a certain limit. In many scenarios, the usage of an incremental solver drastically reduces evaluation times.

Final remarks.

To the best of our knowledge, our contribution is the first attempt at introducing declarative methods in the general setting of commercial game engines. As a distinctive feature, ThinkEngine proposes a hybrid architecture in which procedural and declarative parts coexist, and where integration and computational offloading issues are explicitly addressed. In future work we aim to improve the integration level of ThinkEngine, since using declarative paradigms like ASP may not feel natural to game developers. Concerning the real-time performance problem, improvements have been gained by adding an incremental ASP solver, while the sensor update cycle has been optimized by drastically lowering the number of frames required to perform the update [1]. ThinkEngine is publicly available alongside a games showcase.1


1 https://github.com/DeMaCS-UNICAL/ThinkEngine and https://github.com/DeMaCS-UNICAL/ThinkEngine-Showcase

Acknowledgment. This work was partially supported by the PNRR MUR project PE0000013-FAIR, Spoke 9 - Green-aware AI - WP9.1 and by the LAIA laboratory (part of the SILA laboratory network at University of Calabria).

References

  1. D. Angilica, G.M.D. Giorgio & G. Ianni (2023): On the impact of sensors update in declarative AI for videogames. In: ASPOCP 2023, CEUR Workshop Proceedings 3437, CEUR-WS.org.
  2. F. Calimeri et al. (2016): An Artificial Player for Angry Birds Based on Declarative Knowledge Bases. IEEE Transactions on Computational Intelligence and AI in Games, 8(2), pp. 128-139, doi:10.1109/TCIAIG.2015.2509600.
  3. G. Ianni et al. (2020): Incremental maintenance of overgrounded logic programs with tailored simplifications. Theory and Practice of Logic Programming, doi:10.1017/S147106842000040X.
  4. J. McCarthy (1999): Elaboration Tolerance. Keynote speech at CommonSense 1998. Available at http://www-formal.stanford.edu/jmc/elaboration.html.
  5. R.R. Murphy (2000): Introduction to AI Robotics, chapter 7, pp. 257-292. MIT Press. Chapter 7: The Hybrid Deliberative/Reactive paradigm.
  6. A.M. Smith & M. Mateas (2011): Answer Set Programming for Procedural Content Generation: A Design Space Approach. IEEE Transactions on Computational Intelligence and AI in Games, 3(3), pp. 187-200, doi:10.1109/TCIAIG.2011.2158545.

stableKanren: Integrating Stable Model Semantics with miniKanren (Extended Abstract)

Xiangyu Guo (Arizona State University)
James Smith (Arizona State University)
Ajay Bansal (Arizona State University)

miniKanren implements the essence of Prolog and allows easy access to, and modification of, the underlying resolution and unification. The resolution stream constructed by miniKanren is static, but a dynamic resolution stream is needed to support non-monotonic reasoning; we therefore extend the traditional resolution and unification to achieve this goal. We present stableKanren, a miniKanren extension that supports normal logic programs under the stable model semantics. stableKanren adds multiple innovative macros to compile a normal logic program into a program with its complement form, to obtain the domain of a variable under different contexts, and to generate new streams during resolution.

1 Overview

miniKanren is a family of languages specially designed for relational programming. The core miniKanren implementation is purely functional and is designed to be easily modified and extended [1]. Unlike Prolog, miniKanren uses a complete interleaving search strategy on a stream to simulate backtracking. Unlike Mercury [9], miniKanren uses full unification, which is required to implement goals that take only fresh logic variables as their arguments. Unlike Curry [6], miniKanren does not use residuation to suspend execution on non-ground terms. The core implementation is hosted on a functional language, Scheme [2], and introduces only a few operators to users: == for unification, fresh for existential quantification, conde for disjunction, and a run interface.

To the best of our knowledge, only a few attempts have been made to add negation to miniKanren. The constructive negation used by Moiseenko [8] works for stratified programs only. Also, the semantics of this negation is more like a filter: the solver executes the positive goal inside the negation, computes the differential set between the substitutions before and after the negated goal, and eventually subtracts the differential set from the original set. So far, none of the functional-language-based implementations has supported negation in non-stratified normal programs. By integrating stable model semantics into miniKanren, we believe we have given it a more widely accepted negation semantics.

Unlike a bottom-up solver, which grounds all variables and propagates constraints among values, a top-down solver, such as Prolog or miniKanren, uses resolution and unification to obtain a minimal model of the program. The advantage of top-down solving is that it produces answers without grounding blow-ups. Gupta et al. introduced coinductive logic programming (co-LP) and co-SLD resolution to handle infinite terms in logic programs, where co-SLD produces the greatest fixed point of a program [5]. Later, Min and Gupta evolved co-SLD into co-SLDNF to achieve normal program solving [7]. However, we believe that sticking with the least fixed point of SLD resolution is closer to stable model semantics and makes the transition simpler. Therefore, unlike co-SLD and co-SLDNF, our algorithms still produce least fixed points, as in the original SLD resolution, and focus on adding stable model semantics under finite Herbrand models.

miniKanren takes advantage of Scheme's macros and continuations in its implementation: macros transform the input program into a continuation-based search stream, which works perfectly for monotonic reasoning. The static stream does not change once it is compiled (transformed). However, for non-monotonic reasoning, as in normal logic programs, information obtained at run time can change the outcome of a previous search, for instance by reverting a result, adjusting a conflict, or generating different results. Therefore, a dynamic search stream is needed. Our extension, stableKanren [4], generates or modifies the stream (code) on the fly, based on the new information received during resolution. Hence, we push macros and continuations further, into stable model solving: we use them both to form a static search stream and to modify streams dynamically, weaving run-time information into them.

From the stable model semantics definition, we have the following relationship between a stable model of $\Pi$ and the minimal model of the reduct $\Pi^M$.

Definition 1.1 (stable model property) Let $\Pi$ be a propositional program and $M$ be an interpretation. $M$ is a stable model of $\Pi$ if $M$ is a minimal model of the reduct $\Pi^M$.

Stable model semantics considers a minimal model of the corresponding reduct program to be one of the program's stable models. We want our algorithms to simultaneously produce the minimal model and its corresponding reduct program. The underlying resolution process guarantees the production of a minimal model, as long as it can handle the input program. The reduct created from the interpretation removes the recursive rules and the negation completely, so that the program can be handled using traditional Prolog to produce the minimal model ([3], Remark 2). Then, the interpretation must be verified to be the minimal model of the reduct program.
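For reference, the reduct $\Pi^M$ is obtained in the usual way [3]: every rule whose body contains a negative literal $\lnot C$ with $C \in M$ is deleted, and all remaining negative literals are removed from the bodies of the surviving rules, leaving a negation-free program whose minimal model can be computed directly.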

Initially, resolution had only one role: selecting a goal. Unification has two roles: the first is to assign a truth value to a goal, and the second is to assign a value to a variable. A goal is proven to be true iff all of its variables are successfully unified, and a goal is grounded iff all of its variables are unified with a value. Both resolution and unification require changes to support the recursive rules and negation introduced by normal programs, so we grant them more roles. Resolution gains four more roles: distributing negation down to the unification level, producing truth values for recursive rules, obtaining the domain of a variable, and continuing execution after obtaining a partial result. Unification additionally has to produce the truth value for negated unification.

To prove a rule (statement) with head H to be true, the resolution tries to prove each sub-goal in the rule's body individually. To prove ¬H to be true, we know that H must fail somewhere in the rule's body, at one of the sub-goals. Therefore, we need to distribute negation down to the unification level. For example, consider the logic statement:

$ H(X, Y) \leftarrow \exists X, Y (B_1(X, Y) \land B_2(X) \land B_3(Y)) $

We assume that under negation ¬H(X,Y), the variables X and Y are safe, i.e., always bound to values x and y. For this reason, we can safely drop the ∃ quantifier and focus on transforming the rule's body with the assigned values. We obtain the propositional form of the rule as follows:

$ \lnot H^{x, y} \stackrel{?}{\leftarrow} transform( B_1^{x, y} \land B_2^x \land B_3^y) $

Each sub-goal in a rule's body could fail, and when a sub-goal fails, the prior sub-goals must have succeeded. To capture this property, the transformation is a disjunction in which each disjunct negates one sub-goal in conjunction with all sub-goals before it. Furthermore, each sub-goal works as a checker, and each variable checked independently by unification inside a sub-goal could fail. Therefore, the transformation applies to each sub-goal and to each of its variables, distributing the negation down to the unification level.

$\lnot H^{x,y} \leftarrow \lnot B_1^x \lor (B_1^x \land \lnot B_1^y) \lor (B_1^x \land B_1^y \land \lnot B_2^x) \lor (B_1^x \land B_1^y \land B_2^x \land \lnot B_3^y)$

stableKanren supports normal logic programs and, in a top-down fashion, solves the n-queens problem faster than s(ASP). We did performance testing on a 2012 MacBook Pro with a 2.6GHz Intel quad-core i7, 8GB of memory, and macOS 10.15.7. We set a one-hour timeout for instances with more than 8 queens. Table 1 shows the measured times in seconds. The prefix "1" means finding one answer, and "all" means finding all answers.


Table 1: Time (in seconds) on finding one or all n-queens solutions in stableKanren and s(ASP).

nqueens   #solutions   1-s(ASP)   1-stableKanren   all-s(ASP)   all-stableKanren
1         1            0.15       0                0.17         0
2         0            1.65       0                1.66         0
3         0            9.38       0                9.57         0
4         2            24.20      0                58.81        0
5         10           23.69      0.01             397.31       0.01
6         4            421.88     0.03             2470.24      0.08
7         40           148.27     0.04             13852.54     0.43
8         92           4649.88    0.36             131429.13    4.08

References

  1. Daniel P. Friedman, William E. Byrd & Oleg Kiselyov (2005): The Reasoned Schemer. The MIT Press, doi:10.7551/mitpress/5801.001.0001.
  2. Daniel P. Friedman & Matthias Felleisen (1996): The Little Schemer (4th Ed.). MIT Press, Cambridge, MA, USA, doi:10.1016/0898-1221(96)87329-9.
  3. Michael Gelfond & Vladimir Lifschitz (1988): The stable model semantics for logic programming. In: ICLP/SLP 88. Cambridge, MA, pp. 1070–1080.
  4. Xiangyu Guo, James Smith & Ajay Bansal (2023): StableKanren: Integrating Stable Model Semantics with MiniKanren. In: Proceedings of the 25th International Symposium on Principles and Practice of Declarative Programming, PPDP '23. Association for Computing Machinery, New York, NY, USA, doi:10.1145/3610612.3610617.
  5. Gopal Gupta, Ajay Bansal, Richard Min, Luke Simon & Ajay Mallya (2007): Coinductive Logic Programming and Its Applications. In: Proceedings of the 23rd International Conference on Logic Programming, ICLP'07. Springer-Verlag, Berlin, Heidelberg, pp. 27–44, doi:10.1007/978-3-540-74610-2_4.
  6. Michael Hanus, Herbert Kuchen & Juan Jose Moreno-Navarro (1995): Curry: A truly functional logic language. In: Proc. ILPS 95, pp. 95–107.
  7. Richard Min & Gopal Gupta (2010): Coinductive Logic Programming with Negation. In: Danny De Schreye: Logic-Based Program Synthesis and Transformation. Springer Berlin Heidelberg, pp. 97–112, doi:10.1007/978-3-642-12592-8_8.
  8. Evgenii Moiseenko (2019): Constructive negation for miniKanren. In: ICFP 2019, The miniKanren and Relational Programming Workshop.
  9. Zoltan Somogyi, Fergus Henderson & Thomas Conway (1996): The execution algorithm of mercury, an efficient purely declarative logic programming language. The Journal of Logic Programming 29(1), pp. 17–64, doi:10.1016/S0743-1066(96)00068-4. High-Performance Implementations of Logic Programming Systems.

Alda: Integrating Logic Rules with Everything Else, Seamlessly (System Demonstration)

Yanhong A. Liu (Stony Brook University)
Scott D. Stoller (Stony Brook University)
Yi Tong (Stony Brook University)
Bo Lin (Stony Brook University)

Sets and rules have been used for easier programming since the late 1960s. While sets are central to database programming with SQL and are also supported as built-ins in high-level languages like Python, logic rules have been supported as libraries or in rule-based languages with limited extensions for other features. However, rules are central to deductive database and knowledge base programming, and better support is needed.

This system demonstration highlights the design of a powerful language, Alda [16, 14], that supports logic rules together with sets, functions, updates, and objects, all as seamlessly integrated built-ins, including concurrent and distributed processes. The key idea is to allow sets of rules to be defined in any scope, support predicates in rules as set-valued variables that can be used and updated directly, and support queries using rules as either explicit or implicit automatic calls to an inference function.

Alda has a formal semantics [15] and is implemented by building on an object-oriented language (DistAlgo [13, 3] extending Python [18]) and an efficient logic rule system (XSB [19, 20]). It has been used successfully on benchmarks and problems from a wide variety of application domains—including those in OpenRuleBench [6], role-based access control (RBAC) [1, 4], and program analysis—with generally good performance [17]. Our implementation and benchmarks are publicly available [23].

This system demonstration shows how Alda is used for OpenRuleBench benchmarks, the ANSI standard for role-based access control, and program analysis for large Python programs, including persistence support for large datasets, all programmed seamlessly without boilerplate code. For comparisons with related work on rule languages and benchmarking, see [16, 14, 17].

An example

Figure 1 shows an example program in Alda. It is for a small portion of the ANSI standard for role-based access control (RBAC) [1, 4]. It shows the uses (with line numbers in parentheses) of classes and objects, sets and set updates, set comprehensions, and queries using rules, where the rules are defined in a rule set (lines 15-17), explained in the next part.

Note that queries using set comprehensions (e.g., on lines 8, 19, 21) can also be expressed by using rules and inference, even though comprehensions are more widely used. However, only some queries using rules and inference can be expressed by using comprehensions; queries using recursive rules (e.g., on lines 16-17) cannot be expressed using comprehensions.

1   class CoreRBAC:                  # class for Core RBAC component/object
2     def setup():                   # method to set up the object, with no arguments
3       self.USERS, self.ROLES, self.UR := {},{},{}
4                                    # set users, roles, user-role pairs to empty sets
5     def AddRole(role):             # method to add a role
6       ROLES.add(role)              # add the role to ROLES
7     def AssignedUsers(role):       # method to return assigned users of a role
8       return {u: u in USERS | (u,role) in UR}  # return set of users having the role
      ...
9   class HierRBAC extends CoreRBAC: # Hierarchical RBAC extending Core RBAC
10    def setup():
11      super().setup()              # call setup of CoreRBAC, to set sets as in there
12      self.RH := {}                # set ascendant-descendant role pairs to empty set
13    def AddInheritance(a,d):       # to add inherit. of an ascendant by a descendant
14      RH.add((a,d))                # add pair (a,d) to RH
15    rules trans_rs:                # rule set defining transitive closure
16      path(x,y) if edge(x,y)       # path holds for (x,y) if edge holds for (x,y)
17      path(x,y) if edge(x,z), path(z,y)  # ... if edge holds for (x,z) and for (z,y)
18    def transRH():                 # to return transitive RH and reflexive role pairs
19      return infer(path, edge=RH, rules=trans_rs) + {(r,r): r in ROLES}
20    def AuthorizedUsers(role):     # to return users having a role transitively
21      return {u: u in USERS, r in ROLES | (u,r) in UR and (r,role) in transRH()}
      ...
22  h = new(HierRBAC, [])            # create HierRBAC object h, with no args to setup
23  h.AddRole('chair')               # call AddRole of h with role 'chair'
    ...
24  h.AuthorizedUsers('chair')       # call AuthorizedUsers of h with role 'chair'
    ...
Figure 1: An example program in Alda, for Role-Based Access Control (RBAC). In rules trans_rs, the first rule says there is a path from x to y if there is an edge from x to y, and the second rule says there is a path from x to y if there is an edge from x to z and there is a path from z to y. The call to infer queries and returns the set of pairs for which path holds given that edge holds for exactly the pairs in set RH, by doing inference using rules in trans_rs.

Rules with sets, functions, updates, and objects

In Alda, rules are defined in rule sets, each with a name and optional declarations for the predicates in the rules.

ruleset ::= rules name (declarations): rule+
   rule ::= p(arg1, ..., arga) if p1(arg11, ..., arg1a1), ..., pk(argk1, ..., argkak)

In the rule form, p, p1, ..., pk denote predicates, p(arg1, ..., arga) denotes that p holds for its tuple of arguments, and if denotes that its left-side conclusion holds if its right-side conditions all hold. In a rule set, predicates not in any conclusion are called base predicates; the other predicates are called derived predicates.

The key ideas of seamless integration of rules with sets, functions, updates, and objects are:

  1. a predicate is a set-valued variable that holds the set of tuples for which the predicate is true;

  2. queries using rules are calls to an inference function, infer, that computes desired values of derived predicates using given values of base predicates;

  3. values of base predicates can be updated directly as for other variables, whereas values of derived predicates can only be updated by infer; and

  4. predicates and rule sets can be object attributes as well as global and local names, just as variables and functions can.

Declarative semantics of rules are ensured by automatically maintaining values of derived predicates when values of base predicates are updated, by appropriate implicit calls to infer.

For example, in Figure 1, one could use an object field transRH in place of calls to transRH() in AuthorizedUsers(role), use the following rule set instead of trans_rs, and remove transRH().

  rules transRH_rs:              # no need to call infer explicitly
    transRH(x,y) if RH(x,y)
    transRH(x,y) if RH(x,z), transRH(z,y)
    transRH(x,x) if ROLES(x)

Field transRH is automatically maintained at updates to RH and ROLES by implicit calls to infer.

Higher-order, patterns, distributed programming, and more

Higher-order.

Note that having predicates in rules as set-valued variables, e.g., edge, and calling infer to take or return values of set variables, e.g., RH in edge=RH, avoids the need for higher-order predicates or other sophisticated features, e.g., [2], to reuse rules for different predicates in logic languages.

Patterns.

Alda also supports tuple patterns for set elements in set queries (as in DistAlgo [13]) and in queries using rules, e.g., (1,=x,y) in p matches any triple in set p whose first element is 1 and whose second element equals the value of x, and binds y to the third element if such a triple exists.

Distributed programming.

Of course, by building on DistAlgo, Alda also supports distributed programming with distributed processes, message passing, and high-level queries of message histories, e.g., for distributed RBAC [7, 9], also called trust management [5], in decentralized systems.

Declarations for predicates in rules.

Declarations in rules could specify predicate types and scopes, but are designed more importantly for specifying assumptions about predicates being certain, complete, closed, or not [10, 11, 12]. This is to give respective desired semantics for rules with unrestricted negation, quantification, and aggregation.

Python syntax.

Note that the examples discussed use an ideal syntax, while the Alda implementation supports the Python syntax. For example, x := {} is written as x = set() in Python syntax.

Implementation.

The Alda implementation compiles rule sets in rules and queries using infer to XSB rules and queries, and compiles the rest to Python, which calls XSB to do the inference. The current implementation supports primarily Datalog rules, but also handles unrestricted negation by using XSB’s computation of the well-founded semantics [24]. More general forms of rules and queries can be compiled to rules and queries in XSB or other rule systems using the same approach. In general, any efficient inference algorithm and implementation method can be used to compute the semantics of rules and infer.

Future work.

Future work includes (1) support for easy use of different desired semantics, especially with modular use of rules, similar to knowledge units in DA-logic [11]; and (2) efficient implementation with complexity guarantees [8, 21, 22] for computing different desired semantics.

Acknowledgments.

This work was supported in part by NSF under grants CCF-1954837, CCF-1414078, and IIS-1447549 and ONR under grants N00014-21-1-2719, N00014-20-1-2751, and N00014-15-1-2208.

References

  1. ANSI INCITS (2004): Role-Based Access Control. ANSI INCITS 359-2004, American National Standards Institute, International Committee for Information Technology Standards.
  2. Weidong Chen, Michael Kifer & David S. Warren (1993): HiLog: A Foundation for Higher-Order Logic Programming. Journal of Logic Programming 15(3), pp. 187–230, doi:10.1016/0743-1066(93)90039-J.
  3. (2024): DistAlgo. http://distalgo.cs.stonybrook.edu. Accessed July 8, 2024.
  4. David F. Ferraiolo, Ravi Sandhu, Serban Gavrila, D. Richard Kuhn & Ramaswamy Chandramouli (2001): Proposed NIST Standard for Role-Based Access Control. ACM Transactions on Information and Systems Security 4(3), pp. 224–274, doi:10.1145/501978.501980.
  5. Ninghui Li, John C. Mitchell & William H. Winsborough (2002): Design of a Role-Based Trust-Management Framework. In: Proceedings of the 2002 IEEE Symposium on Security and Privacy, pp. 114–130, doi:10.1109/SECPRI.2002.1004366.
  6. Senlin Liang, Paul Fodor, Hui Wan & Michael Kifer (2009): OpenRuleBench: An Analysis of the Performance of Rule Engines. In: Proceedings of the 18th International Conference on World Wide Web. ACM Press, pp. 601–610, doi:10.1145/1526709.1526790.
  7. Yanhong A. Liu (2018): Role-Based Access Control as a Programming Challenge. In: LPOP: Challenges and Advances in Logic and Practice of Programming. https://arxiv.org/abs/2008.07901, pp. 14–17, doi:10.48550/arXiv.2008.07901.
  8. Yanhong A. Liu & Scott D. Stoller (2009): From Datalog Rules to Efficient Programs with Time and Space Guarantees. ACM Transactions on Programming Languages and Systems 31(6), pp. 1–38, doi:10.1145/1552309.1552311.
  9. Yanhong A. Liu & Scott D. Stoller (2018): Easier Rules and Constraints for Programming. In: LPOP: Challenges and Advances in Logic and Practice of Programming. https://arxiv.org/abs/2008.07901, pp. 52–60, doi:10.48550/arXiv.2008.07901.
  10. Yanhong A. Liu & Scott D. Stoller (2020): Founded Semantics and Constraint Semantics of Logic Rules. Journal of Logic and Computation 30(8), pp. 1609–1638, doi:10.1093/logcom/exaa056. Also http://arxiv.org/abs/1606.06269.
  11. Yanhong A. Liu & Scott D. Stoller (2021): Knowledge of Uncertain Worlds: Programming with Logical Constraints. Journal of Logic and Computation 31(1), pp. 193–212, doi:10.1093/logcom/exaa077. Also https://arxiv.org/abs/1910.10346.
  12. Yanhong A. Liu & Scott D. Stoller (2022): Recursive Rules with Aggregation: A Simple Unified Semantics. Journal of Logic and Computation 32(8), pp. 1659–1693, doi:10.1093/logcom/exac072. Also http://arxiv.org/abs/2007.13053.
  13. Yanhong A. Liu, Scott D. Stoller & Bo Lin (2017): From Clarity to Efficiency for Distributed Algorithms. ACM Transactions on Programming Languages and Systems 39(3), pp. 12:1–12:41, doi:10.1145/2994595.
  14. Yanhong A. Liu, Scott D. Stoller, Yi Tong & Bo Lin (2023): Integrating logic rules with everything else, seamlessly. Theory and Practice of Logic Programming 23(4), pp. 678–695, doi:10.1017/S1471068423000108.
  15. Yanhong A. Liu, Scott D. Stoller, Yi Tong & Bo Lin (2023): Integrating logic rules with everything else, seamlessly. Computing Research Repository arXiv:2305.19202 [cs.PL], doi:10.48550/arXiv.2305.19202.
  16. Yanhong A. Liu, Scott D. Stoller, Yi Tong, Bo Lin & K. Tuncay Tekle (2022): Programming with Rules and Everything Else, Seamlessly. Computing Research Repository arXiv:2205.15204 [cs.PL], doi:10.48550/arXiv.2205.15204.
  17. Yanhong A. Liu, Scott D. Stoller, Yi Tong & K. Tuncay Tekle (2023): Benchmarking for Integrating Logic Rules with Everything Else. In: Proceedings of the 39th International Conference on Logic Programming (Technical Communications). Open Publishing Association, pp. 12–26, doi:10.4204/EPTCS.385.3.
  18. Python Software Foundation (2024): Python. http://python.org.
  19. Terrance Swift & David S Warren (2012): XSB: Extending Prolog with tabled logic programming. Theory and Practice of Logic Programming 12(1-2), pp. 157–187, doi:10.1017/S1471068411000500.
  20. Theresa Swift, David S. Warren, Konstantinos Sagonas, Juliana Freire, Prasad Rao, Baoqiu Cui, Ernie Johnson, Luis de Castro, Rui F. Marques, Diptikalyan Saha, Steve Dawson & Michael Kifer (2022): The XSB System Version 5.0.x. http://xsb.sourceforge.net. Latest release May 12, 2022.
  21. K. Tuncay Tekle & Yanhong A. Liu (2010): Precise Complexity Analysis for Efficient Datalog Queries. In: Proceedings of the 12th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, pp. 35–44, doi:10.1145/1836089.1836094.
  22. K. Tuncay Tekle & Yanhong A. Liu (2011): More Efficient Datalog Queries: Subsumptive Tabling Beats Magic Sets. In: Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pp. 661–672, doi:10.1145/1989323.1989393.
  23. Yi Tong, Bo Lin, Yanhong A. Liu & Scott D. Stoller (2018, latest update May 2024): Alda. http://github.com/DistAlgo/alda.
  24. Allen Van Gelder, Kenneth Ross & John S. Schlipf (1991): The Well-Founded Semantics for General Logic Programs. Journal of the ACM 38(3), pp. 620–650, doi:10.1145/116825.116838.