Published: 13th February 2025 DOI: 10.4204/EPTCS.416 ISSN: 2075-2180
The 40th International Conference on Logic Programming (ICLP 24) was held in Dallas, Texas, on October 14-17, 2024, and was co-located with the 17th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 24), held on October 11-14, 2024.
This volume contains Technical Communications in Section 1, papers from the affiliated Doctoral Consortium in Section 2, and, in Section 3, abstracts from the ICLP 24 invited talks and tutorials.
ICLP 24 technical communications include several types of contributions: regular papers, short papers, and extended abstracts of regular papers. Under the rubric of technical communications we also include extended abstracts of system demos and of recently published research. The high quality of all contributions has been ensured by triple reviewing; apart from extended abstracts of recently published research, all contributions in this volume are original work.
Technical communications often represent research that is very new, and longer versions of extended abstracts are sometimes published in other venues. However, the impact of technical communications can be quite high. In fact, the Alain Colmerauer 10-year Test of Time Award for ICLP 2024 was given to a technical communication, "Clingo = ASP + Control: Preliminary Report", by Martin Gebser, Roland Kaminski, Benjamin Kaufmann and Torsten Schaub, originally published as part of the Proceedings of the Thirtieth International Conference on Logic Programming (2014).
We loosely group the technical communications in this volume as follows.
Given the phenomenal capabilities of recent Large Language Models (LLMs) such as ChatGPT and Llama, a number of technical communications in ICLP 24 explored how logic programming can be combined with LLMs in meaningful ways, or included within neuro-symbolic approaches.
On LLM-generated Logic Programs and their Inference Execution Methods by Paul Tarau
Visual Graph Question Answering with ASP and LLMs for Language Parsing by Jakob Johannes Bauer, Thomas Eiter, Nelson Higuera Ruiz and Johannes Oetsch
LLM+Reasoning+Planning for Supporting Incomplete User Queries in Presence of APIs by Sudhir Agarwal, Anu Sreepathy, David H. Alonso and Prarit Lamba
Logical Lease Litigation: Prolog and LLMs for Rental Law Compliance in New York by Sanskar Sehgal and Yanhong A. Liu
LP-LM: No Hallucinations in Question Answering with Logic Programming by Katherine Wu and Yanhong A. Liu.
Neuro-symbolic Contrastive Learning for Cross-domain Inference by Mingyue Liu, Ryo Ueda, Zhen Wan, Katsumi Inoue and Chris Willcocks
Another active research topic investigates the role of logic programming in autonomous, distributed and adaptive dynamic systems.
Architecture for Simulating Behavior Mode Changes in Norm-Aware Autonomous Agents by Sean Glaze and Daniela Inclezan
Policies, Penalties, and Autonomous Agents (Extended Abstract) by Esteban Guerrero and Juan Carlos Nieves
Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for Autonomous Vehicles by Galileo Sartor, Adam Wyner and Giuseppe Contissa
Simulating Supply-Chain Contract Execution: A Multi-Agent Approach (Extended Abstract) by Long Tran, Tran Cao Son, Dylan Flynn and Marcello Balduccini
Modular Stochastic Rewritable Petri Nets by Lorenzo Capra
An important advantage of logic compared to neural models is that logical reasoning can be explained in human terms using different techniques, including causality and argumentation. This was the topic of three papers in ICLP 24.
Semantic Argumentation using Rewriting Systems (Extended Abstract) by Vineel Tummala and Daniela Inclezan
Data2Concept2Text: An Explainable Multilingual Framework for Data Analysis Narration by Flavio Bertini, Alessandro Dal Palù, Federica Zaglio, Francesco Fabiano and Andrea Formisano
Counterfactual Explanations as Plans by Vaishak Belle
As with other recent ICLPs, the topic of Answer Set Programming (ASP) was well-represented. This year, the ASP topics for technical communications included the abductive capabilities of ASP for Visual Query Answering (VQA), the regular models of an ASP program, the use of ASP for meta-reasoning with ontologies, the relations among epistemic extensions of ASP, and using ASP to drive interfaces to other ASP systems.
Abduction of Domain Relationships from Data for VQA by Al Mehdi Saadat Chowdhury, Paulo Shakarian and Gerardo I. Simari
Graphical Conditions for Existence, Unicity and Multiplicity of Non-Trivial Regular Models by Van-Giang Trinh, Belaid Benhamou, Sylvain Soliman and Francois Fages
Efficient OWL2QL Meta-reasoning Using ASP-based Hybrid Knowledge Bases by Haya Majid Qureshi and Wolfgang Faber
Pearce's Characterisation in an Epistemic Domain (Extended Abstract) by Ezgi Iraz Su
ASP-driven User-interaction with Clinguin by Alexander Beiser, Susana Hahn and Torsten Schaub
New perspectives on types and unification algorithms played a part in several technical communications. These papers concerned Prolog as well as other logic-oriented systems and frameworks, including the Coq proof assistant.
A Prolog Program for Bottom-up Evaluation by David S. Warren
Regular Typed Unification by João Barbosa, Mário Florido and Vitor Santos Costa
Order-Sorted Intensional Logic: Expressing Subtyping Polymorphism with Typing Assertions and Quantification over Concepts by Djordje Marković and Marc Denecker
A Coq Formalization of Unification Modulo Exclusive-Or by Yichi Xu, Daniel J. Dougherty and Rose Bohrer
Several technical communications described applications that were developed using annotation logic, abduction, constraint programming, and ASP.
Geospatial Trajectory Generation via Efficient Abduction: Deployment for Independent Testing by Divyagna Bavikadi, Dyuman Aditya, Devendra Parkar, Paulo Shakarian, Graham Mueller, Chad Parvis and Gerardo I. Simari
Towards Mass Spectrum Analysis with ASP (Extended Abstract) by Nils Küchenmeister, Alex Ivliev and Markus Krötzsch
Monitoring and Scheduling of Semiconductor Failure Analysis Labs (Extended Abstract) by Elena Mastria, Domenico Pagliaro, Francesco Calimeri, Simona Perri, Martin Pleschberger and Konstantin Schekotihin
Declarative AI design in Unity using Answer Set Programming by Denise Angilica, Giovambattista Ianni, Francesco Pacenza and Jessica Zangari. (Extended Abstract of recently published research.)
stableKanren: Integrating Stable Model Semantics with miniKanren by Xiangyu Guo, James Smith and Ajay Bansal. (Extended Abstract of recently published research.)
Alda: Integrating Logic Rules with Everything Else, Seamlessly by Yanhong A. Liu, Scott Stoller, Yi Tong and Bo Lin. (System Demonstration.)
The Doctoral Consortium of ICLP 24 was jointly organized with the 17th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 24). Applications were submitted by 12 PhD students from universities in the US (5), Germany (2), Belgium, Canada, Czechia, Italy and Singapore. Their research summaries received three reviews each by senior members of the ICLP 24 and LPNMR 24 research communities, giving critical yet constructive feedback on the student contributions.
Generating Causally Compliant Counterfactual Explanations using ASP by Sopam Dasgupta
Bridging Deep Learning and Logic Programming for Explainability through ILP by Talissa Dreossi
Computational methods for Dynamic Answer Set Programming by Susana Hahn
Relating Answer Set Programming and Many-sorted First-order Logic by Zachary Hansen
Answer Set Counting and its Applications by Mohimenul Kabir
Logical Foundations of Smart Contracts by Kalonji Kalala
Commonsense Reasoning-Aided Autonomous Vehicle Systems by Keegan Kimbrell
A Category-Theoretic Perspective on Approximation Fixpoint Theory by Samuele Pollaci
Hybrid Answer Set Programming: Foundations and Applications by Nicolas Rühling
Autonomous Task Completion Based on Goal-directed Answer Set Programming by Alexis Tudor
Early Validation of High-level Requirements on Cyber-Physical Systems by Ondřej Vašíček
Reliable Conversational Agents under ASP Control that Understand Natural Language by Yankai Zeng
The Autumn School on Logic Programming, held in conjunction with the Doctoral Consortium, featured four tutorials presented by the following senior researchers.
Tran Cao Son, New Mexico State University. Las Cruces, NM.
Y. Annie Liu, Stony Brook University. Stony Brook, NY.
Manuel Hermenegildo, Technical University of Madrid. Madrid, Spain.
Torsten Schaub and Susana Hahn, University of Potsdam. Potsdam, Germany.
These Autumn School presenters also shared their experiences and ideas with the PhD students during a mentoring lunch event on October 13.
ICLP 24 had four invited talks and two invited tutorials.
Logic Programming and Logical Algorithmics
Moshe Vardi
Rice University. Houston, TX.
Moshe Vardi presented his talk in a plenary session attended by ICLP 24 participants together with participants of the 17th International Conference on Logic Programming and Non-monotonic Reasoning.
Logic programming, born around 1972, expresses a program as a set of Horn clauses. Computation is then performed by applying logical reasoning to that set of clauses. The approach was eventually described as "Algorithm=Logic+Control". Another approach to logic and algorithms was developed by Codd in the early 1970s. In his approach, the problem domain is expressed as a relational database, and the problem is expressed as a first-order formula, called a "query". Computation is performed by a meta-algorithm, the query-evaluation algorithm. In this talk, I will describe this approach, which I call Logical Algorithmics, in detail. I will show how this approach yielded multi-variate computational-complexity theory, which offers a more nuanced approach to complexity analysis. It also enabled the development of model-checking algorithms, which are today used in industrial semiconductor design tools.
Logic rules and commonsense in uncertain times:
A simple unified semantics for reasoning with assurance and
agreement
Y. Annie Liu
Stony Brook University. Stony Brook, NY.
Complex reasoning problems are most clearly and easily specified using logical rules, but require recursive rules with aggregation such as count and sum and more for practical applications. Unfortunately, the meaning of such rules has been a significant challenge, with many disagreeing semantics, baffling commonsense for rigorous logic.
This tutorial examines a simple unified semantics for reasoning with assurance and agreement, and consists of three main parts:
An introduction to complex reasoning problems expressed using logic rules, with recursion, negation, quantification, and aggregation; the key idea of a simple unified semantics, supporting simple expression of different assumptions; and how it unifies different prior semantics.
An overview of the precise rule language; the formal semantics, called Founded Semantics and Constraint Semantics, or Founded+Constraint Semantics (FCS) for short here, supporting efficient and precise inference over aggregation even with approximation; and the properties of the semantics.
An exploration of a wide range of challenging examples, including the well-known problem of company control and extended win-not-win games. FCS is simple and matches the desired results in all cases.
Additionally, we discuss how to combine logic/rigorous languages and LLMs/ML for problem solving and question answering with assurance, where a simple unified semantics is critical for its generality, power, and ease of use.
Linear Algebraic Approaches to Logic Programming
Katsumi Inoue
National Institute of Informatics. Chiyoda, Japan.
Integration of symbolic reasoning and machine learning is important for robust AI. Realizing symbolic reasoning with algebraic methods is a promising way to bridge symbolic reasoning and machine learning, since algebraic data structures have been used in machine learning.
To this end, Sakama, Inoue and Sato have defined notable relations between logic programming and linear algebra and have proposed algorithms to compute logic programs numerically using tensors. This work has been extended in various ways: to compute supported and stable models of normal logic programs, to enhance the efficiency of computation using sparse methods, and to enable abduction for abductive logic programming. A common principle in this approach is to represent logical formulas as vectors, matrices, or tensors, and to apply linear algebraic operations on these elements to perform logic programming computation. Partial evaluation can be realized in parallel and by self-multiplication, showing the potential for exponential speedup.
Furthermore, the idea of representing logic programs as tensors and matrices and transforming logical reasoning into numeric computation can form the basis of differentiable methods for learning logic programs.
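As a purely illustrative sketch of this idea (ours, not taken from the tutorial, and restricted to a definite program in which every rule has at most one body atom), consider $P=\{p \leftarrow q,\ q \leftarrow r,\ r \leftarrow \top\}$ with atom order $(p,q,r)$. Encoding an interpretation as a 0/1 vector $v_k$, extended with a component fixed to 1 for $\top$, one application of the immediate-consequence operator becomes a thresholded matrix-vector product:
$ M=\left(\begin{array}{cccc}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right), \qquad v_{k+1}=\min\left(\mathbf{1},\; M \left(\begin{array}{c} v_k\\ 1\end{array}\right)\right) $
Starting from $v_0=(0,0,0)^\top$, the iteration yields $(0,0,1)^\top$, $(0,1,1)^\top$, and finally the fixpoint $(1,1,1)^\top$, i.e., the least model $\{p,q,r\}$. The tensor encodings discussed in the tutorial additionally handle conjunctive bodies, negation, and abduction; this toy example only conveys the flavor of replacing logical inference by numeric operations.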
The Anatomy of the SICStus Finite-Domain Constraint Solver
Mats Carlsson
RISE Research Institutes of Sweden. Kista, Sweden.
Constraint programming (CP) is a powerful problem-solving paradigm with roots in combinatorics, linear programming, logic programming, and AI. A notable development in the 1980s was the fusion of constraint solving with logic programming into constraint logic programming (CLP), which extended Prolog by integrating constraints into its framework. To extend a Prolog system with constraints, a large number of algorithms and data structures must be added for tasks like domain representation, constraint propagation, search, a whole menagerie of filtering algorithms for specific constraints, and peaceful coexistence with the Prolog virtual machine and runtime system.
This talk focuses on the constraint programming support in SICStus Prolog: the key extensions to Prolog that were necessary, details of the solver architecture, and a discussion of design choices. I will try to put the work in a historical perspective and also say something about programming interfaces, use cases, and my outlook on the future of CP.
Encoding High-Level Constraints into SAT and MIP
Neng-Fa Zhou
CUNY Brooklyn College and Graduate Center. Brooklyn, NY.
Picat provides four solver modules, including CP, SAT, MIP, and SMT, for modeling and solving constraint satisfaction and optimization problems (CSPs). This tutorial introduces the inner workings of Picat's SAT and MIP modules. PicatSAT encodes constraints into SAT based on unary and binary encodings of domain variables. PicatSAT adopts many optimizations from CP systems, language compilers, and hardware design systems for encoding primitive constraints into compact and efficient SAT encodings. PicatSAT also employs some novel algorithms for decomposing global constraints, especially graph constraints. PicatMIP, while generally not as competitive as PicatSAT on MiniZinc and XCSP benchmarks, is a good supplementary solver for CSPs. PicatMIP adopts the well-known big-M method for linearizing nonlinear constraints, and employs some optimizations for translating special nonlinear constraints into linear ones.
How Structure Shapes Logic Programming and Counting-Based Reasoning
Markus Hecher
Massachusetts Institute of Technology. Cambridge, MA.
When can we efficiently solve combinatorially hard problems? In practice, state-of-the-art solvers can tackle instances with millions of variables, creating a significant gap between empirical performance and theoretical limits. A key factor in bridging this gap is understanding the structural properties of problem instances. In this talk, we explore how to efficiently leverage these structures, with a particular focus on the role of treewidth and the answer set programming framework. We establish tight runtime upper and lower bounds, grounded in reasonable complexity-theoretic assumptions. Special attention is given to counting-based reasoning, a computationally intensive task where structure plays a critical role. Through empirical results, we demonstrate how structural insights can drastically improve the efficiency of counting in combinatorial problem-solving. This emphasizes the importance of theoretical studies and their practical applications, showcasing how we bring theory and practice closer together.
We would like to begin by thanking everyone who together organized ICLP 24 and contributed to its success. These include
General Chair: Gopal Gupta, University of Texas at Dallas
Honorary General Chair: Doug DeGroot, University of Texas at Dallas
Workshop Chair: Joaquín Arias, Universidad Rey Juan Carlos
Webmasters: Abhiramon Rajasekharan, University of Texas at Dallas; Varad Abhyankar, University of Texas at Dallas; and Keegan Kimbrell, University of Texas at Dallas.
Local Organizing Committee: Varad Abhyankar, Sopam Dasgupta, Doug DeGroot, Serdar Erbatur, Keegan Kimbrell, Richard Min, Parth Padalkar, Abhiramon Rajasekharan, Elmer Salazar, Alexis Tudor, Jey Veerasamy, Yankai Zeng.
Conference Administrators: Maria Simplicio, Norma Richardson, Erich Finch, Alphonse Coleman, Vicente Torres.
The ICLP 24 program committee consisted of representatives of logic programming groups in Australia, Austria, Canada, China, the Czech Republic, Finland, Germany, Greece, Israel, Italy, Japan, the Netherlands, Portugal, Spain, Sweden, Turkey, the United Kingdom, and the United States. The issue editors feel fortunate to have had the close involvement of a responsive and energetic program committee. The program committee members were as follows.
Salvador Abreu
Mario Alviano
Nicos Angelopoulos
Joaquín Arias
Marcello Balduccini
Mutsunori Banbara
Chitta Baral
Roman Barták
Elena Bellodi
Bart Bogaerts
Roberta Calegari
Francesco Calimeri
Manuel Carro
Angelos Charalambidis
Michael Codish
Stefania Costantini
Alessandro Dal Palù
Marina De Vos
Marc Denecker
Agostino Dovier
Thomas Eiter
Esra Erdem
Wolfgang Faber
Jorge Fandinno
Johannes K. Fichte
Fabio Fioravanti
Andrea Formisano
Gerhard Friedrich
Marco Gavanelli
Martin Gebser
Laura Giordano
Ricardo Gonçalves
Gopal Gupta
Markus Hecher
Giovambattista Ianni
Daniela Inclezan
Tomi Janhunen
Matthias Knorr
Joao Leite
Michael Leuschel
Yuliya Lierler
Vladimir Lifschitz
Francesca Alessandra Lisi
Yanhong A. Liu
Marco Maratea
Viviana Mascardi
Laurent Michel
Jose F. Morales
Johannes Oetsch
Manuel Ojeda-Aciego
Simona Perri
Enrico Pontelli
Francesco Ricca
Fabrizio Riguzzi
Ricardo Rocha
Chiaki Sakama
Vitor Santos-Costa
Zeynep G. Saribatur
Torsten Schaub
Konstantin Schekotihin
Tom Schrijvers
Tran Cao Son
Mohan Sridharan
Paul Tarau
Hans Tompits
Mirek Truszczynski
German Vidal
Concepcion Vidal
Alicia Villanueva
Kewen Wang
David Warren
Felix Weitkämper
Jan Wielemaker
Stefan Woltran
Roland Yap
Jia-Huai You
Zhizheng Zhang
Yuanlin Zhang
Neng-Fa Zhou
We also thank our external reviewers:
João Barbosa
Manuel Borroto
Geoffrey Churchill
Michele Collevati
Daniele Theseider Dupre
Müge Fidan
Tobias Geibinger
Michael Gelfond
Kenji Hashimoto
Antonio Ielo
Ryan Kepler
Matthias König
Elena Mastria
Ciaran McCreesh
Samuele Pollaci
Maurizio Proietti
Abhiramon Rajasekharan
Javier Romero
Takehide Soh
Gioacchino Sterlicchio
Robbe Van den Eede
Linde Vanbesien
Huaduo Wang.
We are grateful to the University of Texas at Dallas, on whose campus the conference was organized. The following organizations within the University provided direct financial support: the Department of Computer Science, the Erik Jonsson School of Engineering and Computer Science, the Office of the Vice President for Research, and the Center for Applied AI and Machine Learning.
Additionally, ICLP 2024 received funds for student support from the Artificial Intelligence Journal and the National Science Foundation.
We introduce a framework for enabling policy-aware autonomous agents to reason about penalties for non-compliant behavior, and act accordingly. We use the AOPL language for policy specification and ASP for reasoning about policies and penalties. We build upon existing work by Harders and Inclezan on simulating the behavior of policy-aware autonomous agents and run tests on two different domains. We conclude that our framework produces higher quality plans than the previous approach.
In this paper we explore autonomous agents that operate in dynamic environments governed by policies or norms. We introduce a framework that enables these policy-aware agents to reason about potential penalties for non-compliance and generate suitable plans for their goals. Unlike some of the previous work that only focused on compliant agents (e.g., [7,1]), we believe that it is important to study the different nuances of non-compliant agent behavior for two reasons. First, autonomous agents may be tasked to accomplish high-stakes goals (e.g., assist in rescue operations) that may only be achievable if one or more non-compliant actions are executed as part of the plan. Second, this can be useful to policy makers if policy-aware agents are used to model human behavior, since humans do not always comply with norms. In our proposed framework, policies are specified in the Authorization and Obligation Policy Language (AOPL) by Gelfond and Lobo [3], and implemented in Answer Set Programming (ASP). We expand AOPL to enable the representation of, and reasoning about, penalties that may be incurred for non-compliance with a policy. Previous work on policy-aware autonomous agents [6,4] defined various attitudes towards compliance with policies, called behavior modes. Non-compliant behavior was acceptable as part of the Risky behavior mode. However, in that framework the different plans containing non-compliant actions were ranked solely based on the length of the plan. Instead, our framework is capable of distinguishing between different non-compliant plans via penalties, so that an agent that disobeys certain policies to achieve its goal can do so with minimum repercussions. Based on our experiments, we conclude that our framework produces better quality plans. At least for some domains, our approach is also more efficient.
Gelfond and Lobo [3] designed the Authorization and Obligation Policy Language AOPL for specifying policies for an intelligent agent acting in a dynamic environment. A policy is a collection of authorization and obligation statements. An authorization indicates whether an agent's action is permitted or not, and under which conditions. An obligation describes whether an agent is obligated or not obligated to perform a specific action under certain conditions. An AOPL policy assumes that the agent's environment is described in an action language such as ALd [2]. An ALd system description defines the domain's transition diagram, whose states are complete and consistent sets of fluent literals and whose arcs are labeled by actions. The signature of an AOPL policy includes (a) the signature of the dynamic system description (which consists of predicates for sorts, fluents, and actions) and (b) predicates permitted for authorizations, obl for obligations, and prefer for specifying preferences between authorizations or obligations. An AOPL policy P is a finite collection of statements of the form:
$ \begin{array}{llll} & \ \ permitted\left(e\right) & \textbf{ if } \ cond & (1a)\\ & \neg permitted\left(e\right) & \textbf{ if } \ cond & (1b)\\ & \ \ obl\left(h\right) & \textbf{ if } \ cond & (1c)\\ & \neg obl\left(h\right) & \textbf{ if } \ cond & (1d)\\ d: \textbf{normally } & \ \ permitted(e) & \textbf{ if } \ cond & (1e)\\ d: \textbf{normally } & \neg permitted(e) & \textbf{ if } \ cond & (1f)\\ d: \textbf{normally } & \ \ obl(h) & \textbf{ if } \ cond & (1g)\\ d: \textbf{normally } & \neg obl(h) & \textbf{ if } \ cond & (1h)\\ & \ \ \textit{prefer}(d_i, d_j) & & (1i) \end{array} $ where $e$ is an elementary action; $h$ is a happening (i.e., an elementary action or its negation); $cond$ is a set of atoms of the signature, except for atoms containing the predicate $prefer$; $d$ appearing in (1e)-(1h) denotes a defeasible rule label; and $d_i$, $d_j$ in (1i) refer to distinct defeasible rule labels from $P$. Rules (1a)-(1d) encode strict policy statements, while rules (1e)-(1h) encode defeasible statements (i.e., statements that may have exceptions). Rule (1i) captures priorities between defeasible statements only.
We extend the AOPL syntax by a new type of statement for penalties:
$penalty(r, p)\ \textbf{if}\ cond_p$ where $r$ is the label of the prohibition or obligation rule for which the penalty is specified, $p$ stands for the number of penalty points imposed if rule $r$ is broken, and $cond_p$ is a collection of static literals. As in Inclezan's work [5], we assume that all AOPL rules are labeled, including strict ones, which is not the case in the original definition of the language. For instance, here is a strict policy rule labeled $r6(L)$ and its associated 3-point penalty. The rule says that the agent is obligated to stop at a location $L$ whenever there are pedestrians crossing at that location:
$ \begin{array}{l} r6(L): obl(stop(L)) \ \ \textbf{if} \ \ pedestrians\_are\_crossing(L)\\ penalty(r6(L), 3) \end{array} $
Multiple penalty values can be associated with the same rule, based on different gravity levels. Let's consider the following defeasible policy rule $r1(L_1, L_2, S, S_1)$, saying that normally one is not permitted to exceed the speed limit by more than 5 mph if the speed limit is under 55 mph:
$ \begin{array}{lrl} r1(L_1, L_2, S, S_1) : & \textbf{normally} & \neg permitted(drive(L_1, L_2, S)) \\ & \textbf{if} & speed\_limit(S_1), S > S_1 + 5, S_1 < 55 \end{array} $The various levels of penalties for 1, 2, and 3 points respectively are assigned to the rule in AOPL as:
$ \begin{array}{lll} penalty(r1(L_1, L_2, S, S_1),1) & \textbf{if} & S - S_1 < 10.\\ penalty(r1(L_1, L_2, S, S_1),2) & \textbf{if} & S - S_1 >= 10, S - S_1 < 20.\\ penalty(r1(L_1, L_2, S, S_1),3) & \textbf{if} & S - S_1 >= 20. \end{array} $
Next, we develop ASP rules that can identify the penalties an agent incurs at each time step and specify which policy rule has been violated. We do so by introducing a predicate add_penalty(r, p, i), which says that a penalty of p points should be added to the total for breaking policy rule r at time step i. This predicate's definition addresses three scenarios: (1) a prohibited action is included in the plan; (2) a required action is missing from the plan; and (3) an action that the agent is obligated not to perform is included in the plan.
$ \begin{array}{l} add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, \neg permitted(E)), \ occurs(E, I),\ penalty(R, P)\\ add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, obl(E)), \ \neg occurs(E, I), \ penalty(R, P)\\ add\_penalty(R, P, I) \ \leftarrow \ rule(R), \ holds(R, I), \ head(R, obl(\neg E)), \ occurs(E, I), \ penalty(R, P) \end{array} $ where $R$ is a rule, $I$ is a time step, $E$ is an elementary agent action, and $P$ is a penalty.
The overall penalty for a plan is captured by the predicate cumulative_penalty defined using the #sum aggregate of CLINGO, and similarly for cumulative time to complete a plan:
$ \begin{array}{lll} cumulative\_penalty(N) & \leftarrow & \#sum\{P, R, I: add\_penalty(R, P, I)\} = N \end{array} $
An agent can either prioritize minimizing the cumulative penalty or minimizing the cumulative time when searching for an optimal plan.
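For instance (a minimal clingo-style sketch building on the predicates add_penalty/3 and occurs/2 above; the priority levels are illustrative and not necessarily the authors' exact encoding), the two optimization modes can be expressed with prioritized #minimize directives:

% minimize the cumulative penalty at the higher priority level 2,
% then the plan completion time at priority 1; swapping the priorities
% yields an agent that minimizes time first
#minimize { P@2,R,I : add_penalty(R,P,I) }.
makespan(T) :- T = #max { I : occurs(_,I); 0 }.
#minimize { T@1 : makespan(T) }.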
Supply chains exhibit complex dynamics and intricate dependencies among their components, whose understanding is crucial for addressing the challenges highlighted by recent global disruptions. This paper presents a novel multi-agent system designed to simulate supply chains, linking reasoning about dynamic domains and multi-agent systems to reasoning about the high-level primitives of the NIST CPS Framework. Our approach synthesizes existing research on supply chain formalization and integrates these insights with multi-agent techniques, employing a declarative approach to model interactions and dependencies. The simulation framework models a set of autonomous agents within a partially observable environment, and whose interactions are dictated by contracts. The system dynamically reconciles agents' actions, assessing their feasibility and consequences. Based on the state of the domain, the simulation framework also draws conclusions about the high-level notions of requirements and concerns of the NIST CPS Framework, which provide a uniform and domain-agnostic vocabulary for the understanding of such complex systems as supply chains.
Building on our previous work on formalizing supply chains and the underlying contracts [2], this paper introduces a simulation of supply-chain dynamics that relies on a view of the supply chain as a multi-agent system. Central to our approach is the use of formal contracts, which define the obligations and interactions between agents within the simulation environment. Our simulation integrates an additional knowledge layer based on the National Institute of Standards and Technology's Cyber-Physical Systems (NIST CPS) Framework [1]. The CPS Framework makes it possible to link the low-level view of contracts to the high-level aspects of stakeholders' requirements and concerns about the supply chain as a whole. By integrating a multi-agent simulation with the structured approach of the NIST CPS Framework, we aim to capture the nuanced interplay of contractual obligations, agent behaviors, and their ramifications on the supply chain, providing insights into potential points of failure and enabling strategies to improve resilience. The agents and the simulation environment are modeled using declarative techniques through Answer Set Programming (ASP) [3]. This approach allows for a clear specification of the logic governing agent behaviors and their contractual interactions. This paper presents not only the theoretical underpinnings of our simulation model but also a practical use case that illustrates its potential. Through these simulations, stakeholders can better understand the critical dependencies within their supply chains, evaluate the robustness of contractual arrangements, and explore strategies to enhance overall resilience.
Two main components of the system are the environment simulator and the agent program. The code of the system is available on GitHub at github.com/ltran1612/Research_CPS_SupplyChain.
Environment Simulator. The environment simulator consists of two main parts: the controller and the reasoning engine. The reasoning engine is a logic program that takes a state (of the environment) and a set of actions and determines whether the actions can be executed; if they can, it computes the resulting state of the world. The environment simulator works with the global domain, denoted by Denv, which is the union of the domains of all agents. The main portion of the code of the reasoning engine is for computing the effects of a set of actions on a state of the environment and is similar to the code for computing a plan in [5]. The controller is responsible for all the communication between the environment and the agents (e.g., receiving the actions that agents execute, informing agents about changes in their local states). Besides disallowing the execution of incompatible actions, the simulator can also disrupt the execution of a supply chain by randomly disallowing the execution of some action. This forces the agents whose actions were not executed to replan with the new local state.
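As a rough illustration of the kind of effect computation performed by the reasoning engine, here is a generic clingo-style sketch with hypothetical predicate names (it is not the actual code of this system or of [5]):

% holds(F,0): current state; occurs(A,0): actions submitted by the agents;
% holds(F,1): resulting state computed by the engine
holds(stock(vehicle_parts,0), 0).
occurs(produce(vehicle_parts,9000), 0).
adds(produce(vehicle_parts,N), stock(vehicle_parts,N)) :- occurs(produce(vehicle_parts,N), 0).
removes(produce(vehicle_parts,N), stock(vehicle_parts,0)) :- occurs(produce(vehicle_parts,N), 0).
% direct effects of executed actions
holds(F,1) :- occurs(A,0), adds(A,F).
% inertia: a fluent persists unless some executed action removes it
holds(F,1) :- holds(F,0), not removed(F,0).
removed(F,0) :- occurs(A,0), removes(A,F).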
Agent Program. Each agent program consists of the following components: the action domain, the set of rules for reasoning about concerns of the agent, the set of rules for reasoning about the agent's actions as well as planning, and a controller. The controller is responsible for the communication of the agent with the environment. The controller's behaviour is described by the pseudocode given in Algorithm 1. The set of rules for reasoning about the agent's actions and planning is similar to the code for planning as described in [5].
Require: agent ID, action domain, set of contracts
register ID with the environment and wait for acknowledgement
send the action domain to the environment and wait for acknowledgement
step = 0
generate a plan p for the set of contracts
while true do
send p[step] (the step-th action of p) to environment
wait for response (locals - the local state) from the environment
if locals ≠ ⊥ then ▷ all actions were executed successfully
update the current state with locals
step = step + 1
else ▷ some action cannot be executed
generate a new plan p for the agent
step = 0
end if
end while
Communications. We used the publish/subscribe architecture (the MQTT protocol specifically, see [6] for details) to facilitate the communication between agents and the environment. Each entity sends and receives messages by topics through a publish/subscribe broker component (middleman) [4]. Communication between an agent (with a unique AgentID) and the environment is done using two topics: "env/AgentID" and "for/AgentID". The former is used by the agent to send messages, and the latter to receive messages from the environment.
As a case study, we leveraged a dataset developed by the Observatory of Economic Complexity (OEC, https://oec.world/en), an organization that collects and analyzes international trade data. The dataset contains data about international imports and exports, from which we extracted part of an automotive supply chain. The scenario involves 8 agents with 7 contracts containing a total of 21 clauses, 8 supply chain requirements, and 5 concerns. An example of a contract clause from the scenario is: C1: A responsible_for produced(vehicle_parts, 9K) when by_week 4. Agent A is called speedy_auto_parts. The private parts of the contract map C1 to various requirements, addressing concerns of the CPS ontology. An example is: C1: material-safe-for-production. In this scenario, each agent can perform 4 types of actions: produce, deliver, pay, and receive. Each agent also keeps track of relevant fluents, like the total amount of payments made. The corresponding domains are encoded using a template. For example, the action produce of A and related fluents are specified by rules including:
produce_item(speedy_auto_parts, vehicle_parts).
action(Agent, produce, (Item, Amount)) :- produce_item(Agent, Item), number(Amount).
causes(Agent, produce, Value, produced, increase, ()) :- action(Agent, produce, Value).
A sample run of the simulator with the OEC agents progresses as follows. At first, the agents register with the environment and send their information (action domains and initial state) to the environment. At each step i, the agents send to the environment the actions they plan to execute. The environment computes the next state, sends back to the agents their local states, and waits for another set of actions. For example, at step 0, A has 0 vehicle_parts and plans to take the action that produces 9,000 parts. The global state of the environment records at step 1 that A now has 9,000 vehicle_parts, which is reflected in the local state of A as well. At the end of each iteration, the system determines which clauses are satisfied. For example, at step 0 no clauses are satisfied; at step 1, clause C1 is satisfied. In turn, the system uses this information to infer which requirements and concerns are satisfied.
In this paper, we presented a novel multi-agent simulation framework designed to address the complexities and challenges inherent in modern supply chains. We demonstrate that standard ASP programs, such as planning or diagnosis modules, can be integrated into a system for monitoring contract executions. By integrating the NIST CPS Framework with advanced contract formalization techniques, we have developed a robust system capable of simulating diverse supply chain scenarios and assessing the impact of various types of disruptions. Our evaluation on a realistic case study demonstrates that the framework not only enhances the understanding of supply chain dynamics but also provides actionable insights into improving resilience and reliability. Understandably, a system utilizing ASP technology inherits ASP's known issues (e.g., grounding, scalability). However, the prototype works well on the use cases and appears acceptable to their owner. Identifying the limits of the current system (e.g., in terms of the complexity of the domains or contracts it can handle) is an interesting issue and is our immediate future work.
Acknowledgement. Portions of this publication and research effort are made possible through the help and support of NIST via cooperative agreement 70NANB21H167.
In this article, we introduce a general framework for structured argumentation that provides consistent and well-defined justifications both for conclusions that can be inferred with certainty and for those that cannot, which we call semantic and NAF-arguments, respectively. We propose so-called semantic argumentation, which guarantees well-known principles for quality in structured argumentation and is able to generate semantic and NAF-arguments: those whose conclusion atoms are semantically interpreted as true, and those whose conclusion is assumed to be false. The framework is defined on the set of all logic programs in terms of rewriting systems based on a confluent set of transformation rules, the so-called Confluent Logic Programming Systems, making this approach a general framework. We implemented our framework as an open-source semantic argumentation solver.
We propose a general argument construction based on the partial interpretation of programs using different families of logic programming semantics induced by rewriting systems functions [6]. Rewriting rules are used to replace parts of a logic program based on the concept of a normal form, which is the least expression of a program that cannot be rewritten any further [9]. For example, given a program with the single rule \(innocent(x) \leftarrow \ not\ guilty(x)\), current structured argumentation approaches [10] generate only the consistent argument \(\langle \underbrace{\{ innocent(x) \leftarrow \ not \ guilty(x) \}}_{Support}, \ \underbrace{innocent(x)}_{Conclusion} \rangle\), expressing that person \(x\) is innocent if \(x\) cannot be proved guilty. However, domain applications that need the generation of argument-based explanations also require structured and well-defined reasons why \(x\) is not guilty (\(not \ guilty(x)\)). We emphasize the importance of investigating computational mechanisms that can also build arguments justifying conclusions based on the atoms that are inferred as false, e.g., to state that there is certainty that the guiltiness of \(x\) is false (there is no evidence), and therefore \(x\) must be innocent, i.e., \(\langle \underbrace{\{ innocent(x) \leftarrow \ not \ guilty(x) \}}_{Support}, \ \underbrace{not \ guilty(x)}_{Conclusion} \rangle\). These types of arguments have been less explored in formal argumentation theory, except for assumption-based argumentation (ABA) [8].
We use propositional logic with the connectives \(\wedge,\leftarrow,\;not\), and \(\top\), where \(\wedge\) and \(\leftarrow\) are 2-place connectives, \(not\) is a 1-place connective, and \(\top\) is a 0-place connective. The negation symbol \(not\) is regarded as the so-called negation as failure (NAF). We follow standard logic programming syntax, e.g., [5]; for lack of space we do not repeat basic and well-established definitions.
An interpretation of the signature \(\mathcal{L}_P\) is a function from \(\mathcal{L}_P\) to {false, true}. A partial interpretation of \(\mathcal{L}_P\) is a pair \(\langle I_1, I_2\rangle\) with \(I_1 \cup I_2 \subseteq \mathcal{L}_P\). We use \(\texttt{SEM}(P) = \langle P^{true}, P^{false} \rangle\), where \(P^{true} := \{ p \mid p \leftarrow \top \in P\}\) and \(P^{false} := \{ p \mid p \in \mathcal{L}_P \setminus \texttt{HEAD}(P)\}\); \(\texttt{SEM}(P)\) is also called the model of \(P\) [6]. We use three-valued semantics that are characterized by rewriting systems following a set of Basic Transformation Rules for Rewriting Systems (see details in [6]); those rules are named RED+, RED-, Success, Failure, Loop, SUB, and TAUT. Two rewriting systems (\(\mathcal{CS}\)) can then be defined based on these basic transformations: \(CS_0 = \{RED^+, RED^-, Success, Failure, Loop\}\), which induces the WFS [3], and \(CS_1 = CS_0 \cup \{SUB, TAUT, LC\}\), which induces WFS+ [6]. The normal form of a normal logic program \(P\) with respect to a rewriting system \(\mathcal{CS}\) is denoted by \(\mathit{norm}_\mathcal{CS}(P)\). Every rewriting system \(\mathcal{CS}\) induces a 3-valued semantics \(\texttt{SEM}_{\mathcal{CS}}\) defined as \(\texttt{SEM}_{\mathcal{CS}}(P):= \texttt{SEM}(\mathit{norm}_\mathcal{CS}(P))\). To simplify the presentation, we use the entailment relations induced by \(\texttt{SEM}_{\mathcal{CS}}(P)=\langle T,F\rangle\): \(P \models_{\texttt{SEM}_{\mathcal{CS}}^{T}} a\) if and only if \(a \in T\), and \(P \models_{\texttt{SEM}_{\mathcal{CS}}^{F}} a\) if and only if \(a \in F\). We write \(\models_{\texttt{SEM}_{\mathcal{CS}_0}}\) and \(\models_{\texttt{SEM}_{\mathcal{CS}_1}}\) for the confluent rewriting systems \(CS_0\) and \(CS_1\), respectively, and \(\models_{\texttt{SEM}_{\mathcal{CS}}}\) to indicate that any rewriting system can be used.
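As a small worked example (ours, assuming the standard readings of the RED- and Failure transformations from [6]), consider \(P=\{a \leftarrow \top;\ b \leftarrow not \ a;\ c \leftarrow b\}\). Since \(a \leftarrow \top \in P\), RED- deletes the rule \(b \leftarrow not \ a\); then \(b\) no longer occurs in any head, so Failure deletes \(c \leftarrow b\), giving \(\mathit{norm}_{\mathcal{CS}_0}(P)=\{a \leftarrow \top\}\). Hence \(\texttt{SEM}_{\mathcal{CS}_0}(P)=\langle \{a\}, \{b,c\}\rangle\), i.e., \(P \models_{\texttt{SEM}_{\mathcal{CS}_0}^{T}} a\), \(P \models_{\texttt{SEM}_{\mathcal{CS}_0}^{F}} b\), and \(P \models_{\texttt{SEM}_{\mathcal{CS}_0}^{F}} c\), matching the well-founded semantics of \(P\).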
Let us introduce a formal definition of semantic arguments.
Definition 1 (Semantic argument). Given a normal logic program \(P\) and \(S \subseteq P\). \(Arg_P = \langle S, \;g \rangle\) is a semantic argument under \(\texttt{SEM}_{\mathcal{CS}}\) w.r.t. \(P\), if the following conditions hold true:
\(S \models_{\texttt{SEM}_{\mathcal{CS}}^{T}} g\)
\(S\) is minimal w.r.t. the set inclusion satisfying 1.
We denote the set of all semantic arguments by \(\mathcal{A}rg^{+}\). Condition 1 states that the interpretation of the conclusion \(g\) is true w.r.t. \(T\) in \(\texttt{SEM}_{\mathcal{CS}}(S)\). Condition 2 in Definition 1 guarantees the minimality of the support.
Now, let us define NAF-arguments as follows:
Definition 2 (NAF-arguments).
Given a normal logic program \(P\) and \(S \subseteq P\). \(Arg_P = \langle S, \; not \;g \rangle\) is a NAF-argument under the \(\texttt{SEM}_{\mathcal{CS}}\) w.r.t. \(P\), if the following conditions hold true:
\(S \models_{\texttt{SEM}_{\mathcal{CS}}^{F}} g\),
\(S\) is minimal w.r.t. the set inclusion satisfying 1.
Condition 1 in Definition 2 requires the conclusion to be interpreted as false w.r.t. \(\models_{\texttt{SEM}_{\mathcal{CS}}^{F}}\); the set of all NAF-arguments is denoted by \(\mathcal{A}rg^{-}\). The addition of \(not\) in the conclusion of a NAF-argument stresses that such an atom is interpreted as false by \(\texttt{SEM}_{\mathcal{CS}}\).
Example 1.
Let us consider a program \(P_3\) for building semantic and NAF-arguments considering \(\mathcal{CS}_0\) and \(\mathcal{CS}_1\).
We build semantic and NAF-arguments as follows: 1) get related clauses of atoms (\(S_i\)); 2) for every related clause compute \(\texttt{SEM}_{\mathcal{CS}_{0}}(S_i)\) and \(\texttt{SEM}_{\mathcal{CS}_{1}}(S_i)\); 3) the support (every \(S_i\)) is joined to the conclusion. Then, the following sets of arguments are built considering \(\mathcal{CS}_0\) and \(\mathcal{CS}_1\):
Case \(\mathcal{CS}_0\): \(\mathcal{A}rg_{P_{3}} = \{A_1^{+},A_2^{+},A_3^{+},A_1^{-},A_2^{-},A_3^{-},A_4^{-},A_6^{-}\}\).
Case \(\mathcal{CS}_1\): \(\mathcal{A}rg_{P_{3}} = \{A_1^{+},A_2^{+},A_3^{+},A_5^{+},A_1^{-},A_2^{-},A_3^{-},A_4^{-},A_6^{-}\}\).
An effect of interpreting argument supports under \(\texttt{SEM}_{\mathcal{CS}}\) is that some atoms (or sets of them) are evaluated in opposition to other arguments (e.g., \(A_1^{+} = \langle S_2,a \rangle\) and \(A_1^{-} = \langle S_1,\; not \; a \rangle\) in Example 1), suggesting a semantic attack relationship.
Definition 3 (Semantic attack). Let \(A = \langle S_A, a\rangle \in \mathcal{A}rg^{+}\), \(B = \langle S_B, not \; b \rangle \in \mathcal{A}rg^{-}\) be two arguments, where \(\texttt{SEM}_{\mathcal{CS}}(S_A) = \langle T_A, F_A \rangle\) and \(\texttt{SEM}_{\mathcal{CS}}(S_B) = \langle T_B, F_B \rangle\). We say that \(A\) attacks \(B\) if there is an atom \(x\) such that \(x \in T_A\) and \(x \in F_B\), denoted \(\texttt{attacks}(A, B)\).
Lemma 1. Semantic and NAF-arguments built from any normal logic program are always consistent.
Definition 4 (Semantic Argumentation Framework (SAF)). Let \(P\) be a normal program. Then, a semantic argumentation framework is the tuple \(SAF_{P}= \langle \mathcal{A}rg_P, \mathcal{A}tt\rangle\), where \(\mathcal{A}rg_P = \mathcal{A}rg^{+} \cup \mathcal{A}rg^{-}\) and \(\mathcal{A}tt\) is the set of semantic attacks among arguments in \(\mathcal{A}rg_P\).
We can straightforwardly extend the definitions of argumentation semantics in [7] as follows:
Definition 5. Let \(SAF_{P}= \langle \mathcal{A}rg_P, \mathcal{A}tt\rangle\) be a semantic argumentation framework. An admissible set of arguments \(S \subseteq \mathcal{A}rg_P\) is:
stable if and only if \(S\) attacks each argument which does not belong to \(S\).
preferred if and only if \(S\) is a maximal (w.r.t. inclusion) admissible set of \(SAF_{P}\).
complete if and only if each argument, which is acceptable with respect to \(S\), belongs to \(S\).
grounded if and only if \(S\) is the minimal (w.r.t. inclusion) complete extension of \(SAF_{P}\).
Example 2.
Let us consider \(P_5= \{a \leftarrow not \ b; \ b \leftarrow not \ a; \ c \leftarrow not \ c, not \ a; \ d \leftarrow not \ d, not \ b; \}\). \(\texttt{SEM}_{\mathcal{CS}}\) will remove rules involving atoms \(c\) and \(d\). Then, applying Definition 4, we have the framework: \(SAF_{P_5} =\) \(\langle \{A_{6}^{-} = \langle \{a \leftarrow not \ b\}, \; not \ a \rangle\),\(A_{6}^{+} = \langle \{b \leftarrow not \ a\}, \; b \rangle\), \(A_{5}^{-} = \langle \{b \leftarrow not \ a\}, \; not \ b \rangle\), \(A_{5}^{+} = \langle \{a \leftarrow not \ b\}, \; a \rangle \}, \ \texttt{attacks}(A_{5}^{+},A_{6}^{+}),\) \(\texttt{attacks}(A_{5}^{+},A_{6}^{-}),\) \(\texttt{attacks}(A_{6}^{+},A_{5}^{+}),\) \(\texttt{attacks}(A_{6}^{+},A_{5}^{-})\rangle\). When we apply Definition 5 to \(SAF_{P_5}\) we obtained the following extensions:
Stable = preferred: \(\{ \{A_{5}^{+},A_{5}^{-}\}, \ \{A_{6}^{+},A_{6}^{-}\}\}\)
Complete: \(\{ \{A_{5}^{+},A_{5}^{-}\}, \ \{A_{6}^{+},A_{6}^{-}\}, \ \{\}\}\)
Grounded: \(\{ \}\)
The main contributions are: 1) Semantic Argumentation Frameworks (SAF) can be used for justifying conclusions interpreted as true as well as conclusions interpreted as false. 2) SAF is based on families of confluent rewriting systems. 3) SAF satisfies all the well-known argumentation postulates [1, 4]. Future work will involve the exploration of our framework under other Confluent Logic Programming Systems, the satisfaction of other argumentation principles, and the investigation of commonalities between ABA and semantic argumentation.
We implemented this procedure; the sources are open and can be found at https://github.com/esteban-g/semantic_argumentation
It has been shown that the grounded extension can be characterized in terms of complete extensions.
Mass spectrometry is a powerful technique to determine the chemical composition of a substance. However, the mass spectrum of a substance does not reveal its exact molecular structure, but merely the possible ratios of elements in the compound and its fragments. To identify a sample, researchers may use commercial databases (for common compounds) or software tools that can discover molecular structures from the partial information available. The latter leads to a combinatorial search problem that is a natural fit for answer set programming (ASP). Molecules can be modeled as undirected graphs, representing the different elements and atomic bonds as node and edge labels, respectively. ASP is well-suited to encode chemical domain knowledge (e.g., the possible number of bonds for carbon) and extra information about the sample (e.g., that it has an OH group), so that each answer set encodes a valid molecular graph.
Unfortunately, this does not work: a direct ASP encoding yields exponentially many answer sets for each molecular graph due to the large number of symmetries (automorphisms) in such graphs. For example, C6H12O admits 211 distinct molecule structures but leads to 111,870 answer sets. Removing redundant solutions and limiting the search to unique representations are common techniques used in the ASP community where they have motivated research on symmetry breaking. Related approaches work by adding additional rules to ASP [1, 4, 9], by rewriting the ground program before solving [5, 3, 2], or by introducing dedicated solvers [7]. However, our experiments with some of these approaches still produced 10-10,000 times more answer sets than molecules even in simple cases.
In this extended abstract, we summarize our new approach that prevents symmetries in graph representations already during grounding [8]. This technique forms the basis of our ASP-driven prototype implementation for enumerating molecular structures using partial chemical information. Our ASP source code, evaluation helpers, and data sets are available online at https://github.com/knowsys/eval-2024-asp-molecules. The sources of our prototype application are at https://gitlab.com/nkuechen/genmol/.
Many mass spectrometers break up samples into smaller fragments and measure their relative abundance. The resulting mass spectrum forms a characteristic pattern, enabling inferences about the underlying sample. High-resolution spectra may contain information such as "the molecule has six carbon atoms" or "there is an OH group", but cannot reveal the sample's full molecular structure. In chemical analysis, we are looking for molecular structures that are consistent with the measured mass spectrum.
To address this task, we have developed Genmol, a prototype application for enumerating molecular structures for a given composition of fragments. It is available as a command-line tool and as a progressive web application (PWA), shown in Fig. 1. Genmol is implemented in Rust, with the web front-end using the Yew framework on top of a JSON-API, whereas the search for molecular structures is implemented in Answer Set Programming (ASP) and solved using clingo [6]. An online demo of Genmol is available for review at https://tools.iccl.inf.tu-dresden.de/genmol/.
The screenshot shows the use of Genmol with a sum formula C6H5ON and two fragments as input. Specifying detected fragments and restricting bond types helps to reduce the search space. Alternatively, users can provide a molecular mass or a complete mass spectrum, which will then be associated with possible chemical formulas using, e.g., information about the abundance of isotopes. The core task of Genmol then is to find molecules that match the given input constraints. Molecules in this context are viewed as undirected graphs of atoms, linked by covalent bonds that result from sharing electrons. (This graph does not always determine the spatial configuration of molecules, which cannot be determined by mass spectrometry alone, yet it suffices for many applications.) Many chemical elements admit a fixed number of bonds, the so-called valence, according to the number of electrons available for binding (e.g., carbon has a valence of 4). Bonds may involve several electrons, leading to single, double, triple bonds, etc. The graph structure of molecules, the assignment of elements, and the possible types of bonds can lead to a large number of possible molecules for a single chemical formula, and this combinatorial search task is a natural match for ASP.
Our experiments show (https://github.com/knowsys/eval-2024-asp-molecules/tree/main/results) that the existing approaches to symmetry breaking outlined in the introduction do not achieve adequate performance for real-world use cases. Our dedicated encoding therefore relies on a canonical representation of molecules, which restricts the overall search space. It is based on SMILES, a widely used serialization format for molecular graphs. A SMILES string starts from an (arbitrary) spanning tree of the molecular graph, serialized in a depth-first order, with subtrees enclosed in parentheses. Edges not covered by the spanning tree are indicated by pairs of numeric cycle markers; for example, cyclohexane can be written as C1CCCCC1, where the two occurrences of the marker 1 denote the ring-closing bond.
Our ASP implementation can then be summarized as follows: (1) guess the spanning tree, defined by the element of each node, the parent-relation between nodes, and the bond type of the parent edge of each node, (2) guess the cycle markers to complete the molecule, (3) eliminate non-canonical solutions.
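A minimal clingo-style sketch of step (1), with hypothetical predicate names and only an illustrative valence check (this is not the authors' encoding, which additionally handles cycle markers and canonicity):

% guess a rooted spanning tree over n atoms; node 1 is the root
#const n = 4.
node(1..n).
element(c;o;h).
valence(c,4). valence(o,2). valence(h,1).
bondtype(1..3).                % single, double, triple bonds
1 { parent(N,P) : node(P), P < N } 1 :- node(N), N > 1.
% label every node with an element and every parent edge with a bond type
1 { elem(N,E) : element(E) } 1 :- node(N).
1 { pbond(N,B) : bondtype(B) } 1 :- node(N), N > 1.
% illustrative valence check: the bond orders incident to a node must not
% exceed the valence of its element
:- node(N), elem(N,E), valence(E,V),
   #sum { B,down,C : parent(C,N), pbond(C,B); B,up : pbond(N,B) } > V.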
Canonical Molecular Trees. We first consider the acyclic case. We compare tree nodes by considering (1) the maximum depth and number of vertices of their subtrees, (2) their number of children, and (3) the bond type of their parent edges. This ordering extends to trees by comparing their roots and recursively descending on ties. The minimal tree according to this ordering yields a unique representation.
Canonical Molecular Graphs. We reduce the general case to the acyclic case by introducing edges to fresh nodes for each cycle marker. Since we cannot compare every such representation in practice, we implement a heuristic that disallows shortening cycle edges. Intuitively, such cycles occur if another assignment of cycle markers would lead to a deeper tree. The ASP implementation detects shortening cycle edges by a pattern-matching approach.
Identifying and locating non-conformities, such as physical failures causing electrical malfunctioning of a device, in modern semiconductor devices is a challenging task. Typically, highly qualified employees in a failure analysis (FA) lab use sophisticated and expensive tools like scanning electron microscopes to identify and locate such non-conformities. Given the increasing complexity of investigated devices and very limited resources, labs may struggle to deliver analysis results in time. This paper suggests an approach to optimize the usage of FA lab resources by adaptive scheduling and monitoring. In particular, we combine constraint programming for the computation of a schedule with stream reasoning to monitor the lab's conditions and maintain the schedule depending on the situation. Evaluation results indicate that our system can significantly reduce the tardiness of a real-world FA lab, and all its computational tasks can be finished in an average time of 3.6 seconds, with a maximum of 15.2 seconds, which is acceptable for the lab's workflows.
Failure Analysis (FA) of semiconductor devices is an important activity aiming at the identification of non-conformities during various stages of the device's lifecycle. For instance, an FA lab might provide qualifications for new designs, execute tests for quality management tasks, or identify failures in malfunctioning devices returned by customers. Daily, FA labs process a large number of jobs and, given their complexity, often face issues with delivering all analysis results on time. To minimize the likelihood of violating deadlines, a lab must optimally utilize all available resources, such as equipment and personnel. At the moment, First-In-First-Out (FIFO) in combination with job prioritization is the most widely used strategy for assigning FA resources. Although FIFO can easily be implemented in a lab, its long-term performance is unsatisfactory, since it poorly handles situations in which jobs have different priorities or some resources have limited availability. To solve this issue, researchers suggested various methods for modeling and solving the scheduling problem for different types of labs, such as medical or industrial ones. In the literature, researchers addressed different aspects of laboratory scheduling reflecting the complex and dynamic nature of the environment. The presented scheduling models are usually derived from two classic problems: job-shop scheduling (JSS), e.g., [7, 5], or resource-constrained project scheduling (RCPS), e.g., [4, 3]. In addition to modeling aspects, designers of these approaches investigated efficient solving methods able to compute high-quality schedules within a given time budget, thus avoiding delays while interacting with lab employees over graphical interfaces. For instance, the authors of [4] suggest that engineers precisely plan their daily tasks and update the schedule in case of unforeseen events. High-performance scheduling algorithms ensure the responsiveness of the scheduling system during this interaction.
The main drawback of existing solutions is that they focus solely on the scheduling aspect of the problem and require users to monitor the lab's dynamic environment, which is characterized by a large number of stochastic events, such as equipment breakdowns, personnel emergencies, or unexpected task delays. All these events must be communicated to the scheduling system in a timely and correct manner to ensure its correct operation. However, manual maintenance of the work schedule by lab engineers is time-consuming and might be a source of erroneous data causing incorrect scheduling results. We therefore propose a novel approach that avoids these issues through automatic monitoring, repair, and (re)scheduling of lab tasks. Our approach relies on two components: a stream reasoner based on DP-sr [2] and a scheduler using the constraint programming approach of [5].
DP-sr supports an input language that properly extends Answer Set Programming (ASP) with ad-hoc means for dealing with Stream Reasoning (SR) scenarios, while maintaining the highly declarative nature and ease of use that are signature features of ASP. Along with typical ASP features (e.g., aggregates, disjunction, strong/weak constraints, unstratified negation), DP-sr programs can contain specific constructs that are evaluated over a stream of events. The scheduler is implemented using IBM ILOG CPLEX Optimization Studio. This constraint programming framework is widely used for scheduling production processes in the semiconductor industry. The framework provides a versatile knowledge representation language with specific constraints designed for efficient encoding of scheduling problems and a powerful solver able to efficiently find optimal solutions for a given instance.
Intuitively, in our solution the stream reasoner monitors the lab's events to ensure that the current situation corresponds to the existing schedule. Whenever violations are detected, the reasoner tries to automatically repair the schedule or, otherwise, notifies the scheduler about the violation. Given the current situation in a lab, the scheduler computes an assignment of tasks to engineers and machines optimizing the selected measure, such as tardiness or makespan. The former is often preferred since it corresponds to a popular key performance indicator of an FA lab. We evaluated the system on simulated re-scheduling scenarios derived from logs of real-world FA labs. The obtained results show that our method averages 6.8 seconds per reasoning cycle, ranging from 1 to 15 seconds, including the monitoring (0.8-3.3 seconds) and scheduling stages (2.1-13.4 seconds). This is significantly faster than the standard tasks in an FA lab, which usually take over 20 minutes on average [8].
Scheduling refers to the process of assigning and managing the execution of tasks, given a set of resources. Following [5], we model the problem as a variant of the JSS problem with the following extensions: multi-resource, i.e., a task may require more than one resource type, e.g., engineers and machines [6], and flexible, i.e., there are multiple instances of every resource [1]. The resulting JSS variant is an NP-hard optimization problem, where the aim is to find an optimal allocation of resources to tasks and time points, such that: (i) each resource instance is assigned to only one task at a time; (ii) every task gets all required resources; (iii) execution of tasks is non-preemptive; (iv) no tasks of a job are executed in parallel; and (v) tasks must be scheduled w.r.t. their order. Two possible objective functions for evaluating the effectiveness of a scheduling system are tardiness, which sums up all job delays relative to their deadlines, and makespan, which measures the total time required to complete the jobs. In this work, we encode the scheduling problem using constraint programming and employ tardiness as the optimization criterion, since FA labs often use the fulfillment of deadlines as a key performance indicator; nevertheless, we also report the makespan in the evaluation results.
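To make the model concrete, the following is a minimal sketch of a tardiness-minimizing job-shop model. It is an illustration only: the paper's scheduler is built with IBM ILOG CPLEX constraint programming, whereas this sketch uses Google OR-Tools CP-SAT, the jobs, durations, machines, engineers, and deadlines are hypothetical, and flexibility (multiple instances per resource) is omitted for brevity.

# Illustrative sketch only (not the paper's CPLEX encoding): a simplified
# multi-resource job-shop model minimizing total tardiness, using OR-Tools CP-SAT.
# Jobs, resources, durations, and deadlines below are hypothetical.
from ortools.sat.python import cp_model

jobs = {  # job -> ordered tasks: (duration, machine, engineer)
    "J1": [(4, "SEM-1", "eng-A"), (2, "probe-1", "eng-B")],
    "J2": [(3, "SEM-1", "eng-B"), (5, "probe-1", "eng-A")],
}
deadlines = {"J1": 8, "J2": 9}
horizon = sum(d for tasks in jobs.values() for d, _, _ in tasks)

model = cp_model.CpModel()
per_resource = {}        # resource -> intervals that must not overlap, see (i)
tardiness_terms = []

for job, tasks in jobs.items():
    prev_end = None
    for i, (dur, machine, engineer) in enumerate(tasks):
        start = model.NewIntVar(0, horizon, f"{job}_t{i}_start")
        end = model.NewIntVar(0, horizon, f"{job}_t{i}_end")
        itv = model.NewIntervalVar(start, dur, end, f"{job}_t{i}")  # non-preemptive (iii)
        per_resource.setdefault(machine, []).append(itv)    # task gets its machine (ii)
        per_resource.setdefault(engineer, []).append(itv)   # ... and its engineer (ii)
        if prev_end is not None:
            model.Add(start >= prev_end)   # tasks of a job are sequential and ordered (iv), (v)
        prev_end = end
    tard = model.NewIntVar(0, horizon, f"{job}_tardiness")
    model.AddMaxEquality(tard, [prev_end - deadlines[job], 0])  # max(0, completion - deadline)
    tardiness_terms.append(tard)

for intervals in per_resource.values():
    model.AddNoOverlap(intervals)          # one task at a time per resource instance (i)

model.Minimize(sum(tardiness_terms))       # tardiness as the optimization criterion
solver = cp_model.CpSolver()
solver.Solve(model)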
Our main data source is an anonymized slice of an FA lab database comprising 1646 jobs over 3 years, from January 2020 to December 2022. The dataset provides all the information needed by the scheduler to compute the optimal task assignments and make recommendations in a simulated environment.
In the following, we provide an overview of the system; the general architecture is depicted in Figure 1. At first, the initial dataset is pre-processed and fed to the scheduler for the computation of a new schedule. Along with the schedule, the system produces plots of the scheduled tasks and information about the overall tardiness. The monitoring component is set up as soon as the schedule execution starts. In more detail, the stream reasoner DP-sr is fed with the DP-sr program corresponding to the monitoring task to be performed over the FA events, along with the background knowledge representing the available resources, task-resource compatibilities, and job deadlines. From then on, DP-sr waits for schedule information and/or timestamped events coming from the database, and an interaction loop between the scheduler and the monitor starts, managed by a Python application. At the first iteration, DP-sr is provided with the previously computed schedule along with possible events occurring at the beginning; subsequent iterations are executed at a predefined frequency. At each iteration, all events that occurred after the previous iteration are fed to DP-sr, which may also be fed a new schedule if rescheduling was needed. Each time DP-sr receives the data, it performs its computation across all timestamps featured by the events of the current iteration, providing an answer for all included timestamps. The DP-sr answer is analyzed and classified into one of the following categories. Proposal of new resource(s) in case of conflicts: when a full reschedule is not needed, DP-sr suggests a resource replacement based on the conflicts that could occur if one or more tasks are delayed; the proposal of a new machine and/or a new worker is considered before changing the whole schedule. The new resources are inserted into the current schedule and an acknowledgment is sent back to DP-sr, so that the scheduled events are updated; plots highlighting the changes in the schedule are generated as well. Irreparable conflicts: when conflicts are not solvable, DP-sr detects this and signals that a complete reschedule is needed. A reschedule then takes place and a new schedule is computed and sent back to DP-sr, so as to update its knowledge base; the system evaluates the new schedule, defining a new tardiness, and new plots are generated. No conflicts found: in this case, there are no errors or conflicts and the system keeps running with no exceptions.
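The interaction loop just described can be summarized by the following schematic sketch. It is only a sketch under assumed interfaces: all class, method, and field names (scheduler, dp_sr, event_db, answer.kind, and so on) are hypothetical placeholders, not the actual DP-sr or CPLEX APIs.

# Schematic sketch of the scheduler/monitor interaction loop described above.
# All names are hypothetical placeholders, not the actual DP-sr or CPLEX APIs.
import time

def run_monitoring_loop(scheduler, dp_sr, event_db, period_s=60):
    schedule = scheduler.compute_schedule()        # initial schedule from the pre-processed data
    dp_sr.load_background_knowledge()              # resources, task-resource compatibility, deadlines
    dp_sr.push_schedule(schedule)                  # first iteration: previously computed schedule
    last_ts = event_db.now()
    while True:
        time.sleep(period_s)                       # iterations run at a predefined frequency
        events = event_db.events_since(last_ts)    # timestamped lab events since the last iteration
        last_ts = event_db.now()
        answer = dp_sr.reason(events)              # one answer covering all included timestamps
        if answer.kind == "no_conflict":
            continue                               # keep running with no exceptions
        elif answer.kind == "resource_proposal":   # repairable conflict: replace machine/worker
            schedule.apply_replacements(answer.proposed_resources)
            dp_sr.acknowledge(schedule)            # update the reasoner's scheduled events
        elif answer.kind == "irreparable_conflict":
            schedule = scheduler.compute_schedule()  # full reschedule
            dp_sr.push_schedule(schedule)          # refresh the reasoner's knowledge base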
tunable and configurable at will, and has non-negligible design-time costs. In order to overcome those limits, one can consider the introduction of declarative knowledge representation techniques, which offer better explainability potential and, especially, are elaboration tolerant in the sense of McCarthy [4]. However, known performance and integration shortcomings have so far limited the usage of these techniques in videogames. Among declarative approaches, Answer Set Programming (ASP) has been experimentally used in videogames to various extents, such as for declaratively generating level maps [6] and for defining artificial players [2] for the Angry Birds AI competition. ASP is no exception with respect to the shortcomings that restrict the utility of declarative approaches. Two obstacles are of concern: performance in real-time contexts and integration, i.e., ease of wiring with other standard parts of the game logic.
ThinkEngine is an asset for the popular Unity game engine that aims at narrowing the above gaps by enabling the introduction of ASP-based reasoning modules within Unity-made games. A key contribution of ThinkEngine is the integration of declarative reasoning within the game loop. It must be noted that the game loop, the typical execution paradigm of a videogame run-time, is an exemplary real-life instance of the sense-think-act cycle commonly deployed in agent and robotic systems [5]. ThinkEngine introduces a hybrid/deliberative scheme, transparently using auxiliary threads in order to offload time-consuming reasoning tasks.
Concerning run-time, ThinkEngine interacts with the game scene at hand according to the architecture shown in Figure 1. During the game, Unity iteratively updates the game scene in the canonical game loop. ThinkEngine makes use of some code parts working in the Unity main loop, but it also uses additional threads for offloading decision tasks. Intuitively, one or more programmable brains (i.e., modules embedding ASP logic programs) are in charge of decision-making tasks and run in separate ThinkEngine threads. Brains are connected to the game scene using sensors, actuators, and plans. Sensors allow read access to desired parts of the current game state, whereas actuators and plans allow changes to be made to the game scene itself.
Reactive brains operate immediately on actuators, while planning brains make it possible to declaratively program more complex, deliberative-like plans. A plan $P$ is a list of actions $[a_1, \dots, a_n]$ to be executed in the given sequence. One or more planning brains, each of which is capable of generating a plan, can be associated with a given GO (GameObject). Plans are associated with a priority value and, once generated, they are submitted for execution to a scheduler. A designer can decide how and when to abort a plan and how and when to trigger re-planning. Re-planning can produce new plans that replace older plan versions, so as to cope with game state changes.
Planning brains are grouped by GO. Each group has its own planning scheduler. A planning scheduler $PS$ has a current plan $R$ and manages a queue of available plans $Q = [ P_1, \dots, P_m ]$, where subscripts denote priority (lower subscript values denote higher priority). $PS$ periodically checks if $R$ has to be interrupted and, possibly, substituted with a different plan in $Q$.
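The behaviour of a planning scheduler can be illustrated with the following simplified sketch. It is a hypothetical rendering in Python (ThinkEngine itself is a Unity asset), assuming plan objects that expose abort() and execute_next_action(); it is not ThinkEngine's actual implementation.

# Sketch (hypothetical names) of a per-GameObject planning scheduler: it keeps
# the current plan R and a priority queue Q of candidate plans, periodically
# checking whether R should be interrupted and replaced.
import heapq, itertools

class PlanScheduler:
    def __init__(self):
        self.current = None              # (priority, plan) currently executing: R
        self.queue = []                  # heap of (priority, seq, plan): Q
        self._seq = itertools.count()    # tie-breaker so plans are never compared directly

    def submit(self, plan, priority):
        # Lower priority values denote higher priority, as in Q = [P_1, ..., P_m].
        heapq.heappush(self.queue, (priority, next(self._seq), plan))

    def tick(self):
        # Called periodically: decide whether R must be interrupted and substituted.
        if self.queue and (self.current is None or self.queue[0][0] < self.current[0]):
            if self.current is not None:
                self.current[1].abort()            # designer-defined abort/replan policy
            prio, _, plan = heapq.heappop(self.queue)
            self.current = (prio, plan)
        if self.current is not None:
            self.current[1].execute_next_action()  # run actions a_1, ..., a_n in order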
1 https://github.com/DeMaCS-UNICAL/ThinkEngine and https://github.com/DeMaCS-UNICAL/ThinkEngine-Showcase
Acknowledgment. This work was partially supported by the PNRR MUR project PE0000013-FAIR, Spoke 9 - Green-aware AI - WP9.1 and by the LAIA laboratory (part of the SILA laboratory network at University of Calabria).
miniKanren implements the essence of Prolog and allows easy access to and modification of the underlying resolution and unification. The resolution stream constructed by miniKanren is static. We realized that a dynamic resolution stream is needed to support non-monotonic reasoning, so we extend the traditional resolution and unification to achieve this goal. We present stableKanren, a miniKanren extension that supports normal logic programs under stable model semantics. stableKanren adds multiple innovative macros to compile a normal logic program into a program with its complement form, to obtain the domain of a variable under different contexts, and to generate new streams during resolution.
miniKanren is a family of languages specially designed for relational programming. The core miniKanren implementation is purely functional and is designed to be easily modified and extended [1]. Unlike Prolog, miniKanren uses a complete interleaving search strategy on a stream to simulate backtracking. Unlike Mercury [9], miniKanren uses full unification, which is required to implement goals that take only fresh logic variables as their arguments. Unlike Curry [6], miniKanren does not use residuation to suspend execution on non-ground terms. The core implementation is hosted on a functional language, Scheme [2], and introduces only a few operators to users: == for unification, fresh for existential quantification, conde for disjunction, and a run interface.
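For readers unfamiliar with these operators, the following toy query illustrates them using the third-party Python package kanren, whose eq, var, conde, and run roughly correspond to ==, fresh, conde, and run in the Scheme implementation; the relation and facts are made up for illustration and have nothing to do with stableKanren itself.

# Toy illustration using the Python "kanren" package (an approximation of the
# Scheme operators listed above): eq ~ ==, var() ~ fresh, conde, and run.
from kanren import run, var, eq, conde

def parent(p, c):
    # Two alternative "facts", expressed as a disjunction of unifications.
    return conde([eq(p, "ann"), eq(c, "bob")],
                 [eq(p, "bob"), eq(c, "cat")])

x = var()                              # a fresh logic variable
print(run(0, x, parent("ann", x)))     # -> ('bob',): who is a child of ann?
print(run(0, x, parent(x, "cat")))     # -> ('bob',): who is a parent of cat?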
To the best of our knowledge, only a few attempts have been made to add negation to miniKanren. The constructive negation used by Moiseenko [8] works for stratified programs only. Moreover, its semantics of negation is more like a filter: the solver executes the positive goal inside the negation, computes the differential set between the substitutions before and after the negated goal, and eventually subtracts this differential set from the original set. To date, none of the functional language-based implementations have supported negation in a non-stratified normal program. By integrating stable model semantics into miniKanren, we believe we have given it a more proper and widely accepted negation semantics.
Unlike a bottom-up solver, which grounds all variables and propagates constraints among values, a top-down solver, such as Prolog or miniKanren, uses resolution and unification to obtain a minimal model of the program. The advantage of top-down solving is that it produces answers without the blow-up issues caused by grounding. Gupta et al. introduced coinductive logic programming (co-LP) and co-SLD resolution to handle infinite terms in logic programs; co-SLD produces the greatest fixed point of a program [5]. Later, Min et al. evolved co-SLD into co-SLDNF to achieve normal program solving [7]. However, we believe that sticking with the least fixed point of SLD resolution is closer to stable model semantics, and the transition is simpler. Therefore, unlike co-SLD and co-SLDNF, our algorithms still produce the least fixed point, as in the original SLD resolution, and focus on adding stable model semantics under finite Herbrand models.
miniKanren takes advantage of Scheme's macros and continuations in its implementation: macros transform the input program into a continuation-based search stream, which works perfectly for monotonic reasoning. The static stream does not change once it is compiled (transformed). However, for non-monotonic reasoning, as in normal logic programs, information obtained at run-time can change the outcome of a previous search, such as reverting a result, adjusting a conflict, or generating different results. Therefore, a dynamic search stream is needed. Our extension, stableKanren [4], generates or modifies the stream (code) on the fly based on the new information received during resolution. Hence, we advance macros and continuations further into stable model solving: we use them not only to form the static search stream but also to modify streams dynamically, weaving run-time information into them.
From the stable model semantics definition, we have the following relationship between the stable models of $\Pi$ and the minimal models of the reduct $\Pi^M$.
Definition 1.1 (stable model property) Let $\Pi$ be a propositional program and $M$ be an interpretation. $M$ is a stable model of $\Pi$ if $M$ is a minimal model of the reduct $\Pi^M$.
Stable model semantics consider a minimal model of the corresponding reduct program to be one of the program’s stable models. We want our algorithms to simultaneously produce the minimal model and its corresponding reduct program. The underlying resolution process guarantees the production of a minimal model as long as the input program can be handled. The reduct created from the interpretation removes the recursive rules and the negation completely so that the program can be handled using traditional Prolog to produce the minimal model ([3], Remark 2). Then, the interpretation must be verified to show that it is the minimal model of the reduct program.
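As a small illustration of the check behind Definition 1.1 (not stableKanren's top-down, resolution-based procedure), the following propositional sketch builds the Gelfond-Lifschitz reduct $\Pi^M$ and compares M with the least model of the reduct; the rule representation and the example program are our own.

# Propositional illustration of Definition 1.1: build the reduct Pi^M and
# verify that M is its minimal (least) model. A rule is a triple
# (head, positive_body, negative_body) of an atom and two sets of atoms.
def reduct(rules, M):
    # Gelfond-Lifschitz reduct: drop rules containing "not a" with a in M,
    # then delete the remaining negative literals.
    return [(h, pos) for (h, pos, neg) in rules if not (neg & M)]

def least_model(definite_rules):
    # Standard bottom-up fixpoint for the negation-free reduct.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite_rules:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def is_stable(rules, M):
    return least_model(reduct(rules, M)) == M

# Example program:  p :- not q.   q :- not p.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(is_stable(rules, {"p"}))        # True: {p} is a stable model
print(is_stable(rules, {"p", "q"}))   # False: {p, q} is not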
Initially, resolution had only one role: selecting a goal. Unification has two roles: the first is to assign a truth value to a goal, and the second is to assign a value to a variable. A goal is proven to be true iff all of its variables are successfully unified, and a goal is grounded iff all of its variables are unified with a value. Both resolution and unification require changes to support the recursive rules and negation introduced by normal programs, so we grant more roles to resolution and unification. Resolution gets four more roles: distributing negation to the unification level, producing truth values for recursive rules, getting the domain of a variable, and continuing execution after obtaining a partial result. Unification has to produce the truth value for negated unification.
To prove a rule (statement) with head H to be true, the resolution tries to prove each sub-goal in the rule's body individually. To prove ¬H to be true, we know H must fail in one of the sub-goals of the rule's body. Therefore, we need to distribute negation to the unification level. For example, consider the logic statement:
$H(X, Y) \leftarrow \exists X, Y\, (B_1(X, Y) \land B_2(X) \land B_3(Y))$
We assume that under the negation ¬H(X,Y), the variables X and Y are safe, i.e., always bound to values x and y. For this reason, we can safely drop the ∃ quantifier and focus on the transformation of the rule's body with the assigned values. We obtain the propositional form of the rule as follows:
$\lnot H^{x, y} \stackrel{?}{\leftarrow} transform(B_1^{x, y} \land B_2^x \land B_3^y)$
Each sub-goal in a rule's body could fail, and when a sub-goal fails, the prior sub-goals must have succeeded. To capture this property, the transformation should be a disjunction in which the negation of each sub-goal is taken in conjunction with all sub-goals before the current one. Furthermore, we noticed that each sub-goal works as a checker, and each value checked independently by unification inside a sub-goal could fail. Therefore, the transformation is applied to each sub-goal and to each of its variables, distributing the negation down to the unification level.
$\lnot H^{x,y} \leftarrow \lnot B_1^x \lor (B_1^x \land \lnot B_1^y) \lor (B_1^x \land B_1^y \land \lnot B_2^x) \lor (B_1^x \land B_1^y \land B_2^x \land \lnot B_3^y)$
stableKanren supports normal logic programs and solves the n-queens problem faster than s(ASP) in a top-down fashion. We did performance testing on a 2012 MacBook Pro with a 2.6 GHz Intel quad-core i7, 8 GB of memory, and macOS 10.15.7. We set a one-hour timeout unless the number of queens is less than or equal to 8. Table 1 shows the testing results, with times in seconds. The prefix "1" means finding one answer, and "all" means finding all answers.
nqueens | #solutions | 1-s(ASP) | 1-stableKanren | all-s(ASP) | all-stableKanren |
1 | 1 | 0.15 | 0 | 0.17 | 0 |
2 | 0 | 1.65 | 0 | 1.66 | 0 |
3 | 0 | 9.38 | 0 | 9.57 | 0 |
4 | 2 | 24.20 | 0 | 58.81 | 0 |
5 | 10 | 23.69 | 0.01 | 397.31 | 0.01 |
6 | 4 | 421.88 | 0.03 | 2470.24 | 0.08 |
7 | 40 | 148.27 | 0.04 | 13852.54 | 0.43 |
8 | 92 | 4649.88 | 0.36 | 131429.13 | 4.08 |
Sets and rules have been used for easier programming since the late 1960s. While sets are central to database programming with SQL and are also supported as built-ins in high-level languages like Python, logic rules have been supported as libraries or in rule-based languages with limited extensions for other features. However, rules are central to deductive database and knowledge base programming, and better support is needed.
This system demonstration highlights the design of a powerful language, Alda [16, 14], that supports logic rules together with sets, functions, updates, and objects, all as seamlessly integrated built-ins, including concurrent and distributed processes. The key idea is to allow sets of rules to be defined in any scope, support predicates in rules as set-valued variables that can be used and updated directly, and support queries using rules as either explicit or implicit automatic calls to an inference function.
Alda has a formal semantics [15] and is implemented by building on an object-oriented language (DistAlgo [13, 3] extending Python [18]) and an efficient logic rule system (XSB [19, 20]). It has been used successfully on benchmarks and problems from a wide variety of application domains—including those in OpenRuleBench [6], role-based access control (RBAC) [1, 4], and program analysis—with generally good performance [17]. Our implementation and benchmarks are publicly available [23].
This system demonstration shows how Alda is used for OpenRuleBench benchmarks, the ANSI standard for role-based access control, and program analysis for large Python programs, including with persistence support for large datasets, all programmed seamlessly without boilerplate code. For comparisons with related work on rule languages and benchmarking, see [16, 14, 17].
Figure 1 shows an example program in Alda. It is for a small portion of the ANSI standard for role-based access control (RBAC) [1, 4]. It shows the uses (with line numbers in parentheses) of
classes (1-8, 9-21) with inheritance (9, 11), and object creation (22) with setup (2-3, 10-12);
sets, including relations (3, 12);
methods, including procedures (5-6, 13-14) and functions (7-8, 18-19, 20-21), and calls (23, 24);
updates, including initialization (3, 12) and membership changes (6, 14); and
queries, including set queries (8, 19 after union "+", 21) and queries using rules (19 before "+");
where the rules are defined in a rule set (15-17), explained in the next part.
Note that queries using set comprehensions (e.g., on lines 8, 19, 21) can also be expressed by using rules and inference, even though comprehensions are more widely used. However, only some queries using rules and inference can be expressed by using comprehensions; queries using recursive rules (e.g., on lines 16-17) cannot be expressed using comprehensions.
1 class CoreRBAC: # class for Core RBAC component/object
2 def setup(): # method to set up the object, with no arguments
3 self.USERS, self.ROLES, self.UR := {},{},{}
4 # set users, roles, user-role pairs to empty sets
5 def AddRole(role): # method to add a role
6 ROLES.add(role) # add the role to ROLES
7 def AssignedUsers(role): # method to return assigned users of a role
8 return {u: u in USERS | (u,role) in UR} # return set of users having the role
...
9 class HierRBAC extends CoreRBAC: # Hierarchical RBAC extending Core RBAC
10 def setup():
11 super().setup() # call setup of CoreRBAC, to set sets as in there
12 self.RH := {} # set ascendant-descendant role pairs to empty set
13 def AddInheritance(a,d): # to add inherit. of an ascendant by a descendant
14 RH.add((a,d)) # add pair (a,d) to RH
15 rules trans_rs: # rule set defining transitive closure
16 path(x,y) if edge(x,y) # path holds for (x,y) if edge holds for (x,y)
17 path(x,y) if edge(x,z), path(z,y) # ... if edge holds for (x,z) and for (z,y)
18 def transRH(): # to return transitive RH and reflexive role pairs
19 return infer(path, edge=RH, rules=trans_rs) + {(r,r): r in ROLES}
20 def AuthorizedUsers(role): # to return users having a role transitively
21 return {u: u in USERS, r in ROLES | (u,r) in UR and (r,role) in transRH()}
...
22 h = new(HierRBAC, []) # create HierRBAC object h, with no args to setup
23 h.AddRole('chair') # call AddRole of h with role 'chair'
...
...
In rule set trans_rs, the first rule says there is a path from x to y if there is an edge from x to y, and the second rule says there is a path from x to y if there is an edge from x to z and a path from z to y. The call to infer queries and returns the set of pairs for which path holds, given that edge holds for exactly the pairs in set RH, by doing inference using the rules in trans_rs.
In Alda, rules are defined in rule sets, each with a name and optional declarations for the predicates in the rules.
ruleset ::= rules name (declarations): rule+
rule ::= p(arg1, ..., arga) if p1(arg11, ..., arg1a1), ..., pk(argk1, ..., argkak)
In the rule form, p, p1, ..., pk denote predicates, p(arg1, ..., arga) denotes that p holds for its tuple of arguments, and if denotes that its left-side conclusion holds if its right-side conditions all hold. In a rule set, predicates not in any conclusion are called base predicates; the other predicates are called derived predicates.
The key ideas of seamless integration of rules with sets, functions, updates, and objects are:
a predicate is a set-valued variable that holds the set of tuples for which the predicate is true;
queries using rules are calls to an inference function, infer, that computes desired values of derived predicates using given values of base predicates (a rough operational sketch is given after this list);
values of base predicates can be updated directly as for other variables, whereas values of derived predicates can only be updated by infer; and
predicates and rule sets can be object attributes as well as global and local names, just as variables and functions can.
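The following is a rough operational reading in Python of what the call infer(path, edge=RH, rules=trans_rs) from Figure 1 computes: the value of the derived predicate path as a least fixpoint over the given value of the base predicate edge. It is an illustration only, not Alda's actual implementation, which compiles rules to XSB; the example role pairs are hypothetical.

# Rough operational sketch of infer(path, edge=RH, rules=trans_rs): compute the
# derived predicate "path" as a least fixpoint from the value of the base
# predicate "edge". Not Alda's actual XSB-based implementation.
def infer_path(edge):
    path = set(edge)                       # path(x,y) if edge(x,y)
    changed = True
    while changed:
        changed = False
        for (x, z) in edge:                # path(x,y) if edge(x,z), path(z,y)
            for (z2, y) in list(path):
                if z2 == z and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return path

RH = {("faculty", "chair"), ("staff", "faculty")}   # hypothetical ascendant-descendant pairs
print(infer_path(RH))
# {('faculty', 'chair'), ('staff', 'faculty'), ('staff', 'chair')}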
Declarative semantics of rules are ensured by automatically maintaining values of derived predicates when values of base predicates are updated, by appropriate implicit calls to infer.
For example, in Figure 1, one could use an object field transRH in place of calls to transRH() in AuthorizedUsers(role), use the following rule set instead of trans_rs, and remove transRH().
rules transRH_rs: # no need to call infer explicitly
transRH(x,y) if RH(x,y)
transRH(x,y) if RH(x,z), transRH(z,y)
transRH(x,x) if ROLES(x)
Field transRH is automatically maintained at updates to RH and ROLES by implicit calls to infer.
Note that having predicates in rules as set-valued variables, e.g., edge, and calling infer to take or return values of set variables, e.g., RH in edge=RH, avoids the need for higher-order predicates or other sophisticated features, e.g., [2], to reuse rules for different predicates in logic languages.
Alda also supports tuple patterns for set elements in set queries (as in DistAlgo [13]) and in queries using rules, e.g., (1,=x,y) in p matches any triple in set p whose first element is 1 and whose second element equals the value of x, and binds y to the third element if such a triple exists.
Of course, by building on DistAlgo, Alda also supports distributed programming with distributed processes, message passing, and high-level queries of message histories, e.g., for distributed RBAC [7,9], also called trust management [5], in decentralized systems.
Declarations in rules could specify predicate types and scopes, but are designed more importantly for specifying assumptions about predicates being certain, complete, closed, or not [10, 11, 12]. This is to give respective desired semantics for rules with unrestricted negation, quantification, and aggregation.
Note that the examples discussed use an ideal syntax, while the Alda implementation supports the Python syntax. For example, x := {} is written as x = set() in Python syntax.
The Alda implementation compiles rule sets in rules and queries using infer to XSB rules and queries, and compiles the rest to Python, which calls XSB to do the inference. The current implementation supports primarily Datalog rules, but also handles unrestricted negation by using XSB's computation of the well-founded semantics [24]. More general forms of rules and queries can be compiled to rules and queries in XSB or other rule systems using the same approach. In general, any efficient inference algorithm and implementation method can be used to compute the semantics of rules and infer.
Future work includes (1) support for easy use of different desired semantics, especially with modular use of rules, similar to knowledge units in DA-logic [11]; and (2) efficient implementation with complexity guarantees [8, 21, 22] for computing different desired semantics.