Published: 12th September 2023. DOI: 10.4204/EPTCS.385. ISSN: 2075-2180.
The International Conference on Logic Programming (ICLP) is the premier international scientific venue for presenting, discussing, and disseminating fundamental and applied research in the field of logic programming and in related interdisciplinary areas. The conference was launched in 1982, with its first edition in Marseille, France, and continues to serve the broad international logic programming community.
ICLP 2023 was a special edition of the conference, as it represented the closing event of the "Year of Prolog", a year-long celebration of the 50th anniversary of the introduction of the Prolog programming language. The conference included a dedicated session to discuss the achievements accomplished throughout the year and the awarding of the 2023 Alan Colmerauer Prize, as well as an event on "Teaching Prolog: the Next 50 Years."
ICLP 2023 also included an Industry Event, with the participation of representatives of several for-profit organizations engaged in the use of logic programming technologies. The event consisted of a panel discussion on the role of symbolic AI in today's industrial AI landscape. The conference featured the following invited speakers: Marco Maratea (University of Calabria, Italy), presenting on applications of answer set programming; Jan Wielemaker (Centrum Wiskunde & Informatica, The Netherlands), presenting on the rise, fall, and future of Prolog; Keith Stenning (The University of Edinburgh, Scotland) for the special track on Logic Programming and Machine Learning; Gopal Gupta (University of Texas at Dallas, USA) for the special track on Logic Programming and Explainability, Ethics and Trustworthiness; Mehul Bhatt (Örebro University, Sweden), giving a tutorial on Cognitive Vision; and Joseph Townsend (Fujitsu Research of Europe, UK), giving a tutorial on medical applications of neuro-symbolic AI.
The Technical Communications (TCs) also include accepted short papers, summaries of system presentations, and accepted extended abstracts from the Recently Published Research track. In particular: 26 papers were accepted for regular TC publication; 5 papers were accepted as short papers in the TC category; and 8 papers were accepted as system demonstrations. In addition, we received and accepted 11 abstracts for the Recently Published Research track. All of them were allocated presentation time in the conference program. To foster the engagement of students and their valuable contribution to the research community, ICLP also hosted a doctoral consortium with 9 accepted submissions, two of which were authored by women. A mentoring lunch with a dedicated panel session was also held to facilitate the exchange of ideas and networking.
ICLP 2023 gratefully received generous funding from the following sponsors.
In this extended abstract we report new results on reasoning with weighted knowledge bases for Description Logics (DLs) with typicality under a ``concept-wise'' multi-preferential semantics, which has been shown to provide a logical interpretation of MultiLayer Perceptrons [18]. In particular, we provide a completeness result for the complexity of the problem, and new Answer Set Programming encodings in the same complexity class, which are shown to be able to deal with weighted knowledge bases with large search spaces. A detailed description is available in the full paper [1].
Description logics are widely used for knowledge representation, often to verify and discover properties of individuals in a concept by means of DL inference services [2, 22]. Many properties of real-world concepts, however, are defeasible, that is, they are not universally true, but have exceptions, and actually hold only for some prototypical individuals in the concept. This led to research on defeasible DLs [8, 15, 10, 16]. Specifically, to represent the defeasible properties of a concept, DLs have been extended with a typicality operator ${\bf{T}}$ that is applied to concepts to obtain typicality inclusions of the form ${\bf{T}}(C) \sqsubseteq D$ [15, 16]. Intuitively, ${\bf{T}}(C) \sqsubseteq D$ means that the typical individuals in the concept $C$ also belong to concept $D$ (that is, normally, $C$'s are $D$'s), and corresponds to a conditional implication $C {\mathrel{{\scriptstyle\mid\!\sim}}} D$ in KLM preferential logics [24, 25]. From the semantic point of view, a preference relation is considered, as in KLM preferential semantics, to select the most typical individuals in a class (concept). A (conditional) knowledge base (KB) comprising typicality inclusions enables defeasible reasoning, since properties holding for typical individuals in $C$ are not necessarily enforced on all individuals in $C$.
Based on the idea that each concept $C$ can be associated with a set of prototypical properties, ranked DL knowledge bases [17] --- reminiscent of the ranked KBs by Brewka [6] --- and weighted DL KBs with typicality [18] have been proposed. In particular, in weighted knowledge bases, a set ${\cal T}_C$ of weighted typicality inclusions $({\bf{T}}(C) \sqsubseteq D_1, \ w_1),$ $\ldots,$ $({\bf{T}}(C) \sqsubseteq D_k, \ w_k)$ is associated with a concept $C$ to describe the prototypical properties of $C$-elements. For each property, the associated weight $w_i$ is a real number representing the plausibility/implausibility of the property $D_i$ for $C$-elements.
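As an illustration of the weighted-inclusion mechanism, the following Python sketch scores individuals against a concept's weighted typicality inclusions in the two-valued case. The concept, property names, and weights are invented for illustration and are not taken from the paper.

```python
# Hypothetical weighted typicality inclusions (T(C) ⊑ D_i, w_i) for a
# concept C, here SportLover; positive weights mark plausible properties,
# negative weights implausible ones.
weighted_inclusions = {
    "plays_sport": 40.0,
    "watches_sport": 10.0,
    "hates_exercise": -50.0,
}

def score(individual_props, inclusions):
    """Sum the weights of the defeasible properties the individual satisfies."""
    return sum(w for prop, w in inclusions.items() if prop in individual_props)

bob = {"plays_sport", "watches_sport"}
jim = {"watches_sport"}

# A higher score means "more typical" for the concept
assert score(bob, weighted_inclusions) > score(jim, weighted_inclusions)
```

Under this reading, an individual satisfying many high-weight properties of $C$ is ranked as more typical for $C$ than one satisfying few of them.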
Semantically, these formalisms have led to a concept-wise multi-preferential semantics, as the relative typicality of two domain individuals usually depends on the aspects considered for comparison [20, 17]: Bob may be more typical than Jim as a sport lover ($\mathit{bob <_{SportLover} jim}$), but Jim may be more typical than Bob as a swimmer ($\mathit{jim <_{Swimmer} bob}$). It has been proven that this semantics has interesting properties in both the two-valued and the fuzzy case (e.g., it is able to deal with specificity and irrelevance, and satisfies the KLM properties).
The concept-wise multi-preferential semantics has been used to provide a logical interpretation of some neural network models, such as Self-Organising Maps [23], which are psychologically and biologically plausible neural network models, and MultiLayer Perceptrons (MLPs) [21]. In both cases, a preferential interpretation can be constructed from a trained network (see [14] and [18]) and exploited for the verification of typicality properties of the network through model checking. For MLPs, a trained network can also be regarded as a weighted knowledge base, by encoding synaptic connections as weighted typicality inclusions under a fuzzy semantics [18, 13]. This concrete application and the widespread interest in neural networks strongly motivate the development of proof methods for reasoning with weighted KBs.
Entailment for fuzzy DLs is in general undecidable [11, 4], and this motivates the investigation of many-valued approximations of fuzzy multi-preferential entailment. In particular, the finitely many-valued case is widely studied [12, 3, 5], and has recently been considered in the context of weighted DL KBs [19] by means of the notions of coherent, faithful and $\varphi$-coherent models of such KBs, previously considered in the fuzzy case [18, 13]. A proof-of-concept implementation in Answer Set Programming (ASP) and asprin [7] has been provided for the $\cal{LC}$ fragment of $\cal{ALC}$, which is obtained by disabling roles and universal and existential restrictions. The approach adopts Gödel (alternatively, Łukasiewicz) connectives and addresses $\varphi$-coherent entailment, a form of defeasible reasoning based on canonical $\varphi$-coherent models. The $\varphi$-coherence condition makes weighted KBs correspond to MLPs with $\varphi$ as activation function; here a finitely many-valued approximation of such a function is used. As regards the complexity of the problem, a $\Pi^p_2$ upper bound was given [19], but the exact complexity was unknown.
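One reading of the $\varphi$-coherence condition, as described above, is that the membership degree of an element in $C$ must equal $\varphi$ applied to the weighted sum of its degrees in the $D_i$'s, mirroring an MLP neuron with activation $\varphi$. The sketch below illustrates this with a finitely many-valued approximation; the choice of $\varphi$ as the sigmoid function and all numbers are assumptions for illustration, not taken from the paper.

```python
import math

def phi(z):
    """Activation function (illustrative choice: sigmoid)."""
    return 1.0 / (1.0 + math.exp(-z))

def approx(v, n):
    """Finitely many-valued approximation: round v to the grid {0, 1/n, ..., 1}."""
    return round(v * n) / n

def coherent_degree(weights, degrees, n):
    """Degree an element must have in C, given its degrees in the D_i's,
    for the interpretation to be phi-coherent (under our reading)."""
    z = sum(w * d for w, d in zip(weights, degrees))
    return approx(phi(z), n)

# A "neuron" for concept C with two weighted prototypical properties
print(coherent_degree([2.0, -1.0], [0.8, 0.25], n=5))  # 0.8
```

The finitely many-valued approximation of $\varphi$ is what allows the condition to be checked over a finite truth space, e.g. by an ASP encoding.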
We briefly describe here two significant advancements with respect to the earlier work [19], one theoretical and one practical, for the $\varphi$-coherent entailment of typicality inclusions [1]. The complexity upper bound can be improved to $P^{NP[log]}$ by exhibiting an algorithm running in polynomial time and performing parallel queries to an NP oracle ($P^{||NP}$). Since $P^{||NP}$ is known to coincide with $P^{NP[log]}$ [9], while $\Pi^p_2 = P^{NP[log]}$ is unlikely to hold (unless the polynomial hierarchy collapses to $P^{NP[log]}$), there was room for improving the proof-of-concept implementation.
Different ASP encodings have been considered. For the enforcement of $\varphi$-coherence (where grounding explosion should be avoided), the best results are obtained using weight constraints. Weak constraints are used to encode the preferential semantics, relying on an order encoding to stay within the $P^{NP[log]}$ class. Further improvements at an asymptotic level are unlikely, as the problem is shown to be $P^{NP[log]}$-complete by giving a polynomial-time reduction from the MAX SAT ODD problem [26], which amounts to determining whether the maximum number of jointly satisfiable clauses among a given set is odd.
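For concreteness, the MAX SAT ODD property used in the hardness proof can be checked by brute force as follows. This sketch is for illustration only (it is exponential in the number of variables); clauses are lists of literals, with positive/negative integers for positive/negated variables.

```python
from itertools import product

def max_sat_odd(clauses, n_vars):
    """Return True iff the maximum number of jointly satisfiable
    clauses is odd (brute force over all assignments)."""
    best = 0
    for assignment in product([False, True], repeat=n_vars):
        sat = sum(
            any(assignment[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses
        )
        best = max(best, sat)
    return best % 2 == 1

# (x1 ∨ x2), (¬x1), (x1): at most 2 clauses hold simultaneously → even
print(max_sat_odd([[1, 2], [-1], [1]], n_vars=2))  # False
```

The reduction in the paper shows this odd/even question can be rephrased as a $\varphi$-coherent entailment query, establishing $P^{NP[log]}$-hardness.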
The scalability of the ASP encodings has been assessed empirically on defeasible entailment queries over synthetic weighted DL KBs to show that the redesigned ASP encoding can go beyond the small KBs and search spaces processable by the earlier proof-of-concept implementation [1].
This work was partially supported by GNCS-INdAM. Mario Alviano was partially supported by Italian Ministry of Research (MUR) under PNRR project FAIR ``Future AI Research'', CUP H23C22000860006 and by the LAIA lab (part of the SILA labs).
Abstract interpretation [3] allows constructing sound program analysis tools that can extract properties of a program by safely approximating its semantics. Static analysis tools are a crucial component of the development environments for many programming languages. Abstract interpretation proved practical and effective in the context of (Constraint) Logic Programming ((C)LP) [15, 12, 14, 13, 1, 10, 9], which was one of its first application areas (see [6]), and the techniques developed in this context have also been applied to the analysis and verification of other programming paradigms by using semantic translation to Horn Clauses (see the recent survey [4]). Unfortunately, the implementation of (sound, precise, efficient) abstract domains usually requires coding a large number of domain-related operations from scratch. Moreover, due to undecidability, a loss of precision is inevitable, which eventually makes the design (and implementation) of more domains, as well as their combinations, necessary to successfully prove arbitrary program properties. In this paper we focus on the latter problem by proposing a rule-based methodology for the design and rapid prototyping of new domains for logic programs, as well as for composing and combining existing ones. Our techniques are inspired by those used in logic-based languages for implementing constraint domains at different abstraction levels.
Proposal. The construction of analyses based on abstract interpretation requires the definition of some basic domain operations ($\subseteq$, $\sqcap$, $\sqcup$ and, optionally, the widening operator $\nabla$); the abstract semantics of the primitive constraints (representing the built-ins, or basic operations, of the source language) via transfer functions ($f(\alpha)$); and possibly some other additional instrumental operations over abstract substitutions. In addition, the classical top-down analysis approach requires a number of additional definitions of derived operations used by the analysis framework to implement procedure call, return, recursion, etc. Detailed descriptions of all these operations can be found in [12, 11, 2, 7, 5]. We propose a rule language, inspired by rewriting, to derive, script, and combine abstract domains. The objective is to reduce the time and effort required to write new abstract domains, both from scratch and as combinations of other domains.
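As a point of reference for the operations listed above, they can be sketched for a simple interval domain. This is an illustrative Python sketch under our own conventions (pairs with infinite bounds, `BOT` for the empty value), unrelated to the actual CiaoPP implementation.

```python
# Minimal interval domain: order (⊑), meet (⊓), join (⊔), widening (∇),
# and one transfer function for a primitive constraint.
INF = float("inf")
BOT = None  # bottom element: unreachable / empty abstract value

def leq(a, b):                     # a ⊑ b: a is contained in b
    if a is BOT: return True
    if b is BOT: return False
    return b[0] <= a[0] and a[1] <= b[1]

def meet(a, b):                    # a ⊓ b: intersection of intervals
    if a is BOT or b is BOT: return BOT
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else BOT

def join(a, b):                    # a ⊔ b: smallest interval covering both
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                   # a ∇ b: jump unstable bounds to ±inf
    if a is BOT: return b
    if b is BOT: return a
    return (a[0] if a[0] <= b[0] else -INF,
            a[1] if a[1] >= b[1] else INF)

def transfer_add(x, y):            # transfer function f(α) for z = x + y
    if x is BOT or y is BOT: return BOT
    return (x[0] + y[0], x[1] + y[1])

assert join((0, 1), (3, 5)) == (0, 5)
assert widen((0, 1), (0, 2)) == (0, INF)
```

Writing these operations by hand for each new domain is exactly the per-domain effort the proposed rule language aims to reduce.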
The proposed rule-based language. Given $s+1$ sets of constraints $\mathcal{L}, \mathcal{C}_{1},\ldots, \mathcal{C}_{s}$, we define $AND(\mathcal{L}, \mathcal{C}_{1},\ldots, \mathcal{C}_{s})$ as the set of rules of the form $l_{1},\ldots,l_{n} \,\, | \,\, g_{1},\ldots,g_{l} \,\, \Rightarrow \,\, r_{1},\ldots,r_{m} \,\, \# \,\, label$, where $s$, $n$, $m$, and $l$ are arbitrary positive integers, and the rule meets the following condition: \begin{equation*} \forall i, j, k \text{ s.t. } i \in \{1,\ldots,n\}, j \in \{1,\ldots,m\} \text{ and } k \in \{1,\ldots,l\} : (l_{i}, r_{j} \in \mathcal{L} \text{ and } \exists u \in \{1,\ldots,s\} \text{ s.t. } g_{k} \in \mathcal{C}_{u}) \end{equation*} The elements $l_{1},\ldots,l_{n}$ constitute the left side of the rule; $r_{1},\ldots,r_{m}$ the right side; and $g_{1},\ldots,g_{l}$ the guards. Given $t+s$ sets of constraints $\mathcal{L}_{1}, \ldots, \mathcal{L}_{t}, \mathcal{C}_{1}, \ldots, \mathcal{C}_{s}$ such that $\forall v \in \{1,\ldots,t\} : \mathcal{L}_{v} \subseteq \mathcal{L}$, we define $OR(\mathcal{L}_{1}, \ldots, \mathcal{L}_{t}, \mathcal{C}_{1}, \ldots, \mathcal{C}_{s})$ as the set of rules of the form $l_{1};\ldots;l_{n} \,\, | \,\, g_{1},\ldots,g_{l} \,\, \Rightarrow \,\, r_{1},\ldots,r_{m} \,\, \# \,\, label$, where $t$, $s$, $n$, $m$, and $l$ are arbitrary positive integers, and the rule meets the following condition: \begin{equation*} \forall i, j, k \text{ s.t. } i \in \{1,\ldots,n\}, j \in \{1,\ldots,m\} \text{ and } k \in \{1,\ldots,l\} : \exists v \in \{1,\ldots,t\} \ \exists u \in \{1,\ldots,s\} \text{ s.t. } (l_{i} \in \mathcal{L}_{v}, \ r_{j} \in \mathcal{L} \text{ and } g_{k} \in \mathcal{C}_{u}) \end{equation*}
Notice that while in $\mathit{AND}$-rules all the elements $l_{i}$ belong to the same set of constraints $\mathcal{L}$, in $\mathit{OR}$-rules they belong to (possibly) different subsets of a set of constraints $\mathcal{L}$. The operational meaning of $\mathit{AND}$-rules is similar to that of rewriting rules: if the left side holds in the set to which the rule is being applied, and the guards also hold, then the left-side elements are replaced by the right-side elements. The operational meaning of $\mathit{OR}$-rules is similar, but instead of rewriting over the "same" set, the right-side elements are written into a "new" set. When no more rules can be applied, different strategies can be followed; in general, we can rewrite the pending elements to a given element or simply delete them.
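The operational reading of AND-rules described above can be illustrated by a toy fixpoint engine over a set of constraints. This is a hypothetical Python sketch: the tuple encoding of constraints and the single rule (in the spirit of `list_const2` from the example domain) are invented for illustration and do not reflect the actual implementation.

```python
# Apply AND-rules to a set of constraints until fixpoint: whenever all
# left-side elements are present and the guard holds, the left side is
# replaced by the right side.
def apply_and_rules(constraints, rules):
    cs = set(constraints)
    changed = True
    while changed:
        changed = False
        for left, guard, right in rules:
            if left <= cs and guard(cs):
                cs = (cs - left) | right
                changed = True
    return cs

# One hypothetical rule: from inf(T, 3) and a list head H = 2 with
# H =< 3, derive inf([H|T], 2) (lists encoded as ("cons", Head, Tail)).
rules = [
    (frozenset({("inf", "T", 3)}),
     lambda cs: True,  # guard H =< X assumed to hold here (2 =< 3)
     frozenset({("inf", ("cons", 2, "T"), 2)})),
]
print(apply_and_rules({("inf", "T", 3)}, rules))
```

OR-rules would differ only in that the derived elements are collected into a fresh set rather than rewriting the input set in place.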
In the context of abstract interpretation, the sets of constraints mentioned above are to be seen as abstract domains, with the rules then applied over abstract substitutions/constraints. AND-rules are intended to capture the behaviour of operations over one abstract substitution, using the knowledge that can be inferred from other substitutions that meet the guards. This is useful, for example, when defining the greatest lower bound. These rules are also useful for refining abstract substitutions, performing abstractions, combining different abstract domains, etc. OR-rules, on the other hand, are intended to capture the behaviour of operations applied over multiple abstract substitutions of an abstract domain, such as the least upper bound or the widening.
partition([], _, [], []).
partition([E|R], C, Left, [E|Right1]) :-
E >= C,
partition(R, C, Left, Right1).
partition([E|R], C, [E|Left1], Right) :-
E < C,
partition(R, C, Left1, Right).
An example. Fig. 1 shows a classic Prolog predicate for partitioning a list. A call partition(L, X, L1, L2) is expected to satisfy some properties; for example, that $\forall v \in L2, X \leq v$, which we can express as inf(L2, X). With the help of two auxiliary domains to deal with structures containing variables and with constraints ($\mathit{depth}$-$k$ and $\mathit{polyhedra}$, resp.), we can derive an abstract domain for the inf/2 property. A subset of the rules is shown in Fig. 2. These rules, when connected with the abstract domain operations, allow us to exploit the information gathered by the previous domains and use it to infer inf(L2, X). Similarly, we can also capture the equivalent sup(L1, X) property, or multiset properties capturing that $L \subseteq L1 \cup L2$ and $L1 \cup L2 \subseteq L$. Moreover, we can infer the sortedness property for the classical quicksort implementation.
inf(X, top) | X = [] ==> inf(X, 0.Inf). # empty
inf(L, X) | L = [H|T] ==> inf(T, X). # list_const1
inf(T, X) | L = [H|T], H =< X ==> inf(L, H). # list_const2
inf(L, X) | L = S ==> inf(S, X). # unif_prop
inf(L, X) | Y =< X ==> inf(L, Y). # reduction
inf(X, A) ; inf(X, B) | A =< B ==> inf(X, A). # lub_1
inf(X, A) ; inf(X, B) | A >= B ==> inf(X, B). # lub_2
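The OR-rules lub_1/lub_2 above compute the least upper bound of the inf/2 information coming from two analysis branches: given inf(X, A) from one branch and inf(X, B) from the other, only the smaller bound is entailed by both. The following Python sketch mirrors this; the encoding of abstract substitutions as dictionaries mapping variables to lower bounds is our own illustrative assumption.

```python
def lub_inf(subst1, subst2):
    """Least upper bound of two abstract substitutions for the inf/2
    property: each substitution maps a variable to a known lower bound
    of the values it may contain."""
    out = {}
    # Only variables bounded in *both* branches keep a bound in the lub
    for var in subst1.keys() & subst2.keys():
        a, b = subst1[var], subst2[var]
        out[var] = a if a <= b else b   # rules lub_1 / lub_2
    return out

# Two branches of the analysis give different bounds for L2
print(lub_inf({"L2": 3, "L1": 0}, {"L2": 5}))  # {'L2': 3}
```

Variables bounded in only one branch lose their bound, which corresponds to rewriting pending elements to top once no more rules apply.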
Conclusions & future work. We have presented a framework for simplifying the development of abstract domains for logic programs in the context of abstract interpretation frameworks, and concretely that of CiaoPP. While some domains are easier to specify with a rule-based language, keeping a constraint-based representation for abstract substitutions may not be efficient compared with specialized representations and operations. In this respect, we plan to explore as future work the use of rules both as an input language for abstract domain compilation and as a specification language for debugging or verifying properties of hand-written domains. In our experience so far, the proposed approach seems promising for prototyping and experimenting with new domains, enhancing precision for particular programs, and adding domain combination rules, without the need to understand the analysis framework internals.
[1]M.García de la Banda, M.Hermenegildo, M.Bruynooghe, V.Dumortier, G.Janssens & W.Simoens (1996): Global Analysis of Constraint Logic Programs. ACM Transactions on Programming Languages and Systems 18(5), pp. 564−615.
[2]M.Bruynooghe (1991): A Practical Framework for the Abstract Interpretation of Logic Programs. Journal of Logic Programming 10, pp. 91−124.
[3]P.Cousot & R.Cousot (1977): Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. In: ACM Symposium on Principles of Programming Languages (POPL'77), ACM Press, pp. 238−252, doi:10.1145/512950.512973.
[4]Emanuele De Angelis, Fabio Fioravanti, John P. Gallagher, Manuel V. Hermenegildo, Alberto Pettorossi & Maurizio Proietti (2021): Analysis and Transformation of Constrained Horn Clauses for Program Verification. Theory and Practice of Logic Programming FirstView, pp. 1−69, doi:10.1017/S1471068421000211. Available at http://arxiv.org/abs/2108.00739.
[5]I.Garcia-Contreras, J.F. Morales & M.V. Hermenegildo (2021): Incremental and Modular Context-sensitive Analysis. Theory and Practice of Logic Programming 21(2), pp. 196−243, doi:10.1017/S1471068420000496. Available at https://arxiv.org/abs/1804.01839.
[6]R.Giacobazzi & F.Ranzato (2022): History of Abstract Interpretation. IEEE Ann. Hist. Comput. 44(2), pp. 33−43. Available at https://doi.org/10.1109/MAHC.2021.3133136.
[7]M.V. Hermenegildo, G.Puebla, K.Marriott & P.Stuckey (2000): Incremental Analysis of Constraint Logic Programs. ACM Transactions on Programming Languages and Systems 22(2), pp. 187−223.
[8]D.Jurjo, J.F. Morales, P.Lopez-Garcia & M.V. Hermenegildo (2022): A Rule-based Approach for Designing and Composing Abstract Domains. Technical Report, CLIP Lab, IMDEA Software Institute. Available at https://cliplab.org/papers/jurjo-domcons-tr.pdf.
[9]A.Kelly, K.Marriott, H.Søndergaard & P.J. Stuckey (1998): A Practical Object-Oriented Analysis Engine for CLP. Software: Practice and Experience 28(2), pp. 188−224.
[10]B.Le Charlier & P.Van Hentenryck (1994): Experimental Evaluation of a Generic Abstract Interpretation Algorithm for Prolog. ACM Transactions on Programming Languages and Systems 16(1), pp. 35−101.
[11]K.Muthukumar & M.Hermenegildo (1989): Determination of Variable Dependence Information at Compile-Time Through Abstract Interpretation. In: 1989 North American Conference on Logic Programming, MIT Press, pp. 166−189.
[12]K.Muthukumar & M.Hermenegildo (1990): Deriving A Fixpoint Computation Algorithm for Top-down Abstract Interpretation of Logic Programs. Technical Report ACT-DC-153-90, Microelectronics and Computer Technology Corporation (MCC), Austin, TX 78759. Available at http://cliplab.org/papers/mcctr-fixpt.pdf.
[13]K.Muthukumar & M.Hermenegildo (1992): Compile-time Derivation of Variable Dependency Using Abstract Interpretation. Journal of Logic Programming 13(2/3), pp. 315−347.
[14]P.Van Roy & A.M. Despain (1990): The Benefits of Global Dataflow Analysis for an Optimizing Prolog Compiler. In: North American Conference on Logic Programming, MIT Press, pp. 501−515.
[15]R.Warren, M.Hermenegildo & S.K. Debray (1988): On the Practicality of Global Flow Analysis of Logic Programs. In: Fifth International Conference and Symposium on Logic Programming, MIT Press, pp. 684−699.