Published: 13th September 2017. DOI: 10.4204/EPTCS.258. ISSN: 2075-2180.
| Paper | Authors | Page |
|---|---|---|
| Preface | Ekaterina Komendantskaya and John Power | |
| Extending Coinductive Logic Programming with Co-Facts | Davide Ancona, Francesco Dagnino and Elena Zucca | 1 |
| Structural Resolution for Abstract Compilation of Object-Oriented Languages | Luca Franceschini, Davide Ancona and Ekaterina Komendantskaya | 19 |
| Trace and Stable Failures Semantics for CSP-Agda | Bashar Igried and Anton Setzer | 36 |
| Structural Resolution with Co-inductive Loop Detection | Yue Li | 52 |
| An Executable Specification of Typing Rules for Extensible Records based on Row Polymorphism (Extended Abstract) | Ki Yung Ahn | 68 |
| Abstract Compilation for Type Analysis of Object-Oriented Languages (Extended Abstract) | Davide Ancona | 70 |
| Refinement Types and Higher-Order Constrained Horn Clauses (Extended Abstract) | C.-H. Luke Ong and Steven J. Ramsay | 72 |
| Category Theoretic Semantics for Logic Programming: Laxness and Saturation (Extended Abstract) | John Power | 74 |
The Workshop on Coalgebra, Horn Clause Logic Programming and Types was held on 28-29 November 2016 in Edinburgh. The workshop marked the end of the EPSRC Grant Coalgebraic Logic Programming for Type Inference, held by E. Komendantskaya (Heriot-Watt University) and J. Power (University of Bath), UK.
The workshop consisted of two parts: the first was devoted to the semantics of logic programming, the second to its application to type inference. Over recent years, there has been considerable research on the semantics of the operational aspects of logic programming. The underlying mathematics has often involved coalgebra on a presheaf category or a variant of a presheaf category, with index given by the Lawvere theory generated by a signature. That has much in common with many years of research on the semantics of functional programming.
The combination of coalgebra with Lawvere theories has already led to applied development, and there seems good reason to hope that that will continue, with application to logic and functional programming both separately and in combination with each other. The workshop brought researchers in the area together to compare the techniques they are developing and the problems they are addressing.
Our semantic investigations led to analysis of theorem proving, and that was reciprocated, with theorem proving further influencing our semantics. Theorem proving in turn led us to study type inference, which became the central applied focus of our work and thus the second topic of the workshop.
The workshop featured six invited and seven regular talks, with topics ranging over Categorical Logic, Semantics, Type Theory, Proof Theory, Programming Languages and Type Inference. We were very fortunate to have six excellent invited speakers. Extended abstracts of all accepted contributions appeared in the pre-proceedings of the workshop. The present volume contains a selection of full papers, submitted and peer-reviewed after the workshop. It also includes extended abstracts of four of the invited talks. We thank the authors for their contributions.
The paper Extending Coinductive Logic Programming with Co-Facts by Davide Ancona, Francesco Dagnino and Elena Zucca presents their novel work on extending the syntax of coinductive logic programming. The key idea is to allow some unit clauses of a logic program to be assigned a coinductive interpretation. The paper shows correctness of the operational semantics of this extension and presents a prototype implementation.
The paper Structural Resolution for Abstract Compilation of Object-Oriented Languages by Luca Franceschini, Davide Ancona and Ekaterina Komendantskaya shows an application of structural resolution to resolving nonterminating cases of type inference in Java. Structural resolution is an algorithm for coinductive proof inference that arose from earlier work by Komendantskaya and Power on coalgebraic semantics of logic programming.
The paper Trace and Stable Failures Semantics for CSP-Agda by Bashar Igried and Anton Setzer extends CSP-Agda, a formalisation of the process algebra CSP in Agda, an interactive theorem prover based on dependent types. The paper proposes a formalisation of the trace semantics and the stable failures semantics of CSP and proves selected algebraic laws of CSP using these semantics.
The paper Structural Resolution with Co-inductive Loop Detection by Yue Li presents an extension of structural resolution with coinductive loop detection, proves soundness of this approach and presents an implementation of the resulting novel inference algorithm.
The papers were peer-reviewed by the Programme Committee. Many thanks to the Programme Committee and to all who helped to organise the event! In particular, we are grateful to Morag Jones, June Maxwell, Iain McCrone and Christine McBride for technical and administrative support. Thanks to the International Centre for Mathematical Sciences for providing an excellent venue for the workshop. We are grateful to EPSRC (grants EP/K031864/1-2 and EP/K028243/1) for supporting this research and this workshop. Finally, we are very grateful to all those who supported the workshop by submitting abstracts and attending the meeting.
Ekaterina Komendantskaya and John Power
Type inference is perceived as a natural application of logic programming (LP). Natively supported unification in LP can serve as a basic building block of typical type inference algorithms. In particular, polymorphic type inference in the Hindley--Milner type system (HM) and its extensions are known to be succinctly specifiable and executable in Prolog [5,7]. Our previous work [2] demonstrates Prolog specifications of more advanced features of parametric polymorphism beyond HM, such as type-constructor polymorphism (TCPoly), a.k.a. higher-kinded polymorphism. In addition to typing judgements ($\Delta;\Gamma \vdash t : \tau$), which relate terms ($t$) to types ($\tau$), such more advanced polymorphic type systems have kinding judgements ($\Delta \vdash \tau : \kappa$), which relate types ($\tau$) to kinds ($\kappa$). Execution of our Prolog specification for such a type system (HM+TCPoly) is multi-staged, that is, kinding judgements and typing judgements are executed in separate stages.
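As a rough illustration of how succinct such specifications can be, here is a minimal sketch of the folklore technique (not the code of [2] or [5,7]; the term constructors `var/1`, `lam/2`, `app/2`, `let/3` and the helper predicates are ours). Let-polymorphism is obtained with `copy_term/2`: a let-bound name is stored together with the context in which it was generalised, so that instantiation renames apart only the type variables that do not occur in that context.

```prolog
% Minimal sketch of HM-style inference in Prolog (illustrative only).
% Terms: var(X), lam(X,E), app(E1,E2), let(X,E1,E2); types are Prolog
% terms whose logic variables play the role of type variables.
type(Ctx, var(X), T)            :- first(X, S, Ctx), instantiate(S, T).
type(Ctx, app(E1, E2), B)       :- type(Ctx, E1, arr(A, B)), type(Ctx, E2, A).
type(Ctx, lam(X, E), arr(A, B)) :- type([X-mono(A)|Ctx], E, B).
type(Ctx, let(X, E1, E2), T)    :-
    type(Ctx, E1, A),                       % infer a type for the definition
    type([X-poly(Ctx, A)|Ctx], E2, T).      % generalise with respect to Ctx

instantiate(mono(T), T).
instantiate(poly(Ctx, T0), T) :-
    copy_term(Ctx-T0, CtxC-T),              % rename all variables apart ...
    CtxC = Ctx.                             % ... then re-share those from Ctx

% First-matching lookup in the context, respecting shadowing.
first(X, S, [Y-S|_])   :- X == Y.
first(X, S, [Y-_|Ctx]) :- X \== Y, first(X, S, Ctx).
```

For example, the query `?- type([], let(id, lam(x, var(x)), app(var(id), var(id))), T).` succeeds with `T = arr(B, B)` for a fresh `B`, thanks to the polymorphic type given to `id`.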
In this talk, we demonstrate a specification for typing records, one of the most widely supported compound data structures in real-world programming languages. More specifically, we develop a specification of a type system that supports extensible records based on row polymorphism (RowPoly) [3]. We build upon our previous work on HM+TCPoly because the use of rows in record types is similar to supplying type arguments to a type constructor. For instance, a type constructor for polymorphic lists ($\textsf{List} : \star \to \star$) becomes a specific list type (e.g., $\textsf{List Int}: \star$) when it is supplied a type (e.g., $\textsf{Int}:\star$) as its argument. Similarly, the record type constructor ($\textsf{Rec} : \textsf{row} \to \star$) becomes a specific record type (e.g., $\textsf{Rec}\;\{\textit{name}:\textsf{String},\;\textit{age}:\textsf{Int}\} : \star$) when it is supplied a row (e.g., $\{\textit{name}:\textsf{String},\;\textit{age}:\textsf{Int}\}$ : row) as its argument. Just as a polymorphic list type $(\forall\alpha:\star.\,\textsf{List}\;\alpha)$ ranges over all specific instances of lists, a row-polymorphic record type $(\forall\rho:\textsf{row}.\,\textsf{Rec}\;\{\textit{name}:\textsf{String}\mid\rho\})$ ranges over any record type with a $\textit{name}$ field of type $\textsf{String}$; e.g., $\textsf{Rec}\,\{\textit{name}:\textsf{String}\}$ when $\rho$ instantiates to the empty row $\{\}$, or $\textsf{Rec}\,\{\textit{name}:\textsf{String},\,\textit{age}:\textsf{Int}\}$ when $\rho$ instantiates to $\{\textit{age}:\textsf{Int}\}$.
The extension from HM+TCPoly to HM+TCPoly+RowPoly consists of two aspects. The first is extending the kind syntax from $\,\kappa ::= \star \mid \kappa \to \kappa\,$ to $\,\kappa ::= \textsf{row} \mid \star \mid \kappa \to \kappa$ and propagating this extension structurally: the syntax of types and terms needs to be extended with rows and records, and kinding and typing rules for rows and record types need to be added accordingly. The second concerns how to implement those additional rules in Prolog. Unlike lists, rows are order-irrelevant; for instance, $\{\textit{name}:\textsf{String},\,\textit{age}:\textsf{Int}\}$ and $\{\textit{age}:\textsf{Int},\,\textit{name}:\textsf{String}\}$ should be considered equivalent. Moreover, type inference involving row-polymorphic types requires unification between rows of unknown size. Unfortunately, Prolog's native unification supports neither order-irrelevant structures nor collections of unknown size effectively. We overcome this limitation by implementing a user-defined unification between rows of unknown size, inspired by Stolzenburg's work [5] on unification between sets of unknown size.
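The following is a hypothetical sketch of such a user-defined row unification (ours, not the specification summarized in Table 1). A row is represented as an open-ended list of `Label-Type` pairs whose tail is either `[]` (a closed row) or a row variable (an open row).

```prolog
:- use_module(library(lists)).   % select/3, append/3

% row_unify(+R1, +R2): unify two rows up to field order and unknown size.
row_unify(R1, R2) :-
    row_fields(R1, Fs1, T1),
    row_fields(R2, Fs2, T2),
    unify_fields(Fs1, Fs2, Only1, Only2),
    bind_tail(T1, Only2, Shared),   % fields missing on one side must
    bind_tail(T2, Only1, Shared).   % come from the other row's tail

% Split a row into its known fields and its tail (a row variable or []).
row_fields(R, [], R)         :- var(R), !.
row_fields([], [], [])       :- !.
row_fields([F|R], [F|Fs], T) :- row_fields(R, Fs, T).

% Unify fields with matching labels; collect the leftovers of each side.
unify_fields([], Fs2, [], Fs2).
unify_fields([L-T|Fs1], Fs2, Only1, Only2) :-
    (   select(L-T2, Fs2, Rest2)
    ->  T = T2, unify_fields(Fs1, Rest2, Only1, Only2)
    ;   Only1 = [L-T|Only1R], unify_fields(Fs1, Fs2, Only1R, Only2)
    ).

% A closed row cannot absorb extra fields; an open row's tail becomes the
% missing fields followed by a row variable shared between both rows.
bind_tail([], [], []).
bind_tail(T, Extra, Shared) :- var(T), append(Extra, Shared, T).
```

For example, `?- row_unify([name-string|R1], [age-int|R2]).` succeeds with `R1 = [age-int|Rho]` and `R2 = [name-string|Rho]` for a shared fresh row variable `Rho`, whereas unifying a closed row with one that demands an extra field fails.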
Our type system specifications are self-contained, succinct logic programs with minimal dependencies on external libraries or constraint solving systems, as summarized in Table 1 below.
| type system | size (in 80-column lines) | auxiliary code and extra-logical built-ins used |
|---|---|---|
| HM | 15 lines | `\==`, `copy_term` |
| HM+TCPoly | 30 lines | DCG-related predicates, `gensym` |
| HM+TCPoly+RowPoly | 45 lines | +40 lines of code for row unification, cut (`!`) |
There are mainly two limitations in our approach of using Prolog to develop relational executable specifications of type systems. The first is the use of extra-logical predicates and meta-programming methods, which makes it difficult to analyze the specification and argue its correctness. The second is the lack of error messages (beyond getting `false` from Prolog's failure).

For future work, we hope to overcome the first limitation by (1) formulating a more limited but principled theory of meta-programming in LP that is suited for specifying multiple levels of typing rules, and (2) supporting basic operations (e.g., first-matching lookup in a context, row unification) as primitives of the LP system and verifying their internal implementation separately, rather than treating them as part of a user-provided specification. Regarding the second limitation, there is much work on type error messages for functional languages (e.g., [6]), but further research is needed to determine which of those ideas are compatible with our approach.
Abstract compilation (AC) is a technique for static type analysis based on coinductive logic programming: the code under analysis is translated into a set of Horn clauses, and queries are solved with respect to the coinductive interpretation of the generated clauses.
AC can be employed to perform precise type analysis of object-oriented languages, supporting data polymorphism with recursive record and union types. To improve AC, two interesting research directions can be investigated: reusing compilation technology, such as the Static Single Assignment intermediate representation, which allows more precise analysis; and coinductive constraint logic programming with a suitable notion of subtyping, which can further improve the accuracy of AC.
Abstract compilation (AC) [7,5,8] is a static type analysis approach aiming to reconcile compositional and global analysis through symbolic execution. It consists in translating the program under analysis into a more abstract, type-based representation expressed as a set of Horn clauses, and in solving an automatically generated goal with respect to the coinductive interpretation of this translation. Coinduction is required to properly deal both with recursive types, represented by cyclic terms, and with mutually recursive user-defined functions/methods.
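As a rough illustration (our sketch, not the compilation schemes of the cited papers; the predicate and constructor names are invented), a method `int inc(int x) { return x + 1; }` of a class `c` could be compiled into a Horn clause over type terms, with recursive types represented by cyclic terms that are meaningful under the coinductive interpretation:

```prolog
% Hypothetical sketch of abstract compilation.  Object types are terms
% obj(Class, FieldTypes); a method is compiled into a clause relating the
% types of the receiver, the arguments and the result.
has_meth(obj(c, _Fields), inc, [X], int) :- subtype(X, int).

% Union types are terms too, so data polymorphism can be expressed.
subtype(T, T).
subtype(T, union(U, _)) :- subtype(T, U).
subtype(T, union(_, U)) :- subtype(T, U).

% Under the coinductive interpretation, recursive types are cyclic
% (rational) terms, e.g. a list of integers:
%   L = union(obj(nil, []), obj(cons, [hd-int, tl-L]))
```

A call site `o.inc(e)` would then be compiled into a goal of the shape `has_meth(To, inc, [Te], Tres)` over the types inferred for `o` and `e`.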
AC allows modular analysis because several compilation schemes can be defined for the same language, each corresponding to a different kind of analysis, without changing the inference engine, which typically implements coinductive logic programming [12, 11, 6]. Compilation schemes based on union and record types have been studied to support parametric and data polymorphism (that is, polymorphic methods and fields) in object-oriented languages, and a smooth integration of nominal and structural types [7]; other proposed compilation schemes aim to detect uncaught exceptions [5], or to exploit advanced intermediate representations allowing more accurate analysis [8, 9].
AC reconciles compositional and global analysis: on the one hand, the source code of the program under analysis is compiled once and for all, hence preexisting source code need not be reinspected when new code is added, as in compositional analysis; on the other hand, context-sensitive analysis is supported, since different goals can be solved at different times for the same code fragment. Furthermore, with some limitations on analysis precision, compositional compilation can be achieved with more general non-ground goals. Another advantage of AC is its direct relationship with Prolog, and the natural support offered by this programming language for rapid prototyping of type analysis [1].
Recent compiler technology provides fast algorithms for translating source code into an Intermediate Representation (IR) such as SSA (Static Single Assignment) form, which is particularly suitable for efficient static analysis. Using SSA, and extensions of it such as SSI (Static Single Information), improves the precision of the type analysis of local variables performed with AC [8, 9]; recently, a high-level notion of SSA [3] has been introduced to simplify the compilation scheme used by AC.
To exploit SSA in AC, union types have to be introduced in order to compile the φ pseudo-functions that are inserted at program merge points during the translation into IR; in this way it is possible to infer a different type for each occurrence of the same local variable, as sketched below.
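For instance (a hypothetical fragment, not the actual compilation scheme), the merge `x3 = phi(x1, x2)` can be compiled so that the type of `x3` is the union of the types inferred for the two incoming occurrences:

```prolog
% Hypothetical fragment: each occurrence x1, x2 keeps its own type T1, T2,
% and the phi pseudo-function merges them into a union type for x3.
phi_type(T1, T2, union(T1, T2)).
```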
SSI can further improve the analysis by refining the types of local variables in the branches of a program; to this aim, intersection and negation types are required.
One of the limitations of AC based on coinductive logic programming is that, in the presence of recursive methods, the inference engine may fail to find a regular derivation even if the program under analysis is type safe.
To overcome this problem, one can exploit either structural resolution [10], to deal with non-regular derivations, or subtyping constraints [5], to employ subsumption instead of simple term unification so as to make derivations regular. The latter solution allows the exploitation of coinductive constraint logic programming with advanced forms of subtyping with union and intersection types [2, 4]; to this aim, the predicates used to generate the logic program must carry variance annotations, and a sound decision procedure for the satisfaction of subtyping constraints must be provided.
Constrained Horn clauses have recently emerged as a very promising candidate for a logical basis for automatic program verification (especially symbolic model checking). There is evidence to suggest that many first-order verification problems which, at their heart, require finding relations satisfying some given conditions, can be posed as questions of solving systems of Horn clauses, modulo constraints expressed in some suitable background theory [1, 2]. Consequently, researchers have developed several highly efficient solvers for first-order constrained Horn clauses.
We are interested in verifying higher-order programs. Here too it seems very worthwhile to look for a unifying, logical basis for automated program verification. Furthermore, it is natural to frame the verification problem as a search for relations that meet appropriate criteria, and for these criteria to be expressed as constrained Horn clauses. However, one major difference with the foregoing work is that the relations will, in general, be of higher type. For example, consider the following higher-order program: \[ \begin{array}{l} \mathsf{let}\ \mathit{add}\ x\ y = x + y \\ \mathsf{let}\ \mathsf{rec}\ \mathit{iter}_0\ f\ n =\\ \quad\mathsf{if}\ n \leq 0\ \mathsf{then}\ 0\ \mathsf{else}\ f\ n\ (\mathit{iter}_0\ f\ (n - 1)) \\ \mathsf{in}\ \lambda n.\ \mathsf{assert}\ (n \leq \mathit{iter}_0\ \mathit{add}\ n) \end{array} \]
The term $\mathit{iter}_0\ \mathit{add}\ n$ is the sum of the integers from $1$ to $n$ in case $n$ is non-negative and is $0$ otherwise. The program is safe just in case the assertion is never violated, i.e. the summation is not smaller than $n$. To verify safety, we must find an invariant that implies the required property. For our purposes, an invariant will be an over-approximation of the input-output graphs of the functions defined in the program.
The existence of such an invariant is specified by the following set of higher-order constrained Horn clauses in the unknowns $\mathit{Iter}_0 : (\mathsf{int} \to \mathsf{int} \to \mathsf{int} \to \mathsf{bool}) \to \mathsf{int} \to \mathsf{int} \to \mathsf{bool}$ and $\mathit{Add} : \mathsf{int} \to \mathsf{int} \to \mathsf{int} \to \mathsf{bool}$. \[ \begin{array}{l} \forall xyz.\, z = x + y \Rightarrow \mathit{Add}\ x\ y\ z \\ \forall fnr.\, n \leq 0 \wedge r = 0 \Rightarrow \mathit{Iter}_0\ f\ n\ r \\ \forall fnr.\,(\exists p.\, n > 0 \wedge \mathit{Iter}_0\ f\ (n-1)\ p \wedge f\ n\ p\ r) \Rightarrow \mathit{Iter}_0\ f\ n\ r \\ \forall nr.\, \mathit{Iter}_0\ \mathit{Add}\ n\ r \Rightarrow n \leq r \end{array} \] The first, a definite clause, constrains $\mathit{Add}$ to be an over-approximation of the addition function. The second and third, both definite clauses, constrain $\mathit{Iter}_0$ to be an over-approximation of the iteration combinator. The last, a goal clause, constrains these over-approximations to be small enough that they exclude the possibility of relating an input $n$ to a smaller output $r$. A model of these clauses in the theory of integer linear arithmetic, assigning (the characteristic function of) relations to $\mathit{Iter}_0$ and $\mathit{Add}$, is exactly such an invariant. Hence, such an assignment constitutes a witness to safety of the program. One such model is the following (expressed as terms of higher-order logic): \[ \begin{array}{l} \mathit{Add} \mapsto \lambda xyz.\ z = x+y \\ \mathit{Iter}_0 \mapsto \lambda fnr.\ (\forall x y z.\, f\ x\ y\ z \Rightarrow 1 \leq x \Rightarrow y < z) \Rightarrow n \leq r \end{array} \] The aim of our work is to automate the search for such models. We take inspiration from an already successful method for higher-order program verification, namely refinement type inference [4, 3, 5, 6].
Refinement types are a way to specify invariants for higher-order programs. Their power comes from the ability to lift rich first-order theories over data to higher types using subtyping and the dependent product. We have developed a system for assigning refinement types to higher-order constrained Horn clauses. The idea is that every valid type assignment $\Gamma$ to the higher-order unknowns is a syntactic representation of a model of the definite clauses. For example, the model discussed previously can be represented by the type assignment $\Gamma_I$: \[ \begin{array}{l} \mathit{Add}: x\!\!: \mathsf{int} \to y\!\!: \mathsf{int} \to z\!\!: \mathsf{int} \to \mathsf{bool}\langle z = x + y \rangle \\ \mathit{Iter}_0: (x\!\!: \mathsf{int} \to y\!\!: \mathsf{int} \to z\!\!: \mathsf{int} \to \mathsf{bool}\langle 1 \leq x \Rightarrow y < z \rangle) \to n\!\!:\mathsf{int} \to r\!\!:\mathsf{int} \to \mathsf{bool}\langle n \leq r \rangle \end{array} \] The key to the enterprise is the refinement type for Booleans, $\mathsf{bool}\langle \phi \rangle$, which is parametrised by a first-order constraint formula $\phi$. The meaning of this type under a valuation $\alpha$ of its free first-order variables is the set of truth values $\{\mathsf{false}, [\![\phi]\!](\alpha)\}$ determined by evaluation of $\phi$. The dependent product and integer types are interpreted standardly, so that the first type above can be read as the set of all ternary relations on integers $x,y$ and $z$ that are false whenever $z$ is not $x + y$.
Soundness of the type system allows one to conclude that certain first-order constraint formulas can be used to approximate higher-order Horn formulas. Given a goal formula $s$, from the derivability of the judgement $\Gamma \vdash s : \mathsf{bool}\langle\phi\rangle$ it follows that $s \Rightarrow \phi$ in the model (represented by) $\Gamma$. Consequently, if $\Gamma$ is a valid model of the definite clauses and $s$ is the negation of the goal formula, then $\Gamma \vdash s : \mathsf{bool}\langle\mathsf{false}\rangle$ implies that the model validates the desired property. In particular, since $\Gamma_I$ is a valid type assignment for $\mathit{Add}$ and $\mathit{Iter}_0$ and since $\Gamma_I \vdash \exists nr.\,\mathit{Iter}_0\ \mathit{Add}\ n\ r \wedge n > r : \mathsf{bool}\langle \mathsf{false} \rangle$ is derivable, it follows that the constrained Horn clause problem above is solvable and hence the program is safe.
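The key step in that derivation (sketched here using standard refinement subtyping, which the abstract does not spell out) is that the type assigned to $\mathit{Add}$ by $\Gamma_I$ is a subtype of the argument type expected by $\Gamma_I(\mathit{Iter}_0)$, a check that reduces to an entailment in integer linear arithmetic: \[ \mathsf{bool}\langle z = x + y \rangle \;<:\; \mathsf{bool}\langle 1 \leq x \Rightarrow y < z \rangle \qquad\text{since}\qquad z = x + y \;\models\; 1 \leq x \Rightarrow y < z \] in the context $x, y, z : \mathsf{int}$.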
By phrasing the higher-order constrained Horn clause problem in this way, machinery already developed for refinement type inference can be adapted to automate the search for models (i.e. type assignments). This machinery actually reduces refinement type assignment to the problem of solving sets of first-order constrained Horn clauses. Consequently, in total, our work can be seen as a method for obtaining solutions of higher-order constrained Horn clause problems by solving first-order constrained Horn clause problems. An implementation of the method yields promising results.
John Power would like to acknowledge the support of EPSRC grant EP/K028243/1 and Royal Society grant IE151369. No data was generated in the course of this project.
Recent research on category theoretic semantics of logic programming has focused on two ideas: lax semantics ([3]) and saturated semantics ([1]). Until now, the two ideas have been presented as alternatives, but that competition is illusory, the two ideas being two views of a single, elegant body of theory, reflecting different but complementary aspects of logic programming.
Given a set of atoms $At$, one can identify a variable-free logic program $P$ built over $At$ with a $P_fP_f$-coalgebra structure on $At$, where $P_f$ is the finite powerset functor on $Set$: each atom is the head of finitely many clauses in $P$, and the body of each clause contains finitely many atoms. If $C(P_fP_f)$ is the cofree comonad on $P_fP_f$, then, given a logic program $P$ qua $P_fP_f$-coalgebra, the corresponding $C(P_fP_f)$-coalgebra structure characterises the and-or derivation trees generated by $P$.
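As a small illustration (a hypothetical ground program, not taken from the abstract), consider the variable-free program with clauses $a \gets b, c$, $\;a \gets d\,$ and $\,b \gets$. The associated coalgebra structure $\gamma : At\longrightarrow P_fP_fAt$ sends each atom to the set of bodies of the clauses of which it is the head: \[ \gamma(a) = \{\{b,c\},\{d\}\}, \qquad \gamma(b) = \{\emptyset\}, \qquad \gamma(c) = \gamma(d) = \emptyset . \]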
Extending this to arbitrary programs, given a signature $\Sigma$ of function symbols, let $\mathcal{L}_{\Sigma}$ denote the Lawvere theory generated by $\Sigma$, and, given a logic program $P$ with function symbols in $\Sigma$, consider the functor category $[\mathcal{L}_{\Sigma}^{op},Set]$, extending the set $At$ of atoms in a variable-free logic program to the functor from $\mathcal{L}_{\Sigma}^{op}$ to $Set$ sending a natural number $n$ to the set $At(n)$ of atomic formulae with at most $n$ variables generated by the function symbols in $\Sigma$ and the predicate symbols in $P$. We would like to model $P$ by a $[\mathcal{L}_{\Sigma}^{op},P_fP_f]$-coalgebra $p:At\longrightarrow P_fP_fAt$ that, at $n$, takes an atomic formula $A(x_1,\ldots ,x_n)$ with at most $n$ variables, considers all substitutions of clauses in $P$ into clauses with variables among $x_1,\ldots ,x_n$ whose head agrees with $A(x_1,\ldots ,x_n)$, and gives the set of sets of atomic formulae in the antecedents of those clauses. However, that does not work, for two reasons. The first may be illustrated by the following logic program, ListNat (for lists of natural numbers):
$1.\ \mathtt{nat(0)} \gets $
$2.\ \mathtt{nat(s(x))} \gets \mathtt{nat(x)}$
$3.\ \mathtt{list(nil)} \gets $
$4.\ \mathtt{list(cons (x, y))} \gets \mathtt{nat(x), list(y)}$
ListNat has nullary function symbols $\mathtt{0}$ and $\mathtt{nil}$. So there is a map in $\mathcal{L}_{\Sigma}$ of the form $0\rightarrow 1$ that models the symbol $0$. Naturality of $p:At\longrightarrow P_fP_fAt$ in $[\mathcal{L}_{\Sigma}^{op},Set]$ would yield commutativity of the diagram
$ \begin{CD} At(1) @>p_1>> P_fP_fAt(1)\\ @VVAt(\mathsf{0})V @VVP_fP_fAt(\mathsf{0})V\\ At(0) @>p_0>> P_fP_fAt(0) \end{CD}$

But consider $\mathtt{nat(x)}\in At(1)$: there is no clause of the form $\mathtt{nat(x)}\gets \, $ in ListNat, so commutativity of the diagram would imply that there cannot be a clause in ListNat of the form $\mathtt{nat(0)}\gets \, $ either, but in fact there is one. Thus $p$ is not a map in the functor category $[\mathcal{L}_{\Sigma}^{op},Set]$.
Lax semantics addresses this by relaxing the naturality condition on $p$ to a subset condition, so that, given, for instance, a map in $\mathcal{L}_{\Sigma}$ of the form $f:0 \rightarrow 1$, the diagram need not commute, but rather the composite via $P_fP_fAt(1)$ need only yield a subset of that via $At(0)$.
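Concretely, for ListNat and the map $\mathsf{0} : 0 \rightarrow 1$ above (a worked instance of the condition, not spelled out in the abstract): $p_1(\mathtt{nat(x)}) = \emptyset$, since no substitution instance of a clause of ListNat has head exactly $\mathtt{nat(x)}$, while $p_0(At(\mathsf{0})(\mathtt{nat(x)})) = p_0(\mathtt{nat(0)}) = \{\emptyset\}$; the lax condition only requires \[ P_fP_fAt(\mathsf{0})(p_1(\mathtt{nat(x)})) = \emptyset \;\subseteq\; \{\emptyset\} = p_0(At(\mathsf{0})(\mathtt{nat(x)})), \] which holds, whereas equality fails.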
In contrast, saturation semantics works as follows. Regarding $ob(\mathcal{L}_{\Sigma})$, equally $ob(\mathcal{L}_{\Sigma})^{op}$, as a discrete category with inclusion functor $I:ob(\mathcal{L}_{\Sigma})\longrightarrow \mathcal{L}_{\Sigma}$, the functor \[ [I,Set]:[\mathcal{L}_{\Sigma}^{op},Set]\longrightarrow [ob(\mathcal{L}_{\Sigma})^{op},Set] \] that sends $H:\mathcal{L}_{\Sigma}^{op}\longrightarrow Set$ to the composite $HI:ob(\mathcal{L}_{\Sigma})^{op} \longrightarrow Set$ has a right adjoint $R$, given by right Kan extension. So the data for $p:At\longrightarrow P_fP_fAt$ may be seen as a map in $[ob(\mathcal{L}_{\Sigma})^{op},Set]$, which, by the adjointness, corresponds to a map $\bar{p}:At\longrightarrow R(P_fP_fAtI)$ in $[\mathcal{L}_{\Sigma}^{op},Set]$, yielding saturation semantics. In this talk, we show that the two approaches can elegantly be unified, the relationship corresponding to the relationship between theorem proving and proof search in logic programming.
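Unwinding the right Kan extension (a standard computation under the conventions above, not spelled out in the abstract), for each $n$ we have \[ R(P_fP_fAtI)(n) \;\cong\; \prod_{m \in \mathbb{N}} \big(P_fP_fAt(m)\big)^{\mathcal{L}_{\Sigma}(m,n)}, \] so that $\bar{p}_n(A)$ assigns, to each substitution $\theta \in \mathcal{L}_{\Sigma}(m,n)$ of the $n$ variables of $A$ by terms in $m$ variables, the value $p_m(At(\theta)(A))$: saturation semantics records the behaviour of $A$ under all of its substitution instances.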
The second problem mentioned above is about existential variables, which we now illustrate.
GC (for graph connectivity) denotes the following logic program:
$1.\ \mathtt{connected(x,x)} \gets $
$2.\ \mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$
There is a variable $z$ in the tail of the second clause of GC that does not appear in its head. Such a variable is called an existential variable, and its presence challenges the algorithmic significance of lax semantics. In describing the putative coalgebra $p:At\longrightarrow P_fP_fAt$ just before introducing ListNat, we referred to all substitutions of clauses in $P$ into clauses with variables among $x_1,\ldots ,x_n$ whose head agrees with $A(x_1,\ldots ,x_n)$. If there are no existential variables, that amounts to term-matching, which is algorithmically efficient; but if existential variables do appear, the mere presence of a unary function symbol generates an infinity of such substitutions, creating algorithmic difficulty. When first introducing lax semantics, we (as did Bonchi and Zanasi) avoided modelling this by replacing the outer instance of $P_f$ by $P_c$, thus allowing for countably many choices. Such infiniteness militates against algorithmic efficiency, and we resolve it by refining the functor $P_fP_f$ while retaining finiteness.
This talk is based upon the paper ([2]).