Published: 9th July 2019
DOI: 10.4204/EPTCS.296
ISSN: 2075-2180


Proceedings of the Sixth Workshop on
Horn Clauses for Verification and Synthesis
and Third Workshop on
Program Equivalence and Relational Reasoning
Prague, Czech Republic, 6-7th April 2019

Edited by: Emanuele De Angelis, Grigory Fedyukovich, Nikos Tzevelekos and Mattias Ulbrich

Emanuele De Angelis, Grigory Fedyukovich, Nikos Tzevelekos and Mattias Ulbrich
Invited Talk: Foundations and Techniques for the Static Analysis of Ethereum Smart Contracts
Matteo Maffei
Invited Talk: Concolic Testing of Higher-order Functional Languages
Konstantinos (Kostis) Sagonas
Extended Abstract: Challenges in the Specialisation of Smart Horn Clause Interpreters
John P. Gallagher
Understanding Counterexamples for Relational Properties with DIbugger
Mihai Herda, Michael Kirsten, Etienne Brunner, Joana Plewnia, Ulla Scheler, Chiara Staudenmaier, Benedikt Wagner, Pascal Zwick and Bernhard Beckert
On Quantitative Comparison of Chemical Reaction Network Models
Ozan Kahramanoğulları
Relational Verification via Invariant-Guided Synchronization
Qi Zhou, David Heath and William Harris
Ultimate TreeAutomizer (CHC-COMP Tool Description)
Daniel Dietsch, Matthias Heizmann, Jochen Hoenicke, Alexander Nutz and Andreas Podelski
Proving Properties of Sorting Programs: A Case Study in Horn Clause Verification
Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi and Maurizio Proietti


This volume contains the joint post-proceedings of the 3rd Workshop on Program Equivalence and Relational Reasoning (PERR 2019) and the 6th Workshop on Horn Clauses for Verification and Synthesis (HCVS 2019), which took place in Prague, Czech Republic on April 6 and 7, respectively, as affiliated workshops of the European Joint Conferences on Theory and Practice of Software (ETAPS). PERR and HCVS are both concerned with formal program verification and related questions, with PERR focussing on a particular class of program properties and HCVS focussing on the use of Horn clauses for reasoning about programs. The topics of the two workshops overlap: for example, techniques based on Horn clauses are also used to conduct equivalence proofs. It is therefore natural to publish the proceedings in a common volume.

Program Equivalence is a topic touching on multiple communities within the field of formal program analysis. The PERR workshops are dedicated to the formal verification of program equivalence and related relational problems. This edition is the third in a series of meetings that bring together researchers from different areas interested in equivalence. The topic touches upon many aspects of formal methods: regression verification, translation validation, verified compilation, language semantics, deductive verification, (bounded) model checking, specification inference, software evolution and testing.

The goal of the workshop is to bring researchers from these different fields in touch and to stimulate an exchange of ideas, leading to the forging of a community working on PERR. It welcomes contributions on the aforementioned topics, but is also open to new questions regarding program equivalence. This includes related areas of relational reasoning such as program refinement and the verification of hyper-properties, in particular of secure information flow.

The workshop featured eight presentations, and two invited talks by Margus Veanes (Microsoft Research) and Marco Eilers (ETH Zürich).

Many program verification and synthesis problems of interest can be modeled directly using Horn clauses, and many recent advances in the Constraint/Logic Programming, Verification, and Automated Deduction communities have centered around solving problems presented as Horn clauses efficiently. The aim of the HCVS workshop series is to bring together researchers working in the communities of Constraint/Logic Programming (e.g., ICLP and CP), Program Verification (e.g., CAV, TACAS, and VMCAI), and Automated Deduction (e.g., CADE), on the topic of Horn clause based analysis, verification and synthesis. Horn clauses for verification and synthesis have been advocated by these communities at different times and from different perspectives, and this workshop is organized to stimulate interaction and a fruitful exchange and integration of experiences.

This edition follows five previous meetings: HCVS 2018 in Oxford, UK (with FLoC), HCVS 2017 in Gothenburg, Sweden (with CADE), HCVS 2016 in Eindhoven, The Netherlands (with ETAPS), HCVS 2015 in San Francisco, CA, USA (with CAV), and HCVS 2014 in Vienna, Austria (with VSL).

The workshop featured six papers, a report on the Second Competition on Constrained Horn Clauses (CHC-COMP), and two invited talks by Matteo Maffei (TU Wien, Austria) and Konstantinos Sagonas (Uppsala University, Sweden).

We wish to express our gratitude to the authors who submitted papers, the speakers, the invited speakers, and the program committee members for the work they have generously done. We would like to thank the ETAPS general chair Jan Kofroň and the ETAPS workshops chairs Milan Ceska (Brno University of Technology, Czech Republic) and Ryan Culpepper (Czech Technical University, Czech Republic) for their help in making the event possible. Our thanks also go to Artem Pelenitsyn (Northeastern University, United States), who assisted us in managing the workshop websites. Finally, we warmly thank the EasyChair organization for supporting the various tasks related to the selection of contributions, and EPTCS and arXiv for hosting the proceedings.

July 2019,
Emanuele De Angelis, Grigory Fedyukovich, Nikos Tzevelekos and Mattias Ulbrich.

PERR 2019 Program Chairs

PERR 2019 Program Committee

HCVS 2019 Program Chairs

HCVS 2019 Program Committee

Foundations and Techniques for the Static Analysis of Ethereum Smart Contracts

Matteo Maffei (Technische Universität Wien, Austria)

The recent growth of the blockchain technology market puts its main cryptocurrencies in the spotlight. Among them, Ethereum stands out due to its virtual machine (EVM) supporting smart contracts, i.e., distributed programs that control the flow of the digital currency Ether. Being written in a Turing complete language, Ethereum smart contracts allow for expressing a broad spectrum of financial applications. The price for this expressiveness, however, is a significant semantic complexity, which increases the risk of programming errors. Recent attacks exploiting bugs in smart contract implementations call for the design of formal verification techniques for smart contracts. This, however, requires rigorous semantic foundations, a formal characterization of the expected security properties, and dedicated abstraction techniques tailored to the specific EVM semantics.

This tutorial will overview the state-of-the-art in smart contract verification, covering formal semantics, security definitions, and verification tools. We will then focus on EtherTrust, a framework for the static analysis of Ethereum smart contracts that we recently introduced, which includes the first complete small-step semantics of EVM bytecode, the first formal characterization of a large class of security properties for smart contracts, and the first static analysis for EVM bytecode, based on Horn clause resolution, that comes with a proof of soundness.

Concolic Testing of Higher-order Functional Languages

Konstantinos (Kostis) Sagonas (Uppsala University, Sweden)

Concolic testing is a fully automatic software testing technique that combines concrete and symbolic execution of a program unit in an attempt to explore all the code paths in this unit, or at least all its paths up to a depth bound. In this talk, we will describe how concolic testing can be applied to high-level languages in general and to functional programming languages in particular. For such languages, the concolic engine needs to efficiently support pattern matching, recursive data types such as lists, recursion and higher-order functions. We will also briefly talk about the engineering effort that concolic testing tools require, in particular in interfacing with SMT solvers.

Time permitting, the talk will also include a demo of CutEr (as in "more cute"), a concolic testing tool for Erlang and some of the bugs in the implementation of Erlang/OTP that CutEr has discovered.

Challenges in the Specialisation of Smart Horn Clause Interpreters

John P. Gallagher (Roskilde University, Denmark; IMDEA Software Institute, Spain)

Automatic program specialisation has re-emerged in the last few years as a key technique of meta-programming. Notable recent applications using specialisation include collapsing towers of interpreters in the system stack [1], high-performance implementations of widely-used dynamic languages [20], SQL query compilation [19] and code obfuscation [9]. Program specialisation has fundamental connections to language compilation [6], optimisation [10], and recursive function theory (the S-m-n theorem) [13]. Indeed, in [19], the authors state that their technique is based on "an old but under-appreciated idea known as Futamura projections, which fundamentally link interpreters and compilers".

This abstract focusses on the challenges of achieving the first Futamura projection [6], which refers to the specialisation of an interpreter with respect to a program with unknown input data. Consider an interpreter $Int$ as a function $Int: Prog \times I \rightarrow O$, where $I$ and $O$ are the input and output domains respectively for programs in $Prog$. Specialising $Int$ with respect to a program $p \in Prog$ yields a version of $Int$, called $Int_p$, that interprets $p$ but no other program. This is denoted as a function $Int_p : I \rightarrow O$. In short, $Int_p$ intertwines $Int$ with $p$ and has the same signature as $p$ (meta-language encoding is ignored here). $Int_p$ can be regarded as a "compilation" of $p$ into the language of the interpreter (which could be the same language). The semi-formal concept of "Jones-optimality" [11] is sometimes used to describe the ideal specialisation in which the "meta-layer" of interpretation of $p$ is completely removed in $Int_p$.
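The relationship between $Int$, $p$ and $Int_p$ can be sketched in a few lines of Python (an illustration only, not from the abstract; the toy interpreter and its object program are invented for this sketch). Partial application gives a trivially correct $Int_p$, while a genuine specialiser would instead unfold the interpreter's dispatch loop over the known $p$, producing the residual program on the last line:

```python
from functools import partial

def interp(prog, x):
    """A tiny interpreter: prog is a list of (op, arg) pairs applied to x."""
    for op, arg in prog:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

# A fixed object program p: compute 3*x + 1.
p = [("mul", 3), ("add", 1)]

# Trivial "specialisation" by partial application: interp_p : I -> O.
# The interpretive overhead (the dispatch loop) is still present.
interp_p = partial(interp, p)

# What a Jones-optimal specialisation of interp w.r.t. p should produce:
# the meta-layer is gone, leaving only the compiled object program.
residual = lambda x: 3 * x + 1

print(interp_p(10))   # 31
print(residual(10))   # 31
```

The two functions agree on every input; the point of the first Futamura projection is to obtain the second form mechanically from the first.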

Smart interpreters. Jones-optimality applies in practice to "standard" interpreters that are driven by the syntactic structure of the object program. The recent works cited above could all be considered to be of this kind. However, the most far-reaching potential of interpreter specialisation for program transformation arises from the specialisation of "smart" interpreters, which can be arbitrarily complex and incorporate run-time operations such as memoisation, constraint propagation and dynamic task management. Such operations are not guided by the structure of the input program, but rather depend on dynamic structures such as memo-tables, task pools, and constraint stores, which are hard to "compile away".

The overhead of smart interpreters can be prohibitive and limit their use to problems where the overhead is offset by the savings. Thus there is a great incentive to remove the overhead of smart interpreters with respect to a given object program. This suggestion is well known [7][18][12]; however, the complexity of smart interpreters has so far presented a barrier to realising its potential.

For example, super-linear speed-ups are known to be achievable by specialisation of a memoising interpreter, as pointed out by Jones [12]; a classical example of a transformation achievable in principle is the generation of a linear-time implementation of the Fibonacci function from its recursive functional definition, whose straightforward execution requires exponential time. However, the difficulties are considerable and have so far (to our knowledge) defied efforts to specialise a memoising interpreter effectively; the problems are discussed briefly below.

Despite the difficulties, a foundation for a breakthrough in interpreter specialisation is available, provided by the spectacular advances in tools and techniques for reasoning about and transforming programs. New abstract interpretation techniques and powerful back-end constraint solvers [14][5][2] have played a crucial role in the recent successes of scalable program analysis and verification tools.

The following three challenges outline the promises, difficulties and potential solutions for specialisation of smart interpreters. The common theme is that their solution requires a combination of techniques from abstract interpretation, verification and constraint solving. Constrained Horn clauses (CHCs) have been shown to be an effective framework for exploiting these advances for software verification [3]. It is argued that CHCs offer similar advantages in the quest to realise the potential of smart interpreter specialisation. We propose to implement the interpreters sketched below as CHCs, and specialise them using a polyvariant specialisation technique as recently reported [8], which makes use of program analysis techniques to derive invariant properties that guide the specialisation. Recent work by Nys and De Schreye [15] on "compiling control" shares some of the goals of specialising smart interpreters.

Challenge 1: A memoising interpreter. A memoising interpreter maintains a table during execution, recording the function calls that arise and their results when they are returned. Each time a function is called, the memo-table is consulted, and so repeated executions of the same function call are avoided. Memoising can result in exponential speed-up.

Applying the first Futamura projection, one would expect at least that the memo-table becomes "compiled in" to the interpreted program as a data structure, and every function call is replaced by a conditional statement that checks whether the call's result is already available in the table. To achieve effective specialisation (say for the Fibonacci example), more is needed. For some function calls in the program, it can be detected statically that the memo-table definitely contains the result, while for others, the memo-table lookup will definitely fail. Furthermore, the memo-table needs only to contain the calls that might possibly be needed later, otherwise it will contain many useless entries.
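The Fibonacci example can be made concrete with a hedged Python sketch (illustrative, not from the abstract): the straightforward recursive definition, a memoising evaluation that consults a table before each call, and the residual one hopes specialisation would produce, in which the memo-table has been "compiled away" into the two table entries that static analysis shows are ever consulted again:

```python
def fib_naive(n):
    # Straightforward recursive definition: exponential time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, table=None):
    # Memoising evaluation: consult the table before every call,
    # record the result after it returns. Linear number of calls.
    if table is None:
        table = {}
    if n in table:
        return table[n]
    r = n if n < 2 else fib_memo(n - 1, table) + fib_memo(n - 2, table)
    table[n] = r
    return r

def fib_residual(n):
    # Hoped-for result of specialising the memoising interpreter:
    # only the two live table entries survive, as local variables.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(20) == fib_memo(20) == fib_residual(20) == 6765
```

The gap between `fib_memo` and `fib_residual` is exactly the static reasoning described above: which lookups definitely hit, which definitely miss, and which entries remain live.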

Sufficiently precise abstractions of the memo-table during specialisation are needed to achieve this. Recent results on static cache analysis (for example [4]) provide inspiration for possible approaches. Caches are essentially memo-tables for memory reads, and one wants to know in a source program which read operations will definitely cause a cache hit and which ones will miss, and furthermore one wants to ensure that the cache contains only live values that might be hit.

Challenge 2: A constraint propagating interpreter. Many problems are conveniently stated as "generate-and-test": the space of potential solutions is given by a generate function, while the test function checks whether the candidate solutions satisfy the required constraints. Naive interpretation of generate-and-test as a solution procedure is usually impractical since the solution space is too large to be enumerated. A constraint propagating interpreter explores the generate and test functions in parallel, applying principles of constraint programming [17] to limit search, propagate constraints, identify inconsistencies early and discover invariants. The specialisation of a constraint propagating interpreter is a significant challenge, inviting cross-fertilisation of constraint programming with verification and program analysis; Pelleau et al. [16] point to many connections such as the use of over-approximations obtained by abstract interpretation to reduce search.
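The contrast can be illustrated with a small Python sketch (a hedged illustration on an invented subset-sum instance, not from the abstract): the naive procedure enumerates every subset and then tests it, while the propagating version interleaves generation with a simple bound check, pruning any partial candidate whose running sum already exceeds the target (sound here because all items are non-negative):

```python
from itertools import combinations

ITEMS, TARGET = [3, 34, 4, 12, 5, 2], 9

def generate_and_test():
    # Generate all 2^n subsets, then test each one against the constraint.
    hits = []
    for r in range(len(ITEMS) + 1):
        for combo in combinations(ITEMS, r):
            if sum(combo) == TARGET:
                hits.append(combo)
    return hits

def propagate(items=tuple(sorted(ITEMS)), partial_sum=0, chosen=()):
    # Interleave generation and testing: prune a branch as soon as the
    # running sum exceeds TARGET (valid since all items are non-negative).
    hits = [chosen] if partial_sum == TARGET else []
    for i, v in enumerate(items):
        if partial_sum + v <= TARGET:  # the propagation / pruning step
            hits += propagate(items[i + 1:], partial_sum + v, chosen + (v,))
    return hits

# Both procedures find the same solutions; only the search effort differs.
assert ({frozenset(s) for s in generate_and_test()}
        == {frozenset(s) for s in propagate()})
```

A real constraint propagating interpreter applies far richer propagation than this single bound check, but the sketch shows what a specialiser must "compile away": the dynamic interleaving of generation and constraint checking.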

Challenge 3: A parallelising interpreter. The first Futamura projection shows that a sequential program can be "compiled" to a parallel program, where $Int$ is written in a parallel language, and the object program $p$ is written in a sequential language. The first Futamura projection yields a parallel program $Int_p$ and requires a specialiser for a parallel programming language; this topic has received a lot of attention but presents a number of problems compared to specialisation of sequential programs, due to the need to preserve synchronisation between parallel processes. The interpreter $Int$ manages a pool of parallel processes and computing resources; the purpose of the specialisation is to perform as much "compile-time" scheduling as possible. Abstractions of the process pool are needed during specialisation, a challenging problem requiring input from the state of the art in timing analysis and cache analysis.


  1. N. Amin & T. Rompf (2018): Collapsing towers of interpreters. PACMPL 2(POPL), pp. 52:1-52:33, doi:10.1145/3158140.
  2. R. Bagnara, P. M. Hill & E. Zaffanella (2008): The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems. Science of Computer Programming 72(1-2), pp. 3-21, doi:10.1016/j.scico.2007.08.001.
  3. N. Bjørner, A. Gurfinkel, K. L. McMillan & A. Rybalchenko (2015): Horn Clause Solvers for Program Verification. In L. D. Beklemishev, A. Blass, N. Dershowitz, B. Finkbeiner & W. Schulte, editors: Fields of Logic and Computation II, LNCS 9300, Springer, pp. 24-51, doi:10.1007/978-3-319-23534-9_2.
  4. G. Doychev, B. Köpf, L. Mauborgne & J. Reineke (2015): CacheAudit: A Tool for the Static Analysis of Cache Side Channels. ACM Trans. Inf. Syst. Secur. 18(1), pp. 4:1-4:32, doi:10.1145/2756550.
  5. B. Dutertre & L. M. de Moura (2006): A Fast Linear-Arithmetic Solver for DPLL(T). In T. Ball & R. B. Jones, editors: Computer Aided Verification, CAV 2006, LNCS 4144, Springer, pp. 81-94, doi:10.1007/11817963_11.
  6. Y. Futamura (1971): Partial Evaluation of Computation Process - An Approach to a Compiler-Compiler. Systems, Computers, Controls 2(5), pp. 45-50.
  7. J. P. Gallagher (1986): Transforming Logic Programs by Specialising Interpreters. In: Proceedings of the 7th European Conference on Artificial Intelligence (ECAI-86), Brighton, pp. 109-122.
  8. J. P. Gallagher (2019): Polyvariant Program Specialisation with Property-based Abstraction. In: VPT 2019, Genoa, Italy (accepted for EPTCS proceedings).
  9. R. Giacobazzi, N. D. Jones & I. Mastroeni (2012): Obfuscation by partial evaluation of distorted interpreters. In: PEPM, ACM, pp. 63-72, doi:10.1145/2103746.2103761.
  10. N. D. Jones, C. Gomard & P. Sestoft (1993): Partial Evaluation and Automatic Program Generation. Prentice Hall.
  11. N. D. Jones (1990): Partial Evaluation, Self-Application and Types. In: ICALP, Lecture Notes in Computer Science 443, Springer, pp. 639-659, doi:10.1007/BFb0032064.
  12. N. D. Jones (2004): Transformation by interpreter specialisation. Sci. Comput. Program. 52, pp. 307-339, doi:10.1016/j.scico.2004.03.010.
  13. S. C. Kleene (1952): Introduction to Metamathematics. van Nostrand.
  14. L. M. de Moura & N. Bjørner (2008): Z3: An Efficient SMT Solver. In C. R. Ramakrishnan & J. Rehof, editors: TACAS 2008, LNCS 4963, Springer, pp. 337-340, doi:10.1007/978-3-540-78800-3_24.
  15. V. Nys & D. De Schreye (2018): Compiling Control as Offline Partial Deduction. CoRR abs/1808.05360.
  16. M. Pelleau, A. Miné, C. Truchet & F. Benhamou (2013): A Constraint Solver Based on Abstract Domains. In R. Giacobazzi, J. Berdine & I. Mastroeni, editors: VMCAI 2013, Lecture Notes in Computer Science 7737, Springer, pp. 434-454, doi:10.1007/978-3-642-35873-9_26.
  17. F. Rossi, P. van Beek & T. Walsh, editors (2006): Handbook of Constraint Programming. Foundations of Artificial Intelligence 2, Elsevier.
  18. S. Safra & E. Shapiro (1986): Meta Interpreters For Real (Invited Paper). In: IFIP Congress, pp. 271-278.
  19. R. Y. Tahboub, G. M. Essertel & T. Rompf (2018): How to Architect a Query Compiler, Revisited. In G. Das, C. M. Jermaine & P. A. Bernstein, editors: Proceedings of SIGMOD 2018, ACM, pp. 307-322, doi:10.1145/3183713.3196893.
  20. T. Würthinger, C. Wimmer, C. Humer, A. Wöß, L. Stadler, C. Seaton, G. Duboscq, D. Simon & M. Grimmer (2017): Practical partial evaluation for high-performance dynamic language runtimes. In A. Cohen & M. T. Vechev, editors: Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2017, ACM, pp. 662-676, doi:10.1145/3062341.3062381.