Published: 23rd August 2017
DOI: 10.4204/EPTCS.253
ISSN: 2075-2180

EPTCS 253

Proceedings Fifth International Workshop on
Verification and Program Transformation
Uppsala, Sweden, 29th April 2017

Edited by: Alexei Lisitsa, Andrei P. Nemytykh and Maurizio Proietti

Table of Contents

Preface
Alexei Lisitsa, Andrei P. Nemytykh and Maurizio Proietti (page 1)

Invited Talk: Polynomial Analysis Algorithms for Free-Choice Workflow Nets
Javier Esparza (page 3)

Invited Talk: Energy Consumption Analysis and Verification by Transformation into Horn Clauses and Abstract Interpretation
Manuel V. Hermenegildo, Maximiliano Klemen, Umer Liqat and Pedro López-García (page 4)

Invited Talk: Software Model Checking of Linux Device Drivers
Alexey Khoroshilov (page 7)

Transforming Coroutining Logic Programs into Equivalent CHR Programs
Vincent Nys and Danny De Schreye (page 9)

Generating Loop Invariants for Program Verification by Transformation
G. W. Hamilton (page 36)

Verification of Programs via Intermediate Interpretation
Alexei P. Lisitsa and Andrei P. Nemytykh (page 54)

Towards Evaluating Size Reduction Techniques for Software Model Checking
Gyula Sallai, Ákos Hajdu, Tamás Tóth and Zoltán Micskei (page 75)

Trustworthy Refactoring via Decomposition and Schemes: A Complex Case Study
Dániel Horpácsi, Judit Kőszegi and Zoltán Horváth (page 92)

From Concurrent Programs to Simulating Sequential Programs: Correctness of a Transformation
Allan Blanchard, Frédéric Loulergue and Nikolai Kosmatov (page 109)

Preface

This volume contains the proceedings of the Fifth International Workshop on Verification and Program Transformation (VPT 2017). The workshop took place in Uppsala, Sweden, on April 29th, 2017, affiliated with ETAPS 2017. Previous editions of the workshop were held in Saint Petersburg (Russia, affiliated with CAV 2013), Vienna (Austria, affiliated with CAV 2014), London (UK, affiliated with ETAPS 2015), and Eindhoven (The Netherlands, affiliated with ETAPS 2016).

The aim of the VPT workshop series is to provide a forum where people from the areas of program transformation and program verification can fruitfully exchange ideas and gain a deeper understanding of the interactions between those two fields. Research papers recently published in these fields show that the interactions are very beneficial and, indeed, go in both directions. In one direction, methods and tools developed in the field of program transformation, such as partial deduction, partial evaluation, fold/unfold transformations, and supercompilation, have all been applied successfully to the verification of software systems, and in particular to the verification of infinite-state and parameterized systems. In the other direction, methods developed in program verification, such as model checking, abstract interpretation, SAT and SMT solving, and automated theorem proving, have been used to enhance program transformation techniques, thereby making these techniques more powerful and useful in practice.

The program committee received eight submissions and selected seven of them for presentation at the workshop. Each paper was reviewed by three referees, and the selection was based on originality, quality, and relevance to the topics of the call for papers. Six contributions are published as full papers in this volume. The seventh contributed presentation was given by Nikolay Shilov: "Etude on Recursion Elimination by Program Manipulation and Problem Analysis". Additionally, there were three invited talks: Javier Esparza on "Polynomial Analysis Algorithms for Free-Choice Workflow Nets", Manuel V. Hermenegildo on "Energy Consumption Analysis and Verification by Transformation into Horn Clauses and Abstract Interpretation" (joint work with Maximiliano Klemen, Umer Liqat and Pedro López-García), and Alexey Khoroshilov on "Software Model Checking of Linux Device Drivers". Abstracts of the invited talks are included in this volume.

We wish to express our gratitude to the authors who submitted papers, the speakers, the invited speakers, the program committee members, and the additional reviewers for the work they have generously done. We would like to thank the ETAPS workshop chairs Konstantinos Sagonas and Mohamed Faouzi Atig (Uppsala University, Sweden) for their help in making the event possible. Our thanks go also to Guglielmo De Angelis (IASI-CNR, Rome, Italy), who assisted us in many organizational matters. Finally, we warmly thank the EasyChair organization for supporting the various tasks related to the selection of the contributions, and also EPTCS and arXiv for hosting the proceedings. The workshop received financial support from:


August 2017,
Alexei Lisitsa, Andrei Nemytykh, and Maurizio Proietti.

Organizing Committee

Guglielmo De Angelis, IASI-CNR, Rome, Italy
Alexei Lisitsa, The University of Liverpool, UK
Andrei P. Nemytykh, Program Systems Institute of RAS, Russia
Maurizio Proietti, IASI-CNR, Rome, Italy

Programme Committee

Emanuele De Angelis, University of Chieti-Pescara, Italy
John Gallagher, Roskilde University, Denmark
Miguel Gómez-Zamalloa, Complutense University of Madrid, Spain
Nikos Gorogiannis, Middlesex University, UK
Geoff W. Hamilton, Dublin City University, Republic of Ireland
Alexei Lisitsa, The University of Liverpool, UK (co-chair)
David Monniaux, VERIMAG, CNRS - University of Grenoble, France
Jorge A. Navas, SRI International, USA
Andrei P. Nemytykh, Program Systems Institute of RAS, Russia (co-chair)
Maurizio Proietti, IASI-CNR, Rome, Italy (co-chair)
Philipp Rümmer, Uppsala University, Sweden
Hirohisa Seki, Nagoya Institute of Technology, Japan
Harald Søndergaard, The University of Melbourne, Australia
Morten H. Sørensen, Formalit, Denmark

External Reviewers

Alberto Pettorossi
Clara Segura

Steering Committee

Geoff W. Hamilton, Dublin City University, Republic of Ireland
Alexei Lisitsa, The University of Liverpool, UK
Andrei P. Nemytykh, Program Systems Institute of RAS, Russia
Alberto Pettorossi, Università di Roma Tor Vergata, Italy


Polynomial Analysis Algorithms for Free-Choice Workflow Nets

Javier Esparza (Technische Universität München, Germany)

Abstract

Workflow Petri nets are a popular formalism for modeling and analyzing business processes. A large number of models belong to the special class of free-choice workflow Petri nets. The state space of this class of workflows can grow exponentially in the size of the net, and so analysis algorithms based on its exploration have worst-case exponential runtime. We show that many analysis problems for this model can be solved in polynomial time by means of reduction algorithms that exhaustively apply syntactic reduction rules.
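
As a purely illustrative sketch (our own, not the talk's actual rule set), the core of such a reduction algorithm is an exhaustive-application loop: each rule either fires, strictly shrinking the net, or is inapplicable, so the loop terminates after polynomially many iterations provided each applicability check is polynomial. The type Net and the rules below are hypothetical placeholders.

    /* Sketch of a reduction algorithm: apply syntactic reduction rules until
     * none is applicable; Net and the rules are illustrative placeholders. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Net Net;                    /* abstract workflow net       */
    typedef bool (*ReductionRule)(Net *net);   /* fires and shrinks, or not   */

    void reduce(Net *net, const ReductionRule *rules, size_t n_rules)
    {
        bool changed;
        do {                                   /* exhaustive application      */
            changed = false;
            for (size_t i = 0; i < n_rules; i++)
                if (rules[i](net))             /* rule fired: net got smaller */
                    changed = true;
        } while (changed);
        /* the analysis question is then decided on the fully reduced net */
    }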


Energy Consumption Analysis and Verification by Transformation into Horn Clauses and Abstract Interpretation

Manuel V. Hermenegildo (IMDEA Software Institute and UPM, Spain)
Maximiliano Klemen (IMDEA Software Institute and UPM, Spain)
Umer Liqat (IMDEA Software Institute and UPM, Spain)
Pedro López-García (IMDEA Software Institute and CSIC, Spain)

Extended Abstract*

The static estimation of the energy consumed by program executions has applications in program optimization and verification, and is instrumental in energy-aware software development. We describe our approach for estimating such energy consumption statically (i.e., at compile time, without running the program) in the form of functions on the input data sizes of procedures (and possibly other hardware-dependent parameters), and for using such functions to verify and find errors with respect to a rich class of energy consumption specifications for programs. We also present the implementation of this approach within the CiaoPP system.

Determining program energy consumption requires the analysis of low-level program representations, since sufficiently accurate energy models are only available at the instruction set architecture (ISA) level or, with some precision loss, at intermediate levels such as LLVM IR. We have developed an approach to the analysis and verification of energy consumption [14, 8, 10, 7, 6] that is based on a transformation of ISA- or LLVM-level programs into Horn clauses [13, 4]. The Horn clauses encode the semantics of these low-level programs at different levels of abstraction, as determined by different abstract domains. Such abstract domains approximate properties that are instrumental for energy analysis, such as sized types, determinacy, or non-failure, as well as the energy consumption itself [17]. The latter contrasts with previous approaches to cost analysis, which are not based directly on abstract interpretation.
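
As a purely illustrative sketch (not taken from the cited papers), a low-level loop that accumulates its argument might be encoded as the constraint Horn clauses

\[
\begin{array}{l}
\mathit{entry}(X, R) \leftarrow \mathit{blk}_0(X, 0, R) \\
\mathit{blk}_0(X, A, R) \leftarrow X > 0 \wedge A' = A + X \wedge X' = X - 1 \wedge \mathit{blk}_0(X', A', R) \\
\mathit{blk}_0(X, A, R) \leftarrow X \le 0 \wedge R = A
\end{array}
\]

where each clause corresponds to one basic block of the compiled code, and the abstract domain for energy attaches to each clause the cost of the instructions that the clause represents, as given by the instruction-level energy model.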

Given such a set of Horn clauses, the objective of our analysis is to compute their abstract minimal model for each abstract domain or combination of domains. For this purpose we use the PLAI analyzer of the CiaoPP system [5]. PLAI computes the abstract minimal model of a set of Horn clauses using a top-down, memo-table-based fixpoint algorithm, which can be seen as an extension of a highly optimized SLDT resolution engine in which the abstract domains take the traditional role of constraint domains in the logic. This extended abstract tabling algorithm includes optimizations for fixpoint acceleration, such as dependency tracking and dynamic strongly connected component detection. It is incremental and, in contrast to bottom-up algorithms (also available in CiaoPP), multivariant (context sensitive). The result of the abstract model computation, i.e., the analysis, is the set of memo tables that store all the abstract call-success pairs occurring in the program (and their dependencies). In the case of energy analysis this includes, for each procedure, and for each possible abstract call state and path, a function giving the energy consumed by that procedure and class of calls in terms of the input data sizes.

This work builds on our earlier work on cost analysis, initially developed for granularity control during automatic program parallelization [1, 11], which was capable of inferring both upper- and lower-bound functions [3, 2] for a wide class of user-definable resources [16, 15]. This configurability of the system is instrumental in the energy application for representing low-level energy models. As mentioned before, this work was recently extended to be fully based on multivariant abstract interpretation using sized shapes (sized types) [17], and to perform static profiling [12].

For verification and (performance) error detection, the inferred abstract models of energy consumption are compared to the energy specifications [9, 5]. This can optionally be done during the analysis or after it. In our approach specifications can include both lower and upper bounds on energy usage, expressing intervals within which the energy usage is to be certified to lie. The bounds of the intervals can be given in general as functions on input data sizes. Our verification system can prove whether such energy usage specifications are met or not. It can also infer the particular conditions under which the specifications hold. To this end, these conditions are also expressed as intervals of functions of input data sizes, such that a given specification can be proved for some intervals but disproved for others. The specifications themselves can also include preconditions expressing intervals for input data sizes.
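
In notation of our own choosing (the paper's concrete syntax differs), proving that a specification interval is met amounts to establishing

\[
\forall \bar{n}.\; \mathit{Pre}(\bar{n}) \implies \mathit{Spec}_{\mathit{lb}}(\bar{n}) \le E_{\mathit{lb}}(\bar{n}) \,\wedge\, E_{\mathit{ub}}(\bar{n}) \le \mathit{Spec}_{\mathit{ub}}(\bar{n}),
\]

where \(\bar{n}\) ranges over the input data sizes, \(\mathit{Pre}\) is the precondition of the specification, \(E_{\mathit{lb}}, E_{\mathit{ub}}\) are the inferred lower and upper energy bounds, and \(\mathit{Spec}_{\mathit{lb}}, \mathit{Spec}_{\mathit{ub}}\) are the specified ones. When the implication can be established only on part of the precondition, that part yields the inferred conditions under which the specification holds; on the part where \(E_{\mathit{lb}}\) exceeds \(\mathit{Spec}_{\mathit{ub}}\) (or \(E_{\mathit{ub}}\) falls below \(\mathit{Spec}_{\mathit{lb}}\)) the specification is disproved.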

We also report on a prototype implementation of our approach within the CiaoPP system for the XC language and XS1-L architecture, with an emphasis on embedded programs. We provide experimental results from energy analysis of such programs at the instruction set architecture level and compare them to the actual energy consumption measured on the hardware. We also illustrate with an example how embedded software developers can use this tool for determining values for program parameters that ensure meeting a given energy budget while minimizing losses in quality of service.

References

  1. S. K. Debray, N.-W. Lin & M. Hermenegildo (1990): Task Granularity Analysis in Logic Programs. In: Proc. 1990 ACM Conf. on Programming Language Design and Implementation (PLDI), ACM Press, pp. 174–188, doi:10.1145/93548.93564.
  2. S. K. Debray, P. López-García, M. Hermenegildo & N.-W. Lin (1994): Estimating the Computational Cost of Logic Programs. In: SAS'94, LNCS 864, Springer-Verlag, pp. 255–265, doi:10.1007/3-540-58485-4_45.
  3. S. K. Debray, P. López-García, M. Hermenegildo & N.-W. Lin (1997): Lower Bound Cost Estimation for Logic Programs. In: ILPS'97, MIT Press, pp. 291–305.
  4. K. S. Henriksen & J. P. Gallagher (2006): Abstract Interpretation of PIC Programs through Logic Programming. In: SCAM'06: Proceedings of the Sixth IEEE International Workshop on Source Code Analysis and Manipulation, IEEE Computer Society, pp. 184–196, doi:10.1109/SCAM.2006.1.
  5. M. Hermenegildo, G. Puebla, F. Bueno & P. López-García (2005): Integrated Program Debugging, Verification, and Optimization Using Abstract Interpretation (and The Ciao System Preprocessor). Science of Comp. Progr. 58(1–2), pp. 115–140, doi:10.1016/j.scico.2005.02.006.
  6. U. Liqat, Z. Banković, P. López-García & M. Hermenegildo (2016): Inferring Energy Bounds Statically by Evolutionary Analysis of Basic Blocks. In: (HIP3ES'16), IEEE Computer Society. arXiv:1601.02800.
  7. U. Liqat, K. Georgiou, P. López-García, M. Hermenegildo, J. P. Gallagher & K. Eder (2016): Inferring Parametric Energy Consumption Functions at Different Software Levels: ISA vs. LLVM IR. In: Proc. of FOPARA-2015, LNCS 9964, Springer, pp. 81–100, doi:10.1007/978-3-319-46559-3_5.
  8. U. Liqat, S. Kerrison, A. Serrano, K. Georgiou, P. López-García, N. Grech, M. Hermenegildo & K. Eder (2014): Energy Consumption Analysis of Programs based on XMOS ISA-level Models. In: Proceedings of LOPSTR'13, LNCS 8901, Springer, pp. 72–90, doi:10.1007/978-3-319-14125-1_5.
  9. P. López-García, L. Darmawan, F. Bueno & M. Hermenegildo (2012): Interval-Based Resource Usage Verification: Formalization and Prototype. In: Proc. of FOPARA-2011, LNCS 7177, Springer-Verlag, pp. 54–71, doi:10.1007/978-3-642-32495-6_4.
  10. P. López-García, R. Haemmerlé, M. Klemen, U. Liqat & M. Hermenegildo (2015): Towards Energy Consumption Verification via Static Analysis. In: WS on High Perf. Energy Efficient Embedded Sys. (HIP3ES). arXiv:1512.09369.
  11. P. López-García, M. Hermenegildo & S. K. Debray (1996): A Methodology for Granularity Based Control of Parallelism in Logic Programs. J. of Symbolic Computation, Special Issue on Parallel Symbolic Computation 21(4–6), pp. 715–734, doi:10.1006/jsco.1996.0038.
  12. P. López-García, M. Klemen, U. Liqat & M. Hermenegildo (2016): A General Framework for Static Profiling of Parametric Resource Usage. Theory and Practice of Logic Programming 16(5–6 (32nd International Conference on Logic Programming)), pp. 849–865, doi:10.1017/S1471068416000442.
  13. M. Méndez-Lojo, J. Navas & M. Hermenegildo (2007): A Flexible (C)LP-Based Approach to the Analysis of Object-Oriented Programs. In: Proc. of LOPSTR-2007, LNCS 4915, Springer-Verlag, pp. 154–168, doi:10.1007/978-3-540-78769-3_11.
  14. J. Navas, M. Méndez-Lojo & M. Hermenegildo (2008): Safe Upper-bounds Inference of Energy Consumption for Java Bytecode Applications. In: The Sixth NASA Langley Formal Methods Workshop (LFM 08), NASA, pp. 29–32.
  15. J. Navas, M. Méndez-Lojo & M. Hermenegildo (2009): User-Definable Resource Usage Bounds Analysis for Java Bytecode. In: BYTECODE'09, ENTCS 253, Elsevier, pp. 65–82, doi:10.1016/j.entcs.2009.11.015.
  16. J. Navas, E. Mera, P. López-García & M. Hermenegildo (2007): User-Definable Resource Bounds Analysis for Logic Programs. In: Proc. of ICLP'07, LNCS 4670, Springer, pp. 348–363, doi:10.1007/978-3-540-74610-2_24.
  17. A. Serrano, P. López-García & M. Hermenegildo (2014): Resource Usage Analysis of Logic Programs via Abstract Interpretation Using Sized Types. Theory and Practice of Logic Programming 14(4–5 (30th International Conference on Logic Programming)), pp. 739–754, doi:10.1017/S147106841400057X.



* The research leading to these results has received funding from the European Union 7th Framework Programme under grant agreement no 318337, ENTRA - Whole-Systems Energy Transparency, Spanish MINECO TIN2015-67522-C3-1-R TRACES project, and the Madrid M141047003 N-GREENS program.


Software Model Checking of Linux Device Drivers

Alexey Khoroshilov (Linux Verification Center, ISPRAS, Moscow, Russia)

Extended Abstract

The talk presents the experience of the Linux Verification Center of ISPRAS in the verification of Linux device drivers with software model checking tools [5]. Software model checkers solve reachability problems, i.e., they prove that error locations in a program cannot be reached by any execution starting from an entry point. Linux device drivers have neither a traditional entry point nor labeled error locations. We present our approach to bridging this gap.

The key element of the approach is generating a model environment [6] that tries to reproduce the same scenarios of interaction with the driver as the actual kernel, while remaining simple enough to be analyzable by existing tools. Injection of code containing error locations for the target kinds of bugs is implemented with the C Instrumentation Framework [2], a source-to-source transformation tool built on the concepts of aspect-oriented programming. The main software model checking tools used in the project are CPAchecker [4] and BLAST [3, 7].
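
As a purely illustrative sketch (the actual environment models are generated automatically and are considerably richer), the idea can be pictured as a synthetic entry point that drives the driver's callbacks in kernel-like order, together with injected code that turns a target bug class (here, unbalanced locking) into a reachable error location. All names below (ldv_nondet_int, ERROR, the drv_* callbacks, the ldv_lock API) are hypothetical placeholders.

    /* Hypothetical environment model for a driver with probe/ioctl/remove
     * callbacks; the model checker looks for executions reaching ERROR(). */
    extern int  ldv_nondet_int(void);     /* nondeterministic value          */
    extern int  drv_probe(void);          /* driver callbacks under analysis */
    extern void drv_ioctl(int cmd);
    extern void drv_remove(void);

    static int lock_depth;                /* injected state for the check    */
    static void ERROR(void) { for (;;) ; }      /* injected error location   */

    void ldv_lock(void)   { lock_depth++; }     /* instrumented lock API     */
    void ldv_unlock(void) { if (lock_depth == 0) ERROR(); lock_depth--; }

    int main(void)                        /* synthetic entry point           */
    {
        if (drv_probe() != 0)
            return 0;                     /* probe failed: nothing to drive  */
        while (ldv_nondet_int())          /* arbitrarily many requests       */
            drv_ioctl(ldv_nondet_int());
        drv_remove();
        if (lock_depth != 0)
            ERROR();                      /* lock still held after shutdown  */
        return 0;
    }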

The current results are quite positive. So far, more than 280 bugs have been found, reported, and fixed in Linux device drivers by our project [1]. Still, many open problems remain to be solved. The existing approach to environment model generation works well when verification of the target property does not require precise information about memory management in the environment; otherwise, the current approach has proved not to be scalable enough. Software verification tools also require many improvements. For example, the ability to efficiently detect and avoid irrelevant code could be a big step forward. A more precise memory model and built-in support for abstract data types such as maps and sets can also help in some cases. In addition, there are kinds of bugs, such as memory safety violations and data races, that are not well suited to encoding and analysis as reachability problems; the development of specific analyzers for these kinds of bugs would help a lot.

Another aspect to be discussed is the lessons learned in applying academic tools to industrial tasks. We found it important that a software model checker support the full set of language features and be able to parse real code. Moreover, the tool should not fail outright when it does not support some feature: even if it cannot prove correctness, it may still find bugs. Another lesson is that for verification tools it is even more important to ignore thousands of irrelevant transitions than to handle the relevant ones efficiently. Also, engineering effort can yield significantly better results; we shared our experience of speeding up BLAST by a factor of 8 on small drivers and up to 30 on medium-sized drivers [7]. Our attempts to apply some tools that demonstrate good results in international software model checking competitions were stopped by the inability to analyze error traces without knowing implementation details. This leads to a lesson about usability requirements for verification tools.

Another interesting result is the importance of source code coverage, which shows which parts of the code were analyzed and, ideally, how much time was spent analyzing them. At first glance it may seem strange to use source code coverage in the context of static analysis, but it explicitly demonstrates what was analyzed, helping to detect problems with the environment model generator and other divergences between user expectations and the tools' perspective. Coverage is also useful when time and memory limits are hit, since it can suggest how to decompose the code under analysis, for example to achieve at least partial verification.

References

  1. A list of bugs in Linux kernel reported by LDV project. [online]. http://linuxtesting.org/results/ldv
  2. C Instrumentation Framework. [online]. https://forge.ispras.ru/projects/cif
  3. D. Beyer, T. Henzinger, R. Jhala & R. Majumdar (2007): The software model checker BLAST: Applications to software engineering. Int. J. Software Tool Tech. Tran. 9(5–6), pp. 505–525, doi:10.1007/s10009-007-0044-z.
  4. D. Beyer, T. A. Henzinger & G. Théoduloz (2007): Configurable software verification: concretizing the convergence of model checking and program analysis. In: Proceedings of CAV, LNCS 4590, pp. 504–518, doi:10.1007/978-3-540-73368-3_51.
  5. A. Khoroshilov, M. Mandrykin, V. Mutilin, E. Novikov, A. Petrenko & I. Zakharov (2015): Configurable toolset for static verification of operating systems kernel modules. Programming and Computer Software 41(1), pp. 49–64, doi:10.1134/S0361768815010065.
  6. A. V. Khoroshilov, V. S. Mutilin & I. S. Zakharov (2015): Pattern-based environment modeling for static verification of Linux kernel modules. Programming and Computer Software 41(3), pp. 183–195, doi:10.1134/S036176881503007X.
  7. P. Shved, M. Mandrykin & V. Mutilin (2012): Predicate Analysis with Blast 2.7. In: Proceedings of TACAS, LNCS 7214, Springer, pp. 525–527, doi:10.1007/978-3-642-28756-5_39.