Published: 10th February 2013
DOI: 10.4204/EPTCS.107
ISSN: 2075-2180


Proceedings 14th International Workshop on
Verification of Infinite-State Systems
Paris, France, 27th August 2012

Edited by: Mohamed Faouzi Atig and Ahmed Rezine

Mohamed Faouzi Atig and Ahmed Rezine
Invited Presentation: Formal Correctness Methods at the Million CPU Scale
Ganesh Gopalakrishnan
Invited Presentation: On the complexity of automatic structures
Peter Habermehl
Invited Tutorial: Petri Nets with Time and Cost
Parosh Aziz Abdulla and Richard Mayr
Invited Tutorial: Constraint-based reachability
Arnaud Gotlieb, Tristan Denmat and Nadjib Lazaar
On Consistency of Operational Transformation Approach
Aurel Randolph, Hanifa Boucheneb, Abdessamad Imine and Alejandro Quintero
Effective Marking Equivalence Checking in Systems with Dynamic Process Creation
Łukasz Fronc


This volume contains the proceedings of the 14th International Workshop on Verification of Infinite-State Systems (INFINITY 2012). The workshop was held in Paris on August 27th, 2012, as a satellite event of the 18th International Symposium on Formal Methods (FM 2012).

The aim of the INFINITY workshop is to provide a forum for researchers interested in the development of formal methods and algorithmic techniques for the analysis of systems with infinitely many states, and their application in automated verification of complex software and hardware systems.

Topics of interest include (but are not limited to):

This volume contains five contributions: two regular papers and three invited tutorials. Each regular paper was reviewed by three different reviewers. The program of INFINITY 2012 was further enriched by four invited tutorials given by Parosh Aziz Abdulla, Ganesh Gopalakrishnan, Arnaud Gotlieb and Peter Habermehl. Each invited tutorial was reviewed by the editors before acceptance. Moreover, the program committee of INFINITY 2012 selected the following two presentations:

We thank the authors for their contributions, the program committee members for reviewing and selecting the papers and the invited talks, and the FM 2012 Organizing Committee (Kamel Barkaoui, Béatrice Bérard, Nihal Pekergin, Laure Petrucci, and Tayssir Touili) for their support.

The final proceedings appear in the Electronic Proceedings in Theoretical Computer Science (EPTCS) series. We thank Rob van Glabbeek for his support and Rajeev Alur for providing this opportunity.

Linköping and Uppsala, January 2013

Mohamed Faouzi Atig and Ahmed Rezine

Organizers and Program Chairs

Mohamed Faouzi Atig   Uppsala University, Sweden
Ahmed Rezine          Linköping University, Sweden

Program Committee

Nathalie Bertrand   INRIA Rennes Bretagne Atlantique, France
Yu-Fang Chen        Academia Sinica, Taiwan
Pierre Ganty        IMDEA Software, Spain
Lukáš Holík         Uppsala University, Sweden
Axel Legay          IRISA/INRIA Rennes, France
Roland Meyer        University of Kaiserslautern, Germany
Ruzica Piskac       Max Planck Institute for Software Systems (MPI-SWS), Germany
Noam Rinetzky       Queen Mary University of London, United Kingdom
Martin Vechev       Swiss Federal Institute of Technology (ETH), Switzerland
Fang Yu             National Chengchi University, Taiwan

Formal Correctness Methods at the Million CPU Scale

Ganesh Gopalakrishnan (School of Computing, University of Utah, USA)


Million-core computing systems will be commonplace in the arena of high-performance computing. This paper explores ongoing work that examines the organization of a problem solving environment called Uintah [7, 2, 3, 11] developed by the University of Utah School of Computing and the SCI Institute. Uintah is well on its way to reaching (and soon exceeding) the million core mark while simulating real-world problems. This brief paper examines formal and semi-formal methods that are being investigated to sustain this growth—a role that these methods are expected to play in all future designs.


Given that computing is inexorably marching toward (and beyond) the million core regime, it is quite timely for the research community to explore formal methods that are likely to provide value at that scale. We have embarked on such a project wherein our ideas are being driven by (and are being tested on) a large-scale case study: the Uintah [7, 2, 3, 11] problem solving environment developed by the University of Utah School of Computing and the SCI Institute. Uintah is well on its way to reaching (and soon exceeding) the million core mark while simulating real-world problems. This brief paper examines formal and semi-formal methods that are being investigated to sustain this growth—a role that these methods are expected to play in all future designs.

The set of concurrency models that an HPC system designer grapples with can be roughly divided into three classes (see Figure 1). At the lowest level, concurrency within the computing devices themselves (CPUs, GPUs, other accelerators, etc.) is exploited for speed, through single instruction multiple data (SIMD) parallelism and through hardware features such as the transactional memory primitives being added to upcoming processors. At the next level, one finds several components including highly optimized Message Passing Interface (MPI, [9]) and thread libraries. A prominent trend in this layer is the use of non-blocking data structures to implement various critical components, for instance the MPI library, high-performance thread libraries, and lock-free data structures.

Still higher, one finds more classical "textbook" [10] MPI routines organized around various master/worker patterns. A clear trend in this area is the increasing prevalence of higher-level work-organization principles such as task graphs, the central organizational principle in Uintah. Task graphs relegate MPI to a much simpler role: handling communication, collective operations, and synchronization at the system level in a portable way, using fewer combinations of MPI primitives (and often only the simpler forms of MPI communication). The task-graph formalism itself defines the necessary causal dependencies, and the Uintah scheduler (Figure 2) orchestrates work in an explicit and tractable manner. This leads to a much more elegant and robust software architecture than the MPI programming practiced in earlier decades, where many applications suffered from (1) MPI calls strewn all over the user application, and (2) code for cross-cutting concerns (such as load balancing) mixed up with the mainline computing code.
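The essence of task-graph scheduling can be sketched in a few lines. The following is our own minimal illustration (Uintah's actual scheduler is far more elaborate, handling distributed memory, GPUs, and out-of-order execution): each task declares its prerequisites, and the scheduler fires a task only once all of its causal dependencies have completed, following a topological order of the graph.

```python
# Minimal task-graph execution sketch (hypothetical illustration, not
# Uintah code): tasks run as soon as all their dependencies have run.
from collections import deque

def run_task_graph(tasks, deps):
    """tasks: dict name -> callable; deps: dict name -> set of prerequisites."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    dependents = {t: set() for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].add(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()              # a real runtime would dispatch this to any free core
        order.append(t)
        for succ in dependents[t]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                ready.append(succ)
    if len(order) != len(tasks):
        raise RuntimeError("cycle in task graph")
    return order
```

Because only the dependency edges constrain ordering, independent tasks can run concurrently on any available core, which is what frees the application programmer from hand-placing MPI calls.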

Figure 1: Three Layers of Large-Scale HPC Systems

Project Milestones

The Uintah formal methods work is still in its infancy. The Infinity conference in 2012 provided an excellent opportunity to air our future plans. The important things we want to keep in mind going forward are now briefly summarized:

We plan to update the community on the ongoing status of the Uintah project on our webpage


The author thanks Martin Berzins, Zvonimir Rakamarić, Alan Humphrey, Diego Caminha B. de Oliveira and Brandon Gibson for their collaboration and feedback. He also gratefully acknowledges funding from NSF CCF 241849 (EAGER: Formal Reliability Enhancement Methods for Million Core Computational Frameworks).

Figure 2: Scheduler Used in the Uintah Problem Solving Environment


  1. Howard Barringer, Yliès Falcone, Bernd Finkbeiner, Klaus Havelund, Insup Lee, Gordon J. Pace, Grigore Rosu, Oleg Sokolsky & Nikolai Tillmann (2010): Runtime Verification - First International Conference, RV 2010, St. Julians, Malta, November 1-4, 2010. Proceedings. Lecture Notes in Computer Science 6418. Springer, doi:10.1007/978-3-642-16612-9.
  2. M. Berzins, J. Schmidt, Q. Meng & A. Humphrey (2013): Past, Present, and Future Scalability of the Uintah Software. In: Proceedings of the Blue Waters Workshop 2012, pp. (to appear).
  3. Martin Berzins, Justin Luitjens, Qingyu Meng, Todd Harman, Charles A. Wight & Joseph R. Peterson (2010): Uintah: a scalable framework for hazard analysis. In: Proceedings of the 2010 TeraGrid Conference, TG '10. ACM, New York, NY, USA, pp. 3:1–3:8, doi:10.1145/1838574.1838577.
  4. Patrice Godefroid (1997): Model Checking for Programming Languages using Verisoft. In: Peter Lee, Fritz Henglein & Neil D. Jones: POPL. ACM Press, pp. 174–186, doi:10.1145/263699.263717.
  5. Maurice Herlihy & Nir Shavit (2008): The Art of Multiprocessor Programming. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
  6. Tobias Hilbrich, Martin Schulz, Bronis R. de Supinski & Matthias S. Müller (2009): MUST: A Scalable Approach to Runtime Error Detection in MPI Programs. In: Matthias S. Müller, Michael M. Resch, Alexander Schulz & Wolfgang E. Nagel: Parallel Tools Workshop. Springer, pp. 53–66, doi:10.1007/978-3-642-11261-4_5.
  7. Alan Humphrey, Qingyu Meng, Martin Berzins & Todd Harman (2012): Radiation modeling using the Uintah heterogeneous CPU/GPU runtime system. In: Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment: Bridging from the eXtreme to the campus and beyond, XSEDE '12. ACM, New York, NY, USA, pp. 4:1–4:8, doi:10.1145/2335755.2335791.
  8. Pallavi Joshi, Mayur Naik, Chang-Seo Park & Koushik Sen (2009): CalFuzzer: An Extensible Active Testing Framework for Concurrent Programs. In: Ahmed Bouajjani & Oded Maler: CAV, Lecture Notes in Computer Science 5643. Springer, pp. 675–681, doi:10.1007/978-3-642-02658-4_54.
  9. Message Passing Interface Forum (2009): MPI: A Message-Passing Interface Standard, Version 2.2. Specification.
  10. Peter S. Pacheco (1997): Parallel programming with MPI. Morgan Kaufmann.
  11. Steven G. Parker (2006): A component-based architecture for parallel multi-physics PDE simulation. Future Generation Comp. Syst. 22(1-2), pp. 204–216, doi:10.1016/j.future.2005.04.001.
  12. Sarvani S. Vakkalanka, Ganesh Gopalakrishnan & Robert M. Kirby (2008): Dynamic Verification of MPI Programs with Reductions in Presence of Split Operations and Relaxed Orderings. In: Aarti Gupta & Sharad Malik: CAV, Lecture Notes in Computer Science 5123. Springer, pp. 66–79, doi:10.1007/978-3-540-70545-1_9.

On the complexity of automatic structures

Peter Habermehl (LIAFA, University Paris Diderot (Paris 7))

Automatic structures are relational structures over some domain where the domain and the basic relations can be defined by finite-state automata. To show that satisfiability of first-order logic formulae over an automatic structure is decidable, it suffices to observe that logical conjunction corresponds to intersection of automata, negation to complementation, and existential quantification to projection. Therefore, for a first-order formula with free variables, one can construct an automaton representing all of its solutions. One of the best-known examples of an automatic structure is Presburger arithmetic, the first-order theory of the integers with addition. A naive analysis of the complexity of the automata-based decision procedure yields a non-elementary upper bound, since each quantifier alternation necessitates a determinisation, which may be exponential (so, in total, a tower of exponentials). However, there are several automatic structures for which the known complexities are elementary (such as Presburger arithmetic or structures of bounded degree). We will give a survey of these results and explain an approach for deriving complexity bounds of the automata-based decision procedures from those of other decision procedures. This approach is based on the use of Ehrenfeucht-Fraïssé relations and their transposition to the automata setting.
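The Boolean part of the correspondence above can be made concrete in a few lines. The following is a toy sketch of our own (not code from the talk): conjunction becomes the product construction on complete DFAs, and negation becomes complementation by flipping the accepting states; projection, which we omit for brevity, would yield an NFA whose re-determinisation is the potentially exponential step behind the naive non-elementary bound.

```python
# Toy illustration of the automata-based decision procedure:
# conjunction = product intersection, negation = complementation.
from itertools import product

class DFA:
    """Complete deterministic automaton: delta maps (state, letter) -> state."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accepting = delta, start, accepting

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[q, a]
        return q in self.accepting

def intersect(m1, m2):
    """Logical conjunction: product automaton, accepting iff both accept."""
    states = set(product(m1.states, m2.states))
    delta = {((p, q), a): (m1.delta[p, a], m2.delta[q, a])
             for (p, q) in states for a in m1.alphabet}
    accepting = {(p, q) for (p, q) in states
                 if p in m1.accepting and q in m2.accepting}
    return DFA(states, m1.alphabet, delta, (m1.start, m2.start), accepting)

def complement(m):
    """Logical negation: flip the accepting states of a complete DFA."""
    return DFA(m.states, m.alphabet, m.delta, m.start, m.states - m.accepting)
```

A closed formula is then decided by checking the automaton built for it for emptiness, and the solution set of an open formula is exactly the language of the constructed automaton.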

Part of this talk is based on joint work with Antoine Durand-Gasselin.