Published: 10th February 2013 | DOI: 10.4204/EPTCS.107 | ISSN: 2075-2180
Preface | Mohamed Faouzi Atig and Ahmed Rezine | 1
Invited Presentation: Formal Correctness Methods at the Million CPU Scale | Ganesh Gopalakrishnan | 3
Invited Presentation: On the complexity of automatic structures | Peter Habermehl | 7
Invited Tutorial: Petri Nets with Time and Cost | Parosh Aziz Abdulla and Richard Mayr | 9
Invited Tutorial: Constraint-based reachability | Arnaud Gotlieb, Tristan Denmat and Nadjib Lazaar | 25
On Consistency of Operational Transformation Approach | Aurel Randolph, Hanifa Boucheneb, Abdessamad Imine and Alejandro Quintero | 45
Effective Marking Equivalence Checking in Systems with Dynamic Process Creation | Łukasz Fronc | 61
The aim of the INFINITY workshop is to provide a forum for researchers interested in the development of formal methods and algorithmic techniques for the analysis of systems with infinitely many states, and their application in automated verification of complex software and hardware systems.
This volume contains five contributions: two regular papers and three invited tutorials. Each regular paper was reviewed by three different reviewers. The program of INFINITY 2012 was further enriched by four invited tutorials, given by Parosh Aziz Abdulla, Ganesh Gopalakrishnan, Arnaud Gotlieb and Peter Habermehl. Each invited tutorial was reviewed by the editors before acceptance. Moreover, the program committee of INFINITY 2012 selected the following two regular papers for presentation: "On Consistency of Operational Transformation Approach" by Aurel Randolph, Hanifa Boucheneb, Abdessamad Imine and Alejandro Quintero, and "Effective Marking Equivalence Checking in Systems with Dynamic Process Creation" by Łukasz Fronc.
We thank the authors for their contributions, the program committee members for reviewing and selecting the papers and the invited talks, and the FM 2012 Organizing Committee (Kamel Barkaoui, Béatrice Bérard, Nihal Pekergin, Laure Petrucci, and Tayssir Touili) for their support.
The proceedings appear in the Electronic Proceedings in Theoretical Computer Science (EPTCS) series. We thank Rob van Glabbeek for his support and Rajeev Alur for providing this opportunity.
Linköping and Uppsala, January 2013 | Mohamed Faouzi Atig and Ahmed Rezine
Mohamed Faouzi Atig | Uppsala University | Sweden |
Ahmed Rezine | Linköping University | Sweden |
Nathalie Bertrand | INRIA Rennes Bretagne Atlantique | France |
Yu-Fang Chen | Academia Sinica | Taiwan |
Pierre Ganty | IMDEA Software Institute | Spain
Lukáš Holík | Uppsala University | Sweden |
Axel Legay | IRISA/INRIA Rennes | France |
Roland Meyer | University of Kaiserslautern | Germany |
Ruzica Piskac | Max Planck Institute for Software Systems (MPI-SWS) | Germany
Noam Rinetzky | Queen Mary University of London | United Kingdom |
Martin Vechev | Swiss Federal Institute of Technology (ETH) | Switzerland |
Fang Yu | National Chengchi University | Taiwan |
Million-core computing systems will be commonplace in high-performance computing. This paper describes ongoing work examining the organization of a problem solving environment called Uintah [7, 2, 3, 11], developed by the University of Utah School of Computing and the SCI Institute. Uintah is well on its way to reaching (and soon exceeding) the million core mark while simulating real-world problems. This brief paper examines the formal and semi-formal methods being investigated to sustain this growth, a role that these methods are expected to play in all future designs.
Given that computing is inexorably marching toward (and beyond) the million core regime, it is quite timely for the research community to explore formal methods that are likely to provide value at that scale. We have embarked on such a project, wherein our ideas are being driven by (and tested on) a large-scale case study: the Uintah [7, 2, 3, 11] problem solving environment introduced above.
The set of concurrency models that an HPC system designer grapples with can be roughly divided into three classes (see Figure 1). At the lowest level, concurrency within computing devices (CPUs, GPUs, and other accelerators) is exploited for speed through single instruction multiple data (SIMD) parallelism and through hardware features such as the transactional memory primitives being added to upcoming processors.

At the next level, one finds several components, including highly optimized Message Passing Interface (MPI, [9]) and thread libraries. A prominent trend in this layer is the use of non-blocking and lock-free data structures to implement various critical components, for instance the MPI library and high performance thread libraries.

Still higher, one finds more classical "textbook" [10] MPI routines organized around various master/worker patterns. A clear trend in this area is the increasing prevalence of higher-level work organization principles such as task graphs, the central organizational principle in Uintah. Task graphs relegate MPI to a much simpler role: communication, collective operations, and synchronization are handled at the system level in a portable way, using fewer combinations of MPI primitives (and often only the simpler forms of MPI communication). The task graph formalism itself defines the necessary causal dependencies, and the Uintah scheduler (Figure 2) orchestrates work in an explicit and tractable manner, as the sketch below illustrates. This leads to a much more elegant and robust software architecture than the MPI programming practiced in earlier decades, where many applications suffered from (1) MPI calls strewn all over the user application, and (2) code for cross-cutting concerns (such as load balancing) mixed up with mainline computing code.
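To make the task-graph contrast concrete, here is a deliberately small, hedged sketch (an editorial illustration, not Uintah's actual API; `Task`, `run_task_graph`, and the `requires`/`computes` fields are invented names). Tasks declare only the data they read and produce; the scheduler derives the causal ordering and launches whatever is ready, so no MPI-style calls appear in application code. Threads stand in for compute resources here; a real runtime such as Uintah's would also move data between MPI ranks, which is precisely the machinery hidden from the application.

```python
# Toy task-graph scheduler (illustrative only, not Uintah's API).
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

class Task:
    """A unit of work that declares the variables it reads and produces."""
    def __init__(self, name, requires, computes, fn):
        self.name, self.requires, self.computes, self.fn = name, requires, computes, fn

def run_task_graph(tasks, store, workers=4):
    """Launch every task whose inputs are available; the causal ordering is
    implied by the requires/computes declarations, never by explicit
    message-passing calls in the tasks themselves."""
    pending, running = list(tasks), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending or running:
            ready = [t for t in pending if all(v in store for v in t.requires)]
            if not ready and not running:
                raise RuntimeError("cyclic or unsatisfiable dependencies")
            for t in ready:
                pending.remove(t)
                running[pool.submit(t.fn, store)] = t
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                running.pop(fut)
                store.update(fut.result())  # publish the task's computed variables

# A toy two-step pipeline: "step" becomes runnable only once "init" publishes u0.
store = {}
run_task_graph(
    [Task("init", [], ["u0"], lambda s: {"u0": 1.0}),
     Task("step", ["u0"], ["u1"], lambda s: {"u1": s["u0"] + 0.5})],
    store)
print(store["u1"])  # 1.5
```

The point of the sketch is the inversion of control described in the text: the application supplies declarative tasks, while all scheduling, communication, and synchronization decisions live in the runtime.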
The Uintah formal methods work is still in its infancy. The INFINITY workshop in 2012 provided an excellent opportunity to air our future plans. The important things we want to keep in mind going forward are now briefly summarized:
We plan to update the community on the ongoing status of the Uintah project on our webpage http://www.cs.utah.edu/fv/URV.
The author thanks Martin Berzins, Zvonimir Rakamarić, Alan Humphrey, Diego Caminha B. de Oliveira and Brandon Gibson for their collaboration and feedback. He also gratefully acknowledges funding from NSF CCF 241849 (EAGER: Formal Reliability Enhancement Methods for Million Core Computational Frameworks).
Automatic structures are relational structures over some domain where the domain and the basic relations can be defined by finite-state automata. To show that satisfiability of first-order logic formulae over an automatic structure is decidable, it is enough to observe that logical conjunction corresponds to intersection of automata, negation to complementation, and existential quantification to projection. Therefore, it is possible to construct, for a first-order formula with free variables, a representation of all its solutions as an automaton. One of the best known examples of an automatic structure is Presburger arithmetic, the first-order logic over the integers with addition. A naive analysis of the complexity of the automata-based decision procedure yields a non-elementary upper bound, since each quantifier alternation necessitates a determinisation which might be exponential (so, in total, a tower of exponentials). However, there are several automatic structures for which the known complexities are elementary (such as Presburger arithmetic or structures of bounded degree). We will give a survey of these results and explain an approach to obtaining complexity bounds for the automata-based decision procedures from those of other decision procedures. This approach is based on the use of Ehrenfeucht-Fraïssé relations and their transposition to the automata setting. The closure operations at the heart of this construction are illustrated in the sketch following this abstract.
Part of this talk is based on joint work with Antoine Durand-Gasselin.
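To make the closure argument concrete, here is a minimal, hedged Python sketch (an editorial illustration, not material from the talk; the names `eq_sum`, `intersect`, `complement`, and `satisfiable` are our own). Formulas over natural-number variables are represented as complete DFAs over bit-vector letters, one bit per variable, least significant bit first: conjunction becomes the product construction, negation becomes complementation of a complete DFA (sound here because each automaton accepts either all or none of the binary encodings of a given tuple), and satisfiability of a fully existentially quantified formula reduces to an emptiness check. It is projection under further negations that forces the exponential determinisation mentioned in the abstract.

```python
# Automata-based decision procedure for quantifier-free Presburger
# constraints (illustrative sketch, LSB-first binary encoding).
from itertools import product

def all_letters(nvars):
    """All bit-vector letters: one bit per variable track."""
    return list(product((0, 1), repeat=nvars))

class DFA:
    """Complete DFA over bit-vector letters; represents a formula's solutions."""
    def __init__(self, start, accepting, delta, nvars):
        self.start, self.accepting, self.delta, self.nvars = start, accepting, delta, nvars

def eq_sum(i, j, k, nvars):
    """Automaton for the atomic formula x_i + x_j = x_k; the state is the carry."""
    delta = {}
    for q in (0, 1, 'sink'):
        for a in all_letters(nvars):
            if q == 'sink':
                delta[(q, a)] = 'sink'
            else:
                s = a[i] + a[j] + q
                delta[(q, a)] = s >> 1 if (s & 1) == a[k] else 'sink'
    return DFA(0, {0}, delta, nvars)

def intersect(d1, d2):
    """Conjunction of formulas = product construction on the automata."""
    start = (d1.start, d2.start)
    delta, seen, work = {}, {start}, [start]
    while work:
        q = work.pop()
        for a in all_letters(d1.nvars):
            r = (d1.delta[(q[0], a)], d2.delta[(q[1], a)])
            delta[(q, a)] = r
            if r not in seen:
                seen.add(r)
                work.append(r)
    accepting = {q for q in seen if q[0] in d1.accepting and q[1] in d2.accepting}
    return DFA(start, accepting, delta, d1.nvars)

def complement(d):
    """Negation flips the accepting states of a complete deterministic automaton."""
    states = {q for (q, _) in d.delta}
    return DFA(d.start, states - d.accepting, d.delta, d.nvars)

def satisfiable(d):
    """The formula is satisfiable iff some accepting state is reachable."""
    seen, work = {d.start}, [d.start]
    while work:
        q = work.pop()
        if q in d.accepting:
            return True
        for a in all_letters(d.nvars):
            r = d.delta[(q, a)]
            if r not in seen:
                seen.add(r)
                work.append(r)
    return False

# x + y = z  and  not (y + y = z), over variable tracks (x, y, z):
phi = intersect(eq_sum(0, 1, 2, 3), complement(eq_sum(1, 1, 2, 3)))
print(satisfiable(phi))  # True: e.g. x = 1, y = 0, z = 1
```

Adding full existential quantification would replace the emptiness check with a projection step that drops one track and yields a nondeterministic automaton; determinising it before the next negation is exactly the exponential blowup that the naive non-elementary bound accumulates per quantifier alternation.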