Published: 30th December 2013 DOI: 10.4204/EPTCS.138 ISSN: 2075-2180

Preface Anton Wijs, Dragan Bošnački and Stefan Edelkamp

Invited Presentations

Verification of Contract-based Communicating Systems Gwen Salaün
Control Software Synthesis from System Level Formal Specifications Enrico Tronci
Recipes for Reduced State Spaces Arend Rensink
State Space c-Reductions of Concurrent Systems in Rewriting Logic Alberto Lluch-Lafuente
GPUVerify: A verifier for GPU kernels Jeroen Ketema

Informal Presentations

Graph Matching and Shortest Path Search for Multi-Modal Public Transportation Stefan Edelkamp and Christoph Greulich
Sequential and Distributed On-The-Fly Computation of Weak Tau-Confluence Anton Wijs
Parallel Algorithms for Reconstruction of Interaction Graphs from Perturbation Experiments Dragan Bošnački

Contributed Presentation

Reachability in Cooperating Systems with Architectural Constraints is PSPACE-Complete Mila Majster-Cederbaum and Nils Semmelrock
The topic of the GRAPHITE workshop is graph analysis in all its forms in computer science. Graphs are used to represent data in many application areas, and they are subjected to various computational algorithms in order to acquire the desired information. These graph algorithms tend to have common characteristics, such as duplicate detection to guarantee their termination, independent of their application domain. Over the past few years, it has been shown that the scalability of such algorithms can be dramatically improved by using, e.g., external memory, by exploiting parallel architectures, such as clusters, multi-core CPUs, and graphics processing units, and by using heuristics to guide the search. Novel techniques to further scale graph search algorithms, and new applications of graph search, are within the scope of this workshop. Another topic of interest concerns the structural properties of graphs: which kinds of graph characteristics are relevant for a particular application area, and how can these be measured? Finally, any novel way of using graphs for a particular application area is on topic. The goal of this event is to gather scientists from different communities, such as model checking, artificial intelligence planning, game playing, and algorithm engineering, who do research on graph search algorithms, so that awareness of each other's work is increased.
In 2012, the First Workshop on GRAPH Inspection and Traversal Engineering (GRAPHITE 2012) was held in Tallinn, Estonia, also as a satellite event of ETAPS. In 2009, the same motivation led to organising Dagstuhl Seminar 09491, entitled "Graph Search Engineering". The GRAPHITE workshop resulted from the overall success of that seminar.
We had five invited speakers:
Verification of Contract-based Communicating Systems (Gwen Salaün)

Choreographies are contracts specifying interactions among a set of services from a global point of view. These contracts serve as a reference for the further development steps of the distributed system. Therefore, their specification and analysis are crucial to avoid issues (e.g., deadlocks) that may induce delays and additional costs if identified late in the design and development process. In this talk, we present a modular framework for automatically performing a number of crucial verification tasks on choreography specifications. Our framework accepts as input various interaction-based choreography description languages (e.g., BPMN 2.0 choreographies). In order to favour extensibility and reusability of our framework, we propose an intermediate format for representing choreography description languages. Another advantage of such an intermediate format is that it makes it possible to use several formal verification tools and techniques jointly as back-ends, provided that a connection to those tools exists. We have already developed a connection to the CADP verification toolbox via a translation from our intermediate format to the LNT process algebra, which is one of the CADP input specification languages. This enables the automated verification of some key analysis tasks (repairability, synchronizability, realizability, conformance, and control) using model and equivalence checking techniques.
Control Software Synthesis from System Level Formal Specifications (Enrico Tronci)

Many control systems are indeed Software Based Control Systems (SBCSs), that is, control systems whose controller consists of control software running on a microcontroller device. This motivates investigation of Formal Model Based Design approaches for the automatic synthesis of correct-by-construction control software.
We present algorithms and tools (namely, QKS) that, from a Discrete Time Linear Hybrid System (DTLHS) model of the controlled system (the plant), Implementation Specifications (that is, the number of bits in the Analog-to-Digital (AD) conversion) and System Level Formal Specifications (that is, safety and liveness requirements for the closed-loop system), return correct-by-construction control software whose Worst Case Execution Time (WCET) is at most linear in the number of AD bits (this is important since an SBCS is typically a hard real-time system).
We model the plant as well as the safety and liveness properties as Boolean combinations of linear constraints over real as well as discrete variables. We use a Mixed Integer Linear Programming (MILP) solver (namely, GLPK) to explicitly compute a suitable finite state automaton overapproximating the plant behaviour, and use Ordered Binary Decision Diagrams (OBDDs) to compute a controller meeting the given specifications and to generate a C implementation of that controller.
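To see why the WCET bound holds, consider the following sketch (a hypothetical node layout in Python, not the actual QKS data structures or its generated C code): the controller is stored as an OBDD over the AD bits of the quantized plant state, and a lookup visits at most one node per bit.

```python
# Minimal sketch: a controller stored as an OBDD over the AD bits of the
# quantized plant state. A lookup follows at most one node per bit, which is
# why the WCET is linear in the number of AD bits. The node layout and the
# toy diagram below are hypothetical, not the QKS implementation.

class Node:
    """Internal OBDD node testing one AD bit of the quantized state."""
    def __init__(self, bit, low, high):
        self.bit = bit    # index of the AD bit tested by this node
        self.low = low    # subtree taken when the bit is 0
        self.high = high  # subtree taken when the bit is 1

def control_action(root, state_bits):
    """Walk the OBDD once; each step consumes one AD bit, so the number of
    steps is bounded by the number of AD bits."""
    node = root
    while isinstance(node, Node):
        node = node.high if state_bits[node.bit] else node.low
    return node  # a leaf: the control output for this quantized state

# Toy 3-bit controller: output 1 iff bit 0 is set and bit 2 is clear.
root = Node(0, 0, Node(2, 1, 0))
print(control_action(root, [1, 0, 0]))  # -> 1
print(control_action(root, [1, 0, 1]))  # -> 0
print(control_action(root, [0, 1, 0]))  # -> 0
```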
We present experimental results on using the above approach to synthesize control software for classical challenging examples: the buck DC-DC converter (a widely used mixed-mode analog circuit) and the inverted pendulum.
Recipes for Reduced State Spaces (Arend Rensink)

A number of ways have been proposed to combat state space explosion: well known are, for instance, symmetry reduction, partial order reduction, and minimisation modulo bisimilarity. In this presentation we study the notion of a *transaction*, rechristened *recipe* to reduce overloading of terminology, as another technique that achieves the same purpose. A recipe defines a sequence of actions, possibly unbounded and possibly with choices, that is thereafter to be treated as a single action. Moreover, a recipe has all-or-nothing semantics: if it cannot be completed, it is as if it has never started. This makes it possible to create actions with quite powerful effects, which would otherwise require many single steps. We work in a setting where the actions correspond to graph transformation rules, and the recipes are a way to control the order in which rules are considered for application. A distinguishing feature of the control language is a notion of *prioritised choice*, in which one operand is preferred, and the other is only enabled if the first fails. We discuss the theoretical and practical aspects of recipes in this setting.
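As a toy illustration of the all-or-nothing semantics and of prioritised choice (a Python sketch over integer states, not the actual graph-transformation setting; all rule names are made up):

```python
# All-or-nothing recipe semantics, sketched over integer states: a recipe is
# a sequence of rules, each a callable returning the successor state or None
# when inapplicable. Either every rule applies, yielding one composite step,
# or the recipe fails and it is as if it had never started.

def apply_recipe(state, rules):
    current = state
    for rule in rules:
        current = rule(current)
        if current is None:
            return None  # the whole recipe fails: no intermediate states leak
    return current       # one atomic, composite transition

def prioritised(state, preferred, fallback):
    """Prioritised choice: the fallback recipe is enabled only if the
    preferred one fails."""
    result = apply_recipe(state, preferred)
    return result if result is not None else apply_recipe(state, fallback)

# Hypothetical rules: dec only applies to positive values, double always does.
dec = lambda s: s - 1 if s > 0 else None
double = lambda s: 2 * s

print(apply_recipe(3, [dec, double]))           # -> 4 (both steps succeed)
print(apply_recipe(0, [dec, double]))           # -> None (atomic failure)
print(prioritised(0, [dec, double], [double]))  # -> 0 (fallback: 2 * 0)
```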
State Space c-Reductions of Concurrent Systems in Rewriting Logic (Alberto Lluch-Lafuente)

C-reductions are a simple, flexible and very general state space reduction technique that exploits an equivalence relation on states that is a bisimulation. Reduction is achieved by a canonizer function, which maps each state into a not necessarily unique canonical representative of its equivalence class. The approach captures symmetry reduction, name reuse, and name abstraction as special cases, and exploits the expressiveness of rewriting logic and its realization in Maude to automate c-reductions and to seamlessly integrate model checking and the discharging of correctness proof obligations. This talk will present the c-reductions technique and discuss its performance.
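The core idea can be sketched in a few lines (a toy explicit-state illustration in Python, not the rewriting-logic realization in Maude): a canonizer collapses states that are equivalent, here up to full symmetry among identical processes, so sorting the local states is a valid canonizer.

```python
# C-reduction in miniature: explore a state space up to a canonizer function
# that maps each state to a representative of its equivalence class. The
# example system is hypothetical: n symmetric processes, each cycling
# through local states 0 -> 1 -> 2 -> 0.

def canonize(state):
    """Full symmetry among identical processes: sorting picks a unique
    representative of each equivalence class."""
    return tuple(sorted(state))

def explore(initial, successors):
    """Reachability analysis storing only canonical representatives."""
    seen, frontier = {canonize(initial)}, [initial]
    while frontier:
        state = frontier.pop()
        for nxt in successors(state):
            c = canonize(nxt)
            if c not in seen:
                seen.add(c)
                frontier.append(nxt)
    return seen

def successors(state):
    """Each process may independently advance its local counter modulo 3."""
    return [state[:i] + ((state[i] + 1) % 3,) + state[i + 1:]
            for i in range(len(state))]

print(len(explore((0, 0), successors)))  # 6 canonical states instead of 9
```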
GPUVerify: A verifier for GPU kernels (Jeroen Ketema)

I will describe GPUVerify, a collaboration between Imperial College London and Microsoft Research Redmond on the design and implementation of a technique and tool for analysing concurrent software written to run on graphics processing units (GPUs). GPUVerify checks the kernel functions that execute across the cores of a GPU architecture and attempts to detect, or verify the absence of, two kinds of defects: data races and barrier divergence. To scale to large numbers of threads, GPUVerify exploits the relatively simple nature of the GPU programming model in a way that allows verification of massively parallel kernels to be reduced to a sequential program verification task. In this talk I will describe the defects that GPUVerify checks for, present some details of the verification technique, and discuss open problems in this area.
This work is led by Alastair Donaldson (Imperial College) and is joint work with colleagues at Imperial College London and Microsoft Research Redmond.
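The two-thread reduction at the heart of this scaling argument can be caricatured as follows (a concrete, enumerating toy in Python; GPUVerify itself reasons symbolically over Boogie programs, and the kernel shown is hypothetical):

```python
# Toy version of the two-thread race check: describe a kernel only by the
# shared-memory accesses a given thread performs between two barriers, then
# look for a conflicting pair of accesses by two distinct threads (same cell,
# at least one write). A verifier treats the two thread ids symbolically;
# here we simply enumerate them.

def kernel_accesses(tid, n):
    """Hypothetical kernel: each thread reads cell tid, writes cell (tid+1)%n."""
    return [("read", tid), ("write", (tid + 1) % n)]

def races(tid_a, tid_b, n):
    """All conflicting access pairs between two threads."""
    return [(tid_a, kind_a, tid_b, kind_b, loc_a)
            for kind_a, loc_a in kernel_accesses(tid_a, n)
            for kind_b, loc_b in kernel_accesses(tid_b, n)
            if loc_a == loc_b and "write" in (kind_a, kind_b)]

n = 4
found = [c for a in range(n) for b in range(a + 1, n) for c in races(a, b, n)]
print(found[0])  # (0, 'write', 1, 'read', 1): thread 0 writes the cell thread 1 reads
```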
Graph Matching and Shortest Path Search for Multi-Modal Public Transportation (Stefan Edelkamp and Christoph Greulich)

Multi-modal route planning using public transportation (including travel by bus, tram, and metro) and individual transportation (including travel by car, by bike, or on foot) requires an accurate linkage of the street map to the maps for public transportation.
Even though publicly available and community-edited maps often contain information on stops and travel routes for public transportation, in multi-modal route planning the two data sources most frequently differ, especially if exact time scheduling constraints are involved. Timetables for public transportation units must, for example, be extracted from travel agency databases and geo-located on the maps.
While this problem seems to be easy in theory, real-world challenges for the mapping include different wordings of stop names and missing or reduced line information. Moreover, the given geo-coordinates may not distinguish between multiple locations of stops for different travel directions at crossings.
In this work, we provide an efficient algorithm that resolves the different namings in this geo-location and matching problem. It uses a generalization of the Needleman-Wunsch algorithm for computing the alignment of genome strings. The layered algorithm is implemented in a database query language. Similarity matrices are extracted from a low-level approximate string matching, while the top-level alignment compares every two different lines that pass an additional filter.
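For reference, a minimal Needleman-Wunsch alignment score between two stop names might look as follows (a Python sketch with hypothetical scoring; the layered, database-level variant used in this work is more elaborate):

```python
# Global alignment score between two strings via dynamic programming
# (Needleman-Wunsch). Scoring: +1 match, -1 mismatch, -1 gap (hypothetical).

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    # score[i][j]: best score aligning the prefixes a[:i] and b[:j]
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # (mis)match
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    return score[m][n]

# Two spellings of the same stop align far better than two different stops.
print(needleman_wunsch("Hauptbahnhof", "Hbf Hauptbahnhof"))  # high score
print(needleman_wunsch("Hauptbahnhof", "Rathausplatz"))      # low score
```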
The resulting bijection between public transport and street map data, together with timetabling information on travel and waiting times of the mobile vehicles, is then used to initiate a realistic travel simulation that executes many shortest-path graph search queries within a multi-agent simulation framework. For this we use a radix heap implementation with a joint heap/graph node representation.
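The shortest-path queries themselves can be sketched as follows (a Python sketch using the standard-library binary heap as a stand-in for the radix heap with joint heap/graph node representation mentioned above; the toy network is hypothetical):

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from source in a graph given as an adjacency
    map {node: [(neighbour, travel_time), ...]}."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; decrease-key on a radix heap avoids these
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

# Hypothetical stop network with travel times in minutes.
graph = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'C': 2, 'B': 3, 'D': 8}
```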
The inferred timetables are additionally fed into a public transport information system agent that provides route information to individual passenger agents, which take the information as a guideline to plan their travel, so that interleaved planning and plan execution drives the simulation. The traffic information system available to the agents supports different implementations of shortest route queries.
Sequential and Distributed On-The-Fly Computation of Weak Tau-Confluence (Anton Wijs)

The notion of τ-confluence provides a form of partial order reduction of Labelled Transition Systems (LTSs) by making it possible to identify τ-transitions whose execution does not alter the observable behaviour of the system. Several forms of τ-confluence compatible with branching bisimulation have been studied in the literature, ranging from strong to weak forms according to the length of the τ-transition sequences considered. Weak τ-confluence is more complex to compute than strong τ-confluence, but provides better LTS reductions. In this talk, I will present a method to efficiently detect weakly τ-confluent transitions during an on-the-fly exploration of LTSs. To this end, we define and prove correct new encodings of several weak τ-confluence variants using alternation-free Boolean Equation Systems (BESs), and we apply efficient local BES resolution algorithms to perform the detection. The resulting reduction module, developed within the CADP toolbox using the generic Open/Caesar environment for LTS exploration, was tested in both a sequential and a distributed setting on numerous examples of large LTSs underpinning communication protocols and distributed systems. These experiments assessed the efficiency of the reduction and enabled us to identify the variants of weak τ-confluence that are most useful in practice.
This presentation is based on joint work with Radu Mateescu (INRIA Grenoble - Rhône-Alpes).
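As background, one strong variant of the confluence check is easy to state on an explicit LTS (a toy Python sketch of one common strong-confluence definition; the talk's actual contribution concerns weak variants, encoded as alternation-free BESs and detected on the fly):

```python
# A tau-transition s --tau--> s1 is strongly confluent (in one common variant)
# if every other transition s --a--> s2 can be closed into a square: there is
# some s3 with s1 --a--> s3 and s2 --tau--> s3. The LTS is an adjacency map
# {state: [(action, successor), ...]}; the example LTS is made up.

def strongly_confluent(lts, s, s1):
    for a, s2 in lts.get(s, []):
        if (a, s2) == ("tau", s1):
            continue  # the candidate tau-transition itself
        after_s1 = {t for b, t in lts.get(s1, []) if b == a}
        after_s2 = {t for b, t in lts.get(s2, []) if b == "tau"}
        if not (after_s1 & after_s2):
            return False  # the square cannot be closed
    return True

lts = {"s": [("tau", "s1"), ("a", "s2")],
       "s1": [("a", "s3")],
       "s2": [("tau", "s3")],
       "s3": []}
print(strongly_confluent(lts, "s", "s1"))  # True: the diamond closes at s3
```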
Parallel Algorithms for Reconstruction of Interaction Graphs from Perturbation Experiments (Dragan Bošnački)

We discuss efficient GPU-oriented parallel algorithms for the analysis of interaction graphs based on perturbation experiments. In perturbation experiments the nodes of an interaction graph with unknown structure are perturbed in a systematic manner, usually one by one. The effects of perturbing one node, or a set of nodes, on the other nodes are registered. Based on the output of the experiments, the connections between the nodes are reconstructed. An example of such perturbation experiments are the knockout experiments used to reconstruct genetic networks: the genes of the network are disabled one at a time, and the expression levels of the other genes are measured.
One of the main problems in perturbation experiments is to distinguish between direct and indirect interactions. To this end, we consider a technique called transitive reduction, an operation opposite to transitive closure.
We develop efficient, scalable parallel algorithms for the reconstruction of interaction graphs on general purpose graphics processing units, for both standard (unweighted) and weighted graphs. In the weighted case, a direct interaction between two nodes is removed only if there exists an indirect interaction path between the same nodes with strictly greater weight than the direct one. This refines the removal condition for unweighted graphs and largely avoids the erroneous elimination of direct edges.
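A sequential reference version of the weighted removal condition might look as follows (a Python sketch, not the GPU algorithm; it assumes the weight of a path is the minimum edge weight along it, a bottleneck reading that the summary above leaves open):

```python
# Weighted transitive reduction: drop a direct edge u -> v only if some
# indirect path u -> v, not using that edge, has strictly greater bottleneck
# weight (minimum edge weight along the path; an assumed definition).

def widest_indirect(adj, u, v, skip):
    """Best bottleneck weight over paths u -> v avoiding the edge `skip`
    (simple-path DFS; exponential in general, fine for a sketch)."""
    best, stack = 0, [(u, float("inf"), frozenset([u]))]
    while stack:
        node, bottleneck, seen = stack.pop()
        for nxt, w in adj.get(node, {}).items():
            if (node, nxt) == skip or nxt in seen:
                continue
            b = min(bottleneck, w)
            if nxt == v:
                best = max(best, b)
            else:
                stack.append((nxt, b, seen | {nxt}))
    return best

def transitive_reduction(adj):
    """Remove every direct edge dominated by a strictly heavier indirect path."""
    reduced = {u: dict(nbrs) for u, nbrs in adj.items()}
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            if widest_indirect(adj, u, v, skip=(u, v)) > w:
                del reduced[u][v]  # likely an indirect interaction
    return reduced

# a -> c (weight 1) is explained by the heavier path a -> b -> c (bottleneck 3).
adj = {"a": {"b": 3, "c": 1}, "b": {"c": 4}, "c": {}}
print(transitive_reduction(adj))  # {'a': {'b': 3}, 'b': {'c': 4}, 'c': {}}
```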
Experimental evidence shows that parallel implementations of these algorithms can be more than a hundred times faster than their sequential counterparts.
We also discuss how analogues of these parallel algorithms can be used in formal verification, in particular for model checking of timed and probabilistic systems.
This presentation is based on joint work with Maximilian Odenbrett, Anton Wijs, Willem Ligtenberg, and Peter Hilbers.