Published: 22nd December 2022
DOI: 10.4204/EPTCS.374
ISSN: 2075-2180

EPTCS 374

Proceedings of the Thirteenth International Workshop on
Graph Computation Models
Nantes, France, 6th July 2022

Edited by: Reiko Heckel and Christopher M. Poskitt

Preface
Reiko Heckel and Christopher M. Poskitt

Invited Presentation: Graph Representations in Traditional and Neural Program Analysis
Elizabeth Dinella (page 1)

Panel Discussion: Learning Graph Transformation Rules
Reiko Heckel (page 2)

Graph-Based Specification and Automated Construction of ILP Problems
Sebastian Ehmes, Maximilian Kratz and Andy Schürr (page 3)

Time and Space Measures for a Complete Graph Computation Model
Brian Courtehoute and Detlef Plump (page 23)

A Foundation for Functional Graph Programs: The Graph Transformation Control Algebra (GTA)
Jens H. Weber (page 45)

Towards Mechanised Proofs in Double-Pushout Graph Transformation
Robert Söldner and Detlef Plump (page 59)

A Graph-Transformational Approach for Proving the Correctness of Reductions between NP-Problems
Hans-Jörg Kreowski, Sabine Kuske, Aaron Lye and Aljoscha Windhorst (page 76)

Preface

This volume contains the post-proceedings of the 13th International Workshop on Graph Computation Models (GCM 2022), which was held in Nantes, France, on 6th July 2022 as part of STAF 2022.

Graphs are common mathematical structures that are visual and intuitive. They provide a natural way of modelling systems in science, engineering, and beyond, including computer science, biology, and business process modelling. Graph computation models constitute a class of very high-level models in which graphs are first-class citizens. The aim of the International GCM Workshop series is to bring together researchers interested in all aspects of computation models based on graphs and graph transformation. It promotes the cross-fertilising exchange of ideas and experiences among senior and young researchers from different communities interested in the foundations, applications, and implementations of graph computation models and related areas.

Previous editions of the GCM series were held in Natal, Brazil (2006); Leicester, UK (2008); Enschede, The Netherlands (2010); Bremen, Germany (2012); York, UK (2014); L'Aquila, Italy (2015); Wien, Austria (2016); Marburg, Germany (2017); Toulouse, France (2018); Eindhoven, The Netherlands (2019); online (2020); and online (2021).

This post-proceedings volume contains revised versions of five selected papers presented at GCM 2022. All submissions were carefully refereed, with at least three reviews per paper, and the selected papers span a wide range of topics across the foundations and applications of graph computation models.

We would like to thank everyone who contributed to the success of this volume, in particular our contributing authors, and our Programme Committee and additional reviewers, whose valuable insights guided the selection process.

Reiko Heckel and Chris Poskitt
Programme Committee Co-Chairs
Leicester (UK) and Singapore (Singapore), December 2022

GCM 2022 Programme Committee

Additional Reviewers


Graph Representations in Traditional and Neural Program Analysis

Elizabeth Dinella (University of Pennsylvania, USA)

Traditional program analysis techniques often represent programs as graphs in which nodes are program entities and edges represent various relations between them (data flow, control flow, calls, etc.). Graphs are a standard representation for programs as they can capture syntactic and semantic structure with long-range dependencies. Recently, "neural" program analysis has been proposed for traditional analysis tasks. How to represent a program as input to a machine learning model is an active area of research. Popular representations include sequences of tokens, relational databases, and graphs processed by graph neural networks. In many cases a graph representation is preferable; program repair is an example of such a task. In this talk I will discuss the advantages and challenges of representing programs as graphs in both traditional and neural program repair.
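To make the representation concrete, here is a minimal, purely illustrative Python sketch of the kind of typed multigraph the abstract describes, encoding a three-statement snippet with control-flow and data-flow edges. The node and edge labels are our assumptions, not the speaker's actual encoding.

    from dataclasses import dataclass, field

    @dataclass
    class ProgramGraph:
        """A typed multigraph: nodes are program entities, edges are
        labelled relations between them."""
        nodes: dict[str, str] = field(default_factory=dict)      # id -> entity label
        edges: list[tuple[str, str, str]] = field(default_factory=list)

        def add_node(self, node_id: str, label: str) -> None:
            self.nodes[node_id] = label

        def add_edge(self, src: str, dst: str, relation: str) -> None:
            self.edges.append((src, dst, relation))

    # Toy graph for the snippet:  x = 1; y = x + 1; return y
    g = ProgramGraph()
    g.add_node("s1", "x = 1")
    g.add_node("s2", "y = x + 1")
    g.add_node("s3", "return y")
    g.add_edge("s1", "s2", "control-flow")    # s1 executes before s2
    g.add_edge("s2", "s3", "control-flow")
    g.add_edge("s1", "s2", "data-flow(x)")    # x defined in s1, used in s2
    g.add_edge("s2", "s3", "data-flow(y)")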

Speaker Biography. Elizabeth is a PhD student at the University of Pennsylvania advised by Mayur Naik. She works on models of code for program analysis tasks such as program repair (Hoppity). As an intern at Microsoft Research, Elizabeth worked on the DeepMerge project contributing an edit-aware neural approach to merge programs. More recently, her interests lie in neural test suite generation (TOGA), and web3 security. In her free time, Elizabeth enjoys hot coffee and her chow chow puppy Cinnabon.


Learning Graph Transformation Rules

Reiko Heckel (University of Leicester, UK)

As with any detailed model of a complex problem, creating a graph transformation system is hard. An alternative in some situations is to learn models from historical or observed data, or to derive them by optimising a basic model against an objective function, using feedback from simulations or real-world processes. The session is dedicated to discussing application scenarios and solutions for learning, in particular, the rules of a graph transformation system, with the aim of inspiring collaborative research on this topic. A preliminary classification of rule learning problems could distinguish the following two aspects.

Prescriptive vs descriptive model. A model is prescriptive if it is intended as a design of a system to be built, descriptive if it represents an existing system to be analysed.

In the prescriptive case, creating a model that best suits certain requirements is a form of optimisation based on a given objective function. In order to assess the fitness of a model, we have to either run the model directly (e.g., in a simulation) or use it to control a real-world process we can measure. For example, we may want to optimise the topology management for a decentralised network, maximising connectivity while minimising the number of links to be maintained. Such requirements should be expressed by the objective function, and the aim should be to optimise the application of a set of basic rules, e.g., rules for nodes joining or leaving and for making or breaking links.
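A minimal Python sketch of this prescriptive scenario, under stated assumptions: the rule names, the hill-climbing strategy, and the placeholder objective are all illustrative, and a real implementation would evaluate fitness by running the graph transformation system in a simulation or against the live network.

    import random

    RULES = ["join", "leave", "make_link", "break_link"]

    def fitness(weights):
        # Placeholder objective: reward making links up to a point, then
        # penalise the cost of maintaining them. A real version would run
        # the graph transformation system and measure the resulting topology.
        w = weights["make_link"]
        return w - w * w

    def optimise(iterations=200, step=0.05, seed=0):
        rng = random.Random(seed)
        weights = {r: 0.25 for r in RULES}       # start from uniform weights
        best = fitness(weights)
        for _ in range(iterations):
            candidate = dict(weights)
            rule = rng.choice(RULES)
            candidate[rule] = max(0.0, candidate[rule] + rng.uniform(-step, step))
            f = fitness(candidate)
            if f > best:                         # keep only improving moves
                weights, best = candidate, f
        return weights, best

    print(optimise())   # make_link weight should drift towards 0.5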

In the descriptive case, we infer the model from existing data observed from a real process, e.g., the version history of a software project. This can be a precursor to a prescriptive use of the model in a re-engineering scenario, e.g., inferring a model of how faults are detected and fixed in software projects in order to recommend typical fixes. Rules in this case should represent typical fault patterns and their repair actions.

Online vs offline learning. In either of the above scenarios, learning can be online (at runtime, using a form of reinforcement learning) or offline (based on historical data). In the online case, we make continuous observations and use them to modify the model, e.g., in a stochastic or probabilistic model, adding weight to successful rules (in the prescriptive case) or to frequently observed rules (in the descriptive case). In the offline case, there is no direct feedback or continuous update, but for prescriptive models learning can happen in longer cycles of model inference, usage, and feedback.
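A minimal sketch of the online case, assuming a stochastic system in which each rule carries a weight: after every application (or observation), the rule's weight is updated by an exponential moving average of its reward, and the weights are normalised into application probabilities. The rule names and learning rate are illustrative assumptions.

    def update(weights, rule, reward, alpha=0.1):
        # Exponential moving average; alpha is the learning rate (assumed).
        # In the prescriptive case, `reward` measures success; in the
        # descriptive case, it could simply be 1.0 per observed application.
        weights[rule] = (1 - alpha) * weights[rule] + alpha * reward
        return weights

    def to_probabilities(weights):
        # Normalise weights into rule-application probabilities for a
        # stochastic graph transformation system.
        total = sum(weights.values())
        return {r: w / total for r, w in weights.items()}

    weights = {"join": 1.0, "leave": 1.0, "make_link": 1.0, "break_link": 1.0}
    weights = update(weights, "make_link", reward=2.0)   # a successful application
    print(to_probabilities(weights))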

Based on the above we would like to collect, describe and classify application scenarios for rule learning, such as network topology management or fault detection and repair, and discuss possible solutions. The result could be a position paper summarising our findings.

Acknowledgements. The topic for this panel arose from a series of discussions between the chair, Nicolas Behr, Bello Shehu Bello, and Sebastian Ehmes, with further input from Marc Santolini and Ashiq Anjum.