Published: 21st October 2021
DOI: 10.4204/EPTCS.348
ISSN: 2075-2180

EPTCS 348

Proceedings Third Workshop on
Formal Methods for Autonomous Systems
Virtual, 21st-22nd of October 2021

Edited by: Marie Farrell and Matt Luckcuck

Preface
Marie Farrell and Matt Luckcuck
Invited Talk: Help or Hazard: Towards Verifying Autonomous Robot Systems
Clare Dixon
Invited Talk: Understanding and Verifying Deep Neural Networks
Divya Gopinath
Extending Urban Multi-Lane Spatial Logic to Formalise Road Junction Rules
Maike Schwammberger and Gleifer Vaz Alves (page 1)
Simulation and Model Checking for Close to Realtime Overtaking Planning
Daumantas Pagojus, Alice Miller, Bernd Porr and Ivaylo Valkov (page 20)
Towards Partial Monitoring: It is Always too Soon to Give Up
Angelo Ferrando and Rafael C. Cardoso (page 38)
Complete Agent-driven Model-based System Testing for Autonomous Systems
Kerstin I. Eder, Wen-ling Huang and Jan Peleska (page 54)
Formal Guarantees of Timely Progress for Distributed Knowledge Propagation
Saswata Paul, Stacy Patterson and Carlos Varela (page 73)
QuantifyML: How Good is my Machine Learning Model?
Muhammad Usman, Divya Gopinath and Corina S. Păsăreanu (page 92)
Complete Test of Synthesised Safety Supervisors for Robots and Autonomous Systems
Mario Gleirscher and Jan Peleska (page 101)
Improving Online Railway Deadlock Detection using a Partial Order Reduction
Bjørnar Luteberget (page 110)
Online Strategy Synthesis for Safe and Optimized Control of Steerable Needles
Sascha Lehmann, Antje Rogalla, Maximilian Neidhardt, Alexander Schlaefer and Sibylle Schupp (page 128)
Towards a Formalisation of Justification and Justifiability
Willem Hagemann (page 136)
Assuring Increasingly Autonomous Systems in Human-Machine Teams: An Urban Air Mobility Case Study
Siddhartha Bhattacharyya, Jennifer Davis, Anubhav Gupta, Nandith Narayan and Michael Matessa (page 150)
Observable and Attention-Directing BDI Agents for Human-Autonomy Teaming
Blair Archibald, Muffy Calder, Michele Sevegnani and Mengwei Xu (page 167)

Preface

Autonomous systems are highly complex and present unique challenges for the application of formal methods. They act without human intervention and are often embedded in a robotic system so that they can interact with the real world. As such, they exhibit the properties of safety-critical, cyber-physical, hybrid, and real-time systems.

This EPTCS volume contains the proceedings of the third workshop on Formal Methods for Autonomous Systems (FMAS 2021), which was held virtually on the 21st and 22nd of October 2021. Like the previous workshop, FMAS 2021 was an online, stand-alone event, held in response to the ongoing COVID-19 restrictions. Despite the challenges this brought, we were determined to build on the success of the previous two FMAS workshops.

The goal of FMAS is to bring together leading researchers who are tackling the unique challenges of autonomous systems using formal methods, to present recent and ongoing work. We are interested in the use of formal methods to specify, model, or verify autonomous and/or robotic systems, in whole or in part. We are also interested in successful industrial applications and potential future directions for this emerging application of formal methods.

As in previous years, FMAS 2021 encouraged the submission of both long and short papers. In total, we received 18 papers – 9 long papers and 9 short papers – from authors in Brazil, France, Germany, Iran, Italy, Norway, Slovakia, the United Kingdom, and the United States of America. This was a fantastic number of submissions for FMAS, more than the previous two workshops combined. Each paper received three reviews, and we accepted 12 papers in total: 8 long papers and 4 short papers.

FMAS 2021 hosted two invited speakers. Clare Dixon, from the University of Manchester, was invited to talk about "Help or Hazard: Towards Verifying Autonomous Robot Systems". Divya Gopinath, from the Robust Software Engineering (RSE) group at the NASA Ames Research Center, was invited to talk about "Understanding and Verifying Deep Neural Networks".

We would like to thank our brilliant programme committee, who have once again delivered detailed and constructive reviews to the authors, despite the ongoing disruption and stress caused by the COVID-19 pandemic. We are proud to have such an enthusiastic collection of reviewers, and we want to thank them for their time and effort, especially given the unexpected number of submissions to FMAS 2021. We would also like to thank our invited speakers, Clare Dixon and Divya Gopinath; the authors who submitted papers; our EPTCS editor, Martin Wirsing; and all of this year's attendees.

Marie Farrell and Matt Luckcuck,
Department of Computer Science, Maynooth University, Ireland

Program Committee


Help or Hazard: Towards Verifying Autonomous Robot Systems

Clare Dixon (University of Manchester, UK)

Autonomous robotic systems are being developed to operate in a range of environments, including industrial, transport, healthcare, and domestic settings, and have the potential to be of great benefit to society. Further, they may be assisting, collaborating with, or working near humans. We must make sure that they are safe, robust, reliable, and trustworthy, and that they do not themselves become the hazard. In this talk I will discuss ways to verify such systems, drawing on research carried out over a number of projects with different types of robot operating in a range of scenarios.

Acknowledgements
This talk is based upon joint work with collaborators carried out on the following projects.

Understanding and Verifying Deep Neural Networks

Divya Gopinath (NASA Ames Research Center)

Deep Neural Networks (DNNs) have gained immense popularity in recent years and are widely used in applications such as image classification, sentiment analysis, and speech recognition, as well as in safety-critical applications such as autonomous driving. However, they suffer from limitations such as a lack of explainability and robustness, which raise safety and security concerns about their usage. Further, the complex structure and large input spaces of DNNs are an impediment to thorough verification and testing. The SafeDNN project at the Robust Software Engineering (RSE) group at NASA aims to explore techniques to ensure that systems that use deep neural networks are safe, robust, and interpretable. In this talk, I will present our technique Prophecy [1], which automatically infers formal properties of deep neural network models. The tool extracts patterns based on neuron activations as preconditions that imply certain desirable output properties of the model. I will highlight case studies that use Prophecy to obtain explanations for network decisions, understand correct and incorrect behavior, provide formal guarantees with respect to safety and robustness, and debug neural network models. We have applied the tool to image classification networks, neural network controllers providing turn advisories in unmanned aircraft, regression models used for autonomous center-line tracking in aircraft, and neural network object detectors.
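To make the neuron-activation idea concrete, the Python sketch below mines an activation-pattern precondition from a toy feed-forward network. It is a minimal illustration under stated assumptions, not the Prophecy tool itself: the toy network, its layer sizes, and the helper names are invented for the example, and real use would start from a trained model rather than random weights.

    # Sketch of activation-pattern property inference (NOT the Prophecy tool):
    # record which hidden neurons fire (ReLU output > 0) on sample inputs, keep
    # the on/off decisions shared by every input the network assigns a given
    # label, and treat that pattern as a candidate precondition for the label.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # toy 4 -> 8 -> 2 network
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

    def forward(x):
        """Return hidden activations and output logits for input x."""
        h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
        return h, h @ W2 + b2

    def activation_pattern(x):
        """Boolean on/off signature of the hidden layer for input x."""
        h, _ = forward(x)
        return h > 0.0

    def shared_pattern(inputs, label):
        """Neuron decisions common to every input classified as `label`.
        Entries: True (always on), False (always off), None (varies)."""
        sigs = [activation_pattern(x) for x in inputs
                if np.argmax(forward(x)[1]) == label]
        if not sigs:
            return None
        sigs = np.stack(sigs)
        return [bool(col[0]) if np.all(col == col[0]) else None
                for col in sigs.T]

    # Candidate precondition for label 0, mined from random sample inputs.
    xs = rng.normal(size=(200, 4))
    print(shared_pattern(xs, label=0))

The sketch only approximates the guarantee step by sampling. Because fixing the on/off status of the ReLU units makes the network behave linearly on the corresponding input region, a mined pattern can, in principle, then be handed to a neural-network verification engine to check formally that it implies the desired output property, which is where the formal guarantees mentioned above come from.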

References

  1. Divya Gopinath, Ankur Taly, Hayes Converse & Corina S. Păsăreanu (2019): Property Inference for Deep Neural Networks. In: 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), doi:10.1109/ASE.2019.00079.