A Rational Agent Controlling an Autonomous Vehicle: Implementation and Formal Verification

Lucas E. R. Fernandes
(UTFPR, Ponta Grossa, Parana, Brazil)
Vinicius Custodio
(UTFPR, Ponta Grossa, Parana, Brazil)
Gleifer V. Alves
(UTFPR, Ponta Grossa, Parana, Brazil)
Michael Fisher
(University of Liverpool, Liverpool, United Kingdom)

The development and deployment of Autonomous Vehicles (AVs) on our roads are not only realistic in the near future but can also bring significant benefits. In particular, AVs can potentially solve several problems relating to vehicles and traffic, for instance: (i) possible reduction of traffic congestion, with the consequence of improved fuel economy and reduced driver inactivity; (ii) possible reduction in the number of accidents, assuming that an AV can minimise the human errors that often cause traffic accidents; and (iii) increased ease of parking, especially when one considers the potential for shared AVs. In order to deploy an AV, significant steps must be completed in terms of both hardware and software. As expected, software components play a key role in the complex AV system and so, at least for safety, we should assess the correctness of these components.

In this paper, we are concerned with the high-level software component(s) responsible for the decisions in an AV. We model, using a rational agent, an AV capable of navigation, obstacle avoidance, obstacle selection (when a crash is unavoidable), vehicle recovery, etc. To achieve this, we have established the following stages. First, the agent's plans and actions have been implemented in the Gwendolen agent programming language. Second, we have built a simulated automotive environment in the Java language. Third, we have formally specified some of the required agent properties through Linear Temporal Logic (LTL) formulae, which are then formally verified with the AJPF (Agent Java PathFinder) verification tool. Finally, within the MCAPL (Model Checking Agent Programming Languages) framework, which comprises all the tools used in the previous stages, we have obtained formal verification of our AV agent in terms of its specific behaviours. For example, the agent plans responsible for selecting an obstacle with low potential damage, instead of a higher-damage obstacle (when possible), can be formally verified within MCAPL. We must emphasise that the major goal of our present approach lies in the formal verification of agent plans, rather than in evaluating real-world applications. For this reason, we use a simple matrix representation of the environment perceived by our agent.
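The obstacle-selection behaviour described above can be sketched as an LTL property of the kind AJPF checks; the proposition names below are illustrative assumptions, not the paper's actual atoms:

```latex
% Hypothetical atomic propositions (illustrative, not the paper's actual names):
%   crashUnavoidable -- the agent believes a crash cannot be avoided
%   lowAvail         -- a low-damage obstacle is available
%   highAvail        -- a high-damage obstacle is available
%   selectLow        -- the agent selects the low-damage obstacle
\Box \bigl( (\mathit{crashUnavoidable} \land \mathit{lowAvail} \land \mathit{highAvail})
        \rightarrow \Diamond\, \mathit{selectLow} \bigr)
```

Informally: whenever a crash is unavoidable and both a low-damage and a high-damage obstacle are available, the agent eventually selects the low-damage one.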

In Lukas Bulwahn, Maryam Kamali and Sven Linker: Proceedings First Workshop on Formal Verification of Autonomous Vehicles (FVAV 2017), Turin, Italy, 19th September 2017, Electronic Proceedings in Theoretical Computer Science 257, pp. 35–42.
Published: 7th September 2017.

ArXived at: https://dx.doi.org/10.4204/EPTCS.257.5