A Doxastic Characterisation of Autonomous Decisive Systems

Astrid Rakow

A highly autonomous system (HAS) has to assess the situation it is in and derive beliefs, based on which it decides what to do next. These beliefs are not based solely on the observations the HAS has made so far, but also on general insights about the world in which the HAS operates. Such insights are either built into the HAS during design or provided by trusted sources during its mission. Although its beliefs may be imprecise and even flawed, the HAS has to extrapolate the possible futures in order to evaluate the consequences of its actions, and then take its decisions autonomously. In this paper, we formalize an autonomous decisive system as a system that always chooses actions it currently believes to be the best. We show that it can be checked whether an autonomous decisive system can be built for a given application domain, a dynamically changing knowledge base, and a list of LTL mission goals. Moreover, we can synthesize a belief formation for an autonomous decisive system. For the formal characterization, we use a doxastic framework for safety-critical HASs in which the belief formation supports the HAS's extrapolation.
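As a rough illustration of decisiveness (the notation below is an assumption for this sketch, not the paper's exact definition): write B_i for the belief state the HAS has formed at step i of a run from its observations and its knowledge base, and val_{B_i}(a) for how well the futures extrapolated under B_i satisfy the mission goals if action a is taken. A decisive system then only ever takes currently believed-best actions:

    \[ \forall i \in \mathbb{N} :\; a_i \in \operatorname*{arg\,max}_{a \in \mathit{Act}} \mathit{val}_{B_i}(a) \]

On this reading, the decision problem the paper addresses asks whether such a system exists for the given domain, knowledge base, and LTL goals, and the synthesis result constructs a suitable belief formation.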

In Matt Luckcuck and Marie Farrell: Proceedings Fourth International Workshop on Formal Methods for Autonomous Systems (FMAS) and Fourth International Workshop on Automated and verifiable Software sYstem DEvelopment (ASYDE) (FMAS2022 ASYDE2022), Berlin, Germany, 26th and 27th of September 2022, Electronic Proceedings in Theoretical Computer Science 371, pp. 103–119.
Published: 27th September 2022.

ArXived at: https://dx.doi.org/10.4204/EPTCS.371.8