Trust Modelling and Verification Using Event-B

Asieh Salehi Fathabadi
(University of Southampton)
Vahid Yazdanpanah
(University of Southampton)

Trust is a crucial component in collaborative multiagent systems (MAS) involving humans and autonomous AI agents. Rather than assuming trust based on past system behaviours, it is important to formally verify trust by modelling the current state and capabilities of agents. We argue for verifying actual trust relations based on agents' abilities to deliver intended outcomes in specific contexts. To enable reasoning about different notions of trust, we propose using the refinement-based formal method Event-B. Refinement allows new aspects of trust to be introduced progressively, from abstract models to concrete ones incorporating knowledge and runtime states. We demonstrate the modelling of three trust concepts and the verification of their associated trust properties in MAS. The formal, correctness-by-construction approach allows us to deduce guarantees about trustworthy autonomy in human-AI partnerships. Overall, our contribution facilitates rigorous verification of trust in multiagent systems.

In Marie Farrell, Matt Luckcuck, Mario Gleirscher and Maike Schwammberger: Proceedings Fifth International Workshop on Formal Methods for Autonomous Systems (FMAS 2023), Leiden, The Netherlands, 15th and 16th of November 2023, Electronic Proceedings in Theoretical Computer Science 395, pp. 10–16.
Published: 15th November 2023.
