Comparing Differentiable Logics for Learning Systems: A Research Preview

Thomas Flinkow
(Maynooth University)
Barak A. Pearlmutter
(Maynooth University)
Rosemary Monahan
(Maynooth University)

Extensive research on formal verification of machine learning (ML) systems indicates that learning from data alone often fails to capture underlying background knowledge. A variety of verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, these verifiers typically assume a trained network with fixed weights. ML-enabled autonomous systems are required not only to detect incorrect predictions, but also to self-correct, continuously improving and adapting. A promising approach for creating ML models that inherently satisfy constraints is to encode background knowledge as logical constraints that guide the learning process via so-called differentiable logics. In this research preview, we compare and evaluate various logics from the literature in weakly-supervised contexts, presenting our findings and highlighting open problems for future work. Our experimental results are broadly consistent with results previously reported in the literature; however, learning with differentiable logics introduces a new hyperparameter that is difficult to tune and has a significant influence on the effectiveness of the logics.
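To make the idea concrete, the following is a minimal, hypothetical sketch (in PyTorch) of how a logical constraint might be translated into a differentiable loss term and combined with the usual prediction loss. The example constraint, the product t-norm translation, and the weighting hyperparameter `lam` are illustrative assumptions, not the paper's actual experimental setup; `lam` stands in for the additional hyperparameter mentioned in the abstract.

```python
# Hypothetical sketch: turning a logical constraint into a differentiable
# loss term and adding it to the standard prediction loss.
import torch
import torch.nn.functional as F

def product_tnorm_and(a, b):
    # Fuzzy conjunction under the product t-norm; truth values lie in [0, 1].
    return a * b

def constraint_loss(probs):
    # Assumed background-knowledge constraint for illustration:
    # "no input should be assigned high probability for both class 0 and class 1",
    # i.e. NOT(class_0 AND class_1). Loss is 1 minus the constraint's truth value.
    truth = 1.0 - product_tnorm_and(probs[:, 0], probs[:, 1])
    return (1.0 - truth).mean()

def total_loss(logits, targets, lam=0.5):
    # lam balances fitting the labels against satisfying the logical constraint;
    # this is the kind of hyperparameter the abstract notes is hard to tune.
    probs = F.softmax(logits, dim=-1)
    return F.cross_entropy(logits, targets) + lam * constraint_loss(probs)
```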

In Marie Farrell, Matt Luckcuck, Mario Gleirscher and Maike Schwammberger: Proceedings Fifth International Workshop on Formal Methods for Autonomous Systems (FMAS 2023), Leiden, The Netherlands, 15th and 16th of November 2023, Electronic Proceedings in Theoretical Computer Science 395, pp. 17–29.
Published: 15th November 2023.

ArXived at: https://dx.doi.org/10.4204/EPTCS.395.3