Reliable Natural Language Understanding with Large Language Models and Answer Set Programming

Abhiramon Rajasekharan
(The University of Texas at Dallas)
Yankai Zeng
(The University of Texas at Dallas)
Parth Padalkar
(The University of Texas at Dallas)
Gopal Gupta
(The University of Texas at Dallas)

Humans understand language by extracting information (meaning) from sentences, combining it with existing commonsense knowledge, and then reasoning to draw conclusions. While large language models (LLMs) such as GPT-3 and ChatGPT can leverage patterns in text to solve a variety of NLP tasks, they fall short on problems that require reasoning. They also cannot reliably explain the answers they generate for a given question. To emulate humans better, we propose STAR, a framework that combines LLMs with Answer Set Programming (ASP). We show how LLMs can be used to effectively extract knowledge, represented as predicates, from language. Goal-directed ASP is then employed to reliably reason over this knowledge. We apply the STAR framework to three different NLU tasks that require reasoning: qualitative reasoning, mathematical reasoning, and goal-directed conversation. Our experiments reveal that STAR is able to bridge the reasoning gap in NLU tasks, leading to significant performance improvements, especially for smaller LLMs, i.e., LLMs with fewer parameters. NLU applications developed using the STAR framework are also explainable: along with the predicates generated, a justification in the form of a proof tree can be produced for a given output.
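As a concrete illustration of this pipeline, consider the qualitative-reasoning setting: the LLM translates a sentence into predicates, and a goal-directed ASP system such as s(CASP) combines them with a commonsense rule to answer a query. The sketch below is ours, not taken from the paper; all predicate and rule names are hypothetical.

    % Predicates an LLM might extract from the sentence
    % "A ball is dropped from a rooftop." (hypothetical predicate names)
    object(ball).
    initial_velocity(ball, zero).
    force_on(ball, gravity).

    % Illustrative commonsense rule: an object under gravity that
    % starts at rest speeds up as it falls.
    speed(X, increasing) :- object(X), initial_velocity(X, zero), force_on(X, gravity).

    % Goal-directed query; a system such as s(CASP) evaluates it
    % top-down and can emit a proof tree justifying the answer:
    % ?- speed(ball, increasing).

Because the query is evaluated top-down, the rule instances used in the derivation double as the proof-tree justification mentioned above.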

In Enrico Pontelli, Stefania Costantini, Carmine Dodaro, Sarah Gaggl, Roberta Calegari, Artur D'Avila Garcez, Francesco Fabiano, Alessandra Mileo, Alessandra Russo and Francesca Toni: Proceedings 39th International Conference on Logic Programming (ICLP 2023), Imperial College London, UK, 9th–15th July 2023, Electronic Proceedings in Theoretical Computer Science 385, pp. 274–287.
Published: 12th September 2023.

ArXived at: https://dx.doi.org/10.4204/EPTCS.385.27
