Published: 17th November 2023 · DOI: 10.4204/EPTCS.396 · ISSN: 2075-2180
Preface

Invited Paper:
On the Practicality and Soundness of Refinement Types (Niki Vazou) .... 1

Contextual Refinement Types (Antoine Gaulin and Brigitte Pientka) .... 4
Semi-Automation of Meta-Theoretic Proofs in Beluga (Johanna Schwartzentruber and Brigitte Pientka) .... 20
Parallel Verification of Natural Deduction Proof Graphs (James T. Oswald and Brandon Rozek) .... 36
An Interpretation of E-HA^w inside HA^w (Félix Castro) .... 52
This volume contains a selection of papers presented at LFMTP 2023, the 18th International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice. The workshop, affiliated with the 8th International Conference on Formal Structures for Computation and Deduction (FSCD), was held in Rome, Italy, on July 2, 2023. We are very grateful to the FSCD organizers for providing the infrastructure and for coordinating with the other affiliated events.
Logical frameworks and meta-languages form a common substrate for representing, implementing, and reasoning about a wide variety of deductive systems of interest in logic and computer science. Their design, implementation, and use in reasoning tasks, ranging from the correctness of software to the properties of formal systems, have been the focus of considerable research over the last two decades. This workshop brings together designers, implementors, and practitioners to discuss various aspects impinging on the structure and utility of logical frameworks, including the treatment of variable binding, inductive and co-inductive reasoning techniques, and the expressiveness and lucidity of the reasoning process.
For this edition, we received 5 submissions, each reviewed by at least three members of the program committee. After thorough evaluation, the program committee accepted all 5 papers for presentation and selected 4 for inclusion in the present EPTCS volume.
We want to express our sincere thanks to all the authors who submitted papers to the workshop. We would also like to acknowledge the exceptional efforts of the program committee, which ensured the high quality of the event. The program committee was formed by Roberto Blanco (MPI-SP), Frédéric Blanqui (Inria), Ana Bove (Chalmers University of Technology), Amy Felty (University of Ottawa), Assia Mahboubi (Inria), Narciso Martí-Oliet (Universidad Complutense de Madrid), Gopalan Nadathur (University of Minnesota), Clément Pit-Claudel (Amazon AWS), Andrei Popescu (University of Sheffield), and Claudio Sacerdoti Coen (University of Bologna). We extend our appreciation to the external reviewers, Théo Winterhalter and Samuele Maschio, for their detailed comments and for engaging in fruitful discussions.
Lastly, we would like to convey our deep gratitude to Niki Vazou for her invaluable contribution as the invited speaker at LFMTP 2023.
October 04, 2023
Alberto Ciaffaglione and Carlos Olarte.
PC chairs of LFMTP 2023.
Refinement types [5, 3] are a software verification technique that extends the types of an existing programming language with logical predicates, to verify critical program properties not expressible by the existing type system. As an example, consider the type of the list indexing function (!!) that takes as input a list xs and an index i and returns the i-th element of xs:

Existing Type : (!!) :: [a] -> Int -> a
Refined Type  : (!!) :: xs:[a] -> i:{Int | 0 <= i < len xs} -> a

Here, the two arguments are named and the logical predicate 0 <= i < len xs is used to refine the type of the index, expressing safe indexing. With this refinement type, each time list indexing is used, i.e., xs !! i, the refinement type checker will check that the index i is within bounds.
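To make the example concrete, here is a minimal sketch of a safely indexed lookup in Liquid Haskell's annotation style; the name safeIndex and the module name are illustrative, and the chained comparison is written out as an explicit conjunction:

```haskell
module SafeIndex where

-- Liquid Haskell reads specifications from special {-@ ... @-} comments;
-- without the Liquid Haskell checker, GHC simply ignores the annotation.
{-@ safeIndex :: xs:[a] -> {i:Int | 0 <= i && i < len xs} -> a @-}
safeIndex :: [a] -> Int -> a
safeIndex (x:_)  0 = x
safeIndex (_:xs) i = safeIndex xs (i - 1)
safeIndex []     _ = error "unreachable: excluded by the refinement"
```

Under the refined signature, a call such as safeIndex [1,2,3] 2 type checks, whereas safeIndex [1,2,3] 3 would be rejected at verification time rather than failing at runtime.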
Verification is automated using SMT solvers [1]. For example, to show that [1,2,3] !! 2 is safe, the refinement type checker will generate the Verification Condition (VC) 0 <= 2 < len [1,2,3], which is valid only when the code satisfies the refinement type specification (here, safe indexing). The VC is then passed to an SMT solver, which proves it valid; thus the code is safe.
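The shape of the VC can be sketched in plain Haskell; note that vc is an illustrative name, and the real checker discharges this condition symbolically via the SMT solver rather than evaluating it at runtime:

```haskell
-- The boolean below mirrors the VC template 0 <= i < len xs, which the
-- checker instantiates at each use site of (!!).
vc :: Int -> [a] -> Bool
vc i xs = 0 <= i && i < length xs

main :: IO ()
main = do
  print (vc 2 [1, 2, 3 :: Int])  -- the VC for [1,2,3] !! 2: valid
  print (vc 3 [1, 2, 3 :: Int])  -- an out-of-bounds index: invalid
```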
Finally, refinement types are carefully designed to ensure decidable verification, which is crucial for both practicality and predictability. To establish decidability, refinement type systems restrict the language of specifications to a decidable fragment of logic and design the typing rules, which essentially generate the VCs, so that this logical fragment is preserved.
More complex concepts, such as quantified properties, are expressed through the types themselves. For example, [{v:Int | v <= 42}] expresses the type of lists of integers whose elements are all at most 42, without the need to quantify over the list elements.
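This type-based encoding of a quantified property can be sketched as follows (a hedged Liquid Haskell-style example; SmallInt and smalls are illustrative names):

```haskell
module Smalls where

-- A refined type alias for integers bounded by 42, in Liquid Haskell's
-- annotation syntax; the list type [SmallInt] then constrains every
-- element without any explicit universal quantifier in the logic.
{-@ type SmallInt = {v:Int | v <= 42} @-}

{-@ smalls :: [SmallInt] @-}
smalls :: [Int]
smalls = [1, 7, 42]      -- accepted: each element is at most 42
-- smalls = [1, 7, 100]  -- would be rejected by the refinement checker
```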
In short, refinement types are designed to be naturally integrated, automated, and decidable, i.e., practical. For that reason, they have already been adopted by various programming languages, e.g., Haskell [12], Ruby [6], Scala [4], and Rust [7]. Yet, in the name of practicality, most refinement type implementations sacrifice soundness.
The implementation of refinement type checkers is usually large and error-prone. A refinement type checker has to trust the compiler of the underlying language to generate an intermediate program representation (IPR), the type checking rules (adapted to accommodate the IPR) to generate the logical verification conditions (VCs), and the SMT solver to validate the VCs. All three of these trusted components consist of large code bases that inevitably contain bugs and can potentially lead to unsound verification. Indeed, the implementations of F* [10], Stainless [4], and Liquid Haskell [12] respectively consist of 1.3M, 185.3K, and 423K lines of code, and none of these verifiers isolates a trusted kernel. Approximately five unsoundness bugs per year are reported in each system.
The logic of refinement types is not well understood, leaving it unclear to users which assumptions are safe to make and which lead to inconsistencies, and thus to unsound verification. For example, we recently discovered [11] that function extensionality had been encoded inconsistently in Liquid Haskell for many years. The inconsistent encoding seemed natural and was assumed by both the developers and users of Liquid Haskell. However, under the assumption of function extensionality, Liquid Haskell could prove false, invalidating all of the users' verification effort.
Having acknowledged the practicality and unsoundness of refinement types, we argue that the community should focus on making refinement types sound, inspired by techniques from type theory (e.g., Coq [2], Agda [9], and Lean [8]), where type systems are designed to be sound by construction.