Published: 12th October 2019
DOI: 10.4204/EPTCS.307
ISSN: 2075-2180

EPTCS 307

Proceedings of the Fourteenth Workshop on
Logical Frameworks and Meta-Languages: Theory and Practice
Vancouver, Canada, 22nd June 2019

Edited by: Dale Miller and Ivan Scagnetto

Preface
Invited Presentation: Combining Tactics, Normalization, and SMT Solving to Verify Systems Software (Extended Abstract)
Chris Hawblitzel
Invited Presentation: Cocon: A Type Theory for Defining Logics and Proofs (Extended Abstract)
Brigitte Pientka
A Definitional Implementation of the Lax Logical Framework LLFP in Coq, for Supporting Fast and Loose Reasoning
Fabio Alessi, Alberto Ciaffaglione, Pietro Di Gianantonio, Furio Honsell and Marina Lenisa
GF + MMT = GLF – From Language to Semantics through LF
Michael Kohlhase and Jan Frederik Schaefer
Rapid Prototyping Formal Systems in MMT: 5 Case Studies
Dennis Müller and Florian Rabe
A Weakly Initial Algebra for Higher-Order Abstract Syntax in Cedille
Aaron Stump

Preface

This volume contains a selection of papers presented at LFMTP 2019, the 14th International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice (LFMTP), held on June 22, 2019, in Vancouver, Canada. The workshop was affiliated with the 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS).

Logical frameworks and meta-languages form a common substrate for representing, implementing, and reasoning about a wide variety of deductive systems of interest in logic and computer science. Their design, implementation, and use in reasoning tasks, ranging from the correctness of software to the properties of formal systems, have been the focus of considerable research over the last two decades. The workshop brought together designers, implementors, and practitioners to discuss various aspects impinging on the structure and utility of logical frameworks, including the treatment of variable binding, inductive and co-inductive reasoning techniques, and the expressiveness and lucidity of the reasoning process.

We received 5 submissions in total, each of which was reviewed by 3 program committee members. Of those, 2 were selected as work-in-progress reports for presentation and 3 were selected for presentation and publication. After the workshop, a revised and extended version of one of the work-in-progress reports was also included in these proceedings.

In addition to the presentations of the submitted papers, the program included 2 invited talks by Chris Hawblitzel (Systems Research Group, Microsoft Research, Redmond, USA) and Brigitte Pientka (McGill University, Canada).

We want to thank the members of the Program Committee for their efforts in providing timely reviews and for their useful suggestions and participation in the decisions. Moreover, we want to thank the authors and the invited speakers: their contributions and presentations at the meeting stimulated interesting discussions with the attendees, making the event a success. We are also very grateful to the LICS organizers for providing the infrastructure and the coordination with other events.

Dale Miller and Ivan Scagnetto

August 31, 2019

Program Committee of LFMTP 2019


Combining Tactics, Normalization, and SMT Solving to Verify Systems Software (Extended Abstract)

Chris Hawblitzel (Microsoft Research)

Formal verification tools often start with high-level abstractions like functions and datatypes. Systems software, though, is full of low-level byte and integer manipulation. Some systems software even contains hand-coded assembly language for maximum performance or for accessing low-level hardware resources. Formal verification of such software can benefit from better verification tool support for low-level features, even if the tools were originally designed based on high-level abstractions.

For example, the Everest project has been using the F* language [3] to verify the miTLS protocol stack [1], the EverCrypt/HACL* cryptography library [5], and the EverParse parsing/printing library [4]. Although F* is a high-level functional language based on a dependently typed lambda calculus (the calculus of inductive constructions), it supports low-level reasoning about booleans, bytes, and words by relying on an SMT solver (Z3). The automation provided by the SMT solver eases the verification of miTLS, EverCrypt, and EverParse.
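
For instance, simple facts about bounded integers fall to Z3 with no manual proof. A minimal illustrative sketch (the name and the fact are ours, not Everest code):

(* illustrative sketch, not Everest code: the refinement on b and the
   linear-arithmetic goal are discharged automatically by Z3 *)
let byte_bound (b:nat{b < 0x100}) : Lemma (2 * b + 1 < 0x201) = ()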

Nevertheless, SMT solvers cannot solve all verification tasks quickly, because many problems are NP-hard (SAT solving, linear integer arithmetic), or even undecidable (nonlinear integer arithmetic). Relying entirely on an SMT solver for all low-level reasoning leads to slow proofs or (for undecidable problems) unreliable proofs, making verified software development and software maintenance painful. Therefore, Everest code uses a combination of verification techniques to make verification as easy, fast, and reliable as possible. Because of its native support for dependent types, SMT solving, and tactics, F* provides an ideal platform for experimenting with such combinations.

For example, F*’s support for dependent types means that the F* type checker must be able to evaluate (normalize) expressions during type checking. It can check that the value 7 has type “if 2 < 3 then int else bool” by normalizing “if 2 < 3 then int else bool” to int, for instance. Since normalization is implicitly necessary for type checking anyway, F* also makes normalization available as an explicit feature for programmers to use directly. Furthermore, F* allows programmers to customize each normalization, selectively enabling or disabling evaluation of arithmetic operations, pattern matches, and unrolling of specific function definitions. This customizable normalization feature is often useful for low-level verification tasks. Consider the following F* code:

let nat8 = n:nat{n < 0x100}  
let rec bytes_to_nat (s:seq nat8) (i:nat{i <= length s}) : nat =  
  if i = 0 then 0  
  else s.[i - 1] + 0x100 * bytes_to_nat s (i - 1)

The first declaration defines the type nat8 to be a refinement of the natural number type nat such that any nat8 value must be less than 256. The second declaration defines a recursive function that converts a big-endian sequence “s” of nat8 values into a single natural number. To reason about such a recursive definition, an SMT solver must unroll the definition some number of times, but the number of unrollings required to complete a proof is in general undecidable. By default, F* is conservative and instructs the SMT solver to only unroll a limited number of times (one unrolling, for example). Rather than having the SMT solver unroll the function, a programmer can instruct the F* normalizer to unroll a given function application before generating a query to the SMT solver. This enables a combination of proof-by-normalization verification (also known as “proof by reflection”) and regular SMT verification.
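
The same normalization machinery is available to programmers directly. As a minimal sketch of proof by normalization (using a simpler recursive function than bytes_to_nat; assert_norm is a standard F* primitive, and the norm combinator with steps such as delta_only gives finer control over which definitions unroll):

let rec two_pow (n:nat) : nat = if n = 0 then 1 else 2 * two_pow (n - 1)

(* hedged sketch: the normalizer evaluates two_pow 16 outright, so the
   SMT solver receives a trivial goal and never unrolls the recursion *)
let two_pow_16 () : Lemma (two_pow 16 == 65536) =
  assert_norm (two_pow 16 == 65536)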

While the example above is a very simple combination of SMT solving with normalization, the technique extends to more complex low-level verification scenarios. For example, the Vale tool [2] defines a verification condition (VC) generator for verifying assembly language code. It does so by expressing the VC generator in F*, verifying the correctness of the VC generator itself in F*, and then using F*’s normalizer to execute the VC generator on a given assembly language program, producing a VC that F* then sends to the SMT solver. F* did not require any additional features to support this scenario; the Vale tool simply used F*’s existing features for dependent types, normalization, and SMT solving.
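
Schematically, the Vale pattern can be rendered as follows (a sketch with hypothetical names standing in for the actual Vale definitions):

assume val code : Type0           (* deep embedding of assembly programs *)
assume val safe : code -> prop    (* the desired semantic property *)
assume val vc_gen : code -> prop  (* the VC generator, itself verified in F* *)

(* soundness of the VC generator, proved once and for all (assumed here) *)
assume val vc_gen_sound (c:code) : Lemma (requires vc_gen c) (ensures safe c)

For a concrete program p, the normalizer evaluates vc_gen p down to a first-order formula, the SMT solver discharges that formula, and vc_gen_sound turns the result into safe p.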

Programmers can also use F* tactics [3] to manipulate formulas before F* sends a query to the SMT solver. In F*, tactics are written in the F* language itself, but have access to proof states and goals and can inspect existing F* expressions and generate new F* expressions. In many cases, a specialized tactic can process a formula more efficiently than a general-purpose SMT solver can. For example, low-level cryptography code often relies on reasoning about integer add, multiply, and modulus operations. Proofs about such integer operations may involve both linear arithmetic (multiplication/modulus of variables by constants, such as 2*x) and nonlinear arithmetic (multiplication/modulus of variables by variables, such as x*y). Since nonlinear integer arithmetic is undecidable, a specialized tactic can simplify a formula before F* generates an SMT query so that the SMT solver only needs to worry about the remaining linear arithmetic. The example below uses a tactic “canon_semiring”, written in F*, to reorder variables within nonlinear multiplications into a canonical form (e.g. rewriting c*(b*a) as a*(b*c)). Given these reorderings, the SMT solver then automatically completes the rest of the proof.

let demo_canonizer (a b c d e x:int) : Lemma  
  (requires x == d * e)  
  (ensures (a * (b * c) + (2 * d) * e == e * d + c * (b * a) + x))  
  =  
  assert_by_tactic (c * (b * a) == a * (b * c)) (fun _ -> canon_semiring int_cr);  
  assert_by_tactic (e * d == d * e) (fun _ -> canon_semiring int_cr);  
  assert_by_tactic ((2 * d) * e == 2 * (d * e)) (fun _ -> canon_semiring int_cr)

The example above relies on the ability of F* tactics to inspect F* expressions (e.g. to inspect the expression c*(b*a)), but F* tactics can also generate new expressions. This allows a form of meta-programming where F* tactics can generate new F* programs. For example, tactics can inspect F* types and automatically generate parsers and printers for values of those types [3]. The SMT solver can then automatically verify properties of the generated code: parsing a printed value results in the original value, for instance.
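
As a tiny sketch of this generative ability (assuming the standard FStar.Tactics module; this is not the parser generator of [3]), a tactic can synthesize the very expression being defined:

open FStar.Tactics

(* hedged sketch: the tactic builds the expression 42 and supplies it as
   the definition, using the "_ by tactic" syntax described in [3] *)
let forty_two : int = _ by (exact (`42))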

Each of the examples described above uses some combination of SMT solving, normalization, and tactics to verify aspects of low-level code. Since no single verification technique is likely to be ideal for all verification problems, verification frameworks should make it as convenient as possible for programmers to combine these techniques.

References

[1]   Karthikeyan Bhargavan, Antoine Delignat-Lavaud, Cédric Fournet, Markulf Kohlweiss, Jianyang Pan, Jonathan Protzenko, Aseem Rastogi, Nikhil Swamy, Santiago Zanella-Béguelin & Jean Karim Zinzindohoué (2017): Implementing and Proving the TLS 1.3 Record Layer. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 463–482, doi:10.1109/SP.2017.58.

[2]   Aymeric Fromherz, Nick Giannarakis, Chris Hawblitzel, Bryan Parno, Aseem Rastogi & Nikhil Swamy (2019): A Verified, Efficient Embedding of a Verifiable Assembly Language. Proc. ACM Program. Lang. 3(POPL), pp. 63:1–63:30, doi:10.1145/3290376.

[3]   Guido Martínez, Danel Ahman, Victor Dumitrescu, Nick Giannarakis, Chris Hawblitzel, Cătălin Hriţcu, Monal Narasimhamurthy, Zoe Paraskevopoulou, Clément Pit-Claudel, Jonathan Protzenko, Tahina Ramananandro, Aseem Rastogi & Nikhil Swamy (2019): Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms. In Luís Caires, editor: Programming Languages and Systems, Springer International Publishing, Cham, pp. 30–59, doi:10.1007/978-3-030-17184-1_2.

[4]   Tahina Ramananandro, Antoine Delignat-Lavaud, Cédric Fournet, Nikhil Swamy, Tej Chajed, Nadim Kobeissi & Jonathan Protzenko (2019): EverParse: Verified Secure Zero-Copy Parsers for Authenticated Message Formats. In: 28th USENIX Security Symposium (USENIX Security 19), USENIX Association, Santa Clara, CA, pp. 1465–1482. Available at https://www.usenix.org/conference/usenixsecurity19/presentation/delignat-lavaud.

[5]   Jean-Karim Zinzindohoué, Karthikeyan Bhargavan, Jonathan Protzenko & Benjamin Beurdouche (2017): HACL*: A Verified Modern Cryptographic Library. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, ACM, New York, NY, USA, pp. 1789–1806, doi:10.1145/3133956.3134043.


Cocon: A Type Theory for Defining Logics and Proofs (Extended Abstract)

Brigitte Pientka (McGill University, Canada)

Back in the 1980s, two different foundations were proposed for mechanizing metatheory. One is the Edinburgh Logical Framework (LF) proposed by Harper et al. (1993). LF provides a general and uniform treatment of syntax, rules, and proofs using a dependently typed λ-calculus. In particular, we can exploit the LF function space to represent variable bindings and the scope of hypotheses in proof derivations using higher-order abstract syntax (HOAS) trees. This approach can offer substantial benefits: programmers do not need to build up the basic mathematical infrastructure, they can work at a higher level of abstraction, and encodings are more compact, making it easier to mechanize formal systems together with their meta-theory. However, LF is intentionally weak and lacks case analysis and recursion.
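
The flavor of HOAS can be conveyed by a small sketch, given here in F* rather than LF; since F*’s strict positivity check rules out the full LF encoding, the sketch uses the weak (parametric) variant, but the essential idea is the same: object-level binders are meta-level functions.

(* hedged sketch in F*, not LF: a binder is represented by a function of
   the meta-language; positivity forces the weak, parametric variant *)
noeq type tm (v:Type) =
  | Var : v -> tm v
  | App : tm v -> tm v -> tm v
  | Lam : (v -> tm v) -> tm v

(* the term \x. x x, with no variable names and no de Bruijn indices *)
let self_app (v:Type) : tm v = Lam (fun x -> App (Var x) (Var x))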

Dependent type theories such as the Calculus of Constructions or Martin-Löf Type Theory provide another foundation for formalizing mathematics. While they do not provide intrinsic support for common issues related to modelling the scope of assumptions and substitution, their expressive power goes well beyond what LF offers: they provide a hierarchy of universes and support recursive computation on types and terms.

In this talk, I describe Cocon, a dependent type theory that allows us to mix the (weak) LF function space, used to represent higher-order abstract syntax (HOAS) trees, with the (strong) function space that allows us to describe (recursive) computation over types and terms. We mediate between HOAS representations and computations using contextual modal types (Nanevski et al. 2008; Pientka 2008). Our type theory also supports an infinite hierarchy of universes and hence type-level computation, thereby providing metaprogramming and (small-scale) reflection.

I will motivate the underlying ideas, survey some of the results that led to the design of Cocon, and show how this work resolves the tension between HOAS-based proof environments and dependent type theories such as the Calculus of Constructions or Martin-Löf Type Theory. As such, it lays the foundation for incorporating the LF methodology into dependently typed proof assistants such as Agda (Norell 2007) and Coq (Bertot and Castéran 2004) and hence bridges the longstanding gap between these two worlds.

This work is based on (Pientka et al. 2019a); an extended version is available as (Pientka et al. 2019b).

References

    Bertot, Y. and Castéran, P. (2004). Interactive Theorem Proving and Program Development. Coq’Art: The Calculus of Inductive Constructions. Springer, doi:10.1007/978-3-662-07964-5

    Harper, R., Honsell, F., and Plotkin, G. (1993). A framework for defining logics. Journal of the ACM, 40(1):143–184, doi:10.1145/138027.138060

    Nanevski, A., Pfenning, F., and Pientka, B. (2008). Contextual modal type theory. ACM Transactions on Computational Logic, 9(3):1–49, doi:10.1145/1352582.1352591

    Norell, U. (2007). Towards a practical programming language based on dependent type theory. PhD thesis, Department of Computer Science and Engineering, Chalmers University of Technology. Technical Report 33D.

    Pientka, B. (2008). A type-theoretic foundation for programming with higher-order abstract syntax and first-class substitutions. In 35th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’08), pages 371–382. ACM Press, doi:10.1145/1328438.1328483

    Pientka, B., Abel, A., Ferreira, F., Thibodeau, D., and Zucchini, R. (2019a). A type theory for defining logics and proofs. In 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). IEEE Computer Society, doi:10.1109/LICS.2019.8785683

    Pientka, B., Abel, A., Ferreira, F., Thibodeau, D., and Zucchini, R. (2019b). Cocon: Computation in contextual type theory. https://arxiv.org/abs/1901.03378.