Computational higher-order logics


The λΠ-calculus

The λΠ-calculus is a typed λ-calculus with dependent types. Known under many names (λΠ, LF, λP), it was proposed in a seminal paper by Harper, Honsell, and Plotkin [HHP93] as a logical framework to define logics and express proofs in those logics. The calculus is small but powerful enough to express a wide variety of logics. Two features make this possible. First, λ-abstractions allow the convenient and efficient representation of binders using higher-order abstract syntax (HOAS). Second, Π-types allow the correct representation of dependencies. The calculus gained a lot of popularity and was implemented in systems such as Twelf [PS99], which has been successfully used as a logical framework in various projects [App01, SS06].
Our work draws heavily on the λΠ tradition. The λΠ-calculus modulo rewriting is an extension of this system with rewrite rules, and our embeddings rely on the same principles. In this chapter, we recall the theory of λΠ and show the main ideas behind its ability to express logics. In particular, we show that there are two main categories of embeddings, along the lines of Geuvers [GB99]: computational embeddings and higher-order embeddings. We show that they have different properties and that they are incompatible, thus justifying the extension with rewriting presented in the next chapter.

Definition

We choose a presentation close to pure type systems, which does not make a distinction between terms and types. The reason behind this choice is that it leads to less duplication in the syntax and typing rules. The presentation of Harper et al. [HHP93] uses separate syntactic categories for objects, types, and kinds, as well as a separate signature Σ for global constants, to better differentiate between the declarations that are part of the logic specification and those that are local variable declarations, i.e. free hypotheses and free term variables. We will not make these distinctions at the syntax level and instead rely on naming conventions to differentiate between the different classes. The original presentation can be found in Appendix B.1.
Definition 2.1.1 (Syntax). The syntax of λΠ is given by the following grammar:
sorts s ∈ S ::= Type | Kind
terms M, N, A, B ∈ T ::= x | s | Πx : A . B | λx : A . M | M N
contexts Σ, Γ ∈ G ::= ∅ | Γ, x : A
We try to use letters M, N for objects, letters A, B for types, and the letter K for kinds. We try to use the letter Σ for the context of global declarations (i.e. constants) that are proper to the entire logic, and the letter Γ for the context of local declarations.
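For illustration, assuming a declared constant ι : Type (a hypothetical base type, not part of the calculus itself), the identity function on ι is the term
λx : ι . x : Πx : ι . ι.
As usual, we abbreviate Πx : A . B as A → B when x does not occur free in B, so this type can also be written ι → ι.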
Definition 2.1.2 (Typing). The typing relations
• Γ ⊢ M : A, meaning that the term M has type A in the context Γ,
• Γ ⊢ WF, meaning that the context Γ is well-formed,
are derived by the rules in Figure 2.1. A term A is a well-formed type in Γ when Γ ⊢ WF and either Γ ⊢ A : s or A = s for some s ∈ S. We write this as Γ ⊢ A WF. It is inhabited in Γ when ∃M, Γ ⊢ M : A. We write Γ ⊢ M : A : B as a shortcut for Γ ⊢ M : A and
Γ ⊢ A : B. We sometimes write ⊢λΠ instead of ⊢ to disambiguate.
The λΠ-calculus enjoys many of the usual desirable properties for systems of typed lambda calculus, including confluence, subject reduction, strong normalization, and decidability of type checking. Since it is an instance of a pure type system and since we will not use it directly, we will postpone the presentation of these properties to the chapter on pure type systems (Chapter 4).

As a first-order logic calculus

The λΠ-calculus can be seen as the calculus of minimal first-order logic through the Curry–Howard correspondence. First-order logic allows predicates and quantification over objects, which correspond naturally to dependent types: a universal quantification ∀x.A(x) is interpreted as a dependent function type Πx.A(x), and a proof of ∀x.A(x) is interpreted as a function taking an object x and producing a proof of A(x).
Consider a signature in first-order logic, with function symbols f of arity n_f and predicate symbols p of arity n_p. First, we declare in our context Σ a type ι : Type, which is the type of the objects of the logic. We then extend the signature with declarations f : ι^{n_f} → ι and p : ι^{n_p} → Type. The propositions of minimal predicate logic then have a direct interpretation as types:
[p (M1, . . . , Mn)] = p [M1] . . . [Mn]
[A ⇒ B] = [A] → [B]
[∀x.A] = Πx : ι . [A]
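For example, with a hypothetical unary predicate symbol p, declared as p : ι → Type, the proposition ∀x. p(x) ⇒ p(x) is interpreted as the type
Πx : ι . p x → p x,
which is inhabited by the proof term λx : ι . λh : p x . h. Note that a cut in the logic becomes a β-redex in λΠ, so proof normalization is simulated by β-reduction; this is what makes the embedding computational.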

As a logical framework

Instead of interpreting propositions as types, we declare a type of propositions o : Type and constructors of the type o representing implication and universal quantification:
imp : o → o → o,
forall : (ι → o) → o.
A predicate symbol p of arity n is now declared as p : ι^n → o. The propositions are now interpreted as terms of type o:
[p (M1, . . . , Mn)] = p [M1] . . . [Mn]
[A ⇒ B] = imp [A] [B]
[∀x.A] = forall (λx : ι . [A]) .
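For instance, with the same hypothetical predicate p, now declared as p : ι → o, the proposition ∀x. p(x) ⇒ p(x) is represented by the code
forall (λx : ι . imp (p x) (p x)),
which is a term of type o rather than a type.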
Notice the use of higher-order abstract syntax (HOAS) to represent binding in the translation of universal quantification. The terms of type o can be thought of as codes representing propositions. To interpret the judgment ⊢ A, we add proof : o → Type, which takes the code of a proposition to the type of its proofs. We then add constructors for each of the inference rules of minimal predicate logic:
imp_intro : Πp, q : o . (proof p → proof q) → proof (imp p q) ,
imp_elim : Πp, q : o . proof (imp p q) → proof p → proof q,
forall_intro : Πp : (ι → o) . (Πx : ι . proof (p x)) → proof (forall p) ,
forall_elim : Πp : (ι → o) . proof (forall p) → Πx : ι . proof (p x) .
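Continuing the illustrative example above (with the hypothetical p : ι → o), a sketch of the proof of ∀x. p(x) ⇒ p(x) introduces the quantifier and then the implication:
forall_intro (λx : ι . imp (p x) (p x)) (λx : ι . imp_intro (p x) (p x) (λh : proof (p x) . h)),
a term of type proof (forall (λx : ι . imp (p x) (p x))). Checking this term relies on β-conversion: the second argument of forall_intro must have type Πx : ι . proof ((λx : ι . imp (p x) (p x)) x), which β-reduces to Πx : ι . proof (imp (p x) (p x)).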
A proof π of A is now translated as a term [π] of type proof [A]. If we write ⟦A⟧ for proof [A], the embedding again satisfies the correctness property: Γ ⊢ π : A ⟺ Σ, ⟦Γ⟧ ⊢ [π] : ⟦A⟧, where ⟦Γ⟧ translates each hypothesis A in Γ into a declaration of type ⟦A⟧.
This embedding is not computational, as it does not preserve reductions. Indeed, it is straightforward to see that the translation of a cut is a normal form; in fact, the translation of any proof is a normal form. There are well-typed non-normal terms that do not belong to the image of the translation, such as
(λp : o . imp_intro p p (λh : proof p . h)) (imp q q)
but those can be reduced to a normal form such as
imp_intro (imp q q) (imp q q) (λh : proof (imp q q) . h)
which is the translation of a proof of (q ⇒ q) ⇒ (q ⇒ q).
However, β-reduction is still very important because we need it for the HOAS representation: a substitution is represented by a function application (such as p x in the type of forall_elim), and β-reduction is necessary for it to give the expected result. We say that β-reduction is administrative in this embedding because a step of β-reduction does not correspond to anything in the original system; in fact, β-equivalent terms represent the same proof. With this embedding, the strong normalization of β-reduction in λΠ has absolutely no bearing on the original logic.
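For example, given a hypothetical m : proof (forall (λx : ι . imp (p x) (p x))) and n : ι, the instantiation
forall_elim (λx : ι . imp (p x) (p x)) m n
has type proof ((λx : ι . imp (p x) (p x)) n), which β-reduces to the expected proof (imp (p n) (p n)). This β-step is administrative: it corresponds to no proof step in the original logic.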

Limitations of λΠ

We saw two kinds of embeddings, which we call computational embeddings and higher-order embeddings. The first interprets propositions directly as types and can interpret the proofs of minimal first-order logic as well as systems that are strictly smaller (e.g. minimal propositional logic, the simply typed λ-calculus, etc.). It is computational because it preserves reductions. The second interprets propositions as objects and uses a predicate proof for the type of their proofs. It is not computational because it does not preserve reductions but, since it represents propositions as objects, it can express higher-order quantification and therefore interpret systems that are strictly larger than λΠ.
For example, we cannot interpret full predicate logic in the first approach because there is no counterpart for conjunction (∧) and disjunction (∨) in λΠ. However, it is easy to extend the embedding of the second approach to include them:
and : o → o → o,
and_intro : Πp, q : o . proof p → proof q → proof (and p q) ,
and_elim1 : Πp, q : o . proof (and p q) → proof p,
and_elim2 : Πp, q : o . proof (and p q) → proof q,
or : o → o → o,
or_intro1 : Πp, q : o . proof p → proof (or p q) ,
or_intro2 : Πp, q : o . proof q → proof (or p q) ,
or_elim : Πp, q, r : o . proof (or p q) →
(proof p → proof r) → (proof q → proof r) → proof r.
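As a sanity check that these constants suffice, here is an illustrative proof term for the commutativity of conjunction, (p ∧ q) ⇒ (q ∧ p), for hypothetical codes p, q : o:
imp_intro (and p q) (and q p)
(λh : proof (and p q) . and_intro q p (and_elim2 p q h) (and_elim1 p q h)) .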
Similarly, it is possible to embed higher-order systems such as system F and higher-order logic (HOL) in λΠ using this approach. In fact, the embedding of system F is very close to the embedding of minimal predicate logic given above: it is obtained by changing forall to quantify over propositions instead of objects and changing the types of the other constants correspondingly. After renaming the constants to fit better with the language of system F, the embedding of the types becomes:
type : Type,
arrow : type → type → type,
forall : (type → type) → type.
One should not confuse type (with a lowercase “t”), which is the type of (the representation of) the types of system F, and Type (with an uppercase “T”), which is the type of the types of λΠ. The representation of the types of system F as objects is precisely what allows us to express polymorphism. The embedding of terms becomes:
term : type → Type,
lam : Πa, b : type . (term a → term b) → term (arrow a b) ,
app : Πa, b : type . term (arrow a b) → term a → term b,
biglam : Πp : (type → type) . (Πx : type . term (p x)) → term (forall p) ,
bigapp : Πp : (type → type) . term (forall p) → Πx : type . term (p x) .
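For illustration, the polymorphic identity Λa. λx : a . x of system F, of type ∀a. a → a, is represented by the term
biglam (λa : type . arrow a a) (λa : type . lam a a (λx : term a . x)),
of type term (forall (λa : type . arrow a a)).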
Again, this embedding is not computational because it does not preserve reduction: if M −→β M′ in system F, then [M] ↛β [M′] in λΠ. In fact, it is not possible to have a computational embedding of system F in λΠ without extending the language, assuming of course that we are interested in an embedding that is correct, that is, sound and complete. In that sense, we cannot have an embedding that is both higher-order and computational.
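To see the failure concretely, consider the system F redex (λx : a . x) y, which β-reduces to y (for a hypothetical type a and variable y : a). Its translation
app a a (lam a a (λx : term a . x)) y
is a β-normal form in λΠ, since app and lam are constants; the reduction step has no counterpart in the embedding.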
Why is that a problem? Consider the calculus of constructions, which contains both higher-order quantification and dependent types. Because of higher-order quantification, we know we need to follow the second approach. At the same time, we need to account for the conversion rule: if Γ ⊢ M : A and A ≡β B for some well-formed type B, then Γ ⊢ M : B.
For the embedding to be complete, we must then have ⟦Γ⟧ ⊢ [M] : ⟦B⟧ whenever ⟦Γ⟧ ⊢ [M] : ⟦A⟧. However, since the embedding does not preserve reduction, we have ⟦A⟧ ≢ ⟦B⟧, and therefore ⟦Γ⟧ ⊬ [M] : ⟦B⟧, which means the embedding is incomplete. As soon as we want to encode a logic that is at the same time computational and higher-order, we have to choose between a computational embedding (which would be incomplete because it cannot express higher-order quantification) and a higher-order embedding (which would be incomplete because it does not simulate computation), unless we give up soundness.
There is a third alternative, which is to encode the equivalence of types as a relation equiv and have a constructor, called a coercion, for the conversion rule:
conv : Πa, b : type . equiv a b → term a → term b .
However, these embeddings are very costly because they require encoding the proof of equivalence inside the proof term, and these proofs can be quite large in practice. As a matter of fact, the technique of proof by reflection [BC04] relies precisely on using the computational power of the logic to avoid them. That technique has been successfully used in proof assistants, for example in Coq to build useful libraries such as ssreflect [GLR09] and to prove important theorems such as the four-color theorem [Gon05, Gon08].
To encode rich theories like the calculus of constructions or intuitionistic type theory, we therefore need a framework strictly larger than λΠ, one that can express embeddings that are at the same time higher-order and computational. This is where the λΠ-calculus modulo rewriting comes in. By adding rewrite rules to the framework, we can recover the computational aspect of the logic and thus ensure that the embedding is complete. We show how to do this in Chapter 3.


Table of contents

Introduction 
1 Introduction 
1.1 Proof systems
1.1.1 Formalization of mathematics
1.1.2 Software verification
1.2 Universality of proofs
1.2.1 Computational reasoning
1.2.2 Higher-order reasoning
1.3 Type theory and logical frameworks
1.3.1 The Curry–Howard correspondence
1.3.2 The logical framework approach
1.4 Computational higher-order logics
1.4.1 Pure type systems
1.4.2 Universes and cumulativity
1.5 Contributions
1.6 Preliminary notions and notations
I Frameworks and embeddings
2 The λΠ-calculus
2.1 Definition
2.2 As a first-order logic calculus
2.3 As a logical framework
2.4 Limitations of λΠ
3 The λΠ-calculus modulo rewriting
3.1 Definition
3.2 Basic properties
3.2.1 Soundness properties
3.2.2 Completeness properties
3.3 Computational higher-order embeddings
II Pure type systems 
4 Pure type systems 
4.1 Definition
4.2 Examples of pure type systems
4.3 Basic properties
4.4 Inter-system properties
4.4.1 Subsystems
4.4.2 Morphisms
5 Embedding pure type systems 
5.1 Translation
5.2 Completeness
5.2.1 Preservation of substitution
5.2.2 Preservation of equivalence
5.2.3 Preservation of typing
5.3 Alternative embeddings
5.3.1 Embedding without in the rewriting at the level of types
5.3.2 Embedding without rewriting at the level of types
6 Conservativity 
6.1 Normalization and conservativity
6.2 Proof of conservativity
6.2.1 Conservativity of equivalence
6.2.2 Conservativity of typing
7 Application: translating HOL to Dedukti 
7.1 HOL
7.2 Translation
7.2.1 Translation of types
7.2.2 Translation of terms
7.2.3 Translation of proofs
7.3 Completeness
7.4 Implementation
7.4.1 Proof retrieval
7.4.2 Holide: HOL to Dedukti
7.4.3 Extensions
III Cumulative and infinite hierarchies 
8 Cumulative type systems 
8.1 Definition
8.1.1 Specification and syntax
8.1.2 Subtyping
8.1.3 Typing
8.2 Basic properties
8.3 Inter-system properties
8.3.1 Subsystems
8.3.2 Morphisms
8.3.3 Closures
8.4 Principal typing
8.4.1 Definition
8.4.2 Derivation rules
8.4.3 Soundness
8.4.4 Completeness
8.5 Examples of CTSs with principal typing
9 Embedding cumulativity 
9.1 Translation
9.2 Completeness
9.2.1 Preservation of substitution
9.2.2 Preservation of equivalence
9.2.3 Preservation of typing
10 Infinite universe hierarchies 
10.1 Predicative universes
10.2 Impredicative universe
10.3 Cumulativity
11 Application: translating CIC to Dedukti 
11.1 Inductive types
11.1.1 Inductive types and eliminators
11.1.2 Eliminators in the λΠ-calculus modulo rewriting
11.2 Other features
11.2.1 Modules
11.2.2 Local let definitions and local fixpoints
11.2.3 Floating universes
11.2.4 Universe polymorphism
11.3 Implementation
11.3.1 Coqine: Coq to Dedukti
11.3.2 Krajono: Matita to Dedukti
Conclusion
12 Conclusion 
12.1 Related work
12.1.1 Logical embeddings
12.1.2 Proof interoperability
12.2 Future work
12.2.1 Improving the translations
12.2.2 Designing new embeddings
12.2.3 Interoperability in Dedukti
Appendix
