5 include "tutorial/chapter3.ma".
6 include "basics/logic.ma".
8 (* In the previous section, we introduced many interesting logical connectives
9 by means of inductive definitions in the sort Prop. Do the same constructions
make any sense in Type? The answer is yes! Not only do they make sense, but they
11 are indeed very familiar constructions: cartesian product, disjoint sum, empty
12 and singleton types, and ``sigma types'' (disjoint union of families of types
13 indexed over a given base type). This is not a coincidence, but a consequence of
14 a strong principle known under the name of ``Propositions as Types analogy'' (or
15 Curry-Howard correspondence).
We shall first discuss the constructions, and then we shall come back to the
underlying correspondence in the last sections of this chapter. *)
20 (***************************** Cartesian Product ******************************)
22 (* The cartesian product of two types A and B is defined as follows: *)
24 inductive Prod (A,B:Type[0]) : Type[0] ≝
25 pair : A → B → Prod A B.
27 (* Observe that the definition is identical to the definition of the logical
28 conjunction, but for the fact that the sort Prop has been replaced by Type[0].
The following declarations allow us to use the canonical notations A × B for
31 the product and 〈a,b〉 for the pair of the two elements a and b. *)
33 notation "hvbox(x break × y)" with precedence 70 for @{ 'product $x $y}.
34 interpretation "Cartesian product" 'product A B = (Prod A B).
36 notation "hvbox(\langle term 19 a, break term 19 b\rangle)"
37 with precedence 90 for @{ 'pair $a $b}.
38 interpretation "Pair construction" 'pair x y = (pair ?? x y).
(* We can define the two projections fst and snd by a simple case analysis of
the pair: *)
43 definition fst ≝ λA,B.λp:A×B. match p with [pair a b ⇒ a].
44 definition snd ≝ λA,B.λp:A×B. match p with [pair a b ⇒ b].
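(* As a quick sanity check, the projections compute as expected on an explicit
pair. The following two lemmas are a hedged example of ours (the names are not
part of the original development); both goals are closed by conversion: *)

lemma fst_of_pair: ∀A,B:Type[0].∀a:A.∀b:B. fst ?? 〈a,b〉 = a.
#A #B #a #b // qed.

lemma snd_of_pair: ∀A,B:Type[0].∀a:A.∀b:B. snd ?? 〈a,b〉 = b.
#A #B #a #b // qed.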
46 (* As in the case of inductive propositions, Matita automatically generates
47 elimination principles for A × B. In this case, however, it becomes interesting
48 to consider the possibility that the proposition towards which we are
eliminating a given pair p:A × B contains a copy of p itself. In other words,
50 if we have p:A × B in the current context, it is possible that p also occurs in
51 the current goal, that is that the current goal "depends" on p.
A typical example is in the proof of the so called ``surjective pairing''
property: *)
56 lemma surjective_pair: ∀A,B.∀p:A×B. p = 〈fst ?? p, snd ?? p〉.
58 (* where p explicitly occurs in the conclusion. The proof is straightforward: we
59 introduce A, B and p and proceed by cases on p: since p is a pair, the only
60 possible case is that it is of the form 〈a,b〉 for some a and b.*)
62 #A #B #p cases p #a #b
64 (* At this point the goal looks like
65 〈a,b〉 = 〈fst A B 〈a,b〉, snd A B 〈a,b〉〉
and the two sides of the equation are convertible. *)

// qed.
69 (* When we call cases p we are actually applying the dependent elimination
70 principle Prod_ind for the product, so it becomes interesting to have a look at
it (we shall postpone the discussion of the way it is generated to a later
section):

  ∀A,B.∀P:A×B→Prop. (∀a:A.∀b:B. P 〈a,b〉) → ∀x:A×B. P x
76 The previous principle has a very natural backward reading: in order to prove
77 that the property (P x) holds for any x of type A×B it is enough to prove P〈a,b〉
78 for any a:A and b:B. *)
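(* For instance, we can redo the proof of surjective pairing by applying
Prod_ind directly instead of invoking the cases tactic. This is a hedged
variant of ours (the lemma name is not part of the original development): *)

lemma surjective_pair_again: ∀A,B.∀p:A×B. p = 〈fst ?? p, snd ?? p〉.
#A #B @Prod_ind #a #b // qed.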
80 (******************************* Disjoint Sum *********************************)
82 (* By reflecting in Type the definition of the logical disjunction we obtain the
83 disjoint union (the sum) of two types: *)
inductive Sum (A,B:Type[0]) : Type[0] ≝
   inl : A → Sum A B
 | inr : B → Sum A B.
89 notation "hvbox(a break + b)"
90 left associative with precedence 55 for @{ 'plus $a $b }.
92 interpretation "Disjoint union" 'plus A B = (Sum A B).
94 (* The two constructors are the left and right injections. The dependent
elimination principle Sum_ind has the following shape:

  ∀A,B.∀P:A+B→Prop. (∀a:A. P (inl A B a)) → (∀b:B. P (inr A B b)) → ∀x:A+B. P x
100 that is, in order to prove the property (P x) for any x:A+B it is enough to
101 prove P (inl A B a) and P (inr A B b) for all a:A and b:B. *)
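(* As a small example of use (a hedged sketch of ours, the name sum_swap is not
part of the original development), a function exchanging the two injections can
be defined by case analysis on the sum: *)

definition sum_swap: ∀A,B:Type[0]. A+B → B+A ≝ λA,B,x.
  match x with [ inl a ⇒ inr ?? a | inr b ⇒ inl ?? b ].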
103 (************************* Empty Type and Unit Type ***************************)
(* The counterparts of False and True are, respectively, an empty type and a
singleton type: *)
108 inductive void : Type[0] ≝.
109 inductive unit : Type[0] ≝ it: unit.
111 (* The elimination principle void_ind for void is simply
113 ∀P:void→Prop.∀x:void.P x
stating that any property is true for an element of type void (since we have no
way to construct such an element).
118 Similarly, the elimination principle for the unit type is:
120 ∀P:unit→Prop. P it → ∀x:unit.P x
122 Indeed, in order to prove that a property is true for any element of the unit
123 type, it is enough to prove it for the unique (canonical) inhabitant "it".
As an exercise, let us prove that all inhabitants of unit are equal to each
other: *)
128 lemma eq_unit: ∀a,b:unit. a = b.
130 (* The idea is to proceed by cases on a and b: we have only one possibility,
namely a=it and b=it, hence we end up proving it=it, which is trivial. Here is
the proof: *)
134 #a cases a #b cases b // qed. (* also: * * // qed. *)
136 (* It is interesting to observe that we get exactly the same behavior by
137 directly applying unit_ind instead of proceeding by cases. In fact, this is an
138 alternative proof: *)
140 lemma eq_unit_again: ∀a,b:unit. a = b.
141 @unit_ind @unit_ind // qed.
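(* The same game can be played with void: its elimination principle lets us
derive any property from a (non-existent) inhabitant of the empty type. A
hedged example of ours (the lemma name is our own): case analysis on x
generates no subgoals, so the proof is immediately complete. *)

lemma void_absurd: ∀P:Prop.∀x:void. P.
#P #x cases x qed.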
143 (********************* Sigma Types and dependent matching *********************)
145 (* As a final example, let us consider ``type'' variants of the existential
146 quantifier; in this case we have two interesting possibilities: *)
148 inductive Sig (A:Type[0]) (Q:A → Prop) : Type[0] ≝
149 Sig_intro: ∀x:A. Q x → Sig A Q.
151 inductive DPair (A:Type[0]) (Q:A → Type[0]) : Type[0] ≝
152 DPair_intro: ∀x:A. Q x → DPair A Q.
154 (* In the first case (traditionally called sigma type), an element of type
155 (Sig A P) is a pair (Sig_intro ?? a h) where a is an element of type A and h is a
156 proof that the property (P a) holds.
158 A suggestive reading of (Sig A P) is as the subset of A enjoying the property P,
159 that is {a:A | P(a)}.
In the second case, an element of DPair A B is a "dependent" pair
(DPair_intro ?? a h) where a is an element of type A and h is an element of
(B a); the intuition is to look at DPair A B as a (disjoint) family of sets B_a
indexed over elements a:A.
In both cases it is possible to define projections extracting the two components
of the pair. Let us discuss the case of the sigma type (the other one is
entirely analogous).
170 Extracting the first component (the element) is easy: *)
172 definition Sig_fst ≝ λA:Type[0].λP:A→Prop.λx:Sig A P.
173 match x with [Sig_intro a h ⇒ a].
175 (* Getting the second component is a bit trickier. The first problem is
176 the type of the result: given a pair (Sig_intro ?? a h): Sig A P, the type of
177 h is (P a), that is P applied to the first argument of the pair of which we want
178 to extract the second component!
Luckily, in a language with dependent types, it is not a problem to write such a
type:
182 definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ ...
184 A subtler problem is met when we define the body. If we just write
definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ λA,P,x.
187 match x with [Sig_intro a h ⇒ h].
189 the system will complain about the type of h. The point is that the type of this
term depends on a, that is, in a not so trivial way, on the input argument x.
In these situations, the type inference algorithm of Matita requires a little
192 help: in particular the user is asked to explicitly provide the return type of
193 the match expression, that is a map uniformly expressing the type of all
194 branches of the case as a function of the matched expression.
195 In our case we only have one branch and the return type is
λx.P (Sig_fst A P x)
We declare such a type immediately after the match, introduced by the keyword
return: *)
202 definition Sig_snd : ∀A,P.∀x:Sig A P.P (Sig_fst A P x) ≝ λA,P,x.
203 match x return λx.P (Sig_fst A P x) with [Sig_intro a h ⇒ h].
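(* The projections for DPair (the other variant mentioned above) are obtained in
exactly the same way, again providing the return type of the dependent match.
This is a hedged sketch of ours, with names of our own choosing: *)

definition DPair_fst ≝ λA:Type[0].λP:A→Type[0].λx:DPair A P.
  match x with [DPair_intro a h ⇒ a].

definition DPair_snd : ∀A,P.∀x:DPair A P.P (DPair_fst A P x) ≝ λA,P,x.
  match x return λx.P (DPair_fst A P x) with [DPair_intro a h ⇒ h].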
(* Remark: The careful reader may have noticed that we also made a
206 systematic abuse of the arrow symbol: the expression A → B was sometimes
207 interpreted as the "implication" between A and B and sometimes as the "function
208 space" between A and B. The reason of this fact is that, actually, in a
209 foundational system like the Calculus of Construction, they are the very same
210 notion: we only distinguish them according to the sort of the resulting
212 Similarly for the dependent product ∏x:A.B: if the resulting expression is of
213 sort Prop we shall look at it as a "universal quantification" (using the
214 notation ∀x:A.B), while if it is in Type we shall typically regard it as a
generalized cartesian product of a family of types B indexed over a base type A.
*)
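(* A hedged illustration of this remark (the definitions and names below are our
own): the very same λ-term inhabits a "function space" when the types involved
are data types, and an "implication" (a logical syllogism, in this case) when
they are propositions: *)

definition compose: ∀A,B,C:Type[0]. (A → B) → (B → C) → A → C ≝
  λA,B,C,f,g,a. g (f a).

definition syllogism: ∀A,B,C:Prop. (A → B) → (B → C) → A → C ≝
  λA,B,C,f,g,a. g (f a).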
218 (************************ Kolmogorov interpretation ***************************)
220 (* The previous analogy between propositions and types is a consequence of a
221 deep philosophical interpretation of the notion of proof in terms of a
222 constructive procedure that transforms proofs of the premises into a proof of
the conclusion. This is usually referred to as the Kolmogorov interpretation, or
224 Brouwer–Heyting–Kolmogorov (BHK) interpretation.
226 The interpretation states what is intended to be a proof of a given formula φ,
and is specified by induction on the structure of φ:
229 - a proof of P∧Q is a pair 〈a,b〉 where a is a proof of P and b is a proof of Q;
230 - a proof of P∨Q is a pair 〈a,b〉 where either a is 0 and b is a proof of P, or
231 a=1 and b is a proof of Q;
- a proof of P→Q is a function f which transforms a proof x:P into a proof
  f(x):Q;
- a proof of ∃x:S.φ(x) is a pair 〈a,b〉 where a is an element of S, and b is a
  proof of φ(a);
- a proof of ∀x:S.φ(x) is a function f which transforms an element x of S into a
  proof f(x) of φ(x);
238 - the formula ¬P is defined as P → False, so a proof of it is a function f which
239 transforms a proof of P into a proof of False;
240 - there is no proof of False.
242 For instance, a proof of the formula P → P is a function transforming a proof of
243 P into a proof of P: but we can just take the identity!
245 You can explicitly exploit this idea for writing proofs in Matita. Instead of
246 declaring a lemma and proving it interactively, you may define your lemma as if
it was the type of an expression, and directly proceed to inhabit it with its
proof: *)
250 definition trivial: ∀P:Prop.P→P ≝ λP,h.h.
252 (* It is interesting to observe that this is really the very same proof
(intensionally!!) that would be produced interactively, as testified by the
254 following fragment of code: *)
256 lemma trivial1: ∀P:Prop.P→P. #P #h @h qed.
257 lemma same_proofs: trivial = trivial1. @refl qed.
(* Even more interestingly, we can do the opposite, namely define functions
interactively, by means of tactics.
261 Suppose for instance that we need to define a function with the following
type: ∀A,B,C:Type[0]. (A → B → C) → A × B → C.
263 If we just declare the type of the function followed by a fullstop, Matita will
264 start an interactive session completely analogous to a proof session, and we can
265 use the usual tactics to ``close the goal'', that is to inhabit the type. *)
267 definition uncurrying: ∀A,B,C:Type[0]. (A→B→C)→A×B→C.
268 #A #B #C #f * @f qed.
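(* Conversely (a hedged example of ours, the name currying is our own), the
inverse transformation can be written directly as a λ-term, using the pairing
notation introduced above: *)

definition currying: ∀A,B,C:Type[0]. (A×B→C) → A → B → C ≝
  λA,B,C,f,a,b. f 〈a,b〉.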
270 (********************** The Curry-Howard correspondence ***********************)
272 (* The philosophical ideas contained in the BHK interpretation of a proof as a
273 constructive procedure building a proof of the conclusion from proofs of the
hypotheses get a precise syntactical systematization via the so called
Curry-Howard correspondence, expressing a direct relationship between computer
programs and mathematical proofs.
277 The Curry-Howard correspondence, also known as proofs-as-programs analogy, is a
278 generalization of a syntactic analogy between systems of formal logic and
279 computational calculi first observed by Curry for Combinatory Logic and Hilbert-
280 style deduction systems, and later by Howard for λ-calculus and Natural
281 Deduction: in both cases the formation rules for well typed terms have precisely
the same shape as the introduction rules of the corresponding connective.
As a consequence, the expression

  M : A

289 really has a double possible reading:
291 - M is a term of type A
292 - M is a proof of the proposition A
294 In both cases, M is a witness that the object A is "inhabited". A free variable
x:A is an assumption about the validity of A (we assume to have an unknown proof
296 named x). Let us consider the cases of the introduction and elimination rule of
297 the implication in natural deduction, that are particularly interesting:
     Γ,A ⊢ B             Γ ⊢ A → B    Γ ⊢ A
    -----------         ---------------------
     Γ ⊢ A → B                  Γ ⊢ B
303 The first step is to enrich the representation by decorating formulae with
304 explicit proof terms. In particular, formulae in the context, being assumptions,
will be decorated with free variables (different from each other), while the
306 conclusion will be decorated with a term expression with free variables
307 appearing in the context.
309 Suppose Γ,x:A ⊢ M:B. What is the expected decoration for A → B?
310 According to Kolmogorov interpretation, M is a procedure transforming the proof
311 x:A into a proof of M:B; the proof of A → B is hence the function that, taken x
312 as input, returns M, and our canonical notation for expressing such a function
is λx:A.M. Hence we get:

       Γ,x:A ⊢ M : B
    ----------------------
    Γ ⊢ λx:A.M : A → B

319 that is precisely the typing rule for functions.
321 Similarly, let us suppose that Γ ⊢ M:A → B, and Γ ⊢ N:A, and let us derive a
322 natural proof decoration for the arrow elimination rule (that is just the well
323 known modus ponens rule). Again, we get inspiration from Kolmogorov
324 interpretation: a proof M:A → B is a function transforming a proof of A into a
proof of B; hence, since we have a proof N:A, in order to get a proof of B it is
326 enough to apply M to the argument N:
     Γ ⊢ M : A → B    Γ ⊢ N : A
    ------------------------------
           Γ ⊢ (M N) : B
332 But this is nothing else than the typing rule for the application!
There is still a point that deserves discussion: the most interesting feature of
programs is that they can be executed (in a functional setting, terms can be
336 reduced to a normal form). By the Curry-Howard correspondence, this should
337 correspond to a normalization procedure over proofs: does this operation make
any sense at the logical level? Again the answer is yes: not only does it make
339 sense, but it was independently investigated in the field of proof theory. A
340 reducible expression corresponds to what is traditionally called a "cut". A cut
is a logical ``detour'' typically arising from an introduction rule immediately
342 followed by an elimination rule for the same connective, as in a beta-redex, where
343 we have a direct "interaction" between an application and a λ-abstraction:
(beta-rule)    (λx.M) N → M[N/x]
347 One of the main meta-theoretical results that is usually investigated on proof
systems is whether they enjoy the so called "cut-elimination" principle, that is, whether
349 the cut-rule is "admissible": any proof that makes use of cuts can be turned
into a cut-free proof. Since a cut is a redex, a cut-free proof is a term that
351 does not contain any redex, that is a term in normal form.
352 Hence, the system enjoys cut-elimination if and only if the corresponding
353 calculus is normalizing.
355 Cut-elimination is a major tool of proof theory, with important implications on
356 e.g. "consistency", "automation" and "interpolation".
Let us make a final remark. If a program is a proof, then what does the act of
verifying the correctness of a proof M of some proposition A correspond to?
Well, M is just an expression that is supposed to have type A, so
"proof verification" is nothing else than "type checking"!
363 This should also clarify that proof verification is a much easier task than
proof search. While the former corresponds to type checking, the latter
corresponds to the automatic synthesis of a program from a specification!
The main ideas behind the Curry-Howard correspondence are summarized in the
following table:

          Curry-Howard Correspondence

     proposition     | type
     proof           | term (program)
     cut             | reducible expression (redex)
     cut free        | normal form
     cut elimination | normalization
     correctness     | type checking
*)
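(* As a small, hedged illustration of the correspondence (the names below are
our own), the introduction and elimination rules for implication decorate
precisely into λ-abstraction and application: *)

definition impl_intro: ∀A,B:Prop. B → (A → B) ≝ λA,B,b,a. b.
definition modus_ponens: ∀A,B:Prop. (A → B) → A → B ≝ λA,B,f,a. f a.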
382 (******************************* Prop vs. Type ********************************)
384 (* In view of the Curry-Howard analogy, the reader could wonder if there is any
385 actual difference between the two sorts Prop and Type in a system like the
Calculus of Constructions, or if it is just a matter of flavour.
388 In fact there is a subtle difference concerning the type of product types over
389 the given sorts. Consider for instance a higher order sentence of
390 the kind ∀P:Prop.P: this is just a sentence asserting that any proposition is
true, and it seems natural to regard it as an object of sort Prop. However, if
this is the case, when we are quantifying over ``all propositions'' we are also
implicitly quantifying over the proposition we are in the act of defining, which
394 creates a strange and possibly dangerous circularity.
396 In mathematics, definitions of this kind, where the definition (directly or
indirectly) mentions or quantifies over the entity being defined, are called
``impredicative''.
The opposite of impredicativity is "predicativity", which essentially
401 entails building stratified (or ramified) theories where quantification over
402 lower levels results in objects of some new type.
Impredicativity can be dangerous: a well known example is Russell's ``set of all
sets'', resulting in the famous logical paradox: does such a set contain itself as an
406 element? If it does then by definition it should not, and if it does not then by
407 definition it should.
409 A predicative approach would consist in distinguishing e.g. between ``small''
410 sets and ``large'' sets (collections), where the set of all small sets would
411 result in a large set.
In fact, if we look at ∀P:Type[0].P as a dependent product, and we identify
414 Type[0] as a universe of (small) ``sets'', it would seem strange to conclude
that by quantifying over all (small) sets we could obtain another (small) set.
417 In Matita, ∀P:Type[0].P has in fact type Type[1] and not Type[0], where Type[1]
418 is also the type of Type[0].
420 So, while Prop is impredicative, sorts of the kind Type[i] define a potentially
421 infinite hierarchy of predicative sorts.
423 The difference between predicative and impredicative sorts is in the typing rule
424 for the product ∏x:A.B: if the sort of B is Prop then ∏x:A.B has type Prop
425 whatever the sort of A is; if A:Type[i] and B:Type[j], then ∏x:A.B:Type[k] where
426 Type[k] is the smallest sort strictly larger than Type[i] and Type[j]:
   Γ ⊢ A : s    Γ,x:A ⊢ B : Prop       Γ ⊢ A : Type[i]    Γ,x:A ⊢ B : Type[j]
   ------------------------------      ---------------------------------------
         Γ ⊢ ∏x:A.B : Prop                     Γ ⊢ ∏x:A.B : Type[k]
It is worth observing that in Matita, the index i is just a label: constraints
435 among universes are declared by the user. The standard library (see
436 "basics/pts.ma" declares a few of them, with the following relations:
438 Type[0] < Type[1] < Type[2] < …
440 The impredicativity of Prop is not a problem from the point of view of the
441 logical consistency, but there is a price to be paid for this: we are not
allowed to eliminate Prop towards Type[i]. In concrete terms, this means that while
443 we are allowed to build terms, types or even propositions by structural
induction on terms, the only thing we can do by structural induction/case
analysis on proofs is to build other proofs.
*)
(* For instance, we know that a proof p:A∨B is either a proof of A or a proof of B,
448 and one could be tempted to define a function that returns the boolean true in
449 the first case and false otherwise, by performing a simple case analysis on p:
definition discriminate_to_bool ≝ λA,B:Prop.λp:A∨B.
  match p with
  [ or_introl a ⇒ true
  | or_intror b ⇒ false ].
457 If you type the previous definition, Matita will complain with the following
error message: ``TypeChecker failure: Sort elimination not allowed: Prop towards
Type[0]''.
Even returning the two propositions True and False instead of two booleans would
not work:
definition discriminate_to_Prop ≝ λA,B:Prop.λp:A∨B.
  match p with
  [ or_introl a ⇒ True
  | or_intror b ⇒ False ].
469 The error message is the same as before: in both cases the sort of the branches
is Type[0]. The only thing we can do is to return other proofs, like in the
471 following example: *)
definition or_elim ≝ λA,B,C:Prop.λp:A∨B.λf:A→C.λg:B→C.
  match p with
  [ or_introl a ⇒ f a
  | or_intror b ⇒ g b ].
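(* Another elimination that is allowed (a hedged example of ours, the name
or_swap is our own): from a proof of A∨B we can build a proof of B∨A, again by
case analysis on the disjunction, since the result is itself a proof: *)

definition or_swap ≝ λA,B:Prop.λp:A∨B.
  match p with
  [ or_introl a ⇒ or_intror B A a
  | or_intror b ⇒ or_introl B A b ].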
(* Exercise: repeat the previous examples in interactive mode, eliminating the
disjunctive hypothesis with the cases tactic. *)