EVERYTHING IS AN INDUCTIVE TYPE

include "basics/pts.ma".
(* As we mentioned several times, very few notions are really primitive in
Matita: one of them is the notion of universal quantification (or dependent
product type) and the other one is the notion of inductive type. Even the arrow
type A → B is not really primitive: it can be seen as a particular case of the
dependent product ∀x:A.B in the degenerate case when B does not depend on x.
All the other familiar logical connectives - conjunction, disjunction, negation,
existential quantification, even equality - can be defined as particular cases
of inductive types.
We shall look at these definitions in this section, since it can be useful to
acquire confidence with inductive types, and to get a better theoretical grasp
of them. *)
(******************************* Conjunction **********************************)
(* In natural deduction, logical rules for connectives are divided into two
groups: there are introduction rules, allowing us to introduce a logical
connective in the conclusion, and there are elimination rules, describing how to
de-construct information about a compound proposition into information about its
constituents (that is, how to use a hypothesis having a given connective as its
main connective).

Consider conjunction. In order to understand the introduction rule for
conjunction, you should answer the question: how can we prove A∧B? The answer is
simple: we must prove both A and B. Hence the introduction rule for conjunction
allows us to conclude A∧B from a proof of A and a proof of B.

The general idea for defining a logical connective as an inductive type is
simply to define it as the smallest proposition generated by its introduction
rules.

For instance, in the case of conjunction, we have the following definition: *)
inductive And (A,B:Prop) : Prop ≝
    conj : A → B → And A B.
(* The corresponding elimination rule is induced by minimality: if we have a
proof of A∧B it may only derive from the conjunction of a proof of A and a proof
of B. A possible way to formally express the elimination rule is the following:

    And_elim: ∀A,B,P:Prop. (A → B → P) → A∧B → P

that is, for all A and B, and for any proposition P, if we need to prove P under
the assumption A∧B, we can reduce it to proving P under the pair of assumptions
A and B.

It is interesting to observe that the elimination rule can be easily derived
from the introduction rule in a completely syntactical way.
Basically, the general structure of the (non-recursive, non-dependent)
elimination rule for an inductive type T is the following:

    ∀A1,… An.∀P:Prop. C1 → … → Cn → T A1 … An → P

where A1, … An are the parameters of the inductive type, and every Ci is derived
from the type Ti of a constructor ci of T by just replacing T with P in it. For
instance, in the case of conjunction we only have one constructor, of type
A → B → A∧B, and replacing A∧B with P we get C = A → B → P.
Every time we declare a new inductive proposition or type T, Matita
automatically generates an axiom called T_ind that embodies the elimination
principle for this type. The actual shape of the elimination axiom is
significantly more complex than the one described above, since it also takes
into account the possibility that the predicate P depends on the term of the
given inductive type (and on possible arguments of the inductive type).
We shall discuss the general case in Section ??.

Actually, the elimination tactic elim for an element of type T is a
straightforward wrapper that applies the suitable elimination axiom.
Let us see this in action on a simple lemma. Suppose we want to prove the
following trivial result: *)
lemma And_left: ∀A,B:Prop. And A B → A.
(* After introducing A and B we could decompose the hypothesis A∧B with *, get
the two premises A and B, and use the former to close the goal.
However, a viable alternative is to explicitly apply the elimination
principle: *)
#A #B @And_ind // qed.
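(* In the same style, the right projection can be proved by applying And_ind
explicitly; a minimal sketch (the lemma name And_right is ours): *)

lemma And_right: ∀A,B:Prop. And A B → B.
#A #B @And_ind // qed.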
(********************************* Disjunction ********************************)
(* Let us apply the previous methodology to other connectives, starting with
disjunction. The first point is to derive the introduction rule(s). When can we
conclude A∨B? Clearly, we must either have a proof of A, or a proof of B. So, we
have two introduction rules in this case:

    A → A∨B  and  B → A∨B

that lead us to the following inductive definition of disjunction: *)
inductive Or (A,B:Prop) : Prop ≝
    or_introl : A → Or A B
  | or_intror : B → Or A B.
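(* As a small sketch of the introduction rules at work (the lemma name is ours):
applying the constructor or_introl reduces the goal Or A B to A: *)

lemma Or_intro_left: ∀A,B:Prop. A → Or A B.
#A #B #a @or_introl @a qed.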
(* The elimination principle, automatically generated by the system, is

    or_ind: ∀A,B,P:Prop. (A → P) → (B → P) → A∨B → P

that is a traditional formulation of the elimination rule for logical
disjunction in natural deduction: if P follows from both A and B, then it also
follows from their disjunction. *)
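(* A small example of elimination (the lemma name is ours): decomposing the
disjunction with * yields one branch per constructor: *)

lemma Or_swap: ∀A,B:Prop. Or A B → Or B A.
#A #B * [#a @or_intror @a | #b @or_introl @b] qed.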
(************************************ False ***********************************)
(* More surprisingly, we can apply the same methodology to define the constant
False. The point is that, obviously, there is no (canonical) way to conclude
False: so we have no introduction rule, and we must define an inductive type
without constructors, which is accepted by the system. *)
inductive False: Prop ≝ .
(* The elimination principle is in this case

    False_ind: ∀P:Prop. False → P

that is the well known principle of explosion: ``ex falso quodlibet''. *)
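(* To see the principle at work (the lemma name is ours): eliminating a
hypothesis of type False generates no subgoals, hence it closes any goal: *)

lemma ex_falso: ∀P:Prop. False → P.
#P #H elim H qed.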
(************************************ True ************************************)
(* What about True? You may always conclude True, hence the corresponding
inductive definition just has one trivial constructor, traditionally named I: *)

inductive True: Prop ≝ I : True.
(* As expected, the elimination rule is not informative in this case: the only
way to conclude P from True is to prove P:

    True_ind: ∀P:Prop. P → True → P *)
(******************************** Existential *********************************)
(* Finally, let us consider the case of existential quantification. In order to
conclude ∃x:A.Q x we need to prove Q a for some term a:A. Hence, the
introduction rule for the existential looks like the following:

    ∀a:A. Q a → ∃x:A.Q x

from which we get the following inductive definition, parametric in A and Q: *)
inductive ex (A:Type[0]) (Q:A → Prop) : Prop ≝
    ex_intro: ∀x:A. Q x → ex A Q.
(* The elimination principle automatically generated by Matita is

    ex_ind: ∀A.∀Q:A→Prop.∀P:Prop. (∀x:A. Q x → P) → (∃x:A.Q x) → P

That is, if we know that P is a consequence of Q x for any x:A, then it is
enough to know ∃x:A.Q x to conclude P. It is also worth spelling out the
backward reading of the previous principle.
Suppose we need to prove P under the assumption ∃x:A.Q x. Then, eliminating the
latter amounts to assuming the existence of some x:A such that Q x and
proceeding to prove P under these new assumptions. *)
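(* A sketch of the introduction rule at work (the lemma name is ours): to
inhabit ex A Q it is enough to apply ex_intro to an explicit witness: *)

lemma ex_intro_example: ∀A:Type[0].∀Q:A→Prop.∀a:A. Q a → ex A Q.
#A #Q #a #H @(ex_intro A Q a) @H qed.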
(****************************** A bit of notation *****************************)
(* Since we make frequent use of logical connectives and quantifiers, it would
be nice to have the possibility to use a more familiar-looking notation for
them. As a matter of fact, Matita offers you the possibility to associate your
own notation to any notion you define.

To exploit the natural overloading of notation typical of scientific literature,
its management is split into two steps in Matita: one from presentation level
to content level, where we associate a notation to a fragment of abstract
syntax, and one from content level to term level, where we provide (possibly
multiple) interpretations for the abstract syntax.

The mapping between the presentation level (i.e. what is typed on the keyboard
and what is displayed in the sequent window) and the content level is defined
with the "notation" command. When followed by >, it defines an input (only)
notation (that is, used in parsing but not in pretty printing). *)
notation "hvbox(a break \land b)"
  left associative with precedence 35 for @{ 'and $a $b }.
(* This declaration associates the infix notation "a \land b" - rendered a ∧ b
by the UTF-8 graphical facilities - with an abstract syntax tree composed of the
new symbol 'and applied to the result of the parsing of the input arguments a
and b.

The presentation pattern is always enclosed in double quotes. The special
keyword "break" indicates the breaking point and the box schema hvbox indicates
a horizontal or vertical layout, according to the available space for the
rendering (they can be safely ignored for the scope of this tutorial).

The content pattern begins right after the for keyword and extends to the end of
the declaration. Parts of the pattern surrounded by @{…} denote verbatim
content fragments, those surrounded by ${…} denote meta-operators and
meta-variables (for example $a) referring to the meta-variables occurring in
the presentation pattern.

The declaration also instructs the system that the notation is supposed to be
left associative and provides information about the syntactical precedence of
the operator, which governs the way an expression with different operators is
interpreted by the system.
For instance, suppose we declare the logical disjunction at a lower
precedence: *)
notation "hvbox(a break \lor b)"
  left associative with precedence 30 for @{ 'or $a $b }.
(* Then, an expression like A ∧ B ∨ C will be understood as (A ∧ B) ∨ C and not
as A ∧ (B ∨ C).

An annoying aspect of notation is that it will eventually interfere with the
system parser, so introducing operators with a suitable precedence is an
important and delicate issue. The best thing to do is to consult the file
``basics/core_notation.ma'' and, unless you can reuse an already existing
notation by overloading it (which is the recommended solution), try to figure
out by analogy the most suitable precedence for your operator.

We cannot use the notation yet. Suppose we type:

    lemma And_comm: ∀A,B. A∧B → B∧A.

Matita will complain saying that there is no interpretation for the symbol
'and. In order to associate an interpretation with content
elements, we use the following commands: *)
interpretation "logical and" 'and x y = (And x y).
interpretation "logical or" 'or x y = (Or x y).
(* With these commands we are saying that a possible interpretation of the
symbol 'and is the inductive type And, and a possible interpretation of the
symbol 'or is the inductive type Or.
Now, the previous lemma is accepted: *)
lemma And_comm: ∀A,B. A∧B → B∧A.
#A #B * #a #b % [@b | @a] qed.
(* We can give as many interpretations for the same symbol as we wish: Matita
will automatically try to guess the correct one according to the context inside
which the expression occurs.
To make an example, let us define booleans and two boolean functions andb and
orb: *)

inductive bool : Type[0] ≝ tt : bool | ff : bool.
definition andb ≝ λb1,b2.
  match b1 with [tt ⇒ b2 | ff ⇒ ff ].

definition orb ≝ λb1,b2.
  match b1 with [tt ⇒ tt | ff ⇒ b2 ].
(* Then, we could provide alternative interpretations of 'and and 'or over
booleans: *)

interpretation "boolean and" 'and x y = (andb x y).
interpretation "boolean or" 'or x y = (orb x y).
(* In the following lemma, the disjunction will be interpreted as
propositional disjunction, since A and B must be propositions *)
lemma Or_comm: ∀A,B. A∨B → B∨A.
#A #B * [#a %2 @a | #b %1 @b] qed.
(* On the other side, in the following lemma the disjunction is interpreted as
boolean disjunction, since a and b are boolean expressions *)
lemma andb_comm: ∀a,b.∀P:bool→Prop. P (a∨b) → P (b∨a).
#a #b #P cases a cases b #H @H qed.
(* Let us conclude this section with a discussion of the notation for the
existential quantifier. *)
notation "hvbox(\exists ident i : ty break . p)"
  right associative with precedence 20 for
@{'exists (\lambda ${ident i} : $ty. $p) }.
(* The main novelty is the special keyword "ident" that instructs the system
that the variable i is expected to be an identifier. Matita abstract syntax
trees include lambda terms as primitive data types, and the previous declaration
simply maps the notation ∃x.P into a content term of the form ('exists (λx.p))
where p is the content term obtained from P.

The corresponding interpretation is then straightforward: *)
interpretation "exists" 'exists x = (ex ? x).
(* The notational language of Matita has an additional list operator for dealing
with variable-size terms having a regular structure. This operator has a
corresponding fold operator, to build up trees at the content level.

For example, in the case of quantifiers, it is customary to group multiple
variable declarations under a same quantifier, writing e.g. ∃x,y,z.P instead of
∃x.∃y.∃z.P.

This can be achieved by the following notation: *)
notation > "\exists list1 ident x sep , opt (: T). term 19 Px"
  with precedence 20
for ${ default
  @{ ${ fold right @{$Px} rec acc @{'exists (λ${ident x}:$T.$acc)} } }
  @{ ${ fold right @{$Px} rec acc @{'exists (λ${ident x}.$acc)} } }
}.
(* The presentational pattern matches terms starting with the existential
symbol, followed by a list of identifiers separated by commas, optionally
terminated by a type declaration, followed by a fullstop and finally by the body
term. We use list1 instead of list0 since we expect to have at least one
identifier: conversely, you should use list0 when the list can possibly be
empty.

The "default" meta operator at content level matches the presentational opt and
has two branches, which are chosen depending on the matching of the optional
subexpression. Let us consider the first case, where we have
an explicit type. The content term is built by folding the function

    rec acc @{'exists (λ${ident x}:$T.$acc)}

(where "rec" is the binder, "acc" is the bound variable and the rest is the
body) over the initial content expression @{$Px}. *)
lemma distr_and_or_l : ∀A,B,C:Prop. A ∧ (B ∨ C) → (A ∧ B) ∨ (A ∧ C).
#A #B #C * #H * [#H1 %1 % [@H |@H1] |#H1 %2 % //] qed.
lemma distr_and_or_r : ∀A,B,C:Prop. (A ∧ B) ∨ (A ∧ C) → A ∧ (B ∨ C).
#A #B #C * * #H #H1 [% [// | %1 //] | % [// | %2 //] ] qed.
lemma distr_or_and_l : ∀A,B,C:Prop. A ∨ (B ∧ C) → (A ∨ B) ∧ (A ∨ C).
#A #B #C * [#H % [%1 // |%1 //] | * #H1 #H2 % [%2 // | %2 //] ] qed.
lemma distr_or_and_r : ∀A,B,C:Prop. (A ∨ B) ∧ (A ∨ C) → A ∨ (B ∧ C).
#A #B #C * * [#H #_ %1 // | #H * [#H1 %1 // | #H1 %2 % //] ] qed.
definition neg ≝ λA:Prop. A → False.

lemma neg_neg : ∀A. A → neg (neg A).
#A #H normalize #H1 @(H1 H) qed.
(***************************** Leibniz Equality *******************************)
(* Even equality is a derived notion in Matita, and a particular case of an
inductive type. The idea is to define it as the smallest relation containing
reflexivity (that is, as the smallest reflexive relation over a given type). *)

inductive eq (A:Type[0]) (x:A) : A → Prop ≝
    refl: eq A x x.
(* We can associate the standard infix notation for equality via the following
declaration: *)

notation "hvbox(a break = b)"
  non associative with precedence 45 for @{ 'eq ? $a $b }.

interpretation "leibnitz's equality" 'eq t x y = (eq t x y).
(* The induction principle eq_ind automatically generated by the system
(up to dependencies) has the following shape:

    ∀A:Type[0].∀x:A. ∀P:A→Prop. P x → ∀y:A. x = y → P y

This principle is usually known as ``Leibniz equality'': two objects x and y are
equal if they cannot be told apart, that is, for any property P, P x implies P y.

As a simple exercise, let us also prove the following variant: *)
lemma eq_ind_r: ∀A:Type[0].∀x:A. ∀P:A→Prop. P x → ∀y:A. y = x → P y.
#A #x #P #Hx #y #eqyx <eqyx in Hx; // qed.
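(* Another simple exercise in the same style (the lemma name is ours):
symmetry of equality follows by a single rewriting step: *)

lemma eq_sym: ∀A:Type[0].∀x,y:A. x = y → y = x.
#A #x #y #H >H // qed.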
(* The order of the arguments in eq_ind may look a bit arbitrary but, as we
shall see, it is motivated by the underlying structure of the inductive type.
Before discussing the way eq_ind is generated, it is time to make an important
discussion about the parameters of inductive types.

If you look back at the definition of equality, you see that the first argument
x has been explicitly declared, together with A, as a formal parameter of the
inductive type, while the second argument has been left implicit in the
resulting type A → Prop. One could wonder if this really matters, and in
particular if we could use the following alternative definitions:

    inductive eq1 (A:Type[0]) (x,y:A) : Prop ≝
        refl1 : eq1 A x x.

    inductive eq2 (A:Type[0]): A → A → Prop ≝
        refl2 : ∀z:A. eq2 A z z.
The first one is just wrong. If you try to type it, you get the following error
message: ``CicUnification failure: Can't unify x with y''.
The point is that the role of parameters is really to define a family of types
uniformly indexed over them. This means that we expect all occurrences of the
inductive type in the types of constructors to be precisely instantiated with
the input parameters, in the order they are declared. So, if A, x and y are
parameters for eq1, then all occurrences of this type in the types of
constructors must be of the kind eq1 A x y (while here we have eq1 A x x, which
explains the typing error).

If you cannot express an argument as a parameter, the only alternative is to
implicitly declare it in the type of the inductive type.
Henceforth, when we shall talk about ``arguments'' of inductive types, we shall
implicitly refer to arguments which are not parameters (sometimes, people call
them ``right'' and ``left'' parameters, according to their position w.r.t. the
colon in the type declaration). In general, it is always possible to declare
everything as an argument, but it is a very good practice to shift as many
arguments as possible into parameter position. In particular, the definition
of eq2 is correct, but the corresponding induction principle eq2_ind is not as
readable as eq_ind. *)
inductive eq2 (A:Type[0]): A → A → Prop ≝
    refl2 : ∀z:A. eq2 A z z.
(* The elimination rule for a (non-recursive) inductive type T having a list
of parameters A1,…,An and a list of arguments B1,…,Bm has the following shape
(still, up to dependencies):

    ∀a1:A1,… ∀an:An,∀P:B1 → … → Bm → Prop.
      C1 → … → Ck → ∀x1:B1…∀xm:Bm. T a1 … an x1 … xm → P x1 … xm

where Ci is obtained from the type Ti of the constructor ci by replacing in it
each occurrence of (T a1 … an t1 … tm) with (P t1 … tm).

For instance, eq2 only has A as a parameter, and two arguments.
The corresponding elimination principle eq2_ind is as follows:

    ∀A:Type[0].∀P:A → A → Prop. ∀z.P z z → ∀x,y:A. eq2 A x y → P x y

As we prove below, eq2_ind and eq_ind are logically equivalent (that is, they
mutually imply each other), but eq2_ind is slightly more complex and
unnatural. *)
lemma eq_to_eq2: ∀A,a,b. a = b → eq2 A a b.
#A #a #b #Heq <Heq // qed.

lemma eq2_to_eq: ∀A,a,b. eq2 A a b → a = b.
#A #a #b #Heq2 elim Heq2 // qed.
(******************* Equality, convertibility, inequality *********************)
(* Leibniz equality is a pretty syntactical (intensional) notion: two objects
are equal when they are the ``same'' term. There is however an important point
to understand here: the notion of equality on terms is convertibility: two terms
are equal if they become identical once they have been transformed into normal
form. For this reason, not only 2=2 but also 1+1=2, since the normal form of
1+1 is 2.

Having understood the notion of equality, one could easily wonder how we can
prove that two objects are different. For instance, in Peano arithmetic, the
fact that for any x, 0 ≠ S x is an independent axiom. With our inductive
definition of natural numbers, can we prove it, or are we supposed to assume
such a property axiomatically? *)
inductive nat : Type[0] ≝
    O : nat
  | S : nat → nat.
(* In fact, in a logical system like the Calculus of Constructions it is
possible to prove it. We shall discuss the proof here, since it is both elegant
and instructive.

The crucial point is to define, by case analysis on the structure of the
inductive term, a characteristic property for the different cases.

For instance, in the case of natural numbers, we could define a
property not_zero as follows: *)
definition not_zero: nat → Prop ≝
  λn: nat. match n with [ O ⇒ False | (S p) ⇒ True ].
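(* For instance, (not_zero (S n)) is convertible with True for any n, so the
trivial constructor I closes it (the lemma name is ours): *)

lemma not_zero_S: ∀n:nat. not_zero (S n).
#n @I qed.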
(* The possibility of defining predicates by structural recursion on terms is
one of the major characteristics of the Calculus of Inductive Constructions,
known as strong elimination.

Suppose now we want to prove the following property: *)
theorem not_eq_O_S : ∀n:nat. O = S n → False.
#n #H
(* After introducing n and the hypothesis H:O = S n we are left with the goal
False. Now, observe that (not_zero O) reduces to False, hence these two terms
are convertible, that is, identical for the system. So, it should be possible to
replace False with (not_zero O) in the conclusion, since they are the same term.

The tactic that does the job is the "change with" tactic. The invocation of
"change with t" checks that the current goal is convertible with t, and in this
case t becomes the new current goal.

In our case, typing *)
change with (not_zero O);
(* we get the new goal (not_zero O). But we know, by H, that O = S n,
hence by rewriting we get the goal (not_zero (S n)), that reduces to True,
whose proof is trivial: *)
>H // qed.
(* Using a similar technique you can always prove that different constructors of
a same inductive type are distinct from each other; actually, this technique is
also at the core of the "destruct" tactic of the previous chapter, used to
automatically close absurd cases. *)
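(* The same technique also proves the symmetric statement (the lemma name is
ours): we change the goal False into (not_zero O) and rewrite with H: *)

lemma not_eq_S_O : ∀n:nat. S n = O → False.
#n #H change with (not_zero O); <H @I qed.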
(********************************** Inversion *********************************)

include "basics/logic.ma".
(* The only inductive type that really needs arguments is equality. In all other
cases you could conceptually get rid of them by adding, inside the types of
constructors, the suitable equalities that would allow one to turn arguments
into parameters.

Consider for instance our opp predicate of the introduction: *)
inductive bank : Type[0] ≝
    east : bank
  | west : bank.

inductive opp : bank → bank → Prop ≝
  | east_west : opp east west
  | west_east : opp west east.
(* The predicate has two arguments, and since they are mixed up in the type of
constructors we cannot express them as parameters. However, we could state it in
the following alternative way: *)

inductive opp1 (b1,b2: bank): Prop ≝
  | east_west : b1 = east → b2 = west → opp1 b1 b2
  | west_east : b1 = west → b2 = east → opp1 b1 b2.
(* Or also, more elegantly, in terms of the opposite function *)

definition opposite ≝ λb. match b with [east ⇒ west | west ⇒ east].
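(* Since opposite computes by reduction, simple facts about it are closed by
reflexivity up to conversion (the lemma name is ours): *)

lemma opposite_east: opposite east = west.
// qed.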
inductive opp2 (b1,b2: bank): Prop ≝
  | opposite_to_opp2 : b1 = opposite b2 → opp2 b1 b2.
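(* As a sketch of the relation between opp and opp2 (the lemma name is ours):
case analysis on opp leaves, in each branch, an equality that holds by
conversion: *)

lemma opp_to_opp2: ∀b1,b2. opp b1 b2 → opp2 b1 b2.
#b1 #b2 * [% // | % //] qed.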
(* Suppose now we know (opp x west), where x:bank. From this hypothesis we
should be able to conclude x = east, but this is slightly less trivial than one
could expect. *)
lemma trivial_opp : ∀x. opp x west → x = east.
(* After introducing x and H:opp x west, one could be naturally tempted to
proceed by cases on H, but this would lead us nowhere: in fact it would generate
the following two subgoals:

    G_1: east=east
    G_2: west=east

The first one is trivial, but the second one is false and cannot be proved.
Also working by induction on x (clearly, before introducing H)
does not help, since we would get the goals

    G_1: opp east west → east=east
    G_2: opp west west → west=east

Again, the first goal is trivial, but proving that (opp west west) is absurd has
about the same complexity as the original problem.
In fact, the best approach consists in "generalizing" the statement to something
similar to lemma opp_to_opposite of the introduction, and then prove it as a
corollary of the latter. *)

#x change with (opp x west → x=opposite west); generalize in match west;
#b * // qed.
(* It is interesting to observe that we would not have the same problem with
opp1 or opp2. For instance: *)

lemma trivial_opp1 : ∀x. opp1 x west → x = east.
#x #H cases H
(* Applying case analysis on H:opp1 x west, we obtain the two subgoals

    G_1: x=east → west=west → x=east
    G_2: x=west → west=east → x=east

The first one is trivial, and the second one is easily closed using destruct. *)
[//|#H1 #H2 destruct (H2)] qed.
(* The point is that pattern matching is not powerful enough to discriminate the
structure of arguments of inductive types. To this aim, however, you may exploit
an alternative tactic called "inversion". *)
(* Suppose we have in the local context an expression H:T t1 … tn, where T is
some inductive type. Then, inversion applied to H derives, for each possible
constructor ci of T, all the necessary conditions that should hold in order
for the conclusion of ci to match T t1 … tn.

For instance, let us consider again our trivial problem trivial_opp: *)
lemma trivial_opp2: ∀x. opp x west → x = east.
#x #H inversion H
(* The previous invocation of inversion results in the following subgoals:

    G1: x=east → west=west → x=east
    G2: x=west → west=east → x=east

How can we prove (opp t1 t2)? We have only two possibilities: via east_west,
which implies t1=east and t2=west, or via west_east, which implies t1=west and
t2=east.
Note also that G1 and G2 are precisely the same goals generated by case analysis
on opp1, so we can close the goal in exactly the same way: *)
[//|#H1 #H2 destruct (H2)] qed.
(* Let us remark that invoking inversion on inductive types without arguments
does not make any sense, and has no practical effect. *)
(* A final warning. Inversion generates equalities, and by convention it uses
the standard Leibniz equality defined in basics/logic.ma. This is the reason why
we included this file at the beginning of this section, instead of working with
our own definitions. *)