(*
-Polymorphism and Higher Order
+ EVERYTHING IS AN INDUCTIVE TYPE
+*)
+
+include "basics/pts.ma".
+
+(* As we mentioned several times, very few notions are really primitive in
+Matita: one of them is the notion of universal quantification (or dependent
+product type) and the other one is the notion of inductive type. Even the arrow
+type A → B is not really primitive: it can be seen as a particular case of the
+dependent product ∀x:A.B in the degenerate case when B does not depend on x.
+All the other familiar logical connectives - conjunction, disjunction, negation,
+existential quantification, even equality - can be defined as particular
+inductive types.
+ We shall look at these definitions in this section: going through them is a
+good way to acquire confidence with inductive types, and to get a better
+theoretical grasp of them.
+*)
+(******************************* Conjunction **********************************)
+
+(* In natural deduction, logical rules for connectives are divided into two
+groups: there are introduction rules, allowing us to introduce a logical
+connective in the conclusion, and there are elimination rules, describing how to
+de-construct information about a compound proposition into information about its
+constituents (that is, how to use a hypothesis having a given connective as its
+principal operator).
+
+Consider conjunction. In order to understand the introduction rule for
+conjunction, you should answer the question: how can we prove A∧B? The answer is
+simple: we must prove both A and B. Hence the introduction rule for conjunction
+is A → B → A∧B.
+
+The general idea for defining a logical connective as an inductive type is
+simply to define it as the smallest proposition generated by its introduction
+rule(s).
+
+For instance, in the case of conjunction, we have the following definition: *)
+
+inductive And (A,B:Prop) : Prop ≝
+ conj : A → B → And A B.
+
+(* The corresponding elimination rule is induced by minimality: if we have a
+proof of A∧B it may only derive from the conjunction of a proof of A and a proof
+of B. A possible way to formally express the elimination rule is the following:
+
+ And_elim: ∀A,B,P:Prop. (A → B → P) → A∧B → P
+
+that is, for all A and B, and for any proposition P, if we need to prove P
+under the assumption A∧B, we can reduce to proving P under the two assumptions
+A and B.
+
+It is interesting to observe that the elimination rule can be easily derived
+from the introduction rule in a completely syntactical way.
+Basically, the general structure of the (non-recursive, non-dependent)
+elimination rule for an inductive type T is the following
+
+ ∀A1,… An.∀P:Prop. C1 → … → Ck → T A1 … An → P
+
+where A1, … An are the parameters of the inductive type, and every Ci is derived
+from the type Ti of a constructor ci of T by replacing (T A1 … An) with P in it.
+For instance, in the case of the conjunction we only have one constructor, of
+type A → B → A∧B, and replacing A∧B with P we get C = A → B → P.
+
+Every time we declare a new inductive proposition or type T, Matita
+automatically generates an axiom called T_ind that embodies the elimination
+principle for this type. The actual shape of the elimination axiom is
+noticeably more complex than the one described above, since it also takes into
+account the possibility that the predicate P depends on the term of the given
+inductive type (and on the possible arguments of the inductive type).
+We shall discuss the general case in Section ??.
+
+Actually, the elimination tactic elim for an element of type T is a
+straightforward wrapper that applies the suitable elimination axiom.
+Let us see this in action on a simple lemma. Suppose we want to prove the
+following trivial result:
*)
-include "tutorial/chapter2.ma".
-include "basics/bool.ma".
-
-(* Matita supports polymorphic data types. The most typical case are polymorphic
-lists, parametric in the type of their elements: *)
-
-inductive list (A:Type[0]) : Type[0] ≝
- | nil: list A
- | cons: A -> list A -> list A.
-
-(* The type notation list A is the type of all lists with elements of type A:
-it is defined by two constructors: a polymorphic empty list (nil A) and a cons
-operation, adding a new head element of type A to a previous list. For instance,
-(list nat) and and (list bool) are lists of natural numbers and booleans,
-respectively. But we can also form more complex data types, like
-(list (list (nat → nat))), that is a list whose elements are lists of functions
-from natural numbers to natural numbers.
-
-Typical elements in (list bool) are for instance,
- nil nat - the empty list of type nat
- cons nat true (nil nat) - the list containing true
- cons nat false (cons nat (true (nil nat))) - the list containing false and true
-and so on.
-
-Note that both constructos nil and cons are expecting in input the type parameter:
-in this case, bool.
+lemma And_left: ∀A,B:Prop. And A B →A.
+(* After introducing A and B, we could decompose the hypothesis A∧B with * to
+get the two premises A and B, and use the former to close the goal, as expressed
+by the following commands
+ #A #B * // qed.
+However, a viable alternative is to explicitly apply the elimination principle:
*)
+#A #B @And_ind // qed.
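+
+(* As an exercise, the dual projection can be proved with the very same appeal
+to the elimination principle; a sketch along the lines of And_left: *)
+
+lemma And_right: ∀A,B:Prop. And A B → B.
+#A #B @And_ind // qed.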
-(*
-Defining your own notation
-We now add a bit of notation, in order to make the syntax more readable. In
-particular, we would like to write [] instead of (nil A) and a::l instead of
-(cons A a l), leaving the system the burden to infer A, whenever possible.
-*)
+(********************************* Disjunction ********************************)
+
+(* Let us apply the previous methodology to other connectives, starting with
+disjunction. The first point is to derive the introduction rule(s). When can we
+conclude A∨B? Clearly, we must either have a proof of A, or a proof of B. So, we
+have two introduction rules, in this case:
+ A → A∨B and
+ B → A∨B
+leading us to the following inductive definition of disjunction: *)
-notation "hvbox(hd break :: tl)"
- right associative with precedence 47
- for @{'cons $hd $tl}.
+inductive Or (A,B:Prop) : Prop ≝
+ or_introl : A → Or A B
+ | or_intror : B → Or A B.
+
+(* The elimination principle, automatically generated by the system, is
-notation "[ list0 x sep ; ]"
- non associative with precedence 90
- for ${fold right @'nil rec acc @{'cons $x $acc}}.
+ or_ind: ∀A,B,P:Prop. (A → P) → (B → P) → A∨B → P
+
+which is the traditional formulation of the elimination rule for logical
+disjunction in natural deduction: if P follows both from A and from B, then it
+also follows from their disjunction. *)
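+
+(* To see the rule at work, here is a small sketch in the style of And_left,
+eliminating a disjunction whose two sides coincide: *)
+
+lemma Or_idem: ∀A:Prop. Or A A → A.
+#A @Or_ind // qed.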
-notation "hvbox(l1 break @ l2)"
- right associative with precedence 47
- for @{'append $l1 $l2 }.
+(************************************ False ***********************************)
-interpretation "nil" 'nil = (nil ?).
-interpretation "cons" 'cons hd tl = (cons ? hd tl).
+(* More surprisingly, we can apply the same methodology to define the constant
+False. The point is that, obviously, there is no (canonical) way to conclude
+False: so we have no introduction rule, and we must define an inductive type
+without constructors, which is indeed accepted by the system. *)
-(*
-Basic operations on lists
+inductive False: Prop ≝ .
-Let us define a few basic functions over lists. Our first example is the
-append function, concatenating two lists l1 and l2. The natural way is to proceed
-by recursion on l1: if it is empty the result is simply l2, while if l1 = hd::tl
-then we recursively append tl and l2 , and then add hd as first element. Note that
-the append function itself is polymorphic, and the first argument it takes in input
-is the type A of the elements of two lists l1 and l2.
-Moreover, since the append function takes in input several parameters, we must also
-specify in the its defintion on which one of them we are recurring: in this case l1.
-If not othewise specified, recursion is supposed to act on the first argument of the
-function.*)
+(* The elimination principle is in this case
-let rec append A (l1: list A) l2 on l1 ≝
- match l1 with
- [ nil ⇒ l2
- | cons hd tl ⇒ hd :: append A tl l2 ].
+ False_ind: ∀P:Prop.False → P
+
+that is the well-known principle of explosion: ``ex falso quodlibet''. *)
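+
+(* A sketch of the principle in action: any proposition follows from an absurd
+hypothesis, by eliminating it. *)
+
+lemma ex_falso: ∀P:Prop. False → P.
+(* eliminating abs invokes False_ind, which has no premises: no goals are left *)
+#P #abs elim abs qed.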
-interpretation "append" 'append l1 l2 = (append ? l1 l2).
+(************************************ True ************************************)
-(* As usual, the function is executable. For instance, (append A nil l) reduces to
-l, as shown by the following example: *)
-
-example nil_append: ∀A.∀l:list A. [] @ l = l.
-#A #l normalize // qed.
-
-(* Proving that l @ [] = l is just a bit more complex. The situation is exactly
-the same as for the addition operation of the previous chapter: since append is
-defined by recutsion over the first argument, the computation of l @ [] is stuck,
-and we must proceed by induction on l *)
-
-lemma append_nil: ∀A.∀l:list A.l @ [] = l.
-#A #l (elim l) normalize // qed.
+(* What about True? You may always conclude True, hence the corresponding
+inductive definition just has one trivial constructor, traditionally named I: *)
-(* similarly, we can define the two functions head and tail. Since we can only define
-total functions, we should decide what to do in case the input list is empty.
-For tl, it is natural to return the empty list; for hd, we take in input a default
-element d of type A to return in this case. *)
-
-definition head ≝ λA.λl: list A.λd:A.
- match l with [ nil ⇒ d | cons a _ ⇒ a].
+inductive True: Prop ≝ I : True.
-definition tail ≝ λA.λl: list A.
- match l with [ nil ⇒ [] | cons hd tl ⇒ tl].
-
-example ex_head: ∀A.∀a,d,l. head A (a::l) d = a.
-#A #a #d #l normalize // qed.
-
-example ex_tail: tail ? (cons ? true []) = [].
-normalize // qed.
-
-theorem associative_append:
-∀A.∀l1,l2,l3: list A. (l1 @ l2) @ l3 = l1 @ (l2 @ l3).
-#A #l1 #l2 #l3 (elim l1) normalize // qed.
-
-(* Problemi con la notazione *)
-lemma a_append: ∀A.∀a.∀l:list A. (a::[]) @ l = a::l.
-// qed.
-
-theorem append_cons:
-∀A.∀a:A.∀l,l1: list A.l@(a::l1)= (l @ (cons ? a [])) @ l1.
-// qed.
-
-(* Other typical functions over lists are those computing the length
-of a list, and the function returning the nth element *)
-
-let rec length (A:Type[0]) (l:list A) on l ≝
-match l with
- [ nil ⇒ O
- | cons a tl ⇒ S (length A tl)].
-
-let rec nth n (A:Type[0]) (l:list A) (d:A) ≝
- match n with
- [O ⇒ head A l d
- |S m ⇒ nth m A (tail A l) d].
-
-example ex_length: length ? (cons ? O []) = S O.
-normalize // qed.
-
-example ex_nth: nth (S O) ? (cons ? (S O) (cons ? O [])) O = O.
-normalize // qed.
-
-(* Proving that the length of l1@l2 is the sum of the lengths of l1
-and l2 just requires a trivial induction on the first list. *)
-
- lemma length_add: ∀A.∀l1,l2:list A.
- length ? (l1@l2) = add (length ? l1) (length ? l2).
-#A #l1 elim l1 normalize // qed.
-
-(*
-Comparing Costructors
-
-Let us come to a more interesting question. How can we prove that the empty
-list is different from any list with at least one element, that is from any list
-of the kind (a::l)? We start defining a simple predicate stating if a list is
-empty or not. The predicate is computed by inspection over the list *)
-
-definition is_nil: ∀A:Type[0].list A → Prop ≝
-λA.λl.match l with [ nil ⇒ l = [] | cons hd tl ⇒ (l ≠ [])].
-
-(* Next we need a simple result about negation: if you wish to prove ¬P you are
-authorized to add P to your hypothesis: *)
-
-lemma neg_aux : ∀P:Prop. (P → ¬P) → ¬P.
-#P #PtonegP % /3/ qed.
-
-theorem diff_cons_nil:
-∀A:Type[0].∀l:list A.∀a:A. a::l ≠ [].
-#A #l #a @neg_aux #Heq
-(* we start assuming the new hypothesis Heq of type a::l = [] using neg_aux.
-Next we use the change tactic to pass from the current goal a::l≠ [] to the
-expression is_nil a::l, convertible with it. *)
-(change with (is_nil ? (a::l)))
-(* Now, we rewrite with Heq, obtaining (is_nil A []), that reduces to the trivial
-goal [] = [] *)
->Heq // qed.
+(* As expected, the elimination rule is not informative in this case: the only
+way to conclude P from True is to prove P:
-(* As an application of the previous result let us prove that l1@l2 is empty if
-and only if both l1 and l2 are empty.
-The idea is to proceed by cases on l1: if l1=[] the statement is trivial; on the
-other side, if l1 = a::tl, then the hypothesis (a::tl)@l2 = [] is absurd, hence we
-can prove anything from it.
-When we know we can prove both A and ¬A, a sensible way to proceed is to apply
-False_ind: ∀P.False → P to the current goal, that breaks down to prove False, and
-then absurd: ∀A:Prop. A → ¬A → False to reduce to the contradictory cases.
-Usually, you may invoke automation to take care to solve the absurd case. *)
-
-lemma nil_to_nil: ∀A.∀l1,l2:list A.
- l1@l2 = [] → l1 = [] ∧ l2 = [].
-#A #l1 cases l1 normalize /2/ #a #tl #l2 #H @False_ind /2/ qed.
-
-(*
-Higher Order Functionals
-
-Let us come to some important, higher order, polymorphic functionals
-acting over lists. A typical example is the map function, taking a function
-f:A → B, a list l = [a1; a2; ... ; an] and returning the list
-[f a1; f a2; ... ; f an]. *)
-
-let rec map (A,B:Type[0]) (f: A → B) (l:list A) on l: list B ≝
- match l with [ nil ⇒ [] | cons x tl ⇒ f x :: (map A B f tl)].
-
-(* Another major example is the fold function, that taken a list
-l = [a1; a2; ... ;an], a base value b:B, and a function f: A → B → B returns
-(f a1 (f a2 (... (f an b)...))). *)
-
-let rec foldr (A,B:Type[0]) (f:A → B → B) (b:B) (l:list A) on l :B ≝
- match l with [ nil ⇒ b | cons a l ⇒ f a (foldr A B f b l)].
-
-(* As an example of application of foldr, let us use it to define a filter
-function that given a list l: list A and a boolean test p:A → bool returns the
-sublist of elements satisfying the test. In this case, the result type B of
-foldr should be (list A), the base value is [], and f: A → list A →list A is
-the function that taken x and l returns x::l, if x satisfies the test, and l
-otherwise. We use an if_then_else function included from bool.ma to this purpose. *)
-
-definition filter ≝
- λT.λp:T → bool.
- foldr T (list T) (λx,l0. if p x then x::l0 else l0) [].
-
-(* Here are a couple of simple lemmas on the behaviour of the filter function.
-It is often convenient to state such lemmas, in order to be able to use rewriting
-as an alternative to reduction in proofs: reduction is a bit difficult to control.
+ True_ind: ∀P:Prop.P → True → P
*)
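+
+(* Dually, the introduction rule is all we need to actually prove True; a
+minimal sketch, directly applying the constructor: *)
+
+lemma trivially_true: True.
+@I qed.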
-lemma filter_true : ∀A,l,a,p. p a = true →
- filter A p (a::l) = a :: filter A p l.
-#A #l #a #p #pa (elim l) normalize >pa // qed.
+(******************************** Existential *********************************)
+
+(* Finally, let us consider the case of existential quantification. In order to
+conclude ∃x:A.Q x we need to prove Q a for some term a:A. Hence, the
+introduction rule for the existential looks like the following:
+
+ ∀a:A. Q a → ∃x:A.Q x
+
+from which we get the following inductive definition, parametric in A and Q: *)
+
+inductive ex (A:Type[0]) (Q:A → Prop) : Prop ≝
+ ex_intro: ∀x:A. Q x → ex A Q.
+
+(* The elimination principle automatically generated by Matita is
+
+ ex_ind: ∀A.∀Q:A→Prop.∀P:Prop. (∀x:A. Q x → P) → (∃x:A.Q x) → P
+
+That is, if we know that P is a consequence of Q x for any x:A, then it is
+enough to know ∃x:A.Q x to conclude P. It is also worth spelling out the
+backward reading of the previous principle.
+Suppose we need to prove P under the assumption ∃x:A.Q x. Then, eliminating the
+latter amounts to assuming the existence of some x:A such that Q x, and
+proceeding to prove P under these new assumptions. *)
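+
+(* A sketch of the introduction rule at work: a witness a together with a proof
+of Q a suffices to conclude; automation assembles the application of ex_intro: *)
+
+lemma ex_example: ∀A:Type[0].∀Q:A→Prop.∀a:A. Q a → ex A Q.
+#A #Q #a #Ha /2/ qed.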
+
+(****************************** A bit of notation *****************************)
+
+(* Since we make frequent use of logical connectives and quantifiers, it would
+be nice to be able to use a more familiar-looking notation for them. As a
+matter of fact, Matita offers you the possibility to associate your own
+notation to any notion you define.
+
+To exploit the natural overloading of notation typical of scientific literature,
+its management is split into two steps in Matita: one from presentation level
+to content level, where we associate a notation to a fragment of abstract
+syntax, and one from content level to term level, where we provide (possibly
+multiple) interpretations for the abstract syntax.
+
+The mapping between the presentation level (i.e. what is typed on the keyboard
+and what is displayed in the sequent window) and the content level is defined
+with the "notation" command. When followed by >, it defines an input (only)
+notation (that is, used in parsing but not in pretty printing). *)
+
+notation "hvbox(a break \land b)"
+ left associative with precedence 35 for @{ 'and $a $b }.
+
+(* This declaration associates the infix notation "a \land b" - rendered a ∧ b
+by the UTF-8 graphical facilities - with an abstract syntax tree composed of the
+new symbol 'and applied to the result of parsing the input arguments a and b.
+
+The presentation pattern is always enclosed in double quotes. The special
+keyword "break" indicates the breaking point and the box schema hvbox indicates
+a horizontal or vertical layout, according to the available space for the
+rendering (they can be safely ignored for the scope of this tutorial).
+
+The content pattern begins right after the for keyword and extends to the end of
+the declaration. Parts of the pattern surrounded by @{…} denote verbatim
+content fragments, while those surrounded by ${…} denote meta-operators and
+meta-variables (for example $a) referring to the meta-variables occurring in
+the presentation pattern.
+
+The declaration also instructs the system that the notation is supposed to be
+left associative and provides information about the syntactical precedence of
+the operator, which governs the way an expression with different operators is
+interpreted by the system.
+For instance, suppose we declare the logical disjunction at a lower precedence:
+*)
+
+notation "hvbox(a break \lor b)"
+ left associative with precedence 30 for @{ 'or $a $b }.
+
+(* Then, an expression like A ∧ B ∨ C will be understood as (A ∧ B) ∨ C and not
+A ∧ (B ∨ C).
+An annoying aspect of notation is that it may interfere with the system
+parser, so introducing operators with a suitable precedence is an important and
+delicate issue. The best thing to do is to consult the file
+``basics/core_notation.ma'' and, unless you can reuse an already existing
+notation by overloading it (which is the recommended solution), try to figure
+out by analogy the most suitable precedence for your operator.
+
+We cannot use the notation yet. Suppose we type:
+
+ lemma And_comm: ∀A,B. A∧B → B∧A.
+
+Matita will complain saying that there is no interpretation for the symbol
+'and. In order to associate an interpretation with content
+elements, we use the following commands: *)
+
+interpretation "logical and" 'and x y = (And x y).
+interpretation "logical or" 'or x y = (Or x y).
+
+(* With these commands we are saying that a possible interpretation of the
+symbol 'and is the inductive type And, and a possible interpretation of the
+symbol 'or is the inductive type Or.
+Now, the previous lemma is accepted: *)
+
+lemma And_comm: ∀A,B. A∧B → B∧A.
+#A #B * /2/ qed.
+
+(* We can give as many interpretations for the same symbol as we wish: Matita will
+automatically try to guess the correct one according to the context inside which
+the expression occurs.
+As an example, let us define booleans and two boolean functions, andb and
+orb. *)
+
+inductive bool : Type[0] ≝ tt : bool | ff : bool.
+
+definition andb ≝ λb1,b2.
+ match b1 with [tt ⇒ b2 | ff ⇒ ff ].
+
+definition orb ≝ λb1,b2.
+ match b1 with [tt ⇒ tt | ff ⇒ b2 ].
+
+(* We can then provide alternative interpretations of 'and and 'or over
+them: *)
+
+interpretation "boolean and" 'and x y = (andb x y).
+interpretation "boolena or" 'or x y = (orb x y).
+
+(* In the following lemma, the disjunction is interpreted as propositional
+disjunction, since A and B must be propositions. *)
+
+lemma Or_comm: ∀A,B. A∨B → B∨A.
+#A #B * /2/ qed.
+
+(* On the other hand, in the following lemma the disjunction is interpreted as
+boolean disjunction, since a and b are boolean expressions. *)
+
+lemma orb_comm: ∀a,b.∀P:bool→Prop. P (a∨b) → P (b∨a).
+* * // qed.
+
+(* Let us conclude this section with a discussion of the notation for the
+existential quantifier. *)
+
+notation "hvbox(\exists ident i : ty break . p)"
+ right associative with precedence 20 for
+ @{'exists (\lambda ${ident i} : $ty. $p) }.
+
+(* The main novelty is the special keyword "ident" that instructs the system
+that the variable i is expected to be an identifier. Matita abstract syntax
+trees include lambda terms as primitive data types, and the previous declaration
+simply maps the notation ∃x.P into a content term of the form ('exists (λx.p))
+where p is the content term obtained from P.
+
+The corresponding interpretation is then straightforward: *)
+
+interpretation "exists" 'exists x = (ex ? x).
+
+(* The notational language of Matita has an additional list operator for dealing
+with variable-size terms having a regular structure. This operator has a
+corresponding fold operator to build up trees at the content level.
+
+For example, in the case of quantifiers, it is customary to group multiple
+variable declarations under the same quantifier, writing e.g. ∃x,y,z.P instead of
+∃x.∃y.∃z.P.
+
+This can be achieved by the following notation: *)
+
+notation > "\exists list1 ident x sep , opt (: T). term 19 Px"
+ with precedence 20
+ for ${ default
+ @{ ${ fold right @{$Px} rec acc @{'exists (λ${ident x}:$T.$acc)} } }
+ @{ ${ fold right @{$Px} rec acc @{'exists (λ${ident x}.$acc)} } }
+ }.
+
+(* The presentational pattern matches terms starting with the existential
+symbol, followed by a list of identifiers separated by commas, optionally
+terminated by a type declaration, followed by a full stop and finally by the body
+term. We use list1 instead of list0 since we expect to have at least one
+identifier: conversely, you should use list0 when the list can possibly be
+empty.
+
+The "default" meta operator at content level matches the presentational opt and
+has two branches, which are chosen depending on the matching of the optional
+subexpression. Let us consider the first, case, where we have
+an explicit type. The content term is build by folding the function
+
+ rec acc @{'exists (λ${ident x}:$T.$acc)}
+
+(where "rec" is the binder, "acc" is the bound variable and the rest is the
+body) over the initial content expression @{$Px}. *)
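+
+(* A sketch of the extended notation at work (ex_pair is just an illustrative
+name): the two witnesses are provided by two explicit applications of ex_intro,
+and the remaining trivial goal is closed by automation. *)
+
+lemma ex_pair: ∀A:Type[0].∀Q:A→A→Prop.∀a,b:A. Q a b → ∃x,y:A. Q x y.
+#A #Q #a #b #H @(ex_intro ? ? a) @(ex_intro ? ? b) // qed.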
+
+
+(***************************** Leibniz Equality *******************************)
+
+(* Even equality is a derived notion in Matita, and a particular case of an
+inductive type. The idea is to define it as the smallest relation containing
+reflexivity (that is, as the smallest reflexive relation over a given type). *)
+
+inductive eq (A:Type[0]) (x:A) : A → Prop ≝
+ refl: eq A x x.
-lemma filter_false : ∀A,l,a,p. p a = false →
- filter A p (a::l) = filter A p l.
-#A #l #a #p #pa (elim l) normalize >pa normalize // qed.
+(* We can associate the standard infix notation for equality via the following
+declarations: *)
-(* As another example, let us redefine the map function using foldr. The
-result type B is (list B), the base value b is [], and the fold function
-of type A → list B → list B is the function mapping a and l to (f a)::l.
+notation "hvbox(a break = b)"
+ non associative with precedence 45 for @{ 'eq ? $a $b }.
+
+interpretation "leibnitz's equality" 'eq t x y = (eq t x y).
+
+(* The induction principle eq_ind automatically generated by the system
+(up to dependencies) has the following shape:
+
+ ∀A:Type[0].∀x:A. ∀P:A→Prop. P x → ∀y:A. x = y → P y
+
+This principle is usually known as ``Leibniz equality'': two objects x and y are
+equal if they cannot be told apart, that is for any property P, P x implies P y.
+
+As a simple exercise, let us also prove the following variant: *)
+
+lemma eq_ind_r: ∀A:Type[0].∀x:A. ∀P:A→Prop. P x → ∀y:A. y = x → P y.
+#A #x #P #Hx #y #eqyx <eqyx in Hx; // qed.
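+
+(* Rewriting with our eq already yields the familiar properties of equality;
+for instance, a sketch of transitivity: *)
+
+lemma eq_trans: ∀A:Type[0].∀x,y,z:A. x = y → y = z → x = z.
+#A #x #y #z #H1 #H2 >H1 // qed.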
+
+(* The order of the arguments in eq_ind may look somewhat arbitrary but, as we
+shall see, it is motivated by the underlying structure of the inductive type.
+Before discussing the way eq_ind is generated, we need to make an important
+remark about the parameters of inductive types.
+
+If you look back at the definition of equality, you see that the first argument
+x has been explicitly declared, together with A, as a formal parameter of the
+inductive type, while the second argument has been left implicit in the
+resulting type A → Prop. One could wonder if this really matters, and in
+particular if we could use the following alternative definitions:
+
+ inductive eq1 (A:Type[0]) (x,y:A) : Prop ≝
+ refl1: eq1 A x x.
+
+ inductive eq2 (A:Type[0]): A → A → Prop ≝
+ refl2: ∀x.eq2 A x x.
+
+The first one is just wrong. If you try to type it, you get the following error
+message: ``CicUnification failure: Can't unify x with y''.
+The point is that the role of parameters is really to define a family of types
+uniformly indexed over them. This means that we expect all occurrences of the
+inductive type in the type of constructors to be precisely instantiated with the
+input parameters, in the order they are declared. So, if A, x and y are
+parameters for eq1, then all occurrences of this type in the type of
+constructors must be of the kind eq1 A x y (while we have eq1 A x x, which
+explains the typing error).
+
+If you cannot express an argument as a parameter, the only alternative is to
+declare it in the type (arity) of the inductive type itself.
+Henceforth, when we talk about ``arguments'' of inductive types, we shall
+implicitly refer to arguments which are not parameters (sometimes, people call
+them ``right'' and ``left'' parameters, according to their position w.r.t. the
+colon in the type declaration). In general, it is always possible to declare
+everything as an argument, but it is very good practice to shift as many
+arguments as possible into parameter position. In particular, the definition
+of eq2 is correct, but the corresponding induction principle eq2_ind is not as
+readable as eq_ind. *)
+
+inductive eq2 (A:Type[0]): A → A → Prop ≝
+ refl2: ∀x.eq2 A x x.
+
+(* The elimination rule for a (non-recursive) inductive type T having a list
+of parameters A1,…,An and a list of arguments B1,…,Bm, has the following shape
+(still, up to dependencies):
+
+ ∀a1:A1,… ∀an:An.∀P:B1 → … → Bm → Prop.
+ C1 → … → Ck → ∀x1:B1,… ∀xm:Bm. T a1 … an x1 … xm → P x1 … xm
+
+where Ci is obtained from the type Ti of the constructor ci by replacing in it
+each occurrence of (T a1 … an t1 … tm) with (P t1 … tm).
+
+For instance, eq2 only has A as parameter, and two arguments.
+The corresponding elimination principle eq2_ind is as follows:
+
+ ∀A:Type[0].∀P:A → A → Prop. ∀z.P z z → ∀x,y:A. eq2 A x y → P x y
+
+As we prove below, eq2_ind and eq_ind are logically equivalent (that is, they
+mutually imply each other), but eq2_ind is slightly more complex and less
+natural.
*)
-definition map_again ≝ λA,B,f,l. foldr A (list B) (λa,l.f a::l) [] l.
+lemma eq_to_eq2: ∀A,a,b. a = b → eq2 A a b.
+#A #a #b #Heq <Heq // qed.
+
+lemma eq2_to_eq: ∀A,a,b. eq2 A a b → a = b.
+#A #a #b #Heq2 elim Heq2 // qed.
+
+(******************* Equality, convertibility, inequality *********************)
+
+(* Leibniz equality is a pretty syntactical (intensional) notion: two objects
+are equal when they are the ``same'' term. There is, however, an important
+point to understand here: the notion of equality on terms is convertibility:
+two terms are equal if they become identical once transformed into normal form.
+For this reason, not only does 2=2 hold, but also 1+1=2, since the normal form
+of 1+1 is 2.
+
+Having understood the notion of equality, one may wonder how we can prove that
+two objects are different. For instance, in Peano arithmetic, the fact that
+0 ≠ S x for any x is an independent axiom. With our inductive definition of
+natural numbers, can we prove it, or are we supposed to assume such a property
+axiomatically? *)
+
+inductive nat : Type[0] ≝
+ O : nat
+ | S : nat → nat.
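+
+(* To make the convertibility remark above concrete, here is a sketch: with a
+recursive addition (a local stand-in for the library definition), the equation
+below is closed by mere reflexivity, since both sides share the normal form
+S (S O). *)
+
+let rec add (n:nat) (m:nat) on n ≝
+ match n with [ O ⇒ m | S p ⇒ S (add p m) ].
+
+lemma one_plus_one: add (S O) (S O) = S (S O).
+normalize // qed.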
+
+(* In fact, in a logical system like the Calculus of Constructions it is
+possible to prove it. We shall discuss the proof here, since it is both elegant
+and instructive.
-(*
-Extensional equality
+The crucial point is to define, by case analysis on the structure of the
+inductive term, a characteristic property for the different cases.
-Can we prove that map_again is "the same" as map? We should first of all
-clarify in which sense we expect the two functions to be equal. Equality in
-Matita has an intentional meaning: it is the smallest predicate induced by
-convertibility, i.e. syntactical equality up to normalization. From an
-intentional point of view, map and map_again are not functions, but programs,
-and they are clearly different. What we would like to say is that the two
-programs behave in the same way: this is a different, extensional equality
-that can be defined in the following way. *)
+For instance, in the case of natural numbers, we could define a
+property not_zero as follows: *)
-definition ExtEq ≝ λA,B:Type[0].λf,g:A→B.∀a:A.f a = g a.
+definition not_zero: nat → Prop ≝
+ λn: nat. match n with [ O ⇒ False | (S p) ⇒ True ].
-(* Proving that map and map_again are extentionally equal in the
-previous sense can be proved by a trivial structural induction on the list *)
+(* The possibility of defining predicates by structural recursion on terms is
+one of the major characteristics of the Calculus of Inductive Constructions,
+known as strong elimination.
-lemma eq_maps: ∀A,B,f. ExtEq ?? (map A B f) (map_again A B f).
-#A #B #f #n (elim n) normalize // qed.
+Suppose now we want to prove the following property: *)
-(* Let us make another remark about extensional equality. It is clear that,
-if f is extensionally equal to g, then (map A B f) is extensionally equal to
-(map A B g). Let us prove it. *)
+theorem not_eq_O_S : ∀n:nat. O = S n → False.
+#n #H
-theorem eq_map : ∀A,B,f,g. ExtEq A B f g → ExtEq ?? (map A B f) (map A B g).
-#A #B #f #g #eqfg
+(* After introducing n and the hypothesis H:O = S n we are left with the goal
+False. Now, observe that (not_zero O) reduces to False, hence these two terms
+are convertible, that is, interchangeable. So, it should be possible to replace
+False with (not_zero O) in the conclusion, since they are the same term.
+
+The tactic that does the job is the "change with" tactic. The invocation of
+"change with t" checks that the current goal is convertible with t, and in this
+case t becomes the new current goal.
+
+In our case, typing *)
+
+change with (not_zero O);
-(* the relevant point is that we cannot proceed by rewriting f with g via
-eqfg, here. Rewriting only works with Matita intensional equality, while here
-we are dealing with a different predicate, defined by the user. The right way
-to proceed is to unfold the definition of ExtEq, and work by induction on l,
-as usual when we want to prove extensional equality between functions over
-inductive types; again the rest of the proof is trivial. *)
+(* we get the new goal (not_zero O). But we know, by H, that O=S n, hence by
+rewriting we get the goal (not_zero (S n)), which reduces to True, whose proof
+is trivial: *)
+>H // qed.
-#l (elim l) normalize // qed.
+(* Using a similar technique you can always prove that different constructors
+of the same inductive type are distinct from each other; actually, this
+technique is also at the core of the "destruct" tactic of the previous chapter,
+used to automatically close absurd cases. *)
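+
+(* The same recipe also yields injectivity of constructors: a sketch, where a
+(locally defined) predecessor function transports an equality between
+successors to an equality between their arguments: *)
+
+definition pred ≝ λn. match n with [ O ⇒ O | S p ⇒ p ].
+
+theorem injective_S: ∀n,m:nat. S n = S m → n = m.
+(* the goal n = m is convertible with pred (S n) = pred (S m) *)
+#n #m #H change with (pred (S n) = pred (S m)); >H // qed.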
+
+(********************************** Inversion *********************************)
+include "basics/logic.ma".
+
+(* The only inductive type that really needs arguments is equality. In all other
+cases you could conceptually get rid of them by adding, inside the type of
+constructors, suitable equalities that allow one to turn arguments into
+parameters.
+
+Consider for instance our opp predicate of the introduction: *)
+
+inductive bank : Type[0] ≝
+ | east : bank
+ | west : bank.
+
+inductive opp : bank → bank → Prop ≝
+ | east_west : opp east west
+ | west_east : opp west east.
+
+(* The predicate has two arguments, and since they are instantiated in different
+ways in the types of the constructors, we cannot express them as parameters.
+However, we could state the predicate in the following alternative way: *)
+
+inductive opp1 (b1,b2: bank): Prop ≝
+ | east_west : b1 = east → b2 = west → opp1 b1 b2
+ | west_east : b1 = west → b2 = east → opp1 b1 b2.
+
+(* Or also, more elegantly, in terms of the opposite function *)
+
+definition opposite ≝ λb. match b with [east ⇒ west | west ⇒ east].
+
+inductive opp2 (b1,b2: bank): Prop ≝
+ | opposite_to_opp2 : b1 = opposite b2 → opp2 b1 b2.
+
+(* Suppose now that we know (opp x west), where x:bank. From this hypothesis we
+should be able to conclude x = east, but this is slightly less trivial than one
+might expect. *)
+
+lemma trivial_opp : ∀x. opp x west → x = east.
+(* After introducing x and H:opp x west, one could be naturally tempted to
+proceed by cases on H, but this would lead us nowhere: in fact it would generate
+the following two subgoals:
+
+ G1: east=east
+ G2: west=east
+
+The first one is trivial, but the second one is false and cannot be proved.
+Working by induction on x (clearly, before introducing H) does not help either,
+since we would get the goals
+
+ G1: opp east west → east=east
+ G2: opp west west → west=east
+
+Again, the first goal is trivial, but proving that (opp west west) is absurd
+has about the same complexity as the original problem.
+In fact, the best approach consists in "generalizing" the statement to
+something similar to lemma opp_to_opposite of the introduction, and then
+proving our goal as a corollary of the latter. *)
+#x change with (opp x west → x=opposite west); generalize in match west;
+#b * // qed.
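+
+(* For the record, a sketch of the generalized statement itself, in the spirit
+of lemma opp_to_opposite of the introduction: with both arguments kept as
+variables, plain case analysis goes through. *)
+
+lemma opp_to_opposite: ∀a,b. opp a b → a = opposite b.
+#a #b * // qed.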
+
+(* It is interesting to observe that we would not have the same problem with
+opp1 or opp2. For instance: *)
+
+lemma trivial_opp1 : ∀x. opp1 x west → x = east.
+#x #H cases H
+(* applying case analysis on H:opp1 x west, we obtain the two subgoals
+
+ G1: x=east → west=west → x=east
+ G2: x=west → west=east → x=east
+
+The first one is trivial, and the second one is easily closed using destruct. *)
+ [//|#H1 #H2 destruct (H2)] qed.
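+
+(* A sketch of the analogous result for opp2, where case analysis directly
+exposes the equality b1 = opposite b2: *)
+
+lemma opp2_to_east: ∀x. opp2 x west → x = east.
+#x #H cases H #H1 >H1 // qed.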
+
+(* The point is that pattern matching is not powerful enough to discriminate
+the structure of the arguments of inductive types. To this aim, however, you
+may exploit an alternative tactic called "inversion". *)
+
+(* Suppose we have in the local context an expression H:T t1 … tn, where T is
+some inductive type. Then, inversion applied to H derives, for each possible
+constructor ci of T, all the necessary conditions that should hold in order for
+ci to return an element of type T t1 … tn.
+
+For instance, let us consider again our trivial problem trivial_opp: *)
+
+lemma trivial_opp2: ∀x. opp x west → x = east.
+#x #H inversion H
+
+(* the previous invocation of inversion results in the following subgoals:
+
+ G1: x=east → west=west → x=east
+ G2: x=west → west=east → x=east
+
+How can we prove (opp t1 t2)? We have only two possibilities: via east_west,
+which implies t1=east and t2=west, or via west_east, which implies t1=west and
+t2=east. Note also that G1 and G2 are precisely the same goals generated by
+case analysis on opp1. So we can close the goal in exactly the same way: *)
+ [//|#H1 #H2 destruct (H2)] qed.
+
+(* Let us remark that invoking inversion on inductive types without arguments
+does not make any sense, and has no practical effect. *)
+
+(* A final warning. Inversion generates equalities and, by convention, it uses
+the standard Leibniz equality defined in basics/logic.ma. This is the reason
+why we included that file at the beginning of this section, instead of working
+with our local equality. *)
-(*
-Big Operators
-
-Building a library of basic functions, it is important to achieve a
-good degree of abstraction and generality, in order to be able to reuse
-suitable instances of the same function in different context. This has not
-only the obvious benefit of factorizing code, but especially to avoid
-repeating proofs of generic properties over and over again.
-A really convenient tool is the following combination of fold and filter,
-that essentially allow you to iterate on every subset of a given enumerated
-(finite) type, represented as a list. *)
-
- let rec fold (A,B:Type[0]) (op:B→B→B) (b:B) (p:A→bool) (f:A→B) (l:list A) on l:B ≝
- match l with
- [ nil ⇒ b
- | cons a l ⇒ if p a then op (f a) (fold A B op b p f l) else
- (fold A B op b p f l)].
-
-(* It is also important to spend a few time to introduce some fancy notation
-for these iterators. *)
-
- notation "\fold [ op , nil ]_{ ident i ∈ l | p} f"
- with precedence 80
-for @{'fold $op $nil (λ${ident i}. $p) (λ${ident i}. $f) $l}.
-
-notation "\fold [ op , nil ]_{ident i ∈ l } f"
- with precedence 80
-for @{'fold $op $nil (λ${ident i}.true) (λ${ident i}. $f) $l}.
-
-interpretation "\fold" 'fold op nil p f l = (fold ? ? op nil p f l).
-
-theorem fold_true:
-∀A,B.∀a:A.∀l.∀p.∀op:B→B→B.∀nil.∀f:A→B. p a = true →
- \fold[op,nil]_{i ∈ a::l| p i} (f i) =
- op (f a) \fold[op,nil]_{i ∈ l| p i} (f i).
-#A #B #a #l #p #op #nil #f #pa normalize >pa // qed.
-
-theorem fold_false:
-∀A,B.∀a:A.∀l.∀p.∀op:B→B→B.∀nil.∀f.
-p a = false → \fold[op,nil]_{i ∈ a::l| p i} (f i) =
- \fold[op,nil]_{i ∈ l| p i} (f i).
-#A #B #a #l #p #op #nil #f #pa normalize >pa // qed.
-
-theorem fold_filter:
-∀A,B.∀a:A.∀l.∀p.∀op:B→B→B.∀nil.∀f:A →B.
- \fold[op,nil]_{i ∈ l| p i} (f i) =
- \fold[op,nil]_{i ∈ (filter A p l)} (f i).
-#A #B #a #l #p #op #nil #f elim l //
-#a #tl #Hind cases(true_or_false (p a)) #pa
- [ >filter_true // > fold_true // >fold_true //
- | >filter_false // >fold_false // ]
-qed.
-
-record Aop (A:Type[0]) (nil:A) : Type[0] ≝
-{op :2> A → A → A;
- nill:∀a. op nil a = a;
- nilr:∀a. op a nil = a;
- assoc: ∀a,b,c.op a (op b c) = op (op a b) c
-}.
-
-theorem fold_sum: ∀A,B. ∀I,J:list A.∀nil.∀op:Aop B nil.∀f:A → B.
- op (\fold[op,nil]_{i ∈ I} (f i)) (\fold[op,nil]_{i ∈ J} (f i)) =
- \fold[op,nil]_{i ∈ (I@J)} (f i).
-#A #B #I #J #nil #op #f (elim I) normalize
- [>nill//|#a #tl #Hind <assoc //]
-qed.
\ No newline at end of file